9.14: In-text Citation Types – One or Multiple Authors
The way you complete an in-text citation depends on whether there is one or multiple authors. Below are some examples:
The original version of this chapter contained H5P content. You may want to remove or replace this element.
Activities: Check Your Understanding
Attribution statement
The content was adapted, with editorial changes and some examples deleted or modified, from:
Writing for Success 1st Canadian Edition by Tara Horkoff, licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted. Download for free at: https://opentextbc.ca/writingforsuccess/
9.15: Other In-text Citation Types
There are other types of in-text citations that are contingent on the type of author.
Activity: Check Your Understanding
Attribution statement
The content was adapted, with editorial changes and some examples deleted or modified, from:
Writing for Success 1st Canadian Edition by Tara Horkoff, licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted. Download for free at: opentextbc.ca/writingforsuccess/
1: Fundamentals of Science and Chemistry
1.1: Introduction to Chemistry
Chemistry is too universal and dynamically changing a subject to be confined to a fixed definition; it might be better to think of chemistry as a point of view that places its major focus on the structure and properties of substances (particular kinds of matter), and especially on the changes they undergo.
1.2: Pseudoscience
A pseudoscience is a belief or process which masquerades as science in an attempt to claim a legitimacy which it would not otherwise be able to achieve on its own terms; it is often known as fringe or alternative science. The most important of its defects is usually the lack of the carefully controlled and thoughtfully interpreted experiments which provide the foundation of the natural sciences and which contribute to their advancement.
1.1: Introduction to Chemistry
- Distinguish between chemistry and physics;
- Suggest ways in which the fields of engineering, economics, and geology relate to Chemistry;
- Define the following terms, and classify them as primarily microscopic or macroscopic concepts: element, atom, compound, molecule, formula, structure.
- The two underlying concepts that govern chemical change are energetics and dynamics. What aspects of chemical change does each of these areas describe?
Chemistry is too universal and dynamically changing a subject to be confined to a fixed definition; it might be better to think of chemistry as a point of view that places its major focus on the structure and properties of substances (particular kinds of matter), and especially on the changes they undergo.
The real importance of Chemistry is that it serves as the interface to practically all of the other sciences, as well as to many other areas of human endeavor. For this reason, Chemistry is often said (at least by chemists!) to be the "central science". Chemistry can be "central" in a much more personal way: with a solid background in Chemistry, you will find it far easier to migrate into other fields as your interests develop.
Research or teaching not for you? Chemistry is so deeply ingrained into so many areas of business, government, and environmental management that some background in the subject can be useful, and can give you a career edge as a team member with special skills, in fields as varied as product development, marketing, management, computer science, technical writing, and even law.
So just what is chemistry?
Do you remember the story about the group of blind men who encountered an elephant? Each one moved his hands over a different part of the elephant's body— the trunk, an ear, or a leg— and came up with an entirely different description of the beast. Chemistry can similarly be approached in different ways, each yielding a different, valid, and yet hopelessly incomplete view of the subject. Thus we can view chemistry from multiple standpoints ranging from the theoretical to the eminently practical:
| Mainly theoretical | Mainly practical |
|---|---|
| Why do particular combinations of atoms hold together, but not others? | What are the properties of a certain compound? |
| How can I predict the shape of a molecule? | How can I prepare a certain compound? |
| Why are some reactions slow, while others occur rapidly? | Does a certain reaction proceed to completion? |
| Is a certain reaction possible? | How can I determine the composition of an unknown substance? |
Boiling it down to the basics
At the most fundamental level, chemistry can be organized along the following lines.
- Dynamics refers to the details of the rearrangements of atoms that occur during chemical change, and that strongly affect the rate at which change occurs.
- Energetics refers to the thermodynamics of chemical change, related to the uptake or release of heat. This aspect of chemistry controls the direction in which change occurs, and the mixture of substances that are produced as a result.
- Composition and structure define the substances that are produced because of a chemical change. Structure specifically refers to the relative arrangements of the atoms in space. The extent to which a given structure can persist is determined by energetics and dynamics.
- Synthesis refers to formation of new (and usually more complex) substances from simpler ones, but in the present context we use it in the more general sense to denote the operations required to bring about chemical change and to isolate the desired products.
This view of Chemistry is a rather stringent one that is probably more appreciated by people who already know the subject than by those who are about to learn it, so we will use a somewhat expanded scheme to organize the fundamental concepts of chemical science. But if you need a single-sentence "definition" of Chemistry, this one wraps it up pretty well:
Chemistry is the study of substances: their properties, their structure, and the changes they undergo.
Micro-macro: the forest or the trees
Chemistry, like all the natural sciences, begins with the direct observation of nature — in this case, of matter. But when we look at matter in bulk, we see only the "forest", not the "trees" — the atoms and molecules of which matter is composed — whose properties ultimately determine the nature and behavior of the matter we are looking at. This dichotomy between what we can and cannot directly see gives rise to two contrasting views that run through all of chemistry, which we call macroscopic and microscopic.
In the context of Chemistry, "microscopic" implies the atomic or subatomic levels which cannot be seen directly (even with a microscope!) whereas "macroscopic" implies things that we can know by direct observations of physical properties such as mass, volume, etc. The following table provides a conceptual overview of Chemical science according to the macroscopic/microscopic dichotomy we have discussed above. It is of course only one of the many ways of looking at the subject, but you may find it helpful in organizing the many facts and ideas that you will encounter in your study of Chemistry. We will organize the discussion in this lesson along similar lines.
| realm | macroscopic view | microscopic view |
|---|---|---|
| composition | formulas, mixtures | structures of solids, molecules, and atoms |
| properties | intensive properties of bulk matter | particle sizes, masses and interactions |
| change (energetics) | energetics and equilibrium | statistics of energy distribution |
| change (dynamics) | kinetics (rates of reactions) | mechanisms |
Chemical composition
Mixture or "pure substance" ?
In science it is necessary to know what we are talking about, so before we can begin to consider matter from a chemical point of view, we need to know its composition: is it a single substance, or a mixture? (We will get into the details of the definitions later, but for the moment you probably have a fair understanding of the distinction; think of a sample of salt (sodium chloride) as opposed to a solution of salt in water, a mixture of salt and water.)
Elements and compounds
It has been known for at least a thousand years that some substances can be broken down by heating or chemical treatment into "simpler" ones, but there is always a limit; we eventually get substances known as elements that cannot be reduced to any simpler forms by ordinary chemical or physical means. What is our criterion for "simpler"? The most observable (and therefore macroscopic) property is the weight.
The idea of a minimal unit of chemical identity that we call an element developed from experimental observations of the relative weights of substances involved in chemical reactions. For example, the compound mercuric oxide can be broken down by heating into two other substances:
\[\ce{2 HgO \rightarrow 2 Hg + O_2}\]
... but the two products, metallic mercury and dioxygen, cannot be decomposed into simpler substances, so they must be elements.
Elements and atoms
The definition of an element given above is an operational one; a certain result (or in this case, a non-result!) of a procedure that might lead to the decomposition of a substance into lighter units will tentatively place that substance into one of the categories: element or compound. Because this operation is carried out on bulk matter, the concept of the element is also a macroscopic one. The atom , by contrast, is a microscopic concept which in modern chemistry relates the unique character of every chemical element to an actual physical particle.
The idea of the atom as the smallest particle of matter had its origins in Greek philosophy around 400 BCE but was controversial from the start (both Plato and Aristotle maintained that matter was infinitely divisible.) It was not until 1803 that John Dalton proposed a rational atomic theory to explain the facts of chemical combination as they were then known, thus being the first to employ macroscopic evidence to illuminate the microscopic world. It wasn't until the 1900s that the atomic theory became universally accepted. In the 1920's it became possible to measure the sizes and masses of atoms, and in the 1970's techniques were developed that produced images of individual atoms.
Formula and structure
The formula of a substance expresses the relative number of atoms of each element it contains. Because the formula can be determined by experiments on bulk matter, it is a macroscopic concept even though it is expressed in terms of atoms.
What the ordinary chemical formula does not tell us is the order in which the component atoms are connected, and whether they are grouped into discrete units (molecules) or into two- or three-dimensional extended structures, as is the case with solids such as ordinary salt. The microscopic aspect of composition is the structure, which gives the detailed relative locations (in two- or three-dimensional space) of each atom within the minimum collection needed to define the structure of the substance.
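Since a formula can be determined by experiments on bulk matter, it is worth sketching how that determination works in practice: a measured mass composition is converted into the simplest whole-number atom ratio. This is a minimal illustration; the composition used (water: 11.19% H, 88.81% O by mass) is a standard textbook value, not a figure from this chapter.

```python
# Empirical formula from a measured mass composition - a macroscopic
# measurement that yields a formula expressed in terms of atoms.
atomic_weight = {"H": 1.008, "O": 15.999}
mass_percent = {"H": 11.19, "O": 88.81}   # assumed: textbook values for water

# Convert each element's mass to a relative number of atoms (moles)
moles = {el: pct / atomic_weight[el] for el, pct in mass_percent.items()}

# Divide by the smallest to get the simplest whole-number ratio
smallest = min(moles.values())
ratio = {el: round(n / smallest) for el, n in moles.items()}

print(ratio)   # {'H': 2, 'O': 1}, i.e. the formula H2O
```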
| macroscopic view | microscopic view |
|---|---|
| Substances are defined at the macroscopic level by their formulas or compositions. The elements hydrogen and oxygen, for example, combine to form a compound whose composition is expressed by the formula H2O. | Substances are defined at the microscopic level by their structures; the water molecule has a definite structure of its own. |
| Chemical substances that cannot be broken down into simpler ones are known as elements. Sulfur, for example, occurs as the element in its orthorhombic crystalline form. | The actual physical particles of which elements are composed are atoms or molecules. The sulfur molecule is an octagonal ring of sulfur atoms, and the crystal is an ordered array of these molecules, each in a constant state of vibrational motion. |
Compounds and molecules
As we indicated above, a compound is a substance containing more than one element. Since the concept of an element is macroscopic and the distinction between elements and compounds was recognized long before the existence of physical atoms was accepted, the concept of a compound must also be a macroscopic one that makes no assumptions about the nature of the ultimate particles of which matter is composed. Thus when carbon burns in the presence of oxygen, the product carbon dioxide can be shown by (macroscopic) weight measurements to contain both of the original elements:
\[\ce{C + O2 -> CO2}\]
10.0 g + 26.7 g = 36.7 g
One of the important characteristics of a compound is that the proportions by weight of each element in a given compound are constant. For example, no matter what weight of carbon dioxide we have, the percentage of carbon it contains is (10.0 / 36.7) = 0.27, or 27%.
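The arithmetic behind these fixed proportions can be checked with a few lines of code. The rounded atomic weights (12 for carbon, 16 for oxygen) match those implicit in the example's figures; everything else comes from the example above.

```python
# Law of definite proportions, using the carbon-burning example above.
C = 12.0    # atomic weight of carbon, g/mol (rounded)
O = 16.0    # atomic weight of oxygen, g/mol (rounded)

m_C = 10.0                  # grams of carbon burned (from the example)
m_O2 = m_C / C * (2 * O)    # grams of O2 consumed by that much carbon
m_CO2 = m_C + m_O2          # total mass of CO2 formed (mass is conserved)

frac_C = C / (C + 2 * O)    # carbon mass fraction of CO2, for ANY sample size

print(f"{m_O2:.1f} g O2 consumed, {m_CO2:.1f} g CO2 formed")  # 26.7 g, 36.7 g
print(f"carbon fraction: {frac_C:.0%}")                        # 27%
```

The key point is the last line: the carbon fraction depends only on the atomic weights, not on how much carbon dioxide we start with, which is exactly what "constant proportions by weight" means.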
Molecules
A molecule is an assembly of atoms having a fixed composition, structure, and distinctive, measurable properties.
"Molecule" refers to a kind of particle, and is therefore a microscopic concept. Even at the end of the 19th century, when compounds and their formulas had long been in use, some prominent chemists doubted that molecules (or atoms) were any more than a convenient model.
Molecules suddenly became real in 1905, when Albert Einstein showed that Brownian motion , the irregular microscopic movements of tiny pollen grains floating in water, could be directly attributed to collisions with molecule-sized particles.
Finally, we get to see one!
In 2009, IBM scientists in Switzerland succeeded in imaging a real molecule, using a technique known as atomic force microscopy in which an extremely fine metallic probe is drawn ever so slightly above the surface of an immobilized pentacene molecule cooled to nearly absolute zero. In order to improve the image quality, a molecule of carbon monoxide was placed on the end of the probe.
What is actually being imaged is the surface of the electron clouds of the molecule, which consists of five hexagonal rings of carbon atoms with hydrogen atoms on its periphery. The tiny bumps that correspond to these hydrogen atoms attest to the remarkable resolution of the experiment.
The atomic composition of a molecule is given by its formula. Thus the formulas CO, CH4, and O2 represent the molecules carbon monoxide, methane, and dioxygen. However, the fact that we can write a formula for a compound does not imply the existence of molecules having that composition. Gases and most liquids consist of molecules, but many solids exist as extended lattices of atoms or ions (electrically charged atoms or molecules). For example, there is no such thing as a "molecule" of ordinary salt, NaCl (see below).
Confused about the distinction between molecules and compounds?
Maybe the following will help:
- A molecule but not a compound: ozone, O3, is not a compound because it contains only a single element.
- A molecule that is also a compound: any molecule containing more than one element.
- A compound but not a molecule: ordinary solid salt, which is built from interpenetrating lattices of sodium and chloride ions that extend indefinitely.
Structure and properties
Composition and structure lie at the core of Chemistry, but they encompass only a very small part of it. It is largely the properties of chemical substances that interest us; it is through these that we experience and find uses for substances, and much of chemistry as a science is devoted to understanding the relation between structure and properties. For some purposes it is convenient to distinguish between chemical properties and physical properties, but as with most human-constructed dichotomies, the distinction becomes more fuzzy as one looks more closely.
Chemical change
Chemical change is defined macroscopically as a process in which new substances are formed. On a microscopic basis it can be thought of as a re-arrangement of atoms. A given chemical change is commonly referred to as a chemical reaction and is described by a chemical equation that has the form
reactants → products
In elementary courses it is customary to distinguish between "chemical" and "physical" change, the latter usually relating to changes in physical state such as melting and vaporization. As with most human-created dichotomies, this begins to break down when examined closely. This is largely because of some ambiguity in what we regard as a distinct "substance".
Elemental chlorine exists as the diatomic molecule \(\ce{Cl2}\) in the gas, liquid, and solid states; the major difference between them lies in the degree of organization. In the gas the molecules move about randomly, whereas in the solid they are constrained to locations in a 3-dimensional lattice. In the liquid, this tight organization is relaxed, allowing the molecules to slip and slide around each other.
Since the basic molecular units remain the same in all three states, the processes of melting, freezing, condensation and vaporization are usually regarded as physical rather than chemical changes.
Solid salt consists of an indefinitely extended three-dimensional array of Na+ and Cl– ions (electrically charged atoms).
When heated above 801°C, the solid melts to form a liquid consisting of these same ions. This liquid boils at 1430°C to form a vapor made up of discrete molecules having the formula \(\ce{Na2Cl2}\). Because the ions in the solid and in the melt, and the \(\ce{Na2Cl2}\) molecules in the vapor, are really different chemical species, the distinction between physical and chemical change becomes a bit fuzzy.
Energetics and Equilibrium
You have probably seen chemical reaction equations such as the "generic" one shown below:
\[\ce{A + B → C + D}\]
An equation of this kind does not imply that the reactants A and B will change entirely into the products C and D, although in many cases this will be what appears to happen. Most chemical reactions proceed to some intermediate point that yields a mixture of reactants and products.
For example, if the two gases phosphorus trichloride and chlorine are mixed together at room temperature, they will combine until about half of them have changed into phosphorus pentachloride:
\[\ce{PCl3 + Cl2 <=> PCl5}\]
At other temperatures the extent of reaction will be smaller or greater. The result, in any case, will be an equilibrium mixture of reactants and products.
The most important question we can ask about any reaction is "what is the equilibrium composition"?
- If the answer is "all products and negligible quantities of reactants", then we say the reaction can take place and that it "goes to completion".
- If the answer is "negligible quantities of products", then we say the reaction cannot take place in the forward direction, but that the reverse reaction can occur.
- If the answer is "significant quantities of all components" (both reactants and products present in the equilibrium mixture), then we say the reaction is "reversible" or "incomplete".
The aspect of "change" we are looking at here is a property of a chemical reaction , rather than of any one substance. But if you stop to think of the huge number of possible reactions between the more than 15 million known substances, you can see that it would be an impossible task to measure and record the equilibrium compositions of every possible combination.
One or two directly measurable properties of the individual reactants and products can be combined to give a number from which the equilibrium composition at any temperature can be easily calculated. There is no need to do an experiment!
This is very much a macroscopic view because the properties we need to directly concern ourselves with are those of the reactants and products. Similarly, the equilibrium composition — the measure of the extent to which a reaction takes place — is expressed in terms of the quantities of these substances.
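As an illustration of how an equilibrium composition follows from a single number, the sketch below solves for the extent of a reaction of the form A + B ⇌ C (such as the phosphorus trichloride example above), given an equilibrium constant K. The value of K and the starting concentrations here are assumed, illustrative figures, not measured data.

```python
import math

def equilibrium_extent(a0, b0, K):
    """Solve K = x / ((a0 - x) * (b0 - x)) for the extent of reaction x,
    for a reaction of the form A + B <=> C (e.g. PCl3 + Cl2 <=> PCl5)."""
    # Rearranging gives the quadratic: K x^2 - (K (a0 + b0) + 1) x + K a0 b0 = 0
    A = K
    B = -(K * (a0 + b0) + 1.0)
    C = K * a0 * b0
    # The smaller root is the physically meaningful one (0 <= x <= min(a0, b0))
    return (-B - math.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)

a0 = b0 = 1.0   # assumed initial concentrations, mol/L
K = 1.0         # assumed (illustrative) equilibrium constant
x = equilibrium_extent(a0, b0, K)
print(f"extent of reaction: {x:.3f}")            # about 0.38: partial conversion
print(f"[A] = [B] = {a0 - x:.3f}, [C] = {x:.3f}")
```

With a very large K the extent approaches complete conversion, and with a very small K it approaches zero, matching the three qualitative cases listed above.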
Chemical Energetics
Virtually all chemical changes involve the uptake or release of energy, usually in the form of heat. It turns out that these energy changes, which are the province of chemical thermodynamics , serve as a powerful means of predicting whether or not a given reaction can proceed, and to what extent. Moreover, all we need in order to make this prediction is information about the energetic properties of the reactants and products; there is no need to study the reaction itself. Because these are bulk properties of matter, chemical thermodynamics is entirely macroscopic in its outlook.
Dynamics: Kinetics and Mechanism
The energetics of chemical change that we discussed immediately above relate to the end result of chemical change: the composition of the final reaction mixture, and the quantity of heat liberated or absorbed. The dynamics of chemical change are concerned with how the reaction takes place:
- What has to happen to get the reaction started (which molecule gets bumped first, how hard, and from what direction?)
- Does the reaction take place in a single step, or are multiple steps and intermediate structures involved?
These details constitute what chemists call the mechanism of the reaction. For example, the reaction between nitric oxide and hydrogen is believed to take place in two steps. Notice that nitrous oxide, N2O, is formed in the first step and consumed in the second, so it does not appear in the net reaction equation; the N2O is said to act as an intermediate in this reaction. Some intermediates are unstable species, often distorted or incomplete molecules that have no independent existence; these are known as transition states.
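Since the original diagram of this mechanism is not reproduced here, the two steps (reconstructed from the description above) can be written out explicitly:

\[\ce{2 NO + H2 -> N2O + H2O}\]
\[\ce{N2O + H2 -> N2 + H2O}\]

Adding the two steps and canceling the intermediate \(\ce{N2O}\), which appears on both sides, gives the net reaction:

\[\ce{2 NO + 2 H2 -> N2 + 2 H2O}\]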
The microscopic side of dynamics looks at the mechanisms of chemical reactions. This refers to a "blow-by-blow" description of what happens when the atoms in the reacting species re-arrange themselves into the configurations they have in the products.
Mechanism represents the microscopic aspect of chemical change. Mechanisms, unlike energetics, cannot be predicted from information about the reactants and products; chemical theory has not yet advanced to the point where we can do much more than make educated guesses. To make matters even more complicated (or, to chemists, interesting! ), the same reaction can often proceed via different mechanisms under different conditions.
Kinetics
Because we cannot directly watch the molecules as they react, the best we can usually do is to infer a reaction mechanism from experimental data, particularly that which relates to the rate of the reaction as it is influenced by the concentrations of the reactants. This entirely experimental area of chemical dynamics is known as kinetics .
Reaction rates, as they are called, vary immensely: some reactions are completed in microseconds, others may take years; many are so slow that their rates are essentially zero. To make things even more interesting, there is no relation between reaction rates and the "tendency to react" that is governed by energetics; the latter can be accurately predicted from energetic data on the substances (the properties mentioned above), but reaction rates must be determined by experiment.
Catalysts
Catalysts can make dramatic changes in the rates of reactions, especially in those whose uncatalyzed rate is essentially zero. Consider, for example, the decomposition of hydrogen peroxide: H2O2 is a by-product of respiration that is poisonous to living cells, which have, as a consequence, evolved a highly efficient enzyme (a biological catalyst) that is able to destroy peroxide as quickly as it forms. Catalysts work by enabling a reaction to proceed by an alternative mechanism.
In some reactions, even light can act as a catalyst. For example, the gaseous elements hydrogen and chlorine can remain mixed together in the dark indefinitely without any sign of a reaction, but in the sunlight they combine explosively.
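The enormous spread of rates described above can be sketched with a first-order rate law, for which the fraction of reactant remaining after time t is e^(−kt). The rate constants below are purely illustrative values chosen to contrast an essentially inert mixture with a catalyzed one; they are not measured data for any real reaction.

```python
import math

def fraction_remaining(k, t):
    """First-order kinetics: fraction of reactant left after time t
    when the rate constant is k (units of k and t must be consistent)."""
    return math.exp(-k * t)

t = 60.0         # observation time: one minute, in seconds
k_slow = 1e-9    # assumed rate constant for an uncatalyzed reaction (1/s)
k_fast = 1e2     # assumed rate constant for a catalyzed reaction (1/s)

print(fraction_remaining(k_slow, t))   # ~1.0: no detectable change
print(fraction_remaining(k_fast, t))   # ~0.0: reaction essentially complete

# For first-order kinetics the half-life depends only on k,
# not on how much material is present:
print(math.log(2) / k_fast, "s")
```

The same framework shows why a catalyst matters: it replaces a mechanism with a tiny k by one with a much larger k, and the observable rate changes by many orders of magnitude.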
Currents of modern Chemistry
In the preceding section we looked at chemistry from a conceptual standpoint. If this can be considered a "macroscopic" view of chemistry, what is the "microscopic" view? It would likely be what chemists actually do. Because a thorough exploration of this would lead us into far more detail than we can accommodate here, we will mention only a few of the areas that have emerged as being especially important in modern chemistry.
Separation science
A surprisingly large part of chemistry has to do with isolating one component from a mixture. This may occur at any number of stages in a manufacturing process, including the very critical steps involved in removing toxic, odiferous, or otherwise undesirable by-products from a waste stream. But even in the research lab, a considerable amount of effort is often devoted to separating the desired substance from the many components of a reaction mixture, or in separating a component from a complex mixture (for example, a drug metabolite from a urine sample), prior to measuring the amount present.
- Distillation - separation of liquids having different boiling points. This ancient technique, believed to date back as far as 3500 BCE and later refined by Arabic alchemists, is still one of the most widely employed operations both in the laboratory and in industrial processes such as oil refining.
- Solvent extraction - separation of substances based on their differing solubilities. A common laboratory tool for isolating substances from plants and chemical reaction mixtures. Practical uses include the processing of radioactive wastes and the decaffeination of coffee beans. The separatory funnel is the simplest apparatus for liquid-liquid extraction; for solid-liquid extraction, the Soxhlet apparatus is commonly used.
- Chromatography - This extremely versatile method depends on the tendency of different kinds of molecules to adsorb (attach) to different surfaces as they travel along a "column" of the adsorbent material. Just as the progress of people walking through a shopping mall depends on how long they spend looking in the windows they pass, those molecules that adsorb more strongly to a material will emerge from the chromatography column more slowly than molecules that are not so strongly adsorbed. (A classic demonstration: paper chromatography of plant juice.)
- Gel electrophoresis - a powerful method for separating and "fingerprinting" macromolecules such as nucleic acids or proteins on the basis of physical properties such as size and electric charge.
Identification and assay
What do the following people have in common?
- A plant manager deciding on whether to accept a rail tank car of vinyl chloride for manufacture into plastic pipe
- An agricultural chemist who wants to know about the vitamin content of a new vegetable hybrid
- The manager of a city water-treatment plant who needs to make sure that the carbonate content of the water is maintained high enough to prevent corrosion, but low enough to prevent scale build-up
The answer is that all depend on analytical techniques — measurements of the nature or quantity ("assays") of some substance of interest, sometimes at very low concentrations. A large amount of research is devoted to finding more accurate and convenient means of making such measurements. Many of these involve sophisticated instruments; among the most widely used are the following:
- Spectrophotometers examine the ways that light of various wavelengths is absorbed, emitted, or altered by atomic and molecular species.
- Mass spectrometers break up molecules into fragments that can be characterized by electrical methods.
- NMR spectrometers analyze the action of radio waves and magnetic fields on atomic nuclei in order to examine the nature of the chemical bonds attached to a particular kind of atom.

"In the early 1900's a chemist could analyze about 200 samples per year for the major rock-forming elements. Today, using X-ray fluorescence spectrometry, two chemists can perform the same type of analysis on 7,000 samples per year."
Materials, polymers, and nanotechnology
Materials science attempts to relate the physical properties and performance of engineering materials to their underlying chemical structure with the aim of developing improved materials for various applications.
- Polymer chemistry - developing polymeric ("plastic") materials for industrial uses. Connecting individual polymer molecules by cross-links increases the strength of the material. Thus ordinary polyethylene is a fairly soft material with a low melting point, but the cross-linked form is more rigid and resistant to heat.
- Organic semiconductors offer a number of potential advantages over conventional metalloid-based devices.
- Fullerenes, nanotubes, and nanowires - Fullerenes were first identified in 1985 as products of experiments in which graphite was vaporized using a laser, work for which R. F. Curl, Jr., R. E. Smalley, and H. W. Kroto shared the 1996 Nobel Prize in Chemistry. Fullerene research is expected to lead to new materials, lubricants, coatings, catalysts, electro-optical devices, and medical applications.
- Nanodevice chemistry - constructing molecular-scale assemblies for specific tasks such as computing or producing motions.
- Biosensors and biochips - the surfaces of metals and semiconductors "decorated" with biopolymers can serve as extremely sensitive detectors of biological substances and infectious agents.
Biochemistry and Molecular biology
This field covers a wide range of studies ranging from fundamental studies on the chemistry of gene expression and enzyme-substrate interactions to drug design. Much of the activity in this area is directed to efforts in drug discovery.
- Drug screening began as a largely scattershot approach in which a pathogen or a cancer cell line was screened against hundreds or thousands of candidate substances in the hope of finding a few "leads" that might result in a useful therapy. This field is now highly automated and usually involves combinatorial chemistry (see below) combined with innovative separation and assay methods.
- Drug design looks at interactions between enzymes and possible inhibitors. Computer modeling is an essential tool in this work.
- Proteomics - This huge field focuses on the relations between structure and function of proteins, of which there are about 400,000 different kinds in humans. Proteomics is related to genetics in that the DNA sequences in genes get decoded into proteins, which eventually define and regulate a particular organism.
- Chemical genomics explores the chain of events in which signaling molecules regulate gene expression.
Synthesis
In its most general sense, this word refers to any reaction that leads to the formation of a particular molecule. It is both one of the oldest areas of chemistry and one of the most actively pursued. Some of the major threads are
- New-molecule synthesis - Chemists are always challenged to come up with molecules containing novel features such as new shapes or unusual types of bonds.
- Combinatorial chemistry refers to a group of largely-automated techniques for generating tiny quantities of huge numbers of different molecules ("libraries") and then picking out those having certain desired properties. Although it is a major drug discovery technique, it also has many other applications.
- Green chemistry - synthetic methods that focus on reducing or eliminating the use or release of toxic or non-biodegradable chemicals or byproducts.
- Process chemistry bridges the gap between chemical synthesis and chemical engineering by adapting synthetic routes into efficient, safe, and environmentally responsible methods for large-scale synthesis.
Congratulations! You have just covered all of Chemistry, condensed into one quick and painless lesson— the world's shortest Chemistry course! Yes, we left out a lot of the details, the most important of which will take you a few months of happy discovery to pick up. But if you keep in mind the global hierarchy of composition/structure, properties of substances, and change (equilibrium and dynamics) that we have developed in both macroscopic and microscopic views, you will find it much easier to assemble the details as you encounter them and to see where they fit into the bigger picture.
1.2: Pseudoscience
A pseudoscience is a belief or process which masquerades as science in an attempt to claim a legitimacy which it would not otherwise be able to achieve on its own terms; it is often known as fringe- or alternative science. The most important of its defects is usually the lack of the carefully controlled and thoughtfully interpreted experiments which provide the foundation of the natural sciences and which contribute to their advancement.
Of course, the pursuit of scientific knowledge usually involves elements of intuition and guesswork; experiments do not always test a theory adequately, and experimental results can be incorrectly interpreted or even wrong. In legitimate science, however, these problems tend to be self-correcting, if not by the original researchers themselves, then through the critical scrutiny of the greater scientific community. Critical thinking is an essential element of science.
Other Types of Defective Science
There have been several well-documented instances in which the correction process referred to above was delayed until after the initial incorrect interpretation became widely publicized, resulting in what has been called pathological science . The best known of these incidents are the "discoveries" of N-rays, of polywater, and of cold fusion. All of these could have been averted if the researchers had not been so enthusiastic about their results that they publicized them before the work had received proper review by others. Human nature being what it is, there is always some danger of this happening; to discourage it, most of the prestigious scientific journals will refuse to accept reports of noteworthy work that has already been made public.
Another term, junk science , is often used to describe scientific theories or data which, while perhaps legitimate in themselves, are believed to be mistakenly used to support an opposing position. There is usually an element of political or ideological bias in the use of the term. Thus the arguments in favor of limiting the use of fossil fuels in order to reduce global warming are often characterized as junk science by those who do not wish to see such restrictions imposed, and who claim that other factors may well be the cause of global warming. A wide variety of commercial advertising (ranging from hype to outright fraud) would also fall into this category; at its most egregious it might better be described as deceptive science .
"99 44 100 % Pure: It Floats"
This description of Ivory Soap is a classic example of junk science from the 19th century. Not only is the term "pure" meaningless when applied to an undefined mixture such as bath soap, but the implication that its ability to float is evidence of this purity is deceptive. The low density is achieved by beating air bubbles into it, actually reducing the "purity" of the product and in a sense cheating the consumer.
Hoax science is another category that describes deliberately contrived sensationalist writings that have received wide publicity (and earned substantial royalties for their authors). Immanuel Velikovsky's Worlds in Collision (1950) is now probably the best known of these, followed by Erich von Däniken's Chariots of the Gods? (1968). Perhaps the most recent contender in this field is David Talbott, who with Wallace Thornhill wrote The Electric Universe and Thunderbolts of the Gods .
Fraudulent science and Scientific Misconduct refer to work that is intentionally fabricated or misrepresented for personal (recognition or career-advancement) or commercial (marketing or regulatory) reasons. Suppression of science for political reasons often occurred during the second Bush administration. The tobacco and pharmaceutical industries have been notoriously implicated in the latter category. The tobacco industry even published a phoney "scientific" journal containing articles written by hack authors that disputed warnings about smoking-induced cancer.
Charges of minor fudging go back to the days of Ptolemy, Galileo, and Isaac Newton, but revelations of more contemporary frauds and their contamination of the scientific literature make these far more problematic. Since about 1980, scientists at several major U.S. universities have been compelled to withdraw articles from prestigious journals. One of the most widely-publicized cases was that of the eminent Korean researcher who reported bogus stem cell results. Some rather troubling recent cases involve at least seventy articles describing bogus chemical structures reported by a group of scientists in China, and about an equal number of papers that had been forged or falsified by one or more scientists at a university in India.
Finally, there is just plain bad science , which would logically encompass all of the evils being discussed here, but is commonly used to describe well-intentioned but incorrect, obsolete, incomplete, or over-simplified expositions of scientific ideas. An example would be the statement that electrons revolve in orbits around the atomic nucleus, a picture that was discredited in the 1920's, but is so much more vivid and easily grasped than the one that supplanted it that it shows no sign of dying out.
Note: "It's only a theory"
In ordinary conversation, the word "theory" connotes an opinion, a conjecture, or a supposition. But in science, the term has a much more limited meaning. A scientific theory is an attempt to explain some aspect of the natural world in terms of empirical evidence and observation. It commonly draws upon established principles and knowledge with the aim of extending them in a logical and consistent way that enables one to make useful predictions. All scientific theories are tentative and subject to being tested and modified. As theories become more mature, they grow into more organized bodies of knowledge that enable us to understand and predict a wider range of phenomena. Examples of such theories are quantum theory, Einstein's theories of relativity, and evolution.
Scientific theories fall into two categories:
- Theories that have been shown to be incorrect, usually because they are not consistent with new observations;
- All other theories
Hence, theories cannot be proven to be correct; there is always the possibility that further observations will disprove the theory. Furthermore, a theory that cannot be refuted or falsified is not a scientific theory.
For example, the theories that underlie astrology (the doctrine that the positions of the stars can influence one's life) are not falsifiable because they, and the predictions that follow from them, are so vaguely stated that the failure of these predictions can always be "explained away" by assuming that various other influences were not taken into account. It is similarly impossible to falsify so-called "creation science" or "intelligent design" because one can simply invoke a "then a miracle occurs" step at any desired stage.
Recognizing Pseudoscience?
There is no single test that unambiguously distinguishes between science and pseudoscience, but as the two diverge more and more from one another, certain differences become apparent, and these tend to be remarkably consistent across all fields of interest. In examining the following table, it might be helpful to consider examples of astronomy vs. astrology, or of chemistry vs. alchemy, which at one time were single fields that gradually diverged into sciences and pseudosciences.
Many scientists' ordinary response to pseudoscientific claims is simply to laugh at them. But mythology has always been an important part of human culture, often by giving people the illusion of having some direct control over their lives. This can lead to their becoming advocates for various kinds of health quackery, to commercial scams, and to cult-like organizations such as scientology. Worst of all, they can pressure political and educational circles to adopt their ideologies.
Does the "Establishment" Actively Suppress new Ideas?
Anyone who has been around for long enough has encountered statements like these:
- An inventor's design for a device that uses water as a fuel has been bought up and suppressed by the oil companies.
- "Alternative health" techniques (homeopathy, chiropractic, chelation therapy— you name it!) are actively suppressed by the medical profession or the pharmaceutical industry in a desperate attempt to serve their selfish interests.
- Reports of unidentified flying objects (UFO's) are suppressed by the U.S. Government in an attempt to prevent panic and/or to maintain control over citizens.
- Editors of scientific journals and the reviewers they call on to assess the worth of submitted papers reject out-of-hand anything that comes from persons who are not members of the scientific "establishment" or which report results not consistent with presently-accepted science.
Claims of these kinds are frequently made and widely believed, especially by those who are inclined to see conspiracies around every corner. There is little if any evidence for any of these claims. The real reason that new devices or new theories get thrown aside is that the arguments or evidence adduced to support them is inadequate or not credible. The individuals who believe themselves to be unfairly thwarted by the scientific community are very often so isolated from it that they are unable to appreciate its norms of clarity, rigor, and consistency with existing science.
A common refrain is that "they laughed at Galileo, at Thomson, and at Wegener," whose theories were eventually supported. Well, with Galileo, they did not exactly laugh; it was more a case of a challenge to religious doctrine that forced him to recant his assertion that the Sun, and not the Earth, is at the center of the solar system. There have been innumerable cases in which the world was simply not ready to accept a new idea. This was especially common before the scientific method had been developed, and before the technology needed to apply it had become available.
When J.J. Thomson discovered evidence that the atom is not the ultimate fundamental particle and could be broken up into smaller units, even Thomson himself was reluctant to accept it, and he became a laughingstock for several years until more definitive evidence became available.
Alfred Wegener's theory of continental drift was bitterly attacked when it was first published in 1915, and it did not become generally accepted until about 50 years later. Others had made similar proposals based on the way the continents of Africa and South America could be fitted together, but Wegener was the first to make a careful study of fossil and geological similarities between the two continents. Nevertheless, the idea that continents could float around was too hard to accept at a time when nothing was known about the interior structure of the Earth, and the evidence he presented was rejected as inadequate.
On the other hand, the even-more-revolutionary concepts of special- and general relativity, and of quantum theory (which developed in several stages), achieved rapid acceptance when they were first presented, as did Louis Pasteur's germ theory of disease. In all of these cases the new theories provided credible explanations for what was previously unexplainable, and the tools for confirming them existed at the time, or in the case of general relativity, would soon become available.
2: Essential Background
Most courses in Chemistry, especially those at the college/university level, assume that their students have had prior courses in general science, and often in physics, which provide them with an understanding of important concepts such as significant figures, units of measure, treatment of measurement error, density, and buoyancy. But if most of that has receded into the fuzzy past, the sections of this unit will bring you up to speed. Neglect this stuff at your peril — it will come up in one way or another in any Chemistry course you take!
- 2.1: Classification and Properties of Matter
  - Matter is “anything that has mass and occupies space”. Matter is what chemical substances are composed of. But what do we mean by chemical substances? How do we organize our view of matter and its properties? These very practical questions will be the subjects of this lesson.
- 2.2: Energy, Heat, and Temperature
  - All chemical changes are accompanied by the absorption or release of heat. The intimate connection between matter and energy has been a source of wonder and speculation from the most primitive times; it is no accident that fire was considered one of the four basic elements (along with earth, air, and water) as early as the fifth century BCE. This unit will cover only the very basic aspects of the subject.
- 2.3: The Measure of Matter
  - The natural sciences begin with observation, and this usually involves numerical measurements of quantities. Most of these quantities have units of some kind associated with them, and these units must be retained when you use them in calculations. All measuring units can be defined in terms of a very small number of fundamental ones that, through "dimensional analysis", provide insight into their derivation and meaning, and must be understood when converting between different unit systems.
- 2.4: The Meaning of Measure
  - In science, there are numbers and there are "numbers". What we ordinarily think of as a "number" and will refer to here as a pure number is just that: an expression of a precise value. The other kind of numeric quantity that we encounter in the natural sciences is a measured value of something – the length or weight of an object, the volume of a fluid, or perhaps the reading on an instrument. Although we express these values numerically, it would be a mistake to regard them as pure numbers.
2.1: Classification and Properties of Matter
- Give examples of extensive and intensive properties of a sample of matter. Which kind of property is more useful for describing a particular kind of matter?
- Explain what distinguishes heterogeneous matter from homogeneous matter.
- Describe the following separation processes: distillation , crystallization , liquid-liquid extraction , chromatography .
- To the somewhat limited extent to which it is meaningful, classify a given property as a physical or chemical property of matter.
Matter is “anything that has mass and occupies space”, we were taught in school. True enough, but not very satisfying. A really complete answer is unfortunately beyond the scope of this course, but we will offer a hint of it in a later chapter on atomic structure. For the moment, let’s put off trying to define matter and focus on the chemist’s view: matter is what chemical substances are composed of. But what do we mean by chemical substances? How do we organize our view of matter and its properties? These very practical questions will be the subjects of this lesson.
Properties of Matter
The science of chemistry developed from observations made about the nature and behavior of different kinds of matter, which we refer to collectively as the properties of matter. The properties we refer to in this lesson are all macroscopic properties: those that can be observed in bulk matter. At the microscopic level, matter is of course characterized by its structure : the spatial arrangement of the individual atoms in a molecular unit or an extended solid. By observing a sample of matter and measuring its various properties, we gradually acquire enough information to characterize it; to distinguish it from other kinds of matter. This is the first step in the development of chemical science, in which interest is focused on specific kinds of matter and the transformations between them.
If you think about the various observable properties of matter, it will become apparent that these fall into two classes. Some properties, such as mass and volume, depend on the quantity of matter in the sample we are studying. Clearly, these properties, as important as they may be, cannot by themselves be used to characterize a kind of matter; to say that “water has a mass of 2 kg” is nonsense, although it may be quite true in a particular instance. Properties of this kind are called extensive properties of matter.
Suppose we take further measurements, and find that the same quantity of water whose mass is 2.0 kg also occupies a volume of 2.0 liters. We have measured two extensive properties (mass and volume) of the same sample of matter. This allows us to define a new quantity, the quotient m/V which defines another property of water which we call the density . Unlike the mass and the volume, which by themselves refer only to individual samples of water, the density (mass per unit volume) is a property of all samples of pure water at the same temperature. Density is an example of an intensive property of matter.
This definition of the density illustrates an important general rule: the ratio of two extensive properties is always an intensive property.
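This rule can be illustrated numerically: scaling a sample up multiplies its mass and its volume by the same factor, so their quotient is unchanged. A quick sketch:

```python
def density(mass_kg, volume_L):
    """The ratio of two extensive properties (mass, volume) is an intensive one."""
    return mass_kg / volume_L

small = density(2.0, 2.0)    # 2 kg of water occupying 2 L
large = density(10.0, 10.0)  # five times as much of the same water
print(small, large)  # both 1.0 kg/L: density is independent of sample size
```

The same pattern underlies many other intensive properties: molar mass (mass/amount), specific heat (heat capacity/mass), and concentration (amount/volume) are all ratios of extensive quantities.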
Intensive properties are extremely important, because every possible kind of matter possesses a unique set of intensive properties that distinguishes it from every other kind of matter. Some intensive properties can be determined by simple observations: color (absorption spectrum), melting point, density, solubility, and acidic or alkaline nature are common examples. Even more fundamental, but less directly observable, is chemical composition.
The more intensive properties we know, the more precisely we can characterize a sample of matter.
In other words, intensive properties serve to characterize matter . Many of the intensive properties depend on such variables as the temperature and pressure, but the ways in which these properties change with such variables can themselves be regarded as intensive properties.
Classify each of the following as an extensive or intensive property.
- The volume of beer in a mug
- The percentage of alcohol in the beer
- The number of calories of energy you derive from eating a banana
- The number of calories of energy made available to your body when you consume 10.0 g of sugar
- The mass of iron present in your blood
- The mass of iron present in 5 mL of your blood
- The electrical resistance of a piece of 22-gauge copper wire.
- The electrical resistance of a 1-km length of 22-gauge copper wire
- The pressure of air in a bicycle tire
- Answer a: extensive; depends on the size of the mug.
- Answer b: intensive; the same for any sample of the same beer.
- Answer c: extensive; depends on the size and sugar content of the banana.
- Answer d: intensive; the same for any 10 g portion of sugar.
- Answer e: extensive; depends on the volume of blood in the body.
- Answer f: intensive; the same for any 5 mL sample.
- Answer g: extensive; depends on the length of the wire.
- Answer h: intensive; the same for any 1 km length of the same wire.
- Answer i: pressure itself is intensive, but it also depends on the quantity of air in the tire.
The last example shows that not everything is black or white! But we often encounter matter that is not uniform throughout, whose different parts exhibit different sets of intensive properties. This brings up another distinction that we address immediately below.
How to classify matter?
One useful way of organizing our understanding of matter is to think of a hierarchy that extends down from the most general and complex to the simplest and most fundamental. In the usual chart of this hierarchy, the central realm of chemistry deals ultimately with specific chemical substances, but as a practical matter, chemical science extends both above and below this region.
Alternatively, when we are thinking about specific samples of matter, it may be more useful to re-cast our classification in two dimensions: homogeneous versus heterogeneous, and pure substances versus mixtures. Notice that "mixtures" and "pure substances" can fall into either the homogeneous or heterogeneous categories.
Homogeneous and heterogeneous: it's a matter of phases
Homogeneous matter (from the Greek homo = same) can be thought of as being uniform and continuous, whereas heterogeneous matter ( hetero = different) implies non-uniformity and discontinuity. To take this further, we first need to define "uniformity" in a more precise way, and this takes us to the concept of phases .
A phase is a region of matter that possesses uniform intensive properties throughout its volume. A volume of water, a chunk of ice, a grain of sand, a piece of copper— each of these constitutes a single phase, and by the above definition, is said to be homogeneous. A sample of matter can contain more than a single phase; a cool drink with ice floating in it consists of at least two phases, the liquid and the ice. If it is a carbonated beverage, you can probably see gas bubbles in it that make up a third phase.
Phase boundaries
Each phase in a multiphase system is separated from its neighbors by a phase boundary , a thin region in which the intensive properties change discontinuously. Have you ever wondered why you can easily see the ice floating in a glass of water although both the water and the ice are transparent? The answer is that when light crosses a phase boundary, its direction of travel is slightly bent, and a portion of the light gets reflected back; it is these reflected and distorted light rays emerging from the liquid that reveal the chunks of ice floating in it.
If, instead of visible chunks of material, the second phase is broken into tiny particles, the light rays usually bounce off the surfaces of many of these particles in random directions before they emerge from the medium and are detected by the eye. This phenomenon, known as scattering , gives multiphase systems of this kind a cloudy appearance, rendering them translucent instead of transparent. Two very common examples are ordinary fog , in which water droplets are suspended in the air, and milk , which consists of butterfat globules suspended in an aqueous solution.
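The bending of light at a phase boundary follows Snell's law, n₁ sin θ₁ = n₂ sin θ₂. A sketch using typical (approximate) refractive indices for water (about 1.33) and ice (about 1.31); the specific values are illustrative:

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Snell's law n1*sin(t1) = n2*sin(t2); returns the refracted angle in degrees."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

# Light passing from water (n ~ 1.33) into ice (n ~ 1.31) at 30 degrees incidence
theta2 = refraction_angle(1.33, 1.31, 30.0)
print(f"{theta2:.1f} degrees")  # about 30.5: only slightly bent
```

Because the two indices are so close, the bending at each water/ice boundary is small, which is why it takes the combination of refraction and partial reflection, rather than strong bending alone, to make the ice visible.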
Getting back to our classification, we can say that homogeneous matter consists of a single phase throughout its volume, while heterogeneous matter contains two or more phases.
Dichotomies ("either-or" classifications) often tend to break down when closely examined, and the distinction between homogeneous and heterogeneous matter is a good example; this is really a matter of degree, since at the microscopic level all matter is made up of atoms or molecules separated by empty space! For most practical purposes, we consider matter as homogeneous when any discontinuities it contains are too small to affect its visual appearance.
How large must a molecule or an agglomeration of molecules be before it begins to exhibit the properties of a separate phase? Such particles span the gap between the micro and macro worlds, and have been known as colloids since they began to be studied around 1900. But with the development of nanotechnology in the 1990s, this distinction has become even more fuzzy.
Pure Substances and Mixtures
The air around us, most of the liquids and solids we encounter, and all too much of the water we drink consists not of pure substances, but of mixtures . You probably have a general idea of what a mixture is, and how it differs from a pure substance; what is the scientific criterion for making this distinction?
To a chemist, a pure substance usually refers to a sample of matter that has a distinct set of properties that are common to all other samples of that substance. A good example would be ordinary salt, sodium chloride. No matter what its source (from a mine, evaporated from seawater, or made in the laboratory), all samples of this substance, once they have been purified , possess the same unique set of properties.
A pure substance is one whose intensive properties are the same in any purified sample of that same substance. A mixture , in contrast, is composed of two or more substances, and it can exhibit a wide range of properties depending on the relative amounts of the components present in the mixture. For example, you can dissolve up to 357 g of salt in one litre of water at room temperature, making possible an infinite variety of "salt water" solutions. For each of these concentrations, properties such as the density, boiling and freezing points, and the vapor pressure of the resulting solution will be different.
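One of these concentration-dependent properties, the freezing point, can be estimated with the freezing-point depression relation ΔTf = i·Kf·m, where Kf ≈ 1.86 °C·kg/mol for water and the van't Hoff factor i ≈ 2 for fully dissociated NaCl. A sketch, valid only as an ideal-solution approximation (real concentrated brines deviate from it):

```python
def freezing_point_c(grams_nacl, kg_water, kf=1.86, vant_hoff_i=2, molar_mass=58.44):
    """Freezing point of an NaCl solution via dTf = i * Kf * m (ideal-solution estimate)."""
    molality = (grams_nacl / molar_mass) / kg_water  # mol solute per kg water
    return 0.0 - vant_hoff_i * kf * molality

# 58.44 g NaCl (one mole) dissolved in 1 kg of water
print(f"{freezing_point_c(58.44, 1.0):.2f} deg C")  # about -3.72 deg C
```

Doubling the salt roughly doubles the depression, which is exactly the sense in which a mixture, unlike a pure substance, has properties that vary continuously with composition.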
Is anything really pure?
Those of us who enjoy peanut butter would never willingly purchase a brand advertised as "impure". But a Consumer Reports article published some years ago showed a table listing the number of "mouse droppings" and "insect parts" (presumably from peanut storage silos) they found in samples of all the major brands. Bon appetit !
Finally, we all prefer to drink "pure" water, but we don't usually concern ourselves with the dissolved atmospheric gases and ions such as Ca 2+ and HCO 3 - that are present in most drinking waters. But these harmless "impurities" are always present in those "pure" spring waters.
The bottom line: To a chemist, the term "pure" has meaning only in the context of a particular application or process.
Operational and conceptual classifications
Since chemistry is an experimental science, we need a set of experimental criteria for placing a given sample of matter in one of these categories. There is no single experiment that will always succeed in unambiguously deciding this kind of question. However, there is one principle that will always work in theory, if not in practice. This is based on the fact that the various components of a mixture can, in principle, always be separated into pure substances.
Consider a heterogeneous mixture of salt water and sand. The sand can be separated from the salt water by the mechanical process of filtration. Similarly, the butterfat contained in milk may be separated from the water by a process known as churning , in which mechanical agitation forces the butterfat droplets to coalesce into the solid mass we know as butter. These examples illustrate the general principle that heterogeneous matter may be separated into homogeneous matter by mechanical means.
Turning this around, we have an operational definition of heterogeneous matter: If, by some mechanical operation we can separate a sample of matter into two or more other kinds of matter, then our original sample was heterogeneous. To find a similar operational definition for homogeneous mixtures, consider how we might separate the two components of a solution of salt water. The most obvious way would be to evaporate off the water, leaving the salt as a solid residue. Thus a homogeneous mixture can be separated into pure substances by undergoing appropriate partial changes of state— that is, by evaporation, freezing, etc.
Note the term partial in the above sentence; in the last example, we evaporate only the water, not the salt (which would be very difficult to do anyway!) The idea is that one component of the mixture is preferentially affected by the process we are carrying out. This principle will be emphasized in the following examples.
Separating homogeneous mixtures
Some common methods of separating homogeneous mixtures into their components are outlined below.
Distillation
A mixture of two volatile liquids is partly boiled away; the first portions of the condensed vapor will be enriched in the component having the lower boiling point. Note that if all the liquid were boiled away, the distillate would be identical with the original liquid. But if, say, half of the liquid is distilled, the distillate would contain a larger fraction of the more volatile component. If the distillate is then re-distilled, it can be further enriched in the low-boiling liquid. By repeating this process many times (aided by the fractionating column above the boiling vessel), a high degree of separation can be achieved.
Fractional crystallization
A hot saturated solution containing two or more dissolved solids is allowed to cool slowly; the least-soluble material crystallizes out first, and can be separated by filtration. This process is widely employed both in the laboratory and, on a much larger scale, in industry.
Similarly, a molten mixture of several components, when slowly cooled, will first yield crystals of the material having the highest melting point. This process occurs on a huge scale in nature when molten magma from the earth's mantle rises into the lithosphere and cools underground — a process that can take up to a million years. This is how the common rock known as granite is formed. Eventually these rocks rise and become exposed on the earth's surface.
Liquid-liquid Extraction
Two mutually-insoluble liquids, one containing two or more solutes (dissolved substances), are shaken together in a separatory funnel . Each solute will concentrate in the liquid in which it is more soluble. The two solutions are then separated by opening the stopcock at the bottom, allowing the more dense solution to drain out.
Solid-liquid Extraction
In working with natural products such as plant materials, a first step is often to extract soluble substances from the plant parts. This, and similar extractions of the soluble components of complex solids, is carried out in an apparatus known as a Soxhlet extractor.
The idea is to continuously percolate an appropriate hot solvent through the material, which is contained in a porous paper "thimble". Hot vapor from the boiling flask bypasses the extraction chamber through the arm at the left (labeled "vapor" in the illustration) and into the condenser, from which it drips down into the extraction chamber, where a portion of the soluble material mixes with the solvent. When the condensate reaches the top of the chamber, it flows out through the siphon arm, emptying its contents into the boiling flask, which becomes increasingly concentrated in the extracted material.
The advantage of this arrangement is that the percolation-and-extraction process can be repeated indefinitely (usually hours to days) without much attention.
Chromatography
As a liquid or gaseous mixture flows along a column containing an adsorbant material, the more strongly-adsorbed components tend to move more slowly and emerge later than the less-strongly adsorbed components. In this example, an extract made from plant leaves is separated into its principal components: carotene, xanthophyll, and chlorophylls A and B.
Although chromatography originated in the early 20th century, it was not widely employed until the 1950s. Since that time, it has encompassed a huge variety of techniques and is no longer limited to colored substances. Chromatography is now one of the most widely-employed methods for the analysis and separation of complex mixtures of liquids and gases.
Physical and Chemical Properties
Since chemistry is partly the study of the transformations that matter can undergo, we can also assign to any substance a set of chemical properties that express the various changes of composition the substance is known to undergo. Chemical properties also include the conditions of temperature, etc., required to bring about the change, and the amount of energy released or absorbed as the change takes place.
The properties that we described above are traditionally known as physical properties , and are to be distinguished from chemical properties that usually refer to changes in composition that a substance can undergo. For example, we can state some of the more distinctive physical and chemical properties of the element sodium :
| Physical properties (25 °C) | Chemical properties |
|---|---|
The more closely one looks at the distinction between physical and chemical properties, the more blurred this distinction becomes. For example, the high boiling point of water compared to that of methane, CH 4 , is a consequence of the electrostatic attractions between O-H bonds in adjacent molecules, in contrast to those between C-H bonds; at this level, we are really getting into chemistry! So although you will likely be expected to "distinguish between" physical and chemical properties on an exam, don't take it too seriously — this turns out to be a rather dubious dichotomy, loved by teachers, but of limited usefulness! | libretexts | 2025-03-17T19:53:06.048355 | 2013-10-03T01:38:00 | {
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/02%3A_Essential_Background/2.01%3A_Classification_and_Properties_of_Matter",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "2.1: Classification and Properties of Matter",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/02%3A_Essential_Background/2.02%3A_Energy_Heat_and_Temperature | 2.2: Energy, Heat, and Temperature
- Explain the difference between kinetic energy and potential energy .
- Define chemical energy and thermal energy .
- Define heat and work , and describe an important limitation in their interconversion.
- Describe the physical meaning of temperature.
- Explain the meaning of a temperature scale and describe how a particular scale is defined.
- Convert a temperature expressed in Fahrenheit or Celsius to the other scale.
- Describe the Kelvin temperature scale and its special significance.
- Define heat capacity and specific heat , and explain how they can be measured.
All chemical changes are accompanied by the absorption or release of heat. The intimate connection between matter and energy has been a source of wonder and speculation from the most primitive times; it is no accident that fire was considered one of the four basic elements (along with earth, air, and water) as early as the fifth century BCE. This unit will cover only the very basic aspects of the subject, just enough to get you started; there is a far more complete set of lessons on chemical energetics elsewhere.
What is Energy?
Energy is one of the most fundamental and universal concepts of physical science, but one that is remarkably difficult to define in a way that is meaningful to most people. This perhaps reflects the fact that energy is not a “thing” that exists by itself, but is rather an attribute of matter (and also of electromagnetic radiation) that can manifest itself in various ways. It can be observed and measured only indirectly through its effects on matter that acquires, loses, or possesses it. Energy can take many forms: mechanical, chemical, electrical, radiation (light), and thermal. You also know that energy is conserved ; it can be passed from one object or place to another, but it can never simply disappear.
In the 17th Century, the great mathematician Gottfried Leibniz (1646-1716) suggested the distinction between vis viva ("live energy") and vis mortua ("dead energy"), which later became known as kinetic energy and potential energy. Except for radiant energy that is transmitted through an electromagnetic field, most practical forms of energy we encounter are of two kinds: kinetic and potential.
- Kinetic energy is associated with the motion of an object; a body with a mass, m, and moving at a velocity, v, possesses the kinetic energy \( \frac{1}{2} mv^2\). This "v-squared" part is important; if you double your speed, you consume four times as much fuel (glucose for the runner, gasoline or electricity for your car).
- Potential energy is energy a body has by virtue of its location in a force field — a gravitational, electrical, or magnetic field. For example, if an object of mass m is raised off the floor to a height h , its potential energy increases by mgh , where g is a proportionality constant known as the acceleration of gravity . Similarly, the potential energy of a particle having an electric charge q depends on its location in an electrostatic field.
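The quadratic dependence of kinetic energy on speed is easy to verify numerically. The short sketch below uses an illustrative 70-kg runner; the masses and speeds are assumed values, not from the text:

```python
def kinetic_energy(m, v):
    """Kinetic energy (J) of a mass m (kg) moving at speed v (m/s): (1/2) m v^2."""
    return 0.5 * m * v**2

def potential_energy(m, h, g=9.8):
    """Gravitational potential energy mgh (J) at height h (m); g in m/s^2."""
    return m * g * h

# Doubling the speed quadruples the kinetic energy:
ke_slow = kinetic_energy(70.0, 3.0)   # a 70-kg runner at 3 m/s (assumed values)
ke_fast = kinetic_energy(70.0, 6.0)   # the same runner at twice the speed
print(ke_fast / ke_slow)              # → 4.0
```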
Kinetic and potential energy are freely interconvertible
Pick up a book and hold it above the table top; you have just increased its potential energy in the force field of the earth's gravity. Now let it drop. Its newly-acquired potential energy begins to re-appear as kinetic energy as it accelerates downward at a velocity increasing by 9.8 m/sec every second (9.8 m sec –2 or 32 ft sec –2 ). At the instant it strikes the surface, the potential energy you supplied to the book has been entirely converted into kinetic energy.
And what happens to that kinetic energy after the book stops moving? It is still there, but you can no longer see its effect; it has now become dispersed as thermal kinetic energy ("heat") into the molecules of the book, the table top, and, ultimately, into the surroundings, including the air.
The more you think about it, the more examples of kinetic-potential conversion you will find in everyday life. In many other instances, however, the energy of an object can be seen to repeatedly alternate between potential and kinetic forms. Left alone, the process continues indefinitely until friction has dissipated the energy into the surroundings.
Energy's graveyard: Thermal energy
Energy is conserved: it can neither be created nor destroyed. But it can, and eventually always will, disappear from our view and into the microscopic world of individual molecular particles. All molecules are in a continual state of motion, and they therefore possess kinetic energy. But unlike the motion of a massive body such as a baseball or a car that is moving along a defined trajectory, the motions of individual atoms or molecules are random and chaotic, forever changing in magnitude and direction as they collide with each other or (as in the case of a gas) with the walls of the container.
The sum total of all of this microscopic-scale randomized kinetic energy within a body is given a special name, thermal energy . Although we cannot directly see thermal energy in action, we can certainly feel it; as we will see further, it correlates directly with the temperature of an object.
The chemistry connection
Atoms and molecules are the principal actors of thermal energy, but they possess other kinds of energy as well that play a major role in chemistry.
Bond energy
H 2 + is energetically stable enough to exist as an identifiable entity, and thus fits the definition of a molecule. But it is also extremely reactive, so it does not sit around for very long. It can only be observed when a high-voltage electrical discharge is passed through hydrogen gas; the blue glow one sees represents its demise as it picks up electrons and reverts to the far more stable dihydrogen molecule H 2 .
Consider, for example, the simplest possible molecule. This is the hydrogen molecule ion , H 2 + , in which a single electron simultaneously attracts two protons. These protons, having identical charges, repel each other, but this is overcome by the electron-proton attractions, leading to a net decrease in potential energy when an electron combines with two protons. This potential energy decrease is sufficient to enable H 2 + to exist as a discrete molecule which we can represent as [H—H] + in order to explicitly depict the chemical bond that joins the two atoms.
The strength of a chemical bond increases as the potential energy associated with its formation becomes more negative.
Chemical bonds also possess some kinetic energy that is associated with the "motion" of the electron as it spreads itself into the extended space it occupies in what we call the "bond". This is a quantum effect that has no classical counterpart. The kinetic energy has only half the magnitude of the potential energy and works against it; the total bond energy is the sum of the two energies.
Chemical energy
The chemical bonds in the glucose molecules store the energy that fuels our bodies.
Molecules are vehicles both for storing and transporting energy , and the means of converting it from one form to another when the formation, breaking, or rearrangement of the chemical bonds within them is accompanied by the uptake or release of energy, most commonly in the form of heat .
Chemical energy refers to the potential and kinetic energy associated with the chemical bonds in a molecule. Consider what happens when hydrogen and oxygen combine to form water. The reactants H 2 and O 2 contain more bond energy than H 2 O, so when they combine, the excess energy is given off in the form of thermal energy, or "heat".
By convention, the energy content of the chemical elements in their natural state (H 2 and O 2 in this example) are defined as "zero". This makes calculations much easier, and gives most compounds negative "energies of formation". (see below)
Chemical energy manifests itself in many different ways:
- chemical → thermal → kinetic
- chemical → thermal → kinetic + radiant
- chemical → electrical → kinetic (nerve function, muscle movement)
- chemical → electrical
Energy scales are always arbitrary
You might at first think that a book sitting on the table has zero kinetic energy since it is not moving. In truth, however, the earth itself is moving; it is spinning on its axis, it is orbiting the sun, and the sun itself is moving away from the other stars in the general expansion of the universe. Since these motions are normally of no interest to us, we are free to adopt an arbitrary scale in which the velocity of the book is measured with respect to the table; on this so-called laboratory coordinate system , the kinetic energy of the book can be considered zero.
We do the same thing with potential energy. If we define the height of the table top as the zero of potential energy, then an object having a mass \(m\) suspended at a height h above the table top will have a potential energy of mgh . Now let the object fall; as it accelerates in the earth's gravitational field, its potential energy changes into kinetic energy. An instant before it strikes the table top, this transformation is complete and the kinetic energy \(\frac{1}{2}mv^2\) is identical with the original mgh . As the object comes to rest, its kinetic energy appears as heat (in both the object itself and in the table top) as the kinetic energy becomes randomized as thermal energy.
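The equality of mgh and \(\frac{1}{2}mv^2\) at the instant before impact can be checked directly. In the sketch below the mass and height are arbitrary illustrative values, and air resistance is ignored:

```python
import math

g = 9.8      # acceleration of gravity, m/s^2
m = 1.2      # mass of the falling object, kg (assumed)
h = 0.75     # initial height above the table top, m (assumed)

pe_initial = m * g * h              # potential energy mgh before the drop
v_impact = math.sqrt(2 * g * h)     # speed just before impact, from v^2 = 2gh
ke_final = 0.5 * m * v_impact**2    # kinetic energy (1/2) m v^2 at impact

# The two energies agree: all of mgh has become kinetic energy.
print(pe_initial, ke_final)
```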
Energy units
Energy is measured in terms of its ability to perform work or to transfer heat. Mechanical work is done when a force f displaces an object by a distance d :
\[W = f\cdot d\]
The basic unit of energy is the joule . One joule is the amount of work done when a force of 1 newton acts over a distance of 1 m; thus 1 J = 1 N·m. One newton is the amount of force required to accelerate a 1-kg mass by 1 m sec –1 every second (1 m sec –2 ), so the basic dimensions of the joule are kg m 2 s –2 . The other two units of energy in wide use are the calorie and the BTU (British thermal unit); these are defined in terms of the heating effect on water. For the moment, we will confine our attention to the joule and the calorie.
Heat and work are both measured in energy units, but they do not constitute energy itself. As we will explain below, they refer to processes by which energy is transferred to or from something— a block of metal, a motor, or a cup of water.
Heat
When a warmer body is brought into contact with a cooler body, thermal energy flows from the warmer one to the cooler until their two temperatures are identical. The warmer body loses a quantity of thermal energy Δ E , and the cooler body acquires the same amount of energy. We describe this process by saying that "Δ E joules of heat has passed from the warmer body to the cooler one." It is important, however, to understand that heat is the transfer of energy due to a difference in temperature.
Heat does NOT flow
We often refer to a "flow" of heat, recalling the 18th-century notion that heat was an actual substance called “caloric” that could flow like a liquid. This is a misnomer; heat is a process and is not something that can be contained or stored in a body. It is important that you understand this, because the use of the term in our ordinary conversation ("the heat is terrible today") tends to make us forget this distinction.
There are basically three mechanisms by which heat can be transferred: conduction, radiation, and convection. The latter process occurs when the two different temperatures cause different parts of a fluid to have different densities.
Work
Work is the transfer of energy by any process other than heat. Work, like energy, can take various forms: mechanical, electrical, gravitational, etc. All have in common the fact that they are the product of two factors, an intensity term and a capacity term . For example, the simplest form of mechanical work arises when an object moves a certain distance against an opposing force. Electrical work is done when a body having a certain charge moves through a potential difference.
| type of work | intensity factor | capacity factor | formula |
|---|---|---|---|
| mechanical | force | change in distance | \(f\Delta x\) |
| gravitational | gravitational potential (a function of height) | mass | mgh |
| electrical | potential difference | quantity of charge | \(Q\Delta V\) |
Performance of work involves a transformation of energy ; thus when a book drops to the floor, gravitational work is done (a mass moves through a gravitational potential difference), and the potential energy the book had before it was dropped is converted into kinetic energy which is ultimately dispersed as thermal energy.
Mechanical work is the product of the force exerted on a body and the distance it is moved: 1 N-m = 1 J .
Heat and work are best thought of as processes by which energy is exchanged, rather than as energy itself. That is, heat “exists” only when it is flowing, work “exists” only when it is being done.
When two bodies are placed in thermal contact and energy flows from the warmer body to the cooler one, we call the process “heat”. A transfer of energy to or from a system by any means other than heat is called “work”.
So you can think of heat and work as just different ways of accomplishing the same thing: the transfer of energy from one place or object to another.
To make sure you understand this, suppose you are given two identical containers of water at 25°C. Into one container you place an electrical immersion heater until the water has absorbed 100 joules of heat. The second container you stir vigorously until 100 J of work has been performed on it. At the end, both samples of water will have been warmed to the same temperature and will contain the same increased quantity of thermal energy. There is no way you can tell which contains "more work" or "more heat".
An important limitation on energy conversion
A gas engine converts the chemical energy available in its fuel into thermal energy. Only a part of this is available to perform work; the remainder is dispersed into the surroundings through the exhaust. This limitation is the essence of the Second Law of Thermodynamics, which we will get to much later in this course.
Thermal energy is very special in one crucial way. All other forms of energy are interconvertible : mechanical energy can be completely converted to electrical energy, and the latter can be completely converted to thermal, as in the water-heating example described above. But although work can be completely converted into thermal energy, complete conversion of thermal energy into work is impossible. A device that partially accomplishes this conversion is known as a heat engine ; a steam engine, a jet engine, and the internal combustion engine in a car are well-known examples.
Temperature and its meaning
We all have a general idea of what temperature means, and we commonly associate it with "heat", which, as we noted above, is a widely misunderstood word. Both relate to what we described above as thermal energy —the randomized kinetic energy associated with the various motions of matter at the atomic and molecular levels.
Heat , you will recall, is not something that is "contained within" a body, but is rather a process in which [thermal] energy enters or leaves a body as the result of a temperature difference .
So when you warm up your cup of tea by allowing it to absorb 1000 J of heat from the stove, you can say that the water has acquired 1000 J of energy — but not of heat . If, instead, you "heat" your tea in a microwave oven, the water acquires its added energy by direct absorption of electromagnetic energy; because this process is not driven by a temperature difference, heat is not involved at all!
Thermometry
We commonly measure temperature by means of a thermometer — a device that employs some material possessing a property that varies in direct proportion to the temperature. The most common of these properties are the density of a liquid, the thermal expansion of a metal, or the electrical resistance of a material.
The ordinary thermometer we usually think of employs a reservoir of liquid whose thermal expansion (decrease in density) causes it to rise in a capillary tube. Metallic mercury has traditionally been used for this purpose, as has an alcohol (usually isopropyl) containing a red dye.
Mercury was the standard thermometric liquid of choice for more than 200 years, but its use for this purpose has been gradually phased out owing to its neurotoxicity. Although coal-burning, disposal of fluorescent lamps, incineration and battery disposal are major sources of mercury input to the environment, broken thermometers have long been known to release hundreds of tons of mercury. Once spilled, tiny drops of the liquid metal tend to lodge in floor depressions and cracks where they can emit vapor for years.
Temperature
Temperature is a measure of the average kinetic energy of the molecules within a body. You can think of temperature as an expression of the "intensity" with which the thermal energy in a body manifests itself in terms of chaotic, microscopic molecular motion.
- Heat is the quantity of thermal energy that enters or leaves a body.
- Temperature measures the average translational kinetic energy of the molecules in a body.
This animation depicts the thermal translational motions of molecules in a gas. In liquids and solids, there is very little empty space between molecules, and they mostly just bump against and jostle one another.
You will notice that we have sneaked the word " translational " into this definition of temperature. Translation refers to a change in location: in this case, molecules moving around in random directions. This is the major form of thermal energy under ordinary conditions, but molecules can also undergo other kinds of motion, namely rotations and internal vibrations. These latter two forms of thermal energy are not really "chaotic" and do not contribute to the temperature.
Energy is measured in joules , and temperature in degrees . This difference reflects the important distinction between energy and temperature:
- We can say that 100 g of hot water contains more energy ( not heat !) than 100 g of cold water. And because energy is an extensive quantity, we know that a 10-g portion of this hot water contains only ten percent as much energy as the entire 100-g amount.
- Temperature, by contrast, is not a measure of quantity; being an intensive property, it is more of a "quality" that describes the "intensity" with which thermal energy manifests itself. So both the 100-g and 10-g portions of the hot water described above possess the same temperature.
Temperature scales
Temperature is measured by observing its effect on some temperature-dependent variable such as the volume of a liquid or the electrical resistance of a solid. In order to express a temperature numerically, we need to define a scale that is marked off in uniform increments which we call degrees . The nature of this scale (its zero point and the magnitude of a degree) is completely arbitrary.
Although rough means of estimating and comparing temperatures have been around since AD 170, the first mercury thermometer and temperature scale were introduced in Holland in 1714 by Gabriel Daniel Fahrenheit.
Fahrenheit established three fixed points on his thermometer. Zero degrees was the temperature of an ice, water, and salt mixture, which was about the coldest temperature that could be reproduced in a laboratory of the time. When he omitted salt from the slurry, he reached his second fixed point when the water-ice combination stabilized at "the thirty-second degree." His third fixed point was "found as the ninety-sixth degree, and the spirit expands to this degree when the thermometer is held in the mouth or under the armpit of a living man in good health." After Fahrenheit died in 1736, his thermometer was recalibrated using 212 degrees, the temperature at which water boils, as the upper fixed point. Normal human body temperature registered 98.6 rather than 96.
Belize and the U.S.A. are the only countries that still use the Fahrenheit scale!
In 1743, the Swedish astronomer Anders Celsius devised the aptly-named centigrade scale that places exactly 100 degrees between the two reference points defined by the freezing- and boiling points of water.
For reasons best known to Celsius, he assigned 100 degrees to the freezing point of water and 0 degrees to its boiling point, resulting in an inverted scale that nobody liked. After his death a year later, the scale was put the other way around. The revised centigrade scale was quickly adopted everywhere except in the English-speaking world, and became the metric unit of temperature. In 1948 it was officially renamed as the Celsius scale.
Temperature comparisons and conversions
When we say that the temperature is so many degrees, we must specify the particular scale on which we are expressing that temperature. A temperature scale has two defining characteristics, both of which can be chosen arbitrarily:
- The zero of the scale: the temperature to which the value of zero is assigned.
- The magnitude of the unit increment of temperature, that is, the size of the degree .
In order to express a temperature given on one scale in terms of another, it is necessary to take both of these factors into account.
Converting between Celsius and Fahrenheit is easy if you bear in mind that between the so-called ice- and steam points of water there are 180 Fahrenheit degrees, but only 100 Celsius degrees, making the F° 100/180 = 5/9 the magnitude of the C°.
Because the ice point is at 32 °F, the two scales are offset by this amount. If you remember this, there is no need to memorize a conversion formula; you can work it out whenever you need it. Note the distinction between “°C” (a temperature) and “C°” (a temperature increment ).
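Working from just those two facts (a 180-to-100 ratio of degree sizes and a 32-degree offset at the ice point), the conversion can be written out rather than memorized. A minimal sketch:

```python
def c_to_f(t_c):
    """Scale by 9/5 (a C degree is 9/5 the size of an F degree), then add the 32-degree offset."""
    return t_c * 9 / 5 + 32

def f_to_c(t_f):
    """Remove the 32-degree offset first, then scale by 5/9."""
    return (t_f - 32) * 5 / 9

print(c_to_f(100))   # → 212.0 (steam point)
print(f_to_c(32))    # → 0.0 (ice point)
```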
Absolute temperature scales
Near the end of the 19th Century, when the physical significance of temperature began to be understood, the need was felt for a temperature scale whose zero really means zero — that is, the complete absence of thermal motion. This gave rise to the absolute temperature scale, whose zero point is –273.15 °C but which retains the same degree magnitude as the Celsius scale. This scale was eventually renamed after Lord Kelvin (William Thomson); thus the Celsius degree became the kelvin . It is now common to express an increment such as five C° as “five kelvins”.
In 1859 the Scottish engineer and physicist William J.M. Rankine proposed an absolute temperature scale based on the Fahrenheit degree. Absolute zero (0° Ra) corresponds to –459.67°F. The Rankine scale has been used extensively by those same American and British engineers who delight in expressing energies in units of BTUs and masses in pounds.
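Both absolute scales are simple offsets of their relative counterparts, sharing the same physical zero point (absolute zero = –273.15 °C = –459.67 °F). A sketch:

```python
def c_to_k(t_c):
    """Kelvin keeps the Celsius degree size; zero is shifted down to -273.15 °C."""
    return t_c + 273.15

def f_to_rankine(t_f):
    """Rankine keeps the Fahrenheit degree size; zero is at -459.67 °F."""
    return t_f + 459.67

print(c_to_k(25.0))        # room temperature, about 298 K
print(f_to_rankine(32.0))  # the ice point on the Rankine scale
```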
The importance of absolute temperature scales is that absolute temperatures can be entered directly into all the fundamental formulas of physics and chemistry in which temperature is a variable. Perhaps the most common example, known to all beginning students, is the ideal gas equation of state:
\[PV = nRT\]
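As a quick illustration of why T must be absolute here, the sketch below computes the volume of one mole of an ideal gas at 25 °C and one atmosphere (R and the standard atmosphere are standard values; the scenario itself is our illustrative choice):

```python
R = 8.314        # gas constant, J K^-1 mol^-1
n = 1.0          # amount of gas, mol
T = 25 + 273.15  # temperature must be absolute: 25 °C → 298.15 K
P = 101325.0     # one standard atmosphere, Pa

V = n * R * T / P     # volume in cubic meters
print(V * 1000)       # about 24.5 L; entering T = 25 directly would be badly wrong
```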
Heat capacity
As a body loses or gains heat, its temperature changes in direct proportion to the amount of thermal energy q transferred:
\[q= C\Delta T\]
The proportionality constant C is known as the heat capacity
\[ C = \frac{q}{\Delta T} \]
If Δ T is expressed in kelvins (degrees) and q in joules, the units of C are J K –1 . In other words, the heat capacity tells us how many joules of energy it takes to change the temperature of a body by 1 C°. The greater the value of C , the smaller will be the effect of a given energy change on the temperature.
It should be clear that C is an extensive property— that is, it depends on the quantity of matter. Everyone knows that a much larger amount of energy is required to bring about a 10 C° change in the temperature of 1 L of water compared to 10 mL of water. For this reason, it is customary to express C in terms of unit quantity, such as per gram, in which case it becomes the specific heat capacity , commonly referred to as the "specific heat" and has the units J K –1 g –1 .
Thus if identical quantities of heat flow into two bodies having different heat capacities, the one having the smaller heat capacity will undergo the greater change in temperature. (You might find it helpful to think of heat capacity as a measure of a body's ability to resist a change of temperature when absorbing or losing heat.) Note: you are expected to know the units of specific heat. The advantage of doing so is that you need not learn a "formula" for solving specific heat problems.
How many joules of heat must flow into 150 mL of water at 0 °C to raise its temperature to 25 °C?
Solution
The mass of the water is (150 mL) × (1.00 g mL –1 ) = 150 g. The specific heat of water is 4.18 J K –1 g –1 . From the definition of specific heat, the quantity of energy
q = Δ E is (150 g)(25.0 K)(4.18 J K –1 g –1 ) = 15,700 J (more precisely, 15,675 J).
How can I rationalize this procedure? It should be obvious that the greater the mass of water and the greater the temperature change, the more heat will be required, so these two quantities go in the numerator. Similarly, the energy required will vary inversely with the specific heat, which therefore goes in the denominator.
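The unit-driven procedure just described can be mirrored in a short calculation (a sketch of the example above; the helper function name is ours, not a standard one):

```python
def heat_required(mass_g, delta_t_k, specific_heat=4.18):
    """q = (mass)(ΔT)(specific heat), with c in J K^-1 g^-1 (default: liquid water)."""
    return mass_g * delta_t_k * specific_heat

q = heat_required(150.0, 25.0)   # 150 g of water warmed from 0 °C to 25 °C
print(round(q))                  # → 15675, i.e. about 1.57 × 10^4 J
```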
| Substance | C, J/(g·K) |
|---|---|
| Aluminum | 0.900 |
| Copper | 0.386 |
| Lead | 0.128 |
| Mercury | 0.140 |
| Zinc | 0.387 |
| Alcohol (ethanol) | 2.4 |
| Water | 4.18 |
| Ice (–10° C) | 2.05 |
| Gasoline ( n -octane) | 0.53 |
| Glass | 0.84 |
| Carbon (graphite/diamond) | 0.710 / 0.509 |
| Sodium chloride | 0.854 |
| Rock (granite) | 0.790 |
| Air | 1.01 |
Note especially the following:
- The molar heat capacities of the metallic elements are almost identical. This is the basis of the Law of Dulong and Petit , which served as an important tool for estimating the atomic weights of some elements.
- The intermolecular hydrogen bonding in water and alcohols results in anomalously high heat capacities for these liquids; the same is true for ice, compared to other solids.
- The values for graphite and diamond are consistent with the principle that solids that are more “ordered” tend to have larger heat capacities.
A piece of nickel weighing 2.40 g is heated to 200.0 °C, and is then dropped into 10.0 mL of water at 15.0 °C. The temperature of the metal falls and that of the water rises until thermal equilibrium is attained and both are at 18.0 °C. What is the specific heat of the metal?
Solution
The mass of the water is (10 mL) × (1.00 g mL –1 ) = 10 g. The specific heat of water is 4.18 J K –1 g –1 and its temperature increased by 3.0 C°, indicating that it absorbed (10 g)(3 K)(4.18 J K –1 g –1 ) = 125 J of energy. The metal sample lost this same quantity of energy, undergoing a temperature drop of 182 C° as the result. The specific heat capacity of the metal is:
(125 J) / (2.40 g)(182 K) = 0.287 J K –1 g –1 .
Notice that no "formula" is required here as long as you know the units of specific heat; you simply place the relevant quantities in the numerator or denominator to make the units come out correctly. | libretexts | 2025-03-17T19:53:06.176526 | 2013-10-03T01:38:00 | {
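The nickel example can be checked the same way; this sketch simply mirrors the arithmetic of the solution above (heat lost by metal = heat gained by water):

```python
# Determine an unknown specific heat by calorimetry:
# the heat the metal loses equals the heat the water gains.
c_water = 4.18                      # J K^-1 g^-1

m_water, dT_water = 10.0, 3.0       # g, K  (water: 15.0 °C -> 18.0 °C)
m_metal, dT_metal = 2.40, 182.0     # g, K  (metal: 200.0 °C -> 18.0 °C)

q = m_water * c_water * dT_water    # heat absorbed by the water, in J
c_metal = q / (m_metal * dT_metal)  # J K^-1 g^-1
print(round(c_metal, 3))            # -> 0.287
```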
(Source: "2.2: Energy, Heat, and Temperature" by Stephen Lower, CC BY 3.0.)

2.3: The Measure of Matter
- Describe the names and abbreviations of the SI base units and the SI decimal prefixes .
- Define the liter and the metric ton in these units.
- Explain the meaning and use of unit dimensions ; state the dimensions of volume .
- State the quantities that are needed to define a temperature scale , and show how these apply to the Celsius, Kelvin, and Fahrenheit temperature scales.
- Explain how a Torricellian barometer works.
The natural sciences begin with observation , and this usually involves numerical measurements of quantities such as length, volume, density, and temperature. Most of these quantities have units of some kind associated with them, and these units must be retained when you use them in calculations. All measuring units can be defined in terms of a very small number of fundamental ones that, through "dimensional analysis", provide insight into their derivation and meaning, and must be understood when converting between different unit systems.
Units of Measure
Have you ever estimated a distance by “stepping it off”— that is, by counting the number of steps required to take you a certain distance? Or perhaps you have used the width of your hand, or the distance from your elbow to a fingertip to compare two dimensions. If so, you have engaged in what is probably the first kind of measurement ever undertaken by primitive mankind.
Leonardo da Vinci - Vitruvian Man
The results of a measurement are always expressed on some kind of a scale that is defined in terms of a particular kind of unit . The first scales of distance were likely related to the human body, either directly (the length of a limb) or indirectly (the distance a man could walk in a day). As civilization developed, a wide variety of measuring scales came into existence, many for the same quantity (such as length), but adapted to particular activities or trades. Eventually, it became apparent that in order for trade and commerce to be possible, these scales had to be defined in terms of standards that would allow measures to be verified, and, when expressed in different units (bushels and pecks, for example), to be correlated or converted.
Over the centuries, hundreds of measurement units and scales have developed in the many civilizations that achieved some literate means of recording them. Some, such as those used by the Aztecs, fell out of use and were largely forgotten as these civilizations died out. Other units, such as the various systems of measurement that developed in England, achieved prominence through extension of the Empire and widespread trade; many of these were confined to specific trades or industries. The examples shown here are only some of those that have been used to measure length or distance. The history of measuring units provides a fascinating reflection on the history of industrial development.
The most influential event in the history of measurement was undoubtedly the French Revolution and the Age of Enlightenment that followed. This led directly to the metric system that attempted to do away with the confusing multiplicity of measurement scales by reducing them to a few fundamental ones that could be combined in order to express any kind of quantity. The metric system spread rapidly over much of the world, and eventually even to England and the rest of the U.K. when that country established closer economic ties with Europe in the latter part of the 20th Century. The United States is presently the only major country in which “metrication” has made little progress within its own society, probably because of its relative geographical isolation and its vibrant internal economy.
The SI Units
Science, being a truly international endeavor, adopted metric measurement very early on; engineering and related technologies have been slower to make this change, but are gradually doing so. Even within the metric system, however, a variety of units were employed to measure the same fundamental quantity; for example, energy could be expressed in units of ergs, electron-volts, joules, and two kinds of calories. This led, in the mid-1960s, to the adoption of a more basic set of units, the Système International ( SI ) units, which are now recognized as the standard for science and, increasingly, for technology of all kinds.
In principle, any physical quantity can be expressed in terms of only seven base units . Each base unit is defined by a standard which is described in the NIST Web site.
| Observable | Base Unit | Abbreviation |
|---|---|---|
| length | meter | m |
| mass | kilogram | kg |
| time | second | s |
| temperature (absolute) | kelvin | K |
| amount of substance | mole | mol |
| electric current | ampere | A |
| luminous intensity | candela | cd |
A few special points about some of these units are worth noting:
- The base unit of mass is unique in that a decimal prefix (see below) is built-in to it; that is, it is not the gram , as you might expect.
- The base unit of time is the only one that is not metric. Numerous attempts to make it so have never met with success; we are still stuck with the 24:60:60 system that we inherited from ancient times. (The ancient Egyptians of around 1500 BC invented the 12-hour day, and the 60:60 part is a remnant of the base-60 system that the Sumerians developed for their astronomical calculations.)
- Of special interest to Chemistry is the mole , the base unit for expressing the quantity of matter . One mole contains exactly \(6.02214076 \times 10^{23}\) elementary entities of anything; this number is known as Avogadro’s number .
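As a small illustration of the mole as a counting unit (the half-mole sample below is just an assumed example quantity):

```python
# The mole links macroscopic amounts of matter to numbers of particles.
N_A = 6.02214076e23        # Avogadro's number, exact, per mole

moles = 0.5                # an assumed sample: half a mole of anything
particles = moles * N_A
print(f"{particles:.3e}")  # -> 3.011e+23
```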
Owing to the wide range of values that quantities can have, it has long been the practice to employ prefixes such as milli and mega to indicate decimal fractions and multiples of metric units. As part of the SI standard, this system has been extended and formalized.
| prefix | abbreviation | multiplier | | prefix | abbreviation | multiplier |
|---|---|---|---|---|---|---|
| peta | P | 10 15 | | deci | d | 10 –1 |
| tera | T | 10 12 | | centi | c | 10 –2 |
| giga | G | 10 9 | | milli | m | 10 –3 |
| mega | M | 10 6 | | micro | μ | 10 –6 |
| kilo | k | 10 3 | | nano | n | 10 –9 |
| hecto | h | 10 2 | | pico | p | 10 –12 |
| deca | da | 10 | | femto | f | 10 –15 |
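The prefix system lends itself to a tiny lookup sketch; the dictionary below simply encodes the prefixes tabulated above, and the function name is ours for illustration.

```python
# SI decimal prefixes as multipliers of the unprefixed base unit.
PREFIX = {"peta": 1e15, "tera": 1e12, "giga": 1e9, "mega": 1e6,
          "kilo": 1e3, "hecto": 1e2, "deca": 1e1,
          "deci": 1e-1, "centi": 1e-2, "milli": 1e-3,
          "micro": 1e-6, "nano": 1e-9, "pico": 1e-12, "femto": 1e-15}

def to_base(value, prefix):
    """Convert a prefixed quantity to the same quantity in the base unit."""
    return value * PREFIX[prefix]

print(to_base(250, "milli"))   # 250 mL expressed in L -> 0.25
```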
There is a category of units that are “honorary” members of the SI in the sense that it is acceptable to use them along with the base units defined above. These include such mundane units as the hour, minute, and degree (of angle), etc., but the three shown here are of particular interest to chemistry, and you will need to know them.
| liter (litre) | L | 1 L = 1 dm 3 = 10 –3 m 3 |
| metric ton | t | 1 t = 10 3 kg |
| unified atomic mass unit | u | 1 u = 1.66054×10 –27 kg |
SI-Derived Units and Dimensional Analysis
Most of the physical quantities we actually deal with in science and also in our daily lives, have units of their own: volume, pressure, energy and electrical resistance are only a few of hundreds of possible examples. It is important to understand, however, that all of these can be expressed in terms of the SI base units; they are consequently known as derived units. In fact, most physical quantities can be expressed in terms of one or more of the following five fundamental units:
| mass | length | time | electric charge | temperature |
|---|---|---|---|---|
| M | L | T | Q | Θ (theta) |
Dimensional analysis is an important tool in working with and converting units in calculations. Consider, for example, the unit of volume , which we denote as \(V\). To measure the volume of a rectangular box, we need to multiply the lengths as measured along the three coordinates:
\[V = x · y · z \label{eq20}\]
We say, therefore, that volume has the dimensions of length-cubed:
\[dim.V = L^3\]
Thus the units of volume will be m 3 (in the SI) or cm 3 , ft 3 (English), etc. Moreover, any formula that calculates a volume must contain within it the L 3 dimension; thus the volume of a sphere is 4/3 π r 3 .
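Dimensional bookkeeping of this kind can be automated by tracking the exponents of the fundamental dimensions; the sketch below is an illustration of the idea (not a standard library), representing each quantity's dimensions as a dict of exponents.

```python
# Represent dimensions as {symbol: exponent}; multiplying quantities
# adds the corresponding exponents, just as L * L * L = L^3.
def dim_mul(*dims):
    out = {}
    for d in dims:
        for symbol, exponent in d.items():
            out[symbol] = out.get(symbol, 0) + exponent
    return {k: v for k, v in out.items() if v != 0}

L = {"L": 1}                  # the dimension of a length
volume = dim_mul(L, L, L)     # V = x * y * z
print(volume)                 # -> {'L': 3}, i.e. dim V = L^3
```

Any correct volume formula, such as 4/3 π r³ for a sphere, must combine lengths so that the exponents sum to L³ in exactly this way.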
Units and their Ranges in Chemistry
In this section, we will look at some of the quantities that are widely encountered in Chemistry, and at the units in which they are commonly expressed. In doing so, we will also consider the actual range of values these quantities can assume, both in nature in general, and also within the subset of nature that chemistry normally addresses. In looking over the various units of measure, it is interesting to note that their unit values are set close to those encountered in everyday human experience
Ranges of Mass and Weight in Chemistry
These two quantities are widely confused. Although they are often used synonymously in informal speech and writing, they have different dimensions: weight is the force exerted on a mass by the local gravitational field:
\[f = m a = m g\]
where g is the acceleration of gravity. While the nominal value of the latter quantity is 9.80 m s –2 at the Earth’s surface, its exact value varies locally. Because it is a force, the proper SI unit of weight is the newton , but it is common practice (except in physics classes!) to use the terms "weight" and "mass" interchangeably, so the units kilograms and grams are acceptable in almost all ordinary laboratory contexts.
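The distinction is easy to quantify; g_earth below is the nominal sea-level value quoted above, and the lunar value and 70-kg mass are assumed figures for comparison.

```python
# Weight is a force, f = m * g (newtons); mass (kg) is the same everywhere.
g_earth = 9.80    # m s^-2, nominal value at the Earth's surface
g_moon = 1.62     # m s^-2, assumed comparison value

mass = 70.0       # kg -- unchanged wherever the object goes
print(mass * g_earth)   # weight on Earth, about 686 N
print(mass * g_moon)    # weight on the Moon, about 113 N
```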
Please note that in this diagram and in those that follow, the numeric scale represents the logarithm of the number shown. For example, the mass of the electron is 10 –30 kg.
The range of masses spans 90 orders of magnitude, more than any other unit. The range that chemistry ordinarily deals with has greatly expanded since the days when a microgram was an almost inconceivably small amount of material to handle in the laboratory; this lower limit has now fallen to the atomic level with the development of tools for directly manipulating these particles. The upper level reflects the largest masses that are handled in industrial operations, but in the recently developed fields of geochemistry and environmental chemistry, the range can be extended indefinitely. Flows of elements between the various regions of the environment (atmosphere to oceans, for example) are often quoted in t
Range of Distances Encountered in Chemistry
Chemists tend to work mostly in the moderately-small part of the distance range. Those who live in the lilliputian world of crystal- and molecular structures and atomic radii find the picometer a convenient currency, but one still sees the older non-SI unit called the Ångstrom used in this context;
\[1\,\unicode{x212B} = 10^{–10}\,\text{m} = 100\,\text{pm}. \]
Nanotechnology, the rage of the present era, also resides in this realm. The largest polymeric molecules and colloids define the top end of the particulate range; beyond that, in the normal world of doing things in the lab, the centimeter and occasionally the millimeter commonly rule.
For humans, time moves by the heartbeat; beyond that, it is the motions of our planet that count out the hours, days, and years that eventually define our lifetimes. Beyond the few thousands of years of history behind us, those years-to-the-powers-of-tens that are the fare for such fields as evolutionary biology, geology, and cosmology, cease to convey any real meaning to us. Perhaps this is why so many people are not very inclined to accept the validity of these sciences.
Most of what actually takes place in the chemist’s test tube operates on a far shorter time scale, although there is no limit to how slow a reaction can be; the upper limits of those we can directly study in the lab are in part determined by how long a graduate student can wait around before moving on to gainful employment.
Looking at the microscopic world of atoms and molecules themselves, the time scale again shifts us into an unreal world where numbers tend to lose their meaning. You can gain some appreciation of the duration of a nanosecond by noting that this is about how long it takes a beam of light to travel between your two outstretched hands. In a sense, the material foundations of chemistry itself are defined by time: neither a new element nor a molecule can be recognized as such unless it lasts long enough to have its “picture” taken through measurement of its distinguishing properties.
Range of Temperatures in Chemistry
Temperature, the measure of thermal intensity, spans the narrowest range of any of the base units of the chemist’s measure. The reason for this is tied into temperature’s meaning as an indicator of the intensity of thermal kinetic energy. Chemical change occurs when atoms are jostled into new arrangements, and the increasing weakness of these motions brings most chemistry to a halt as absolute zero is approached. At the upper end of the scale, thermal motions become sufficiently vigorous to shake molecules into atoms, and eventually, as in stars, strip off the electrons, leaving an essentially reaction-less gaseous fluid, or plasma, of bare nuclei (ions) and electrons.
We all know that temperature is expressed in degrees. What we frequently forget is that the degree is really an increment of temperature, a fixed fraction of the distance between two defined reference points on a temperature scale .
Range of Pressures in Chemistry
Pressure is the measure of the force exerted on a unit area of surface. Its SI units are therefore newtons per square meter, but we make such frequent use of pressure that a derived SI unit, the pascal , is commonly used:
\[1\, \text{Pa} = 1\, \text{N}\, \text{m}^{–2}\]
Pressure of the Atmosphere
The concept of pressure first developed in connection with studies of the atmosphere and vacuum carried out in the 17th century. The molecules of a gas are in a state of constant thermal motion, moving in straight lines until experiencing a collision that exchanges momentum between pairs of molecules and sends them bouncing off in other directions.
This leads to a completely random distribution of the molecular velocities both in speed and direction— or it would in the absence of the Earth’s gravitational field which exerts a tiny downward force on each molecule, giving motions in that direction a very slight advantage. In an ordinary container this effect is too small to be noticeable, but in a very tall column of air the effect adds up: the molecules in each vertical layer experience more downward-directed hits from those above it. The resulting force is quickly randomized, resulting in an increased pressure in that layer which is then propagated downward into the layers below.
At sea level, the total mass of the sea of air pressing down on each 1 cm 2 of surface is about 1034 g — equivalently, about 10340 kg on each square meter. The force (weight) that the Earth’s gravitational acceleration \(g\) exerts on this mass is:
\[\begin{align} f &= ma \nonumber \\[4pt] &= mg \nonumber \\[4pt] &= (10340\text{ kg})(9.81\text{ m s}^{–2}) \nonumber \\[4pt] &= 1.013 \times 10^5\text{ kg m s}^{–2} \nonumber \\[4pt] &= 1.013 \times 10^5 \text{ newtons} \end{align}\]
resulting in a pressure of
\[1.013 \times 10^5\, \text{N}\, \text{m}^{–2} = 1.013 \times 10^5 \text{ Pa}.\]
The actual pressure at sea level varies with atmospheric conditions, so it is customary to define standard atmospheric pressure as 1 atm = 1.013 × 10 5 Pa or 101 kPa.
Although the standard atmosphere (atm) is not an SI unit, it is still widely employed. In meteorology, the bar , exactly 1.000 × 10 5 Pa = 0.987 atm, is often used.
The Barometer
In 1643, the Italian physicist and mathematician Evangelista Torricelli invented a device to measure atmospheric pressure. The Torricellian barometer consists of a vertical glass tube closed at the top and open at the bottom. It is filled with a liquid, traditionally mercury, and is then inverted, with its open end immersed in the container of the same liquid. The liquid level in the tube will fall under its own weight until the downward force is balanced by the vertical force transmitted hydrostatically to the column by the downward force of the atmosphere acting on the liquid surface in the open container. Torricelli was also the first to recognize that the space above the mercury constituted a vacuum, and is credited with being the first to create a vacuum.
One standard atmosphere will support a column of mercury that is 76 cm high, so the “millimeter of mercury”, now more commonly known as the torr , has long been a common pressure unit in the sciences: 1 atm = 760 torr. | libretexts | 2025-03-17T19:53:06.281444 | 2013-10-03T01:38:02 | {
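The pressure units introduced in this section are simple multiples of one another, which a short sketch can make concrete (the helper function is ours for illustration):

```python
# Common pressure units, each expressed by its size in pascals.
ATM = 1.01325e5    # standard atmosphere, Pa
BAR = 1.000e5      # bar, Pa (exact)
TORR = ATM / 760   # 1 atm = 760 torr by definition

def convert(value, from_unit_pa, to_unit_pa):
    """Convert a pressure between two units given their sizes in Pa."""
    return value * from_unit_pa / to_unit_pa

print(round(convert(1.0, ATM, TORR), 6))   # -> 760.0
print(round(convert(1.0, BAR, ATM), 3))    # -> 0.987
```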
(Source: "2.3: The Measure of Matter" by Stephen Lower, CC BY 3.0.)

2.4: The Meaning of Measure
- Give an example of a measured numerical value, and explain what distinguishes it from a "pure" number.
- Give examples of random and systematic errors in measurements.
- Find the mean value of a series of similar measurements.
- State the principal factors that affect the difference between the mean value of a series of measurements, and the "true value" of the quantity being measured.
- Calculate the absolute and relative precisions of a given measurement, and explain why the latter is generally more useful.
- Distinguish between the accuracy and the precision of a measured value, and on the roles of random and systematic error.
In science, there are numbers and there are "numbers". What we ordinarily think of as a "number" and will refer to here as a pure number is just that: an expression of a precise value. The first of these you ever learned were the counting numbers, or integers ; later on, you were introduced to the decimal numbers , the rational numbers such as 1/3, and the irrational numbers such as π (pi), which cannot be expressed as exact decimal values. The other kind of numeric quantity that we encounter in the natural sciences is a measured value of something– the length or weight of an object, the volume of a fluid, or perhaps the reading on an instrument. Although we express these values numerically, it would be a mistake to regard them as the kind of pure numbers described above.
Confusing? Suppose our instrument has an indicator such as you see here. The pointer moves up and down so as to display the measured value on this scale. What number would you write in your notebook when recording this measurement? Clearly, the value is somewhere between 130 and 140 on the scale, but the graduations enable us to be more exact and place the value between 134 and 135. The indicator points more closely to the latter value, and we can go one more step by estimating the value as perhaps 134.8, so this is the value you would report for this measurement.
Now here’s the important thing to understand: although “134.8” is itself a number , the quantity we are measuring is almost certainly not 134.8 — at least, not exactly. The reason is obvious if you note that the instrument scale is such that we are barely able to distinguish between 134.7, 134.8, and 134.9. In reporting the value 134.8 we are effectively saying that the value is probably somewhere within the range 134.75 to 134.85. In other words, there is an uncertainty of ±0.05 unit in our measurement.
All measurements of quantities that can assume a continuous range of values (lengths, masses, volumes, etc.) consist of two parts: the reported value itself (never an exactly known number), and the uncertainty, or error , associated with the measurement. By “error”, we do not mean just outright mistakes, such as incorrect use of an instrument or failure to read a scale properly; although such gross errors do sometimes happen, they usually yield results that are sufficiently unexpected to call attention to themselves.
Scale-reading error
When you measure a volume or weight, you observe a reading on a scale of some kind, such as the one illustrated above. Scales, by their very nature, are limited to fixed increments of value, indicated by the division marks. The actual quantities we are measuring, in contrast, can vary continuously, so there is an inherent limitation in how finely we can discriminate between two values that fall between the marked divisions of the measuring scale. Scale-reading error is often classified as random error (see below), but it occurs so commonly that we treat it separately here.
The same problem remains if we substitute an instrument with a digital display; there will always be a point at which some value that lies between the two smallest divisions must arbitrarily toggle between two numbers on the readout display. This introduces an element of randomness into the value we observe, even if the "true" value remains unchanged. The more sensitive the measuring instrument, the less likely it is that two successive measurements of the same sample will yield identical results. In the example we discussed above, distinguishing between the values 134.8 and 134.9 may be too difficult to do in a consistent way, so two independent observers may record different values even when viewing the same reading.
Parallax error
One form of scale-reading error that often afflicts beginners in the science laboratory is failure to properly align the eye with the part of the scale you are reading. This gives rise to parallax error . Parallax refers to the change in the apparent position of an object when viewed from different points.
The most notorious example encountered in the introductory chemistry laboratory is failure to read the volume of a liquid properly in a graduated cylinder or burette. Getting all of their students trained to make sure their eye is level with the bottom of the meniscus is the lab instructors' hope and despair.
Proper use of a measuring device can help reduce the possibility of parallax error. For example, a length scale should be in direct contact with the object ( left ), not above it as on the right.
Analog meters (those having pointer needles) are most accurate when read at about 2/3 of the length of the scale. Analog-type meters, unlike those having digital readouts, are also subject to parallax error. Those intended for high-accuracy applications often have a mirrored arc along the scale in which a reflection of the pointer needle can be seen if the viewer is not properly aligned with the instrument.
- Random (indeterminate) error: Each measurement is also influenced by a myriad of minor events, such as building vibrations, electrical fluctuations, motions of the air, and friction in any moving parts of the instrument. These tiny influences constitute a kind of "noise" that also has a random character. Whether we are conscious of it or not, all measured values contain an element of random error.
- Systematic error : Suppose that you weigh yourself on a bathroom scale, not noticing that the dial reads “1.5 kg” even before you have placed your weight on it. Similarly, you might use an old ruler with a worn-down end to measure the length of a piece of wood. In both of these examples, all subsequent measurements, either of the same object or of different ones, will be off by a constant amount. Unlike random error, which is impossible to eliminate, systematic error (also known as determinate error ) is usually quite easy to avoid or compensate for, but only by a conscious effort in the conduct of the observation, usually by proper zeroing and calibration of the measuring instrument. However, once systematic error has found its way into the data, it can be very hard to detect.
Accuracy and precision
We tend to use these two terms interchangeably in our ordinary conversation, but in the context of scientific measurement, they have very different meanings:
- Accuracy refers to how closely the measured value of a quantity corresponds to its “true” value.
- Precision expresses the degree of reproducibility, or agreement between repeated measurements.
Accuracy, of course, is the goal we strive for in scientific measurements. Unfortunately, however, there is no obvious way of knowing how closely we have achieved it; the “true” value, whether it be of a well-defined quantity such as the mass of a particular object, or an average that pertains to a collection of objects, can never be known — and thus we can never recognize it if we are fortunate enough to find it.
Four Scenarios
A target on a dart board serves as a convenient analogy. The results of four sets of measurements (or four dart games) are illustrated below. Each set is made up of ten observations (or throws of darts.) Each red dot corresponds to the point at which a dart has hit the target — or alternatively, to the value of an individual observation. For measurements, assume the true value of the quantity being measured lies at the center of each target. Now consider the following four sets of results:
Right on! You win the dart game, and get an A grade on your measurement results.
Your results are beautifully replicable, but your measuring device may not have been calibrated properly or your observations suffer from a systematic error of some kind. Accuracy: F; precision: A; overall grade: C.
Extremely unlikely, and probably due to pure luck; the only reason for the accurate mean is that your misses mostly canceled out. Grade D.
Pretty sad; consider switching to music or politics — or have your eyes examined.
Note
When we make real measurements, there is no dart board or target that enables one to immediately judge the quality of the result. If we make only a few observations, we may be unable to distinguish between any of these scenarios.
The "true value" of a desired measurement can be quite elusive, and may not even be definable at all. This is a very common difficulty in the social sciences (as in opinion surveys), in medicine (evaluating the efficacy of a drug or other treatment), and in the natural sciences generally. The proper treatment of such problems is to make multiple observations of individual instances of what is being measured, and then use statistical methods to evaluate the results. In this introductory unit on measurement, we will defer discussion of concepts such as standard deviation and confidence intervals which become essential in courses at the second-year level and beyond. We will restrict our treatment here to the elementary considerations that are likely to be needed in a typical first-year course.
How many measurements do I need?
One measurement may be enough. If you wish to measure your height to the nearest centimeter or inch, or the volume of a liquid cooking ingredient to the nearest 1/8 “cup”, you don't ordinarily worry about random error. The error will still be present, but its magnitude will be such a small fraction of the value that it will not significantly affect whatever we are trying to achieve. Thus random error is not something we are concerned about in our daily lives. In the scientific laboratory, there are many contexts in which a single observation of a volume, mass, or instrument reading makes perfect sense; part of the "art" of science lies in making an informed judgment of how exact a given measurement must be. If we are measuring a directly observable quantity such as the weight of a solid or volume of a liquid, then a single measurement, carefully done and reported to a precision that is consistent with that of the measuring instrument, will usually be sufficient.
However more measurements are needed when there is no clearly-defined "true" value. A collection of objects (or of people) is known in statistics as a population . There is often a need to determine some quantity that describes a collection of objects. For example, a pharmaceutical researcher will need to determine the time required for half of a standard dose of a certain drug to be eliminated by the body, or a manufacturer of light bulbs might want to know how many hours a certain type of light bulb will operate before it burns out. In these cases a value for any individual sample can be determined easily enough, but since no two samples (patients or light bulbs) are identical, we are compelled to repeat the same measurement on multiple objects. And naturally, we get a variety of results, usually referred to as scatter . Even for a single object, there may be no clearly defined "true" value.
Suppose that you wish to determine the diameter of a certain type of coin. You make one measurement and record the results. If you then make a similar measurement along a different cross-section of the coin, you will likely get a different result. The same thing will happen if you make successive measurements on other coins of the same kind.
Here we are faced with two kinds of problems. First, there is the inherent limitation of the measuring device: we can never reliably measure more finely than the marked divisions on the ruler. Secondly, we cannot assume that the coin is perfectly circular; careful inspection will likely reveal some distortion resulting from a slight imperfection in the manufacturing process. In these cases, it turns out that there is no single, true value of the quantity we are trying to measure.
Mean, median, and range of a series of observations
There are a variety of ways to express the average, or central tendency of a series of measurements, with mean (more precisely, arithmetic mean) being most commonly employed. Our ordinary use of the term "average" also refers to the mean. These concepts are usually all you need as a first step in the analysis of data you are likely to collect in a first-year chemistry laboratory course.
The mean and its meaning
In our ordinary speech, the term "average" is synonymous with "mean". In statistics, however, "average" is a more general term that can refer to median, mode, and range, as well as to mean. When we obtain more than one result for a given measurement (either made repeatedly on a single sample, or more commonly, on different samples of the same material), the simplest procedure is to report the mean , or average value. The mean is defined mathematically as the sum of the values, divided by the number of measurements:
\[ x_m = \dfrac{\sum_{i=1}^n x_i}{n}\]
If you are not familiar with this notation, don’t let it scare you! It's no different from the average that you are likely already familiar with. Take a moment to see how it expresses the previous sentence; if there are \(n\) measurements, each yielding a value \(x_i\), then we sum over all \(i\) and divide by \(n\) to get the mean value \(x_m\). For example, if there are only two measurements, x 1 and x 2 , then the mean is
\[ x_m = \dfrac{x_1 + x_2}{2}\]
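As a quick sketch (not part of the original text), the definition of the mean translates directly into a few lines of Python; the coin-diameter values are invented for illustration:

```python
def mean(values):
    """Arithmetic mean: the sum of the values divided by their number."""
    return sum(values) / len(values)

# Five hypothetical replicate diameter measurements of the same coin, in mm
diameters = [24.1, 24.3, 24.2, 24.2, 24.0]
print(round(mean(diameters), 2))  # 24.16
```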
The general problem of determining the uncertainty of a calculated result turns out to be rather more complicated than you might think, and will not be treated here. There are, however, some very simple rules that are sufficient for most practical purposes.
Absolute and Relative Uncertainty
If you weigh out 74.1 mg of a solid sample on a laboratory balance that is accurate to within 0.1 milligram, then the actual weight of the sample is likely to fall somewhere in the range of 74.0 to 74.2 mg; the absolute uncertainty in the weight you observe is 0.2 mg, or ±0.1 mg. If you use the same balance to weigh out 3.2914 g of another sample, the actual weight is between 3.2913 g and 3.2915 g, and the absolute uncertainty is still ±0.1 mg.
Although the absolute uncertainties in these two examples are identical, we would probably consider the second measurement to be more precise because the uncertainty is a smaller fraction of the measured value. The relative uncertainties of the two results would be
0.2 ÷ 74.1 = 0.0027 (about 3 parts in 1000 (PPT), or 0.3%)
0.0002 ÷ 3.2914 = 0.000061 (about 0.06 PPT, or 0.006%)
Relative uncertainties are widely used to express the reliability of measurements, even those for a single observation, in which case the uncertainty is that of the measuring device. Relative uncertainties can be expressed as parts per hundred ( percent ), per thousand (PPT), per million, (PPM), and so on.
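The conversion from absolute to relative uncertainty is a one-line computation. Here is a sketch in Python (ours, not the author's) using the two weighings discussed above:

```python
def relative_uncertainty(absolute_unc, value):
    """Relative uncertainty: the absolute uncertainty as a fraction of the value."""
    return absolute_unc / value

# 74.1 mg weighed with a 0.2 mg spread, and 3.2914 g with the same 0.0002 g spread
r1 = relative_uncertainty(0.2, 74.1)        # both quantities in mg
r2 = relative_uncertainty(0.0002, 3.2914)   # both quantities in g
print(f"{r1:.4f}  (~{r1 * 1000:.1f} parts per thousand)")   # 0.0027  (~2.7 parts per thousand)
print(f"{r2:.6f}  (~{r2 * 1000:.2f} parts per thousand)")   # 0.000061  (~0.06 parts per thousand)
```

Although the absolute uncertainty is the same in both weighings, the second relative uncertainty is some forty times smaller, which is what makes it the more useful figure of merit.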
Rules for Propagating Uncertainty
1. Addition and subtraction, both numbers have uncertainties

The simplest method is to just add the absolute uncertainties.

Example: (6.3 ± 0.05 cm) – (2.1 ± 0.05 cm) = 4.2 ± 0.10 cm

However, this tends to over-estimate the uncertainty by assuming the worst possible case, in which the error in one of the quantities is at its maximum positive value while that of the other quantity is at its maximum negative value.
Statistical theory tells us that a more realistic value for the uncertainty of a sum or difference is found by adding the squares of each absolute uncertainty and then taking the square root of this sum. Applying this to the above values, we have

[(0.05)² + (0.05)²]^½ = 0.07, so the result is 4.2 ± 0.07 cm.
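The quadrature rule for sums and differences can be sketched in a few lines of Python (an illustration of ours, not from the original text):

```python
from math import sqrt

def uncertainty_sum_or_diff(*absolute_uncs):
    """Combine the absolute uncertainties of added/subtracted quantities in quadrature."""
    return sqrt(sum(u ** 2 for u in absolute_uncs))

# (6.3 ± 0.05 cm) - (2.1 ± 0.05 cm) = 4.2 cm, with combined uncertainty:
u = uncertainty_sum_or_diff(0.05, 0.05)
print(f"4.2 ± {u:.2f} cm")  # 4.2 ± 0.07 cm
```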
2. Multiplication or division, both numbers have uncertainties

Convert the absolute uncertainties into relative uncertainties and add these. Better still, add their squares and take the square root of the sum.
Problem Example 3
Estimate the absolute error in the density calculated by dividing (12.7 ± .05 g) by (10.0 ± 0.02 mL).
Solution: Relative uncertainty of the mass: 0.05 / 12.7 = 0.0039 = 0.39%
Relative uncertainty of the volume: 0.02 / 10.0 = 0.002 = 0.2%
Relative uncertainty of the density: [(0.39)² + (0.2)²]^½ = 0.44%
Mass ÷ volume: (12.7 g) ÷ (10.0 mL) = 1.27 g mL –1
Absolute uncertainty of the density: (±0.0044) × (1.27 g mL⁻¹) = ±0.0056 g mL⁻¹

3. Multiplication or division by a pure number

This is the trivial case: simply multiply or divide the uncertainty by the pure number.
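Rule 2 can be checked against Problem Example 3 with a short script (a sketch of ours; the function name is our own):

```python
from math import sqrt

def uncertainty_mul_or_div(result, *relative_uncs):
    """Absolute uncertainty of a product or quotient: combine the relative
    uncertainties in quadrature, then scale by the computed result."""
    combined_rel = sqrt(sum(r ** 2 for r in relative_uncs))
    return combined_rel * result

# Density from (12.7 ± 0.05 g) / (10.0 ± 0.02 mL)
density = 12.7 / 10.0  # 1.27 g/mL
u = uncertainty_mul_or_div(density, 0.05 / 12.7, 0.02 / 10.0)
print(f"{density} ± {u:.4f} g/mL")  # 1.27 ± 0.0056 g/mL
```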
3: Measuring Matter
The natural sciences begin with observation , and this usually involves numerical measurements of quantities such as length, volume, density, and temperature. Most of these quantities have units of some kind associated with them, and these units must be retained when you use them in calculations. All measuring units can be defined in terms of a very small number of fundamental ones that, through "dimensional analysis", provide insight into their derivation and meaning, and must be understood when converting between different unit systems.
- 3.1: Units and Dimensions
- The natural sciences begin with observation, and this usually involves numerical measurements of quantities such as length, volume, density, and temperature. Most of these quantities have units of some kind associated with them, and these units must be retained when you use them in calculations. All measuring units can be defined in terms of a very small number of fundamental ones that, through "dimensional analysis", provide insight into their derivation and meaning.
- 3.2: The Meaning of Measure
- The "true value" of a measured quantity, if it exists at all, will always elude us; the best we can do is learn how to make meaningful use of the numbers we read off of our measuring devices. The other kind of numeric quantity that we encounter in the natural sciences is a measured value of something– the length or weight of an object, the volume of a fluid, or perhaps the reading on an instrument. Although we express these values numerically, it would be a mistake to regard them as pure numbers.
- 3.3: Significant Figures and Rounding off
- The numerical values we deal with in science (and in many other aspects of life) represent measurements whose values are never known exactly. Our pocket-calculators or computers don't know this; they treat the numbers we punch into them as "pure" mathematical entities, with the result that the operations of arithmetic frequently yield answers that are physically ridiculous even though mathematically correct.
- 3.4: Reliability of a measurement
- In this day of pervasive media, we are continually being bombarded with data of all kinds— public opinion polls, advertising hype, government reports etc. Often. the purveyors of this information are hoping to “sell” us on a product (known as “spin”.) In Science, we do not have this option: we collect data and make measurements in order to get closer to whatever “truth” we are seeking, but it's not really "science" until others can have confidence in the reliability of our measurements.
- 3.5: Drawing Conclusions from Data
- This final lesson on measurement will examine these questions and introduce you to some of the methods of dealing with data. This stuff is important not only for scientists, but also for any intelligent citizen who wishes to independently evaluate the flood of numbers served up by advertisers, politicians, "experts", and yes— by other scientists.
Contributors and Attributions
Stephen Lower, Professor Emeritus ( Simon Fraser U. ) Chem1 Virtual Textbook
- Thumbnail: Unsplash License (William Warby via Unsplash)
3.1: Units and Dimensions
Make sure you thoroughly understand the following essential ideas. It is especially important that you know the precise meanings of all the italicized terms in the context of this topic.
- Describe the names and abbreviations of the SI base units and the SI decimal prefixes .
- Define the liter and the metric ton in these units.
- Explain the meaning and use of unit dimensions ; state the dimensions of volume .
- State the quantities that are needed to define a temperature scale , and show how these apply to the Celsius, Kelvin, and Fahrenheit temperature scales.
- Explain how a Torricellian barometer works.
Have you ever estimated a distance by “stepping it off”— that is, by counting the number of steps required to take you a certain distance? Or perhaps you have used the width of your hand, or the distance from your elbow to a fingertip to compare two dimensions. If so, you have engaged in what is probably the first kind of measurement ever undertaken by primitive mankind. The results of a measurement are always expressed on some kind of a scale that is defined in terms of a particular kind of unit . The first scales of distance were likely related to the human body, either directly (the length of a limb) or indirectly (the distance a man could walk in a day).
As civilization developed, a wide variety of measuring scales came into existence, many for the same quantity (such as length), but adapted to particular activities or trades. Eventually, it became apparent that in order for trade and commerce to be possible, these scales had to be defined in terms of standards that would allow measures to be verified, and, when expressed in different units (bushels and pecks, for example), to be correlated or converted.
Over the centuries, hundreds of measurement units and scales have developed in the many civilizations that achieved some literate means of recording them. Some, such as those used by the Aztecs, fell out of use and were largely forgotten as these civilizations died out. Other units, such as the various systems of measurement that developed in England, achieved prominence through extension of the Empire and widespread trade; many of these were confined to specific trades or industries. The examples shown here are only some of those that have been used to measure length or distance. The history of measuring units provides a fascinating reflection on the history of industrial development.
The most influential event in the history of measurement was undoubtedly the French Revolution and the Age of Rationality that followed. This led directly to the metric system that attempted to do away with the confusing multiplicity of measurement scales by reducing them to a few fundamental ones that could be combined in order to express any kind of quantity. The metric system spread rapidly over much of the world, and eventually even to England and the rest of the U.K. when that country established closer economic ties with Europe in the latter part of the 20th Century. The United States is presently the only major country in which “metrication” has made little progress within its own society, probably because of its relative geographical isolation and its vibrant internal economy.
Science, being a truly international endeavor, adopted metric measurement very early on; engineering and related technologies have been slower to make this change, but are gradually doing so. Even within the metric system, however, a variety of units were employed to measure the same fundamental quantity; for example, energy could be expressed within the metric system in units of ergs, electron-volts, joules, and two kinds of calories. This led, in the mid-1960s, to the adoption of a more basic set of units, the Système International (SI) units that are now recognized as the standard for science and, increasingly, for technology of all kinds.
The SI base Units
In principle, any physical quantity can be expressed in terms of only seven base units . Each base unit is defined by a standard which is described in the NIST Web site .
| quantity | unit | abbreviation |
|---|---|---|
| length | meter | m |
| mass | kilogram | kg |
| time | second | s |
| temperature (absolute) | kelvin | K |
| amount of substance | mole | mol |
| electric current | ampere | A |
| luminous intensity | candela | cd |
A few special points about some of these units are worth noting:
- The base unit of mass is unique in that a decimal prefix (see below) is built-in to it; that is, it is not the gram , as you might expect.
- The base unit of time is the only one that is not metric. Numerous attempts to make it so have never garnered any success; we are still stuck with the 24:60:60 system that we inherited from ancient times. (The ancient Egyptians of around 1500 BCE invented the 12-hour day, and the 60:60 part is a remnant of the base-60 system that the Sumerians used for their astronomical calculations around 2000 BCE.)
- Of special interest to Chemistry is the mole , the base unit for expressing the quantity of matter . Although the number is not explicitly mentioned in the official definition, chemists define the mole as Avogadro’s number (approximately \(6.02 \times 10^{23}\)) of anything.
The SI decimal prefixes
Owing to the wide range of values that quantities can have, it has long been the practice to employ prefixes such as milli and mega to indicate decimal fractions and multiples of metric units. As part of the SI standard, this system has been extended and formalized.
| prefix | abbreviation | multiplier | prefix | abbreviation | multiplier |
|---|---|---|---|---|---|
| exa | E | 10¹⁸ | deci | d | 10⁻¹ |
| peta | P | 10¹⁵ | centi | c | 10⁻² |
| tera | T | 10¹² | milli | m | 10⁻³ |
| giga | G | 10⁹ | micro | μ | 10⁻⁶ |
| mega | M | 10⁶ | nano | n | 10⁻⁹ |
| kilo | k | 10³ | pico | p | 10⁻¹² |
| hecto | h | 10² | femto | f | 10⁻¹⁵ |
| deca | da | 10 | atto | a | 10⁻¹⁸ |
For a more complete table, see the NIST page on SI prefixes
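A sketch of how the prefix table might be used programmatically (the lookup and helper function are our own illustration, not from the text):

```python
# Decimal multipliers for a subset of the SI prefixes listed above
SI_PREFIXES = {
    "T": 1e12, "G": 1e9, "M": 1e6, "k": 1e3, "h": 1e2, "da": 1e1,
    "d": 1e-1, "c": 1e-2, "m": 1e-3, "µ": 1e-6, "n": 1e-9, "p": 1e-12,
}

def to_base_unit(value, prefix=""):
    """Convert a prefixed quantity to its unprefixed base unit, e.g. 250 mm -> 0.25 m."""
    return value * SI_PREFIXES.get(prefix, 1.0)

print(to_base_unit(250, "m"))  # 0.25
print(to_base_unit(3.5, "k"))  # 3500.0
```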
Non-SI Units
There is a category of units that are “honorary” members of the SI in the sense that it is acceptable to use them along with the base units defined above. These include such mundane units as the hour, minute, and degree (of angle), etc., but the three shown here are of particular interest to chemistry, and you will need to know them.
- liter (\(L\)) \[1\, L = 1\, dm^3 = 10^{–3} m^3 \nonumber\]
- metric ton (\(t\)) \[1\, t = 10^3 kg \nonumber\]
- atomic mass unit (\(u\)) \[1\, u = 1.66054×10^{–27}\, kg \nonumber\]
Most of the physical quantities we actually deal with in science and also in our daily lives, have units of their own: volume, pressure, energy and electrical resistance are only a few of hundreds of possible examples. It is important to understand, however, that all of these can be expressed in terms of the SI base units; they are consequently known as derived units .
In fact, most physical quantities can be expressed in terms of one or more of the following five fundamental units:
| mass | length | time | electric charge | temperature |
|---|---|---|---|---|
| M | L | T | Q | Θ (theta) |
Consider, for example, the unit of volume , which we denote as V. To measure the volume of a rectangular box, we need to multiply the lengths as measured along the three coordinates:
\[V = x · y · z\]
We say, therefore, that volume has the dimensions of length-cubed:
\[dim.V = L^3\]
Thus the units of volume will be m³ (in the SI) or cm³, ft³ (English), etc. Moreover, any formula that calculates a volume must contain within it the L³ dimension; thus the volume of a sphere is (4/3)πr³.
Find the dimensions of energy.
Solution
When mechanical work is performed on a body, its energy increases by the amount of work done, so the two quantities are equivalent and we can concentrate on work. The latter is the product of the force applied to the object and the distance it is displaced. From Newton’s law, force is the product of mass and acceleration, and the latter is the rate of change of velocity, typically expressed in meters per second per second. Combining these quantities and their dimensions yields

\[dim.E = (M \cdot L\,T^{–2}) \cdot L = M\,L^2\,T^{–2}\]
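Dimensional bookkeeping of this kind is easy to mechanize. In this sketch (ours, not the author's), each quantity's dimensions are a (mass, length, time) exponent tuple, and multiplying quantities adds the exponents:

```python
def dim_mul(a, b):
    """Multiply two physical quantities: their dimension exponents add."""
    return tuple(x + y for x, y in zip(a, b))

# Exponent tuples for the (M, L, T) base dimensions
MASS, LENGTH, TIME = (1, 0, 0), (0, 1, 0), (0, 0, 1)
PER_TIME = (0, 0, -1)

velocity = dim_mul(LENGTH, PER_TIME)        # L T^-1
acceleration = dim_mul(velocity, PER_TIME)  # L T^-2
force = dim_mul(MASS, acceleration)         # M L T^-2  (Newton's law)
energy = dim_mul(force, LENGTH)             # work = force x distance
print(energy)  # (1, 2, -2), i.e. M L^2 T^-2
```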
Units and Their Ranges in Chemistry
In this section, we will look at some of the quantities that are widely encountered in Chemistry, and at the units in which they are commonly expressed. In doing so, we will also consider the actual range of values these quantities can assume, both in nature in general, and also within the subset of nature that chemistry normally addresses. In looking over the various units of measure, it is interesting to note that their unit values are set close to those encountered in everyday human experience
Mass and Weight
These two quantities are widely confused. Although they are often used synonymously in informal speech and writing, they have different dimensions: weight is the force exerted on a mass by the local gravitational field :
\[f = m a = m g\]
where g is the acceleration of gravity. While the nominal value of the latter quantity is 9.80 m s –2 at the Earth’s surface, its exact value varies locally. Because it is a force, the SI unit of weight is properly the newton , but it is common practice (except in physics classes!) to use the terms "weight" and "mass" interchangeably, so the units kilograms and grams are acceptable in almost all ordinary laboratory contexts.
The range of masses spans 90 orders of magnitude, more than any other unit. The range that chemistry ordinarily deals with has greatly expanded since the days when a microgram was an almost inconceivably small amount of material to handle in the laboratory; this lower limit has now fallen to the atomic level with the development of tools for directly manipulating these particles. The upper level reflects the largest masses that are handled in industrial operations, but in the recently developed fields of geochemistry and environmental chemistry, the range can be extended indefinitely. Flows of elements between the various regions of the environment (atmosphere to oceans, for example) are often quoted in teragrams.
Length
Chemists tend to work mostly in the moderately-small part of the distance range. Those who live in the lilliputian world of crystal- and molecular structures and atomic radii find the picometer a convenient currency, but one still sees the older non-SI unit called the Ångstrom used in this context; 1Å = 10 –10 m = 100pm. Nanotechnology, the rage of the present era, also resides in this realm. The largest polymeric molecules and colloids define the top end of the particulate range; beyond that, in the normal world of doing things in the lab, the centimeter and occasionally the millimeter commonly rule.
Time
For humans, time moves by the heartbeat; beyond that, it is the motions of our planet that count out the hours, days, and years that eventually define our lifetimes. Beyond the few thousands of years of history behind us, those years-to-the-powers-of-tens that are the fare for such fields as evolutionary biology, geology, and cosmology, cease to convey any real meaning for us. Perhaps this is why so many people are not very inclined to accept their validity.
Most of what actually takes place in the chemist’s test tube operates on a far shorter time scale, although there is no limit to how slow a reaction can be; the upper limits of those we can directly study in the lab are in part determined by how long a graduate student can wait around before moving on to gainful employment. Looking at the microscopic world of atoms and molecules themselves, the time scale again shifts us into an unreal world where numbers tend to lose their meaning. You can gain some appreciation of the duration of a nanosecond by noting that this is about how long it takes a beam of light to travel between your two outstretched hands. In a sense, the material foundations of chemistry itself are defined by time: neither a new element nor a molecule can be recognized as such unless it lasts long enough to have its “picture” taken through measurement of its distinguishing properties.
Temperature
Temperature, the measure of thermal intensity, spans the narrowest range of any of the base units of the chemist’s measure. The reason for this is tied into temperature’s meaning as a measure of the intensity of thermal kinetic energy. Chemical change occurs when atoms are jostled into new arrangements, and the weakness of these motions brings most chemistry to a halt as absolute zero is approached. At the upper end of the scale, thermal motions become sufficiently vigorous to shake molecules into atoms, and eventually, as in stars, strip off the electrons, leaving an essentially reaction-less gaseous fluid, or plasma, of bare nuclei (ions) and electrons.
We all know that temperature is expressed in degrees. What we frequently forget is that the degree is really an increment of temperature, a fixed fraction of the distance between two defined reference points on a temperature scale .
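The three common scales mentioned in the objectives differ only in the size of the degree and the location of the zero point. A minimal conversion sketch (an illustration of ours, not from the text):

```python
def celsius_to_kelvin(t_c):
    """The kelvin and the Celsius degree are the same size; only the zero point differs."""
    return t_c + 273.15

def celsius_to_fahrenheit(t_c):
    """A Fahrenheit degree is 5/9 the size of a Celsius degree, with 0 °C at 32 °F."""
    return t_c * 9 / 5 + 32

print(celsius_to_kelvin(25.0))       # 298.15
print(celsius_to_fahrenheit(100.0))  # 212.0
```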
Pressure
Pressure is the measure of the force exerted on a unit area of surface. Its SI units are therefore newtons per square meter, but we make such frequent use of pressure that a derived SI unit, the pascal , is commonly used:
1 Pa = 1 N m –2
The concept of pressure first developed in connection with studies relating to the atmosphere and vacuum that were first carried out in the 17th century. The molecules of a gas are in a state of constant thermal motion, moving in straight lines until experiencing a collision that exchanges momentum between pairs of molecules and sends them bouncing off in other directions. This leads to a completely random distribution of the molecular velocities both in speed and direction— or it would in the absence of the Earth’s gravitational field which exerts a tiny downward force on each molecule, giving motions in that direction a very slight advantage. In an ordinary container this effect is too small to be noticeable, but in a very tall column of air the effect adds up: the molecules in each vertical layer experience more downward-directed hits from those above it. The resulting force is quickly randomized, resulting in an increased pressure in that layer which is then propagated downward into the layers below.
At sea level, the total mass of the sea of air pressing down on each 1-cm² of surface is about 1034 g, or 10340 kg m⁻². The force (weight) that the Earth’s gravitational acceleration g exerts on this mass is
f = ma = mg = (10340 kg)(9.80 m s⁻²) = 1.013 × 10⁵ kg m s⁻² = 1.013 × 10⁵ newtons
resulting in a pressure of
\[1.013 × 10^5 \,N\, m^{–2} = 1.013 × 10^5\, Pa.\]
The actual pressure at sea level varies with atmospheric conditions, so it is customary to define standard atmospheric pressure as 1 atm = 1.013 × 10⁵ Pa, or 101.3 kPa. Although the standard atmosphere is not an SI unit, it is still widely employed. In meteorology, the bar , exactly 1.000 × 10⁵ Pa (0.987 atm), is often used.
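The sea-level estimate above can be reproduced in a few lines (a sketch of ours, taking g = 9.80 m s⁻², the nominal value quoted earlier):

```python
mass_per_m2 = 10340   # kg of air above each square meter of surface at sea level
g = 9.80              # m s^-2, nominal acceleration of gravity

pressure = mass_per_m2 * g   # force per unit area: N m^-2 = Pa
print(f"{pressure:.3e} Pa")  # 1.013e+05 Pa, i.e. about one standard atmosphere
```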
In the mid-17th century, the Italian physicist and mathematician Evangelista Torricelli invented a device to measure atmospheric pressure. The Torricellian barometer consists of a vertical glass tube closed at the top and open at the bottom. It is filled with a liquid, traditionally mercury, and is then inverted, with its open end immersed in a container of the same liquid. The liquid level in the tube will fall under its own weight until the downward force is balanced by the vertical force transmitted hydrostatically to the column by the downward force of the atmosphere acting on the liquid surface in the open container. Torricelli was also the first to recognize that the space above the mercury constituted a vacuum, and is credited with being the first to create a vacuum.
One standard atmosphere will support a column of mercury that is 76 cm high, so the “millimeter of mercury”, now more commonly known as the torr , has long been a common pressure unit in the sciences: 1 atm = 760 torr.
3.2: The Meaning of Measure
Make sure you thoroughly understand the following essential ideas. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.
- Give an example of a measured numerical value, and explain what distinguishes it from a "pure" number.
- Give examples of random and systematic errors in measurements.
- Find the mean value of a series of similar measurements.
- State the principal factors that affect the difference between the mean value of a series of measurements, and the "true value" of the quantity being measured.
- Calculate the absolute and relative precisions of a given measurement, and explain why the latter is generally more useful.
- Distinguish between the accuracy and the precision of a measured value, and describe the roles that random and systematic error play in each.
Consider measuring a fish with a pair of calipers (the original figure, an image by Stephen Winsor used with permission of the artist, is omitted here). The exact distance between the upper lip and the tip of the dorsal fin will forever be hidden in a fog of uncertainty: the angle at which we hold the calipers and the force with which we close them on the object will never be exactly reproducible. A more fundamental limitation occurs whenever we try to compare a continuously-varying quantity such as distance with the fixed intervals on a measuring scale; between 59 and 60 mils there is the same infinity of distances that exists between 59 and 60 miles!
The "true value" of a measured quantity, if it exists at all, will always elude us; the best we can do is learn how to make meaningful use (and to avoid mis-use!) of the numbers we read off of our measuring devices.
Uncertainty is Certain!
In science, there are numbers and there are "numbers". What we ordinarily think of as a "number", and will refer to here as a pure number , is just that: an expression of a precise value. The first of these you ever learned were the counting numbers, or integers ; later on, you were introduced to the decimal numbers , the rational numbers such as 1/3, and the irrational numbers such as π (pi), which cannot be expressed as exact decimal values.
The other kind of numeric quantity that we encounter in the natural sciences is a measured value of something– the length or weight of an object, the volume of a fluid, or perhaps the reading on an instrument. Although we express these values numerically, it would be a mistake to regard them as the kind of pure numbers described above.
Confusing? Suppose our instrument has a pointer that moves along a graduated scale to display the measured value. What number would you write in your notebook when recording this measurement? Suppose the pointer lies somewhere between 130 and 140 on the scale, but the graduations enable us to be more exact and place the value between 134 and 135. The pointer is closer to the latter value, and we can go one more step by estimating the value as perhaps 134.8, so this is the value you would report for this measurement.
Now here’s the important thing to understand: although “134.8” is itself a number , the quantity we are measuring is almost certainly not 134.8— at least, not exactly. The reason is obvious if you note that the instrument scale is such that we are barely able to distinguish between 134.7, 134.8, and 134.9. In reporting the value 134.8 we are effectively saying that the value is probably somewhere within the range 134.75 to 134.85. In other words, there is an uncertainty of ±0.05 unit in our measurement.
All measurements of quantities that can assume a continuous range of values (lengths, masses, volumes, etc.) consist of two parts : the reported value itself (never an exactly known number), and the uncertainty associated with the measurement.
All measurements are subject to error which contributes to the uncertainty of the result. By “error”, we do not mean just outright mistakes, such as incorrect use of an instrument or failure to read a scale properly; although such gross errors do sometimes happen, they usually yield results that are sufficiently unexpected to call attention to themselves.
When you measure a volume or weight, you observe a reading on a scale of some kind. Scales, by their very nature, are limited to fixed increments of value, indicated by the division marks. The actual quantities we are measuring, in contrast, can vary continuously, so there is an inherent limitation in how finely we can discriminate between two values that fall between the marked divisions of the measuring scale. The same problem remains if we substitute an instrument with a digital display; there will always be some point at which some value that lies between the two smallest divisions must arbitrarily toggle between two numbers on the readout display. This introduces an element of randomness into the value we observe, even if the "true" value remains unchanged.
The more sensitive the measuring instrument, the less likely it is that two successive measurements of the same sample will yield identical results. In the example we discussed above, distinguishing between the values 134.8 and 134.9 may be too difficult to do in a consistent way, so two independent observers may record different values even when viewing the same reading. Each measurement is also influenced by a myriad of minor events, such as building vibrations, electrical fluctuations, motions of the air, and friction in any moving parts of the instrument. These tiny influences constitute a kind of "noise" that also has a random character. Whether we are conscious of it or not, all measured values contain an element of random error.
Suppose that you weigh yourself on a bathroom scale, not noticing that the dial reads “1.5 kg” even before you have placed your weight on it. Similarly, you might use an old ruler with a worn-down end to measure the length of a piece of wood. In both of these examples, all subsequent measurements, either of the same object or of different ones, will be off by a constant amount. Unlike random error, which is impossible to eliminate, these systematic errors are usually quite easy to avoid or compensate for, but only by a conscious effort in the conduct of the observation, usually by proper zeroing and calibration of the measuring instrument. However, once systematic error has found its way into the data, it can be very hard to detect.
The Difference Between Accuracy and Precision
We tend to use these two terms interchangeably in our ordinary conversation, but in the context of scientific measurement, they have very different meanings:
- Accuracy refers to how closely the measured value of a quantity corresponds to its “true” value.
- Precision expresses the degree of reproducibility, or agreement between repeated measurements.
Accuracy, of course, is the goal we strive for in scientific measurements. Unfortunately, however, there is no obvious way of knowing how closely we have achieved it; the “true” value, whether it be of a well-defined quantity such as the mass of a particular object, or an average that pertains to a collection of objects, can never be known– and thus we can never recognize it if we are fortunate enough to find it.
Note carefully that when we make real measurements, there is no dart board or target that enables one to immediately judge the quality of the result. If we make only a few observations, we may be unable to distinguish among the four possible combinations of high or low accuracy with high or low precision simply by examining the results of the measurements. We can, however, judge the precision of the results, and then apply simple statistics to estimate how closely the mean value is likely to reflect the true value in the absence of systematic error.
More than One Answer: Replicate Measurements
If you wish to measure your height to the nearest centimeter or inch, or the volume of a liquid cooking ingredient to the nearest “cup”, you can probably do so without having to worry about random error. The error will still be present, but its magnitude will be such a small fraction of the value that it will not be detected. Thus random error is not something we worry about too much in our daily lives.
If we are making scientific observations, however, we need to be more careful, particularly if we are trying to exploit the full sensitivity of our measuring instruments in order to achieve a result that is as reliable as possible. If we are measuring a directly observable quantity such as the weight or volume of an object, then a single measurement, carefully done and reported to a precision that is consistent with that of the measuring instrument, will usually be sufficient.
More commonly, however, we are called upon to find the value of some quantity whose determination depends on several other measured values, each of which is subject to its own sources of error. Consider a common laboratory experiment in which you must determine the percentage of acid in a sample of vinegar by observing the volume of sodium hydroxide solution required to neutralize a given volume of the vinegar. You carry out the experiment and obtain a value. Just to be on the safe side, you repeat the procedure on another identical sample from the same bottle of vinegar. If you have actually done this in the laboratory, you will know it is highly unlikely that the second trial will yield the same result as the first. In fact, if you run a number of replicate (that is, identical in every way) determinations, you will probably obtain a scatter of results.
To understand why, consider all the individual measurements that go into each determination: the volume of the vinegar sample, your judgment of the point at which the vinegar is neutralized, and the volume of solution used to reach this point. And how accurately do you know the concentration of the sodium hydroxide solution, which was made up by dissolving a measured weight of the solid in water and then adding more water until the solution reached some measured volume? Each of these many observations is subject to random error; because such errors are random, they can occasionally cancel out, but for most trials we will not be so lucky — hence the scatter in the results.
A similar difficulty arises when we need to determine some quantity that describes a collection of objects. For example, a pharmaceutical researcher will need to determine the time required for half of a standard dose of a certain drug to be eliminated by the body, or a manufacturer of light bulbs might want to know how many hours a certain type of light bulb will operate before it burns out. In these cases a value for any individual sample can be determined easily enough, but since no two samples (patients or light bulbs) are identical, we are compelled to repeat the same measurement on multiple samples, and once again, are faced with a scattering of results.
As a final example, suppose that you wish to determine the diameter of a certain type of coin. You make one measurement and record the results. If you then make a similar measurement along a different cross-section of the coin, you will likely get a different result. The same thing will happen if you make successive measurements on other coins of the same kind.
Here we are faced with two kinds of problems. First, there is the inherent limitation of the measuring device: we can never reliably measure more finely than the marked divisions on the ruler. Secondly, we cannot assume that the coin is perfectly circular; careful inspection will likely reveal some distortion resulting from a slight imperfection in the manufacturing process. In these cases, it turns out that there is no single, true value of either quantity we are trying to measure.
Mean, Median, and Range of a Series of Observations
There are a variety of ways to express the average, or central tendency of a series of measurements, with mean (more precisely, arithmetic mean) being most commonly employed. Our ordinary use of the term "average" also refers to the mean. When we obtain more than one result for a given measurement (either made repeatedly on a single sample, or more commonly, on different samples), the simplest procedure is to report the mean, or average value. The mean is defined mathematically as the sum of the values, divided by the number of measurements:
\[x_m = \dfrac{\displaystyle \sum_i x_i}{n} \label{mean}\]
If you are not familiar with this notation, don’t let it scare you! Take a moment to see how it expresses the previous sentence; if there are \(n\) measurements, each yielding a value \(x_i\), then we sum over all \(i\) and divide by \(n\) to get the mean value \(x_m\). For example, if there are only two measurements, \(x_1\) and \(x_2\), then the mean is \((x_1 + x_2)/2\).
Calculate the mean value of the following set of eight measurements: 10.2, 10.3, 10.4, 10.4, 10.4, 10.5, 10.5, 10.8.
Solution
There are eight data points (10.4 was found in three trials, 10.5 in two), so \(n=8\). The mean is (via Equation \ref{mean}):
\[ \dfrac{10.2+10.3+(3 \times 10.4) + 10.5+10.5+10.8}{8} = 10.4 \nonumber\]
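The same calculation is easy to check programmatically; the following short Python sketch (illustrative, not part of the original text) reproduces the worked example:

```python
# Mean of the eight replicate measurements from the worked example above.
data = [10.2, 10.3, 10.4, 10.4, 10.4, 10.5, 10.5, 10.8]
mean = sum(data) / len(data)   # 10.4375
print(round(mean, 1))          # reported to the same precision as the data: 10.4
```

Note that the unrounded mean (10.4375) carries more digits than the data justify; reporting it to one decimal place matches the precision of the individual measurements.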
Range
The range of a data set is the difference between its smallest and largest values. As such, its value reflects the precision of the result. For example, the data sets [10.3, 10.4, 10.5] and [10.0, 10.4, 10.8] have the same mean (10.4), but ranges of 0.2 and 0.8; the set having the smaller range is clearly more precise.
Median

If you arrange the list of measured values in order of their magnitude, the median is the one that has as many values above it as below it.
Example: for the data set [22 23 23 24 26 28], which contains an even number of values, the median is the average of the two middle values: (23 + 24)/2 = 23.5.
For an odd number of values n, the median is the [(n + 1)/2]th member of the set. Thus for [22 23 23 24 27], (n + 1)/2 = 3, so the third value, 23, is the median.
Mode
This refers to the value that is observed most frequently in a series of measurements. If two or more values tie for the highest frequency, then there can be multiple modes. Mode is most useful in describing larger data sets.
Example: for the data set [22 23 23 24 26 26], the modes are 23 and 26.
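Python's standard `statistics` module computes all of these measures directly; `multimode` (available since Python 3.8) returns every value tied for the highest frequency. A quick sketch using the data set above:

```python
# Mean, median, modes, and range of the example data set.
import statistics

data = [22, 23, 23, 24, 26, 26]
print(statistics.mean(data))       # 24
print(statistics.median(data))     # 23.5 (even n: average of the two middle values)
print(statistics.multimode(data))  # [23, 26]
print(max(data) - min(data))       # 4, the range
```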
The more observations, the more reliable the mean value. If this is not immediately obvious, think about it this way. You would not want to predict the outcome of the next election on the basis of interviews with only two or three voters; you would want a sample of ten to twenty at a minimum, and if the election is an important national one, a fair sample would require hundreds to thousands of people distributed over the entire geographic area and representing a variety of socio-economic groups. Similarly, you would want to test a large number of light bulbs in order to estimate the mean lifetime of bulbs of that type.
Statistical theory tells us that the more samples we have, the greater will be the chance that the mean of the results will correspond to the “true” value, which in this case would be the mean obtained if samples could be taken from the entire population (of people or of light bulbs.)
This point can be better appreciated by comparing two sets of data: one consisting of only three points, whose mean can fall quite far from the "true" value (arbitrarily chosen for this example), and another composed of nine measurements, for which the deviation of the mean from the true value is much smaller.
Deviation of the mean from the "true value" becomes smaller when more measurements are made.
Plots and points
A similar problem arises when you try to fit a curve to a series of plotted points. Suppose, for example, that a smooth curve represents the true relationship between the quantities on the y-axis (dependent variable) and those on the x-axis (independent variable), as derived from seven well-spaced data points. Had only four or three of those points been recorded, a false straight-line relationship might well have been obtained instead.
Absolute and Relative Uncertainty
If you weigh out 74.1 mg of a solid sample on a laboratory balance that is accurate to within 0.1 milligram, then the actual weight of the sample is likely to fall somewhere in the range of 74.0 to 74.2 mg; the absolute uncertainty in the weight you observe is 0.2 mg, or ±0.1 mg. If you use the same balance to weigh out 3.2914 g of another sample, the actual weight is between 3.2913 g and 3.2915 g, and the absolute uncertainty is still ±0.1 mg. Thus the absolute uncertainty is unrelated to the magnitude of the observed value.
When expressing the uncertainty of a value given in scientific notation, the exponential part should include both the value itself and the uncertainty. An example of the proper form would be \((3.19 \pm 0.02) \times 10^4\ \mathrm{m}\).
Although the absolute uncertainties in these two examples are identical, we would probably consider the second measurement to be more precise because the uncertainty is a smaller fraction of the measured value. The ratio of the absolute uncertainty to the measured value is known as the relative uncertainty.
Calculate the relative uncertainties of the following measurements:

- 74.1 ± 0.1 mg,
- 3.2914 ± 0.0001 g.
Solution
- \[\dfrac{0.2\, mg}{74.1\, mg} = 0.0027\, \text{or} \, 0.003 \nonumber\] (note that the quotient is dimensionless); this can be expressed as 0.3% or 3 parts per thousand.
- \[\dfrac{0.0002 \,g}{3.2914\, g} = 6.1 \times 10^{-5} \, \text{or roughly} \,6 \times 10^{-5} \nonumber\], which we can express as \(6 \times 10^{-3}\%\) (0.006 parts per hundred), or about 60 PPM.
Relative uncertainties are widely used to express the reliability of measurements, even those for a single observation, in which case the uncertainty is that of the measuring device. Relative uncertainties can be expressed as parts per hundred (percent), per thousand (PPT), per million, (PPM), and so on.
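Converting between these forms is just a matter of scaling the same dimensionless ratio. A minimal Python sketch, using the 74.1 ± 0.1 mg example (whose full range is 0.2 mg):

```python
# Absolute uncertainty -> relative uncertainty, in several common forms.
value, abs_unc = 74.1, 0.2       # mg; both quantities share the same unit
rel = abs_unc / value            # dimensionless ratio, about 0.0027
percent = rel * 100              # parts per hundred, about 0.27
ppm = rel * 1e6                  # parts per million, about 2700
print(f"{rel:.4f}  {percent:.2f}%  {ppm:.0f} ppm")
```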
Propagation of Error
We are often called upon to find the value of some quantity whose determination depends on several other measured values, each of which is subject to its own sources of error. Recall the vinegar-titration example discussed above: replicate determinations yield a scatter of results because each of the many individual observations that feed into the final value (the volume of the vinegar sample, the judgment of the neutralization point, the volume and concentration of the sodium hydroxide solution) is subject to its own random error. Because such errors are random, they can occasionally cancel out, but for most trials we will not be so lucky — hence the scatter in the results.
Rules for estimating errors in calculated results
Suppose you measure the mass and volume of a sample, and are required to calculate its density by dividing one quantity by the other:
\[d = m / V. \nonumber\]
Both components of this quotient have uncertainties associated with them, and you wish to attach an uncertainty to the calculated density. The general problem of determining the uncertainty of a calculated result turns out to be rather more complicated than you might think, and will not be treated here. There are, however, some very simple rules that are sufficient for most practical purposes.
- Addition and subtraction, both numbers have uncertainties : The simplest method is to just add the absolute uncertainties.
- Multiplication or division, both numbers have uncertainties : Convert the absolute uncertainties into relative uncertainties, and add these. Or better, add their squares and take the square root of the sum.
- Multiplication or division by a pure number: Trivial case; multiply or divide the uncertainty by the pure number.
For example, applying the first rule to a subtraction:

\[(6.3 ± 0.05 \,cm) – (2.1 ± 0.05 \,cm) = 4.2 ± 0.10\,cm \nonumber\]
However, this tends to over-estimate the uncertainty by assuming the worst possible case in which the error in one of the quantities is at its maximum positive value, while that of the other quantity is at its maximum negative value.
Statistical theory informs us that a more realistic value for the uncertainty of a sum or difference is to add the squares of each absolute uncertainty, and then take the square root of this sum. Applying this to the above values, we have
\[\sqrt{(0.05)^2 + (0.05)^2} = 0.07 \nonumber\]
so the result is 4.2 ± 0.07 cm.
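The quadrature rule just described is one line of code; this sketch (an illustration, not part of the original text) reproduces the 0.05 cm example:

```python
# Combine absolute uncertainties as the square root of the sum of squares
# ("adding in quadrature"), as in the subtraction example above.
import math

def quadrature(*uncertainties):
    """Combined uncertainty for a sum or difference of measured quantities."""
    return math.sqrt(sum(u * u for u in uncertainties))

combined = quadrature(0.05, 0.05)
print(round(combined, 2))   # about 0.07, as stated above
```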
Estimate the absolute error in the density calculated by dividing (12.7 ± 0.05 g) by (10.0 ± 0.02 mL).
Solution
Relative uncertainty of the mass:
\[\dfrac{0.05}{12.7} = 0.0039 = 0.39\% \nonumber\]
Relative uncertainty of the volume:
\[\dfrac{0.02}{10.0} = 0.002 = 0.2\% \nonumber\]
Relative uncertainty of the density:
\[ \sqrt{ (0.39)^2 + (0.2)^2} = 0.44 \% \nonumber\]
Mass ÷ volume:
\[(12.7\, g) ÷ (10.0 \,mL) = 1.27 \,g \,mL^{–1} \nonumber \]
Absolute uncertainty of the density:
\[(± 0.0044) \times (1.27 \,g \,mL^{–1}) = ±0.006\, g\, mL^{–1} \nonumber \]
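The whole density calculation above can be sketched in a few lines of Python: relative uncertainties are combined in quadrature for the quotient, then converted back to an absolute uncertainty.

```python
# Propagating uncertainty through a quotient: d = m / V.
import math

m, dm = 12.7, 0.05    # mass (g) and its absolute uncertainty
V, dV = 10.0, 0.02    # volume (mL) and its absolute uncertainty

d = m / V                                    # 1.27 g/mL
rel = math.sqrt((dm / m)**2 + (dV / V)**2)   # about 0.0044, i.e. 0.44 %
abs_unc = rel * d                            # about 0.006 g/mL
print(f"d = {d} ± {abs_unc:.3f} g/mL")
```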
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/03%3A_Measuring_Matter/3.02%3A__The_Meaning_of_Measure",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "3.2: The Meaning of Measure",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/03%3A_Measuring_Matter/3.03%3A__Significant_Figures_and_Rounding_off | 3.3: Significant Figures and Rounding off
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.
- Give an example of a measurement whose number of significant digits is clearly too great, and explain why.
- State the purpose of rounding off, and describe the information that must be known to do it properly.
- Round off a number to a specified number of significant digits.
- Explain how to round off a number whose second-most-significant digit is 9.
- Carry out a simple calculation that involves two or more observed quantities, and express the result in the appropriate number of significant figures.
The numerical values we deal with in science (and in many other aspects of life) represent measurements whose values are never known exactly. Our pocket-calculators or computers don't know this; they treat the numbers we punch into them as "pure" mathematical entities, with the result that the operations of arithmetic frequently yield answers that are physically ridiculous even though mathematically correct. The purpose of this unit is to help you understand why this happens, and show you what to do about it.
Digits: significant and otherwise
- "The population of our city is 157,872."
- "The number of registered voters as of Jan 1 was 27,833. "
Consider the two statements shown above. Which of these would you be justified in dismissing immediately? Certainly not the second one, because it probably comes from a database which contains one record for each voter, so the number is found simply by counting the number of records.
The first statement cannot possibly be correct. Even if a city’s population could be defined in a precise way (Permanent residents? Warm bodies?), how can we account for the minute-by minute changes that occur as people are born and die, or move in and move away?
What is the difference between the two population numbers stated above? The first one expresses a quantity that cannot be known exactly – that is, it carries with it a degree of uncertainty. It is quite possible that the last census yielded precisely 157,872 records, and that this might be the “population of the city” for legal purposes, but it is surely not the “true” population. To better reflect this fact, one might list the population (in an atlas, for example) as 157,900 or even 158,000. These two quantities have been rounded off to four and three significant figures, respectively, and they have the following meanings:
- 157,900 (of which the first four digits are significant) implies that the population is believed to be within the range of about 157,850 to about 157,950. In other words, the population is 157,900 ± 50. The “plus-or-minus 50” appended to this number means that we consider the absolute uncertainty of the population measurement to be 50 – (–50) = 100. We can also say that the relative uncertainty is 100/157,900, which we can also express as 1 part in 1579, or 1/1579 = 0.000633, or about 0.06 percent.
- The value 158,000 implies that the population is likely between about 157,500 and 158,500, or 158,000 ± 500. The absolute uncertainty of 1000 translates into a relative uncertainty of 1000/158,000, or 1 part in 158, or about 0.6 percent.
Which of these two values we would report as “the population” will depend on the degree of confidence we have in the original census figure; if the census was completed last week, we might round to four significant digits, but if it was a year or so ago, rounding to three places might be a more prudent choice. In a case such as this, there is no really objective way of choosing between the two alternatives.
This illustrates an important point: the concept of significant digits has less to do with mathematics than with our confidence in a measurement. This confidence can often be expressed numerically (for example, the height of a liquid in a measuring tube can be read to ±0.05 cm), but when it cannot, as in our population example, we must depend on our personal experience and judgment.
So, what is a significant digit? According to the usual definition, it is all the numerals in a measured quantity (counting from the left) whose values are considered as known exactly, plus one more whose value could be one more or one less:
- In “157,900” (four significant digits), the leftmost three digits are known exactly, but the fourth digit, “9”, could well be “8” if the “true value” is within the implied range of 157,850 to 157,950.
- In “158,000” (three significant digits), the leftmost two digits are known exactly, while the third digit could be either “7” or “8” if the true value is within the implied range of 157,500 to 158,500.
Although rounding off always leads to the loss of numeric information, what we are getting rid of can be considered to be “numeric noise” that does not contribute to the quality of the measurement.
The purpose in rounding off is to avoid expressing a value to a greater degree of precision than is consistent with the uncertainty in the measurement.
Implied Uncertainty and Round-off error
If you know that a balance is accurate to within 0.1 mg, say, then the uncertainty in any measurement of mass carried out on this balance will be ±0.1 mg. Suppose, however, that you are simply told that an object has a length of 0.42 cm, with no indication of its precision. In this case, all you have to go on is the number of digits contained in the data. Thus the quantity “0.42 cm” is specified to 0.01 unit in 0.42, or one part in 42. The implied relative uncertainty in this figure is 1/42, or about 2%. The precision of any numeric answer calculated from this value is therefore limited to about the same amount.
It is important to understand that the number of significant digits in a value provides only a rough indication of its precision, and that information is lost when rounding off occurs.
Suppose, for example, that we measure the weight of an object as 3.28 g on a balance believed to be accurate to within ±0.05 gram. The resulting value of 3.28±.05 gram tells us that the true weight of the object could be anywhere between 3.23 g and 3.33 g. The absolute uncertainty here is 0.1 g (±0.05 g), and the relative uncertainty is 1 part in 32.8, or about 3 percent.
How many significant digits should there be in the reported measurement? Since only the leftmost “3” in “3.28” is certain, you would probably elect to round the value to 3.3 g. So far, so good. But what is someone else supposed to make of this figure when they see it in your report? The value “3.3 g” suggests an implied uncertainty of 3.3 ± 0.05 g, meaning that the true value is likely between 3.25 g and 3.35 g. This range is shifted 0.02 g above that associated with the original measurement, and so rounding off has introduced a bias of this amount into the result. Since this is less than half of the ±0.05 g uncertainty in the weighing, it is not a very serious matter in itself. However, if several values that were rounded in this way are combined in a calculation, the rounding-off errors could become significant.
The standard rules for rounding off are well known. Before we set them out, let us agree on what to call the various components of a numeric value.
- The most significant digit is the leftmost digit (not counting any leading zeros which function only as placeholders and are never significant digits.)
- If you are rounding off to n significant digits, then the least significant digit is the nth digit from the most significant digit. The least significant digit can be a zero.
- The first non-significant digit is the (n + 1)th digit.
- If the first non-significant digit is less than 5, then the least significant digit remains unchanged.
- If the first non-significant digit is greater than 5, the least significant digit is incremented by 1.
- If the first non-significant digit is 5, the least significant digit can either be incremented or left unchanged ( see below! )
- All non-significant digits are removed.
Students are sometimes told to increment the least significant digit by 1 if it is odd, and to leave it unchanged if it is even. One wonders if this reflects some idea that even numbers are somehow “better” than odd ones! (The ancient superstition is just the opposite, that only the odd numbers are "lucky".)
In fact, you could do it equally the other way around, incrementing only the even numbers. If you are only rounding a single number, it doesn’t really matter what you do. However, when you are rounding a series of numbers that will be used in a calculation, if you treated each first-nonsignificant 5 in the same way, you would be over- or underestimating the value of the rounded number, thus accumulating round-off error. Since there are equal numbers of even and odd digits, incrementing only the one kind will keep this kind of error from building up.
You could do just as well, of course, by flipping a coin!
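Round-half-to-even ("banker's rounding") is in fact the default behaviour of Python's built-in `round()`, so the unbiased convention described above comes for free:

```python
# Python's round() sends exact halves toward the even neighbour,
# so upward and downward roundings balance out over many values.
results = [round(x) for x in (0.5, 1.5, 2.5, 3.5)]
print(results)   # [0, 2, 2, 4]
```

(With non-integer targets, binary floating point means that apparently "exact" halves such as 2.675 are often not stored exactly, so the tie-breaking rule rarely comes into play.)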
| number to round / no. of significant digits | result | comment |
|---|---|---|
| 34.216 / 3 | 34.2 | First non-significant digit (1) is less than 5, so number is simply truncated. |
| 2.252 / 2 | 2.2 or 2.3 | First non-significant digit is 5, so least sig. digit can either remain unchanged or be incremented. |
| 39.99 / 3 | 40.0 | Crossing "decimal boundary", so all numbers change. |
| 85,381 / 3 | 85,400 | The two zeros are just placeholders. |
| 0.04597 / 3 | 0.0460 | The two leading zeros are not significant digits. |
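A small helper reproduces the numeric examples in the table above (the display of trailing zeros, as in 0.0460, is a formatting question that a bare float cannot capture). Note that because this sketch relies on `round()`, halfway cases follow round-half-to-even:

```python
# Round x to n significant digits by shifting the decimal position
# according to the magnitude of x.
import math

def round_sig(x, n):
    """Round x to n significant digits."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))   # position of most significant digit
    return round(x, n - 1 - exponent)

print(round_sig(34.216, 3))   # 34.2
print(round_sig(39.99, 3))    # 40.0
print(round_sig(85381, 3))    # 85400
print(round_sig(0.04597, 3))  # 0.046 (written 0.0460 to show three significant digits)
```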
Rounding Up The Nines
Suppose that an object is found to have a weight of 3.98 ± 0.05 g. This would place its true weight somewhere in the range of 3.93 g to 4.03 g. In judging how to round this number, you count the number of digits in “3.98” that are known exactly, and you find none! Since the “4” is the leftmost digit whose value is uncertain, this would imply that the result should be rounded to one significant figure and reported simply as 4 g. An alternative would be to bend the rule and round off to two significant digits, yielding 4.0 g. How can you decide what to do?
In a case such as this, you should look at the implied uncertainties in the two values, and compare them with the uncertainty associated with the original measurement.
| rounded value | implied max | implied min | absolute uncertainty | relative uncertainty |
|---|---|---|---|---|
| 3.98 g | 3.985 g | 3.975 g | ±0.005 g or 0.01 g | 1 in 400, or 0.25% |
| 4 g | 4.5 g | 3.5 g | ±0.5 g or 1 g | 1 in 4, or 25% |
| 4.0 g | 4.05 g | 3.95 g | ±0.05 g or 0.1 g | 1 in 40, or 2.5% |
Clearly, rounding off to two digits is the only reasonable course in this example.
The same kind of thing could happen if the original measurement was 9.98 ± 0.05 g. Again, the true value is believed to be in the range of 9.93 g to 10.03 g. The fact that no digit is certain here is an artifact of decimal notation. The absolute uncertainty in the observed value is 0.1 g, so the value itself is known to about 1 part in 100, or 1%. Rounding this value to three digits yields 10.0 g with an implied uncertainty of ±0.05 g, or 1 part in 100, consistent with the uncertainty in the observed value.
Observed values should be rounded off to the number of digits that most accurately conveys the uncertainty in the measurement.
- Usually, this means rounding off to the number of significant digits in the quantity; that is, the number of digits (counting from the left) that are known exactly, plus one more.
- When this cannot be applied (as in the example above, when addition or subtraction of the absolute uncertainty bridges a power of ten), then we round in such a way that the relative implied uncertainty in the result is as close as possible to that of the observed value.
Rounding off the Results of Calculations
When carrying out calculations that involve multiple steps, you should avoid doing any rounding until you obtain the final result. In science, we frequently need to carry out calculations on measured values. For example, you might use your pocket calculator to multiply two measured lengths to work out the area of a rectangle, obtaining a display of 1.57676 cm². Your calculator is of course correct as far as the pure numbers go, but you would be wrong to write down 1.57676 cm² as the answer. Two possible options for rounding off the calculator answer are shown below:
| Rounded Value | Precision |
|---|---|
| 1.58 | 1 part in 158, or 0.6% |
| 1.6 | 1 part in 16, or 6% |
It is clear that neither option is entirely satisfactory; rounding to 3 significant digits leaves the answer too precisely specified, whereas following the rule and rounding to 2 digits has the effect of throwing away some precision. In this case, it could be argued that rounding to three digits is justified because the implied relative uncertainty in the answer, 0.6%, is more consistent with those of the two factors.
The above example is intended to point out that the rounding-off rules, although convenient to apply, do not always yield the most desirable result. When in doubt, it is better to rely on relative implied uncertainties.
Addition and Subtraction
When adding or subtracting, we go by the number of decimal places rather than by the number of significant digits. Identify the quantity having the smallest number of decimal places, and use this number to set the number of decimal places in the answer.
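The decimal-places rule can be applied mechanically. This rough sketch assumes the measured values are available as written strings (necessary because trailing zeros matter and are lost in a bare float):

```python
# Sum of measured values, rounded to the fewest decimal places present.
values = ["13.678", "0.13", "2.4"]                   # hypothetical measurements
places = min(len(v.split(".")[1]) for v in values)   # "2.4" limits us to 1 place
total = sum(float(v) for v in values)                # 16.208
print(round(total, places))                          # reported as 16.2
```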
Multiplication and Division
The result must contain the same number of significant figures as in the value having the least number of significant figures.
Logarithms and Antilogarithms
Express the base-10 logarithm of a value using the same number of significant figures as is present in the normalized form of that value. Similarly, for antilogarithms (numbers expressed as powers of 10), use the same number of significant figures as are in that power.
What does "normalized" mean?

If a number is expressed in the form \(a \times 10^b\) ("scientific notation") with the additional restriction that the coefficient \(a\) is no less than 1 and less than 10, the number is in its normalized form.
More Rounding Examples
The following examples will illustrate the most common problems you are likely to encounter in rounding off the results of calculations. They deserve your careful study!
| calculator result | rounded | remarks |
|---|---|---|
| (image not shown) | 1.6 | Rounding to two significant figures yields an implied uncertainty of 1/16 or 6%, three times greater than that in the least-precisely known factor. This is a good illustration of how rounding can lead to the loss of information. |
| (image not shown) | 1.9E6 | The "3.1" factor is specified to 1 part in 31, or 3%. In the answer 1.9, the value is expressed to 1 part in 19, or 5%. These precisions are comparable, so the rounding-off rule has given us a reasonable result. |
| A certain book has a thickness of 117 mm; find the height of a stack of 24 identical books | 2810 mm | The “24” and the “1” are exact, so the only uncertain value is the thickness of each book, given to 3 significant digits. The trailing zero in the answer is only a placeholder. |
| (image not shown) | 10.4 | In addition or subtraction, look for the term having the smallest number of decimal places, and round off the answer to the same number of places. |
| (image not shown) | 23 cm | [see below] |
The last of the examples shown above represents the very common operation of converting one unit into another. There is a certain amount of ambiguity here; if we take "9 in" to mean a distance in the range 8.5 to 9.5 in, then the uncertainty is ±0.5 in, which is 1 part in 18, or about ±6%. The relative uncertainty in the answer must be the same, since all the values are multiplied by the same factor, 2.54 cm/in. In this case we are justified in writing the answer to two significant digits, yielding an implied uncertainty of 1 part in 23, or about ±4%; if we had used the answer "20 cm" (one significant digit), its implied uncertainty would be ±5 cm, or ±25%.
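The unit-conversion argument above can be sketched numerically: multiplying by the exact factor 2.54 cm/in leaves the relative uncertainty unchanged.

```python
# "9 in" read as 9 ± 0.5 in, converted to centimetres.
inches, unc_in = 9, 0.5
cm = inches * 2.54        # 22.86 cm; the factor 2.54 is exact by definition
rel = unc_in / inches     # 1/18, about 6% - unchanged by the conversion
unc_cm = rel * cm         # about 1.3 cm, so "23 cm" is a sensible report
print(f"{cm} cm ± {unc_cm:.2f} cm ({rel:.1%})")
```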
When the appropriate number of significant digits is in question, calculating the relative uncertainty can help you decide.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/03%3A_Measuring_Matter/3.03%3A__Significant_Figures_and_Rounding_off",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "3.3: Significant Figures and Rounding off",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/03%3A_Measuring_Matter/3.04%3A_Reliability_of_a_measurement | 3.4: Reliability of a measurement
- Explain the distinction between the mean value of a series of measurements and the population mean .
- What quantity besides the mean value do we need in order to evaluate the quality of a series of measurements?
- Explain the meaning and significance of the dispersion of the mean, and state what factor controls it.
- Explain the distinction between determinate and indeterminate error.
- Describe the purpose and process of using a blank and a control value when making a series of measurements. What principal assumption must be made in doing this?
In this day of pervasive media, we are continually being bombarded with data of all kinds— public opinion polls, advertising hype, government reports and statements by politicians. Very frequently, the purveyors of this information are hoping to “sell” us on a product, an idea, or a way of thinking about someone or something, and in doing so, they are all too often willing to take advantage of the average person’s inability to make informed judgments about the reliability of the data, especially when it is presented in a particular context (popularly known as “spin”.) In Science, we do not have this option: we collect data and make measurements in order to get closer to whatever “truth” we are seeking, but it's not really "science" until others can have confidence in the reliability of our measurements.
Attributes of a measurement
The kinds of measurements we will deal with here are those in which a number of separate observations are made on individual samples taken from a larger population .
Population , when used in a statistical context, does not necessarily refer to people, but rather to the set of all members of the group of objects under consideration.
For example, you might wish to determine the amount of nicotine in a manufacturing run of one million cigarettes. Because no two cigarettes are likely to be exactly identical, and even if they were, random error would cause each analysis to yield a different result, the best you can do would be to test a representative sample of, say, twenty to one hundred cigarettes. You take the average (mean) of these values, and are then faced with the need to estimate how closely this sample mean is likely to approximate the population mean. The latter is the “true value” we can never know; what we can do, however, is make a reasonable estimate of the likelihood that the sample mean does not differ from the population mean by more than a certain amount.
The attributes we can assign to an individual set of measurements of some quantity x within a population are listed below. It is important that you learn the meaning of these terms:
Number of measurements
This quantity is usually represented by n .
Mean
The mean value \(x_m\) (commonly known as the average), defined as

\[x_m = \frac{1}{n}\sum_{i=1}^{n} x_i\]
Median
The median value, which we will not deal with in this brief presentation, is essentially the one in the middle of the list resulting from writing the individual values in order of increasing or decreasing magnitude.
Range
The range is the difference between the largest and smallest value in the set.
Problem example:
Find the mean value and range of the set of measurements depicted here.
Solution:
This set contains 8 measurements. The range is (10.7 – 10.3) = 0.4, and the mean value is the sum of the eight plotted values divided by 8. [The figure showing the individual values is not reproduced here.]
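Since the plotted values themselves are lost, the computation can be illustrated with a stand-in data set. The individual values below are invented, chosen only to be consistent with the stated range of 0.4:

```python
import statistics

# Hypothetical stand-in for the eight plotted values (the original figure
# is not reproduced); the range matches the text's 0.4.
data = [10.3, 10.4, 10.4, 10.5, 10.5, 10.6, 10.6, 10.7]

n = len(data)                          # number of measurements
value_range = max(data) - min(data)    # largest minus smallest value
mean = statistics.mean(data)           # sample mean x_m

print(n, round(value_range, 1), round(mean, 1))   # 8 0.4 10.5
```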
More than one answer: dispersion of the mean
"Dispersion" means "spread-outedness". If you make a few measurements and average them, you get a certain value for the mean. But if you make another set of measurements, the mean of these will likely be different. The greater the difference between the means, the greater is their dispersion .
Suppose that instead of taking the five measurements as in the above example, we had made only two observations which, by chance, yielded the values that are highlighted here. This would result in a sample mean of 10.45. Of course, any number of other pairs of values could equally well have been observed, including multiple occurrences of any single value, such as 10.6.
Shown at the left are the results of two possible pairs of observations, each giving rise to its own sample mean. Assuming that all observations are subject only to random error, it is easy to see that successive pairs of experiments could yield many other sample means. The range of possible sample means is known as the dispersion of the mean.
It is clear that both of the two sample means cannot correspond to the population mean, whose value we are really trying to discover. In fact, it is quite likely that neither sample mean is the “correct” one in this sense. It is a fundamental principle of statistics, however, that the more observations we make in order to obtain a sample mean, the smaller will be the dispersion of the sample means that result from repeated sets of the same number of observations. (This is important; please read the preceding sentence at least three times to make sure you understand it!)
How the dispersion of the mean depends on the number of observations
The difference between the sample mean (blue) and the population mean (the "true value", green) is the error of the measurement. It is clear that this error diminishes as the number of observations is made larger.
What is stated above is just another way of saying what you probably already know: larger samples produce more reliable results. This is the same principle that tells us that flipping a coin 100 times will be more likely to yield a 50:50 ratio of heads to tails than will be found if only ten flips (observations) are made.
The reason for this inverse relation between the sample size and the dispersion of the mean is that if the factors giving rise to the different observed values are truly random, then the more samples we observe, the more likely will these errors cancel out. It turns out that if the errors are truly random, then as you plot the number of occurrences of each value, the results begin to trace out a very special kind of curve.
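The inverse relation between sample size and dispersion of the mean is easy to demonstrate with a quick simulation. The population mean and spread below are invented for illustration:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Assumed (invented) population: "true value" and spread of a single reading.
TRUE_MEAN, S = 10.5, 0.1

def dispersion_of_mean(n_obs, n_trials=2000):
    """Standard deviation of the sample means obtained by repeating an
    experiment of n_obs readings n_trials times."""
    means = [statistics.mean(random.gauss(TRUE_MEAN, S) for _ in range(n_obs))
             for _ in range(n_trials)]
    return statistics.stdev(means)

# More observations per experiment -> the sample means cluster more tightly
# around the population mean (shrinking roughly as S / sqrt(n)).
for n in (2, 8, 32):
    print(n, round(dispersion_of_mean(n), 3))
```

The printed dispersions fall by about a factor of two each time the number of observations is quadrupled, as the 1/√n dependence predicts.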
The significance of this is much greater than you might at first think, because the Gaussian curve has special mathematical properties that we can exploit, through the methods of statistics, to obtain some very useful information about the reliability of our data. This will be the major topic of the next lesson in this set.
For now, however, we need to establish some important principles regarding measurement error.
Systematic error
The scatter in measured results that we have been discussing arises from random variations in the myriad of events that affect the observed value, and over which the experimenter has no or only limited control. If we are trying to determine the properties of a collection of objects (nicotine content of cigarettes or lifetimes of lamp bulbs), then random variations between individual members of the population are an ever-present factor. This type of error is called random or indeterminate error , and it is the only kind we can deal with directly by means of statistics.
There is, however, another type of error that can afflict the measuring process. It is known as systematic or determinate error , and its effect is to shift an entire set of data points by a constant amount. Systematic error, unlike random error, is not apparent in the data itself, and must be explicitly looked for in the design of the experiment.
One common source of systematic error is failure to use a reliable measuring scale, or to misread a scale. For example, you might be measuring the length of an object with a ruler whose left end is worn, or you could misread the volume of liquid in a burette by looking at the top of the meniscus rather than at its bottom, or not having your eye level with the object being viewed against the scale, thus introducing parallax error.
Blanks and controls
Many kinds of measurements are made by devices that produce a response of some kind (often an electric current) that is directly proportional to the quantity being measured. For example, you might determine the amount of dissolved iron in a solution by adding a reagent that reacts with the iron to give a red color, which you measure by observing the intensity of green light that passes through a fixed thickness of the solution. In a case such as this, it is common practice to make two additional kinds of measurements:
One measurement is done on a solution as similar to the unknowns as possible except that it contains no iron at all. This sample is called the blank . You adjust a control on the photometer to set its reading to zero when examining the blank.
The other measurement is made on a sample containing a known concentration of iron; this is usually called the control . You adjust the sensitivity of the photometer to produce a reading of some arbitrary value (50, say) with the control solution. Assuming the photometer reading is directly proportional to the concentration of iron in the sample (this might also have to be checked, in which case a calibration curve must be constructed), the photometer reading can then be converted into iron concentration by simple proportion.
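The blank-and-control logic amounts to a simple proportion. In this sketch all readings and concentrations are invented; the text zeroes the meter on the blank, whereas the sketch subtracts the blank reading explicitly, which amounts to the same correction:

```python
# Sketch of the blank/control procedure (all numbers are invented).
blank_reading   = 3.0    # photometer response with no iron present
control_reading = 50.0   # response for a control of known concentration
control_conc    = 2.0    # mg/L of iron in the control solution

def iron_concentration(sample_reading):
    """Convert a photometer reading to mg/L iron by simple proportion,
    after subtracting the blank from both sample and control."""
    corrected_sample  = sample_reading - blank_reading
    corrected_control = control_reading - blank_reading
    return control_conc * corrected_sample / corrected_control

print(iron_concentration(26.5))  # reading halfway to the control -> 1.0
```

This assumes the reading is directly proportional to concentration; as the text notes, that assumption may itself need to be checked with a calibration curve.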
The standard deviation
Consider the two pairs of observations depicted here:
Notice that the sample means happen to have the same value of “40” (pure luck!), but the difference in the precisions of the two measurements makes it obvious that the set shown on the right is more reliable. How can we express this fact in a succinct way? We might say that one experiment yields a value of 40 ±20, and the other 40 ±5. Although this information might be useful for some purposes, it is unable to provide an answer to such questions as "how likely would another independent set of measurements yield a mean value within a certain range of values?" The answer to this question is perhaps the most meaningful way of assessing the "quality" or reliability of experimental data, but obtaining such an answer requires that we employ some formal statistics.
Deviations from the mean
We begin by looking at the differences between the sample mean and the individual data values used to compute the mean. These differences are known as deviations from the mean, \(x_i – x_m\). These values are depicted below; note that the only difference from the plots above is placement of the mean value at 0 on the horizontal axis.
The variance and its square root
Next, we need to find the average of these deviations. Taking a simple average, however, will not distinguish between these two particular sets of data, because both deviations average out to zero. We therefore take the average of the squares of the deviations (squaring makes the signs of the deviations disappear so they cannot cancel out). Also, we compute the average by dividing by one less than the number of measurements, that is, by n – 1 rather than by n. The result, usually denoted by S², is known as the variance:

\[S^2 = \frac{\sum (x_i - x_m)^2}{n-1}\]

Finally, we take the square root of the variance to obtain the standard deviation S:

\[S = \sqrt{\frac{\sum (x_i - x_m)^2}{n-1}}\]
Problem example: Calculate the variance and standard deviation for each of the two data sets shown above.
Solution: Substitution into the two formulas yields the following results:
| data values | 20, 60 | 35, 45 |
| sample mean | 40 | 40 |
| variance S² | 800 | 50 |
| standard deviation S | 28 | 7.1 |
Comment: Notice how the contrasting values of S reflect the difference in the precisions of the two data sets— something that is entirely lost if only the two means are considered.
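As a check on the arithmetic, here is a minimal sketch of the same computation, using the two data sets from the table above:

```python
def variance_and_stdev(data):
    """Sample variance S^2 (divide by n - 1) and standard deviation S."""
    n = len(data)
    mean = sum(data) / n
    s2 = sum((x - mean) ** 2 for x in data) / (n - 1)
    return s2, s2 ** 0.5

for data in ([20, 60], [35, 45]):
    s2, s = variance_and_stdev(data)
    print(data, s2, round(s, 1))
# [20, 60] 800.0 28.3
# [35, 45] 50.0 7.1
```

Note that Python's `statistics.stdev` applies the same n – 1 divisor and would give identical results.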
Now that we have developed the very important concept of standard deviation, we can employ it in the next section to answer practical questions about how to interpret the results of a measurement.

(Source: "3.4: Reliability of a measurement," Chem1 by Stephen Lower; CC BY 3.0.)

3.5: Drawing Conclusions from Data
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the italicized terms in the context of this topic.
- What is the deviation from the population mean , why can we not know its value, and why is it neverthless a fundamentally important quantity in statistics?
- Sketch out a Gaussian curve, and label the two axes, showing on the x-axis the deviation from the mean in terms of standard deviations. Shade in the area corresponding to the 95.4-percent confidence level.
- State the meaning of a confidence interval and how it relates to the standard deviation on a plot of the Gaussian curve.
- What is the distinction between a confidence interval and the confidence level ?
- Describe the circumstances when a Student's t statistic is useful.
- Describe some of the major problems that can cause statistics to be erroneous or misleading.
OK, you have collected your data, so what does it mean? This question commonly arises when measurements made on different samples yield different values. How well do measurements of mercury concentrations in ten cans of tuna reflect the composition of the factory's entire output? Why can't you just use the average of these measurements? How much better would the results of 100 such tests be? This final lesson on measurement will examine these questions and introduce you to some of the methods of dealing with data. This stuff is important not only for scientists, but also for any intelligent citizen who wishes to independently evaluate the flood of numbers served up by advertisers, politicians, "experts", and yes— by other scientists.
The Standard Deviation
Each of these sets has the same mean value of 40, but the "quality" of the set shown on the right is greater because the data points are less scattered; the precision of the result is greater.
The quantitative measure of this precision is given by the standard deviation

\[S = \sqrt{\frac{\sum (x_i - x_m)^2}{n-1}}\]
whose value works out to 28 and 7 for the two sets illustrated above. A data set containing only two values is far too small for a proper statistical analysis— you would not want to judge the average mercury content of canned tuna on the basis of only two samples, for instance. Suppose, then, for purposes of illustration, that we have accumulated many more data points but the standard deviations of the two sets remain at 28 and 7 as before. What conclusions can we draw about how close the mean value of 40 is likely to come to the "true value" (the population mean μ) in each case?
Although we cannot ordinarily know the value of μ, we can assign to each data point \(x_i\) a quantity \(x_i – \mu\), which we call the deviation from the population mean, an index of how far each data point differs from the elusive “true value”. We now divide this deviation from the mean by the standard deviation of the entire data set:

\[z = \frac{x_i - \mu}{S}\]
If we plot the values of z that correspond to each data point, we obtain the following curves for the two data sets we are using as examples:
Bear in mind that we cannot actually plot these curves from our experimental data points because we don't know the value of the population mean μ (if we did, there would be no need to make the measurements in the first place!), and we are unlikely to have enough data points to obtain a smooth curve anyway.
We won’t attempt to prove it here, but the mathematical properties of a Gaussian curve are such that its shape depends on the scale of units along the x-axis and on the standard deviation of the corresponding data set. In other words, if we know the standard deviation of a data set, we can construct a plot of z that shows how the measurements would be distributed
- if the number of observations is very large
- if the different values are due only to random error
An important corollary to the second condition is that if the data points do not approximate the shape of this curve, then it is likely that the sample is not representative, or that some complicating factor is involved. The latter often happens when a teacher plots a set of student exam scores, and gets a curve having two peaks instead of one— representing perhaps the two sub-populations of students who devote their time to studying and partying.
This minor gem was devised by the statistician W. J. Youden and appears in The Visual Display of Quantitative Information, an engaging book by Edward R. Tufte (Graphics Press, Cheshire CT, 1983).
Confidence intervals
Clearly, the sharper and more narrow the standard error curve for a set of measurements, the more likely it will be that any single observed value approximates the true value we are trying to find. Because the shape of the curve is determined by S, we can make quantitative predictions about the reliability of our data from its standard deviation. In particular, if we plot z as a function of the number of standard deviations from the mean (rather than as the number of absolute deviations from the mean as was done above), the shape of the curve depends only on the value of S. That is, the dependence on the particular units of measurement is removed.
Moreover, it can be shown that if all measurement error is truly random, 68.3 percent (about two-thirds) of the data points will fall within one standard deviation of the population mean, while 95.4 percent of the observations will differ from the population mean by no more than two standard deviations. This is extremely important, because it allows us to express the reliability of a measurement quantitatively, in terms of confidence intervals .
You might occasionally see or hear a news report stating that the results of a certain public opinion poll are considered reliable to within, say, 5%, “nineteen times out of twenty”. This is just another way of saying that the confidence interval in the poll is 95%, the standard deviation is about 2.5% of the stated result, and that there is no more than a 5% chance that an identical poll carried out on another set of randomly-selected individuals from the same population would yield a different result. This is as close to “the truth” as we can get in scientific measurements.
Note carefully: Confidence interval (CI) and confidence level (CL) are not the same!
A given CI (denoted by the shaded range of 18-33 ppm in the diagram) is always defined in relation to some particular CL; specifying the first without the second is meaningless. If the CI illustrated here is at the 90% CL, then a CI for a higher CL would be wider, while that for a smaller CL would encompass a smaller range of values.
How the confidence level depends on the number of measurements
The more measurements we make, the more likely will their average value approximate the true value. The width of the confidence interval (expressed in the actual units of measurement) is directly proportional to the standard deviation S and to the value of z (both of these terms are defined above). The confidence interval of a single measurement in terms of these quantities and of the observed sample mean is given by

\[CI = x_m \pm zS\]

If n replicate measurements are made, the confidence interval becomes smaller:

\[CI = x_m \pm \frac{zS}{\sqrt{n}}\]
This relation is often used “in reverse”, that is, to determine how many replicate measurements n must be carried out in order to obtain a value within a desired confidence interval.
As we pointed out above, any relation involving the quantity z (which the standard error curve is a plot of) is of limited use unless we have some idea of the value of the population mean μ. If we make a very large number of measurements (100 to 1000, for example), then we can expect that our observed sample mean approximates μ quite closely, so there is no difficulty.
The shaded area in each plot shows the fraction of measurements that fall within two standard deviations (2 S ) of the "true" value (that is, the population mean μ). It is evident that the width of the confidence interval diminishes as the number of measurements becomes greater. This is basically a result of the fact that relatively large random errors tend to be less common than smaller ones, and are therefore less likely to cancel out if only a small number of measurements is made.
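The shrinking of the interval can be tabulated quickly. The half-width of the confidence interval goes as zS/√n; the values of S and z below are illustrative (S from the 35, 45 example earlier, z ≈ 2 for roughly the 95% confidence level):

```python
import math

S = 7.1   # standard deviation of a single measurement (illustrative)
z = 2.0   # about two standard deviations ~ the 95.4% confidence level

# Half-width of the confidence interval, z*S/sqrt(n), for growing n:
for n in (1, 4, 16, 100):
    half_width = z * S / math.sqrt(n)
    print(n, round(half_width, 2))
# 1 14.2
# 4 7.1
# 16 3.55
# 100 1.42
```

Read "in reverse", the same relation tells you how many replicates are needed to reach a desired interval: quadrupling n halves the half-width.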
Dealing with small data sets
OK, so larger data sets are better than small ones. But what if it is simply not practical to measure the mercury content of 10,000 cans of tuna? Or if you were carrying out a forensic examination of a tiny chip of paint, you might have only enough sample (or enough time) to do two or three replicate analyses. There are two common ways of dealing with such a difficulty.
One way of getting around this is to use pooled data ; that is, to rely on similar prior determinations, carried out on other comparable samples, to arrive at a standard deviation that is representative of this particular type of determination. The other common way of dealing with small numbers of replicate measurements is to look up, in a table, a quantity t , whose value depends on the number of measurements and on the desired confidence level. For example, for a confidence level of 95%, t would be 4.3 for three samples and 2.8 for five. The magnitude of the confidence interval is then given by
\[CI = x_m \pm \frac{tS}{\sqrt{n}}\]
This procedure is not black magic, but is based on a careful analysis of the way that the Gaussian curve becomes distorted as the number of samples diminishes. Why was the t -test invented in a brewery? And why does it have such a funny name?
Using statistical tests to make decisions
Once we have obtained enough information on a given sample to evaluate parameters such as means and standard deviations, we are often faced with the necessity of comparing that sample (or the population it represents) with another sample or with some kind of a standard. The following sections paraphrase some of the typical questions that can be decided by statistical tests based on the quantities we have defined above. It is important to understand, however, that because we are treating the questions statistically, we can only answer them in terms of statistics— that is, to a given confidence level.
The usual approach is to begin by assuming that the answer to any of the questions given below is “no” (this is called the null hypothesis ), and then use the appropriate statistical test to judge the validity of this hypothesis to the desired confidence level. Because our purpose here is to show you what can be done rather than how to do it, the following sections do not present formulas or example calculations, which are covered in most textbooks on analytical chemistry. You should concentrate here on trying to understand why questions of this kind are of importance.
“Should I throw this measurement out?”
That is, is it likely that something other than ordinary indeterminate error is responsible for this suspiciously different result? Anyone who collects data of almost any kind will occasionally be faced with this question. Very often, ordinary common sense will be sufficient, but if you need some help, two statistical tests, called the Q test and the T test, are widely employed for this purpose.
We won’t describe them here, but both tests involve computing a quantity ( Q or T ) for a particular result by means of a simple formula, and then consulting a table to determine the likelihood that the value being questioned is a member of the population represented by the other values in the data set.
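As a concrete, hypothetical illustration of how such an outlier test works, here is a sketch of Dixon's Q test. The 95% critical values are quoted from a typical table and should be checked against a real one before use; the data set is invented:

```python
# Sketch of Dixon's Q test for a suspected outlier.
# Critical values (95% confidence) for n = 3..7, assumed from a typical table.
Q_CRIT_95 = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625, 7: 0.568}

def q_test(data, suspect):
    """Return (Q, reject?) for the suspect value at 95% confidence.
    Q = (gap to nearest neighbour) / (range of the whole set)."""
    ordered = sorted(data)
    gap = min(abs(suspect - x) for x in ordered if x != suspect)
    q = gap / (ordered[-1] - ordered[0])
    return q, q > Q_CRIT_95[len(data)]

data = [10.3, 10.4, 10.5, 10.6, 11.6]   # 11.6 looks suspiciously high
q, reject = q_test(data, 11.6)
print(round(q, 2), reject)   # 0.77 True
```

Here Q = 1.0/1.3 ≈ 0.77 exceeds the critical value of 0.710 for five points, so the suspect value may be discarded at the 95% confidence level.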
“Does this method yield reliable results?"
This must always be asked when trying a new method for the first time; it is essentially a matter of testing for determinate error. The answer can only be had by running the same procedure on a sample whose composition is known. The deviation of the mean value of the “known” x m from its true value μ is used to compute a Student's t for the desired confidence level. You then apply this value of t to the measurements on your unknown samples.
“Are these two samples identical?”
You wish to compare the means x m1 and x m2 from two sets of measurements in order to assess whether their difference could be due to indeterminate error. Suppose, for example, that you are comparing the percent of chromium in a sample of paint removed from a car's fender with a sample found on the clothing of a hit-and-run victim. You run replicate analyses on both samples, and obtain different mean values, but the confidence intervals overlap. What are the chances that the two samples are in fact identical, and that the difference in the means is due solely to indeterminate error?
A fairly simple formula, using Student’s t , the standard deviation, and the numbers of replicate measurements made on both samples, provides an answer to this question, but only to a specified confidence level. If this is a forensic investigation that you will be presenting in court, be prepared to have your testimony demolished by the opposing lawyer if the CL is less than 99%.
“What is the smallest quantity I can detect?”
This is just a variant of the preceding question. Estimation of the detection limit of a substance by a given method begins with a set of measurements on a blank, that is, a sample in which the substance in question is assumed to be absent, but is otherwise as similar as possible to the actual samples to be tested. We then ask if any difference between the mean of the blank measurements and that of the sample replicates can be attributed to indeterminate error at a given confidence level.
For example, a question that arises at every Olympic Games is: what is the minimum level of a drug metabolite that can be detected in an athlete's urine? Many sensitive methods are subject to random errors that can lead to a non-zero result even in a sample known to be entirely free of what is being tested for. So how far from "zero" must the mean value of a test be in order to be certain that the drug was present in a particular sample? A similar question comes up very frequently in environmental pollution studies.
How to Lie with Statistics
How to Lie with Statistics is the title of an amusing book by Darrell Huff (Norton, 1954), with illustrations by Irving Geis. [The illustrations reproduced on the original page are not shown here.]
Throwing away “wrong” answers.
It occasionally happens that a few data values are so greatly separated from the rest that they cannot reasonably be regarded as representative. If these “outliers” clearly fall outside the range of reasonable statistical error, they can usually be disregarded as likely due to instrumental malfunctions or external interferences such as mechanical jolts or electrical fluctuations.
Some care must be exercised when data are thrown away, however; there have been a number of well-documented cases in which investigators who had certain anticipations about the outcome of their experiments were able to bring these expectations about by removing conflicting results from the data set on the grounds that these particular data "had to be wrong".
Beware of too-small samples
The probability of ten successive flips of a fair coin yielding exactly 8 heads is given by

\[P = \binom{10}{8}\left(\frac{1}{2}\right)^{10} = \frac{45}{1024} \approx 0.044\]

... indicating that it is not very likely, but can be expected to happen about four times in a hundred runs. But there is no law of nature that says it cannot happen on your first run, so it would clearly be foolish to cry “Eureka” and stop the experiment after one, or even a few, tries. Or to forget about the runs that did not turn up 8 heads!
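The binomial arithmetic can be checked directly:

```python
import math

# Probability of exactly 8 heads in 10 flips of a fair coin:
ways = math.comb(10, 8)          # number of orderings with 8 heads
p = ways * (0.5 ** 10)           # each ordering has probability (1/2)^10

print(ways, round(p, 4))   # 45 0.0439
```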
Perils of dubious "correlations"
The fact that two sets of statistics show the same trend does not prove they are connected, even in cases where a logical correlation could be argued. Thus it has been suggested, on the basis of a pair of such plots, that "In relative terms, the global temperature seems to be tracking the average global GDP quite nicely over the last 70 years." [The plots are not reproduced here.]
The difference between confidence levels of 90% and 95% may not seem like much, but getting it wrong can transform science into junk science, a not-unknown practice by special interests intent on manipulating science to influence public policy; see the excellent 2008 book by David Michaels, Doubt is Their Product: How Industry's Assault on Science Threatens Your Health.

(Source: "3.5: Drawing Conclusions from Data," Chem1 by Stephen Lower; CC BY 3.0.)

4: The Basics of Chemistry
The chapters in this unit are absolutely essential for anyone embarking on the serious study of Chemistry. The material covered here will be needed in virtually every topic you will encounter in the remainder of your first-year course, as well as in subsequent Chemistry courses — so you might as well master it now!
- 4.1: Atoms, Elements, and the Nucleus
  The parallel concepts of the element and the atom constitute the very foundations of chemical science. The concept of the element is a macroscopic one that relates to the world that we can observe with our senses. The atom is the microscopic realization of this concept; that is, it is the actual physical particle that is unique to each chemical element. Their very small size has long prevented atoms from being observable by direct means, so their existence was not universally accepted until the late 19th century.
- 4.2: Avogadro's Number and the Mole
  The chemical changes we observe always involve discrete numbers of atoms that rearrange themselves into new configurations. These numbers are far too large in magnitude for us to count, but they are still numbers, and we need to have a way to deal with them. We also need a bridge between these numbers, which we are unable to measure directly, and the weights of substances, which we do measure and observe. The mole concept provides this bridge, and is key to all of quantitative chemistry.
- 4.3: Formulas and Their Meaning
  At the heart of chemistry are substances (elements or compounds) which have a definite composition, expressed by a chemical formula. In this unit you will learn how to write and interpret chemical formulas both in terms of moles and masses, and to go in the reverse direction, in which we use experimental information about the composition of a compound to work out a formula.
- 4.4: Chemical Equations and Stoichiometry
  A chemical equation expresses the net change in composition associated with a chemical reaction by showing the number of moles of reactants and products. But because each component has its own molar mass, equations also implicitly define the way in which the masses of products and reactants are related. In this unit we will concentrate on understanding and making use of these mass relations.
- 4.5: Introduction to Chemical Nomenclature
  Chemical nomenclature is far too big a topic to treat comprehensively, and it would be a useless diversion to attempt to do so in a beginning course; most chemistry students pick up chemical names and the rules governing them as they go along. But we can hardly talk about chemistry without mentioning some chemical substances, all of which do have names, and often, more than one!
- 4.6: Significant Figures and Rounding
  The numerical values we deal with in science (and in many other aspects of life) represent measurements whose values are never known exactly. Our pocket calculators or computers don't know this; they treat the numbers we punch into them as "pure" mathematical entities, with the result that the operations of arithmetic frequently yield answers that are physically ridiculous even though mathematically correct.
Thumbnail: Spinning Buckminsterfullerene (\(\ce{C60}\)). (CC BY-SA 3.0 Unported; Sponk).

(Source: "4: The Basics of Chemistry," Chem1 by Stephen Lower; CC BY 3.0.)

4.1: Atoms, Elements, and the Nucleus
Make sure you thoroughly understand the following essential ideas:
- Give a chemical definition of element, and comment on the distinction between the terms atom and element.
- You should know the names and symbols of the more common elements, including those whose symbols are derived from their Latin names.
- Describe, in your own words, the Laws of Chemical Change: mass conservation, constant composition, and multiple proportions.
- Explain how these laws follow from Dalton's atomic theory.
- Describe Rutherford's alpha-ray scattering experiment and how it led to the present model of the atom.
- Define atomic number and mass number, and explain the relation between them.
- Define isotope and nuclide, and write the symbol for a nuclide of a given element with a given number of neutrons.
- Explain the purpose of a mass spectrometer and its general principle of operation.
- Describe the atomic weight scale.
- Find the molecular weight or formula weight from a chemical formula.
- Define the unified atomic mass unit, and write out the masses of the proton, neutron, and electron.
The parallel concepts of the element and the atom constitute the very foundations of chemical science. As such, the concept of the element is a macroscopic one that relates to the world that we can observe with our senses. The atom is the microscopic realization of this concept; that is, it is the actual physical particle that is unique to each chemical element. Their very small size has long prevented atoms from being observable by direct means, so their existence was not universally accepted until the late 19 th Century. The fact that we still hear the mention of the " atomic theory of matter" should not imply that there is now any doubt about the existence of atoms. Few theories in the history of science have been as thoroughly validated and are as well understood.
Although the word atom usually refers to a specific kind of particle (an "atom of magnesium", for example), our everyday use of element tends to be more general, referring not only to a substance composed of a particular type of atom ("bromine is one of the few elements that are liquids at room temperature"), but also to atoms in a collective sense ("magnesium is one of the elements having two electrons in its outer shell").
The underlying concept of atoms as the basic building blocks of matter has been around for a long time. As early as 600 BCE, the Gujarati (Indian) philosopher Acharya Kanad wrote that " Every object of creation is made of atoms which in turn connect with each other to form molecules ". A couple of centuries later in 460 BCE, the Greek philosopher Democritus reasoned that if you keep breaking a piece of matter into smaller and smaller fragments, there will be some point at which the pieces cannot be made any smaller. He called these "basic matter particles"— in other words, atoms. But this was just philosophy ; it would not become science until 1800 when John Dalton showed how the atomic concept followed naturally from the results of quantitative experiments based on weight measurements.
Elements
The element is the fundamental unit of chemical identity. The concept of the element is very ancient. It was developed in many different civilizations in an attempt to rationalize the variety of the world and to understand the nature of change, such as the change that occurs when a piece of wood rots, or is burnt to produce charcoal or ash. Most well known to us are the four elements "earth, air, fire and water" that were popularized by Greek philosophers (principally Empedocles and Aristotle) in the period 500-400 BCE.
To these, Vedic (Hindu) philosophers of India added space , while the ancient Chinese concept of Wu Xing regarded earth, metal, wood, fire and water as fundamental. These basic elements were not generally considered to exist as the actual materials we know as earth, water, etc., but rather to represent the "principles" or essences that the elements conveyed to the various kinds of matter we encounter in the world.
Eventually, practical experience (largely connected with the extraction of metals from ores) and the beginnings of scientific experimentation in the 18 th Century led to our modern concept of the chemical element. An element is a substance : the simplest form to which any other chemical substance can be reduced through appropriate thermal or chemical treatment. "Simplest", in the context of experimentation at the time, was defined in terms of weight; cinnabar (mercuric sulfide) can be broken down into two substances, mercury and sulfur, which themselves cannot be reduced to any lighter forms.
Although Lavoisier got many of these right, he did manage to include a few things that do not quite fit into our modern idea of what constitutes a chemical element. There are two such mistakes in the top section of the table that you should be able to identify even if your French is less than tip-top— can you find them?
Lavoisier's other misassignment of the elements in the bottom section was not really his fault. Chalk, magnesia, barytes, alumina and silica are highly stable oxygen-containing compounds; the high temperatures required to break them down could not be achieved in Lavoisier's time (magnesia is what fire brick is made of). The proper classification of these substances was delayed until further experimentation revealed their true nature. Ten of the chemical elements have been known since ancient times and five more were discovered through the 17th Century.
Some frequently-asked questions about elements
- How many elements are there? Ninety-two elements have been found in nature. Around 25 more have been made artificially, but all of these decay into lighter elements, with some of them disappearing in minutes or even seconds.
- Where do the elements come from? The present belief is that helium and a few other very light elements were formed within about three minutes of the "big bang", and that the next 23 elements (up through iron) are formed mostly by nuclear fusion processes within stars, in which lighter nuclei combine into successively heavier elements. Elements heavier than iron cannot be formed in this way, and are produced only during the catastrophic collapse of massive stars (supernovae explosions).
- How do the elements vary in abundance? Quite markedly, and very differently in different bodies in the cosmos. Most of the atoms in the universe still consist of hydrogen, with helium a distant second. On Earth, oxygen, silicon, and aluminum are most abundant. These profiles serve as useful guides for constructing models of the formation of the earth and other planetary bodies.
- Where do the element symbols come from? The system of element symbols we use today was established by the Swedish chemist Jöns Jacob Berzelius in 1814. Prior to that time, graphical alchemical symbols were used; these were later modified and popularized by John Dalton. Fortunately for English speakers, the symbols of most of the elements serve as mnemonics for their names, but this is not true of the seven metals known from antiquity, whose symbols are derived from their Latin names. The other exception is tungsten (a name derived from Swedish), whose symbol W reflects the German name Wolfram, which is more widely used.
How are the elements organized?
Two general organizing principles developed in the 19th Century. One was based on the increasing relative weights (atomic weights) of the elements, yielding a list that begins this way:

H He Li Be B C N O F Ne Na Mg Al Si P S Cl K Ar Ca...

(Note that ordering strictly by weight places potassium before argon, because the atomic weight of argon slightly exceeds that of potassium.) The other principle, the periodic recurrence of similar chemical properties along this list, eventually gave rise to the periodic table of the elements.
Atoms become real
Throughout most of history the idea that matter is composed of minute particles had languished as a philosophical abstraction known as atomism, and no clear relation between these "atoms" and the chemical "elements" had been established. This began to change in the early 1800's when the development of balances that permitted reasonably precise measurements of the weight changes associated with chemical reactions ushered in a new and fruitful era of experimental chemistry. This resulted in the recognition of several laws of chemical change that laid the groundwork for the atomic theory of matter.
Laws of Chemical Change
Recall that a "law", in the context of science, is just a relationship, discovered through experimentation, that is sufficiently well established to be regarded as beyond question for most practical purposes. Because it is the nature of scientists to question the "unquestionable", it occasionally happens that exceptions do arise, in which case the law must undergo appropriate modification.
Conservation of mass-energy is usually considered the most fundamental of the laws of nature. It is also a good example of a law that had to be modified; it was known simply as Conservation of Mass until Einstein showed that energy and mass are interchangeable. However, the older term is perfectly acceptable within the field of ordinary chemistry, in which energy changes are too small to have a measurable effect on mass relations. Within the context of chemistry, conservation of mass can be thought of as "conservation of atoms": chemical change just shuffles them around into new arrangements.
Mass conservation had special significance in understanding chemical changes involving gases, which were for some time not always regarded as real matter at all. (Owing to their very small densities, carrying out actual weight measurements on gases is quite difficult to do, and was far beyond the capabilities of the early experimenters.) Thus when magnesium metal is burned in air, the weight of the solid product always exceeds that of the original metal, implying that the process is one in which the metal combines with what might have been thought to be a "weightless" component of the air, which we now know to be oxygen.
More importantly, this experimental result tells us something very important about the mass of the oxygen atom relative to that of the magnesium atom.
The Law of Definite Proportions, also known as the law of constant composition, states that the proportions by weight of the elements present in any pure substance are always the same. This enables us to generalize the relationship we illustrated above.
How many kilograms of metallic magnesium could theoretically be obtained by decomposing 0.400 kg of magnesium oxide into its elements?
Solution
The mass ratio of Mg to MgO is 1:1.66 (each gram of magnesium yields 1.66 g of the oxide), so the mass fraction of Mg in the compound is

\[\dfrac{1}{1.66} = 0.602 \nonumber\]

so 0.400 kg of the oxide contains

\[(0.400\; kg) \times 0.602 = 0.241\; \text{kg of Mg} \nonumber\]
The fact that we are concerned with the reverse of the reaction cited above is irrelevant.
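This arithmetic is simple enough to check in a few lines of Python (a sketch; the 1.66 figure is the mass of oxide formed per gram of magnesium, as used in the example above):

```python
# Law of definite proportions: MgO always contains the same mass fraction of Mg.
# 1.00 g of Mg always yields 1.66 g of MgO (figure from the example above).
mg_fraction = 1.00 / 1.66          # mass fraction of Mg in MgO, about 0.602

oxide_mass_kg = 0.400              # mass of MgO to be decomposed
mg_mass_kg = oxide_mass_kg * mg_fraction

print(round(mg_fraction, 3))       # 0.602
print(round(mg_mass_kg, 3))        # 0.241 (kg of Mg)
```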
The Law of Multiple Proportions addresses the fact that many pairs of elements can combine to form more than one compound. In such cases, this law states that the weights of one element that combine with a fixed weight of the other are integer multiples of one another. It is easy to say this, but please make sure that you understand how it works. Nitrogen forms a very large number of oxides, five of which are shown here.
- Line 1 shows the ratio of the relative weights of the two elements in each compound. These ratios were calculated by simply taking the molar mass of each element and multiplying by the number of atoms of that element per mole of the compound. Thus for NO 2 , we have (1 × 14) : (2 × 16) = 14:32. (These numbers were not known in the early days of chemistry because atomic weights (i.e., molar masses) of most elements were not reliably known.)
- The numbers in Line 2 are just the mass ratios of O:N, found by dividing the corresponding ratios in Line 1. But someone who depends solely on experiment would work these out by finding the mass of O that combines with unit mass (1 g) of nitrogen.
- Line 3 is obtained by dividing the figures in the previous line by the smallest O:N ratio, which is the one for N 2 O. Note that, just as the law of multiple proportions says, the weights of oxygen that combine with unit weight of nitrogen work out to small integers.
- Of course we could just as easily have illustrated the law by considering the mass of nitrogen that combines with one gram of oxygen; it works both ways!
Nitrogen and hydrogen form many compounds, some of which involve other elements as well. The mass of hydrogen that combines with 1.00 g of nitrogen to form three of these compounds are: urea, 0.1428 g; ammonia, 0.0714 g; ammonium chloride, 0.2857 g. Show that this data is consistent with the Law of Multiple Proportions.
Solution
The "fixed weight" we are considering here is the nitrogen. Inspection of the numbers above shows that ammonia contains the smallest weight ratio, H:N = 0.0714, while the weight ratio of H:N in urea is twice this number, and that in ammonium chloride is four times 0.0714. Thus the H:N ratios themselves stand in the ratio of 2:1:4, respectively, and the Law is confirmed.
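The same check can be sketched in a few lines of Python, using the H:N mass ratios from the problem:

```python
# Law of multiple proportions: the H:N mass ratios in these nitrogen compounds
# should stand in small whole-number ratios. Data from the example above.
h_per_g_n = {"ammonia": 0.0714, "urea": 0.1428, "ammonium chloride": 0.2857}

smallest = min(h_per_g_n.values())
ratios = {name: round(r / smallest, 2) for name, r in h_per_g_n.items()}
print(ratios)   # ammonia -> 1.0, urea -> 2.0, ammonium chloride -> 4.0
```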
Dalton's Interpretation Established Atomic Theory
The idea that matter is composed of tiny "atoms" of some kind had been around for at least 2000 years. Dalton's accomplishment was to identify atoms with actual chemical elements.
If Nobel prizes had existed in the early 1800's, the English school teacher/ meteorologist/ chemist John Dalton (1766-1844) would certainly have won one for showing how the experimental information available at that time, as embodied in the laws of chemical change that we have just described, are fully consistent with the hypothesis that atoms are the smallest units of chemical identity. These points of Dalton's atomic theory provided satisfactory explanations of all the laws of chemical change noted above:
Dalton's explanation of the Law of Conservation of Mass was that it is really a consequence of "conservation of atoms" which are presumed to be indestructible by chemical means. In chemical reactions, the atoms are simply rearranged, but never destroyed.
Dalton's Explanation of the law of constant composition was that if compounds are made up of definite numbers of atoms, each of which has its own characteristic mass, then the relative mass of each element in a compound must always be the same. Thus the elements must always be present in a pure sample of a compound in the same proportions by mass.
A given set of elements can usually form two or more compounds in which the numbers of atoms of some of the elements are different. Because these numbers must be integers (you can't have "half" an atom!), the masses of one element combined with a fixed mass of another element in any two such compounds can stand only in small whole-number ratios. Thus, for the series of nitrogen-hydrogen compounds cited in the Problem Example above, we have the following relations:
| Compound | Formula | weight ratio H:N | ratio to 0.0714 |
|---|---|---|---|
| urea | CO(NH 2 ) 2 | 0.1428 | 2 |
| ammonia | NH 3 | 0.0714 | 1 |
| ammonium chloride | NH 4 Cl | 0.2857 | 4 |
Although Dalton's atomic theory was immediately found to be a useful tool for organizing chemical knowledge, it was some time before it became accepted as a true representation of the world. Thus, as late as 1887, one commentator observed
"Atoms are round bits of wood invented by Mr. Dalton."
These wooden balls have evolved into computer-generated images derived from the atomic force microscope (AFM), an exquisitely sensitive electromechanical device in which the distance between the tip of a submicroscopic wire probe and the surface directly below it is recorded as the probe moves along a surface to which atoms are adsorbed. The general principle of the AFM is quite simple, but its realization in an actual device can appear somewhat intimidating! One such highly specialized atomic force microscope is among several similar devices developed at Argonne National Laboratory.
Relative Masses
Dalton's atomic theory immediately led to the realization that although atoms are far too small to be studied directly, their relative masses can be estimated by observing the weights of elements that combine to form similar compounds. These weights are sometimes referred to as combining weights . There is one difficulty, however: we need to know the formulas of the compounds we are considering in order to make valid comparisons. For example, we can find the relative masses of two atoms X and Y that combine with oxygen only if we assume that the values of n in the two formulas \(XO_n\) and \(YO_n\) are the same. But the very relative masses we are trying to find must be known in order to determine these formulas.
The way to work around this was to focus on binary (two-element) compounds that were assumed to have simple atom ratios such as 1:1, 1:2, etc., and to hope that enough 1:1 compounds would be found to provide a starting point for comparing the various pairs of combining weights. Compounds of oxygen, known as oxides, played an especially important role here, partly because almost all of the chemical elements form compounds with oxygen, and most of them do have very simple formulas.
The first proof that water is composed of hydrogen and oxygen came from the discovery, in 1800, that an electric current could decompose water into these elements. Notice the 2:1 volumes of the two gases displacing the water at the tops of the tubes.
Of these oxygen compounds, the one with hydrogen — ordinary water — had been extensively studied. Earlier experiments had given the composition of water as 87.4 percent oxygen and 12.6 percent hydrogen by weight. This means that if the formula of water is assumed [incorrectly] to be HO, then the mass ratio of the two kinds of atoms must be O:H = 87.4/12.6 = 6.9. Later work corrected this figure to 8, but the wrong assumption about the formula of water would remain to plague chemistry for almost fifty years, until studies on gas volumes proved that water is H 2 O.
Dalton fully acknowledged the tentative nature of weight ratios based on assumed simple formulas such as HO for water, but was nevertheless able to compile in 1810 a list of the relative weights of the atoms of some of the elements he investigated by observing weight changes in chemical reactions.
| hydrogen | nitrogen | carbon | oxygen | phosphorus | sulfur | iron | zinc | copper | lead |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 5 | 5.4 | 7 | 9 | 13 | 50 | 56 | 56 | 95 |
Because hydrogen is the lightest element, it was assigned a relative weight of unity. By assigning definite relative masses to atoms of the different elements, Dalton had given reality to the concept of the atom and established the link between atom and element. Once the correct chemical formulas of more compounds became known, more precise combining-weight studies eventually led to the relative weights of the atoms we know today as the atomic weights , which we discuss farther on.
The Nuclear atom
The precise physical nature of atoms finally emerged from a series of elegant experiments carried out between 1895 and 1915. The most notable of these achievements was Ernest Rutherford's famous 1911 alpha-ray scattering experiment, which established that
- Almost all of the mass of an atom is contained within a tiny (and therefore extremely dense) nucleus which carries a positive electric charge whose value identifies each element and is known as the atomic number of the element.
- Almost all of the volume of an atom consists of empty space in which electrons, the fundamental carriers of negative electric charge, reside. The extremely small mass of the electron (1/1840 the mass of the hydrogen nucleus) causes it to behave as a quantum particle, which means that its location at any moment cannot be specified; the best we can do is describe its behavior in terms of the probability of its manifesting itself at any point in space. It is common (but somewhat misleading) to describe the volume of space in which the electrons of an atom have a significant probability of being found as the electron cloud . The latter has no definite outer boundary, so neither does the atom. The radius of an atom must be defined arbitrarily, such as the boundary in which the electron can be found with 95% probability. Atomic radii are typically 30-300 pm.
Protons and Neutrons
The nucleus is itself composed of two kinds of particles. Protons are the carriers of positive electric charge in the nucleus; the proton charge is exactly the same as the electron charge, but of opposite sign. This means that in any [electrically neutral] atom, the number of protons in the nucleus (often referred to as the nuclear charge ) is balanced by the same number of electrons outside the nucleus.
Ions
Because the electrons of an atom are in contact with the outside world, it is possible for one or more electrons to be lost, or some new ones to be added. The resulting electrically-charged atom is called an ion.
The other nuclear particle is the neutron . As its name implies, this particle carries no electrical charge. Its mass is almost the same as that of the proton. Most nuclei contain roughly equal numbers of neutrons and protons, so we can say that these two particles together account for almost all the mass of the atom.
Atomic Number (Z)
What single parameter uniquely characterizes the atom of a given element? It is not the atom's relative mass, as we will see in the section on isotopes below. It is, rather, the number of protons in the nucleus, which we call the atomic number and denote by the symbol Z . Each proton carries an electric charge of +1, so the atomic number also specifies the electric charge of the nucleus. In the neutral atom, the Z protons within the nucleus are balanced by Z electrons outside it.
The British physicist Henry Moseley searched for a measurable property of each element that increases linearly with atomic number. He found this in a class of X-rays emitted by an element when it is bombarded with electrons. The frequencies of these X-rays are unique to each element, and they increase uniformly in successive elements. Moseley found that the square roots of these frequencies give a straight line when plotted against Z; this enabled him to sort the elements in order of increasing atomic number.
You can think of the atomic number as a kind of serial number of an element, commencing at 1 for hydrogen and increasing by one for each successive element. The chemical name of the element and its symbol are uniquely tied to the atomic number; thus the symbol "Sr" stands for strontium, whose atoms all have Z = 38.
Mass number (A)
This is just the sum of the numbers of protons and neutrons in the nucleus. It is sometimes represented by the symbol A , so
\[A = Z + N\]
in which Z is the atomic number and N is the neutron number .
Nuclides and their Symbols
The term nuclide simply refers to any particular kind of nucleus. For example, a nucleus of atomic number 7 is a nuclide of nitrogen. Any nuclide is characterized by the pair of numbers (Z, A). The element symbol depends on Z alone, so the symbol 26 Mg is used to specify the mass-26 nuclide of magnesium, whose name implies Z = 12. A more explicit way of denoting a particular kind of nucleus is to add the atomic number as a subscript, as in \(\ce{^{26}_{12}Mg}\). Of course, this is somewhat redundant, since the symbol Mg always implies Z = 12, but it is sometimes a convenience when discussing several nuclides.
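The bookkeeping among Z, N, and A can be sketched in Python; note that the small symbol lookup here is only an illustrative subset, not a complete element table:

```python
# A nuclide is specified by (Z, A); the neutron number follows as N = A - Z.
SYMBOLS = {1: "H", 6: "C", 7: "N", 12: "Mg", 17: "Cl"}  # tiny illustrative subset

def nuclide(z, a):
    """Return a text form like 'Mg-26 (Z=12, N=14)'."""
    n = a - z                     # neutron number
    return f"{SYMBOLS[z]}-{a} (Z={z}, N={n})"

print(nuclide(12, 26))   # Mg-26 (Z=12, N=14)
print(nuclide(17, 37))   # Cl-37 (Z=17, N=20)
```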
Two nuclides having the same atomic number but different mass numbers are known as isotopes . Most elements occur in nature as mixtures of isotopes, but twenty-three of them (including beryllium and fluorine, shown in the table) are monoisotopic. For example, there are three natural isotopes of magnesium: 24 Mg (79% of all Mg atoms), 25 Mg (10%), and 26 Mg (11%); all three are present in all compounds of magnesium in about these same proportions.
Approximately 290 isotopes occur in nature. The two heavy isotopes of hydrogen are especially important— so much so that they have names and symbols of their own:
Deuterium accounts for only about 150 out of every one million atoms of hydrogen. Tritium, which is radioactive, is even less abundant. All the tritium on the earth is a by-product of the decay of other radioactive elements.
Atomic Weights
Atoms are of course far too small to be weighed directly; weight measurements can only be made on the massive (but unknown) numbers of atoms that are observed in chemical reactions. The early combining-weight experiments of Dalton and others established that hydrogen is the lightest of the atoms, but the crude nature of the measurements and uncertainties about the formulas of many compounds made it difficult to develop a reliable scale of the relative weights of atoms. Even the most exacting weight measurements we can make today are subject to experimental uncertainties that limit the precision to four significant figures at best.
Weighing atoms: Mass Spectrometry
An alternative way of examining the behavior of individual atomic particles became evident in 1912, when J.J. Thomson and F.W. Aston showed that a stream of gaseous neon atoms, broken up by means of an electrical discharge, yielded two kinds of subatomic particles having opposite electrical charges, as revealed by their deflections in externally-applied magnetic and electrostatic fields. (The deflections themselves could be observed by the spots the particles made when they impinged on a photographic plate.) This, combined with the finding made a year earlier by Wilhelm Wien that the degree of deflection of a particle in these fields is proportional to the ratio of its electric charge to its mass, opened the way to characterizing these otherwise invisible particles.
Neutral atoms, having no charge, cannot be accelerated along a path so as to form a beam, nor can they be deflected. They can, however, be made to acquire electric charges by directing an electron beam at them, and this was the basis of the first mass spectrometer developed by Thomson's former student F.W. Aston (1877-1945, 1922 Nobel Prize) in 1919. This enabled him to quickly identify 212 of the 287 naturally occurring isotopes.
The mass spectrometer has become one of the most widely used laboratory instruments. Mass spectrometry is now mostly used to identify molecules. Ionization usually breaks a molecule up into fragments having different charge-to-mass ratios, each molecule resulting in a unique "fingerprint" of particles whose origin can be deduced by a jigsaw puzzle-like reconstruction. For many years, "mass-spec" had been limited to small molecules, but with the development of novel ways of creating ions from molecules, it has now become a major tool for analyzing materials and large biomolecules, including proteins.
The scale of relative weights (the atomic weight scale ) we now use is based on \(\ce{^{12}_6C}\), whose relative mass is defined as exactly 12. Atomic weights are the ratios of the weights of an element to the weight of an identical number of \(\ce{^{12}_6C}\) atoms. Being ratios, atomic weights are dimensionless .
From 1850 to 1961, the atomic weight scale was defined relative to oxygen = 16.
A certain number (call it "one zillion") of oxygen atoms weighs 1.200 g. What will be the weight of an equal number of lithium atoms?
Solution
From the atomic weight table, the mass ratio Li/O = 6.94/16.00, so the weight of one zillion lithium atoms will be

\[ (1.200\; g) \times \dfrac{6.94}{16.00} = 0.521\; g \nonumber\]
You can visualize the atomic weight scale as a long line of numbers that runs from 1 to around 280. The beginning of the scale looks like this:
You will notice that the relative masses of the different elements (shown in the upper part) are not all integers. Since nuclei differ by integral numbers of protons and neutrons, which have virtually identical masses, we would expect the atomic weights to be integers. Some are very close to integers (the reason they are not exactly integral will be explained in the next section), but many are nowhere near integral. This puzzling observation eventually led to the concept of isotopes.
The atomic weights that are determined experimentally and listed in tables are weighted averages of these isotopic mixtures.

Estimate the average atomic weight of magnesium from its isotopic abundances: 78.99% 24 Mg, 10.00% 25 Mg, and 11.01% 26 Mg.
Solution
We just take the weighted average of the mass numbers:
(0.7899 × 24) + (0.1000 × 25) + (0.1101 × 26) = 24.32
Note: The measured atomic weight of Mg (24.305) is slightly smaller than this because atomic masses of nuclear components are not strictly additive, as will be explained further below.
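The weighted-average calculation generalizes easily; here is a minimal Python sketch using the magnesium abundances from the example:

```python
# Average atomic weight as the abundance-weighted mean of the mass numbers.
mg_isotopes = [(24, 0.7899), (25, 0.1000), (26, 0.1101)]  # (mass number, abundance)

avg = sum(mass * abundance for mass, abundance in mg_isotopes)
print(round(avg, 2))   # 24.32
```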
When there are only two significantly abundant isotopes, you can estimate the relative abundances from the mass numbers and the average atomic weight. The following is a favorite exam problem:
The average atomic weight of chlorine is 35.45 and the element has two stable isotopes \(\ce{^{35}_{17}Cl}\) and \(\ce{^{37}_{17}Cl}\). Estimate the relative abundances of these two isotopes.
Solution
Here you finally get to put your high-school algebra to work! If we let x represent the fraction of \(\ce{^{35}Cl}\), then (1- x ) gives the fraction of \(\ce{^{37}Cl}\). The weighted average atomic weight is then
35 x + 37(1- x ) = 35.45
Solving for x gives 2x = 1.55, x = 0.775, so the abundances are 77.5% 35 Cl and 22.5% 37 Cl.
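The same algebra can be expressed in code; a minimal Python sketch of the two-isotope rearrangement:

```python
# Solve 35x + 37(1 - x) = 35.45 for x, the fraction of Cl-35.
avg_weight = 35.45
x = (37 - avg_weight) / (37 - 35)   # rearranged: x = (heavy - avg) / (heavy - light)
print(round(x, 3))          # 0.775  -> 77.5% Cl-35
print(round(1 - x, 3))      # 0.225  -> 22.5% Cl-37
```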
Elemental chlorine, Cl 2 , is made up of the two isotopes mentioned in the previous example. How many peaks would you expect to observe in the mass spectrum of Cl 2 ?
Solution
The mass spectrometer will detect a peak for each possible combination of the two isotopes in dichlorine: 35 Cl- 35 Cl, 35 Cl- 37 Cl, and 37 Cl- 37 Cl.
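Enumerating the isotope pairings programmatically gives the same answer; a short Python sketch:

```python
from itertools import combinations_with_replacement

# The mass spectrum of Cl2 shows one peak per distinct isotope pairing.
peaks = sorted(a + b for a, b in combinations_with_replacement([35, 37], 2))
print(peaks)   # [70, 72, 74]  -> three peaks
```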
Tables of Atomic Weights
Atomic weight tables are updated every few years as better data become available.
One peculiarity you might notice is that the number of significant figures varies from element to element. It tends to be highest for monoisotopic elements, as you can see here for beryllium and fluorine. For some elements, the isotopic abundances vary slightly, depending on the source; this variance reduces the useful precision of a value.
Atomic weights, molecular weights and formula weights
Molecules are composed of atoms, so a molecular weight is just the sum of the atomic weights of all the atoms in the molecule.
What is the molecular weight of sulfuric acid, \(H_2SO_4\)?
Solution
The atomic weights of hydrogen and of oxygen are 1.01 and 16.00, respectively (you should have these common values memorized.) From a table, you can find that the atomic weight of sulfur is 32.06. Adding everything up, we have
\[(2 \times 1.01) + 32.06 + (4 \times 16.00) = 98.08\]
Because some solids are not made up of discrete molecules (sodium chloride, NaCl, and silica, SiO 2 are common examples), the term formula weight is often used in place of molecular weight. In general, the terms molecular weight and formula weight are interchangeable.
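A formula weight calculation is easy to automate. The sketch below uses only the atomic weights quoted in the example above; a real program would of course draw on a complete table:

```python
# Formula weight as the sum of atomic weights of all atoms in the formula.
ATOMIC_WEIGHTS = {"H": 1.01, "O": 16.00, "S": 32.06}  # values from the example

def formula_weight(composition):
    """composition maps element symbol -> number of atoms, e.g. H2SO4 below."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in composition.items())

print(round(formula_weight({"H": 2, "S": 1, "O": 4}), 2))   # 98.08
```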
Isotopic Fractionation
The isotopes of a given element are so similar in their chemical behavior that what small differences may exist can be considered negligible for most practical purposes. However, heavier isotopes do tend to react or evaporate slightly more slowly than lighter ones, so that given enough time, various geochemical processes can result in an enrichment of one isotope over the other, an effect known as geochemical isotopic fractionation.
What differences do exist are most evident in the lighter elements, and especially in hydrogen, whose three isotopes differ in mass by relatively large amounts. Thus "heavy water", D 2 O ( 2 H 2 O), is not decomposed by electrolysis quite as rapidly as is 1 H 2 O, so it becomes enriched in the un-decomposed portion of the water in an electrolysis apparatus. Its boiling point is 101.7 °C and it freezes at 3.8 °C. Animals will die if they drink heavy water in place of ordinary water.
The minute differences between the behaviors of most isotopes constitute an invaluable tool for research in geochemistry. For example, the tiny fraction of water molecules containing \(\ce{^{18}O}\) evaporates more slowly than the lighter (and far more abundant) \(\ce{H2^{16}O}\). But the ratio of \(\ce{^{18}O}\) to \(\ce{^{16}O}\) in the water that evaporates depends on the temperature at which this process occurs. By observing this ratio in glacial ice cores and in marine carbonate deposits, it is possible to determine the average temperature of the earth at various times in the past.
Atomic masses
Here again is the beginning of the atomic weight scale that you saw above:
You understand by now that atomic weights are relative weights, based on a scale defined by \(\ce{^{12}_6C} = 12\). But what is the absolute weight of an atom, expressed in grams or kilograms? In other words, what actual mass does each unit on the atomic weight scale represent?
The answer is \(1.66053886 \times 10^{–27}\; kg\). This quantity is known as the unified atomic mass unit , denoted by the abbreviation u or amu . You do not need to memorize this value, because you can easily calculate it from Avogadro's number, \(N_A\), which you are expected to know:

\[1\, u = \dfrac{1}{N_A} \;g = \dfrac{1}{1000\; N_A} \;kg\]
Note: Definition of Atomic Mass Unit
The unified atomic mass unit is defined as 1/12 of the mass of one atom of carbon-12.
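As a sanity check on the relation above, the value of 1 u can be computed directly from Avogadro's number. The sketch below uses the exactly defined 2019 SI value of \(N_A\), which differs slightly in its trailing digits from the older value quoted in this text:

```python
N_A = 6.02214076e23  # Avogadro's number (exact since the 2019 SI redefinition)

# 1 u = 1/N_A gram = 1/(1000 N_A) kilogram
u_in_grams = 1 / N_A
u_in_kg = 1 / (1000 * N_A)

print(f"{u_in_grams:.4e} g")   # 1.6605e-24 g
print(f"{u_in_kg:.4e} kg")     # 1.6605e-27 kg
```

The kilogram result agrees with the value stated at the start of this section to the precision shown.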
Masses of the Subatomic Particles
Atoms are composed of protons, neutrons, and electrons, whose properties are shown below:

| particle | mass, g | mass, u | charge | symbol |
|---|---|---|---|---|
| electron | \(9.1093897 \times 10^{–28}\) | \(5.48579903 \times 10^{–4}\) | 1– | \(\ce{^0_{-1}e}\) |
| proton | \(1.6726231 \times 10^{–24}\) | 1.007276470 | 1+ | \(\ce{^1_1H^{+}}\) or \(\ce{^1_1p}\) |
| neutron | \(1.6749286 \times 10^{–24}\) | 1.008664904 | 0 | \(\ce{^1_0n}\) |
Two important points should be noted from Table \(\PageIndex{1}\):
- The mass of the electron is negligible compared to that of the two nuclear particles;
- The proton and neutron have masses that are almost, but not exactly, identical.
Nuclear Masses
As we mentioned in one of the problem examples above, the mass of a nucleus is always slightly different from the masses of the nucleons (protons and neutrons) of which it is composed. The difference, known as the mass defect , is related to the energy associated with the formation of the nucleus through Einstein's famous formula \(E = mc^2\). This is the one instance in chemistry in which conservation of mass-energy , rather than of mass alone, must be taken into account. But there is no need for you to be concerned with this in this part of the course.
For all practical purposes, until you come to the section of the course on nuclear chemistry, you can consider that the proton and neutron have masses of about 1 u, and that the mass of an atom (in u) is just the sum of its neutron and proton numbers.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/04%3A_The_Basics_of_Chemistry/4.01%3A_Atoms_Elements_and_the_Nucleus",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "4.1: Atoms, Elements, and the Nucleus",
"author": "Stephen Lower"
} |
4.2: Avogadro's Number and the Mole
Make sure you thoroughly understand the following essential ideas:
- Define Avogadro's number and explain why it is important to know.
- Define the mole . Be able to calculate the number of moles in a given mass of a substance, or the mass corresponding to a given number of moles.
- Define molecular weight , formula weight , and molar mass ; explain how the latter differs from the first two.
- Be able to find the number of atoms or molecules in a given weight of a substance.
- Find the molar volume of a solid or liquid, given its density and molar mass.
- Explain how the molar volume of a metallic solid can lead to an estimate of atomic diameter.
The chemical changes we observe always involve discrete numbers of atoms that rearrange themselves into new configurations. These numbers are HUGE— far too large in magnitude for us to count or even visualize, but they are still numbers , and we need to have a way to deal with them. We also need a bridge between these numbers, which we are unable to measure directly, and the weights of substances, which we do measure and observe. The mole concept provides this bridge, and is central to all of quantitative chemistry.
Counting Atoms: Avogadro's Number
Owing to their tiny size, atoms and molecules cannot be counted by direct observation. But much as we do when "counting" beans in a jar, we can estimate the number of particles in a sample of an element or compound if we have some idea of the volume occupied by each particle and the volume of the container. Once this has been done, we know the number of formula units (to use the most general term for any combination of atoms we wish to define) in any arbitrary weight of the substance. The number will of course depend both on the formula of the substance and on the weight of the sample. However, if we consider a weight of substance that is the same as its formula (molecular) weight expressed in grams, we have only one number to know: Avogadro's number .
Avogadro's number
Avogadro's number is known to ten significant digits:
\[N_A = 6.022141527 \times 10^{23}.\]
However, you only need to know it to three significant figures:
\[N_A \approx 6.02 \times 10^{23}. \label{3.2.1}\]
So \(6.02 \times 10^{23}\) of what ? Well, of anything you like: apples, stars in the sky, burritos. However, the only practical use for \(N_A\) is to have a more convenient way of expressing the huge numbers of the tiny particles such as atoms or molecules that we deal with in chemistry. Avogadro's number is a collective number , just like a dozen. Students can think of \(6.02 \times 10^{23}\) as the "chemist's dozen".
Before getting into the use of Avogadro's number in problems, take a moment to convince yourself of the reasoning embodied in the following examples.
The atomic weights of oxygen and carbon are 16.0 and 12.0 atomic mass units (\(u\)), respectively. How much heavier is the oxygen atom in relation to carbon?
Solution
Atomic weights represent the relative masses of different kinds of atoms. This means that the atom of oxygen has a mass that is
\[\dfrac{16\, \cancel{u}}{12\, \cancel{u}} = \dfrac{4}{3} ≈ 1.33 \nonumber\]
as great as the mass of a carbon atom.
The absolute mass of a carbon atom is 12.0 unified atomic mass units (\(u\)). How many grams will a single oxygen atom weigh?
Solution
The absolute mass of a carbon atom is 12.0 \(u\) or
\[12\,\cancel{u} \times \dfrac{1.6605 \times 10^{–24}\, g}{1 \,\cancel{u}} = 1.99 \times 10^{–23} \, g \text{ (per carbon atom)} \nonumber\]
The mass of the oxygen atom will be 4/3 greater (from Example \(\PageIndex{1}\)):
\[ \left( \dfrac{4}{3} \right) 1.99 \times 10^{–23} \, g = 2.66 \times 10^{–23} \, g \text{ (per oxygen atom)} \nonumber\]
Alternatively, we can do the calculation directly, as we did for carbon:
\[16\,\cancel{u} \times \dfrac{1.6605 \times 10^{–24}\, g}{1 \,\cancel{u}} = 2.66 \times 10^{–23} \, g \text{ (per oxygen atom)} \nonumber\]
Suppose that we have \(N\) carbon atoms, where \(N\) is a number large enough to give us a pile of carbon atoms whose mass is 12.0 grams. How much would the same number, \(N\), of oxygen atoms weigh?
Solution
We use the results from Example \(\PageIndex{1}\) again. The collection of \(N\) oxygen atoms would have a mass of
\[\dfrac{4}{3} \times 12\, g = 16.0\, g. \nonumber\]
What is the numerical value of \(N\) in Example \(\PageIndex{3}\)?
- Answer
-
Using the results of Examples \(\PageIndex{2}\) and \(\PageIndex{3}\).
\[N \times 1.99 \times 10^{–23} \, g \text{ (per carbon atom)} = 12\, g \nonumber\]
or
\[N = \dfrac{12\, \cancel{g}}{1.99 \times 10^{–23} \, \cancel{g} \text{ (per carbon atom)}} = 6.03 \times 10^{23} \text{atoms} \nonumber \]
There are a lot of atoms in 12 g of carbon.
Things to understand about Avogadro's number
- It is a number , just as is "dozen", and thus is dimensionless .
- It is a huge number, far greater in magnitude than we can visualize
- Its practical use is limited to counting tiny things like atoms, molecules, "formula units", electrons, or photons.
- The value of N A can be known only to the precision that the number of atoms in a measurable weight of a substance can be estimated. Because large numbers of atoms cannot be counted directly, a variety of ingenious indirect measurements have been made involving such things as Brownian motion and X-ray scattering .
-
The current value was determined by measuring the distances between the atoms of silicon in an ultrapure crystal of this element that was shaped into a perfect sphere. (The measurement was made by X-ray scattering.) When combined with the measured mass of this sphere, it yields Avogadro's number. However, there are two problems with this:
- The silicon sphere is an artifact, rather than being something that occurs in nature, and thus may not be perfectly reproducible.
- The standard of mass, the kilogram, is not precisely known, and its value appears to be changing. For these reasons, there are proposals to revise the definitions of both N A and the kilogram.
Moles and their Uses
The mole (abbreviated mol) is the SI measure of quantity of a "chemical entity" , which can be an atom, molecule, formula unit, electron or photon. One mole of anything is just Avogadro's number of that thing. Or, if you think like a lawyer, you might prefer the official SI definition:
The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12
Avogadro's number (Equation \ref{3.2.1}) like any pure number, is dimensionless. However, it also defines the mole, so we can also express N A as 6.02 × 10 23 mol –1 ; in this form, it is properly known as Avogadro's constant . This construction emphasizes the role of Avogadro's number as a conversion factor between number of moles and number of "entities".
How many moles of nickel atoms are there in 80 nickel atoms?
Solution
\[\dfrac{80 \;atoms}{6.02 \times 10^{23} \; atoms\; mol^{-1}} = 1.33 \times 10^{-22} mol \nonumber\]
Is this answer reasonable? Yes, because 80 is an extremely small fraction of \(N_A\).
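The conversion in this example is just division (or, going the other way, multiplication) by \(N_A\). A minimal sketch, with illustrative helper names:

```python
N_A = 6.022e23  # Avogadro's number, three significant figures

def entities_to_moles(n_entities):
    """Number of atoms/molecules/etc. -> moles."""
    return n_entities / N_A

def moles_to_entities(n_moles):
    """Moles -> number of atoms/molecules/etc."""
    return n_moles * N_A

print(entities_to_moles(80))   # ~1.33e-22 mol, as in the example
```

As always, a tiny count of particles corresponds to a vanishingly small number of moles.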
Molar Mass
The atomic weight, molecular weight, or formula weight of one mole of the fundamental units (atoms, molecules, or groups of atoms that correspond to the formula of a pure substance) is the ratio of its mass to 1/12 the mass of one mole of \(\ce{^{12}C}\) atoms, and being a ratio, is dimensionless. But at the same time, this molar mass (as many now prefer to call it) is also the observable mass of one mole ( \(N_A\) ) of the substance, so we frequently emphasize this by stating it explicitly as so many grams (or kilograms) per mole: g mol –1 .
It is important always to bear in mind that the mole is a number and not a mass . But each individual particle has a mass of its own, so a mole of any specific substance will always correspond to a certain mass of that substance.
Borax is the common name of sodium tetraborate, \(\ce{Na2B4O7}\).
- how many moles of boron are present in 20.0 g of borax?
- how many grams of boron are present in 20.0 g of borax?
Solution
The formula weight of \(\ce{Na2B4O7}\) is:
\[(2 \times 23.0) + (4 \times 10.8) + (7 \times 16.0) = 201.2 \nonumber\]
- 20 g of borax contains (20.0 g) ÷ (201 g mol –1 ) = 0.10 mol of borax, and thus 0.40 mol of B.
- 0.40 mol of boron has a mass of (0.40 mol) × (10.8 g mol –1 ) = 4.3 g .
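The borax arithmetic can be traced step by step in a short script (variable names are illustrative; atomic weights are the rounded values used above):

```python
# Molar mass of Na2B4O7 from the rounded atomic weights in the text
M_borax = 2 * 23.0 + 4 * 10.8 + 7 * 16.0   # 201.2 g/mol
mass_borax = 20.0                           # g of borax

mol_borax = mass_borax / M_borax            # ~0.0994 mol of borax
mol_B = 4 * mol_borax                       # 4 B atoms per formula unit -> ~0.40 mol
g_B = mol_B * 10.8                          # mass of that boron -> ~4.3 g

print(round(mol_B, 2), round(g_B, 1))  # 0.4 4.3
```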
The plant photosynthetic pigment chlorophyll contains 2.68 percent magnesium by weight. How many atoms of Mg will there be in 1.00 g of chlorophyll?
Solution
Each gram of chlorophyll contains 0.0268 g of Mg, atomic weight 24.3.
- Number of moles in this weight of Mg: (0.0268 g) / (24.3 g mol –1 ) = 0.00110 mol
- Number of atoms: (0.00110 mol) × (\(6.02 \times 10^{23}\) mol –1 ) = \(6.64 \times 10^{20}\)
Is this answer reasonable? (Always be suspicious of huge-number answers!) Yes, because we would expect to have huge numbers of atoms in any observable quantity of a substance.
Molar Volume
This is the volume occupied by one mole of a pure substance. Molar volume depends on the density of a substance and, like density, varies with temperature owing to thermal expansion, and also with the pressure. For solids and liquids, these variables ordinarily have little practical effect, so the values quoted for 1 atm pressure and 25°C are generally useful over a fairly wide range of conditions. This is definitely not the case with gases, whose molar volumes must be calculated for a specific temperature and pressure.
Methanol, CH 3 OH, is a liquid having a density of 0.79 g per milliliter. Calculate the molar volume of methanol.
Solution
The molar volume will be the volume occupied by one molar mass (32 g) of the liquid. Expressing the density in liters instead of mL, we have
\[V_M = \dfrac{32\; g\; mol^{–1}}{790\; g\; L^{–1}}= 0.0405 \;L \;mol^{–1} \nonumber\]
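The same molar-volume calculation as a small sketch (the helper name is illustrative):

```python
def molar_volume_L(molar_mass_g_per_mol, density_g_per_mL):
    """Molar volume in litres per mole, from molar mass and density."""
    density_g_per_L = density_g_per_mL * 1000  # g/mL -> g/L
    return molar_mass_g_per_mol / density_g_per_L

# Methanol: molar mass 32 g/mol, density 0.79 g/mL
print(round(molar_volume_L(32.0, 0.79), 4))  # 0.0405 L/mol
```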
The molar volume of a metallic element allows one to estimate the size of the atom. The idea is to mentally divide a piece of the metal into as many little cubic boxes as there are atoms, and then calculate the length of each box. Assuming that an atom sits in the center of each box and that each atom is in direct contact with its six neighbors (two along each dimension), this gives the diameter of the atom. The manner in which atoms pack together in actual metallic crystals is usually more complicated than this and it varies from metal to metal, so this calculation only provides an approximate value.
The density of metallic strontium is 2.60 g cm –3 . Use this value to estimate the radius of the atom of Sr, whose atomic weight is 87.6.
Solution
The molar volume of Sr is:
\[\dfrac{87.6 \; g \; mol^{-1}}{2.60\; g\; cm^{-3}} = 33.7\; cm^3\; mol^{–1}\]
The volume of each "box" is:
\[\dfrac{33.7\; cm^3\; mol^{–1}} {6.02 \times 10^{23}\; mol^{–1}} = 5.60 \times 10^{-23}\; cm^3\]

The side length of each box will be the cube root of this value, \(3.83 \times 10^{–8}\; cm\). The atomic radius will be half this value, or

\[1.9 \times 10^{–8}\; cm = 1.9 \times 10^{–10}\; m = 190\; pm\]
Note : Your calculator probably has no cube-root button, but you are expected to be able to find cube roots; you can usually use the \(x^y\) button with y = 1/3 ≈ 0.333. You should also be able to estimate the magnitude of this value as a check. The easiest way is to express the number so that the exponent is a multiple of 3. Take \(54 \times 10^{-24}\), for example. Since \(3^3 = 27\) and \(4^3 = 64\), you know that the cube root of 54 lies between 3 and 4, so the cube root should be a bit less than \(4 \times 10^{–8}\).
So how good is our atomic radius? Standard tables give the atomic radius of strontium as being in the range 192–220 pm.
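The whole estimation chain of the strontium example can be sketched as follows; as noted above, the cube-packing model makes this only an order-of-magnitude estimate:

```python
N_A = 6.022e23  # Avogadro's number

def estimate_radius_cm(atomic_weight, density_g_cm3):
    """Crude atomic radius from the cube-packing model described in the text."""
    molar_volume = atomic_weight / density_g_cm3   # cm^3 per mole
    box_volume = molar_volume / N_A                # cm^3 per atom ("box")
    side = box_volume ** (1 / 3)                   # box side = atomic diameter, cm
    return side / 2                                # radius, cm

r = estimate_radius_cm(87.6, 2.60)  # strontium
print(f"{r * 1e10:.0f} pm")         # 1 cm = 1e10 pm -> prints 191 pm
```

This agrees with the ~190 pm worked out above and sits at the low edge of the tabulated 192–220 pm range, which is what we expect from so crude a packing model.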
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/04%3A_The_Basics_of_Chemistry/4.02%3A_Avogadro's_Number_and_the_Mole",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "4.2: Avogadro's Number and the Mole",
"author": "Stephen Lower"
} |
4.3: Formulas and Their Meaning
Make sure you thoroughly understand the following essential concepts that have been presented above.
- Explain why the symbol of an element often differs from the formula of the element.
- Define an ion , and explain the meaning of its formula.
- Find the simplest ("empirical") formula of a substance from a more complex molecular formula. Explain the meaning of the formula of an ionic solid such as NaCl.
- Define molecular weight, formula weight , and molar mass . Calculate any of these from any chemical formula.
- Given a chemical formula, express the mole ratios of any two elements, or the mole fraction of one of its elements.
- Find the percentage composition of a compound from its formula.
- Calculate the mass ratio of any two elements present in a compound from its formula.
- Find the empirical formula of a binary compound from the mole ratio of its two elements, expressed as a decimal number.
- Find the empirical formula of a binary compound from the mass ratio of its two elements.
- Find the empirical formula of a compound from its mass- or percentage composition.
At the heart of chemistry are substances — elements or compounds— which have a definite composition which is expressed by a chemical formula . In this unit you will learn how to write and interpret chemical formulas both in terms of moles and masses, and to go in the reverse direction, in which we use experimental information about the composition of a compound to work out a formula.
The formula of a compound specifies the number of each kind of atom present in one molecular unit of a compound. Since every unique chemical substance has a definite composition, every such substance must be describable by a chemical formula.
The well-known alcohol ethanol is composed of molecules containing two atoms of carbon, six atoms of hydrogen, and one atom of oxygen. What is its molecular formula?
Solution
Just write the symbol of each element, followed by a subscript indicating the number of atoms if more than one is present. Thus: \(\ce{C2H6O}\)
Note that:
- The number of atoms of each element in a molecular formula is written as a subscript;
- When only a single atom of an element in a molecular formula is present, the subscript is omitted.
- In the case of organic (carbon-containing) compounds, it is customary to place the symbols of the elements C, H, (and if present,) O, N in this order in the formula.
Formulas of Elements and Ions
The symbol of an element is the one- or two-letter combination that represents the atom of a particular element, such as Au (gold) or O (oxygen). The symbol can be used as an abbreviation for an element name (it is easier to write "Mo" instead of "molybdenum"!) In more formal chemical use, an element symbol can also stand for one atom, or, depending on the context, for one mole of atoms of the element.
Some of the non-metallic elements exist in the form of molecules containing two or more atoms of the element. These molecules are described by formulas such as N 2 , S 6 , and P 4 . Some of these elements can form more than one kind of molecule; the best-known example of this is oxygen, which can exist as O 2 (the common form that makes up 21% of the molecules in air), and also as O 3 , an unstable and highly reactive molecule known as ozone . The soccer-ball-shaped carbon molecules sometimes called buckyballs have the formula C 60 .
Allotropes
Different molecular forms of the same element (such as \(\ce{O_2}\) and \(\ce{O_3})\) are called allotropes.
Ions are atoms or molecules that carry an electrical charge. These charges are represented as superscripts in the ionic formulas. Thus:
| \(\ce{Cl^{-}}\) | the chloride ion, with one negative charge per atom |
| \(\ce{S^{2-}}\) | the sulfide ion carries two negative charges |
| \(\ce{HCO3^{-}}\) | the bicarbonate (hydrogen carbonate) ion, a molecular ion |
| \(\ce{NH4^{+}}\) | the ammonium ion |
Note that the number of charges (in units of the electron charge) should always precede the positive or negative sign, but this number is omitted when the charge is ±1.
Formulas of Extended Solids
In solid CdCl 2 , the Cl and Cd atoms are organized into sheets that extend indefinitely. Each atom is surrounded by six atoms of the opposite kind, so one can arbitrarily select any Cl–Cd–Cl as the "molecular unit". One such CdCl 2 unit is indicated by the two red-colored bonds in the diagram, but it does not constitute a discrete "molecule" of CdCl 2 .
Many apparently "simple" solids exist only as ionic solids (such as NaCl) or as extended solids (such as CuCl 2 ) in which no discrete molecules can be identified. The formulas we write for these compounds simply express relative numbers of the different kinds of atoms in the compound in the smallest possible integer numbers. These are identical with the empirical or "simplest" formulas that we discuss further on.
Many minerals and most rocks contain varying ratios of certain elements and can only be precisely characterized at the structural level. Because these are usually not pure substances, the "formulas" conventionally used to describe them have limited meanings. For example the common rock olivine, which can be considered a solid solution of Mg 2 SiO 4 and Fe 2 SiO 4 , can be represented by (Mg,Fe) 2 SiO 4 . This implies that the ratio of the metals to SiO 4 is constant, and that magnesium is usually present in greater amount than iron.
Empirical Formulas
Empirical formulas give the relative numbers of the different elements in a sample of a compound, expressed in the smallest possible integers. The term empirical refers to the fact that formulas of this kind are determined experimentally; such formulas are also commonly referred to as simplest formulas .
Glucose (the "fuel" your body runs on) is composed of molecular units having the formula C 6 H 12 O 6 . What is the empirical formula of glucose?
Solution
The glucose molecule contains twice as many atoms of hydrogen as carbons or oxygens, so we divide through by 6 to get CH 2 O .
Note: this empirical formula, which applies to all 6-carbon sugars, indicates that these compounds are "composed" of carbon and water, which explains why sugars are known as carbohydrates .
Some solid compounds do not exist as discrete molecular units, but are built up as extended two- or three-dimensional lattices of atoms or ions. The compositions of such compounds are commonly described by their empirical formulas. In the very common case of ionic solids , such a formula also expresses the minimum numbers of positive and negative ions required to produce an electrically neutral unit, as in NaCl or CuCl 2 .
- Write the formula of ferric bromide, given that the ferric (iron-III) ion is Fe 3 + and the bromide ion carries a single negative charge.
- Write the formula of bismuth sulfide, formed when the ions Bi 3 + and S 2– combine.
Solution:
- Three Br – ions are required to balance the three positive charges of Fe 3 + , hence the formula FeBr 3 .
- The only way to get equal numbers of opposite charges is to have six of each, so the formula will be Bi 2 S 3 .
What formulas do not tell us
The formulas we ordinarily write convey no information about the compound's structure — that is, the order in which the atoms are connected by chemical bonds or are arranged in three-dimensional space. This limitation is especially significant in organic compounds, in which hundreds if not thousands of different molecules may share the same empirical formula. For example, ethanol and dimethyl ether both have the empirical formula C 2 H 6 O; however, their structural formulas reveal the very different nature of these two molecules:
More Complex Formulas
It is often useful to write formulas in such as way as to convey at least some information about the structure of a compound. For example, the formula of the solid (NH 4 ) 2 CO 3 is immediately identifiable as ammonium carbonate, and essentially a compound of ammonium and carbonate ions in a 2:1 ratio, whereas the simplest or empirical formula N 2 H 8 CO 3 obscures this information.
Similarly, the distinction between ethanol and dimethyl ether can be made by writing the formulas as C 2 H 5 OH and CH 3 –O–CH 3 , respectively. Although neither of these formulas specifies the structures precisely, anyone who has studied organic chemistry can work them out, and will immediately recognize the –OH (hydroxyl) group which is the defining characteristic of the large class of organic compounds known as alcohols . The –O– atom linking two carbons is similarly the defining feature of ethers .
Several related terms are used to express the mass of one mole of a substance.
- Molecular weight : This is analogous to atomic weight: it is the relative weight of one formula unit of the compound, based on the carbon-12 scale. The molecular weight is found by adding the atomic weights of all the atoms present in the formula unit. Molecular weights, like atomic weights, are dimensionless; i.e., they have no units.
- Formula weight : The same thing as molecular weight. This term is sometimes used in connection with ionic solids and other substances in which discrete molecules do not exist.
- Molar mass : The mass (in grams, kilograms, or any other unit) of one mole of particles or formula units. When expressed in grams, the molar mass is numerically the same as the molecular weight, but it must be accompanied by the mass unit.
- Calculate the formula weight of copper(II) chloride, \(\ce{CuCl2}\).
- How would you express this same quantity as a molar mass ?
Solution
- The atomic weights of Cu and Cl are, respectively, 63.55 and 35.45; the sum of each atomic weight, multiplied by the number of each kind of atom in the formula unit, yields: \[ 63.55 + 2(35.45) = 134.45.\]
- The masses of one mole of Cu and Cl atoms are, respectively, 63.55 g and 35.45 g; the mass of one mole of CuCl 2 units is: \[(63.55\, g) + 2(35.45\, g) = 134.45\, g.\]
Interpreting formulas in terms of mole ratios and mole fractions
The information contained in formulas can be used to compare the compositions of related compounds as in the following example:
The ratio of hydrogen to carbon is often of interest in comparing different fuels. Calculate these ratios for methanol (CH 3 OH) and ethanol (C 2 H 5 OH).
Solution
The H:C ratios for the two alcohols are 4:1 = 4.0 for methanol and 6:2 = 3.0 for ethanol.
Alternatively, one sometimes uses mole fractions to express the same thing. The mole fraction of an element M in a compound is just the number of atoms of M divided by the total number of atoms in the formula unit.
Calculate the mole fraction and mole-percent of carbon in ethanol (C 2 H 5 OH).
Solution
The formula unit contains nine atoms, two of which are carbon. The mole fraction of carbon in the compound is 2/9 = 0.22. Thus 22 percent of the atoms in ethanol are carbon.
Interpreting formulas in terms of masses of the elements
Since the formula of a compound expresses the ratio of the numbers of its constituent atoms, a formula also conveys information about the relative masses of the elements it contains. But in order to make this connection, we need to know the relative masses of the different elements.
Find the masses of carbon, hydrogen and oxygen in one mole of ethanol (C 2 H 5 OH).
Solution
Using the atomic weights (molar masses) of these three elements, we have
- carbon: (2 mol)(12.0 g mol –1 ) = 24 g of C
- hydrogen: (6 mol)(1.01 g mol –1 ) = 6 g of H
- oxygen: (1 mol)(16.0 g mol –1 ) = 16 g of O
The mass fraction of an element in a compound is just the ratio of the mass of that element to the mass of the entire formula unit. Mass fractions are always between 0 and 1, but are frequently expressed as percent.
Find the mass fraction and mass percentage of oxygen in ethanol (C 2 H 5 OH)
Solution
Using the information developed in the preceding example, the molar mass of ethanol is (24 + 6 + 16)g mol –1 = 46 g mol –1 . Of this, 16 g is due to oxygen, so its mass fraction in the compound is (16 g)/(46 g) = 0.35 which corresponds to 35%.
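The mass-fraction recipe generalizes readily. A minimal sketch using the same rounded atomic weights as the examples above (the table and helper names are illustrative):

```python
# Rounded atomic weights, as used in the worked examples
ATOMIC_WEIGHTS = {"C": 12.0, "H": 1.01, "O": 16.0}

def mass_percent(composition, element):
    """Mass percentage of one element in a compound given as {symbol: count}."""
    total = sum(ATOMIC_WEIGHTS[el] * n for el, n in composition.items())
    return 100 * ATOMIC_WEIGHTS[element] * composition[element] / total

ethanol = {"C": 2, "H": 6, "O": 1}   # C2H5OH
print(round(mass_percent(ethanol, "O")))  # 35
```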
Finding the percentage composition of a compound from its formula is a fundamental calculation that you must master; the technique is exactly as shown above. Finding a mass fraction is often the first step in solving related kinds of problems:
How many tons of potassium are contained in 10 tons of KCl?
Solution
The mass fraction of K in KCl is 39.1/74.6 = 0.524, so 10 tons of KCl contains 0.524 × 10 = 5.24 tons of K. (Atomic weights: K = 39.1, Cl = 35.5.)
Note that there is no need to deal explicitly with moles, which would require converting tons to kg.
How many grams of KCl will contain 10 g of potassium?
Solution
The mass ratio of KCl/K is 74.6 ÷ 39.1; 10 g of potassium will be present in (74.6/39.1) × 10 grams of KCl, or 19 grams .
Mass ratios of two elements in a compound can be found directly from the mole ratios that are expressed in formulas.
Molten magnesium chloride (MgCl 2 ) can be decomposed into its elements by passing an electric current through it. How many kg of chlorine will be released when 2.5 kg of magnesium is formed? (Mg = 24.3, Cl = 35.5)
Solution
The mass ratio of Cl/Mg is (35.5 × 2)/24.3 = 2.92; thus 2.92 kg of chlorine is released for every kg of Mg, or (2.92 × 2.5 kg) = 7.3 kg of chlorine for 2.5 kg of Mg. (Note that it is not necessary to know the formula of elemental chlorine (Cl 2 ) in order to solve this problem.)
Empirical formulas from Experimental data
As was explained above, the empirical formula (or simplest formula) is one in which the relative numbers of the various elements are expressed in the smallest possible whole numbers. Aluminum chloride, for example, exists in the form of structural units having the composition Al 2 Cl 6 ; the empirical formula of this substance is AlCl 3 . Some methods of analysis provide information about the relative numbers of the different kinds of atoms in a compound. The process of finding the formula of a compound from an analysis of its composition depends on your ability to recognize the decimal equivalents of common integer ratios such as 2:3, 3:2, 4:5, etc.
Analysis of an aluminum compound showed that 1.7 mol of Al is combined with 5.1 mol of chlorine. Write the empirical formula of this compound.
Solution
The formula Al 1.7 Cl 5.1 expresses the relative numbers of moles of the two elements in the compound. It can be converted into the empirical formula by dividing both subscripts by the smaller one, yielding AlCl 3 .
More commonly, an arbitrary mass of a compound is found to contain certain masses of its elements. These must be converted to moles in order to find the formula.
In a student lab experiment, it was found that 0.5684 g of magnesium burns in air to form 0.9426 g of magnesium oxide. Find the empirical formula of this compound. Atomic weights: Mg = 24.305, O=16.00.
Solution
The mass of oxygen that combined with the magnesium is 0.9426 g – 0.5684 g = 0.3742 g.

- moles of magnesium: (0.5684 g)/(24.305 g/mol) = 0.02339 mol Mg
- moles of oxygen: (0.3742 g)/(16.00 g/mol) = 0.02339 mol O
- mole ratio of Mg/O = 0.02339/0.02339 = 1;

this corresponds to the empirical formula MgO .
A 4.67-g sample of an aluminum compound was found to contain 0.945 g of Al and 3.72 g of Cl. Find the empirical formula of this compound. Atomic weights: Al = 27.0, Cl=35.45.
Solution

The sample contains (0.945 g)/(27.0 g mol –1 ) = 0.0350 mol of aluminum and (3.72 g)/(35.45 g mol –1 ) = 0.105 mol of chlorine. The formula Al 0.035 Cl 0.105 expresses the relative numbers of moles of the two elements in the compound. It can be converted into the empirical formula by dividing both subscripts by the smaller one, yielding AlCl 3 .
The composition of a binary (two-element) compound is sometimes expressed as a mass ratio. The easiest approach here is to treat the numbers that express the ratio as masses, thus turning the problem into the kind described immediately above.
A compound composed of only carbon and oxygen contains these two elements in a mass ratio C:O of 0.375. Find the empirical formula.
Solution
Express this ratio as 0.375 g of C to 1.00 g of O.
- moles of carbon: (0.375 g)/(12 g/mol) = 0.03125 mol C;
- moles of oxygen: (1.00 g)/(16 g/mol) = 0.0625 mol O
- mole ratio of C/O = 0.03125/0.0625 = 0.5;
this corresponds to the formula C 0.5 O, which we express in integers as CO 2 .
The composition-by-mass of a compound is most commonly expressed as weight percent (grams per 100 grams of compound). The first step is again to convert these to relative numbers of moles of each element in a fixed mass of the compound. Although this fixed mass is completely arbitrary (there is nothing special about 100 grams!), the ratios of the mole amounts of the various elements are not arbitrary: these ratios must be expressible as integers, since they represent ratios of integral numbers of atoms.
Find the empirical formula of a compound having the following mass-percent composition. Atomic weights are given in parentheses: 36.4 % Mn (54.9), 21.2 % S (32.06), 42.4 % O (16.0)
Solution

100 g of this compound contains:
- Mn: (36.4 g) / (54.9 g mol –1 ) = 0.663 mol
- S: (21.2 g) / (32.06 g mol –1 ) = 0.660 mol
- O: (42.4 g) / (16.0 g mol –1 ) = 2.65 mol
The formula Mn 0.663 S 0.660 O 2.65 expresses the relative numbers of moles of the three elements in the compound. It can be converted into the empirical formula by dividing all subscripts by the smallest one, yielding Mn 1.00 S 1.00 O 4.01 , which we write as MnSO 4 .
Note: because experimentally-determined masses are subject to small errors, it is usually necessary to neglect small deviations from integer values.
Find the empirical formula of a compound having the following mass-percent composition. Atomic weights are given in parentheses: 27.6 % Mn (54.9), 24.2 % S (32.06), 48.2 % O (16.0).
Solution
A preliminary formula based on 100 g of this compound can be written as

Mn 0.503 S 0.754 O 3.01
Dividing through by the smallest subscript yields Mn 1 S 1.5 O 6 . Inspection of this formula suggests that multiplying each subscript by 2 yields the all-integer formula Mn 2 S 3 O 12 .
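The two mass-percent examples can be automated with Python's Fraction class, which conveniently absorbs small experimental errors into simple fractions such as 3/2. A sketch under my own conventions (the function name and the max_den cutoff are assumptions; math.lcm needs Python 3.9 or later):

```python
import math
from fractions import Fraction

def formula_from_percent(percents, atomic_weights, max_den=6):
    """Empirical formula from mass-percent composition (assume 100 g).

    Mole ratios are rationalized with Fraction.limit_denominator so
    that e.g. 1.5015 collapses to 3/2; the least common multiple of
    the denominators then clears all fractions to integers.
    """
    moles = {el: p / atomic_weights[el] for el, p in percents.items()}
    smallest = min(moles.values())
    fracs = {el: Fraction(n / smallest).limit_denominator(max_den)
             for el, n in moles.items()}
    mult = math.lcm(*(f.denominator for f in fracs.values()))
    return {el: int(f * mult) for el, f in fracs.items()}

print(formula_from_percent({"Mn": 36.4, "S": 21.2, "O": 42.4},
                           {"Mn": 54.9, "S": 32.06, "O": 16.0}))
# {'Mn': 1, 'S': 1, 'O': 4}   -> MnSO4
print(formula_from_percent({"Mn": 27.6, "S": 24.2, "O": 48.2},
                           {"Mn": 54.9, "S": 32.06, "O": 16.0}))
# {'Mn': 2, 'S': 3, 'O': 12}  -> Mn2S3O12
```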
Notes on experimental methods
One of the most fundamental operations in chemistry consists of breaking down a compound into its elements (a process known as analysis ) and then determining the empirical formula from the relative amounts of each kind of atom present in the compound. In only a very few cases is it practical to carry out such a process directly: thus heating mercury(II) oxide results in its direct decomposition:

\[\ce{2 HgO -> 2 Hg + O2}\]
Similarly, electrolysis of water produces the gases H 2 and O 2 in a 2:1 volume ratio.
Most elemental analyses must be carried out indirectly, however. The most widely used of these methods has traditionally been the combustion analysis of organic compounds. An unknown hydrocarbon C a H b O c can be characterized by heating it in an oxygen stream so that it is completely decomposed into gaseous CO 2 and H 2 O. These gases are passed through tubes containing substances which absorb each gas selectively. By careful weighing of each tube before and after the combustion process, the values of a and b for carbon and hydrogen, respectively, can be calculated. The subscript c for oxygen is found by subtracting the calculated masses of carbon and hydrogen from that of the original sample.
Since the 1970s, it has been possible to carry out combustion analyses with automated equipment; modern instruments of this kind can also determine nitrogen and sulfur.
Measurements of mass or weight have long been the principal tool for understanding chemical change in a quantitative way. Balances and weighing scales have been in use for commercial and pharmaceutical purposes since the beginning of recorded history, but these devices lacked the 0.001-g precision required for quantitative chemistry and elemental analysis carried out on the laboratory scale.
It was not until the mid-18th century that the Scottish chemist Joseph Black invented the equal arm analytical balance . The key feature of this invention was a lightweight, rigid beam supported on a knife-edged fulcrum; additional knife-edges supported the weighing pans. The knife-edges greatly reduced the friction that limited the sensitivity of previous designs; it is no coincidence that accurate measurements of combining weights and atomic weights began at about this time.
Analytical balances are enclosed in a glass case to avoid interference from air currents, and the calibrated weights are handled with forceps to prevent adsorption of moisture or oils from bare fingers.
Anyone who was enrolled in college-level general chemistry up through the 1960's will recall the training (and tedium) associated with these devices. These could read directly to 1 milligram and allow estimates to ±0.1 mg. Later technical refinements added magnetic damping of beam swinging, pan brakes, and built-in weight sets operated by knobs. The very best research-grade balances achieved precisions of 0.001 mg.
Beginning in the 1970's, electronic balances have come into wide use, with single-pan types being especially popular. A single-pan balance eliminates the need for comparing the weight of the sample with that of calibrated weights. Addition of a sample to the pan causes a displacement of a load cell which generates a compensating electromagnetic field of sufficient magnitude to raise the pan to its original position. The current required to accomplish this is sensed and converted into a weight measurement. The best research-grade electronic balances can read to 1 microgram, but 0.1-mg sensitivities are more common for student laboratory use.
4.4: Chemical Equations and Stoichiometry
Make sure you thoroughly understand the following essential ideas
- Given the formulas of reactants and products, write a balanced chemical equation for the reaction.
- Given the relative solubilities, write a balanced net ionic equation for a reaction between aqueous solutions of two ionic compounds.
- Write appropriate chemical conversion factors to calculate the masses of all components of a chemical reaction when the mass of any single component is specified in any system of units, and find the masses of all components present when the reaction is complete.
- Describe the manner in which the concept of limiting reactant relates to combustion and human exercise.
A chemical equation expresses the net change in composition associated with a chemical reaction by showing the number of moles of reactants and products. But because each component has its own molar mass, equations also implicitly define the way in which the masses of products and reactants are related. In this unit we will concentrate on understanding and making use of these mass relations.
In a chemical reaction , one or more reactants are transformed into products :
reactants → products
The purpose of a chemical equation is to express this relation in terms of the formulas of the actual reactants and products that define a particular chemical change. For example, the reaction of mercury with oxygen to produce mercuric oxide would be expressed by the equation
2 Hg + O 2 → 2 HgO
Sometimes, for convenience, it is desirable to indicate the physical state (gas, liquid or solid) of one or more of the species by appropriate abbreviations:
2 Hg(l) + O 2 (g) → 2 HgO(s)
C(graphite) + O 2 (g) → CO 2 (g)
C(diamond) + O 2 (g) → CO 2 (g)
However, this is always optional.
How to read and write chemical equations
A chemical equation is a statement of a fact: it expresses the net change that occurs as the result of a chemical reaction. In doing so, it must be consistent with the law of conservation of mass :
In the context of an ordinary chemical reaction, conservation of mass means that atoms are neither created nor destroyed. This requirement is easily met by making sure that there are equal numbers of all atoms on both sides of the equation.
When we balance an equation, we simply make it consistent with the observed fact that individual atoms are conserved in chemical changes. There is no set “recipe’’ for balancing ordinary chemical equations; it is best to begin by carefully studying selected examples such as those given below.
Write a balanced equation for the combustion of propane C 3 H 8 in oxygen O 2 . The products are carbon dioxide CO 2 and water H 2 O.
Solution
Begin by writing the unbalanced equation
\[C_3H_8 + O_2 → CO_2 + H_2O \nonumber\]
It is usually best to begin by balancing compounds containing the least abundant element, so we first balance the equation for carbon:
C 3 H 8 + O 2 → 3 CO 2 + H 2 O
In balancing the oxygen, we see that there is no way that an even number of O 2 molecules on the left can yield the uneven number of O atoms shown on the right. Don't worry about this now— just use the appropriate fractional coefficient:
C 3 H 8 + 3 ½ O 2 → 3 CO 2 + H 2 O
Finally, we balance the hydrogens by adding more waters on the right:
C 3 H 8 + 7/2 O 2 → 3 CO 2 + 4 H 2 O
Ah, but now the oxygens are off again — fixing this also allows us to get rid of the fraction on the left side:
C 3 H 8 + 5 O 2 → 3 CO 2 + 4 H 2 O
It often happens, however, that we do end up with a fractional coefficient, as in this variant of the above example.
Write a balanced equation for the combustion of ethane C 2 H 6 in oxygen O 2 . The products are carbon dioxide CO 2 and water H 2 O.
Solution
Begin by writing the unbalanced equation
C 2 H 6 + O 2 → CO 2 + H 2 O
...then balance the carbon:
C 2 H 6 + O 2 → 2 CO 2 + H 2 O
Let's balance the hydrogen next:
C 2 H 6 + O 2 → 2 CO 2 + 3 H 2 O
...but now we need a non-integral number of dioxygen molecules on the left:
C 2 H 6 + 7/2 O 2 → 2 CO 2 + 3 H 2 O
My preference is to simply leave it in this form; there is nothing wrong with 7/2 = 3 ½ moles of O 2 , and little to be gained by multiplying every term by two— not unless your teacher is a real stickler for doing it "by the book", in which case you had better write
2 C 2 H 6 + 7 O 2 → 4 CO 2 + 6 H 2 O
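Whatever route you take, a proposed balance can always be verified mechanically by counting atoms of each element on both sides. A minimal sketch (the dict-of-counts representation of a formula is my own convention):

```python
def atom_count(side):
    """Total atoms of each element on one side of an equation.

    A side is a list of (coefficient, formula) pairs, where a
    formula is a dict such as {"C": 3, "H": 8} for C3H8.
    """
    totals = {}
    for coeff, formula in side:
        for el, n in formula.items():
            totals[el] = totals.get(el, 0) + coeff * n
    return totals

def is_balanced(reactants, products):
    # Conservation of mass: identical atom totals on both sides.
    return atom_count(reactants) == atom_count(products)

# C3H8 + 5 O2 -> 3 CO2 + 4 H2O
lhs = [(1, {"C": 3, "H": 8}), (5, {"O": 2})]
rhs = [(3, {"C": 1, "O": 2}), (4, {"H": 2, "O": 1})]
print(is_balanced(lhs, rhs))   # True
```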
Net Ionic Equations
Ionic compounds are usually dissociated in aqueous solution; thus if we combine solutions of silver nitrate AgNO 3 and sodium chloride NaCl we are really combining four different species: the cations (positive ions) Ag + and Na + and the anions (negative ions) NO 3 – and Cl – . It happens that when the ions Ag + and Cl – are brought together, they will combine to form an insoluble precipitate of silver chloride. The net equation for this reaction is
\[Ag^+_{(aq)} + Cl^–_{(aq)} → AgCl_{(s)}\]
Note that
- the ions NO 3 – and Na + are not directly involved in this reaction; the equation expresses only the net change , which is the removal of the silver and chloride ions from the solution to form an insoluble solid.
- the symbol ( aq ) signifies that the ions are in aqueous solution, and thus are hydrated , or attached to water molecules.
- the symbol ( s ) indicates that the substance AgCl exists as a solid. When a solid is formed in a reaction that takes place in solution, it is known as a precipitate . The formation of a precipitate is often indicated by underscoring.
From the above example involving silver chloride, it is clear that a meaningful net ionic equation can be written only if two ions combine to form an insoluble compound. To make this determination, it helps to know the solubility rules — which all students of chemistry were at one time required to memorize, but are nowadays usually obtained from tables such as the one shown below.
| Anion (negative ion) | Cation (positive ion) | Soluble? |
|---|---|---|
| any anion | alkali metal ions (Li + , Na + , K + , etc.) | yes |
| nitrate, NO 3 – | any cation | yes |
| acetate, CH 3 COO – | any cation except Ag + | yes |
| halide ions Cl – , Br – , or I – | Ag + , Pb 2 + , Hg 2 2 + , Cu 2 + | no |
| halide ions Cl – , Br – , or I – | any other cation | yes |
| sulfate, SO 4 2– | Ca 2 + , Sr 2 + , Ba 2 + , Ag + , Pb 2 + | no |
| sulfate, SO 4 2– | any other cation | yes |
| sulfide, S 2– | alkali metal ions or NH 4 + | yes |
| sulfide, S 2– | Be 2 + , Mg 2 + , Ca 2 + , Sr 2 + , Ba 2 + , Ra 2 + | yes |
| sulfide, S 2– | any other cation | no |
| hydroxide, OH – | alkali metal ions or NH 4 + | yes |
| hydroxide, OH – | Sr 2 + , Ba 2 + , Ra 2 + | slightly |
| hydroxide, OH – | any other cation | no |
| phosphate, PO 4 3– , carbonate CO 3 2– | alkali metal ions or NH 4 + | yes |
| phosphate, PO 4 3– , carbonate CO 3 2– | any other cation | no |
Write net ionic equations for what happens when aqueous solutions of the following salts are combined:
- PbCl 2 + K 2 SO 4
- K 2 CO 3 + Sr(NO 3 ) 2
- AlCl 3 + CaSO 4
- Na 3 PO 4 + CaCl 2
Use the solubility rules table (above) to find the insoluble combinations:
- Pb 2 + ( aq ) + SO 4 2– ( aq ) → PbSO 4 ( s )
- Sr 2 + ( aq ) + CO 3 2– ( aq ) → SrCO 3 ( s )
- no net reaction
- 3 Ca 2 + ( aq ) + 2 PO 4 3– ( aq ) → Ca 3 (PO 4 ) 2 ( s )
Note the need to balance the electric charges
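The solubility table lends itself to a lookup function. The sketch below encodes the rules above under my own ion-string conventions; for simplicity, the "slightly soluble" hydroxides of Sr, Ba and Ra are treated as non-precipitating.

```python
def is_insoluble(cation, anion):
    """Decide whether a (cation, anion) pair precipitates, following
    the solubility rules table. Illustrative subset only."""
    alkali = {"Li+", "Na+", "K+", "NH4+"}
    alkaline_earth = {"Be2+", "Mg2+", "Ca2+", "Sr2+", "Ba2+", "Ra2+"}
    if cation in alkali or anion == "NO3-":
        return False                       # always soluble
    if anion in {"Cl-", "Br-", "I-"}:
        return cation in {"Ag+", "Pb2+", "Hg2^2+", "Cu2+"}
    if anion == "SO4^2-":
        return cation in {"Ca2+", "Sr2+", "Ba2+", "Ag+", "Pb2+"}
    if anion == "S^2-":
        return cation not in alkaline_earth
    if anion == "OH-":
        return cation not in {"Sr2+", "Ba2+", "Ra2+"}
    if anion in {"PO4^3-", "CO3^2-"}:
        return True
    return False

def net_ionic(cations, anions):
    """All (cation, anion) pairs that precipitate when solutions mix."""
    return [(c, a) for c in cations for a in anions if is_insoluble(c, a)]

# AgNO3 + NaCl: only AgCl precipitates
print(net_ionic({"Ag+", "Na+"}, {"NO3-", "Cl-"}))     # [('Ag+', 'Cl-')]
# K2CO3 + Sr(NO3)2: SrCO3 precipitates
print(net_ionic({"K+", "Sr2+"}, {"CO3^2-", "NO3-"}))  # [('Sr2+', 'CO3^2-')]
```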
Mass Relations in Chemical Equations
A balanced chemical equation expresses the relative number of moles of each component (product or reactant), but because each formula in the equation implies a definite mass of the substance (its molar mass), the equation also implies that certain weight relations exist between the components. For example, the equation describing the combustion of carbon monoxide to carbon dioxide
\[2 CO + O_2 → 2 CO_2\]
implies the following relations:

2 CO [2 × 28 = 56 mass units] + O 2 [32 mass units] → 2 CO 2 [2 × 44 = 88 mass units]

The relative masses shown above establish the stoichiometry of the reaction, that is, the relations between the masses of the various components. Since these masses vary in direct proportion to one another, we can define what amounts to a conversion factor (sometimes referred to as a chemical factor ) that relates the mass of any one component to that of any other component.
Evaluate the chemical factor and the conversion factor that relates the mass of carbon dioxide to that of the CO consumed in the reaction.
Solution
From the above box, the mass ratio of CO 2 to CO in this reaction is 88/56 = 1.57 ; this is the chemical factor for the conversion of CO into CO 2 . The conversion factor is just 1.57/1 with the mass units explicitly stated:
\[\dfrac{1.57\; g\; CO_2}{ 1\; g\; CO} = 1\]
- How many tons of CO 2 can be obtained from the combustion of 10 tons of CO?
- How many kg of CO must be burnt to produce 20 kg of CO 2 ?
Solutions
- (1.57 T CO 2 / 1 T CO) × (10 T CO) = 15.7 T CO 2
- Notice the answer to this one must refer to carbon monoxide, not CO 2 , so we write the conversion factor in reverse:
(1 kg CO / 1.57 kg CO 2 ) × (20 kg CO 2 ) = (20/1.57) kg CO = 12.7 kg CO .
Is this answer reasonable? Yes, because the mass of CO must always be smaller than that of CO 2 in this reaction.
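The chemical-factor calculations above reduce to one ratio of coefficient-weighted molar masses. A minimal sketch (function name is my own; molar masses rounded as in the text):

```python
def chemical_factor(n_out, mm_out, n_in, mm_in):
    """Mass of product formed per unit mass of reactant consumed,
    from the balanced equation's coefficients (n) and molar masses (mm)."""
    return (n_out * mm_out) / (n_in * mm_in)

# 2 CO + O2 -> 2 CO2: grams of CO2 per gram of CO
f = chemical_factor(2, 44.0, 2, 28.0)
print(round(f, 2))         # 1.57
print(round(f * 10, 1))    # 15.7  (tons of CO2 from 10 tons of CO)
print(round(20 / f, 1))    # 12.7  (kg of CO needed for 20 kg of CO2)
```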
More mass-mass problems
Don't expect to pass Chemistry unless you can handle problems such as the ones below; they come up frequently in all kinds of contexts.
The ore FeS 2 can be converted into the important industrial chemical sulfuric acid H 2 SO 4 by a series of processes. Assuming that the conversion is complete, how many liters of sulfuric acid (density 1.86 kg L –1 ) can be made from 50 kg of ore?
Solution
As with most problems, this breaks down into several simpler ones. We begin by working out the stoichiometry on the assumption that all the sulfur in the ore ends up as H 2 SO 4 , allowing us to write

FeS 2 → 2 H 2 SO 4

This skeleton equation is balanced with respect to the two components of interest, and this is all we need here. The molar masses of the two components are 120.0 and 98 g mol –1 , respectively, so the equation can be interpreted in terms of masses as

[120 mass units] FeS 2 → [2 × 98 mass units] H 2 SO 4
Thus 50 kg of ore will yield (50 kg) × (196/120) = 81.7 kg of product.
[ Check : is this answer reasonable? Yes, because the factor (196/120) is a bit less than (200/120) = 5/3, so the mass of product should be somewhat less than 5/3 times the mass of ore consumed.]
From the density information we find that the volume of liquid H 2 SO 4 is
(81.7 kg) ÷ (1.86 kg L –1 ) = 43.9 L
[ Check : is this answer reasonable? Yes, because density tells us that the number of liters of acid will be slightly greater than half of its weight.]
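The whole FeS 2 → H 2 SO 4 problem is a chain of two conversions, which can be written out directly (values as given in the problem):

```python
# Ore-to-acid problem: FeS2 -> 2 H2SO4, then mass -> volume.
ore_kg = 50.0
mm_FeS2, mm_H2SO4 = 120.0, 98.0
density = 1.86                               # kg/L, given for the acid

acid_kg = ore_kg * (2 * mm_H2SO4) / mm_FeS2  # 2 mol acid per mol ore
acid_L = acid_kg / density
print(round(acid_kg, 1), round(acid_L, 1))   # 81.7 43.9
```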
Barium chloride forms a crystalline hydrate, BaCl 2 ·xH 2 O, in which x molecules of water are incorporated into the crystalline solid for every unit of BaCl 2 . This water can be driven off by heat; if 1.10 g of the hydrated salt is heated and reweighed several times until no further loss of weight (i.e., loss of water) occurs, the final weight of the sample is 0.937 g. What is the value of x in the formula of the hydrate?
Solution
The first step is to find the number of moles of BaCl 2 (molecular weight 208.2) from the mass of the dehydrated sample.
(0.937 g) / (208.2 g mol –1 ) = 0.00450 mol
Now find the moles of H 2 O (molecular weight 18) lost when the sample was dried:
(1.10 – 0.937) g / (18 g mol –1 ) = 0.00906 mol

Allowing for a reasonable amount of measurement error, it is apparent that the mole ratio of BaCl 2 :H 2 O = 1:2. The formula of the hydrate is BaCl 2 ·2H 2 O .
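The hydrate determination is just a comparison of two mole counts; a sketch using the numbers from the problem:

```python
# Determining x in BaCl2·xH2O from a drying experiment.
m_hydrate, m_dry = 1.10, 0.937       # grams, before and after heating
mm_BaCl2, mm_H2O = 208.2, 18.0

n_salt = m_dry / mm_BaCl2
n_water = (m_hydrate - m_dry) / mm_H2O
x = round(n_water / n_salt)          # nearest integer absorbs small error
print(x)                             # 2
```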
Limiting Reactants
Most chemical reactions that take place in the real world begin with more or less arbitrary amounts of the various reactants; we usually have to make a special effort if we want to ensure that stoichiometric amounts of the reactants are combined. This means that one or more reactants will usually be present in excess; there will be more present than can react, and some will remain after the reaction is over. At the same time, one reactant will be completely used up; we call this the limiting reactant because the amount of this substance present will control, or limit, the quantities of the other reactants that are consumed as well as the amounts of products produced.
Limiting reactant problems are handled in the same way as ordinary stoichiometry problems with one additional preliminary step: you must first determine which of the reactants is limiting— that is, which one will be completely used up. To start you off, consider the following very simple example
For the hypothetical reaction 3 A + 4 B → [products] , determine which reactant will be completely consumed when we combine
- equimolar quantities of A and B;
- 0.57 mol of A and 0.68 mol of B.
Solution
a) Simple inspection of the equation shows clearly that more moles of B are required, so this component will be completely consumed (and is thus the limiting reactant), leaving behind ¼ of the original amount of A.
b) How many moles of B would be needed to react with 0.57 mol of A? The answer is (4/3 × 0.57 mol). If this comes to more than 0.68 mol, then B will be the limiting reactant, and you must continue the problem on the basis of the amount of B present. If the limiting reactant is A, then all 0.57 mol of A will react, leaving some of the B in excess. Work it out!
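The comparison in part (b) generalizes: the limiting reactant is always the one with the smallest moles-to-coefficient ratio. A minimal sketch (function name is my own):

```python
def limiting_reactant(moles, coeffs):
    """Reactant with the smallest moles/coefficient ratio is limiting."""
    return min(moles, key=lambda r: moles[r] / coeffs[r])

# 3 A + 4 B -> products, with 0.57 mol A and 0.68 mol B
print(limiting_reactant({"A": 0.57, "B": 0.68}, {"A": 3, "B": 4}))  # B
```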
Sulfur and copper, when heated together, react to form copper(I) sulfide, Cu 2 S. How many grams of Cu 2 S can be made from 10 g of sulfur and 15 g of copper?
Solution
From the atomic weights of Cu (63.55) and S (32.06) we can interpret the reaction 2 Cu + S → Cu 2 S as

[2 × 63.55 = 127.1 mass units] Cu + [32.06 mass units] S → [159.2 mass units] Cu 2 S
Thus 10 g of S will require
(10 g S) × (127.1 g Cu)/(32.06 g S) = 39.6 g Cu
...which is a lot more than what is available, so copper is the limiting reactant here.
[Check: is this answer reasonable? Yes, because the chemical factor (127/32) works out to about 4, indicating that sulfur reacts with about four times its weight of copper.]
The mass of copper sulfide formed will be determined by the mass of copper available:
(15 g Cu) × (159.2 g Cu 2 S) / (127.1 g Cu) = 18.8 g Cu 2 S
[Check: is this answer reasonable? Yes, because the chemical factor (159.2/127.1) is just a bit greater than unity, indicating that the mass of the product will slightly exceed that of the copper consumed.]
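The same moles-per-coefficient test, applied to the copper sulfide example with masses instead of moles:

```python
# Mass-based limiting-reactant calculation for 2 Cu + S -> Cu2S.
mm = {"Cu": 63.55, "S": 32.06}
grams = {"Cu": 15.0, "S": 10.0}
coeff = {"Cu": 2, "S": 1}

moles = {el: grams[el] / mm[el] for el in grams}
limiter = min(moles, key=lambda el: moles[el] / coeff[el])
n_Cu2S = moles[limiter] / coeff[limiter]      # mol of Cu2S formed
mass_Cu2S = n_Cu2S * (2 * mm["Cu"] + mm["S"])
print(limiter, round(mass_Cu2S, 1))           # Cu 18.8
```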
The concept of limiting reactants touches us all in our everyday lives — and as we will show in the second example below, even in the maintenance of life itself!
Air-to-fuel ratios in combustion
Combustion is an exothermic process in which a fuel is combined with oxygen; complete combustion of a hydrocarbon fuel such as methane or gasoline yields carbon dioxide and water:
CH 4 + 2 O 2 → CO 2 + 2 H 2 O (g)
Calculate the mass ratio of O 2 to CH 4 required for complete combustion.

Solution

This is just the ratio of the mass of two moles of dioxygen (2 × 32 g = 64 g) to the molar mass of CH 4 (16 g):

(64 g) / (16 g) = 4.0 , i.e., 4 g of O 2 are required for every gram of CH 4 .
Incomplete combustion is generally undesirable because it wastes fuel, produces less heat, and releases pollutants such as carbon soot. Energy-producing combustion processes should always operate in fuel-limited mode.
In ordinary combustion processes, the source of oxygen is air. Because only about 20 percent of the molecules in dry air consist of O 2 , the volume of air that must be supplied is five times greater than what would be required for pure O 2 . Calculation of the air-to-fuel mass ratio ("A/F ratio") employed by combustion engineers is complicated by the differing molar masses of dioxygen and air. For methane combustion, the A/F ratio works out to about 17.2
A/F ratios which exceed the stoichiometric values are said to be lean , while those in which air becomes the limiting component are characterized as rich . In order to ensure complete combustion, it is common practice to maintain a slightly lean mixture. The quantities of so-called excess air commonly admitted to burners vary from 5-10% for natural gas to up to 100% for certain grades of coal.For internal combustion engines fueled by gasoline (roughly equivalent to C 7 H 14 ), the stoichiometric A/F ratio is 15:1. However, practical considerations necessitate differing ratios at various stages of operation. Typical values vary from a rich ratio for starting or acceleration to slightly lean ratios for ordinary driving. These ratios are set by the carburetor, with additional control by the engine computer and exhaust-line oxygen sensor in modern vehicles, or by a manual choke in earlier ones.
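The quoted methane A/F ratio of about 17.2 can be reproduced from the composition of dry air; the 20.95 mol-% O 2 content and mean molar mass of about 28.96 g/mol used below are standard values, not taken from this text.

```python
# Stoichiometric air-to-fuel mass ratio for CH4 + 2 O2 -> CO2 + 2 H2O,
# assuming dry air is 20.95 mol-% O2 with a mean molar mass of 28.96 g/mol.
mm_CH4, mm_air = 16.04, 28.96
o2_fraction = 0.2095

mol_air_per_mol_fuel = 2 / o2_fraction    # 2 mol O2 needed per mol CH4
af_ratio = mol_air_per_mol_fuel * mm_air / mm_CH4
print(round(af_ratio, 1))                 # 17.2
```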
Aerobic and anaerobic respiration
Our bodies require a continual supply of energy in order to maintain neural activity, synthesize proteins and other essential biochemical components, replace cells, and to power muscular action. The "fuel" (the carrier of chemical energy) is glucose, a simple sugar which is released as needed from the starch-like polymer glycogen, the form in which the energy we derive from food is stored. Arterial blood carries dissolved glucose along with hemoglobin-bound dioxygen to individual cells which are the sites of glucose "combustion":
\[C_6H_{12}O_6 + 6 O_2 → 6 CO_2 + 6 H_2O\]
The net reaction and the quantity of energy released are the same as if the glucose were burned in the open air, but within the cells the reaction proceeds in a series of tiny steps which capture most of this energy for the body's use, liberating only a small fraction of it as thermal energy (heat). Because this process utilizes oxygen from the air we breathe, it is known as aerobic respiration . And as with any efficient combustion process, glucose is the limiting reactant here.
However, there are times when vigorous physical activity causes muscles to consume glucose at a rate that exceeds the capacity of the blood to deliver the required quantity of oxygen. Under these conditions, cellular respiration shifts to an alternative anaerobic mode:
\[C_6H_{12}O_6 → 2 CH_3CH(OH)COOH\]
As you can see from this equation, glucose is only partially broken down (into lactic acid ), and thus only part of its chemical energy is captured by the body. There are numerous health benefits to aerobic exercise including increased ability of the body to maintain an aerobic condition. But if you are into short-distance running (sprinting) or being pursued by a tiger, the reduced efficiency of anaerobic exercise may be a small price to pay.
4.5: Introduction to Chemical Nomenclature
Different instructors set out widely varying requirements for chemical nomenclature. The following are probably the most commonly expected:
- You should know the name and symbols of at least the first twenty elements, as well as all of the halogen and noble gas groups (groups 17-18).
- Name any binary molecule, using the standard prefixes for 1-10.
- All of the commonly-encountered ions.
- Salts and other ion-derived compounds, including the acids listed here. In some courses you will not need to know the -ous/-ic names for salts of copper, iron, etc., but in others you will.
- Find out from your instructor which organic compounds you must be able to name.
Chemical nomenclature is far too big a topic to treat comprehensively, and it would be a useless diversion to attempt to do so in a beginning course; most chemistry students pick up chemical names and the rules governing them as they go along. But we can hardly talk about chemistry without mentioning some chemical substances, all of which do have names— and often, more than one! All we will try to do here is cover what you need to know to make sense of first-year chemistry. For those of you who plan to go on in chemistry, the really fun stuff comes later!
There are more than 100 million named chemical substances. Who thinks up the names for all these chemicals? Are we in danger of running out of new names? The answer to the last question is "no", for the simple reason that the vast majority of the names are not "thought up"; there are elaborate rules for assigning names to chemical substances on the basis of their structures. These are called systematic names ; they may be a bit ponderous, but they uniquely identify a given substance. The rules for these names are defined by an international body, the IUPAC. But in order to make indexing and identification easier, every known chemical substance has its own numeric "personal ID", known as a CAS registry number. For example, caffeine is uniquely identified by the registry number 58-08-2. About 15,000 new numbers are issued every day.
Common Names vs. Systematic Names
Many chemicals are so much a part of our life that we know them by their familiar names, just like our other friends. A given substance may have several common or trivial names ; ordinary cane sugar, for example, is more formally known as "sucrose", but asking for it at the dinner table by that name will likely be a conversation-stopper, and I won't even venture to predict the outcome if you try using its systematic name in the same context:
"please pass the α-D-glucopyranosyl-(1,2)- β-D-fructofuranoside!"
But "sucrose" would be quite appropriate if you need to distinguish this particular sugar from the hundreds of other named sugars. The only place you would come across a systematic name like the rather unwieldy one mentioned here is when referring (in print or in a computer data base) to a sugar that has no common name.
Chemical substances have been a part the fabric of civilization and culture for thousands of years, and present-day chemistry retains a lot of this ancient baggage in the form of terms whose hidden cultural and historic connections add color and interest to the subject. Many common chemical names have reached us only after remarkably long journeys through time and place, as the following two examples illustrate.
Ammonia
Most people can associate the name ammonia (\(NH_3\)) with a gas having a pungent odor; the systematic name "nitrogen trihydride" (which is rarely used) will tell you its formula. What it will not tell you is that smoke from burning camel dung (the staple fuel of North Africa) condenses on cool surfaces to form a crystalline deposit. The ancient Romans first noticed this on the walls and ceiling of the temple that the Egyptians had built to the Sun-god Amun in Thebes, and they named the material sal ammoniac, meaning "salt of Amun". In 1774, Joseph Priestley (the discoverer of oxygen) found that heating sal ammoniac produced a gas with a pungent odor, which Torbern Bergman named "ammonia" eight years later.
Alcohol
Alcohol entered the English language in the 17th Century with the meaning of a "sublimated" substance, then became the "pure spirit" of anything, and only became associated with "spirit of wine" in 1753. Finally, in 1852, it become a part of chemical nomenclature that denoted a common class of organic compound. But it's still common practice to refer to the specific substance CH 3 CH 2 OH as "alcohol" rather then its systematic name ethanol .
Arabic alchemy has given us a number of chemical terms; for example, alcohol is believed to derive from the Arabic al-kohl or al-ghawl , whose original meaning was a metallic powder used to darken women's eyelids (kohl).
The general practice among chemists is to use the more common chemical names whenever it is practical to do so, especially in spoken or informal written communication. For many of the very simplest compounds (including most of those you will encounter in a first-year course), the systematic and common names are the same, but where there is a difference and if the context permits it, the common name is usually preferred.
Many of the "common" names we refer to in this lesson are known and used mainly by the scientific community. Chemical substances that are employed in the home, the arts, or in industry have acquired traditional or "popular" names that are still in wide use. Many, like sal ammoniac mentioned above, have fascinating stories to tell.
| popular name | chemical name | formula |
|---|---|---|
| borax | sodium tetraborate decahydrate | Na 2 B 4 O 7 ·10H 2 O |
| calomel | mercury(I) chloride | Hg 2 Cl 2 |
| milk of magnesia | magnesium hydroxide | Mg(OH) 2 |
| muriatic acid | hydrochloric acid | HCl (aq) |
| oil of vitriol | sulfuric acid | H 2 SO 4 |
| saltpeter | sodium nitrate | NaNO 3 |
| slaked lime | calcium hydroxide | Ca(OH) 2 |
Minerals : Minerals are solid materials that occur in the earth which are classified and named according to their compositions (which often vary over a continuous range) and the arrangement of the atoms in their crystal lattices. There are about 4000 named minerals. Many are named after places, people, or properties, and most frequently end with -ite.
Proprietary names: Chemistry is a major industry, so it is not surprising that many substances are sold under trademarked names. This is especially common in the pharmaceutical industry, which uses computers to churn out names that they hope will distinguish a new product from those of its competitors. Perhaps the most famous of these is Aspirin, whose name was coined by the German company Bayer in 1899. This trade name was seized by the U.S. government following World War I, and is no longer a protected trade mark in that country.
Names and symbols of the Elements
Naming of chemical substances begins with the names of the elements . The discoverer of an element has traditionally had the right to name it, and one can find some interesting human and cultural history in these names, many of which refer to the element's properties or to geographic locations. Only some of the more recently-discovered (and artificially produced) elements are named after people. Some elements were not really "discovered", but have been known since ancient times; many of these have symbols that are derived from the Latin names of the elements. There are nine elements whose Latin-derived symbols you are expected to know (Table \(\PageIndex{2}\) ).
| element name | symbol | Latin name |
|---|---|---|
| antimony | Sb | stibium |
| copper | Cu | cuprum |
| gold | Au | aurum |
| iron | Fe | ferrum |
| lead | Pb | plumbum |
| mercury | Hg | hydrargyrum |
| potassium | K | kalium |
| sodium | Na | natrium |
| tin | Sn | stannum |
There is a lot of history and tradition in many of these names. For example, the Latin name for mercury, hydrargyrum , means "water silver", or quicksilver. The appellation "quack", as applied to an incompetent physician, is a corruption of the Flemish word for quicksilver, and derives from the use of mercury compounds in 17th century medicine. The name "mercury" is of alchemical origin and is of course derived from the name of the Greek god after whom the planet is named; the enigmatic properties of the element, at the same time metallic, fluid, and vaporizable, suggest the same messenger with the winged feet who circles through the heavens close to the sun.
Naming the binary molecules
The system used for naming chemical substances depends on the nature of the molecular units making up the compound. These are usually either ions or molecules; different rules apply to each. In this section, we discuss the simplest binary (two-atom) molecules.
It is often necessary to distinguish between compounds in which the same elements are present in different proportions; carbon monoxide CO and carbon dioxide CO 2 are familiar to everyone. Chemists, perhaps hoping it will legitimize them as scholars, employ Greek (or sometimes Latin) prefixes to designate numbers within names; you will encounter these frequently, and you should know them:
| 1/2 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| hemi | mono | di | tri | tetra | penta | hexa | hepta | octa | nona | deca |
You will occasionally see names such as di hydrogen and di chlorine used, when clarity requires it, to distinguish the common molecular forms of these elements (H 2 , Cl 2 ) from the single atoms that bear the same names.
Examples:
- N 2 O 4 - dinitrogen tetroxide [note the missing a preceding the vowel]
- N 2 O - dinitrogen oxide [more commonly, nitrous oxide ]
- SF 6 - sulfur hexafluoride
- P 4 S 3 - tetraphosphorus trisulfide [more commonly, phosphorus sesquisulfide ]
- Na 2 HPO 4 - disodium hydrogen phosphate
- H 2 S - hydrogen sulfide [we skip both the di and mono ]
- CO - carbon monoxide [ mono - to distinguish it from the dioxide]
- CaSO 4 ·½H 2 O - calcium sulfate hemihydrate [In this solid, two CaSO 4 units share one water of hydration between them; more commonly called Plaster of Paris]
It will be apparent from these examples that chemists are in the habit of taking a few liberties in applying the strict numeric prefixes to the more commonly known substances.
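The prefix-and-elision pattern in these examples can be captured in a short sketch. This is illustrative only — the function and the convention of passing the second element's name root (e.g. "ox", "fluor") are our own, and real nomenclature has more special cases than the single vowel-elision rule shown here:

```python
PREFIXES = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta",
            6: "hexa", 7: "hepta", 8: "octa", 9: "nona", 10: "deca"}

def binary_name(elem1, n1, elem2_root, n2):
    """Systematic name of a binary molecule such as N2O4.

    elem2_root is the root of the second element's name ("ox" for
    oxygen, "fluor" for fluorine); "-ide" is appended here.  The
    "mono-" prefix is dropped on the first element, and a final a/o
    of a prefix is elided before a vowel (tetra + oxide -> tetroxide).
    """
    first = elem1 if n1 == 1 else PREFIXES[n1] + elem1
    prefix2 = PREFIXES[n2]
    anion = elem2_root + "ide"
    if prefix2[-1] in "ao" and anion[0] in "aeiou":
        prefix2 = prefix2[:-1]        # drop the clashing vowel
    return f"{first} {prefix2}{anion}"

print(binary_name("nitrogen", 2, "ox", 4))   # dinitrogen tetroxide
print(binary_name("sulfur", 1, "fluor", 6))  # sulfur hexafluoride
print(binary_name("carbon", 1, "ox", 1))     # carbon monoxide
```

Note how the same elision rule produces both "tetroxide" and "monoxide" without special-casing either.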
These two-element compounds are usually quite easy to name because most of them follow the systematic rule of adding the suffix -ide to the root name of the second element, which is normally the more "negative" one. Several such examples are shown above. But as noted above, there are some important exceptions in which common or trivial names take precedence over systematic names:
- H 2 O ( water , not dihydrogen oxide)
- H 2 O 2 ( hydrogen peroxide , not dihydrogen dioxide)
- H 2 S ( hydrogen sulfide , not dihydrogen sulfide)
- NH 3 ( ammonia , not nitrogen trihydride)
- NO ( nitric oxide , not nitrogen monoxide)
- N 2 O ( nitrous oxide , not dinitrogen oxide)
- CH 4 ( methane , not carbon tetrahydride)
Naming the ions
An ion is an electrically charged atom or molecule— that is, one in which the number of electrons differs from the number of nuclear protons. Many simple compounds can be regarded, at least in a formal way, as being made up of a pair of ions having opposite charge signs. The positive ions, also known as cations , are mostly those of metallic elements which simply take the name of the element itself.
| calcium | sodium | magnesium | cadmium | potassium |
|---|---|---|---|---|
| Ca 2+ | Na + | Mg 2+ | Cd 2+ | K + |
The only important non-metallic cations you need to know about are
| hydrogen | hydronium | ammonium |
|---|---|---|
| H + | H 3 O + | NH 4 + |
(Later on, when you study acids and bases, you will learn that the first two represent the same chemical species.)
Some of the metallic ions are multivalent , meaning that they can exhibit more than one electric charge. For these there are systematic names that use Roman numerals, and the much older and less cumbersome common names that mostly employ the Latin names of the elements, using the endings - ous and - ic to denote the lower and higher charges, respectively (Table \(\PageIndex{4}\)). (In cases where more than two charge values are possible, the systematic names are used.)
| \(Cu^+\) | \(Cu^{2+}\) | \(Fe^{2+}\) | \(Fe^{3+}\) | \(^*Hg_2^{2+}\) | \(Hg^{2+}\) | \(Sn^{2+}\) | \(Sn^{4+}\) |
|---|---|---|---|---|---|---|---|
| copper(I) | copper(II) | iron(II) | iron(III) | mercury(I) | mercury(II) | tin(II) | tin(IV) |
| cuprous | cupric | ferrous | ferric | mercurous | mercuric | stannous | stannic |
| * The mercurous ion is a unique double cation that is sometimes incorrectly represented as Hg + . |
The non-metallic elements generally form negative ions ( anions ). The names of the monatomic anions all end with the -ide suffix:
| Cl – | S 2– | O 2– | C 4– | I – | H – |
|---|---|---|---|---|---|
| chloride | sulfide | oxide | carbide | iodide | hydride |
There are a number of important polyatomic anions which, for naming purposes, can be divided into several categories. A few follow the pattern for the monatomic anions:
| OH – | CN – | O 2 2– |
|---|---|---|
| hydroxide | cyanide | peroxide |
Oxyanions
The most common oxygen-containing anions ( oxyanions ) have names ending in -ate , but if a variant containing a small number of oxygen atoms exists, it takes the suffix -ite .
| CO 3 2– | NO 3 – | NO 2 – | SO 4 2– | SO 3 2– | PO 4 3– |
|---|---|---|---|---|---|
| carbonate | nitrate | nitrite | sulfate | sulfite | phosphate |
The above ions (with the exception of nitrate) can also combine with H + to produce "acid" forms having smaller negative charges. For rather obscure historic reasons, some of them have common names that begin with bi- which, although officially discouraged, are still in wide use:
| ion | systematic name | common name |
|---|---|---|
| HCO 3 – | hydrogen carbonate | bicarbonate |
| HSO 4 – | hydrogen sulfate | bisulfate |
| HSO 3 – | hydrogen sulfite | bisulfite |
Chlorine, and to a smaller extent bromine and iodine, form a more extensive series of oxyanions that requires a somewhat more intricate naming convention:
| ClO – | ClO 2 – | ClO 3 – | ClO 4 – |
|---|---|---|---|
| hypochlorite | chlorite | chlorate | perchlorate |
Ion-derived compounds
These compounds are formally derived from positive ions ( cations ) and negative ions ( anions ) in a ratio that gives an electrically neutral unit. Salts, of which ordinary "salt" (sodium chloride) is the most common example, are all solids under ordinary conditions. A small number of these (such as NaCl) do retain their component ions and are properly called "ionic solids". In many cases, however, the ions lose their electrically charged character and form largely-non-ionic solids such as CuCl 2 . The term "ion-derived solids" encompasses both of these classes of compounds.
Most of the cations and anions described above can combine to form solid compounds that are usually known as salts . The one overriding requirement is that the resulting compound must be electrically neutral: thus the ions Ca 2+ and Br – combine only in a 1:2 ratio to form calcium bromide, CaBr 2 . Because no other simplest formula is possible, there is no need to name it "calcium dibromide".
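The neutrality requirement amounts to finding the smallest whole-number ratio of the two charges. A few lines of Python (an illustrative helper of our own, not any standard function) make this explicit:

```python
from math import gcd

def neutral_ratio(cation_charge, anion_charge):
    """Smallest cation:anion ratio giving a neutral formula unit.
    Both charges are passed as positive integers."""
    g = gcd(cation_charge, anion_charge)
    return anion_charge // g, cation_charge // g

print(neutral_ratio(2, 1))  # Ca2+ with Br-  -> (1, 2), i.e. CaBr2
print(neutral_ratio(3, 2))  # Fe3+ with S2-  -> (2, 3), i.e. Fe2S3
```

Dividing by the greatest common divisor is what reduces, say, a 2:2 pairing (Ca2+ with S2–) to the simplest 1:1 formula CaS.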
Since some metallic elements form cations having different positive charges, the names of ionic compounds derived from these elements must contain some indication of the cation charge. The older method uses the suffixes -ous and -ic to denote the lower and higher charges, respectively. In the cases of iron and copper, the Latin names of the elements are used: ferrous , cupric .
This system is still widely used, although it has been officially supplanted by the more precise, if slightly cumbersome Stock system in which one indicates the cationic charge (actually, the oxidation number) by means of Roman numerals following the symbol for the cation. In both systems, the name of the anion ends in - ide .
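The Stock convention itself is mechanical enough to sketch in code. The helper below is hypothetical and covers only charges up to IV:

```python
ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV"}

def stock_name(metal, charge, anion_name):
    """Stock-system name for an ionic compound: the cation charge
    (oxidation number) appears as a Roman numeral after the metal."""
    return f"{metal}({ROMAN[charge]}) {anion_name}"

print(stock_name("iron", 3, "sulfide"))     # iron(III) sulfide
print(stock_name("copper", 1, "chloride"))  # copper(I) chloride
```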
| formula | systematic name | common name |
|---|---|---|
| CuCl | copper(I) chloride | cuprous chloride |
| CuCl 2 | copper(II) chloride | cupric chloride |
| Hg 2 Cl 2 | mercury(I) chloride | mercurous chloride |
| HgO | mercury(II) oxide | mercuric oxide |
| FeS | iron(II) sulfide | ferrous sulfide |
| Fe 2 S 3 | iron(III) sulfide | ferric sulfide |
Acids
Most acids can be regarded as a combination of a hydrogen ion H + with an anion; the name of the anion is reflected in the name of the acid. Notice, in the case of the oxyacids, how the anion suffixes -ate and -ite become -ic and -ous , respectively, in the acid name.
| anion | anion name | acid | acid name |
|---|---|---|---|
| Cl – | chloride ion | HCl | hydrochloric acid |
| CO 3 2– | carbonate ion | H 2 CO 3 | carbonic acid |
| NO 2 – | nitrite ion | HNO 2 | nitrous acid |
| NO 3 – | nitrate ion | HNO 3 | nitric acid |
| SO 3 2– | sulfite ion | H 2 SO 3 | sulfurous acid |
| SO 4 2– | sulfate ion | H 2 SO 4 | sulfuric acid |
| CH 3 COO – | acetate ion | CH 3 COOH | acetic acid |
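The -ate → -ic and -ite → -ous transformation is regular enough to automate, although a few stems (sulfur, phosphorus) are irregular; the sketch below is our own illustration and patches those cases by hand:

```python
def acid_name(anion_name):
    """Oxyacid name from its oxyanion name: -ate -> -ic acid,
    -ite -> -ous acid.  Irregular stems are special-cased."""
    special = {"sulfate": "sulfuric", "sulfite": "sulfurous",
               "phosphate": "phosphoric"}
    if anion_name in special:
        return special[anion_name] + " acid"
    if anion_name.endswith("ate"):
        return anion_name[:-3] + "ic acid"
    if anion_name.endswith("ite"):
        return anion_name[:-3] + "ous acid"
    raise ValueError(f"{anion_name!r} is not an oxyanion name")

print(acid_name("nitrate"))    # nitric acid
print(acid_name("carbonate"))  # carbonic acid
print(acid_name("sulfite"))    # sulfurous acid
```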
Organic compounds
Since organic (carbon) compounds constitute the vast majority of all known chemical substances, organic nomenclature is a huge subject in itself. We present here only the very basic part of it that you need to know in first-year chemistry— much more awaits those of you who are to experience the pleasures of an organic chemistry course later on. The simplest organic compounds are built of straight chains of carbon atoms which are named by means of prefixes that denote the number of carbons in the chain. Using the convention C n to denote a straight chain of n atoms (don't even ask about branched chains!), the prefixes for chain lengths from 1 through 10 are given here:
| C 1 | C 2 | C 3 | C 4 | C 5 | C 6 | C 7 | C 8 | C 9 | C 10 |
|---|---|---|---|---|---|---|---|---|---|
| meth- | eth- | prop- | but- | pent- | hex- | hept- | oct- | non- | dec- |
As you can see, chains from C 5 onward use Greek number prefixes, so you don't have a lot new to learn here. The simplest of these compounds are hydrocarbons having the general formula C n H 2 n +2 . They are known generically as alkanes , and their names all combine the appropriate numerical prefix with the ending -ane :
| CH 4 | C 2 H 6 | C 3 H 8 | C 8 H 18 |
|---|---|---|---|
| C | C–C | C–C–C | C–C–C–C–C–C–C–C |
| methane | ethane | propane | octane |
All carbon atoms must have four bonds attached to them; notice the common convention of not showing hydrogen atoms explicitly.
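The C n H 2 n +2 rule and the prefix table combine directly; here is a minimal sketch (the list and function are our own names, not standard chemistry software):

```python
PREFIX = ["meth", "eth", "prop", "but", "pent",
          "hex", "hept", "oct", "non", "dec"]

def alkane(n):
    """Formula and name of the straight-chain alkane with n carbons,
    using the general formula CnH(2n+2)."""
    carbons = "C" if n == 1 else f"C{n}"
    return f"{carbons}H{2 * n + 2}", PREFIX[n - 1] + "ane"

print(alkane(1))  # ('CH4', 'methane')
print(alkane(8))  # ('C8H18', 'octane')
```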
Functional groups
In C 3 and higher chains, a substituent can be in more than one location, thus giving rise to numerous isomers .

| formula | common name | systematic name |
|---|---|---|
| CH 3 OH | methyl alcohol | methanol |
| CH 3 CH 2 OH | ethyl alcohol | ethanol |
| C 8 H 17 OH | octyl alcohol | octanol |
| formula | common name | systematic name |
|---|---|---|
| HCOOH | formic acid | methanoic acid |
| CH 3 COOH | acetic acid | ethanoic acid |
| C 3 H 7 COOH | butyric acid | butanoic acid |
| class | name | formula |
|---|---|---|
| amine | methylamine | CH 3 NH 2 |
| ketone | acetone (dimethylketone) | CH 3 -CO-CH 3 |
| ether | diethyl ether | C 2 H 5 -O-C 2 H 5 |
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/04%3A_The_Basics_of_Chemistry/4.05%3A_Introduction_to_Chemical_Nomenclature",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "4.5: Introduction to Chemical Nomenclature",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/04%3A_The_Basics_of_Chemistry/4.06%3A_Significant_Figures_and_Rounding | 4.6: Significant Figures and Rounding
- Give an example of a measurement whose number of significant digits is clearly too great, and explain why.
- State the purpose of rounding off, and describe the information that must be known to do it properly.
- Round off a number to a specified number of significant digits.
- Explain how to round off a number whose second-most-significant digit is 9.
- Carry out a simple calculation that involves two or more observed quantities, and express the result in the appropriate number of significant figures.
The numerical values we deal with in science (and in many other aspects of life) represent measurements whose values are never known exactly. Our pocket-calculators or computers don't know this; they treat the numbers we punch into them as "pure" mathematical entities, with the result that the operations of arithmetic frequently yield answers that are physically ridiculous even though mathematically correct. The purpose of this unit is to help you understand why this happens, and to show you what to do about it.
Digits: Significant and otherwise
Consider the two statements shown below:
- "The population of our city is 157,872."
- "The number of registered voters as of Jan 1 was 27,833.
Which of these would you be justified in dismissing immediately? Certainly not the second one, because it probably comes from a database which contains one record for each voter, so the number is found simply by counting the number of records. The first statement cannot possibly be correct. Even if a city’s population could be defined in a precise way (Permanent residents? Warm bodies?), how can we account for the minute-by minute changes that occur as people are born and die, or move in and move away?
What is the difference between the two population numbers stated above? The first one expresses a quantity that cannot be known exactly — that is, it carries with it a degree of uncertainty. It is quite possible that the last census yielded precisely 157,872 records, and that this might be the “population of the city” for legal purposes, but it is surely not the “true” population. To better reflect this fact, one might list the population (in an atlas, for example) as 157,900 or even 158,000 . These two quantities have been rounded off to four and three significant figures, respectively, and they have the following meanings:
- 157900 (the first four digits being significant) implies that the population is believed to be within the range of about 157850 to about 157950. In other words, the population is 157900±50. The “plus-or-minus 50” appended to this number means that we consider the absolute uncertainty of the population measurement to be 50 – (–50) = 100. We can also say that the relative uncertainty is 100/157900, which we can also express as 1 part in 1579, or 1/1579 = 0.000633, or about 0.06 percent.
- The value 158000 implies that the population is likely between about 157500 and 158500, or 158000±500. The absolute uncertainty of 1000 translates into a relative uncertainty of 1000/158000 or 1 part in 158, or about 0.6 percent.
Which of these two values we would report as “the population” will depend on the degree of confidence we have in the original census figure; if the census was completed last week, we might round to four significant digits, but if it was a year or so ago, rounding to three places might be a more prudent choice. In a case such as this, there is no really objective way of choosing between the two alternatives.
This illustrates an important point: the concept of significant digits has less to do with mathematics than with our confidence in a measurement. This confidence can often be expressed numerically (for example, the height of a liquid in a measuring tube can be read to ±0.05 cm), but when it cannot, as in our population example, we must depend on our personal experience and judgment.
So, what is a significant digit? According to the usual definition, it is all the numerals in a measured quantity (counting from the left) whose values are considered as known exactly, plus one more whose value could be one more or one less:
- In “157900” (four significant digits), the leftmost three digits are known exactly, but the fourth digit, “9”, could well be “8” if the “true value” is within the implied range of 157850 to 157950.
- In “158000” (three significant digits), the leftmost two digits are known exactly, while the third digit could be either “7” or “8” if the true value is within the implied range of 157500 to 158500.
Although rounding off always leads to the loss of numeric information , what we are getting rid of can be considered to be “numeric noise” that does not contribute to the quality of the measurement. The purpose in rounding off is to avoid expressing a value to a greater degree of precision than is consistent with the uncertainty in the measurement.
If you know that a balance is accurate to within 0.1 mg, say, then the uncertainty in any measurement of mass carried out on this balance will be ±0.1 mg. Suppose, however, that you are simply told that an object has a length of 0.42 cm, with no indication of its precision. In this case, all you have to go on is the number of digits contained in the data. Thus the quantity “0.42 cm” is specified to 0.01 unit in 0.42, or one part in 42 . The implied relative uncertainty in this figure is 1/42, or about 2%. The precision of any numeric answer calculated from this value is therefore limited to about the same amount.
Rounding Error
It is important to understand that the number of significant digits in a value provides only a rough indication of its precision, and that information is lost when rounding off occurs. Suppose, for example, that we measure the weight of an object as 3.28 g on a balance believed to be accurate to within ±0.05 gram. The resulting value of 3.28±.05 gram tells us that the true weight of the object could be anywhere between 3.23 g and 3.33 g. The absolute uncertainty here is 0.1 g (±0.05 g), and the relative uncertainty is 1 part in 32.8, or about 3 percent.
How many significant digits should there be in the reported measurement? Since only the leftmost “3” in “3.28” is certain, you would probably elect to round the value to 3.3 g. So far, so good. But what is someone else supposed to make of this figure when they see it in your report? The value “3.3 g” suggests an implied uncertainty of 3.3±0.05 g, meaning that the true value is likely between 3.25 g and 3.35 g. This range is 0.02 g above that associated with the original measurement, and so rounding off has introduced a bias of this amount into the result. Since this is less than half of the ±0.05 g uncertainty in the weighing, it is not a very serious matter in itself. However, if several values that were rounded in this way are combined in a calculation, the rounding-off errors could become significant.
Rules for Rounding
The standard rules for rounding off are well known. Before we set them out, let us agree on what to call the various components of a numeric value.
- The most significant digit is the left most digit (not counting any leading zeros which function only as placeholders and are never significant digits.)
- If you are rounding off to n significant digits, then the least significant digit is the n th digit from the most significant digit. The least significant digit can be a zero.
- The first non-significant digit is the n +1 th digit.
Rounding-off rules
- If the first non-significant digit is less than 5, then the least significant digit remains unchanged.
- If the first non-significant digit is greater than 5, the least significant digit is incremented by 1.
- If the first non-significant digit is 5, the least significant digit can either be incremented or left unchanged ( see below! )
- All non-significant digits are removed.
Students are sometimes told to increment the least significant digit by 1 if it is odd, and to leave it unchanged if it is even. One wonders if this reflects some idea that even numbers are somehow “better” than odd ones! (The ancient superstition is just the opposite, that only the odd numbers are "lucky".)
In fact, you could do it equally the other way around, incrementing only the even numbers. If you are only rounding a single number, it doesn't really matter what you do. However, when you are rounding a series of numbers that will be used in a calculation, if you treated each first nonsignificant 5 in the same way, you would be over- or understating the value of the rounded number, thus accumulating round-off error. Since there are equal numbers of even and odd digits, incrementing only the one kind will keep this kind of error from building up. You could do just as well, of course, by flipping a coin!
| number to round | number of significant digits | result | comment |
|---|---|---|---|
| 34.216 | 3 | 34.2 | First non-significant digit (1) is less than 5, so number is simply truncated. |
| 2.252 | 2 | 2.2 or 2.3 | First non-significant digit is 5, so least sig. digit can either remain unchanged or be incremented. |
| 39.99 | 3 | 40.0 | Crossing "decimal boundary", so all numbers change. |
| 85,381 | 3 | 85,400 | The two zeros are just placeholders |
| 0.04597 | 3 | 0.0460 | The two leading zeros are not significant digits. |
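These rules (taking the round-half-to-even option for a first non-significant 5) can be implemented with Python's decimal module. The function below is an illustrative sketch of our own, not part of any standard library:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_sig(x, n):
    """Round x to n significant digits, sending a first non-significant
    5 to the nearest *even* least-significant digit, which avoids the
    cumulative bias discussed above."""
    d = Decimal(str(x))
    if d == 0:
        return 0.0
    # position of the least significant digit relative to the decimal point
    places = n - d.adjusted() - 1
    quantum = Decimal(1).scaleb(-places)
    return float(d.quantize(quantum, rounding=ROUND_HALF_EVEN))

print(round_sig(34.216, 3))   # 34.2  (first non-significant digit < 5)
print(round_sig(39.99, 3))    # 40.0  (crosses the "decimal boundary")
print(round_sig(85381, 3))    # 85400.0
print(round_sig(0.04597, 3))  # 0.046 (leading zeros are not significant)
```

Returning a float loses the distinction between placeholder and significant trailing zeros (85400.0 does not show that only three digits are significant), which is exactly the ambiguity the text describes.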
Rounding up the Nines
Suppose that an object is found to have a weight of 3.98 ± 0.05 g. This would place its true weight somewhere in the range of 3.93 g to 4.03 g. In judging how to round this number, you count the number of digits in “3.98” that are known exactly, and you find none! Since the “4” is the left most digit whose value is uncertain, this would imply that the result should be rounded to one significant figure and reported simply as 4 g. An alternative would be to bend the rule and round off to two significant digits, yielding 4.0 g. How can you decide what to do? In a case such as this, you should look at the implied uncertainties in the two values, and compare them with the uncertainty associated with the original measurement.
| rounded value | implied max | implied min | absolute uncertainty | relative uncertainty |
|---|---|---|---|---|
| 3.98 g | 3.985 g | 3.975 g | ±.005 g or 0.01 g | 1 in 400, or 0.25% |
| 4 g | 4.5 g | 3.5 g | ±.5 g or 1 g | 1 in 4, 25% |
| 4.0 g | 4.05 g | 3.95 g | ±.05 g or 0.1 g | 1 in 40, 2.5% |
Clearly, rounding off to two digits is the only reasonable course in this example. Observed values should be rounded off to the number of digits that most accurately conveys the uncertainty in the measurement.
- Usually, this means rounding off to the number of significant digits in the quantity; that is, the number of digits (counting from the left) that are known exactly, plus one more.
- When this cannot be applied (as in the example above, when addition or subtraction of the absolute uncertainty bridges a power of ten), then we round in such a way that the relative implied uncertainty in the result is as close as possible to that of the observed value.
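Comparing implied relative uncertainties is easy to do numerically. The helper below is our own: it takes the value as a string so that trailing zeros after the decimal point count, and it treats the uncertainty as half a unit in the last quoted digit:

```python
from decimal import Decimal

def implied_rel_uncertainty(value_str):
    """Relative uncertainty implied by a quoted measurement, taken as
    +/- half a unit in the last quoted digit.  Caveat: trailing zeros
    *before* the decimal point (as in "158000") are miscounted, since
    this sketch cannot tell placeholders from significant zeros."""
    d = Decimal(value_str)
    half_unit = Decimal(1).scaleb(d.as_tuple().exponent) / 2
    return float(half_unit / d)

print(implied_rel_uncertainty("4"))     # 0.125  (1 part in 4)
print(implied_rel_uncertainty("4.0"))   # 0.0125 (1 part in 40)
print(implied_rel_uncertainty("3.98"))  # about 0.00126 (1 part in ~400)
```

Running it on "4", "4.0" and "3.98" reproduces the relative uncertainties in the table above.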
Rounding the Results of Calculations
When carrying out calculations that involve multiple steps, you should avoid doing any rounding until you obtain the final result. Suppose you use your calculator to work out the area of a rectangle:
| rounded value | relative implied uncertainty |
|---|---|
| 1.58 | 1 part in 158, or 0.6% |
| 1.6 | 1 part in 16, or 6% |
Your calculator is of course correct as far as the pure numbers go, but you would be wrong to write down "1.57676 cm 2 " as the answer. Two possible options for rounding off the calculator answer are shown at the right.
It is clear that neither option is entirely satisfactory; rounding to 3 significant digits overstates the precision of the answer, whereas following the rule and rounding to the two digits in ".42" has the effect of throwing away some precision. In this case, it could be argued that rounding to three digits is justified because the implied relative uncertainty in the answer, 0.6%, is more consistent with those of the two factors.
The "rules" for rounding off are generally useful, convenient guidelines, but they do not always yield the most desirable result. When in doubt, it is better to rely on relative implied uncertainties.
Addition and Subtraction
In operations involving significant figures, the answer is reported in such a way that it reflects the reliability of the least precise operation. An answer is no more precise than the least precise number used to get the answer. When adding or subtracting, we go by the number of decimal places (i.e., the number of digits on the right side of the decimal point) rather than by the number of significant digits. Identify the quantity having the smallest number of decimal places, and use this number to set the number of decimal places in the answer.
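The decimal-places rule can be sketched as follows (values are passed as strings so the quoted precision is visible; the function is our own illustration and assumes every term has a decimal point or is a plain integer):

```python
from decimal import Decimal

def add_measurements(*terms):
    """Add measured quantities and round the sum to the smallest
    number of decimal places found among the terms."""
    places = min(-Decimal(t).as_tuple().exponent for t in terms)
    return round(sum(Decimal(t) for t in terms), places)

print(add_measurements("10.21", "0.2"))  # 10.4 -- limited by the "0.2" term
```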
Multiplication and Division
The result must contain the same number of significant figures as in the value having the least number of significant figures.
Logarithms and antilogarithms
If a number is expressed in the form a × 10 b ("scientific notation") with the additional restriction that the coefficient a is no less than 1 and less than 10, the number is in its normalized form. Express the base-10 logarithm of a value using the same number of significant figures as is present in the normalized form of that value. Similarly, for antilogarithms (numbers expressed as powers of 10), use the same number of significant figures as are in that power.
The following examples will illustrate the most common problems you are likely to encounter in rounding off the results of calculations. They deserve your careful study!
| calculator result | rounded | remarks |
|---|---|---|
| | 1.6 | Rounding to two significant figures yields an implied uncertainty of 1/16 or 6%, three times greater than that in the least-precisely known factor. This is a good illustration of how rounding can lead to the loss of information. |
| | 1.9E6 | The "3.1" factor is specified to 1 part in 31, or 3%. In the answer 1.9, the value is expressed to 1 part in 19, or 5%. These precisions are comparable, so the rounding-off rule has given us a reasonable result. |
| A certain book has a thickness of 117 mm; find the height of a stack of 24 identical books: | 2810 mm | The “24” and the “1” are exact, so the only uncertain value is the thickness of each book, given to 3 significant digits. The trailing zero in the answer is only a placeholder. |
| | 10.4 | In addition or subtraction, look for the term having the smallest number of decimal places, and round off the answer to the same number of places. |
| | 23 cm | see below |
The last of the examples shown above represents the very common operation of converting one unit into another. There is a certain amount of ambiguity here; if we take "9 in" to mean a distance in the range 8.5 to 9.5 inches, then the implied uncertainty is ±0.5 in, which is 1 part in 18, or about ± 6%. The relative uncertainty in the answer must be the same, since all the values are multiplied by the same factor, 2.54 cm/in. In this case we are justified in writing the answer to two significant digits, yielding an uncertainty of about ±1 cm; if we had used the answer "20 cm" (one significant digit), its implied uncertainty would be ±5 cm, or ±25%.
When the appropriate number of significant digits is in question, calculating the relative uncertainty can help you decide.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/04%3A_The_Basics_of_Chemistry/4.06%3A_Significant_Figures_and_Rounding",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "4.6: Significant Figures and Rounding",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/05%3A_Atoms_and_the_Periodic_Table | 5: Atoms and the Periodic Table
Everything you need to know in a first-year college course about the principal concepts of quantum theory as applied to the atom, and how this determines the organization of the periodic table.
- 5.2: Quanta - A New View of the World
  - The fact is, however, that it is not only for real, but serves as the key that unlocks even some of the simplest aspects of modern Chemistry. Our goal in this lesson is to introduce you to this new reality, and to provide you with a conceptual understanding of it that will make Chemistry a more meaningful part of your own personal world.
- 5.3: Light, Particles, and Waves
  - Our intuitive view of the "real world" is one in which objects have definite masses, sizes, locations and velocities. Once we get down to the atomic level, this simple view begins to break down. It becomes totally useless when we move down to the subatomic level and consider the lightest of all chemically-significant particles, the electron. The chemical properties of a particular kind of atom depend on the arrangement and behavior of the electrons which make up almost the entire volume of the atom.
- 5.4: The Bohr Atom
  - Our goal in this unit is to help you understand how the arrangement of the periodic table of the elements must follow as a necessary consequence of the fundamental laws of the quantum behavior of matter. The modern theory of the atom makes full use of the wave-particle duality of matter. We will therefore present the theory in a semi-qualitative manner, emphasizing its results and their applications, rather than its derivation.
- 5.5: The Quantum Atom
  - The picture of the atom that Niels Bohr developed in 1913 served as the starting point for modern atomic theory, but it was not long before Bohr himself recognized that the advances in quantum theory that occurred through the 1920's required an even more revolutionary change in the way we view the electron as it exists in the atom. This lesson will attempt to show you this view— or at least the portion of it that can be appreciated without the aid of more than a small amount of mathematics.
- 5.6: Atomic Electron Configurations
  - According to the Pauli exclusion principle, no two electrons in the same atom can have the same set of quantum numbers (n,l,m,s). This limits the number of electrons in a given orbital to two (s = ±½), and it requires that an atom containing more than two electrons must place them in standing wave patterns corresponding to higher principal quantum numbers n, which means that these electrons will be farther from the nucleus and less tightly bound by it.
- 5.7: Periodic Properties of the Elements
  - The periodic table in the form originally published by Dmitri Mendeleev in 1869 was an attempt to list the chemical elements in order of their atomic weights, while breaking the list into rows in such a way that elements having similar physical and chemical properties would be placed in each column. The shape and organization of the modern periodic table are direct consequences of the atomic electronic structure of the elements.
- 5.8: Why Don't Electrons Fall into the Nucleus?
  - The picture of electrons "orbiting" the nucleus like planets around the sun remains an enduring one, not only in popular images of the atom but also in the minds of many of us who know better. The proposal, first made in 1913, that the centrifugal force of the revolving electron just exactly balances the attractive force of the nucleus (in analogy with the centrifugal force of the moon in its orbit exactly counteracting the pull of the Earth's gravity) is a nice picture, but is simply untenable.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/05%3A_Atoms_and_the_Periodic_Table",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "5: Atoms and the Periodic Table",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/05%3A_Atoms_and_the_Periodic_Table/5.01%3A_Primer_on_Quantum_Theory | 5.1: Primer on Quantum Theory
Part 1: Particles and waves
Q1. What is a particle?
A particle is a discrete unit of matter having the attributes of mass, momentum (and thus kinetic energy), and optionally of electric charge.
Q2. What is a wave?
A wave is a periodic variation of some quantity as a function of location or time. For example, the wave motion of a vibrating guitar string is defined by the displacement of the string from its center as a function of distance along the string. A sound wave consists of variations in the pressure with location.
A wave is characterized by its wavelength \( \lambda \) (lambda) and frequency \( \nu \) (nu), which are connected by the relation

\[ \lambda = \dfrac{u}{\nu} \]

in which \(u\) is the velocity of propagation of the disturbance in the medium.
Example: The velocity of sound in air is 330 m s–1. What is the wavelength of A440 on the piano keyboard?
Solution
\[ \lambda = \dfrac{330\,\text{m s}^{-1}}{440\,\text{s}^{-1}} = 0.75\,\text{m} \]
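The worked example is easy to check numerically; a minimal sketch using the values from the example itself:

```python
# Wavelength of a sound wave: lambda = u / nu (the relation given above)
u = 330.0    # velocity of sound in air, m/s, as stated in the example
nu = 440.0   # frequency of A440, Hz
wavelength = u / nu
print(f"wavelength = {wavelength:.2f} m")  # -> 0.75 m
```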
Two other attributes of waves are the amplitude (the height of the wave crests with respect to the base line) and the phase, which measures the position of a crest with respect to some fixed point. The square of the amplitude gives the intensity of the wave (the energy transmitted per unit time).
A unique property of waves is their ability to combine constructively or destructively, depending on the relative phases of the combining waves.
Q3. What is light?
Phrasing the question in this way reflects the deterministic mode of Western thought which assumes that something cannot "be" two quite different things at the same time. The short response to this question is that all we know about light (or anything else, for that matter) are the results of experiments, and that some kinds of experiments show that light behaves like particles , and that other experiments reveal light to have the properties of waves .
Q4. What is the wave theory of light?
In the early 19th century, the English scientist Thomas Young carried out the famous two-slit experiment which demonstrated that a beam of light, when split into two beams and then recombined, will show interference effects that can only be explained by assuming that light is a wavelike disturbance. By 1820, Augustin Fresnel had put this theory on a sound mathematical basis, but the exact nature of the waves remained unclear until the 1860's when James Clerk Maxwell developed his electromagnetic theory .
From the laws of electromagnetic induction that were discovered in the period 1820-1830 by Hans Christian Oersted and Michael Faraday, it was known that a moving electric charge gives rise to a magnetic field, and that a changing magnetic field can induce electric charges to move. Maxwell showed theoretically that when an electric charge is accelerated (by being made to oscillate within a piece of wire, for example), electrical energy will be lost, and an equivalent amount of energy is radiated into space, spreading out as a series of waves extending in all directions.
What is "waving" in electromagnetic radiation? According to Maxwell, it is the strengths of the electric and magnetic fields as they travel through space. The two fields are oriented at right angles to each other and to the direction of travel.
These waves consist of periodic variations in the electrostatic and electromagnetic field strengths. These variations occur at right angles to each other. Each electrostatic component of the wave induces a magnetic component, which then creates a new electrostatic component, so that the wave, once formed, continues to propagate through space, essentially feeding on itself. In one of the most brilliant mathematical developments in the history of science, Maxwell expounded a detailed theory, and even showed that these waves should travel at about 3E8 m s –1 , a value which experimental observations had shown corresponded to the speed of light. In 1887, the German physicist Heinrich Hertz demonstrated that an oscillating electric charge (in what was in essence the world's first radio transmitting antenna) actually does produce electromagnetic radiation just as Maxwell had predicted, and that these waves behave exactly like light.
It is now understood that light is electromagnetic radiation that falls within a range of wavelengths that can be perceived by the eye. The entire electromagnetic spectrum runs from radio waves at the long-wavelength end, through heat, light, X-rays, and to gamma radiation.
Part 2: Quantum theory of light
Q5. How did the quantum theory of light come about?
It did not arise from any attempt to explain the behavior of light itself; by 1890 it was generally accepted that the electromagnetic theory could explain all of the properties of light that were then known.
Certain aspects of the interaction between light and matter that were observed during the next decade proved rather troublesome, however. The relation between the temperature of an object and the peak wavelength emitted by it was established empirically by Wilhelm Wien in 1893. This put on a quantitative basis what everyone knows: the hotter the object, the "bluer" the light it emits.
Q6. What is black body radiation?
All objects above the temperature of absolute zero emit electromagnetic radiation consisting of a broad range of wavelengths described by a distribution curve whose peak wavelength \( \lambda_{peak} \) at absolute temperature T for a "perfect radiator" known as a black body is given by Wien's law:

\[ \lambda_{peak} = \dfrac{0.0029\,\text{m K}}{T} \]
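A quick numerical illustration of Wien's law using the constant quoted above; the three temperatures are illustrative choices, not values from the text:

```python
b = 2.9e-3  # Wien displacement constant, m K (the 0.0029 m K quoted above)
for T in (300.0, 1000.0, 5800.0):  # room temperature, "red heat", solar surface
    lam_peak = b / T
    print(f"T = {T:6.0f} K -> peak wavelength = {lam_peak * 1e9:7.0f} nm")
```

At 300 K the peak lies deep in the infrared (near 10,000 nm); at 5800 K it falls near 500 nm, in the visible, consistent with the hotter-is-bluer rule stated above.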
At ordinary temperatures this radiation is entirely in the infrared region of the spectrum, but as the temperature rises above about 1000K, more energy is emitted in the visible wavelength region and the object begins to glow, first with red light, and then shifting toward the blue as the temperature is increased.
This type of radiation has two important characteristics. First, the spectrum is a continuous one, meaning that all wavelengths are emitted, although with intensities that vary smoothly with wavelength. The other curious property of black body radiation is that it is independent of the composition of the object; all that is important is the temperature.
Q7. How did black body radiation lead to quantum physics?
Black body radiation, like all electromagnetic radiation, must originate from oscillations of electric charges which in this case were assumed to be the electrons within the atoms of an object acting somewhat as miniature Hertzian oscillators. It was presumed that since all wavelengths seemed to be present in the continuous spectrum of a glowing body, these tiny oscillators could send or receive any portion of their total energy. However, all attempts to predict the actual shape of the emission spectrum of a glowing object on the basis of classical physical theory proved futile.
In 1900, the great German physicist Max Planck (who earlier in the same year had worked out an empirical formula giving the detailed shape of the black body emission spectrum) showed that the shape of the observed spectrum could be exactly predicted if the energies emitted or absorbed by each oscillator were restricted to integral multiples of hν, where ν ("nu") is the frequency and h is a constant 6.626E–34 J s which we now know as Planck's constant. The allowable energies of each oscillator are quantized, but the emission spectrum of the body remains continuous because of differences in frequency among the uncountable numbers of oscillators it contains. This modification of classical theory, the first use of the quantum concept, was as unprecedented as it was simple, and it set the stage for the development of modern quantum physics.
Q8. What is the photoelectric effect?
Shortly after J.J. Thomson's experiments led to the identification of the elementary charged particles we now know as electrons, it was discovered that the illumination of a metallic surface by light can cause electrons to be emitted from the surface. This phenomenon, the photoelectric effect, is studied by illuminating one of two metal plates in an evacuated tube.
The kinetic energy of the photoelectrons causes them to move to the opposite electrode, thus completing the circuit and producing a measurable current. However, if an opposing potential (the retarding potential) is imposed between the two plates, the kinetic energy can be reduced to zero so that the electron current is stopped. By observing the value of the retarding potential \(V_r\), the kinetic energy of the photoelectrons can be calculated from the electron charge \(e\), its mass \(m\), and the frequency of the incident light:

\[ \tfrac{1}{2}mv^2 = eV_r = h\nu - h\nu_0 \]

in which \( \nu_0 \) is the threshold frequency below which no electrons are emitted.
(The original page included two diagrams from a web page by Joseph Alward of the University of the Pacific: a schematic of the phototube circuit, and a plot showing the kinetic energy of the photoelectrons falling to zero at the critical wavelength corresponding to the threshold frequency ν₀.)
Q9. What peculiarity of the photoelectric effect led to the photon?
Although the number of electrons ejected from the metal surface per second depends on the intensity of the light, as expected, the kinetic energies of these electrons (as determined by measuring the retarding potential needed to stop them) does not, and this was definitely not expected. Just as a more intense physical disturbance will produce higher energy waves on the surface of the ocean, it was supposed that a more intense light beam would confer greater energy on the photoelectrons. But what was found, to everyone's surprise, is that the photoelectron energy is controlled by the wavelength of the light, and that there is a critical wavelength below which no photoelectrons are emitted at all.
Albert Einstein quickly saw that if the kinetic energy of the photoelectrons depends on the wavelength of the light, then so must its energy. Further, if Planck was correct in supposing that energy must be exchanged in packets restricted to certain values, then light must similarly be organized into energy packets. But a light ray consists of electric and magnetic fields that spread out in a uniform, continuous manner; how can a continuously-varying wave front exchange energy in discrete amounts? Einstein's answer was that the energy contained in each packet of the light must be concentrated into a tiny region of the wave front. This is tantamount to saying that light has the nature of a quantized particle whose energy is given by the product of Planck's constant and the frequency:

\[ E = h\nu \]
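Einstein's relation (photon energy equals Planck's constant times the frequency), combined with a threshold below which no electrons are ejected, fixes the photoelectron kinetic energy. A sketch with illustrative numbers; the two wavelengths chosen here are assumptions, not values from the text:

```python
h = 6.626e-34  # Planck's constant, J s
c = 2.998e8    # velocity of light, m/s
q = 1.602e-19  # electron charge, C (used here only to convert J to eV)

lam0 = 550e-9  # assumed critical (threshold) wavelength of the metal, m
lam = 400e-9   # assumed wavelength of the incident light, m

# Einstein: KE = h*nu - h*nu0, with nu = c/lambda
KE = h * c / lam - h * c / lam0
print(f"photoelectron kinetic energy = {KE:.2e} J = {KE / q:.2f} eV")
```

Making the light more intense at the same 400 nm would eject more electrons per second but would leave this kinetic energy unchanged, which is exactly the surprise described in the next answer.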
Einstein's publication of this explanation in 1905 led to the rapid acceptance of Planck's idea of energy quantization, which had not previously attracted much support from the physics community of the time. It is interesting to note, however, that this did not make Planck happy at all. Planck, ever the conservative, had been reluctant to accept that his own quantized-energy hypothesis was much more than an artifice to explain black-body radiation; to extend it to light seemed an absurdity that would negate the well-established electromagnetic theory and would set science back to the time before Maxwell.
Q10. Where does relativity come in?
Einstein's special theory of relativity arose from his attempt to understand why the laws of physics that describe the current induced in a fixed conductor when a magnet moves past it are not formulated in the same way as the ones that describe the magnetic field produced by a moving conductor. The details of this development are not relevant to our immediate purpose, but some of the conclusions that this line of thinking led to very definitely are. Einstein showed that the velocity of light, unlike that of a material body, has the same value no matter what velocity the observer has. Further, the mass of any material object, which had previously been regarded as an absolute, is itself a function of the velocity of the body relative to that of the observer (hence "relativity"), the relation being given by

\[ m = \dfrac{m_0}{\sqrt{1 - v^2/c^2}} \]

in which \(m_0\) is the rest mass of the particle, \(v\) is its velocity with respect to the observer, and \(c\) is the velocity of light.
According to this formula, the mass of an object increases without limit as the velocity approaches that of light. Where does the increased mass come from? Einstein's answer was that the increased mass is that of the kinetic energy of the object; that is, energy itself has mass, so that mass and energy are equivalent according to the famous formula

\[ E = mc^2 \]
The only particle that can move at the velocity of light is the photon itself, due to its zero rest mass.
Q11. Can the mass-less photon have momentum?
Although the photon has no rest mass, its energy, given by \(E = h\nu\), confers upon it an effective mass of

\[ m_{eff} = \dfrac{h\nu}{c^2} \]

and a momentum of

\[ p = \dfrac{h\nu}{c} = \dfrac{h}{\lambda} \]
Q12. If waves can be particles, can particles be waves?
In 1924, the French physicist Louis de Broglie proposed (in his doctoral thesis) that just as light possesses particle-like properties, so should particles of matter exhibit a wave-like character. Within two years this hypothesis had been confirmed experimentally by observing the diffraction (a wave interference effect) produced by a beam of electrons as they were scattered by the row of atoms at the surface of a metal.
de Broglie showed that the wavelength of a particle is inversely proportional to its momentum:

\[ \lambda = \dfrac{h}{mv} \]
Notice that the wavelength of a stationary particle is infinitely large, while that of a particle of large mass approaches zero. For most practical purposes, the only particle of interest to chemistry that is sufficiently small to exhibit wavelike behavior is the electron (mass 9.11E–31 kg).
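The contrast drawn above can be made concrete; a sketch comparing an electron with a macroscopic object (the particular velocities and the baseball mass are illustrative assumptions):

```python
h = 6.626e-34    # Planck's constant, J s
m_e = 9.11e-31   # electron rest mass, kg

# de Broglie: lambda = h / (m v)
v_e = 2.998e6                    # electron at 1% of the velocity of light (assumed)
lam_electron = h / (m_e * v_e)

m_ball, v_ball = 0.145, 40.0     # a baseball, kg and m/s (assumed)
lam_ball = h / (m_ball * v_ball)

print(f"electron: {lam_electron:.2e} m")  # comparable to atomic dimensions
print(f"baseball: {lam_ball:.2e} m")      # far too small ever to observe
```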
Q13. Exactly what is it that is "waving"?
We pointed out earlier that a wave is a change that varies with location in a periodic, repeating way. What kind of a change do the crests and hollows of a "matter wave" trace out? The answer is that the wave represents the value of a quantity whose square is a measure of the probability of finding the particle in that particular location. In other words, what is "waving" is the value of a mathematical probability function .
Q14. What is the uncertainty principle?
In 1927, Werner Heisenberg proposed that certain pairs of properties of a particle cannot simultaneously have exact values. In particular, the position and the momentum of a particle have associated with them uncertainties \( \Delta x \) and \( \Delta p \) given by

\[ \Delta x \, \Delta p \geq \dfrac{h}{4\pi} \]
As with the de Broglie particle wavelength, this has practical consequences only for electrons and other particles of very small mass. It is very important to understand that these "uncertainties" are not merely limitations related to experimental error or observational technique, but instead they express an underlying fact that Nature does not allow a particle to possess definite values of position and momentum at the same time. This principle (which would be better described by the term "indeterminacy" than "uncertainty") has been thoroughly verified and has far-reaching practical consequences which extend to chemical bonding and molecular structure.
Q15. Is the uncertainty principle consistent with particle waves?
Yes; either one really implies the other. Consider the following two limiting cases. A particle whose velocity is known to within a very small uncertainty will have a sharply-defined energy (because its kinetic energy is known), which can be represented by a probability wave having a single, sharply-defined frequency. A "monochromatic" wave of this kind must extend infinitely in space:
But if the peaks of the wave represent locations at which the particle is most likely to manifest itself, we are forced to the conclusion that it can "be" virtually anywhere, since the number of such peaks is infinite! Now think of the opposite extreme: a particle whose location is closely known. Such a particle would be described by a short wave train having only a single peak, the smaller the uncertainty in position, the more narrow the peak.
To help you see how waveforms of different wavelength combine, two such combinations are shown below:
It is apparent that as more waves of different frequency are mixed, the regions in which they add constructively diminish in extent. The extreme case would be a wave train in which destructive interference occurs at all locations except one, resulting in a single pulse:
Is such a wave possible, and if so, what is its wavelength? Such a wave is possible, but only as the sum (interference) of other waves whose wavelengths are all slightly different. Each component wave possesses its own energy (momentum), and adds that value to the range of momenta carried by the particle, thus increasing the uncertainty \( \Delta p \). In the extreme case of a quantum particle whose location is known exactly, the probability wavelet would have zero width, which could be achieved only by combining waves of all wavelengths: an infinite number of wavelengths, and thus an infinite range of momentum \( \Delta p \) and thus of kinetic energy.
Q16. Are they particles or are they waves?
Suppose we direct a beam of photons (or electrons; the experiment works with both) toward a piece of metal having a narrow opening. On the other side there are two more openings, or slits. Finally the particles impinge on a photographic plate or some other recording device. Taking into account their wavelike character, we would expect the probability waves to produce an interference pattern of the kind that is well known for sound and light waves, and this is exactly what is observed; the plate records a series of alternating dark and light bands, thus demonstrating beyond doubt that electrons and light have the character of waves.
Now let us reduce the intensity of the light so that only one photon at a time passes through the apparatus (it is experimentally possible to count single photons, so this is a practical experiment). Each photon passes through the first slit, and then through one or the other of the second set of slits, eventually striking the photographic film where it creates a tiny dot. If we develop the film after a sufficient number of photons have passed through, we find the very same interference pattern we obtained previously.
There is something strange here. Each photon, acting as a particle, must pass through one or the other of the pair of slits, so we would expect to get only two groups of spots on the film, each opposite one of the two slits. Instead, it appears that each particle, on passing through one slit, "knows" about the other, and adjusts its final trajectory so as to build up a wavelike interference pattern.
It gets even stranger: suppose that we set up a detector to determine which slit a photon is heading for, and then block off the other slit with a shutter. We find that the photon sails straight through the open slit and onto the film without trying to create any kind of an interference pattern. Apparently, any attempt to observe the photon as a discrete particle causes it to behave like one.
The only conclusion possible is that quantum particles have no well defined paths; each photon (or electron) seems to have an infinity of paths which thread their way through space, seeking out and collecting information about all possible routes, and then adjusting its behavior so that its final trajectory, when combined with that of others, produces the same overall effect that we would see from a train of waves of wavelength λ= h/mv.
Part 3: Electrons in atoms
Q17. What are line spectra?
We have already seen that a glowing body (or actually, any body whose temperature is above absolute zero) emits and absorbs radiation of all wavelengths in a continuous spectrum. In striking contrast is the spectrum of light produced when certain substances are volatilized in a flame, or when an electric discharge is passed through a tube containing gaseous atoms of an element. The light emitted by such sources consists entirely of discrete wavelengths. This kind of emission is known as a discrete spectrum or line spectrum (the "lines" that appear on photographic images of the spectrum are really images of the slit through which the light passes before being dispersed by the prism in the spectrograph).
Every element has its own line spectrum which serves as a sensitive and useful tool for detecting the presence and relative abundance of the element, not only in terrestrial samples but also in stars. (As a matter of fact, the element helium was discovered in the sun, through its line spectrum, before it had been found on Earth.) In some elements, most of the energy in the visible part of the emission spectrum is concentrated into just a few lines, giving their light characteristic colors: yellow-orange for sodium, blue-green for mercury (these are commonly seen in street lights) and orange for neon.
Line spectra were well known early in the 19th century, and were widely used for the analysis of ores and metals. The German spectroscopist R.W. Bunsen, now famous for his gas burner, was then best known for discovering two new elements, rubidium and cesium, from the line spectrum he obtained from samples of mineral spring waters.
Q18. How are line spectra organized?
Until 1885, line spectra were little more than "fingerprints" of the elements; extremely useful in themselves, but incapable of revealing any more than the identity of the individual atoms from which they arise. In that year a Swiss school teacher named Johann Balmer published a formula that related the wavelengths of the four known lines in the emission spectrum of hydrogen in a simple way. Balmer's formula was not based on theory; it was probably a case of cut-and-try, but it worked: he was able to predict the wavelength of a fifth, yet-to-be discovered emission line of hydrogen, and as spectroscopic and astronomical techniques improved (the only way of observing highly excited hydrogen atoms at the time was to observe the solar spectrum during an eclipse), a total of 35 lines were discovered, all having wavelengths given by the formula which we write in the modern manner as

\[ \dfrac{1}{\lambda} = R \left( \dfrac{1}{m^2} - \dfrac{1}{n^2} \right) \]

in which m = 2 and R is a constant (the Rydberg constant, after the Swedish spectroscopist) whose value is 1.09678E7 m–1. The variable n is an integer greater than m whose successive values 3, 4, etc. give the wavelengths of the different lines.
It was soon discovered that by replacing m with integers other than 2, other series of hydrogen lines could be accounted for. These series, which span the wavelength region from the ultraviolet through infrared, are named after their discoverers.
| name of series | when discovered | value of m |
|---|---|---|
| Lyman | 1906–14 | 1 |
| Balmer | 1885 | 2 |
| Paschen | 1908 | 3 |
| Brackett | 1922 | 4 |
| Pfund | 1924 | 5 |
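The formula and the table of series can be combined in a short sketch that generates the hydrogen lines for any series; the Balmer wavelengths it prints (roughly 656, 486, 434, and 410 nm) are the classic visible lines:

```python
R = 1.09678e7  # Rydberg constant, m^-1, as quoted above

def line_wavelength(m, n):
    """Wavelength (m) of the hydrogen line for integers n > m."""
    return 1.0 / (R * (1.0 / m**2 - 1.0 / n**2))

# Balmer series (m = 2): the first four lines
for n in range(3, 7):
    print(f"n = {n}: {line_wavelength(2, n) * 1e9:6.1f} nm")
```

Replacing m = 2 with 1, 3, 4, or 5 reproduces the other series in the table.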
Attempts to adapt Balmer's formula to describe the spectra of atoms other than hydrogen generally failed, although certain lines of some of the spectra seemed to fit this same scheme, with the same value of R .
Q19. How large can n be?
There is no limit; values in the hundreds have been observed, although doing so is very difficult because of the increasingly close spacing of successive levels as n becomes large. Atoms excited to very high values of n are said to be in Rydberg states .
Q20. Why do line spectra become continuous at short wavelengths?
As n becomes larger, the spacing between neighboring levels diminishes and the discrete lines merge into a continuum . This can mean only one thing: the energy levels converge as n approaches infinity. This convergence limit corresponds to the energy required to completely remove the electron from the atom; it is the ionization energy .
At energies in excess of this, the electron is no longer bound to the rest of the atom, which is now of course a positive ion. But an unbound system is not quantized; the kinetic energy of the ion and electron can now have any value in excess of the ionization energy. When such an ion and electron pair recombine to form a new atom, the light emitted will have a wavelength that falls in the continuum region of the spectrum. Spectroscopic observation of the convergence limit is an important method of measuring the ionization energies of atoms.
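The convergence limit can be checked numerically: letting n approach infinity in the series formula with m = 1 (the Lyman series) makes 1/λ approach R, so the ionization energy of hydrogen is hcR. A sketch using the constants quoted earlier:

```python
h = 6.626e-34   # Planck's constant, J s
c = 2.998e8     # velocity of light, m/s
R = 1.09678e7   # Rydberg constant, m^-1
q = 1.602e-19   # electron charge, C (for conversion of J to eV)

# As n -> infinity with m = 1, the 1/n^2 term vanishes and 1/lambda -> R
E_ion = h * c * R
print(f"ionization energy of hydrogen = {E_ion:.3e} J = {E_ion / q:.2f} eV")
```

The result is the familiar 13.6 eV.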
Q21. What were the problems with the planetary model of the atom?
Rutherford's demonstration that the mass and the positive charge of the atom is mostly concentrated in a very tiny region called the nucleus, forced the question of just how the electrons are disposed outside the nucleus. By analogy with the solar system, a planetary model was suggested: if the electrons were orbiting the nucleus, there would be a centrifugal force that could oppose the electrostatic attraction and thus keep the electrons from falling into the nucleus. This of course is similar to the way in which the centrifugal force produced by an orbiting planet exactly balances the force due to its gravitational attraction to the sun.
The planetary model suffers from one fatal weakness: electrons, unlike planets, are electrically charged. An electric charge revolving in an orbit is continually undergoing a change of direction, that is, acceleration. It has been well known since the time of Hertz that an accelerating electric charge radiates energy. We would therefore expect all atoms to act as miniature radio stations. Even worse, conservation of energy requires that any energy that is radiated must be at the expense of the kinetic energy of the orbital motion of the electron. Thus the electron would slow down, reducing the centrifugal force and allowing the electron to spiral closer and closer to the nucleus, eventually falling into it. In short, no atom that operates according to the planetary model would last long enough for us to talk about it.
As if this were not enough, the planetary model was totally unable to explain any of the observed properties of atoms, including their line spectra.
Q22. How did Bohr's theory save the planetary model... for a while?
Niels Bohr was born in the same year (1885) that Balmer published his formula for the line spectrum of hydrogen. Beginning in 1913, the brilliant Danish physicist published a series of papers that would ultimately derive Balmer's formula from first principles.
Bohr's first task was to explain why the orbiting electron does not radiate energy as it moves around the nucleus. This energy loss, if it were to occur at all, would do so gradually and smoothly. But Planck had shown that black body radiation could only be explained if energy changes were limited to jumps instead of gradual changes. If this were a universal characteristic of energy- that is, if all energy changes were quantized, then very small changes in energy would be impossible, so that the electron would in effect be "locked in" to its orbit.
From this, Bohr went on to propose that there are certain stable orbits in which the electron can exist without radiating and thus without falling into a "death spiral". This supposition was a daring one at the time because it was inconsistent with classical physics, and the theory which would eventually lend it support would not come along until the work of de Broglie and Heisenberg more than ten years later.
Since Planck's quanta came in multiples of h, Bohr restricted his allowed orbits to those in which the product of the radius r and the momentum of the electron mv (which has the same units as h, J s) are integral multiples of h:

\[ 2\pi r m v = nh \qquad (n = 1, 2, 3, \ldots) \]
Each orbit corresponds to a different energy, with the electron normally occupying the one having the lowest energy, which would be the innermost orbit of the hydrogen atom.
Taking the lead from Einstein's explanation of the photoelectric effect, Bohr assumed that each spectral line emitted by an atom that has been excited by absorption of energy from an electrical discharge or a flame represents a change in energy given by \( \Delta E = h\nu = hc/\lambda \), the energy lost when the electron falls from a higher orbit (value of n) into a lower one.
Finally, as a crowning triumph, Bohr derived an expression giving the radius of the nth orbit for the electron in hydrogen as

\[ r_n = \dfrac{n^2 h^2 \varepsilon_0}{\pi m e^2} \]

Substitution of the observed values of the electron mass and electron charge into this equation yielded a value of 0.529E–10 m for the radius of the first orbit, a value that corresponds to the radius of the hydrogen atom obtained experimentally from the kinetic theory of gases. Bohr was also able to derive a formula giving the value of the Rydberg constant, and thus in effect predict the entire emission spectrum of the hydrogen atom.
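Bohr's numerical result is easy to reproduce. The SI form of the first-orbit radius used below, r₁ = ε₀h²/(π m e²), is an assumption about the exact form of the equation (the original equation image is missing here), but substituting the constants recovers the 0.529 Å value quoted above:

```python
import math

h = 6.626e-34     # Planck's constant, J s
m = 9.109e-31     # electron mass, kg
e = 1.602e-19     # electron charge, C
eps0 = 8.854e-12  # permittivity of free space, F/m

# Assumed SI form of Bohr's expression for the n = 1 orbit radius
r1 = eps0 * h**2 / (math.pi * m * e**2)
print(f"r1 = {r1:.3e} m")  # -> about 0.529e-10 m
```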
Q23. What were the main problems with Bohr's theory?
There were two kinds of difficulties. First, there was the practical limitation that it only works for atoms that have one electron; that is, for H, He+, Li2+, etc. The second problem was that Bohr was unable to provide any theoretical justification for his assumption that electrons in orbits described by the preceding equation would not lose energy by radiation. This reflects the fundamental underlying difficulty: because de Broglie's picture of matter waves would not come until a decade later, Bohr had to regard the electron as a classical particle traversing a definite orbital path.
Q24. How did the wave picture of the electron save Bohr's theory?
Once it became apparent that the electron must have a wavelike character, things began to fall into place. The possible states of an electron confined to a fixed space are in many ways analogous to the allowed states of a vibrating guitar string. These states are described as standing waves that must possess integral numbers of nodes. The states of vibration of the string are described by a series of integral numbers n = 1,2,... which we call the fundamental, first overtone, second overtone, etc. The energy of vibration is proportional to n 2 . Each mode of vibration contains one more complete wave than the one below it.
In exactly the same way, the mathematical function that defines the probability of finding the electron at any given location within a confined space possesses n peaks and corresponds to states in which the energy is proportional to n 2 .
The electron in a hydrogen atom is bound to the nucleus by its spherically symmetrical electrostatic charge, and should therefore exhibit a similar kind of wave behavior. This is most easily visualized in a two-dimensional cross section that corresponds to the conventional electron orbit. But if the particle picture is replaced by de Broglie's probability wave, this wave must follow a circular path, and, most important of all, its wavelength (and consequently its energy) is restricted to values for which an integral number n = 1, 2, ... of wavelengths fits around the circumference:

\[ 2\pi r = n\lambda \]
for otherwise the wave would collapse owing to self-interference. That is, the energy of the electron must be quantized; what Bohr had taken as a daring but arbitrary assumption was now seen as a fundamental requirement. Indeed the above equation can be derived very simply by combining Bohr's quantum condition 2πrmv = nh with the expression mv = h/λ for the de Broglie wavelength of a particle.
Viewing the electron as a standing-wave pattern also explains its failure to lose energy by radiating. Classical theory predicts that an accelerating electric charge will act as a radio transmitter; an electron traveling around a circular wire would certainly act in this way, and so would one rotating in an orbit around the nucleus. In a standing wave, however, the charge is distributed over space in a regular and unchanging way; there is no motion of the charge itself, and thus no radiation.
Q25. What is an orbital?
Because the classical view of an electron as a localizable particle is now seen to be untenable, so is the concept of a definite trajectory, or "orbit". Instead, we now use the word orbital to describe the state of existence of an electron. An orbital is really no more than a mathematical function describing the standing wave that gives the probability of the electron manifesting itself at any given location in space. More commonly (and loosely) we use the word to describe the region of space in which an electron is likely to be found. Each kind of orbital is characterized by a set of quantum numbers \(n\), \(l\), and \(m\). These relate, respectively, to the average distance of the electron from the nucleus, to the shape of the orbital, and to its orientation in space.
Q26. If the electron cannot be localized, can it be moving?
In its lowest state in the hydrogen atom (in which \(l = 0\)) the electron has zero angular momentum, so electrons in s orbitals are not in motion. In orbitals for which \(l > 0\) the electron does have an effective angular momentum, and since the electron also has a definite rest mass \(m_e = 9.11 \times 10^{-31}\; kg\), it must possess an effective velocity. Its value can be estimated from the Uncertainty Principle; if the region in which the electron is confined is about \(10^{-10}\; m\) across, then the uncertainty in its momentum is at least \(h/(10^{-10}\; m) = 6.6 \times 10^{-24}\; kg\, m\, s^{-1}\), which implies a velocity of around \(10^7\; m\, s^{-1}\), or almost one-tenth the velocity of light.
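The velocity estimate above can be reproduced in a few lines. This is a rough order-of-magnitude sketch, not a rigorous uncertainty calculation:

```python
h = 6.626e-34      # Planck's constant, J s
m_e = 9.109e-31    # electron rest mass, kg
dx = 1e-10         # confinement length, m (roughly atomic dimensions)

dp = h / dx        # momentum uncertainty, ~6.6e-24 kg m/s
v = dp / m_e       # implied effective speed, m/s

print(f"v ~ {v:.1e} m/s")   # about 7e6 m/s, nearly one-tenth of c
```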
The stronger the electrostatic force of attraction by the nucleus, the faster the effective electron velocity. In fact, the innermost electrons of the heavier elements have effective velocities so high that relativistic effects set in; that is, the effective mass of the electron significantly exceeds its rest mass. This has direct chemical effects; it is the cause, for example, of the low melting point of metallic mercury and of the color of gold.
Q27. Why does the electron not fall into the nucleus?
The negatively-charged electron is attracted to the positive charge of the nucleus. What prevents it from falling in? This question can be answered in various ways at various levels. All start with the statement that the electron, being a quantum particle, has a dual character and cannot be treated solely by the laws of Newtonian mechanics.
We saw above that in its wavelike guise, the electron exists as a standing wave which must circle the nucleus at a sufficient distance to allow at least one wavelength to fit on its circumference. This means that the smaller the radius of the circle, the shorter must be the wavelength of the electron, and thus the higher the energy. Thus it ends up "costing" the electron energy if it gets too close to the nucleus. The normal orbital radius represents the balance between the electrostatic force trying to pull the electron in, and what we might call the "confinement energy" that opposes the electrostatic energy. This confinement energy can be related to both the particle and wave character of the electron.
If the electron as a particle were to approach the nucleus, the uncertainty in its position would become so small (owing to the very small volume of space close to the nucleus) that the momentum, and therefore the energy, would have to become very large. The electron would, in effect, be "kicked out" of the nuclear region by the confinement energy.
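This energy balance can be made semi-quantitative with a crude one-electron model: take the confinement (kinetic) energy as \(\hbar^2/(2mr^2)\), as suggested by the uncertainty principle, add the Coulomb potential energy, and find the radius that minimizes the total. This is only a sketch under those assumptions, not the full quantum treatment, but the minimizing radius comes out close to the Bohr radius:

```python
import math

hbar = 1.0546e-34    # reduced Planck constant, J s
m_e = 9.109e-31      # electron mass, kg
q = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m

def total_energy(r):
    """Confinement (kinetic) energy plus Coulomb attraction at radius r."""
    return hbar**2 / (2 * m_e * r**2) - q**2 / (4 * math.pi * eps0 * r)

# Scan radii on a geometric grid and pick the energy minimum.
radii = [1e-12 * 1.02**k for k in range(400)]
r_best = min(radii, key=total_energy)

print(f"r ~ {r_best:.2e} m")   # close to the Bohr radius, 5.29e-11 m
```

Pushing `r` below this optimum makes the positive \(1/r^2\) confinement term grow faster than the negative \(1/r\) Coulomb term, which is exactly the "kicked out" behavior described above.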
The standing-wave patterns of an electron in a box can be calculated quite easily. For a spherical enclosure of diameter \(d\), the energy is given by

\[E_n = \dfrac{n^2 h^2}{8md^2}\]

in which \(n = 1, 2, 3, \ldots\)
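As a numerical sketch, assuming the familiar particle-in-a-box form \(E_n = n^2h^2/(8md^2)\) with \(d\) the size of the enclosure, the confinement energies for an electron in an atom-sized box come out at tens of electron-volts:

```python
h = 6.626e-34      # Planck's constant, J s
m_e = 9.109e-31    # electron mass, kg
d = 1e-10          # box size, m (roughly atomic dimensions; assumed)

def confinement_energy(n):
    """Particle-in-a-box energy level; scales as n squared."""
    return n**2 * h**2 / (8 * m_e * d**2)

for n in (1, 2, 3):
    eV = confinement_energy(n) / 1.6022e-19
    print(f"n={n}: {eV:.0f} eV")
```

Note the \(n^2\) scaling: the n = 2 level is exactly four times the n = 1 level, just as for the vibrating string discussed earlier.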
Q28. What is electron spin?
Each electron in an atom has associated with it a magnetic field whose direction is quantized; there are only two possible values that point in opposite directions. We usually refer to these as "up" and "down", but the actual directions are parallel and antiparallel to the local magnetic field associated with the orbital motion of the electron.
The term spin implies that this magnetic moment is produced by the electron charge as the electron rotates about its own axis. Although this conveys a vivid mental picture of the source of the magnetism, the electron is not an extended body and its rotation is meaningless. Electron spin has no classical counterpart and no simple explanation; the magnetic moment is a consequence of relativistic shifts in local space and time due to the high effective velocity of the electron in the atom. This effect was predicted theoretically by P.A.M. Dirac in 1928.
5.2: Quanta - A New View of the World
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.
- What was the caloric theory of heat, and how did Rumford's experiments in boring cannon barrels lead to its overthrow?
- Define thermal radiation, explain the "scandal of the ultraviolet", and describe the role Max Planck played in introducing the quantum concept.
- What is the photoelectric effect? Describe the crucial insight that led Einstein to the concept of the photon.
What we call "classical" physics is based on our experience of what we perceive as the "real world". Even without knowing the details of Newton's laws of motion that describe the behavior of macroscopic bodies, we have all developed an intuitive understanding of this behavior; it is a part of everyone's personal view of the world. By extension, we tend to view atoms and molecules in much the same way, that is, simply as miniature versions of the macroscopic objects we know from everyday life. It turns out, however, that our everyday view of the macroscopic world is only a first approximation of the reality that becomes apparent at the atomic level. Many of those who first encounter this microscopic world of quantum weirdness find it so foreign to prior experience that their first reaction is to dismiss it as pure fantasy.
The fact is, however, that it is not only for real, but serves as the key that unlocks even some of the simplest aspects of modern Chemistry. Our goal in this lesson is to introduce you to this new reality, and to provide you with a conceptual understanding of it that will make Chemistry a more meaningful part of your own personal world.
The Limits of Classical Physics
Near the end of the nineteenth century, the enormous success of the recently developed kinetic molecular theory of gases had dispelled most doubts about the atomic nature of matter; the material world was seen to consist of particles that had distinct masses and sizes, and which moved in trajectories just as definite as those of billiard balls.
In the 1890s, however, certain phenomena began to be noticed that seemed to be inconsistent with this dichotomy of particles and waves. This prompted further questions and further experiments which led eventually to the realization that classical physics, while it appears to be "the truth", is by no means the whole truth. In particular, it cannot accurately describe the behavior of objects that are extremely small or fast-moving.
Chemistry began as an entirely empirical, experimental science, dealing with the classification and properties of substances and with their transformations in chemical reactions. As this large body of facts developed into a science (one of whose functions is always to explain and correlate known facts and to predict new ones), it has become necessary to focus increasingly on the nature and behavior of individual atoms and of their own constituent parts, especially the electrons . Owing to their extremely small masses, electrons behave as quantum particles which do not obey the rules of classical physics.
The purpose of this introductory unit is to summarize the major ideas of quantum theory that will be needed to treat atomic and molecular structure later on in the course.
Quantum theory can be presented simply as a set of assumptions which are developed through mathematical treatment. This is in fact the best route to take if one is to use quantum mechanics as a working tool. More than this, however, quantum theory brings with it a set of concepts that have far-reaching philosophical implications and which should be a part of the intellectual equipment of anyone who claims to have a general education in the sciences. A major objective of this chapter will be to introduce you to "the quantum way of thinking" and to show how this led to a profound break with the past, and a shift in our way of viewing the world that has no parallel in Western intellectual history.
Light
The development of our ideas about light and radiation was not quite as direct. In the 18th century, heat was regarded as a substance called caloric whose invisible atoms could flow from one object to another, thus explaining thermal conduction. This view of heat as a material fluid seemed to be confirmed by the observation that heat can pass through a vacuum, a phenomenon that we now call radiant heat. Isaac Newton, whose experiments with a prism in 1672 led to his famous book "Opticks", noted that light seemed to react with green plants to produce growth, and must therefore be a "substance" having atoms of its own. By 1800, the corpuscular (particle) theory of light was generally accepted.
And yet there were questions. Count Rumford's observation that the drill bits employed in boring cannons produced more frictional heat when they were worn and dull led to the overthrow of the caloric theory.
The caloric theory of heat assumed that small particles are able to contain more heat than large ones, so that when a material is sawn or drilled, some of its heat is released as the filings are produced. A dull drill produces few filings and, according to this theory, should produce little heat; but Rumford was able to show that the amount of heat produced is in fact independent of the state of the drill, and depends only on the amount of mechanical work done in turning it.

Christiaan Huygens had shown in the 1690s how a number of optical effects could be explained if light had a wavelike nature, and this eventually led Augustin Fresnel to develop an elaborate wave theory of light in the 1810s. By 1818 the question of "particle or wave" had become so confused that the French Academy held a great debate intended to settle the matter once and for all. The mathematician Poisson pointed out that Fresnel's wave theory had a ridiculous consequence: the shadow cast by a circular disk should have a bright spot of light at its center, where waves arriving in phase would reinforce each other. The experiment was carried out, and Fresnel was entirely vindicated: if the light source is sufficiently point-like (an extended source such as the sun or an ordinary lamp will not work), this diffraction effect is indeed observed.
Heat
By this time it was known that radiant heat and "cold" could be focused and transmitted by mirrors, and in 1800 William Herschel discovered that radiant heat could be sensed in the dark region just beyond the red light refracted by a prism. Light and radiant heat, which had formerly been considered separate, were now recognized as one, although the question of precisely what was doing the "waving" was something of an embarrassment.
The quantum revolution
By 1890, physicists thought they had tidied up the world into the two realms of particulate matter and of wavelike radiant energy, which by then had been shown by James Clerk Maxwell to be forms of electromagnetic energy. No sooner had all this been accomplished than the cracks began to appear; these quickly widened into chasms, and within twenty years the entire foundations of classical physics had disintegrated; it would not be until the 1920s that anyone with a serious interest in the nature of the microscopic world would find a steady place to stand.
Cathode rays
The atom was the first to go. It had been known for some time that when a high voltage is applied to two separated pieces of metal in an evacuated tube, "cathode rays" pass between them. These rays could be detected by their ability to cause certain materials to give off light, or fluoresce, and were believed to be another form of electromagnetic radiation. Then, in the 1890s, J. J. Thomson and Jean Perrin showed that cathode rays are composed of particles that have a measurable mass (less than 1/1000 of that of the hydrogen atom), that carry a fixed negative electric charge, and that come from atoms. This last conclusion went so strongly against the prevailing view of atoms as the ultimate, un-cuttable stuff of the world that Thomson only reluctantly accepted it, and having done so, quickly became the object of widespread ridicule.
Radioactivity
But worse was soon to come; not only were atoms shown not to be the smallest units of matter, but the work of the Curies established that atoms are not even immutable; atoms of high atomic weight such as uranium and radium give off penetrating beams of radiation and in the process change into other elements, disintegrating through a series of stages until they turn into lead. Among the various kinds of radiation that accompany radioactive disintegration are the very same cathode rays that had been produced artificially by Thomson, and which we now know as electrons.
Radiation is quantized
The wave theory of radiation was also running into difficulties. Any object at a temperature above absolute zero gives off radiant energy; if the object is moderately warm, we sense this as radiant heat. As the temperature is raised, a larger proportion of shorter-wavelength radiation is given off, so that at sufficiently high temperatures the object becomes luminous. The origin of this radiation was thought to lie in the thermally-induced oscillations of the atoms within the object, and on this basis the mathematical physicist Lord Rayleigh had worked out a formula that related the wavelengths given off to the temperature. Unfortunately, this formula did not work; it predicted that most of the radiation given off at any temperature would be of very short wavelength, which would place it in the ultraviolet region of the spectrum. What was most disconcerting was that no one could say why Rayleigh's formula did not work, based as it was on sound classical physics; this puzzle became known as the "scandal of the ultraviolet".
Quanta
In 1900 the German physicist Max Planck pointed out that one simple change in Rayleigh's argument would produce a formula that accurately describes the radiation spectrum of a perfect radiator, which is known as a "black body". Rayleigh assumed that such an object would absorb and emit radiation in amounts of any magnitude, ranging from minute to very large. This is just what one would expect on the basis of classical mechanics, which had long been well established. Planck's change, for which he could offer no physical justification other than that it works, was to discard this assumption, and to require that the absorption or emission of radiation occur only in discrete chunks, or quanta. Max Planck had unlocked the door that would lead to the resurrection of the corpuscular theory of radiation. Only a few years later, Albert Einstein would kick the door open and walk through.
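The failure Planck repaired can be seen numerically by comparing the classical Rayleigh–Jeans spectral radiance, \(2ckT/\lambda^4\), with Planck's formula. A sketch at an illustrative temperature of 5000 K (the choice of temperature and wavelengths is arbitrary):

```python
import math

h, c, k = 6.626e-34, 3.00e8, 1.381e-23   # SI constants

def planck(lam, T):
    """Planck spectral radiance per unit wavelength."""
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

def rayleigh_jeans(lam, T):
    """Classical result: grows without bound as wavelength shrinks."""
    return 2 * c * k * T / lam**4

T = 5000.0
for lam in (2e-6, 5e-7, 1e-7):   # infrared -> visible -> ultraviolet
    ratio = rayleigh_jeans(lam, T) / planck(lam, T)
    print(f"lambda = {lam:.0e} m: classical overestimate x{ratio:.3g}")
```

The two formulas agree at long wavelengths, but the classical overestimate explodes in the ultraviolet: precisely the "scandal" described above.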
The photoelectric effect
By 1900 it was known that a beam of light, falling on a piece of metal, could cause electrons to be ejected from its surface. Evidently the energy associated with the light overcomes the binding energy of the electron in the metal; any energy the light supplies in excess of this binding energy appears as kinetic energy of the emitted electron. What seemed peculiar, however, was that the energy of the ejected electrons did not depend on the intensity of the light as classical physics would predict. Instead, the energy of the photoelectrons (as they are called) varies with the color, or wavelength of the light; the higher the frequency (the shorter the wavelength), the greater the energy of the ejected electrons.
In 1905, Albert Einstein, then an unknown clerk in the Swiss Patent Office, published a remarkable paper in which he showed that if light were regarded as a collection of individual particles, a number of phenomena, including the photoelectric effect, could be explained. Each particle of light, which we now know as a photon , has associated with it a distinct energy that is proportional to the frequency of the light, and which corresponds to Planck's energy quanta. The energy of the photon is given by
\[ E = h\nu = \dfrac{hc}{\lambda}\]
in which \(h\) is Planck's constant, \(6.63 \times 10^{-34}\; J\, s\), \(\nu\) (Greek nu) is the frequency, \(\lambda\) (lambda) is the wavelength, and \(c\) is the velocity of light, \(3.00 \times 10^8\; m\, s^{-1}\). The photoelectric effect is only seen if the photon energy \(E\) exceeds the binding energy of the electron in the metal; it is clear from the above equation that as the wavelength increases, \(E\) decreases, and eventually no electrons will be released. Einstein had in effect revived the corpuscular theory of light, although it would not be until about 1915 that sufficient experimental evidence would be at hand to convince most of the scientific world, but not all of it: Max Planck, whose work had led directly to the revival of the particle theory of light, remained one of the strongest doubters.
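A quick numerical check of the photoelectric threshold follows. The work function chosen here (2.3 eV, roughly that of potassium) is an illustrative assumption:

```python
h = 6.626e-34     # Planck's constant, J s
c = 3.00e8        # speed of light, m/s
eV = 1.6022e-19   # joules per electron volt

def photon_energy(lam):
    """Photon energy E = h*c/lambda, in joules."""
    return h * c / lam

work_function = 2.3 * eV              # assumed metal binding energy
threshold = h * c / work_function     # longest wavelength that still ejects electrons

print(f"threshold wavelength ~ {threshold * 1e9:.0f} nm")
print(photon_energy(400e-9) > work_function)   # violet light works: True
```

For this assumed metal, photons of 400 nm violet light carry more than the binding energy, while 700 nm red photons do not, no matter how intense the red beam is made.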
The 1905 volume of Annalen der Physik is now an expensive collector's item, for in that year Einstein published three major papers, any one of which would have guaranteed him his place in posterity. The first, on the photoelectric effect, eventually won him the Nobel Prize. The second paper, on Brownian motion, amounted to the first direct confirmation of the atomic theory of matter. The third paper, his most famous, "On the electrodynamics of moving bodies", set forth the special theory of relativity.
The dramatic confirmation of his general theory of relativity by the solar-eclipse observations of 1919 would finally make Einstein into a reluctant public celebrity and scientific superstar. This theory explained gravity as a consequence of the curvature of space-time.
Matter and energy united
Energy
The concept of energy was slow to develop in science, partly because it was not adequately differentiated from the related quantities of force and motion. It was generally agreed that some agent of motion and change must exist; Descartes suggested, for example, that God, when creating the world, had filled it with "vortices" whose motions never ceased, but which could be transferred to other objects and thus give them motion. Gradually the concepts of vis viva and vis mortua developed; these later became kinetic and potential energy. Later on, the cannon-boring experiments of Benjamin Thompson (Count Rumford) revealed the connections between heat and work. Finally, the invention of the steam engine forced the birth of the science of thermodynamics, whose founding law was that a quantity known as energy can be transferred from one object to another through the processes of heat and work, but that the energy itself is strictly conserved.
Relativity
If Einstein's first 1905 paper put him on the scientific map, the third one made him a scientific celebrity.
In effect, Einstein merely asked a simple question about Faraday's law of electromagnetic induction, which says that a moving electric charge (such as is produced by an electric current flowing in a conductor) will create a magnetic field. Similarly, a moving magnetic field will induce an electric current. In either case, something has to be moving. Why, Einstein asked, does this motion have to be relative to that of the room in which the experiment is performed— that is, relative to the Earth? A stationary charge creates no field, but we know that there is really no such thing as a stationary charge, since the Earth itself is in motion; what, then, do motion and velocity ultimately relate to?
The answer, Einstein suggested, is that the only constant and unchanging velocity in the universe is that of light. This being so, the beam emitted by the headlight of a moving vehicle, for example, can travel no faster than the light coming from a stationary one. This in turn suggested (through the Lorentz transformation - we are leaving out a few steps here!) that mass, as well as velocity (and thus also, time) are relative in that they depend entirely on the motion of the observer. Two observers, moving at different velocities relative to each other, will report different masses for the same object, and will age at different rates. Further, the faster an object moves with respect to an observer, the greater is its mass, and the harder it becomes to accelerate it to a still greater velocity. As the velocity of an object approaches the speed of light, its mass approaches infinity, making it impossible for an object to move as fast as light.
According to Einstein, the speed of light is really the only speed in the universe. If you are sitting still, you are moving through the time dimension at the speed of light. If you are flying in an airplane, your motion along the three cartesian dimensions subtracts from that along the fourth (time) coordinate, with the result that time, for you, passes more slowly.
Relativity comes into chemistry in two rather indirect ways: it is responsible for the magnetic moment ("spin") of the electron, and in high-atomic weight atoms in which the electrons have especially high effective velocities, their greater [relativistic] masses cause them to be bound more tightly to the nucleus— accounting, among other things, for the color of gold, and for the unusual physical and chemical properties of mercury.
Mass-energy
Where does the additional mass of a moving body come from? Simply from the kinetic energy of the object; this equivalence of mass and energy, expressed by the famous relation \(E = mc^2\), is the best-known consequence of special relativity. The reason that photons alone can travel at the velocity of light is that these particles possess zero rest mass to start with. You can think of ordinary matter as "congealed energy", trapped by its possession of rest mass, whereas light is energy that has been liberated of its mass.
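A one-line illustration of the scale of \(E = mc^2\): the rest-mass energy locked up in a single gram of matter.

```python
c = 3.00e8     # speed of light, m/s
m = 1.0e-3     # one gram, in kg

E = m * c**2   # rest-mass energy, J
print(f"{E:.1e} J")   # 9.0e13 J
```

That is roughly the output of a large power station running for a day, which is why only a tiny fraction of rest mass needs to be converted in nuclear reactions to release enormous energies.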
5.3: Light, Particles, and Waves
Make sure you thoroughly understand the following essential ideas
- Cite two pieces of experimental evidence that demonstrate, respectively, the wave- and particle-like nature of light.
- Define the terms amplitude , wavelength , and frequency as they apply to wave phenomena.
- Give a qualitative description of electromagnetic radiation in terms of electric and magnetic fields.
- Be able to name the principal regions of the electromagnetic spectrum (X-rays, infrared region, etc.) and specify their sequence in terms of either wavelength or energy per photon.
- Describe the difference between line spectra and continuous spectra in terms of both their appearance and their origins.
- What is meant by the de Broglie wavelength of a particle? How will the particle's mass and velocity affect the wavelength?
- State the consequences of the Heisenberg uncertainty principle in your own words.
Our intuitive view of the "real world" is one in which objects have definite masses, sizes, locations and velocities. Once we get down to the atomic level, this simple view begins to break down. It becomes totally useless when we move down to the subatomic level and consider the lightest of all chemically-significant particles, the electron . The chemical properties of a particular kind of atom depend on the arrangement and behavior of the electrons which make up almost the entire volume of the atom. The electronic structure of an atom can only be determined indirectly by observing the manner in which atoms absorb and emit light. Light, as you already know, has wavelike properties, so we need to know something about waves in order to interpret these observations. But because the electrons are themselves quantum particles and therefore have wavelike properties of their own, we will find that an understanding of the behavior of electrons in atoms can only be gained through the language of waves.
The language of light
Atoms are far too small to see directly, even with the most powerful optical microscopes. But atoms do interact with and under some circumstances emit light in ways that reveal their internal structures in amazingly fine detail. It is through the "language of light" that we communicate with the world of the atom. This section will introduce you to the rudiments of this language.
Wave, particle, or what?
In the early 19th century, the English scientist Thomas Young carried out the famous double-slit experiment which demonstrated that a beam of light, when split into two beams and then recombined, will show interference effects that can only be explained by assuming that light is a wavelike disturbance. By 1820, Augustin Fresnel had put this theory on a sound mathematical basis, but the exact nature of the waves remained unclear until the 1860's when James Clerk Maxwell developed his electromagnetic theory.
But Einstein's 1905 explanation of the photoelectric effect showed that light also exhibits a particle-like nature. The photon is the smallest possible packet ( quantum ) of light; it has zero mass but a definite energy.
When light-wave interference experiments are conducted with extremely low intensities of light, the wave theory breaks down; instead of recording a smooth succession of interference patterns as shown above, an extremely sensitive detector sees individual pulses— that is, individual photons .
Note
Suppose we conduct the double-slit interference experiment using a beam of light so weak that only one photon at a time passes through the apparatus (it is experimentally possible to count single photons, so this is a practical experiment). Each photon passes through the first slit, and then through one or the other of the second set of slits, eventually striking the photographic film where it creates a tiny dot. If we develop the film after a sufficient number of photons have passed through, we find the very same interference pattern we obtained with higher-intensity light, whose behavior could be explained by wave interference.
There is something strange here. Each photon, acting as a particle, must pass through one or the other of the pair of slits, so we would expect to get only two groups of spots on the film, each opposite one of the two slits. Instead, it appears that each particle, on passing through one slit, "knows" about the other, and adjusts its final trajectory so as to build up a wavelike interference pattern.
It gets even stranger: suppose that we set up a detector to determine which slit a photon is heading for, and then block off the other slit with a shutter. We find that the photon sails straight through the open slit and onto the film without trying to create any kind of an interference pattern. Apparently, any attempt to observe the photon as a discrete particle causes it to behave like one.
One well-known physicist (Landé) suggested that perhaps we should coin a new word, wavicle , to reflect this duality.
Later on, virtually the same experiment was repeated with electrons, thus showing that particles can have wavelike properties (as the French physicist Louis de Broglie predicted in 1923), just as what were conventionally thought to be electromagnetic waves possess particle-like properties.
Is it a particle or is it a wave?
For large bodies (most atoms, baseballs, cars) there is no question: the wave properties are insignificant, and the laws of classical mechanics can adequately describe their behaviors. But for particles as tiny as electrons (quantum particles), the situation is quite different: instead of moving along well defined paths, a quantum particle seems to have an infinity of paths which thread their way through space, seeking out and collecting information about all possible routes, and then adjusting its behavior so that its final trajectory, when combined with that of others, produces the same overall effect that we would see from a train of waves of wavelength \(\lambda = h/(mv)\).
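The divide between the two regimes follows directly from \(\lambda = h/(mv)\). The baseball's mass and speed below are illustrative assumptions:

```python
h = 6.626e-34   # Planck's constant, J s

def de_broglie(m, v):
    """de Broglie wavelength lambda = h/(m*v), in meters."""
    return h / (m * v)

lam_electron = de_broglie(9.109e-31, 1.0e6)   # electron at 1e6 m/s
lam_baseball = de_broglie(0.145, 40.0)        # 145 g ball at 40 m/s (assumed)

print(f"electron: {lam_electron:.1e} m")   # atomic-scale wavelength
print(f"baseball: {lam_baseball:.1e} m")   # utterly negligible
```

The electron's wavelength is comparable to atomic dimensions, so wave effects dominate; the baseball's is some twenty orders of magnitude smaller than a nucleus, so classical mechanics suffices.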
Taking this idea of quantum indeterminacy to its most extreme, the physicist Erwin Schrödinger proposed a "thought experiment" in which the radioactive decay of an atom would initiate a chain of events that would lead to the death of a cat placed in a closed box. The atom has a 50% chance of decaying in an hour, meaning that its wave representation will contain both possibilities until an observation is made. The question, then, is will the cat be simultaneously in an alive-and-dead state until the box is opened? If so, this raises all kinds of interesting questions about the nature of being.
What you need to know about waves
We use the term "wave" to refer to a quantity which changes with time. Waves in which the changes occur in a repeating or periodic manner are of special importance and are widespread in nature; think of the motions of the ocean surface, the pressure variations in an organ pipe, or the vibrations of a plucked guitar string. What is interesting about all such repeating phenomena is that they can be described by the same mathematical equations.
Wave motion arises when a periodic disturbance of some kind is propagated through a medium; pressure variations through air, transverse motions along a guitar string, or variations in the intensities of the local electric and magnetic fields in space, which constitutes electromagnetic radiation. For each medium, there is a characteristic velocity at which the disturbance travels.
There are three measurable properties of wave motion: amplitude, wavelength, and frequency, the number of vibrations per second. The relation between the wavelength \(\lambda\) (Greek lambda) and frequency \(\nu\) (Greek nu) of a wave is determined by the propagation velocity \(v\):

\[v = \nu \lambda\]
What is the wavelength of the musical note A = 440 Hz when it is propagated through air, in which the velocity of sound is 343 m s⁻¹?

Solution

\[\lambda = \dfrac{v}{\nu} = \dfrac{343\; m\, s^{-1}}{440\; s^{-1}} = 0.78\; m\]
Light and electromagnetic radiation
Michael Faraday's discovery that electric currents could give rise to magnetic fields and vice versa raised the question of how these effects are transmitted through space. Around 1870, the Scottish physicist James Clerk Maxwell (1831-1879) showed that this electromagnetic radiation can be described as a train of perpendicular oscillating electric and magnetic fields.
Maxwell was able to calculate the speed at which electromagnetic disturbances are propagated, and found that this speed is the same as that of light. He therefore proposed that light is itself a form of electromagnetic radiation whose wavelength range forms only a very small part of the entire electromagnetic spectrum. Maxwell's work served to unify what were once thought to be entirely separate realms of wave motion.
The electromagnetic spectrum
The electromagnetic spectrum is conventionally divided into various parts as depicted in the diagram below, in which the four logarithmic scales correlate the wavelength of electromagnetic radiation with its frequency in hertz (units of s⁻¹) and the energy per photon, expressed both in joules and electron-volts.
The other items shown on the diagram, from the top down, are:
- the names used to denote the various wavelength ranges of radiation (you should know their names and the order in which they appear)
- the principal effects of the radiation on atoms and molecules
- the peaks of thermal radiation emitted by black bodies at three different temperatures
Electromagnetic radiation and chemistry. It's worth noting that radiation in the ultraviolet range can have direct chemical effects by ionizing atoms and disrupting chemical bonds. Longer-wavelength radiation can interact with atoms and molecules in ways that provide a valuable means of identifying them and revealing particular structural features.
Energy units and magnitudes
It is useful to develop some feeling for the various magnitudes of energy that we must deal with. The basic SI unit of energy is the Joule ; the appearance of this unit in Planck's constant h allows us to express the energy equivalent of light in joules. For example, light of wavelength 500 nm, which appears blue-green to the human eye, would have a frequency of
\[ \nu = \dfrac{c}{\lambda} = \dfrac{2.998 \times 10^{8}\; m\; s^{-1}}{500 \times 10^{-9}\; m} = 6.0 \times 10^{14}\; s^{-1}\]
The quantum of energy carried by a single photon of this frequency is
\[ E = h\nu = (6.626 \times 10^{-34}\; J\; s)(6.0 \times 10^{14}\; s^{-1}) = 4.0 \times 10^{-19}\; J\]
Another energy unit that is commonly employed in atomic physics is the electron volt ; this is the kinetic energy that an electron acquires upon being accelerated across a 1-volt potential difference. The relationship 1 eV = 1.6022E–19 J gives an energy of 2.5 eV for the photons of blue-green light.
Two small flashlight batteries will produce about 2.5 volts, and thus could, in principle, give an electron about the same amount of kinetic energy that blue-green light can supply. Because the energy produced by a battery derives from a chemical reaction, this quantity of energy is representative of the magnitude of the energy changes that accompany chemical reactions.
In more familiar terms, one mole of 500-nm photons would have an energy equivalent of Avogadro's number times 4E–19 J, or 240 kJ per mole. This is comparable to the amount of energy required to break some chemical bonds. Many substances are able to undergo chemical reactions following light-induced disruption of their internal bonding; such molecules are said to be photochemically active .
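The chain of conversions in the last few paragraphs (wavelength to frequency, joules, electron volts, and kJ per mole) can be sketched numerically; the constants below are standard values rounded to four figures:

```python
# Energy bookkeeping for a 500-nm (blue-green) photon.
h  = 6.626e-34    # Planck's constant, J s
c  = 2.998e8      # speed of light, m/s
NA = 6.022e23     # Avogadro's number, mol^-1
eV = 1.6022e-19   # joules per electron volt

lam = 500e-9                    # wavelength, m
nu  = c / lam                   # frequency, s^-1
E   = h * nu                    # energy per photon, J

print(f"frequency:  {nu:.2e} s^-1")               # ~6.0E14 s^-1
print(f"per photon: {E:.2e} J = {E/eV:.2f} eV")   # ~4.0E-19 J, ~2.5 eV
print(f"per mole:   {E*NA/1e3:.0f} kJ/mol")       # ~240 kJ/mol
```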
Spectra: Interaction of light and matter
Continuous spectra
Any body whose temperature is above absolute zero emits radiation covering a broad range of wavelengths. At very low temperatures the predominant wavelengths are in the radio and microwave regions. As the temperature increases, the wavelengths decrease; at room temperature, most of the emission is in the infrared.
At still higher temperatures, objects begin to emit in the visible region, at first in the red, and then moving toward the blue as the temperature is raised. These thermal emission spectra are described as continuous spectra , since all wavelengths within the broad emission range are present.
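The temperature dependence described above can be made quantitative with Wien's displacement law, λ_max = b/T, which is not derived in this text but is the standard result for the peak of a thermal emission spectrum:

```python
# Peak emission wavelength of a black body at several temperatures,
# using Wien's displacement law λ_max = b/T.
b = 2.898e-3   # Wien's displacement constant, m·K

for T in (300, 3000, 6000):    # room temperature, red-hot metal, the Sun
    lam_max = b / T * 1e9      # peak wavelength in nm
    print(f"T = {T:5d} K → λ_max ≈ {lam_max:.0f} nm")
```

The peak moves from the far infrared at room temperature into the visible near 6000 K, consistent with the description above.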
The source of thermal emission most familiar to us is the Sun. When sunlight is refracted by rain droplets into a rainbow or by a prism onto a viewing screen, we see the visible part of the spectrum.
Red hot, white hot, blue hot... your rough guide to temperatures of hot objects.
Line spectra
Heat a piece of iron up to near its melting point and it will emit a broad continuous spectrum that the eye perceives as orange-yellow. But if you zap the iron with an electric spark, some of the iron atoms will vaporize and have one or more of their electrons temporarily knocked out of them. As they cool down the electrons will re-combine with the iron ions, losing energy as they move in toward the nucleus and giving up this excess energy as light. The spectrum of this light is anything but continuous; it consists of a series of discrete wavelengths which we call lines .
Each chemical element has its own characteristic line spectrum which serves very much like a "fingerprint" capable of identifying a particular element in a complex mixture. Shown below is what you would see if you could look at several different atomic line spectra directly.
Atomic line spectra are extremely useful for identifying small quantities of different elements in a mixture.
- Companies that own large fleets of trucks and buses regularly submit their crankcase engine oil samples to spectrographic analysis. If they find high levels of certain elements (such as vanadium) that occur only in certain alloys, this can signal that certain parts of the engine are undergoing severe wear. This allows the mechanical staff to take corrective action before engine failure occurs.
- Several elements (Rb, Cs, Tl) were discovered by observing spectral lines that did not correspond to any of the then-known elements. Helium, which is present only in traces on Earth, was first discovered by observing the spectrum of the Sun.
- A more prosaic application of atomic spectra is determination of the elements present in stars.
If you live in a city, you probably see atomic line light sources every night! "Neon" signs are the most colorful and spectacular, but high-intensity street lighting is the most widespread source. A look at the emission spectrum (above) of sodium explains the intense yellow color of these lamps. The spectrum of mercury (not shown) similarly has its strongest lines in the blue-green region.
Particles and waves
There is one more fundamental concept you need to know before we can get into the details of atoms and their spectra. If light has a particle nature, why should particles not possess wavelike characteristics? In 1923 a young French physicist, Louis de Broglie, published an argument showing that matter should indeed have a wavelike nature. The de Broglie wavelength of a body is inversely proportional to its momentum mv :
\[ \lambda =\dfrac{h}{mv}\]
If you explore the magnitude of the quantities in this equation (recall that h is around 10⁻³³ J s), it will be apparent that the wavelengths of all but the lightest bodies are insignificantly small fractions of their dimensions, so that the objects of our everyday world all have definite boundaries. Even individual atoms are sufficiently massive that their wave character is not observable in most kinds of experiments. Electrons, however, are another matter; the electron was in fact the first particle whose wavelike character was seen experimentally, following de Broglie's prediction. Its small mass (9.1E–31 kg) made it an obvious candidate, and velocities of around 100 km/s are easily obtained, yielding a value of λ in the above equation that well exceeds what we think of as the "radius" of the electron. At such velocities the electron behaves as if it is "spread out" to atomic dimensions; a beam of these electrons can be diffracted by the ordered rows of atoms in a crystal in much the same way as visible light is diffracted by the closely-spaced grooves of a CD recording.
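The magnitudes claimed here are easy to verify; the electron values below are the ones quoted in the text, while the baseball is our own illustrative addition:

```python
# de Broglie wavelengths λ = h/(mv) for an electron and a macroscopic object.
h = 6.626e-34   # Planck's constant, J s

m_e, v_e = 9.1e-31, 1.0e5    # electron: mass (kg), velocity (100 km/s)
m_b, v_b = 0.145, 40.0       # baseball (hypothetical example): kg, m/s

print(f"electron: λ = {h/(m_e*v_e):.1e} m")   # ~7e-9 m, atomic dimensions
print(f"baseball: λ = {h/(m_b*v_b):.1e} m")   # utterly negligible
```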
Electron diffraction has become an important tool for investigating the structures of molecules and of solid surfaces.
A more familiar exploitation of the wavelike properties of electrons is seen in the electron microscope , whose utility depends on the fact that the wavelength of the electrons is much less than that of visible light, thus allowing the electron beam to reveal detail on a correspondingly smaller scale.
The uncertainty principle
In 1927, the German physicist Werner Heisenberg pointed out that the wave nature of matter leads to a profound and far-reaching conclusion: no method of observation, however perfectly it is carried out, can reveal both the exact location and momentum (and thus the velocity ) of a particle. This is the origin of the widely known concept that the very process of observation will change the value of the quantity being observed. The Heisenberg principle can be expressed mathematically by the inequality
\[ \Delta{x}\Delta{p} \geq \dfrac{h}{2\pi}\]
in which the \(\Delta\) (deltas) represent the uncertainties with which the location and momentum are known.
Note
Suppose that you wish to measure the exact location of a particle that is at rest (zero momentum). To accomplish this, you must "see" the particle by illuminating it with light or other radiation. But the light acts like a beam of photons, each of which possesses the momentum h/λ in which λ is the wavelength of the light. When a photon collides with the particle, it transfers some of its momentum to the particle, thus altering both its position and momentum.
Notice how the form of this expression predicts that if the location of an object is known exactly (\(\Delta{x} = 0\)), then the uncertainty in the momentum must be infinite, meaning that nothing at all about the velocity can be known. Similarly, if the velocity were specified exactly, then the location would be entirely uncertain and the particle could be anywhere. One interesting consequence of this principle is that even at a temperature of absolute zero, the molecules in a crystal must still possess a small amount of zero point vibrational motion , sufficient to limit the precision to which we can measure their locations in the crystal lattice.
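The zero-point argument can be given a rough numerical form: confining an electron to a region of atomic size forces a minimum momentum uncertainty on it. The sketch below uses the h/2π form of the principle adopted in this text:

```python
import math

h  = 6.626e-34   # Planck's constant, J s
me = 9.1e-31     # electron mass, kg

dx = 1.0e-10                   # confinement region, about one atomic diameter (m)
dp = h / (2 * math.pi * dx)    # minimum momentum uncertainty, kg m/s
dv = dp / me                   # corresponding velocity uncertainty, m/s

print(f"Δp ≈ {dp:.1e} kg m/s")
print(f"Δv ≈ {dv:.1e} m/s")    # of order 1E6 m/s
```

An electron pinned down to atomic dimensions cannot, even in principle, be slower than about a thousand kilometers per second.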
An equivalent formulation of the uncertainty principle relates the uncertainties associated with a measurement of the energy of a system to the time \(\Delta{t}\) taken to make the measurement:
\[ \Delta{E}\Delta{t} \geq \dfrac{h}{2 \pi}\]
The "uncertainty" referred to here goes much deeper than merely limiting our ability to observe the quantity \(\Delta{x}\Delta{p}\) to a greater precision than \(h /2\pi\). It means, rather, that this product has no exact value, nor, by extension, do position and momentum on a microscopic scale. A more appropriate term would be indeterminacy , which is closer to Heisenberg's original word Ungenauigkeit .
The revolutionary nature of Heisenberg's uncertainty principle soon extended far beyond the arcane world of physics; its consequences quickly entered the realm of ideas and have inspired numerous creative works in the arts, few of which really have much to do with the Principle. A possible exception is Michael Frayn's widely acclaimed play Copenhagen, which has brought a sense of Heisenberg's thinking to a wide segment of the public.
5.4: The Bohr Atom
Make sure you thoroughly understand the following essential ideas:
- Describe the Thomson, Rutherford, and early planetary models of the atom, and explain why the latter is not consistent with classical physics.
- State the major concepts that distinguished Bohr's model of the atom from the earlier planetary model.
- Give an example of a mechanical standing wave; state the meaning and importance of its boundary conditions.
- Sketch out a diagram showing how the concept of a standing wave applies to the description of the electron in a hydrogen atom.
- What is an atomic line emission spectrum? What is the significance of the continuum region of an emission spectrum? Sketch out a drawing showing the essentials of such a spectrum, including the ionization limit and the continuum.
- Describe the way in which Bohr's quantum numbers explain the observed spectrum of a typical atom.
- Explain the relation between the absorption and emission spectrum of an atom.
Our goal in this unit is to help you understand how the arrangement of the periodic table of the elements must follow as a necessary consequence of the fundamental laws of the quantum behavior of matter. The modern theory of the atom makes full use of the wave-particle duality of matter. In order to develop and present this theory in a comprehensive way, we would require a number of mathematical tools that lie beyond the scope of this course. We will therefore present the theory in a semi-qualitative manner, emphasizing its results and their applications, rather than its derivation.
Models of the atom
Models are widely employed in science to help understand things that cannot be viewed directly. The idea is to imagine a simplified system or process that might be expected to exhibit the basic properties or behavior of the real thing, and then to test this model against more complicated examples and modify it as necessary. Although one is always on shaky philosophical ground in trying to equate a model with reality, there comes a point when the difference between them becomes insignificant for most practical purposes.
The planetary model
The demonstration by J.J. Thomson in 1897 that all atoms contain units of negative electric charge led to the first science-based model of the atom, which envisaged the electrons being spread out uniformly throughout the spherical volume of the atom. Ernest Rutherford, a New Zealander who started out as Thomson's student at Cambridge, distrusted this "plum pudding" model (as he called it) and soon put it to rest; Rutherford's famous alpha-ray bombardment experiment (carried out in 1909 by his students Hans Geiger and Ernest Marsden) showed that nearly all the mass of the atom is concentrated in an extremely small (and thus extremely dense) body called the nucleus. This led him to suggest the planetary model of the atom, in which the electrons revolve in orbits around the nuclear "sun".
Even though the planetary model has long since been discredited, it seems to have found a permanent place in popular depictions of the atom, and certain aspects of it remain useful in describing and classifying atomic structure and behavior. The planetary model of the atom assumed that the electrostatic attraction between the central nucleus and the electron is exactly balanced by the centrifugal force created by the revolution of the electron in its orbit. If this balance were not present, the electron would either fall into the nucleus, or it would be flung out of the atom.
The difficulty with this picture is that it is inconsistent with a well established fact of classical electrodynamics which says that whenever an electric charge undergoes a change in velocity or direction (that is, acceleration , which must happen if the electron circles around the nucleus), it must continually radiate energy. If electrons actually followed such a trajectory, all atoms would act as miniature broadcasting stations. Moreover, the radiated energy would come from the kinetic energy of the orbiting electron; as this energy gets radiated away, there is less centrifugal force to oppose the attractive force due to the nucleus. The electron would quickly fall into the nucleus, following a trajectory that became known as the "death spiral of the electron". According to classical physics, no atom based on this model could exist for more than a brief fraction of a second.
Bohr's Model
Niels Bohr was a brilliant Danish physicist who came to dominate the world of atomic and nuclear physics during the first half of the twentieth century. Bohr suggested that the planetary model could be saved if one new assumption were made: certain "special states of motion" of the electron, corresponding to different orbital radii, would not result in radiation, and could therefore persist indefinitely without the electron falling into the nucleus. Specifically, Bohr postulated that the angular momentum of the electron, mvr (the product of its mass, orbital velocity, and orbital radius \(r\)), is restricted to values that are integral multiples of \(h/2\pi\). The radius of one of these allowed Bohr orbits is given by
\[r=\dfrac{nh}{2\pi m v}\]
in which h is Planck's constant, m is the mass of the electron, v is the orbital velocity, and n can have only the integer values 1, 2, 3, etc. The most revolutionary aspect of this assumption was its use of the variable integer n ; this was the first application of the concept of the quantum number to matter. The larger the value of n , the larger the radius of the electron orbit, and the greater the potential energy of the electron.
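Although the text does not carry the derivation further, eliminating the velocity v between Bohr's quantization condition mvr = nh/2π and the classical Coulomb force balance mv²/r = e²/(4πε₀r²) gives the allowed radii directly as r_n = n²ε₀h²/(πme²). The sketch below evaluates this expression (the function name is our own):

```python
import math

h    = 6.626e-34    # Planck's constant, J s
me   = 9.109e-31    # electron mass, kg
e    = 1.602e-19    # elementary charge, C
eps0 = 8.854e-12    # vacuum permittivity, F/m

def bohr_orbit_radius(n):
    """Radius of the n-th Bohr orbit, r_n = n^2 * eps0*h^2 / (pi*me*e^2)."""
    return n**2 * eps0 * h**2 / (math.pi * me * e**2)

for n in (1, 2, 3):
    print(f"n = {n}: r = {bohr_orbit_radius(n)*1e12:.1f} pm")
# n = 1 reproduces the Bohr radius, about 52.9 pm
```

The radii grow as n², so successive orbits spread rapidly outward from the nucleus.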
As the electron moves to orbits of increasing radius, it does so in opposition to the restoring force due to the positive nucleus, and its potential energy is thereby raised. This is entirely analogous to the increase in potential energy that occurs when any mechanical system moves against a restoring force— as, for example, when a rubber band is stretched or a weight is lifted.
Thus what Bohr was saying, in effect, is that the atom can exist only in certain discrete energy states: the energy of the atom is quantized . Bohr noted that this quantization nicely explained the observed emission spectrum of the hydrogen atom. The electron is normally in its smallest allowed orbit, corresponding to n = 1; upon excitation in an electrical discharge or by ultraviolet light, the atom absorbs energy and the electron gets promoted to higher quantum levels. These higher excited states of the atom are unstable, so after a very short time (around 10⁻⁹ s) the electron falls into lower orbits and finally into the innermost one, which corresponds to the atom's ground state . The energy lost on each jump is given off as a photon, and the frequency of this light provides a direct experimental measurement of the difference in the energies of the two states, according to the Planck-Einstein relationship \(E = h\nu\).
Vibrations, standing waves and bound states
Bohr's theory worked; it completely explained the observed spectrum of the hydrogen atom, and this triumph would later win him a Nobel prize. The main weakness of the theory, as Bohr himself was the first to admit, is that it could offer no good explanation of why these special orbits immunized the electron from radiating its energy away. The only justification for the proposal, other than that it seems to work, comes from its analogy to certain aspects of the behavior of vibrating mechanical systems.
Spectrum of a guitar string
In order to produce a tone when plucked, a guitar string must be fixed at each end (that is, it must be a bound system ) and must be under some tension. Only under these conditions will a transverse disturbance be countered by a restoring force (the string's tension) so as to set up a sustained vibration. Having the string tied down at both ends places a very important boundary condition on the motion: the only allowed modes of vibration are those whose wavelengths produce zero displacements at the bound ends of the string; if the string breaks or becomes unattached at one end, it becomes silent.
In its lowest-energy mode of vibration there is a single wave whose point of maximum displacement is placed at the center of the string. In musical terms, this corresponds to the fundamental note to which the string is tuned; in terms of the theory of vibrations, it corresponds to a "quantum number" of 1. Higher modes, known as overtones (and in music, as octaves ), contain 2, 3, 4 and more points of maximum displacement ( antinodes ) spaced evenly along the string, separated by points of zero displacement ( nodes ). These correspond to successively higher quantum numbers and higher energies.
The vibrational states of the string are quantized in the sense that an integral number of antinodes must be present. Note again that this condition is imposed by the boundary condition that the ends of the string, being fixed in place, must be nodes. Because the locations of the nodes and antinodes do not change as the string vibrates, the vibrational patterns are known as standing waves .
A similar kind of quantization occurs in other musical instruments; in each case the vibrations, whether of a stretched string, a column of air, or a stretched membrane, are restricted by their boundary conditions to a discrete set of standing-wave frequencies.
Standing waves live in places other than atoms and musical instruments: every time you turn on your microwave oven, a complex set of standing waves fills the interior. What is "waving" here is the alternating electromagnetic field as a function of location; the wave patterns are determined by the dimensions of the oven interior and by the objects placed within it. But the part of a pizza that happens to be located at a node would not get very hot, so all microwave ovens provide a mechanical means of rotating either the food (on a circular platform) or the microwave beam (by means of a rotating deflector) so that all parts will pass through high-amplitude parts of the waves.

Standing waves in the hydrogen atom
The analogy with the atom can be seen by imagining a guitar string that has been closed into a circle. The circle is the electron orbit, and the boundary condition is that the waves must not interfere with themselves along the circle. This condition can only be met if the circumference of an orbit can exactly accommodate an integral number of wavelengths. Thus only certain discrete orbital radii and energies are allowed, as depicted in the two diagrams below.
Unbound states
If a guitar string is plucked so harshly that it breaks, the restoring force and boundary conditions that restricted its motions to a few discrete harmonically related frequencies are suddenly absent; with no constraint on its movement, the string's mechanical energy is dissipated in a random way without musical effect. In the same way, if an atom absorbs so much energy that the electron is no longer bound to the nucleus, then the energy states of the atom are no longer quantized; instead of the line spectrum associated with discrete energy jumps, the spectrum degenerates into a continuum in which all possible electron energies are allowed. The energy at which the ionization continuum of an atom begins is easily observed spectroscopically, and serves as a simple method of experimentally measuring the energy with which the electron is bound to the atom.
Spectrum of the hydrogen atom
Hydrogen, the simplest atom, also has the simplest line spectrum (line spectra were briefly introduced in the previous chapter.) The hydrogen spectrum was the first to be observed (by Anders Ångström in the 1860's). Johann Balmer, a Swiss high school teacher, discovered a simple mathematical formula that related the wavelengths of the various lines that are observable in the visible and near-uv parts of the spectrum. This set of lines is now known as the Balmer Series .
The four lines in the visible spectrum (designated by α through δ) were the first observed by Balmer. Notice how the lines crowd together as they approach the ionization limit in the near-ultraviolet part of the spectrum. Once the electron has left the atom, it is in an unbound state and its energy is no longer quantized. When such electrons return to the atom, they possess random amounts of kinetic energies over and above the binding energy. This reveals itself as the radiation at the short-wavelength end of the spectrum known as the continuum radiation. Other named sets of lines in the hydrogen spectrum are the Lyman series (in the ultraviolet) and the Paschen, Brackett, Pfund and Humphreys series in the infrared.
How the Bohr model explains the hydrogen line spectrum
Each spectral line represents an energy difference between two possible states of the atom. Each of these states corresponds to the electron in the hydrogen atom being in an "orbit" whose radius increases with the quantum number n . The lowest allowed value of n is 1; because the electron is as close to the nucleus as it can get, the energy of the system has its minimum (most negative) value. This is the "normal" (most stable) state of the hydrogen atom, and is called the ground state .
If a hydrogen atom absorbs radiation whose energy corresponds to the difference between that of n =1 and some higher value of n , the atom is said to be in an excited state. Excited states are unstable and quickly decay to the ground state, but not always in a single step. For example, if the electron is initially promoted to the n =3 state, it can decay either to the ground state or to the n =2 state, which then decays to n =1. Thus this single n =1→3 excitation can result in the three emission lines depicted in the diagram above, corresponding to n =3→1, n =3→2, and n =2→1.
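These three transitions can be turned into wavelengths using the Bohr energy levels E_n = −13.6 eV/n² (a standard result quoted here as an assumption, since this section does not derive it) together with ΔE = hc/λ:

```python
h, c, eV = 6.626e-34, 2.998e8, 1.602e-19   # J s, m/s, J per eV

def line_wavelength_nm(n_hi, n_lo):
    """Emission wavelength for the n_hi → n_lo transition in hydrogen."""
    dE = 13.6 * eV * (1/n_lo**2 - 1/n_hi**2)   # photon energy, J
    return h * c / dE * 1e9                    # wavelength, nm

for hi, lo in ((3, 1), (3, 2), (2, 1)):
    print(f"{hi} → {lo}: λ ≈ {line_wavelength_nm(hi, lo):.0f} nm")
# 3→2 gives ~656 nm (the red Balmer α line); 3→1 and 2→1 lie in the ultraviolet
```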
If, instead, enough energy is supplied to the atom to completely remove the electron, we end up with a hydrogen ion and an electron. When these two particles recombine (H + + e – → H), the electron can initially find itself in a state corresponding to any value of n , leading to the emission of many lines.
The lines of the hydrogen spectrum can be organized into different series according to the value of n at which the emission terminates (or at which absorption originates.) The first few series are named after their discoverers. The most well-known (and first-observed) of these is the Balmer series, which lies mostly in the visible region of the spectrum. The Lyman lines are in the ultraviolet, while the other series lie in the infrared. The lines in each series crowd together as they converge toward the series limit which corresponds to ionization of the atom and is observed as the beginning of the continuum emission. Note that the ionization energy of hydrogen (from its ground state) is 1312 kJ mol –1 . Although an infinite number of n -values are possible, the number of observable lines is limited by our ability to resolve them as they converge into the continuum; this number is around a thousand.
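The quoted molar ionization energy follows directly from the 13.6-eV binding energy of the ground-state electron; a one-line check:

```python
eV = 1.602e-19   # J per electron volt
NA = 6.022e23    # Avogadro's number, mol^-1

E_ion = 13.6 * eV * NA / 1e3   # ionization energy, kJ per mole
print(f"{E_ion:.0f} kJ/mol")   # → 1312 kJ/mol
```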
Emission and absorption spectra
The line emission spectra we have been discussing are produced when electrons which had previously been excited to values of n greater than 1 fall back to the n =1 ground state, either directly, or by way of intermediate- n states. But if light from a continuous source (a hot body such as a star) passes through an atmosphere of hydrogen (such as the star's outer atmosphere), those wavelengths that correspond to the allowed transitions are absorbed, and appear as dark lines superimposed on the continuous spectrum.
These dark absorption lines were first observed by William Wollaston in his study of the solar spectrum. In 1814, Joseph von Fraunhofer (1787-1826) re-discovered them and made accurate measurements of 814 lines, including the four most prominent of the Balmer lines.
5.5: The Quantum Atom
Make sure you thoroughly understand the following essential ideas
- State the fundamental distinction between Bohr's original model of the atom and the modern orbital model.
- Explain the role of the uncertainty principle in preventing the electron from falling into the nucleus.
- State the physical meaning of the principal quantum number of an electron orbital, and make a rough sketch of the shape of the probability- vs -distance curve for any value of n .
- Sketch out the shapes of an s , p , or a typical d orbital.
- Describe the significance of the magnetic quantum number as it applies to a p orbital.
- State the Pauli exclusion principle .
The picture of the atom that Niels Bohr developed in 1913 served as the starting point for modern atomic theory, but it was not long before Bohr himself recognized that the advances in quantum theory that occurred through the 1920's required an even more revolutionary change in the way we view the electron as it exists in the atom. This lesson will attempt to show you this view— or at least the portion of it that can be appreciated without the aid of more than a small amount of mathematics.
From Orbits to Orbitals
About ten years after Bohr had developed his theory, de Broglie showed that the electron should have wavelike properties of its own, thus making the analogy with the mechanical theory of standing waves somewhat less artificial. One serious difficulty with the Bohr model still remained, however: it was unable to explain the spectrum of any atom more complicated than hydrogen. A refinement suggested by Sommerfeld assumed that some of the orbits are elliptical instead of circular, and invoked a second quantum number, l , that indicated the degree of ellipticity. This concept proved useful, and it also began to offer some correlation with the placement of the elements in the periodic table.
By 1926, de Broglie's theory of the wave nature of the electron had been experimentally confirmed, and the stage was set for its extension to all matter. At about the same time, three apparently very different theories that attempted to treat matter in general terms were developed. These were Schrödinger's wave mechanics, Heisenberg's matrix mechanics, and a more abstract theory of P.A.M. Dirac. These eventually were seen to be mathematically equivalent, and all continue to be useful.
Of these alternative treatments, the one developed by Schrödinger is the most easily visualized. Schrödinger started with the simple requirement that the total energy of the electron is the sum of its kinetic and potential energies:
\[ E = \underbrace{\dfrac{mv^2}{2}}_{\text{kinetic energy}} + \underbrace{\dfrac{-e^2}{r}}_{\text{potential energy}} \label{5.6.1}\]
The second term represents the potential energy of an electron (whose charge is denoted by e ) at a distance r from a proton (the nucleus of the hydrogen atom). In quantum mechanics it is generally easier to deal with equations that use momentum (\(p = mv\)) rather than velocity, so the next step is to make this substitution:
\[E = \dfrac{p^2}{2m} - \dfrac{e^2}{r} \label{5.6.2}\]
This is still an entirely classical relation, as valid for the waves on a guitar string as for those of the electron in a hydrogen atom. The third step is the big one: in order to take into account the wavelike character of the hydrogen atom, a mathematical expression that describes the position and momentum of the electron at all points in space is applied to both sides of the equation. The function, denoted by \(\Psi\), "modulates" the equation of motion of the electron so as to reflect the fact that the electron manifests itself with greater probability in some locations than at others. This yields the celebrated Schrödinger equation
\[ \left( \dfrac{p^2}{2m} - \dfrac{e^2}{r} \right) \Psi = E\Psi \label{5.6.3}\]
Physical significance of the wavefunction
How can such a simple-looking expression contain within it the quantum-mechanical description of an electron in an atom— and thus, by extension, of all matter? The catch, as you may well suspect, lies in discovering the correct form of Ψ , which is known as the wave function . As this name suggests, the value of Ψ is a function of location in space relative to that of the proton which is the source of the binding force acting on the electron. As in any system composed of standing waves, certain boundary conditions must be applied, and these are also contained in Ψ; the major ones are that the value of Ψ must approach zero as the distance from the nucleus approaches infinity, and that the function be continuous.
When the functional form of Ψ has been worked out, the Schrödinger equation is said to have been solved for a particular atomic system. The details of how this is done are beyond the scope of this course, but the consequences of doing so are extremely important to us. Once the form of Ψ is known, the allowed energies E of an atom can be predicted from the above equation. Soon after Schrödinger's proposal, his equation was solved for several atoms, and in each case the predicted energy levels agreed exactly with the observed spectra.
There is another very useful kind of information contained in Ψ. Recalling that its value depends on the location in space with respect to the nucleus of the atom, the square of this function, Ψ 2 , evaluated at any given point, represents the probability of finding the electron at that particular point. The significance of this cannot be overemphasized: although the electron remains a particle having a definite charge and mass, the question of "where" it is located is no longer meaningful. Any single experimental observation will reveal a definite location for the electron, but this will in itself have little significance; only a large number of such observations (similar to a series of multiple exposures of a photographic film) will yield meaningful results, which will show that the electron can "be" anywhere with at least some degree of probability. This does not mean that the electron is "moving around" to all of these places, but that (in accord with the uncertainty principle) the concept of location has limited meaning for a particle as small as the electron. If we count only those locations in space at which the probability of the electron manifesting itself exceeds some arbitrary value, we find that the Ψ function defines a definite three-dimensional region which we call an orbital .
Why doesn't the electron fall into the nucleus?
We can now return to the question which Bohr was unable to answer in 1912. Even the subsequent discovery of the wavelike nature of the electron and the analogy with standing waves in mechanical systems did not really answer the question; the electron is still a particle having a negative charge and is attracted to the nucleus.
The answer comes from the Heisenberg Uncertainty Principle, which says that a quantum particle such as the electron cannot simultaneously have sharply-defined values of location and of momentum (and thus kinetic energy). To understand the implications of this restriction, suppose that we place the electron in a small box. The walls of the box define the precision δ x to which the location is known; the smaller the box, the more exactly will we know the location of the electron. But as the box gets smaller, the uncertainty in the electron's kinetic energy will increase. As a consequence of this uncertainty, the electron will at times possess so much kinetic energy (the "confinement energy") that it may be able to penetrate the wall and escape the confines of the box.
This process is known as tunneling ; the tunnel effect is exploited in various kinds of semiconductor devices, and it is the mechanism whereby electrons jump between dissolved ions and the electrode in batteries and other electrochemical devices.
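To get a feeling for the size of this effect, the sketch below estimates the confinement ("zero-point") energy from the uncertainty principle by taking Δp ≈ ħ/Δx, so that E ≈ ħ²/(2mΔx²). The helper function name is our own invention, and the result is only an order-of-magnitude estimate:

```python
# Rough confinement ("zero-point") energy of an electron squeezed into a box of
# width dx, estimated from the uncertainty principle: dp ~ hbar/dx, E ~ dp^2/(2m).
HBAR = 1.054571817e-34   # J*s
M_E  = 9.1093837015e-31  # electron mass, kg
EV   = 1.602176634e-19   # J per eV

def confinement_energy_eV(dx_m):
    """Approximate kinetic energy (eV) of an electron confined to a width dx_m."""
    return HBAR**2 / (2 * M_E * dx_m**2) / EV

for dx_nm in (1.0, 0.1, 0.01):   # 1 nm, roughly atomic size, sub-atomic size
    print(f"dx = {dx_nm:5.2f} nm  ->  E ~ {confinement_energy_eV(dx_nm * 1e-9):8.2f} eV")
```

At atomic dimensions (about 0.1 nm) the confinement energy already amounts to a few electron-volts, comparable to chemical bond energies; squeezing the electron into a still smaller region raises this energy very steeply.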
The region near the nucleus can be thought of as an extremely small funnel-shaped box, the walls of which correspond to the electrostatic attraction that must be overcome if an electron confined within this region is to escape. As an electron is drawn toward the nucleus by electrostatic attraction, the volume to which it is confined diminishes rapidly. Because its location is now more precisely known, its kinetic energy must become more uncertain; the electron's kinetic energy rises more rapidly than its potential energy falls, so the electron is pushed back out to a distance corresponding to one of its allowed values of n .
We can also dispose of the question of why the orbiting electron does not radiate its kinetic energy away as it revolves around the nucleus. The Schrödinger equation completely discards any concept of a definite path or trajectory of a particle; what was formerly known as an "orbit" is now an "orbital", defined as the locations in space at which the probability of finding the electron exceeds some arbitrary value. It should be noted that this wavelike character of the electron coexists with its possession of a momentum, and thus of an effective velocity, even though its motion does not imply the existence of a definite path or trajectory that we associate with a more massive particle.
Orbitals
The modern view of atomic structure dismisses entirely the old but comfortable planetary view of electrons circling around the nucleus in fixed orbits. As so often happens in science, however, the old outmoded theory contains some elements of truth that are retained in the new theory. In particular, the old Bohr orbits still remain, albeit as spherical shells rather than as two-dimensional circles, but their physical significance is different: instead of defining the "paths" of the electrons, they merely indicate the locations in the space around the nucleus at which the probability of finding the electron has higher values. The electron retains its particle-like mass and momentum, but because the mass is so small, its wavelike properties dominate. The latter give rise to patterns of standing waves that define the possible states of the electron in the atom.
The quantum numbers
Modern quantum theory tells us that the various allowed states of existence of the electron in the hydrogen atom correspond to different standing wave patterns. In the preceding lesson we showed examples of standing waves that occur on a vibrating guitar string. The wave patterns of electrons in an atom are different in two important ways:
- Instead of indicating displacement of a point on a vibrating string, the electron waves represent the probability that an electron will manifest itself (appear to be located) at any particular point in space. (Note carefully that this is not the same as saying that "the electron is smeared out in space"; at any given instant in time, it is either at a given point or it is not.)
- The electron waves occupy all three dimensions of space, whereas a guitar string vibrates in only one dimension.
Aside from this, the similarities are striking. Each wave pattern is identified by an integer number n , which in the case of the atom is known as the principal quantum number . The value of \(n\) tells how many peaks of amplitude (antinodes) exist in that particular standing wave pattern; the more peaks there are, the higher the energy of the state.
The three simplest orbitals of the hydrogen atom are depicted above in pseudo-3D, in cross-section, and as plots of probability (of finding the electron) as a function of distance from the nucleus. The average radius of the electron probability is shown by the blue circles or plots in the two columns on the right. These radii correspond exactly to those predicted by the Bohr model.
Physical Significance of n
The energy of the electron is given by the formula
\[ E =\dfrac{-2 \pi^2 e^4 m}{h^2n^2} \label{5.6.4}\]
where
- \(e\) is the charge of the electron, \(m\) is its mass,
- \(h\) is Planck's constant, and
- \(n\) is the principal quantum number.
The negative sign ensures that the energy is always negative. Notice that this energy is inversely proportional to the square of \(n\), so that the energy rises toward zero as \(n\) becomes very large, but it can never exceed zero.
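Numerically, Equation \(\ref{5.6.4}\) is equivalent to the familiar \(E_n = -13.6\,\mathrm{eV}/n^2\) for hydrogen. The short sketch below tabulates the first few levels; the constant is standard, the function name is ours:

```python
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, in eV

def hydrogen_energy_eV(n):
    """Energy of the hydrogen level with principal quantum number n (always < 0)."""
    return -RYDBERG_EV / n**2

for n in range(1, 6):
    print(f"n = {n}:  E = {hydrogen_energy_eV(n):8.3f} eV")
```

The levels crowd together and approach zero from below as \(n\) grows, but never become positive.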
Equation \(\ref{5.6.4}\) was actually part of Bohr's original theory and is still applicable to the hydrogen atom, although not to atoms with two or more electrons. In the Bohr model, each value of \(n\) corresponded to an orbit of a different radius. The larger the orbital radius, the higher the energy of the electron; the inverse relationship between electrostatic potential energy and distance is reflected in the inverse-square relation between the energy and \(n\) in the above formula. Although the concept of a definite trajectory or orbit of the electron is no longer tenable, the same orbital radii that relate to the different values of n in Bohr's theory now have a new significance: they give the average distance of the electron from the nucleus. The averaging process must encompass several probability peaks in the case of higher values of \(n\). The spatial distribution of these probability maxima defines the particular orbital.
This physical interpretation of the principal quantum number as an index of the average distance of the electron from the nucleus turns out to be extremely useful from a chemical standpoint, because it relates directly to the tendency of an atom to lose or gain electrons in chemical reactions.
The Angular Momentum Quantum Number
The electron wave functions that are derived from Schrödinger's theory are characterized by several quantum numbers. The first one, n , describes the nodal behavior of the probability distribution of the electron, and correlates with its potential energy and average distance from the nucleus as we have just described.
The theory also predicts that orbitals having the same value of n can differ in shape and in their orientation in space. The quantum number l , known as the angular momentum quantum number , determines the shape of the orbital. (More precisely, l determines the number of angular nodes, that is, the number of regions of zero probability encountered in a 360° rotation around the center.)
When l = 0, the orbital is spherical in shape. If l = 1, the orbital is elongated into something resembling a figure-8 shape, and higher values of l correspond to still more complicated shapes— but note that the number of peaks in the radial probability distributions (below) decreases with increasing l. The possible values that l can take are strictly limited by the value of the principal quantum number; l can be no greater than n – 1. This means that for n = 1, l can only have the single value zero which corresponds to a spherical orbital. For historical reasons, the orbitals corresponding to different values of l are designated by letters, starting with s for l = 0, p for l = 1, d for l = 2, and f for l = 3.
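The rule \(l \le n-1\), together with the letter designations, is easy to enumerate; a small illustrative sketch:

```python
L_LETTERS = "spdfghi"  # conventional letters for l = 0, 1, 2, 3, ...

def subshells(n):
    """Subshell labels allowed for principal quantum number n (l runs 0 .. n-1)."""
    return [f"{n}{L_LETTERS[l]}" for l in range(n)]

for n in range(1, 5):
    print(n, subshells(n))
# n = 1 allows only 1s; n = 4 allows 4s, 4p, 4d, 4f
```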
The average distance of the electron from the nucleus is given by
\[ \bar{r} = (52.9\; \mathrm{pm}) \dfrac{n^2}{Z} \left[ \dfrac{3}{2} - \dfrac{l(l+1)}{2n^2}\right]\]
in which \(Z\) is the nuclear charge of the atom, which of course is unity for hydrogen.
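As a numerical check, the standard form of this expression, \( \bar{r} = a_0 \frac{n^2}{Z}\left[\frac{3}{2} - \frac{l(l+1)}{2n^2}\right]\) with the Bohr radius \(a_0 = 52.9\) pm, reproduces the familiar hydrogen averages; the function name below is ours:

```python
A0_PM = 52.9  # Bohr radius, picometres

def mean_radius_pm(n, l, Z=1):
    """Average electron-nucleus distance for a hydrogen-like (n, l) orbital, in pm."""
    return A0_PM * n**2 / Z * (1.5 - l * (l + 1) / (2 * n**2))

print(mean_radius_pm(1, 0))  # 1s of hydrogen: 1.5 * a0, about 79.4 pm
print(mean_radius_pm(2, 1))  # 2p of hydrogen: 5 * a0, about 264.5 pm
```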
The Magnetic Quantum Number
An s -orbital, corresponding to l = 0, is spherical in shape and therefore has no special directional properties. The probability cloud of a p orbital is aligned principally along an axis extending along any of the three directions of space. The additional quantum number m is required to specify the particular direction along which the orbital is aligned.
"Direction in space" has no meaning in the absence of a force field that serves to establish a reference direction. For an isolated atom there is no such external field, and for this reason there is no distinction between the orbitals having different values of \(m\). If the atom is placed in an external magnetic or electrostatic field, a coordinate system is established, and the orbitals having different values of m will split into slightly different energy levels. This effect was first seen in the case of a magnetic field, and this is the origin of the term magnetic quantum number. In chemistry, however, electrostatic fields are much more important for defining directions at the atomic level because it is through such fields that nearby atoms in a molecule interact with each other. The electrostatic field created when other atoms or ions come close to an atom can cause the energies of orbitals having different direction properties to split up into different energy levels; this is the origin of the colors seen in many inorganic salts of transition elements, such as the blue color of copper sulfate.
The quantum number m can assume 2 l + 1 values for each value of l , from – l through 0 to + l . When l = 0 the only possible value of m will also be zero, and for the p orbital (l = 1), m can be –1, 0, and +1. Higher values of l introduce more complicated orbital shapes which give rise to more possible orientations in space, and thus to more values of m .
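The 2 l + 1 counting rule can be spelled out directly:

```python
def m_values(l):
    """Allowed magnetic quantum numbers for a given l: -l, ..., 0, ..., +l."""
    return list(range(-l, l + 1))

for l, letter in zip(range(4), "spdf"):
    ms = m_values(l)
    print(f"{letter}: {len(ms)} orbital(s), m = {ms}")
# s: 1 orbital; p: 3 orbitals; d: 5 orbitals; f: 7 orbitals
```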
Electron Spin and the Exclusion Principle
Certain fundamental particles have associated with them a magnetic moment that can align itself in either of two directions with respect to an external magnetic field. The electron is one such particle, and the direction of its magnetic moment is called its spin .
A basic principle of modern physics states that for particles such as electrons that possess half-integral values of spin, no two of them can be in identical quantum states within the same system. The quantum state of a particle is defined by the values of its quantum numbers, so what this means is that no two electrons in the same atom can have the same set of quantum numbers. This is known as the Pauli exclusion principle, named after the Austrian physicist Wolfgang Pauli (1900-1958, Nobel Prize 1945).
The mechanical analogy implied by the term spin is easy to visualize, but should not be taken literally . Physical rotation of an electron is meaningless. However, the coordinates of the electron's wave function can be rotated mathematically; when this is done, it is found that a rotation of 720° is required to restore the function to its initial value— rather weird, considering that a 360° rotation will leave any extended body unchanged! Electron spin is basically a relativistic effect in which the electron's momentum distorts local space and time. It has no classical counterpart and thus cannot be visualized other than through mathematics.
The exclusion principle was discovered empirically and was placed on a firm theoretical foundation by Pauli in 1925. A complete explanation requires some familiarity with quantum mechanics, so all we will say here is that if two electrons possess the same quantum numbers n, l, m and s (defined below), the wave function that describes the state of existence of the two electrons together becomes zero, which means that this is an "impossible" situation.
A given orbital is characterized by a fixed set of the quantum numbers n, l , and m . The electron spin itself constitutes a fourth quantum number s , which can take the two values +½ and –½. Thus a given orbital can contain two electrons having opposite spins , which "cancel out" to produce zero magnetic moment. Two such electrons in a single orbital are often referred to as an electron pair .
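Putting the four quantum numbers together, we can enumerate every distinct state available in a shell; with two spin values per orbital (written here as ±½), the capacity of shell n comes out to 2 n ². A small sketch with a helper name of our own:

```python
from fractions import Fraction

def shell_states(n):
    """All allowed (n, l, m, s) combinations for principal quantum number n."""
    half = Fraction(1, 2)
    return [(n, l, m, s)
            for l in range(n)              # l = 0 .. n-1
            for m in range(-l, l + 1)      # 2l + 1 orientations
            for s in (+half, -half)]       # two spins per orbital

for n in (1, 2, 3):
    print(f"n = {n}: {len(shell_states(n))} states")  # 2, 8, 18, i.e. 2*n**2
```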
If it were not for the exclusion principle, the atoms of all elements would behave in the same way, and there would be no need for a science of Chemistry!
As we have seen, the lowest-energy standing wave pattern the electron can assume in an atom corresponds to n =1, which describes the state of the single electron in hydrogen, and of the two electrons in helium. Since the quantum numbers m and l are zero for n =1, the pair of electrons in the helium orbital have the values ( n, l, m, s ) = (1,0,0,+½) and (1,0,0,–½); that is, they differ only in spin. These two sets of quantum numbers are the only ones that are possible for an n =1 orbital. The additional electrons in atoms beyond helium must go into higher-energy ( n >1) orbitals. Electron wave patterns corresponding to these greater values of n are concentrated farther from the nucleus, with the result that these electrons are less tightly bound to the atom and are more accessible to interaction with the electrons of neighboring atoms, thus influencing their chemical behavior. If it were not for the Pauli principle, all the electrons of every element would be in the lowest-energy n =1 state, and the differences in the chemical behavior of the different elements would be minimal. Chemistry would certainly be a simpler subject, but it would not be very interesting!

5.6: Atomic Electron Configurations
Make sure you thoroughly understand the following essential ideas:
- State the principal feature that distinguishes the energies of the excited states of a single-electron atom from those of atoms containing more than one electron.
- Explain why the first ionization energy of the helium atom is smaller than twice the first ionization energy of the hydrogen atom.
- Be able to write a plausible electron configuration for any atom having an atomic number less than 90.
In the previous section you learned that an electron standing-wave pattern characterized by the quantum numbers ( n,l,m ) is called an orbital . According to the Pauli exclusion principle , no two electrons in the same atom can have the same set of quantum numbers ( n,l,m,s ). This limits the number of electrons in a given orbital to two ( s = ±½), and it requires that an atom containing more than two electrons must place them in standing wave patterns corresponding to higher principal quantum numbers n , which means that these electrons will be farther from the nucleus and less tightly bound by it.
In this chapter, we will see how the Pauli restrictions on the allowable quantum numbers of electrons in an atom affect the electronic configuration of the different elements and, by influencing their chemical behavior, govern the structure of the periodic table.
One-electron atoms
Let us begin with atoms that contain only a single electron. Hydrogen is of course the only electrically neutral species of this kind, but by removing electrons from heavier elements we can obtain one-electron ions such as \(He^+\) and \(Li^{2+}\), etc. Each has a ground state configuration of 1 s 1 , meaning that its single electron exhibits a standing wave pattern governed by the quantum numbers n =1, m =0 and l =0, with the spin quantum number s undefined because there is no other electron to compare it with. All have simple emission spectra whose major features were adequately explained by Bohr's model.
The most important feature of a single-electron atom is that the energy of the electron depends only on the principal quantum number n . As the above diagram shows, the quantum numbers l and m have no effect on the energy; we say that all orbitals having a given value of n are degenerate . Thus the emission spectrum produced by exciting the electron to the n =2 level consists of a single line, not four lines. The wavelength of this emission line for the atoms H, He + and Li 2 + will diminish with atomic number because the greater nuclear charge will lower the energies of the various n levels. For the same reason, the energies required to remove an electron from these species increase rapidly as the nuclear charge increases, because the increasing attraction pulls the electron closer to the nucleus, thus producing an even greater attractive force.
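Because the level energies of a one-electron species scale as Z ², the n = 2 → 1 emission line moves to shorter wavelengths along the series H, He + , Li 2 + . A rough numerical illustration using \(E_n = -13.6\,Z^2/n^2\) eV and \(\lambda = hc/\Delta E\); the constants are standard, the helper name is ours:

```python
HC_EV_NM = 1239.842  # h*c expressed in eV*nm

def transition_wavelength_nm(Z, n_hi=2, n_lo=1):
    """Wavelength of the n_hi -> n_lo emission line of a one-electron species."""
    delta_E = 13.605693 * Z**2 * (1 / n_lo**2 - 1 / n_hi**2)  # photon energy, eV
    return HC_EV_NM / delta_E

for species, Z in (("H", 1), ("He+", 2), ("Li2+", 3)):
    print(f"{species:5s} 2->1 line: {transition_wavelength_nm(Z):6.1f} nm")
```

The wavelength shrinks as 1/Z²: hydrogen's 121.5 nm Lyman-α line becomes roughly 30 nm for He + and 14 nm for Li 2 + .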
Electron-Electron Repulsion
It takes 1312 kJ of energy to remove the electron from a mole of hydrogen atoms. What might we expect this value to be for helium? Helium contains two electrons, but its nucleus contains two protons; each electron "sees" both protons, so we might expect that the electrons of helium would be bound twice as strongly as the electron of hydrogen. The ionization energy of helium should therefore be twice 1312 kJ/mol, or 2624 kJ/mol. However, if one looks at the spectrum of helium, the continuum is seen to begin at a wavelength corresponding to an ionization energy of 2372 kJ/mol, or about 90% of the predicted value.
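The comparison amounts to a line of arithmetic (note that 2 × 1312 = 2624 kJ/mol):

```python
IE_H = 1312          # kJ/mol, first ionization energy of hydrogen
naive_He = 2 * IE_H  # prediction that ignores electron-electron repulsion
observed_He = 2372   # kJ/mol, from the helium spectrum

print(f"predicted: {naive_He} kJ/mol")
print(f"observed:  {observed_He} kJ/mol")
print(f"ratio:     {observed_He / naive_He:.2f}")  # about 0.90
```

The roughly 250 kJ/mol shortfall is a direct measure of the electron-electron repulsion in helium.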
Why are the electrons in helium bound less tightly than the +2 nuclear charge would lead us to expect? The answer is that there is another effect to consider: the repulsion between the two electrons; the resulting electron-electron repulsion subtracts from the force holding the electron to the nucleus, reducing the local binding of each.
Electron-electron repulsion is a major factor in both the spectra and chemical behavior of the elements heavier than hydrogen. In particular, it acts to "break the degeneracy" (split the energies) of orbitals having the same value of n but different l .
The diagram below shows how the energies of the s - and p -orbitals of different principal quantum numbers get split as the result of electron-electron repulsion. Notice the contrast with the similar diagram for one-electron atoms near the top of this page. The fact that electrons preferentially fill the lowest-energy empty orbitals is the basis of the rules for determining the electron configuration of the elements and of the structure of the periodic table.
The Aufbau rules
The German word Aufbau means "building up", and this term has traditionally been used to describe the manner in which electrons are assigned to orbitals as we carry out the imaginary task of constructing the atoms of elements having successively larger atomic numbers. In doing so, we are effectively "building up" the periodic table of the elements, as we shall shortly see.
How to play the Aufbau game

- Electrons occupy the lowest-energy available orbitals; lower-energy orbitals are filled before the higher ones.
- No more than two electrons can occupy any orbital.
- For the lighter elements, electrons will fill orbitals of the same type only one electron at a time, so that their spins are all unpaired. They will begin to pair up only after all the orbitals are half-filled. This principle, which is a consequence of the electrostatic repulsion between electrons, is known as Hund's rule .
- For the first 18 elements, up to the point where the 3 s and 3 p levels are completely filled, this scheme is entirely straightforward and leads to electronic configurations which you are expected to be able to work out for each of these elements.
The preceding diagram illustrates the main idea here. Each orbital is represented as a little box that can hold up to two electrons having opposite spins, which we designated by upward- or downward-pointing arrows. Electrons fill the lowest-energy boxes first, so that additional electrons are forced into wave-patterns corresponding to higher (less negative) energies. Thus in the above diagram, the "third" electron of lithium goes into the higher-energy 2s orbital, giving this element an electron configuration which we write 1s 2 2s 1 .
What is the electron configuration of the atom of phosphorus, atomic number 15?
The number of electrons filling the lowest-energy orbitals are:
1 s : 2 electrons; 2 s : 2 electrons; 2 p : 6 electrons; 3 s : 2 electrons. This adds up to 12 electrons. The remaining three electrons go into the 3 p orbital, so the complete electron configuration of P is 1 s 2 2 s 2 2 p 6 3 s 2 3 p 3 .
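The filling procedure can be automated with the (n + l) "diagonal" (Madelung) ordering of subshells. The sketch below is adequate for the lighter elements but deliberately ignores the d-block anomalies treated in the next section:

```python
def electron_configuration(Z):
    """Ground-state configuration by the Madelung (n + l) rule; ignores anomalies."""
    letters = "spdfghi"
    # Order subshells by n + l, breaking ties in favor of the smaller n.
    shells = sorted(((n, l) for n in range(1, 8) for l in range(n)),
                    key=lambda nl: (nl[0] + nl[1], nl[0]))
    parts, remaining = [], Z
    for n, l in shells:
        if remaining == 0:
            break
        e = min(remaining, 2 * (2 * l + 1))  # subshell capacity is 2(2l + 1)
        parts.append(f"{n}{letters[l]}{e}")
        remaining -= e
    return " ".join(parts)

print(electron_configuration(15))  # phosphorus: 1s2 2s2 2p6 3s2 3p3
```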
Energies of the highest occupied orbitals of the elements: This diagram illustrates the Aufbau rules as they are applied to all the elements. Note especially how the energies of the nd orbitals fall between the ( n +1) s and ( n +1) p orbitals so, for example, the 3 d orbitals begin to fill after the 4 s orbital is filled, but before electrons populate the 4 p orbitals. A similar relation exists with d - and f -orbitals.
It is very important that you understand this diagram and how it follows from the Pauli exclusion principle: You should be able to reproduce it from memory up to the 6 s level, because it forms the fundamental basis of the periodic table of the elements.
Bending the rules
Inspection of a table of electron configurations of the elements reveals a few apparent non-uniformities in the filling of the orbitals, as is illustrated here for the elements of the so-called first transition series in which the 3 d orbitals are being populated. These anomalies are a consequence of the very small energy differences between some of the orbitals, and of the reduced electron-electron repulsion when electrons remain unpaired (Hund's rule), as is evident in chromium, which contains six unpaired electrons.
The other anomaly here is copper, which "should" have the outer-shell configuration 3 d 9 4 s 2 . The actual configuration of the Cu atom appears to be 3 d 10 4 s 1 . Although the 4 s orbital is normally slightly below the 3 d orbital energy, the two are so close that interactions between the two when one is empty and the other is not can lead to a reversal. Detailed calculations in which the shapes and densities of the charge distributions are considered predict that the relative energies of many orbitals can reverse in this way. It gets even worse when f -orbitals begin to fill!
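For reference, the two anomalies just described can be set side by side. The observed configurations are from the text; the "predicted" entries are what strict lowest-orbital filling would give (note that the outer-shell electron counts agree in each case):

```python
# Naive strict-filling outer shells versus the observed configurations
predicted = {
    "Cr": "3d4 4s2",   # what strict orbital-filling rules would give
    "Cu": "3d9 4s2",
}
observed = {
    "Cr": "3d5 4s1",   # half-filled d subshell, six unpaired spins
    "Cu": "3d10 4s1",  # completely filled d subshell
}
for el in ("Cr", "Cu"):
    print(f"{el}: predicted {predicted[el]:8s} observed {observed[el]}")
```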
Because these relative energies can vary even for the same atom in different chemical environments, most instructors will not expect you to memorize them.
This diagram shows how the atomic orbitals corresponding to different principal quantum numbers become interspersed with one another at higher values of n . The actual situation is more complicated than this; calculations show that the energies of d and f orbitals vary with the atomic number of the element.
The Periodic Table
The relative orbital energies illustrated above and the Pauli exclusion principle constitute the fundamental basis of the periodic table of the elements which was of course worked out empirically late in the 19th century, long before electrons had been heard of.
The periodic table of the elements is conventionally divided into sections called blocks , each of which designates the type of "sub-orbital" ( s, p, d, f ) which contains the highest-energy electrons in any particular element. Note especially that
- The non-metallic elements occur only in the p -block;
- The d -block contains the so-called transition elements;
- The f -block elements go in between Groups 3 and 4 of the d -block.
The above diagram illustrates the link between the electron configurations of the elements and the layout of the periodic table. Each row, also known as a period , commences with two s -block elements and continues through the p block. At the end of each row corresponding to n >1 is an element having a p 6 configuration, a so-called noble gas element . In the fourth and subsequent periods, d -block (and, in the sixth and seventh periods, f -block) element sequences are inserted.
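The row structure can be checked numerically: filling subshells in the Madelung (n + l) order and closing a period at each completed p subshell (or at 1 s ² for helium) reproduces the noble-gas atomic numbers. A sketch:

```python
# Reproduce the noble-gas atomic numbers by closing a period whenever a
# p subshell (or 1s, for helium) is completed in the Madelung filling order.
shells = sorted(((n, l) for n in range(1, 8) for l in range(n)),
                key=lambda nl: (nl[0] + nl[1], nl[0]))
Z, noble_gases = 0, []
for n, l in shells:
    Z += 2 * (2 * l + 1)                 # fill the subshell completely
    if l == 1 or (n, l) == (1, 0):       # a filled p subshell (or 1s) ends a period
        noble_gases.append(Z)
print(noble_gases)  # [2, 10, 18, 36, 54, 86, 118]
```

These are He, Ne, Ar, Kr, Xe, Rn and Og, closing periods of length 2, 8, 8, 18, 18, 32 and 32.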
The table shown above is called the long form of the periodic table; for many purposes, we can use a "short form" table in which the d -block is shown below the s - and p - block "representative elements" and the f -block does not appear at all. Note that the "long form" would be even longer if the f -block elements were shown where they actually belong, between La-Hf and Ac-Db.

5.7: Periodic Properties of the Elements
Make sure you thoroughly understand the following essential ideas:
- You should be able to sketch out the general form of the periodic table, identify the various blocks, and locate the groups corresponding to the alkali metals , the transition elements , the halogens , and the noble gases .
- For the first eighteen elements, you should be able to predict the formulas of typical binary compounds they can be expected to form with hydrogen and with oxygen.
- Comment on the concept of the "size" of an atom, and give examples of how radii are defined in at least two classes of substances.
- Define ionization energy and electron affinity , and explain the general periodic trends.
- State the meaning and significance of electronegativity .
The periodic table in the form originally published by Dmitri Mendeleev in 1869 was an attempt to list the chemical elements in order of their atomic weights, while breaking the list into rows in such a way that elements having similar physical and chemical properties would be placed in each column. At that time, nothing was known about atoms; the development of the table was entirely empirical. Our goal in this lesson is to help you understand how the shape and organization of the modern periodic table are direct consequences of the atomic electronic structure of the elements.
Organization of the Periodic Table
To understand how the periodic table is organized, imagine that we write down a long horizontal list of the elements in order of their increasing atomic number. It would begin this way:
H He Li Be B C N O F Ne Na Mg Al Si P S Cl Ar K Ca...
Now if we look at the various physical and chemical properties of these elements, we would find that their values tend to increase or decrease with Z in a manner that reveals a repeating pattern— that is, a periodicity . For the elements listed above, these breaks can be indicated by the vertical bars shown here in color:
H He | Li Be B C N O F Ne | Na Mg Al Si P S Cl Ar | K Ca ...
Periods: To construct the table, we place each sequence in a separate row, which we call a period . The rows are aligned in such a way that the elements in each vertical column possess certain similarities. Thus the first short-period elements H and He are chemically similar to Li and Ne, the elements at the beginning and end of the second period, respectively. The first period is split in order to place H above Li and He above Ne.
The "block" nomenclature shown above refers to the sub-orbital type (quantum number l, or s-p-d-f classification) of the highest-energy orbitals that are occupied in a given element. For n =1 there is no p block, and the s block is split so that helium is placed in the same group as the other inert gases, which it resembles chemically. For the second period ( n =2) there is a p block but no d block; in the usual "long form" of the periodic table it is customary to leave a gap between these two blocks in order to accommodate the d blocks that occur at n =3 and above. At n =6 we introduce an f block, but in order to hold the table to reasonable dimensions the f blocks are placed below the main body of the table.
Groups : Each column of the periodic table is known as a group . The elements belonging to a given group bear a strong similarity in their chemical behaviors.
In the past, two different systems of Roman numerals and letters were used to denote the various groups. North Americans added the letter B to denote the d -block groups and A for the others; this is the system shown in the table above. The rest of the world used A for the d -block elements and B for the others. In 1985, a new international system was adopted in which the columns were simply labeled 1-18. Although this system has met sufficient resistance in North America to slow its incorporation into textbooks, it seems likely that the "one to eighteen" system will gradually take over.
Families. Chemists have long found it convenient to refer to the elements of different groups, and in some cases to spans of groups, by the names indicated in the table shown below. The two of these that are most important for you to know are the noble gases and the transition metals .
The shell model of the atom
The properties of an atom depend ultimately on the number of electrons in the various orbitals, and on the nuclear charge which determines the compactness of the orbitals. In order to relate the properties of the elements to their locations in the periodic table, it is often convenient to make use of a simplified view of the atom in which the nucleus is surrounded by one or more concentric spherical "shells", each of which consists of the highest-principal quantum number orbitals (always s - and p -orbitals) that contain at least one electron. The shell model (as with any scientific model) is less a description of the world than a simplified way of looking at it that helps us to understand and correlate diverse phenomena. The principal simplification here is that it deals only with the main group elements of the s - and p -blocks, omitting the d - and f -block elements whose properties tend to be less closely tied to their group numbers.
The electrons (denoted by the red dots) in the outermost shell of an atom are the ones that interact most readily with other atoms, and thus play a major role in governing the chemistry of an element. Notice the use of noble-gas symbols to simplify the electron-configuration notation.
In particular, the number of outer-shell electrons (which is given by the rightmost digit in the group number ) is a major determinant of an element's "combining power", or valence . The general trend is for an atom to gain or lose electrons, either directly (leading to formation of ions ) or by sharing electrons with other atoms so as to achieve an outer-shell configuration of s 2 p 6 . This configuration, known as an octet , corresponds to that of one of the noble-gas elements of Group 18.
- the elements in Groups 1, 2 and 13 tend to give up their valence electrons to form positive ions such as Na + , Mg 2 + and Al 3 + , as well as compounds NaH, MgH 2 and AlH 3 . The outer-shell configurations of the metal atoms in these species correspond to that of neon.
- elements in Groups 15-17 tend to acquire electrons, forming ions such as P 3– , S 2– and Cl – or compounds such as PH 3 , H 2 S and HCl. The outer-shell configurations of these elements correspond to that of argon.
- the Group 14 elements do not normally form ions at all, but share electrons with other elements in tetravalent compounds such as CH 4 .
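The group-number rule summarized in the three cases above can be sketched in a few lines of code. This is a minimal illustration only; the function name and the choice to return 0 for Group 14 and the noble gases are ours, not the text's.

```python
# Illustrative sketch: predict the common ion charge of a main-group element
# from its 1-18 group number, following the octet tendency described above.
def common_ion_charge(group):
    """Typical ionic charge for a main-group element (groups 1-2, 13-18)."""
    if group in (1, 2):          # lose 1 or 2 valence electrons (Na+, Mg2+)
        return +group
    if group == 13:              # lose 3 electrons (Al3+)
        return +3
    if group == 14:              # normally shares electrons rather than ionizing
        return 0
    if group in (15, 16, 17):    # gain electrons to complete the octet
        return group - 18        # e.g. group 17 -> -1 (Cl-)
    if group == 18:              # noble gases: already an s2p6 octet
        return 0
    raise ValueError("main-group (1, 2, 13-18) groups only")
```

For example, `common_ion_charge(1)` gives +1 (Na⁺) and `common_ion_charge(16)` gives −2 (S²⁻), matching the ions listed above.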
The above diagram shows the first three rows of what are known as the representative elements — that is, the s - and p -block elements only. As we move farther down (into the fourth row and below), the presence of d -electrons exerts a complicating influence which allows elements to exhibit multiple valences. This effect is especially noticeable in the transition-metal elements, and is the reason for not including the d-block with the representative elements at all.
Effective nuclear charge
Those electrons in the outermost or valence shell are especially important because they are the ones that can engage in the sharing and exchange that is responsible for chemical reactions; how tightly they are bound to the atom determines much of the chemistry of the element. The degree of binding is the result of two opposing forces: the attraction between the electron and the nucleus, and the repulsions between the electron in question and all the other electrons in the atom. All that matters is the net force, the difference between the nuclear attraction and the totality of the electron-electron repulsions.
We can simplify the shell model even further by imagining that the valence shell electrons are the only electrons in the atom, and that the nuclear charge has whatever value would be required to bind these electrons as tightly as is observed experimentally. Because the number of electrons in this model is less than the atomic number Z , the required nuclear charge will also be smaller, and is known as the effective nuclear charge . Effective nuclear charge is essentially the positive charge that a valence electron "sees".
Z vs. Z effective
Part of the difference between Z and Z effective is due to other electrons in the valence shell, but this is usually only a minor contributor because these electrons tend to act as if they are spread out in a diffuse spherical shell of larger radius. The main actors here are the electrons in the much more compact inner shells which surround the nucleus and exert what is often called a shielding or " screening " effect on the valence electrons.
The formula for calculating effective nuclear charge is not very complicated, but we will skip a discussion of it here. An even simpler although rather crude procedure is to just subtract the number of inner-shell electrons from the nuclear charge; the result is a form of effective nuclear charge which is called the core charge of the atom.
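The "subtract the inner-shell electrons" procedure just described is simple enough to automate. The sketch below is illustrative only: it assumes idealized shell capacities of 2, 8, 8, 18, … and is meant for main-group elements, as in the shell model above.

```python
# Core charge = Z minus the number of inner-shell electrons (the crude
# estimate of effective nuclear charge described above).
def core_charge(Z):
    """Z minus all electrons below the valence shell (main-group elements)."""
    shells, remaining = [], Z
    for capacity in (2, 8, 8, 18, 18, 32):   # idealized shell capacities
        if remaining <= 0:
            break
        filled = min(remaining, capacity)
        shells.append(filled)
        remaining -= filled
    inner = sum(shells[:-1])                 # everything except the valence shell
    return Z - inner

# Na: Z = 11, inner shells hold 2 + 8 = 10 electrons -> core charge +1
# Cl: Z = 17, inner shells hold 10               -> core charge +7
```

Notice that the core charge equals the rightmost digit of the 1-18 group number for these elements, consistent with the valence-electron count discussed earlier.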
Sizes of atoms and ions
The concept of "size" is somewhat ambiguous when applied to the scale of atoms and molecules. The reason for this is apparent when you recall that an atom has no definite boundary; there is a finite (but very small) probability of finding the electron of a hydrogen atom, for example, 1 cm, or even 1 km from the nucleus. It is not possible to specify a definite value for the radius of an isolated atom; the best we can do is to define a spherical shell within whose radius some arbitrary percentage of the electron density can be found.
When an atom is combined with other atoms in a solid element or compound, an effective radius can be determined by observing the distances between adjacent rows of atoms in these solids. This is most commonly carried out by X-ray scattering experiments. Because of the different ways in which atoms can aggregate together, several different kinds of atomic radii can be defined.
Note
Distances on the atomic scale have traditionally been expressed in Ångstrom units (1Å = 10 –8 cm = 10 –10 m), but nowadays the picometer is preferred:
1 pm = 10 –12 m = 10 – 10 cm = 10 –2 Å, or 1Å = 100 pm.
The radii of atoms and ions are typically in the range 70-400 pm.
A rough idea of the size of a metallic atom can be obtained simply by measuring the density of a sample of the metal. This tells us the number of atoms per unit volume of the solid. The atoms are assumed to be spheres of radius r in contact with each other, each of which sits in a cubic box of edge length 2 r . The volume of each box is just the total volume of the solid divided by the number of atoms it contains; the edge length 2 r is the cube root of this volume, so the atomic radius r is half that cube root.
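The density calculation just described takes only a few lines. The copper data below (molar mass 63.55 g/mol, density 8.96 g/cm³) are standard handbook values supplied here for illustration; the cubic-box model is crude, so the result underestimates the measured metallic radius of about 128 pm.

```python
# Rough metallic radius from density, per the cubic-box picture above:
# each atom of molar mass M (g/mol) in a metal of density rho (g/cm^3)
# occupies a box of volume M / (rho * N_A); the box edge is 2r.
N_A = 6.022e23  # Avogadro's number, atoms per mole

def metallic_radius_pm(molar_mass, density):
    box_volume = molar_mass / (density * N_A)   # cm^3 per atom
    edge = box_volume ** (1.0 / 3.0)            # cm; edge = 2r
    return (edge / 2.0) * 1e10                  # cm -> pm (1 cm = 1e10 pm)

r_cu = metallic_radius_pm(63.55, 8.96)          # copper: ~114 pm
```

The ~10% discrepancy from the measured value reflects the fact that close-packed metal atoms fill space more efficiently than the simple box model assumes.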
Although the radius of an atom or ion cannot be measured directly, in most cases it can be inferred from X-ray measurements of the distance between adjacent nuclei in a crystalline solid. Because such solids fall into several different classes, several kinds of atomic radius are defined. Many atoms have several different radii; for example, sodium forms a metallic solid and thus has a metallic radius, it forms a gaseous molecule Na 2 in the vapor phase (covalent radius), and of course it forms ionic solids such as NaCl (ionic radius).
(Figure: the covalent radius, illustrated by the distance between bonded iodine atoms within the I 2 molecules in crystalline iodine.)
Ionic radius is the effective radius of ions in solids such as NaCl. It is easy enough to measure the distance between adjacent rows of Na + and Cl – ions in such a crystal, but there is no unambiguous way to decide what portions of this distance are attributable to each ion. The best one can do is make estimates based on studies of several different ionic solids (LiI, KI, NaI, for example) that contain one ion in common. Many such estimates have been made, and they turn out to be remarkably consistent.
Periodic trends in atomic size
We would expect the size of an atom to depend mainly on the principal quantum number of the highest occupied orbital; in other words, on the "number of occupied electron shells". Since each row in the periodic table corresponds to an increment in n, atomic radius increases as we move down a column. The other important factor is the nuclear charge; the higher the atomic number, the more strongly will the electrons be drawn toward the nucleus, and the smaller the atom. This effect is responsible for the contraction we observe as we move across the periodic table from left to right.
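Both trends are easy to verify against tabulated radii. The values below are approximate textbook figures (in pm) supplied by us for illustration, not data from this text.

```python
# Approximate atomic radii in pm (illustrative textbook values) showing the
# two periodic trends described above.
radii = {"Li": 152, "Na": 186, "K": 227,                 # group 1
         "Mg": 160, "Al": 143, "Si": 117,
         "P": 110, "S": 104, "Cl": 99}                   # period 3

group_1  = [radii[e] for e in ("Li", "Na", "K")]
period_3 = [radii[e] for e in ("Na", "Mg", "Al", "Si", "P", "S", "Cl")]

# Radius grows going down a group (added shells) ...
increases_down_group = all(a < b for a, b in zip(group_1, group_1[1:]))
# ... and shrinks going left-to-right across a period (rising nuclear charge).
shrinks_across_period = all(a > b for a, b in zip(period_3, period_3[1:]))
```

Running this confirms both booleans are `True`, matching the "increase down a column, contract across a row" pattern stated above.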
The figure shows a periodic table in which the sizes of the atoms are represented graphically. The apparent discontinuities in this diagram reflect the difficulty of comparing the radii of atoms of metallic and nonmetallic bonding types. Radii of the noble gas elements are estimates from those of nearby elements.
Ionic radii
A positive ion is always smaller than the neutral atom, owing to the diminished electron-electron repulsion. If a second electron is lost, the ion gets even smaller; for example, the ionic radius of Fe 2 + is 76 pm, while that of Fe 3 + is 65 pm. If formation of the ion involves complete emptying of the outer shell, then the decrease in radius is especially great.
The hydrogen ion H + is in a class by itself; having no electron cloud at all, its radius is that of the bare proton, or about 0.001 pm— a contraction of 99.999%! Because the unit positive charge is concentrated into such a small volume of space, the charge density of the hydrogen ion is extremely high; it interacts very strongly with other matter, including water molecules, and in aqueous solution it exists only as the hydronium ion H 3 O + .
Negative ions are always larger than the parent atom; the addition of one or more electrons to an existing shell increases electron-electron repulsion, which results in a general expansion of the atom.
An isoelectronic series is a sequence of species all having the same number of electrons (and thus the same amount of electron-electron repulsion) but differing in nuclear charge. Of course, only one member of such a sequence can be a neutral atom (neon in the series shown below.) The effect of increasing nuclear charge on the radius is clearly seen.
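The shrinking effect is striking in numbers. The radii below are approximate Shannon-type values (in pm) supplied here for illustration; all five species carry the same ten electrons as neon.

```python
# A 10-electron isoelectronic series: same electron count, rising nuclear
# charge Z.  Radii (pm) are approximate literature values for illustration.
series = [("O2-", 8, 140), ("F-", 9, 133), ("Na+", 11, 102),
          ("Mg2+", 12, 72), ("Al3+", 13, 54)]

# Sort by Z and check that the radius shrinks monotonically as Z increases.
radii_by_Z = [r for _, Z, r in sorted(series, key=lambda t: t[1])]
monotonic_shrink = all(a > b for a, b in zip(radii_by_Z, radii_by_Z[1:]))
```

With the electron-electron repulsion held fixed, the steadily increasing nuclear charge pulls the same ten electrons into an ever smaller volume, so `monotonic_shrink` is `True`.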
Periodic Trends in ion formation
Chemical reactions are based largely on the interactions between the most loosely bound electrons in atoms, so it is not surprising that the tendency of an atom to gain, lose or share electrons is one of its fundamental chemical properties.
Ionization Energy
This term always refers to the formation of positive ions. In order to remove an electron from an atom, work must be done to overcome the electrostatic attraction between the electron and the nucleus; this work is called the ionization energy of the atom and corresponds to the endothermic process
\[M_{(g)} \rightarrow M^+_{(g)} + e^-\]
where \(M_{(g)}\) stands for any isolated (gaseous) atom.
An atom has as many ionization energies as it has electrons. Electrons are always removed from the highest-energy occupied orbital. An examination of the successive ionization energies of the first ten elements (below) provides experimental confirmation that the binding of the two innermost electrons (1 s orbital) is significantly different from that of the n =2 electrons. Successive ionization energies of an atom increase rapidly as reduced electron-electron repulsion causes the electron shells to contract, thus binding the electrons even more tightly to the nucleus.
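The abrupt change in binding when a shell is emptied shows up clearly in successive ionization energies. The magnesium values below are standard tabulated figures (kJ/mol), quoted here for illustration.

```python
# Successive ionization energies of magnesium (kJ/mol, standard tabulated
# values).  IE1 and IE2 remove the two 3s valence electrons; IE3 must break
# into the filled neon-like inner shell, so it jumps enormously.
mg_ie = [738, 1451, 7733]                   # IE1, IE2, IE3

jump_within_valence = mg_ie[1] / mg_ie[0]   # roughly 2x
jump_into_core = mg_ie[2] / mg_ie[1]        # more than 5x
```

The modest IE1→IE2 increase reflects the contraction described above; the IE2→IE3 leap is the experimental fingerprint of the shell boundary.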
Ionization energies increase with the nuclear charge Z as we move across the periodic table. They decrease as we move down the table because in each period the electron is being removed from a shell one step farther from the nucleus than in the atom immediately above it. This results in the familiar zig-zag lines when the first ionization energies are plotted as a function of Z.
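The zig-zag pattern can be reproduced from first ionization energies of the first eighteen elements. The values below (kJ/mol) are standard tabulated figures, included here for illustration.

```python
# First ionization energies (kJ/mol, standard tabulated values) for Z = 1-18.
first_ie = {"H": 1312, "He": 2372, "Li": 520, "Be": 899, "B": 801,
            "C": 1086, "N": 1402, "O": 1314, "F": 1681, "Ne": 2081,
            "Na": 496, "Mg": 738, "Al": 578, "Si": 786, "P": 1012,
            "S": 1000, "Cl": 1251, "Ar": 1521}

period_2 = ["Li", "Be", "B", "C", "N", "O", "F", "Ne"]
period_3 = ["Na", "Mg", "Al", "Si", "P", "S", "Cl", "Ar"]

# The noble gas tops each period; the alkali metal sits at the bottom —
# this is what produces the zig-zag when IE is plotted against Z.
ne_is_max = max(period_2, key=first_ie.get) == "Ne"
na_is_min = min(period_3, key=first_ie.get) == "Na"
```

Note that the data also contain the small B < Be and O < N dips discussed in the next paragraph, superimposed on the overall rise across each period.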
This more detailed plot of the ionization energies of the atoms of the first ten elements reveals some interesting irregularities that can be related to the slightly lower energies (greater stabilities) of electrons in half-filled (spin-unpaired) relative to completely-filled subshells.
Finally, a more comprehensive survey of the ionization energies of the main group elements is shown below.
Some points to note:
- The noble gases have the highest IE's of any element in the period. This has nothing to do with any mysterious "special stability" of the s 2 p 6 electron configuration; it is simply a matter of the high nuclear charge acting on more contracted orbitals.
- IE's (as well as many other properties) tend not to vary greatly amongst the d -block elements. This reflects the fact that as the more-compact d orbitals are being filled, they exert a screening effect that partly offsets that increasing nuclear charge on the outermost s orbitals of higher principal quantum number.
- Each of the Group 13 elements has a lower first-IE than that of the element preceding it. The reversal of the IE trend in this group is often attributed to the easier removal of the single outer-shell p electron compared to that of electrons contained in filled (and thus spin-paired) s - and d -orbitals in the preceding elements.
Electron affinity
Formation of a negative ion occurs when an electron from some external source enters the atom and becomes incorporated into the lowest-energy orbital that possesses a vacancy. Because the entering electron is attracted to the positive nucleus, the formation of negative ions is usually exothermic. The energy given off is the electron affinity of the atom. For some atoms, the electron affinity appears to be slightly negative, suggesting that electron-electron repulsion is the dominant factor in these instances.
In general, electron affinities tend to be much smaller than ionization energies, suggesting that they are controlled by opposing factors having similar magnitudes. These two factors are, as before, the nuclear charge and electron-electron repulsion. But the latter, only a minor actor in positive ion formation, is now much more significant. One reason for this is that the electrons contained in the inner shells of the atom exert a collective negative charge that partially cancels the charge of the nucleus, thus exerting a so-called shielding effect which diminishes the tendency for negative ions to form.
Because of these opposing effects, the periodic trends in electron affinities are not as clear as are those of ionization energies. This is particularly evident in the first few rows of the periodic table, in which small effects tend to be magnified anyway because an added electron produces a large percentage increase in the number of electrons in the atom.
In general, we can say that electron affinities become more exothermic as we move from left to right across a period (owing to increased nuclear charge and smaller atom size). There are some interesting irregularities, however:
- In the Group 2 elements, the filled outer s orbital (2 s in beryllium) apparently shields the nucleus so effectively that the electron affinities are slightly endothermic.
- The Group 15 elements have rather low values, due possibly to the need to place the added electron in a half-filled p orbital; why the electron affinity of nitrogen should be endothermic is not clear. The vertical trend is for electron affinity to become less exothermic in successive periods owing to better shielding of the nucleus by more inner shells and the greater size of the atom, but here also there are some apparent anomalies.
Electronegativity
When two elements are joined in a chemical bond, the element that attracts the shared electrons more strongly is more electronegative . Elements with low electronegativities (the metallic elements) are said to be electropositive .
Electronegativities are properties of atoms that are chemically bound to each other; there is no way of measuring the electronegativity of an isolated atom.
Moreover, the same atom can exhibit different electronegativities in different chemical environments, so the "electronegativity of an element" is only a general guide to its chemical behavior rather than an exact specification of its behavior in a particular compound. Nevertheless, electronegativity is eminently useful in summarizing the chemical behavior of an element. You will make considerable use of electronegativity when you study chemical bonding and the chemistry of the individual elements.
Because there is no single definition of electronegativity, any numerical scale for measuring it must of necessity be somewhat arbitrary. Most such scales are themselves based on atomic properties that are directly measurable and which relate in one way or the other to electron-attracting propensity. The most widely used of these scales was devised by Linus Pauling and is related to ionization energy and electron affinity. The Pauling scale runs from 0 to 4; the highest electronegativity, 4.0, is assigned to fluorine, while cesium has the lowest value of 0.7. Values less than about 2.2 are usually associated with electropositive, or metallic character. In the representation of the scale shown in the figure, the elements are arranged in rows corresponding to their locations in the periodic table. The correlation is obvious; electronegativity is associated with the higher rows and the rightmost columns.
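A common use of Pauling values is judging how polar a bond will be. The electronegativities below are standard Pauling values, and the 0.5/1.8 cutoffs are a rule of thumb we supply for illustration — different textbooks draw the covalent/ionic boundary anywhere from about 1.7 to 2.0.

```python
# Pauling electronegativities (standard values) and a rough classification
# of a bond by the electronegativity difference between its two atoms.
en = {"F": 3.98, "O": 3.44, "Cl": 3.16, "N": 3.04, "C": 2.55,
      "H": 2.20, "Na": 0.93, "Cs": 0.79}

def bond_character(a, b):
    diff = abs(en[a] - en[b])
    if diff < 0.5:
        return "nonpolar covalent"
    if diff < 1.8:
        return "polar covalent"
    return "ionic"

# C-H is nearly nonpolar, O-H is polar, Na-F is essentially ionic.
```

Because electronegativity depends on chemical environment, such classifications are guides rather than sharp categories, in keeping with the caveats above.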
The location of hydrogen on this scale reflects some of the significant chemical properties of this element. Although it acts like a metallic element in many respects (forming a positive ion, for example), it can also form hydride-ion (H – ) solids with the more electropositive elements, and of course its ability to share electrons with carbon and other p -block elements gives rise to a very rich chemistry, including of course the millions of organic compounds.
5.8: Why Don't Electrons Fall into the Nucleus?
The picture of electrons "orbiting" the nucleus like planets around the sun remains an enduring one, not only in popular images of the atom but also in the minds of many of us who know better. The proposal, first made in 1913, that the centrifugal force of the revolving electron just exactly balances the attractive force of the nucleus (in analogy with the centrifugal force of the moon in its orbit exactly counteracting the pull of the Earth's gravity) is a nice picture, but is simply untenable.
One reason this hypothesis seems plausible is the similarity of the gravitational and Coulombic interactions. The expression for the force of gravity between two masses (Newton's Law of gravity) is
\[F_{gravity} \propto \dfrac{m_1m_2}{r^2}\label{1}\]
with
- \(m_1\) and \(m_2\) representing the mass of object 1 and 2, respectively and
- \(r\) representing the distance between the objects' centers
The expression for the Coulomb force between two charged species is
\[F_{Coulomb} \propto \dfrac{q_1q_2}{r^2}\label{2}\]
with
- \(q_1\) and \(q_2\) representing the charge of object 1 and 2, respectively and
- \(r\) representing the distance between the objects' centers
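Although Equations 1 and 2 share the same inverse-square form, their strengths are wildly different on the atomic scale. A quick calculation with standard physical constants (CODATA values, SI units) shows why gravity plays no role in holding the electron to the nucleus:

```python
# Ratio of the Coulomb attraction to the gravitational attraction between
# an electron and a proton.  The 1/r^2 factors cancel, so the ratio is
# independent of the separation distance.
k_e = 8.988e9          # Coulomb constant, N m^2 C^-2
G = 6.674e-11          # gravitational constant, N m^2 kg^-2
e = 1.602e-19          # elementary charge, C
m_e, m_p = 9.109e-31, 1.673e-27   # electron and proton masses, kg

ratio = (k_e * e**2) / (G * m_e * m_p)   # on the order of 10^39
```

The electrical force exceeds the gravitational one by roughly 39 orders of magnitude, which is why only the Coulombic interaction matters in what follows.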
However, an electron, unlike a planet or a satellite, is electrically charged, and it has been known since the mid-19th century that an electric charge that undergoes acceleration (changes velocity and direction) will emit electromagnetic radiation, losing energy in the process. A revolving electron would transform the atom into a miniature radio station, the energy output of which would be at the cost of the potential energy of the electron; according to classical mechanics, the electron would simply spiral into the nucleus and the atom would collapse.
Quantum theory to the Rescue!
By the 1920's, it became clear that a tiny object such as the electron cannot be treated as a classical particle having a definite position and velocity. The best we can do is specify the probability of its manifesting itself at any point in space. If you had a magic camera that could take a sequence of pictures of the electron in the 1s orbital of a hydrogen atom, and could combine the resulting dots in a single image, you would see something like this. Clearly, the electron is more likely to be found the closer we move toward the nucleus.
This is confirmed by this plot which shows the quantity of electron charge per unit volume of space at various distances from the nucleus. This is known as a probability density plot. The per unit volume of space part is very important here; as we consider radii closer to the nucleus, these volumes become very small, so the number of electrons per unit volume increases very rapidly. In this view, it appears as if the electron does fall into the nucleus!
According to classical mechanics, the electron would simply spiral into the nucleus and the atom would collapse. Quantum mechanics is a different story.
The Battle of the Infinities Saves the electron from its death spiral
As you know, the potential energy of an electron becomes more negative as it moves toward the attractive field of the nucleus; in fact, it approaches negative infinity. However, because the total energy remains constant (a hydrogen atom, sitting peacefully by itself, will neither lose nor acquire energy), the loss in potential energy is compensated for by an increase in the electron's kinetic energy (sometimes referred to in this context as "confinement" energy) which determines its momentum and its effective velocity.
So as the electron approaches the tiny volume of space occupied by the nucleus, its potential energy dives down toward minus-infinity, and its kinetic energy (momentum and velocity) shoots up toward positive-infinity. This "battle of the infinities" cannot be won by either side, so a compromise is reached in which theory tells us that the fall in potential energy is just twice the kinetic energy, and the electron dances at an average distance that corresponds to the Bohr radius.
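The "fall in potential energy is just twice the kinetic energy" compromise is the virial theorem for a 1/r potential: ⟨V⟩ = −2⟨T⟩. Applied to the hydrogen ground state (total energy −13.6 eV, a standard value quoted here for illustration), the energy bookkeeping works out as:

```python
# Virial-theorem energy balance for the hydrogen 1s state: V = -2T,
# so E = T + V = -T.
E_total = -13.6            # total energy, eV
T = -E_total               # kinetic ("confinement") energy: +13.6 eV
V = 2 * E_total            # potential energy: -27.2 eV

# Consistency checks: E = T + V and V = -2T.
balanced = abs((T + V) - E_total) < 1e-9 and abs(V + 2 * T) < 1e-9
```

The electron trades 27.2 eV of potential energy for 13.6 eV of kinetic energy, leaving the atom 13.6 eV more stable than a separated proton and electron.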
There is still one thing wrong with this picture; according to the Heisenberg uncertainty principle (a better term would be "indeterminacy"), a particle as tiny as the electron cannot be regarded as having either a definite location or momentum. The Heisenberg principle says that either the location or the momentum of a quantum particle such as the electron can be known as precisely as desired, but as one of these quantities is specified more precisely, the value of the other becomes increasingly indeterminate. It is important to understand that this is not simply a matter of observational difficulty, but rather a fundamental property of nature.
What this means is that within the tiny confines of the atom, the electron cannot really be regarded as a "particle" having a definite energy and location, so it is somewhat misleading to talk about the electron "falling into" the nucleus.
Arthur Eddington, a famous physicist, once suggested, not entirely in jest, that a better description of the electron would be "wavicle"!
Probability Density vs. Radial probability
We can, however, talk about where the electron has the highest probability of manifesting itself— that is, where the maximum negative charge will be found.
This is just the curve labeled "probability density"; its steep climb as we approach the nucleus shows unambiguously that the electron is most likely to be found in the tiny volume element at the nucleus. But wait! Did we not just say that this does not happen? What we are forgetting here is that as we move out from the nucleus, the number of these small volume elements situated along any radius increases very rapidly with \(r\), going up by a factor of \(4πr^2\). So the probability of finding the electron somewhere on a given radius circle is found by multiplying the probability density by \(4πr^2\). This yields the curve you have probably seen elsewhere, known as the radial probability , that is shown on the right side of the above diagram. The peak of the radial probability for principal quantum number \(n = 1\) corresponds to the Bohr radius.
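The claim that the radial probability peaks at the Bohr radius is easy to check numerically. For the hydrogen 1s state the probability density falls off as exp(−2r/a₀), so the radial probability goes as 4πr² exp(−2r/a₀):

```python
import math

# Numerical check: the 1s radial probability 4*pi*r^2 * exp(-2r/a0)
# peaks exactly at the Bohr radius a0, even though the probability
# density itself is largest at the nucleus.
a0 = 52.9  # Bohr radius, pm

def radial_probability(r):
    return 4 * math.pi * r**2 * math.exp(-2 * r / a0)

# Scan a fine grid of radii from 0.01 pm to 300 pm and locate the maximum.
rs = [i * 0.01 for i in range(1, 30000)]
r_peak = max(rs, key=radial_probability)
```

The scan lands on r_peak ≈ 52.9 pm, i.e. the Bohr radius, which can also be obtained analytically by setting the derivative of r² exp(−2r/a₀) to zero.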
To sum up, the probability density and radial probability plots express two different things: the first shows the electron density at any single point in the atom, while the second, which is generally more useful to us, tells us the relative electron density summed over all points on a circle of given radius.
References
- Why Doesn't the Electron Fall Into the Nucleus? Franklin Mason and Robert Richardson, J Chem. Ed. 1983 (40-42). See also the comment on this article by Werner Luck, J Chem Ed 1985 (914).
- For more detailed descriptions of these two kinds of plots, see this McMaster U. page by Richard Bader.
- The author is grateful to Robert Harrison of U. of Tennessee-Knoxville whose suggestions led to improving this article.
Contributors and Attributions
- Stephen Lower, Professor Emeritus (Simon Fraser U.), Chem1 Virtual Textbook
6: Properties of Gases
The gaseous state of matter is the only one that is based on a simple model that can be developed from first principles. As such, it serves as the starting point for the study of the other states of matter.
- 6.1: Observable Properties of Gas
- The invention of the sensitive balance in the early seventeenth century showed once and for all that gases have weight and are therefore matter. Guericke's invention of the air pump (which led directly to his discovery of the vacuum) launched the “pneumatic era" of chemistry long before the existence of atoms and molecules had been accepted. Indeed, the behavior of gases was soon to prove an invaluable tool in the development of the atomic theory of matter.
- 6.3: Dalton's Law
- Although all gases closely follow the ideal gas law PV = nRT under appropriate conditions, each gas is also a unique chemical substance consisting of molecular units that have definite masses. In this lesson we will see how these molecular masses affect the properties of gases that conform to the ideal gas law. Following this, we will look at gases that contain more than one kind of molecule— in other words, mixtures of gases. We begin with a review of molar volume and the E.V.E.N. principle.
- 6.4: Kinetic Molecular Theory (Overview)
- The kinetic molecular theory of gases relates macroscopic properties to the behavior of the individual molecules, which are described by the microscopic properties of matter. This theory applies strictly only to a hypothetical substance known as an ideal gas; we will see, however, that under many conditions it describes the behavior of real gases at ordinary temperatures and pressures quite accurately, and serves as the starting point for dealing with more complicated states of matter.
- 6.5: More on Kinetic Molecular Theory
- In this section, we look in more detail at some aspects of the kinetic-molecular model and how it relates to our empirical knowledge of gases. For most students, this will be the first application of algebra to the development of a chemical model; this should be educational in itself, and may help bring that subject back to life for you! As before, your emphasis should be on understanding these models and the ideas behind them; there is no need to memorize any of the formulas.
- 6.6: Real Gases and Critical Phenomena
- When the temperature is reduced, or the pressure is raised in a gas, the ideal gas model begins to break down, and the properties of the gas become unpredictable; eventually the gas condenses into a liquid. Understanding this behavior is vital for appreciating the limitations of the scientific model that constitutes the "ideal gas".
6.1: Observable Properties of Gas
Make sure you thoroughly understand the following essential ideas:
- State the three major properties of gases that distinguish them from condensed phases of matter.
- Define pressure , and explain why a gas exerts pressure on the walls of a container.
- Explain the operation of a simple barometer , and why its invention revolutionized our understanding of gases.
- Explain why a barometer that uses water as the barometric fluid is usually less practical than one which employs mercury.
- How are the Celsius and Fahrenheit temperature scales defined? How are the magnitudes of the "degree" on each scale related?
- Why must the temperature and pressure be specified when reporting the volume of a gas?
The invention of the sensitive balance in the early seventeenth century showed once and for all that gases have weight and are therefore matter. Guericke's invention of the air pump (which led directly to his discovery of the vacuum ) launched the “pneumatic era" of chemistry long before the existence of atoms and molecules had been accepted. Indeed, the behavior of gases was soon to prove an invaluable tool in the development of the atomic theory of matter.
Introduction
The study of gases allows us to understand the behavior of matter at its simplest: individual particles, acting independently, almost completely uncomplicated by interactions and interferences between each other. This knowledge of gases will serve as the pathway to our understanding of the far more complicated condensed phases (liquids and solids) in which the theory of gases will no longer give us correct answers, but it will still provide us with a useful model that will at least help us to rationalize the behavior of these more complicated states of matter.
First, we know that a gas has no definite volume or shape ; a gas will fill whatever volume is available to it. Contrast this to the behavior of a liquid, which always has a distinct upper surface when its volume is less than that of the space it occupies. The other outstanding characteristic of gases is their low densities , compared with those of liquids and solids. One mole of liquid water at 298 K and 1 atm pressure occupies a volume of 18.8 cm 3 , whereas the same quantity of water vapor at the same temperature and pressure has a volume of 30200 cm 3 , more than 1000 times greater.
The most remarkable property of gases, however, is that to a very good approximation, they all behave the same way in response to changes in temperature and pressure, expanding or contracting by predictable amounts. This is very different from the behavior of liquids or solids, in which the properties of each particular substance must be determined individually. We will see later that each of these three macroscopic characteristics of gases follows directly from the microscopic view— that is, from the atomic nature of matter.
The Pressure of a Gas
The molecules of a gas, being in continuous motion, frequently strike the inner walls of their container. As they do so, they immediately bounce off without loss of kinetic energy, but the reversal of direction ( acceleration ) imparts a force to the container walls. This force, divided by the total surface area on which it acts, is the pressure of the gas.
The pressure of a gas is observed by measuring the pressure that must be applied externally in order to keep the gas from expanding or contracting. To visualize this, imagine some gas trapped in a cylinder having one end enclosed by a freely moving piston. In order to keep the gas in the container, a certain amount of weight (more precisely, a force, f ) must be placed on the piston so as to exactly balance the force exerted by the gas on the bottom of the piston, which tends to push it up. The pressure of the gas is simply the quotient f/A , where A is the cross-section area of the piston.
The unit of pressure in the SI system is the pascal (Pa), defined as a force of one newton per square meter (1 N m –2 = 1 kg m –1 s –2 ). At the Earth's surface, the force of gravity acting on a 1 kg mass is 9.81 N. Thus if the weight is 1 kg and the surface area of the piston is 1 m², the pressure of the gas would be 9.81 Pa. A 1-gram weight acting on a piston of 1 cm² cross-section would exert a pressure of 98.1 Pa. (If you wonder why the pressure is higher in the second example, consider the number of cm² contained in 1 m².)
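As a quick numeric check of the two piston examples, here is a minimal Python sketch (not part of the original text; the function name is ours):

```python
# Sketch: pressure exerted by a weight resting on a piston, p = F/A.
# The two calls reproduce the text's examples; g = 9.81 m/s^2.

g = 9.81  # gravitational acceleration, m s^-2

def pressure_pa(mass_kg: float, area_m2: float) -> float:
    """Pressure in pascals from a mass resting on a piston of given area."""
    force_n = mass_kg * g      # F = m g (newtons)
    return force_n / area_m2   # p = F / A (Pa = N m^-2)

# 1 kg on a 1 m^2 piston:
print(pressure_pa(1.0, 1.0))      # 9.81 Pa
# 1 g on a 1 cm^2 piston (0.001 kg on 1e-4 m^2):
print(pressure_pa(0.001, 1e-4))   # ≈ 98.1 Pa
```

The second example comes out a factor of ten higher because a square meter contains 10⁴ cm² while the mass dropped by only a factor of 10³.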
In chemistry, it is common to express pressures in units of atmospheres or torr : 1 atm = 101325 Pa = 760 torr. The older unit millimeter of mercury (mm Hg) is almost the same as the torr; it is defined as one mm of level difference in a mercury barometer at 0°C. In meteorology, the pressure unit most commonly used is the bar :
1 bar = 10 5 N m –2 = 750.06 torr = 0.987 atm.
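These conversion factors can be collected into a short Python sketch (illustrative only; the constant and function names are ours):

```python
# Sketch: pressure unit conversions using the factors given in the text:
# 1 atm = 101325 Pa = 760 torr, and 1 bar = 1e5 Pa.

PA_PER_ATM = 101325.0
TORR_PER_ATM = 760.0
PA_PER_BAR = 1.0e5

def atm_to_pa(p_atm: float) -> float:
    return p_atm * PA_PER_ATM

def atm_to_torr(p_atm: float) -> float:
    return p_atm * TORR_PER_ATM

def bar_to_torr(p_bar: float) -> float:
    # bar -> Pa -> atm -> torr
    return p_bar * PA_PER_BAR / PA_PER_ATM * TORR_PER_ATM

print(atm_to_torr(1.0))   # 760.0
print(bar_to_torr(1.0))   # ≈ 750.06, as stated above
```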
The pressures of gases encountered in nature span an exceptionally wide range, only part of which is ordinarily encountered in chemistry. Note that in the chart below, the pressure scales are logarithmic; thus 0 on the atm scale means 10 0 = 1 atm.
Atmospheric Pressure
The column of air above us exerts a force on each 1-cm 2 of surface equivalent to a weight of about 1034 g. The higher into the air you go, the lower the mass of air above you, hence the lower the pressure (right).
This figure is obtained by solving Newton's law \(\textbf{F} = m\textbf{a}\) for \(m\), using the acceleration of gravity for \(\textbf{a}\):
\[ m = \dfrac{101325\; kg\, m^{-1} \, s^{-2}}{9.8 \, m \, s^{-2}} = 10340 \, kg\, m^{-2} =1034\; g \; cm^{-2}\]
If several kilos of air are constantly pressing down on your body, why do you not feel it?
Solution
Because every other part of your body (including within your lungs and insides) also experiences the same pressure, so there is no net force (other than gravity) acting on you.
This was the crucial first step that led eventually to the concept of gases and their essential role in the early development of chemistry. In the mid-17th century the Italian Evangelista Torricelli invented a device, the barometer, to measure the pressure of the atmosphere. A few years later, the German scientist and sometime mayor of Magdeburg Otto von Guericke devised a method of pumping the air out of a container, thus creating what might be considered the opposite of air: the vacuum .
As with so many advances in science, the idea of a vacuum (a region of nothingness) was not immediately accepted. Torricelli's invention overturned the then-common belief that air (and by extension, all gases) is weightless. The fact that we live at the bottom of a sea of air was most spectacularly demonstrated in 1654, when two teams of eight horses were unable to pull apart two 14-inch copper hemispheres (the "Magdeburg hemispheres") which had been joined together and then evacuated with Guericke's newly-invented vacuum pump.
The classical barometer, still used for the most accurate work, measures the height of a column of liquid that can be supported by the atmosphere. As indicated below, this pressure is exerted directly on the liquid in the reservoir, and is transmitted hydrostatically to the liquid in the column.
Metallic mercury , being a liquid of exceptionally high density and low vapor pressure, is the ideal barometric fluid. Its widespread use gave rise to the "millimeter of mercury" (now usually referred to as the "torr") as a measure of pressure.
How is the air pressure of 1034 g cm –2 related to the 760-mm height of the mercury column in the barometer? What if water were used in place of mercury?
Solution
The density of Hg is 13.6 g cm –3 , so in a column of 1-cm 2 cross-section, the height needed to counter the atmospheric pressure would be (1034 g cm –2 ) ÷ (13.6 g cm –3 ) = 76 cm.
The density of water is only 1/13.6 that of mercury, so standard atmospheric pressure would support a water column whose height is 13.6 x 76 cm = 1034 cm, or 10.3 m. You would have to read a water barometer from a fourth-story window!
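The column-height relation used in both parts of this example, h = (mass of air per unit area) ÷ (density of the liquid), can be sketched in Python (names are ours; this is a check of the worked numbers, not part of the original text):

```python
# Sketch: height of a liquid column that balances 1 atm,
# h = (1034 g per cm^2 of air column) / (liquid density in g/cm^3).

ATM_G_PER_CM2 = 1034.0  # mass of the atmosphere's air column per cm^2 (from the text)

def column_height_cm(density_g_per_cm3: float) -> float:
    return ATM_G_PER_CM2 / density_g_per_cm3

print(column_height_cm(13.6))  # mercury: ≈ 76 cm
print(column_height_cm(1.0))   # water: 1034 cm, about 10.3 m
```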
Water barometers were once employed to measure the height of the ground and the heights of buildings before more modern methods were adopted.
A modification of the barometer, the U-tube manometer , provides a simple device for measuring the pressure of any gas in a container. The U-tube is partially filled with mercury; one end is connected to the container, while the other end can either be sealed or left open to the atmosphere. The pressure inside the container is found from the difference in height between the mercury in the two sides of the U-tube. The illustration below shows how the two kinds of manometer work.
The manometers ordinarily seen in the laboratory come in two flavors: closed-tube and open-tube. In the closed-tube unit shown at the left, the longer limb of the J-tube is evacuated by filling it with mercury and then inverting it. If the sample container is also evacuated, the mercury level will be the same in both limbs. When gas is let into the container, its pressure pushes the mercury down on one side and up on the other; the difference in levels is the pressure in torr. For practical applications in engineering and industry, especially where higher pressures must be monitored, many types of mechanical and electrical pressure gauges are available.
The Temperature of a Gas
If two bodies are at different temperatures, heat will flow from the warmer to the cooler one until their temperatures are the same. This is the principle on which thermometry is based; the temperature of an object is measured indirectly by placing a calibrated device known as a thermometer in contact with it. When thermal equilibrium is obtained, the temperature of the thermometer is the same as the temperature of the object.
A thermometer makes use of some temperature-dependent quantity, such as the density of a liquid, to allow the temperature to be found indirectly through some easily measured quantity such as the length of a mercury column. The resulting scale of temperature is entirely arbitrary; it is defined by locating its zero point, and the size of the degree unit. At one point in the 18 th century, 35 different temperature scales were in use! The Celsius temperature scale (formally called centigrade) locates the zero point at the freezing temperature of water; the Celsius degree is defined as 1/100 of the difference between the freezing and boiling temperatures of water at 1 atm pressure.
The older Fahrenheit scale placed the zero point at the coldest temperature it was possible to obtain at the time (by mixing salt and ice.) The 100° point was set with body temperature (later found to be 98.6°F.) On this scale, water freezes at 32°F and boils at 212°F. The Fahrenheit scale is a finer one than the Celsius scale; there are 180 Fahrenheit degrees in the same temperature interval that contains 100 Celsius degrees, so 1 F° = 5/9 C°. Since the zero points are also different by 32F°, conversion between temperatures expressed on the two scales requires the addition or subtraction of this offset, as well as multiplication by the ratio of the degree size.
You should be able to derive the formulas for this conversion.
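One way to write the conversion formulas just described, combining the 9/5 (or 5/9) degree-size ratio with the 32 F° offset, is this short Python sketch (function names are ours):

```python
# Sketch: Fahrenheit/Celsius conversion, built from the facts above:
# 1 F degree = 5/9 C degree, and the zero points differ by 32 F degrees.

def c_to_f(t_c: float) -> float:
    return t_c * 9.0 / 5.0 + 32.0

def f_to_c(t_f: float) -> float:
    return (t_f - 32.0) * 5.0 / 9.0

print(c_to_f(0))     # 32.0  (water freezes)
print(c_to_f(100))   # 212.0 (water boils)
print(f_to_c(98.6))  # ≈ 37.0 (body temperature)
```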
In 1787 the French mathematician and physicist Jacques Charles discovered that for each Celsius degree that the temperature of a gas is lowered, the volume of the gas will diminish by 1/273 of its volume at 0°C. The obvious implication of this is that if the temperature could be reduced to –273°C, the volume of the gas would contract to zero. Of course, all real gases condense to liquids before this happens, but at sufficiently low pressures their volumes are linear functions of the temperature ( Charles' Law ), and extrapolation of a plot of volume as a function of temperature predicts zero volume at -273°C. This temperature, known as absolute zero , corresponds to the total absence of thermal energy.
The temperature scale on which the zero point is –273.15°C was suggested by Lord Kelvin, and is usually known as the Kelvin scale. Since the size of the Kelvin and Celsius degrees are the same, conversion between the two scales is a simple matter of adding or subtracting 273.15; thus room temperature, 20°, is about 293 K.
Because the Kelvin scale is based on an absolute, rather than on an arbitrary, zero of temperature, it has special significance in scientific calculations; most fundamental physical relations involving temperature are expressed mathematically in terms of absolute temperature. In engineering work, an absolute scale based on the Fahrenheit degree is sometimes used; this is known as the Rankine scale .
The volume occupied by a gas
The volume of a gas is simply the space in which the molecules of the gas are free to move. If we have a mixture of gases, such as air, the various gases will coexist within the same volume. In these respects, gases are very different from liquids and solids, the two condensed states of matter. The volume of a gas can be measured by trapping it above mercury in a calibrated tube known as a gas burette . The SI unit of volume is the cubic meter, but in chemistry we more commonly use the liter and the milliliter (mL). The cubic centimeter (cc) is also frequently used; it is very close to 1 milliliter (mL).
It's important to bear in mind, however, that the volume of a gas varies with both the temperature and the pressure, so reporting the volume alone is not very useful. A common practice is to measure the volume of the gas under the ambient temperature and atmospheric pressure, and then to correct the observed volume to what it would be at standard atmospheric pressure and some fixed temperature, usually 0° C or 25°C.
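The correction described here is simply Boyle's and Charles' laws applied in sequence. A minimal Python sketch (assuming STP means 273.15 K and 760 torr; the function name is ours):

```python
# Sketch: correcting a measured gas volume to STP,
# V_stp = V * (P / P_stp) * (T_stp / T).

P_STP_TORR = 760.0
T_STP_K = 273.15

def volume_at_stp(v: float, p_torr: float, t_k: float) -> float:
    return v * (p_torr / P_STP_TORR) * (T_STP_K / t_k)

# e.g. a 1.00 L sample measured at 780 torr and 303 K:
print(volume_at_stp(1.00, 780.0, 303.0))  # ≈ 0.925 L
```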
Contributors and Attributions
- Stephen Lower, Professor Emeritus (Simon Fraser U.) Chem1 Virtual Textbook

6.2: Ideal Gas Model - The Basic Gas Laws
Make sure you thoroughly understand the following essential ideas which are presented below, and be able to state them in your own words.
- Boyle's Law - The PV product for any gas at a fixed temperature has a constant value. Understand how this implies an inverse relationship between the pressure and the volume.
- Charles' Law - The volume of a gas confined by a fixed pressure varies directly with the absolute temperature. The same is true of the pressure of a gas confined to a fixed volume.
- Avogadro's Law - This is quite intuitive: the volume of a gas confined by a fixed pressure varies directly with the quantity of gas.
- The E.V.E.N. principle - this is just another way of expressing Avogadro's Law.
- Gay-Lussac's Law of Combining Volumes - you should be able to explain how this principle follows from the E.V.E.N. principle and the Law of Combining Weights.
- The ideal gas equation of state - this is one of the very few mathematical relations you must know. Not only does it define the properties of the hypothetical substance known as an ideal gas , but its importance extends well beyond the subject of gases.
The "pneumatic" era of chemistry began with the discovery of the vacuum around 1650 which clearly established that gases are a form of matter. The ease with which gases could be studied soon led to the discovery of numerous empirical (experimentally-discovered) laws that proved fundamental to the later development of chemistry and led indirectly to the atomic view of matter. These laws are so fundamental to all of natural science and engineering that everyone learning these subjects needs to be familiar with them.
Pressure-volume relations: Boyle's law
Robert Boyle (1627-91) showed that the volume of air trapped by a liquid in the closed short limb of a J-shaped tube decreased in exact proportion to the pressure produced by the liquid in the long part of the tube. The trapped air acted much like a spring, exerting a force opposing its compression. Boyle called this effect " the spring of the air ", and published his results in a pamphlet of that title.
The difference between the heights of the two mercury columns gives the pressure (76 cm = 1 atm), and the volume of the air is calculated from the length of the air column and the tubing diameter. Some of Boyle's actual data are shown in Table \(\PageIndex{1}\).
| volume | pressure | P × V |
|---|---|---|
| 96.0 | 2.00 | 192 |
| 76.0 | 2.54 | 193 |
| 46.0 | 4.20 | 193 |
| 26.0 | 7.40 | 193 |
Boyle's law can be expressed as
\[PV = \text{constant} \nonumber\]
or, equivalently,
\[P_1V_1 = P_2V_2\]
These relations hold true only if the number of molecules n and the temperature are constant. This is a relation of inverse proportionality ; any change in the pressure is exactly compensated by an opposing change in the volume. As the pressure decreases toward zero, the volume will increase without limit. Conversely, as the pressure is increased, the volume decreases, but can never reach zero. There will be a separate P-V plot for each temperature; a single P-V plot is therefore called an isotherm .
Shown here are some isotherms for one mole of an ideal gas at several different temperatures. Each plot has the shape of a hyperbola — the locus of all points having the property x y = a , where a is a constant. You will see later how the value of this constant ( PV =25 for the 300K isotherm shown here) is determined. It is very important that you understand this kind of plot which governs any relationship of inverse proportionality. You should be able to sketch out such a plot when given the value of any one ( x,y ) pair.
A related type of plot with which you should be familiar shows the product PV as a function of the pressure. You should understand why this yields a straight line, and how this set of plots relates to the one immediately above.
In an industrial process, a gas confined to a volume of 1 L at a pressure of 20 atm is allowed to flow into a 12-L container by opening the valve that connects the two containers. What will be the final pressure of the gas?
Solution
The final volume of the gas is (1 + 12) L = 13 L. The pressure decreases in inverse proportion to the ratio of the two volumes:

\[P_2 = (20 \,atm) (1 \,L ÷ 13 \,L) = 1.5 \,atm \nonumber\]
Note that there is no need to make explicit use of any "formula" in problems of this kind!
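All the same, the inverse-proportion reasoning can be captured in two lines of Python (a sketch, with names of our choosing):

```python
# Sketch: Boyle's law at constant T, P2 = P1 * (V1 / V2).

def boyle_p2(p1: float, v1: float, v2: float) -> float:
    return p1 * v1 / v2

# 1 L at 20 atm released into a total volume of 13 L:
print(boyle_p2(20.0, 1.0, 13.0))  # ≈ 1.54 atm
```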
How the temperature affects the volume: Charles' law
All matter expands when heated, but gases are special in that their degree of expansion is independent of their composition. The French scientists Jacques Charles (1746-1823) and Joseph Gay-Lussac (1778-1850) independently found that if the pressure is held constant, the volume of any gas changes by the same fractional amount (1/273 of its value) for each C° change in temperature.
The volume of a gas confined against a constant pressure is directly proportional to the absolute temperature. A graphical expression of the law of Charles and Gay-Lussac can be seen in these plots of the volume of one mole of an ideal gas as a function of its temperature at various constant pressures.
- What do these plots show?
- The straight-line plots show that the ratio V/T (and thus dV / dT ) is a constant at any given pressure. Thus we can express the law algebraically as V/T = constant or V 1 /T 1 = V 2 /T 2
- What is the significance of the extrapolation to zero volume?
- If a gas contracts by 1/273 of its volume for each degree of cooling, it should contract to zero volume at a temperature of –273°C. This, of course, is the temperature of absolute zero, and this extrapolation of Charles' law is the first evidence of the special significance of this temperature.
- Why do the plots for different pressures have different slopes?
- The lower the pressure, the greater the volume (Boyle's law), so at low pressures the fraction ( V /273) will have a larger value. You might say that the gas must "contract faster" to reach zero volume when its starting volume is larger.
The air pressure in a car tire is 30 psi (pounds per square inch) at 10°C. What will be pressure be after driving has raised its temperature to 45°C ? (Assume that the volume remains unchanged.)
Solution
The pressure increases in direct proportion to the ratio of the absolute temperatures:
\[P_2 = (30\, psi) × (318\,K ÷ 283\,K) = 33.7\, psi \nonumber\]
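The same constant-volume calculation, sketched in Python (function name ours; note that the temperatures must be converted to kelvins first):

```python
# Sketch: pressure of a fixed-volume gas after a temperature change,
# P2 = P1 * (T2 / T1) with absolute temperatures.

def pressure_after_heating(p1: float, t1_c: float, t2_c: float) -> float:
    return p1 * (t2_c + 273.15) / (t1_c + 273.15)

print(pressure_after_heating(30.0, 10.0, 45.0))  # ≈ 33.7 psi
```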
Historical notes
The relation between the temperature of a gas and its volume has long been known. In 1702, Guillaume Amontons (1663-1705), who is better known for his early studies of friction, devised a thermometer that related the temperature to the volume of a gas. Robert Boyle had observed this relationship in 1662, but the lack of any uniform temperature scale at the time prevented them from establishing the relationship as we presently understand it. Jacques Charles discovered the law that is named for him in the 1780s, but did not publish his work. John Dalton published a form of the law in 1801, but the first thorough published presentation was made by Gay-Lussac in 1802, who acknowledged Charles' earlier studies.
The buoyancy that lifts a hot-air balloon into the sky depends on the difference between the density (mass ÷ volume) of the air entrapped within the balloon's envelope, compared to that of the air surrounding it. When a balloon on the ground is being prepared for flight, it is first partially inflated by an external fan, and possesses no buoyancy at all. Once the propane burners are started, this air begins to expand according to Charles' law. After the warmed air has completely inflated the balloon, further expansion simply forces excess air out of the balloon, leaving the weight of the diminished mass of air inside the envelope smaller than that of the greater mass of cooler air that the balloon displaces.
Jacques Charles collaborated with the Montgolfier brothers, whose hot-air balloon made the world's first manned balloon flight in November 1783. Ten days later, Charles himself co-piloted the first hydrogen-filled balloon. Gay-Lussac, who had a special interest in the composition of the atmosphere, also saw the potential of the hot-air balloon, and in 1804 he ascended to a then-record height of 6.4 km.
Volume and the Number of Molecules
Gay-Lussac's Law of Combining Volumes
In the same 1808 article in which Gay-Lussac published his observations on the thermal expansion of gases, he pointed out that when two gases react, they do so in volume ratios that can always be expressed as small whole numbers. This came to be known as the Law of combining volumes . These "small whole numbers" are of course the same ones that describe the "combining weights" of elements to form simple compounds, as described in the lesson dealing with the simplest formulas. The Italian scientist Amedeo Avogadro (1776-1856) drew the crucial conclusion: these volume ratios must be related to the relative numbers of molecules that react, and thus the famous "E.V.E.N principle":
The E.V.E.N Principle
E qual v olumes of gases, measured at the same temperature and pressure, contain e qual n umbers of molecules
Avogadro's law thus predicts a directly proportional relation between the number of moles of a gas and its volume. This relationship, originally known as Avogadro's Hypothesis , was crucial in establishing the formulas of simple molecules at a time (around 1811) when the distinction between atoms and molecules was not clearly understood. In particular, the existence of diatomic molecules of elements such as H 2 , O 2 , and Cl 2 was not recognized until the results of combining-volume experiments such as those depicted below could be interpreted in terms of the E.V.E.N. principle.
How the E.V.E.N. principle led to the correct formula of water
Early chemists made the mistake of assuming that the formula of water is HO. This led them to miscalculate the molecular weight of oxygen as 8 (instead of 16). If this were true, the reaction H + O → HO would correspond to the following combining volumes results according to the E.V.E.N principle:
But a similar experiment on the formation of hydrogen chloride from hydrogen and chlorine yielded twice the volume of HCl that was predicted by the assumed reaction H + Cl → HCl. This could be explained only if hydrogen and chlorine were diatomic molecules:
This made it necessary to re-visit the question of the formula of water. The experiment immediately confirmed that the correct formula of water is H 2 O:
This conclusion was also seen to be consistent with the observation, made a few years earlier by the English chemists Nicholson and Carlisle that the reverse of the above reaction, brought about by the electrolytic decomposition of water, yields hydrogen and oxygen in a 2:1 volume ratio.
Putting it all together: The Ideal Gas Equation of State
If the variables P, V, T and n (the number of moles) have known values, then a gas is said to be in a definite state , meaning that all other physical properties of the gas are also defined. The relation between these state variables is known as an equation of state . By combining the expressions of Boyle's, Charles', and Avogadro's laws (you should be able to do this!) we can write the very important ideal gas equation of state
\[PV=nRT\]
where the proportionality constant R is known as the gas constant . This is one of the few equations you must commit to memory in this course; you should also know the common value and units of \(R\).
An ideal gas is defined as a hypothetical substance that obeys the ideal gas equation of state.
Take note of the word "hypothetical" here. No real gas (whose molecules occupy space and interact with each other) can behave in a truly ideal manner. But we will see that all gases behave more and more like an ideal gas as the pressure approaches zero. A pressure of only 1 atm is sufficiently close to zero to make this relation useful for most gases at this pressure.
Many textbooks show formulas, such as \(P_1V_1 = P_2V_2\) for Boyle's law. Don't bother memorizing them ; if you really understand the meanings of these laws as stated above, you can easily derive them on the rare occasions when they are needed. The ideal gas equation is the only one you need to know.
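Since PV = nRT is the one equation worth knowing, here is a small Python sketch of how it is used to solve for an unknown variable (function names are ours; R is taken in L atm mol⁻¹ K⁻¹):

```python
# Sketch: the ideal gas equation PV = nRT, rearranged for n or V.

R = 0.08206  # gas constant, L atm mol^-1 K^-1

def moles(p_atm: float, v_l: float, t_k: float) -> float:
    return p_atm * v_l / (R * t_k)

def volume(n: float, t_k: float, p_atm: float) -> float:
    return n * R * t_k / p_atm

# One mole at STP reproduces the standard molar volume:
print(volume(1.0, 273.15, 1.0))  # ≈ 22.4 L
```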
PVT surface for an ideal gas
In order to depict the relations between the three variables P, V and T we need a three-dimensional graph.
A biscuit made with baking powder has a volume of 20 mL, of which one-fourth consists of empty space created by gas bubbles produced when the baking powder decomposed to CO 2 . What weight of \(\ce{NaHCO3}\) was present in the baking powder in the biscuit? Assume that the gas reached its final volume during the baking process when the temperature was 400°C.
(Baking powder consists of sodium bicarbonate mixed with some other solid that produces an acidic solution on addition of water, initiating the reaction
\[\ce{NaHCO3(s) + H^{+} → Na^+ + H2O + CO2} \nonumber\]
Solution: Use the ideal gas equation to find the number of moles of CO 2 gas; this will be the same as the number of moles of NaHCO 3 (84 g mol –1 ) consumed :
\[n=\frac{(1 \mathrm{atm})(0.005 \mathrm{L})}{\left(.082 \mathrm{L} \mathrm{atm} \mathrm{mol}^{-1} \mathrm{K}^{-1}\right)(673 \mathrm{K})}=9.1 \times 10^{-5} \,\mathrm{mol} \nonumber\]
\[9.1 \times 10^{-5} \,\mathrm{mol} \times 84 \,\mathrm{g} \,\mathrm{mol}^{-1}=0.0076 \,\mathrm{g} \nonumber\]

6.3: Dalton's Law
Make sure you thoroughly understand the following essential ideas which have been presented below.
- One mole of a gas occupies a volume of 22.4 L at STP (standard temperature and pressure, 273K, 1 atm = 101.3 kPa.)
- The above fact allows us to relate the measurable property of the density of a gas to its molar mass .
- The composition of a mixture of gases is commonly expressed in terms mole fractions ; be sure you know how to calculate them.
- Dalton's Law of partial pressures says that every gas in a mixture acts independently, so the total pressure a gas exerts against the walls of a container is just the sum of the partial pressures of the individual components.
Although all gases closely follow the ideal gas law PV = nRT under appropriate conditions, each gas is also a unique chemical substance consisting of molecular units that have definite masses. In this lesson we will see how these molecular masses affect the properties of gases that conform to the ideal gas law. Following this, we will look at gases that contain more than one kind of molecule— in other words, mixtures of gases. We begin with a review of molar volume and the E.V.E.N. principle, which is central to our understanding of gas mixtures.
The Molar Volume of a Gas
You will recall that the molar mass of a pure substance is the mass of 6.02 x 10 23 ( Avogadro's number ) of particles or molecular units of that substance. Molar masses are commonly expressed in units of grams per mole (g mol –1 ) and are often referred to as molecular weights . As was explained in the preceding lesson, equal volumes of gases, measured at the same temperature and pressure, contain equal numbers of molecules (this is the "EVEN" principle , more formally known as Avogadro's law .)
The magnitude of this volume will of course depend on the temperature and pressure, so as a means of convenient comparison it is customary to define a set of conditions T = 273 K and P = 1 atm as standard temperature and pressure , usually denoted as STP . Substituting these values into the ideal gas equation of state and solving for V yields a volume of 22.414 liters for 1 mole.
What would the volume of one mole of air be at 20°C on top of Mauna Kea, Hawa'ii (altitude 4.2 km) where the air pressure is approximately 60 kPa?
Scoria and cinder cones on Mauna Kea's summit in winter. (Public Domain; USGS )
Solution
Apply Boyle's and Charles' laws as successive correction factors to the standard molar volume of 22.4 L, which refers to sea-level pressure (101.3 kPa) and 273 K:

\[V = (22.4\, L)\left(\frac{101.3\, kPa}{60\, kPa}\right)\left(\frac{293\, K}{273\, K}\right) = 40.6\, L \nonumber\]
The standard molar volume 22.4 L mol –1 is a value worth memorizing, but remember that it is valid only at STP. The molar volume at other temperatures and pressures can easily be found by simple proportion. The molar volume of a substance can tell us something about how much space each molecule occupies, as the following example shows.
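The "simple proportion" mentioned here can be sketched in Python (function name ours; this assumes STP = 101.325 kPa and 273.15 K):

```python
# Sketch: molar volume of an ideal gas at arbitrary P and T,
# scaled from the 22.414 L standard molar volume by simple proportion.

def molar_volume_l(p_kpa: float, t_k: float) -> float:
    return 22.414 * (101.325 / p_kpa) * (t_k / 273.15)

# The Mauna Kea example: 60 kPa and 20 C (about 293 K):
print(molar_volume_l(60.0, 293.15))  # ≈ 40.6 L
```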
Estimate the average distance between the molecules in a gas at 1 atm pressure and 0°C.
Solution
Consider a 1-cm 3 volume of the gas, which will contain
\[\dfrac{6.02 \times 10^{23} \;mol^{–1}}{22,400\; cm^3 \;mol^{–1}} = 2.69 \times 10^{19} cm^{-3} \nonumber\]
The volume per molecule (not the same as the volume of a molecule, which for an ideal gas is zero!) is just the reciprocal of this, or \(3.72 \times 10^{-20}\, cm^3\). Assume that the molecules are evenly distributed so that each occupies an imaginary box having this volume. The average distance between the centers of the molecules will be defined by the length of this box, which is the cube root of the volume per molecule:
\[(3.72 \times 10^{–20})^{1/3} = 3.34 \times 10^{–7}\, cm = 3.3\, nm \nonumber\]
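The three steps of this estimate (number density, volume per molecule, cube root) can be checked with a few lines of Python (variable names are ours):

```python
# Sketch: average distance between gas molecules at STP.
# Number density, then volume per molecule, then its cube root.

N_A = 6.022e23   # Avogadro's number, mol^-1
V_M = 22414.0    # molar volume at STP, cm^3 mol^-1

n_per_cm3 = N_A / V_M              # ≈ 2.69e19 molecules per cm^3
v_per_molecule = 1.0 / n_per_cm3   # ≈ 3.72e-20 cm^3 per molecule
distance_cm = v_per_molecule ** (1.0 / 3.0)
print(distance_cm)                 # ≈ 3.3e-7 cm, i.e. about 3.3 nm
```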
Under conditions at which the ideal gas model is applicable (that is, almost always unless you are a chemical engineer dealing with high pressures), "a molecule is a molecule", so the volume of Avogadro's number of molecules will be independent of the composition of the gas. The reason, of course, is that the volume of the gas is mostly empty space; the volumes of the molecules themselves are negligible.
Molar Mass and Density of a Gas
The molecular weight (molar mass) of any gas is the mass, expressed in grams, of Avogadro's number of its molecules. This is true regardless of whether the gas is composed of one molecular species or is a mixture. For a mixture of gases, the molar mass will depend on the molar masses of its components, and on the fractional abundance of each kind of molecule in the mixture. The term "average molecular weight" is often used to describe the molar mass of a gas mixture.
The average molar mass (\(\bar{m}\)) of a mixture of gases is just the sum of the mole fractions of each gas, multiplied by the molar mass of that substance:
\[\bar{m}=\sum_i \chi_im_i\]
Find the average molar mass of dry air whose volume-composition is O 2 (21%), N 2 (78%) and Ar (1%).
Solution
The average molecular weight is the mole-fraction-weighted sum of the molecular weights of its components. The mole fractions, of course, are the same as the volume-fractions (E.V.E.N. principle.)
\[m = (0.21 \times 32) + (0.78 \times 28) + (0.01 \times 40) = 29 \nonumber\]
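The mole-fraction-weighted sum is easy to express in code. A Python sketch (names are ours; Ar is taken as 40 g/mol):

```python
# Sketch: average molar mass of a gas mixture as the
# mole-fraction-weighted sum of the component molar masses.

def average_molar_mass(fractions_and_masses):
    """fractions_and_masses: list of (mole fraction, molar mass) pairs."""
    return sum(x * m for x, m in fractions_and_masses)

# Dry air: 21% O2 (32), 78% N2 (28), 1% Ar (40)
air = [(0.21, 32.0), (0.78, 28.0), (0.01, 40.0)]
print(average_molar_mass(air))  # ≈ 29 g/mol
```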
The molar volumes of all gases are the same when measured at the same temperature and pressure. However, the molar masses of different gases will vary. This means that different gases will have different densities (different masses per unit volume). If we know the molecular weight of a gas, we can calculate its density.
Uranium hexafluoride UF 6 gas is used in the isotopic enrichment of natural uranium. Calculate its density at STP.
Solution
The molecular weight of UF 6 is 352.
\[\dfrac{352\; g \;mol^{–1}}{22.4\, L\, mol^{–1}} = 15.7\; g\; L^{–1} \nonumber\]
Note: there is no need to look up a "formula" for this calculation; simply combine the molar mass and molar volume in such a way as to make the units come out correctly.
More importantly, if we can measure the density of an unknown gas, we have a convenient means of estimating its molecular weight. This is one of many important examples of how a macroscopic measurement (one made on bulk matter) can yield microscopic information (that is, about molecular-scale objects.)
Gas densities are now measured in industry by electro-mechanical devices such as vibrating reeds which can provide continuous, on-line records at specific locations, as within pipelines. Determination of the molecular weight of a gas from its density is known as the Dumas method , after the French chemist Jean Dumas (1800-1884) who developed it. One simply measures the weight of a known volume of gas and converts this volume to its STP equivalent, using Boyle's and Charles' laws. The weight of the gas divided by its STP volume yields the density of the gas, and the density multiplied by 22.4 L mol –1 gives the molecular weight. Pay careful attention to the examples of gas density calculations shown here and in your textbook. You will be expected to carry out calculations of this kind, converting between molecular weight and gas density.
Calculate the approximate molar mass of a gas whose measured density is 3.33 g/L at 30°C and 780 torr.
Solution
Find the volume that would be occupied by 1 L of the gas at STP; note that correcting to 273 K will reduce the volume, while correcting to 1 atm (760 torr) will increase it:
\[V=(1.00 \mathrm{L})\left(\frac{273}{303}\right)\left(\frac{780}{760}\right)=0.924 \mathrm{L} \nonumber\]
The number of moles of gas is
\[n = \dfrac{0.924\, L}{22.4\, L\, mol^{–1}}= 0.0412\, mol \nonumber\]
The molecular weight is therefore
\[\dfrac{3.33\, g}{0.0412\, mol} = 80.8\, g\, mol^{–1} \nonumber\]
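The two-step correction to STP can be collapsed into a single application of the ideal gas law, \(M = \rho RT/P\). A short Python sketch of this shortcut (the function name and the choice of R in L·atm units are illustrative, not part of the original problem):

```python
# Estimate molar mass from a measured gas density via the ideal gas law:
# PV = nRT  and  rho = m/V  together give  M = rho * R * T / P
R = 0.08206  # gas constant, L atm K^-1 mol^-1

def molar_mass_from_density(rho_g_per_L, T_kelvin, P_atm):
    """Return the approximate molar mass (g/mol) of an ideal gas."""
    return rho_g_per_L * R * T_kelvin / P_atm

# The example above: 3.33 g/L at 30 deg C and 780 torr
M = molar_mass_from_density(3.33, 30 + 273.15, 780 / 760)
print(round(M, 1))  # ~80.7 g/mol, agreeing with the stepwise STP route to rounding
```

Note that the one-line formula avoids the intermediate STP volume entirely; the two answers differ only in rounding.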
Density of a Gas Mixture
Gas density measurements can be a useful means of estimating the composition of a mixture of two different gases; this is widely done in industrial chemistry operations in which the compositions of gas streams must be monitored continuously.
Find the composition of a mixture of \(\ce{CO2}\) (44 g/mol) and methane \(\ce{CH4}\) (16 g/mol) that has a STP density of 1.214 g/L.
Solution
The density of a mixture of these two gases will be directly proportional to its composition, varying between that of pure methane and pure CO 2 . We begin by finding these two densities:
For CO 2 :
(44 g/mol) ÷ (22.4 L/mol) = 1.964 g/L
For CH 4 :
(16 g/mol) ÷ (22.4 L/mol) = 0.714 g/L
If x is the mole fraction of CO 2 and (1– x ) is the mole fraction of CH 4 , we can write
1.964 x + 0.714 (1–x) = 1.214
(Does this make sense? Notice that if x = 0, the density would be that of pure CH 4 , while if it were 1, it would be that of pure CO 2 .)
Expanding the above equation and solving for x yields the mole fractions of 0.40 for CO 2 and 0.60 for CH 4 .
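Because the density of an ideal-gas mixture varies linearly with mole fraction, the solution of the mixing equation can be written in closed form. A minimal Python sketch (the function name is an illustrative choice):

```python
# Mole fraction of the heavier component of a two-gas mixture from its
# STP density: densities mix linearly in mole fraction, so x solves
# rho_mix = x*rho_heavy + (1-x)*rho_light.
V_M = 22.4  # molar volume at STP, L/mol

def mole_fraction_heavy(M_heavy, M_light, rho_mix_STP):
    rho_heavy = M_heavy / V_M
    rho_light = M_light / V_M
    return (rho_mix_STP - rho_light) / (rho_heavy - rho_light)

x_CO2 = mole_fraction_heavy(44, 16, 1.214)
print(round(x_CO2, 2))  # ~0.40, i.e. 40% CO2 and 60% CH4
```

As a sanity check, feeding in the density of pure CO 2 (1.964 g/L) returns a mole fraction of 1.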
Expressing the Composition of a Gas Mixture
Because most of the volume occupied by a gas consists of empty space, there is nothing to prevent two or more kinds of gases from occupying the same volume. Homogeneous mixtures of this kind are generally known as solutions , but it is customary to refer to them simply as gaseous mixtures . We can specify the composition of gaseous mixtures in many different ways, but the most common ones are by volumes and by mole fractions .
Volume Fractions
From Avogadro's Law we know that "equal volumes contain equal numbers of molecules". This means that the volumes of gases, unlike those of solids and liquids, are additive. So if a partitioned container holds two volumes of gas A in one section and one volume of gas B in the other (both at the same temperature and pressure), and we remove the partition, the mixture still occupies the same three volumes.
Volume fractions are often called partial volumes:
\[V_i = \dfrac{v_i}{\sum v_i}\]
Don't let this type of notation put you off! The summation sign Σ (Greek Sigma) simply means to add up the v 's (volumes) of every gas. Thus if Gas A is the " i -th" substance as in the expression immediately above, the summation runs from i =1 through i =2. Note that we can employ partial volumes to specify the composition of a mixture even if it had never actually been made by combining the pure gases.
When we say that air, for example, is 21 percent oxygen and 78 percent nitrogen by volume, this is the same as saying that these same percentages of the molecules in air consist of O 2 and N 2 . Similarly, in 1.0 mole of air, there is 0.21 mol of O 2 and 0.78 mol of N 2 (the remaining 0.01 mole consists of various trace gases, but is mostly argon). Note that you could never assume a similar equivalence with mixtures of liquids or solids, to which the E.V.E.N. principle does not apply.
Mole Fractions
These last two numbers (0.21 and 0.78) also express the mole fractions of oxygen and nitrogen in air. Mole fraction means exactly what it says: the fraction of the molecules that consist of a specific substance. This is expressed algebraically by
\[X_i = \dfrac{n_i}{\sum_i n_i}\]
so in the case of oxygen in the air, its mole fraction is
\[ X_{O_2} = \dfrac{n_{O_2}}{n_{O_2}+n_{N_2}+n_{Ar}}= \dfrac{0.21}{1}=0.21 \nonumber\]
A mixture of \(O_2\) and nitrous oxide, \(N_2O\), is sometimes used as a mild anesthetic in dental surgery. A certain mixture of these gases has a density of 1.482 g L –1 at 25°C and 0.980 atm. What was the mole-percent of \(N_2O\) in this mixture?
Solution
First, find the density the gas would have at STP:
\[\left(1.482 \mathrm{g} \mathrm{L}^{-1}\right) \times\left(\frac{298}{273}\right)\left(\frac{1}{0.980}\right)=1.65 \mathrm{g} \mathrm{L}^{-1}\nonumber \]
The molar mass of the mixture is (1.65 g L –1 )(22.4 L mol –1 ) = 37.0 g mol –1 . The molecular weights of \(O_2\) and \(N_2O\) are 32 and 44, respectively. The difference 37.0 – 32 = 5 is 5/12 of the difference between the molar masses of the two pure gases. Since the density of a gas mixture is directly proportional to its average molar mass, which in turn varies linearly with composition, the mole fraction of the heavier gas in the mixture is also 5/12:
\[\dfrac{37-32}{44-32}=\dfrac{5}{12}=0.42 \nonumber\]
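The same "lever rule" reasoning can be written out numerically: compute the average molar mass straight from the ideal gas law, then interpolate between the two pure molar masses. A brief Python sketch:

```python
R = 0.08206  # gas constant, L atm K^-1 mol^-1

# Average molar mass of the mixture from M = rho*R*T/P, then the lever
# rule between the molar masses of pure O2 (32) and pure N2O (44).
rho, T, P = 1.482, 298.15, 0.980          # g/L, K, atm
M_avg = rho * R * T / P                   # g/mol
x_N2O = (M_avg - 32.0) / (44.0 - 32.0)    # mole fraction of the heavier gas
print(round(M_avg, 1), round(x_N2O, 2))   # ~37.0 g/mol and ~0.42 (42 mole-percent)
```

This route skips the explicit STP correction; both give 42 mole-percent N 2 O.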
What is the mole fraction of carbon dioxide in a mixture consisting of equal masses of CO 2 (MW=44) and neon (MW=20.2)?
Solution
Assume any arbitrary mass, such as 100 g, find the equivalent numbers of moles of each gas, and then substitute into the definition of mole fraction:
- n CO2 = (100 g) ÷ (44 g mol –1 ) = 2.3 mol
- n Ne = (100 g) ÷ (20.2 g mol –1 ) = 4.9 mol
- X CO2 = (2.3 mol) ÷ (2.3 mol + 4.9 mol) = 0.32
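Because the arbitrary mass cancels out, the same recipe works for any number of components. A small Python sketch (the helper name is an illustrative choice):

```python
# Mole fractions of a gas mixture specified by the masses of its components.
def mole_fractions_from_masses(masses_g, molar_masses):
    moles = [m / M for m, M in zip(masses_g, molar_masses)]
    total = sum(moles)
    return [n / total for n in moles]

# Equal masses of CO2 (44 g/mol) and Ne (20.2 g/mol); any mass works.
x_CO2, x_Ne = mole_fractions_from_masses([100, 100], [44.0, 20.2])
print(round(x_CO2, 2))  # ~0.31; rounding the moles to 2.3 and 4.9 first, as above, gives 0.32
```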
Dalton's Law of Partial Pressures
The ideal gas equation of state applies to mixtures just as to pure gases. It was in fact with a gas mixture, ordinary air, that Boyle, Gay-Lussac and Charles did their early experiments. The only new concept we need in order to deal with gas mixtures is the partial pressure , a concept invented by the famous English chemist John Dalton (1766-1844). Dalton reasoned that the low density and high compressibility of gases indicates that they consist mostly of empty space; from this it follows that when two or more different gases occupy the same volume, they behave entirely independently.
The contribution that each component of a gaseous mixture makes to the total pressure of the gas is known as the partial pressure of that gas. Dalton himself stated this law in a characteristically simple and vivid way: "every gas is a vacuum to every other gas".
The usual way of stating Dalton's Law of Partial Pressures is
The total pressure of a gas is the sum of the partial pressures of its components
which is expressed algebraically as
\[P_{total}=P_1+P_2+P_3 ... = \sum_i P_i\]
or, equivalently
\[ P_{total} = \dfrac{RT}{V} \sum_i n_i\]
There is also a similar relationship based on volume fractions , known as Amagat's law of partial volumes . It is exactly analogous to Dalton's law, in that it states that the total volume of a mixture is just the sum of the partial volumes of its components. But there are two important differences: Amagat's law holds only for ideal gases which must all be at the same temperature and pressure. Dalton's law has neither of these restrictions. Although Amagat's law seems intuitively obvious, it sometimes proves useful in chemical engineering applications. We will make no use of it in this course.
Calculate the mass of each component present in a mixture of fluorine (MW 38.0) and xenon (MW 131.3) contained in a 2.0-L flask. The partial pressure of Xe is 350 torr and the total pressure is 724 torr at 25°C.
Solution
From Dalton's law, the partial pressure of F 2 is (724 – 350) = 374 torr:
The mole fractions are
\[\chi_{Xe} = \dfrac{350}{724} = 0.48 \nonumber\]
and
\[\chi_{F_2} = \dfrac{374}{724} = 0.52 \nonumber\]
The total number of moles of gas is
\[n=\dfrac{P V}{R T}=\frac{(724 / 760)(2.0)}{(0.082)(298)}=0.078 \mathrm{mol}\nonumber\]
The mass of \(Xe\) is

\[(131.3\, g\, mol^{–1}) \times (0.48 \times 0.078\, mol) = 4.9\, g \nonumber\]

and the mass of \(F_2\) is

\[(38.0\, g\, mol^{–1}) \times (0.52 \times 0.078\, mol) = 1.5\, g \nonumber\]
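This bookkeeping (Dalton's law, then mole fractions, then masses) is easy to check numerically. A minimal Python sketch, taking the molar mass of F 2 as 38.0 g/mol:

```python
R = 0.08206  # gas constant, L atm K^-1 mol^-1

# Masses of each component from partial pressures (Dalton's law).
P_total, P_Xe = 724 / 760, 350 / 760    # torr converted to atm
V, T = 2.0, 298.15                      # L, K
n_total = P_total * V / (R * T)         # total moles from the ideal gas law
n_Xe = (P_Xe / P_total) * n_total       # mole fraction of Xe times total moles
n_F2 = n_total - n_Xe                   # the remainder is fluorine
print(round(131.3 * n_Xe, 1), round(38.0 * n_F2, 1))  # ~4.9 g Xe, ~1.5 g F2
```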
Three flasks having different volumes and containing different gases at various pressures are connected by stopcocks. When the stopcocks are opened,
- What will be the pressure in the system?
- Which gas will be most abundant in the mixture?
Assume that the temperature is uniform and that the volume of the connecting tubes is negligible.
Solution
The trick here is to note that the total number of moles n T and the temperature remain unchanged, so we can make use of Boyle's law PV = constant. We will work out the details for CO 2 only, denoted by subscripts a.
For CO 2 ,
P a V a = (2.13 atm)(1.50 L) = 3.19 L-atm.
Adding the PV products for each separate container, we obtain
\[\sum P_iV_i = 6.36 L-atm = n_T RT. \nonumber\]
We will call this sum P 1 V 1 . After the stopcocks have been opened and the gases mix, the new conditions are denoted by P 2 V 2 .
From Boyle's law, \(P_2V_2 = P_1V_1 = 6.36\) L-atm, and \(V_2 = \sum V_i = 4.50\) L. Solving for the final pressure \(P_2\), we obtain

\[P_2 = \dfrac{6.36\; L\, atm}{4.50\; L} = 1.41\; atm \nonumber\]
For CO 2 , this works out to (3.19/ RT ) / (6.36/ RT ) = 0.501. Because this exceeds 0.5, we know that this is the most abundant gas in the final mixture.
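The whole problem reduces to bookkeeping on the \(P_iV_i\) products, which stay fixed at constant temperature. A short Python sketch using only the quantities stated in the solution (the CO 2 flask's 2.13 atm and 1.50 L, and the stated totals of 6.36 L·atm and 4.50 L; the data for the other two flasks were not reproduced here):

```python
# Mixing gases at constant T: n_total*R*T = sum(P_i*V_i) is conserved,
# so the final pressure is that sum divided by the total volume.
pv_CO2 = 2.13 * 1.50    # L atm, for the CO2 flask
pv_sum = 6.36           # L atm, stated total over all three flasks
V_total = 4.50          # L, stated total volume

P_final = pv_sum / V_total
x_CO2 = pv_CO2 / pv_sum  # mole fraction = (n_i RT)/(n_T RT) = ratio of PV products
print(f"{P_final:.2f} {x_CO2:.2f}")  # -> 1.41 0.50
```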
Application of Dalton's Law: Collecting Gases over Water
A common laboratory method of collecting the gaseous product of a chemical reaction is to conduct it into an inverted tube or bottle filled with water, the opening of which is immersed in a larger container of water. This arrangement is called a pneumatic trough , and was widely used in the early days of chemistry. As the gas enters the bottle it displaces the water and becomes trapped in the upper part.
The volume of the gas can be observed by means of a calibrated scale on the bottle, but what about its pressure? The total pressure confining the gas is just that of the atmosphere transmitting its force through the water. (An exact calculation would also have to take into account the height of the water column in the inverted tube.) But liquid water itself is always in equilibrium with its vapor, so the space in the top of the tube is a mixture of two gases: the gas being collected, and gaseous H 2 O. The partial pressure of H 2 O is known as the vapor pressure of water and it depends on the temperature. In order to determine the quantity of gas we have collected, we must use Dalton's Law to find the partial pressure of that gas.
Oxygen gas was collected over water as shown above. The atmospheric pressure was 754 torr, the temperature was 22°C, and the volume of the gas was 155 mL. The vapor pressure of water at 22°C is 19.8 torr. Use this information to estimate the number of moles of \(O_2\) produced.
Solution
From Dalton's law, \(P_{O_2} = P_{total} – P_{H_2O} = 754 – 19.8 = 734 \; torr = 0.966\; atm\).
\[n=\frac{P V}{R T}=\frac{0.966 \mathrm{atm} \times(0.155 \mathrm{L})}{\left(.082 \mathrm{L} \mathrm{atm} \mathrm{mol}^{-1} \mathrm{K}^{-1}\right)(295 \mathrm{K})}=.00619 \mathrm{mol}\nonumber\]
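The "collecting over water" correction is a one-line application of Dalton's law. A brief Python sketch (the function name is an illustrative choice):

```python
R = 0.08206  # gas constant, L atm K^-1 mol^-1

# Moles of a gas collected over water: subtract the water vapor pressure
# (Dalton's law) before applying the ideal gas law.
def moles_collected(P_atm_torr, P_water_torr, V_L, T_K):
    P_gas = (P_atm_torr - P_water_torr) / 760  # partial pressure, torr -> atm
    return P_gas * V_L / (R * T_K)

n_O2 = moles_collected(754, 19.8, 0.155, 295)
print(f"{n_O2:.5f}")  # ~0.00619 mol
```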
Application of Dalton's Law: Scuba diving
Our respiratory systems are designed to maintain the proper oxygen concentration in the blood when the partial pressure of O 2 is 0.21 atm, its normal sea-level value. Below the water surface, the pressure increases by 1 atm for each 10.3 m increase in depth; thus a scuba diver at 10.3 m experiences a total of 2 atm pressure pressing on the body. In order to prevent the lungs from collapsing, the air the diver breathes should also be at about the same pressure.
But at a total pressure of 2 atm, the partial pressure of \(O_2\) in ordinary air would be 0.42 atm; at a depth of 100 ft (about 30 m), an \(O_2\) partial pressure of about 0.8 atm would be far too high for health. For this reason, the air mixture in the pressurized tanks that scuba divers wear must contain a smaller fraction of \(O_2\). This can be achieved most simply by raising the nitrogen content, but high partial pressures of N 2 can also be dangerous, resulting in a condition known as nitrogen narcosis. The preferred diluting agent for sustained deep diving is helium, which has very little tendency to dissolve in the blood even at high pressures.
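The depth dependence quoted above follows directly from Dalton's law. A quick Python sketch using the 1 atm per 10.3 m rule (the function name and default O 2 fraction are illustrative):

```python
# Partial pressure of O2 at depth, for a given O2 fraction in the breathing mix.
def p_O2(depth_m, x_O2=0.21):
    P_total = 1 + depth_m / 10.3   # atm: 1 atm at the surface + 1 atm per 10.3 m
    return x_O2 * P_total          # Dalton's law: partial pressure = x * P_total

print(round(p_O2(10.3), 2), round(p_O2(30), 2))  # 0.42 and 0.82 atm on ordinary air
```

Solving for x_O2 at a target partial pressure would give the maximum safe oxygen fraction for a given depth.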
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/06%3A_Properties_of_Gases/6.03%3A_Dalton's_Law",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "6.3: Dalton's Law",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/06%3A_Properties_of_Gases/6.04%3A_Kinetic_Molecular_Theory_(Overview) | 6.4: Kinetic Molecular Theory (Overview)
Make sure you thoroughly understand the following essential ideas which are presented below. It is especially important that you know the principal assumptions of the kinetic-molecular theory. These can be divided into those that refer to the nature of the molecules themselves, and those that describe the nature of their motions:
- The molecules - Negligible volume, absence of intermolecular attractions (think of them as very hard, "non-sticky" objects.)
- Their motions - Completely random in direction, in straight lines only (this is a consequence of their lack of attractions), average kinetic energies proportional to the absolute temperature.
- The idea that random motions of individual molecules can result in non-random (directed) movement of the gas as a whole is one of the most important concepts of chemistry, exemplified here as the principle of diffusion .
- In most courses you will be expected to know and be able to use (or misuse!) Graham's law .
The properties such as temperature, pressure, and volume, together with others dependent on them (density, thermal conductivity, etc.) are known as macroscopic properties of matter; these are properties that can be observed in bulk matter, without reference to its underlying structure or molecular nature. By the late 19 th century the atomic theory of matter was sufficiently well accepted that scientists began to relate these macroscopic properties to the behavior of the individual molecules, which are described by the microscopic properties of matter. The outcome of this effort was the kinetic molecular theory of gases. This theory applies strictly only to a hypothetical substance known as an ideal gas ; we will see, however, that under many conditions it describes the behavior of real gases at ordinary temperatures and pressures quite accurately, and serves as the starting point for dealing with more complicated states of matter.
The basic ideas of kinetic-molecular theory
The "kinetic-molecular theory of gases" may sound rather imposing, but it is based on a series of easily-understood assumptions that, taken together, constitute a model that greatly simplifies our understanding of the gaseous state of matter. The five basic tenets of the kinetic-molecular theory are as follows:
- A gas is composed of molecules that are separated by average distances that are much greater than the sizes of the molecules themselves. The volume occupied by the molecules of the gas is negligible compared to the volume of the gas itself .
- The molecules of an ideal gas exert no attractive forces on each other, or on the walls of the container.
- The molecules are in constant random motion , and as material bodies, they obey Newton's laws of motion. This means that the molecules move in straight lines until they collide with each other or with the walls of the container.
- Collisions are perfectly elastic ; when two molecules collide, they change their directions and kinetic energies, but the total kinetic energy is conserved . Collisions are not “sticky" .
- The average kinetic energy of the gas molecules is directly proportional to the absolute temperature . Notice that the term “average” is very important here; the velocities and kinetic energies of individual molecules will span a wide range of values, and some will even have zero velocity at a given instant. This implies that all molecular motion would cease if the temperature were reduced to absolute zero.
According to this model, most of the volume occupied by a gas is empty space ; this is the main feature that distinguishes gases from condensed states of matter (liquids and solids) in which neighboring molecules are constantly in contact. Gas molecules are in rapid and continuous motion; at ordinary temperatures and pressures their velocities are of the order of 0.1-1 km/sec and each molecule experiences approximately 10 10 collisions with other molecules every second.
The Gas Laws explained
If gases do in fact consist of widely-separated particles, then the observable properties of gases must be explainable in terms of the simple mechanics that govern the motions of the individual molecules. The kinetic molecular theory makes it easy to see why a gas should exert a pressure on the walls of a container. Any surface in contact with the gas is constantly bombarded by the molecules.
At each collision, a molecule moving with momentum mv strikes the surface. Since the collisions are elastic, the molecule bounces back with the same velocity in the opposite direction. This change in velocity Δ v is equivalent to an acceleration a ; according to Newton's second law, a force f = ma is thus exerted on the surface of area A , exerting a pressure P = f/A .
Kinetic Interpretation of Temperature
According to the kinetic molecular theory, the average kinetic energy of an ideal gas is directly proportional to the absolute temperature. Kinetic energy is the energy a body has by virtue of its motion:
\[ K.E. = \dfrac{mv^2}{2}\]
As the temperature of a gas rises, the average velocity of the molecules will increase; a doubling of the temperature will double the average kinetic energy and increase the average velocity by a factor of \(\sqrt{2}\). Collisions with the walls of the container will transfer more momentum, and thus more kinetic energy, to the walls. If the walls are cooler than the gas, they will get warmer, returning less kinetic energy to the gas, and causing it to cool until thermal equilibrium is reached. Because temperature depends on the average kinetic energy, the concept of temperature only applies to a statistically meaningful sample of molecules. We will have more to say about molecular velocities and kinetic energies farther on.
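The square-root scaling of speed with temperature is easy to verify with the standard root-mean-square speed formula, \(v_{rms} = \sqrt{3RT/M}\). A short Python check (N 2 is used here only as an illustrative gas):

```python
import math

R_SI = 8.314  # gas constant, J K^-1 mol^-1

def v_rms(T_kelvin, molar_mass_kg):
    """Root-mean-square molecular speed from (1/2)M v^2 = (3/2)RT."""
    return math.sqrt(3 * R_SI * T_kelvin / molar_mass_kg)

v300 = v_rms(300, 0.028)   # N2 (0.028 kg/mol) at 300 K
v600 = v_rms(600, 0.028)   # doubled absolute temperature
print(round(v300), round(v600 / v300, 2))  # ~517 m/s; the ratio is sqrt(2) ~ 1.41
```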
- Kinetic explanation of Boyle's law : Boyle's law is easily explained by the kinetic molecular theory. The pressure of a gas depends on the number of times per second that the molecules strike the surface of the container. If we compress the gas to a smaller volume, the same number of molecules are now acting against a smaller surface area, so the number striking per unit of area, and thus the pressure, is now greater.
- Kinetic explanation of Charles' law: Kinetic molecular theory states that an increase in temperature raises the average kinetic energy of the molecules. If the molecules are moving more rapidly but the pressure remains the same, then the molecules must stay farther apart, so that the increase in the rate at which molecules collide with the surface of the container is compensated for by a corresponding increase in the area of this surface as the gas expands.
- Kinetic explanation of Avogadro's law : If we increase the number of gas molecules in a closed container, more of them will collide with the walls per unit time. If the pressure is to remain constant, the volume must increase in proportion, so that the molecules strike the walls less frequently, and over a larger surface area.
- Kinetic explanation of Dalton's law : "Every gas is a vacuum to every other gas". This is the way Dalton stated what we now know as his law of partial pressures. It simply means that each gas present in a mixture of gases acts independently of the others. This makes sense because of one of the fundamental tenets of KMT theory that gas molecules have negligible volumes. So Gas A in mixture of A and B acts as if Gas B were not there at all. Each contributes its own pressure to the total pressure within the container, in proportion to the fraction of the molecules it represents.
Some important practical applications of KMT
The molecules of a gas are in a state of perpetual motion in which the velocity (that is, the speed and direction) of each molecule is completely random and independent of that of the other molecules. This fundamental assumption of the kinetic-molecular model helps us understand a wide range of commonly-observed phenomena.
Diffusion: random motion with direction
Diffusion refers to the transport of matter through a concentration gradient ; the rule is that substances move (or tend to move) from regions of higher concentration to those of lower concentration. The diffusion of tea out of a tea bag into water, or of perfume from a person, are common examples; we would not expect to see either process happening in reverse!
When the stopcock is opened, random motions cause each gas to diffuse into the other container. After diffusion is complete (bottom), individual molecules of both kinds continue to pass between the flasks in both directions.
It might at first seem strange that the random motions of molecules can lead to a completely predictable drift in their ultimate distribution. The key to this apparent paradox is the distinction between an individual and the population . Although we can say nothing about the fate of an individual molecule, the behavior of a large collection ("population") of molecules is subject to the laws of statistics. This is exactly analogous to the manner in which insurance actuarial tables can accurately predict the average longevity of people at a given age, but provide no information on the fate of any single person.
Effusion and Graham's law
If a tiny hole is made in the wall of a vessel containing a gas, then the rate at which gas molecules leak out of the container will be proportional to the number of molecules that collide with unit area of the wall per second, and thus with the rms - average velocity of the gas molecules. This process, when carried out under idealized conditions, is known as effusion .
Around 1830, the Scottish chemist Thomas Graham (1805-1869) discovered that the relative rates at which two different gases, at the same temperature and pressure, will effuse through identical openings are inversely proportional to the square roots of their molar masses.
\[v \propto \dfrac{1}{\sqrt{M}}\]
Graham's law , as this relation is known, is a simple consequence of the square-root relation between the velocity of a body and its kinetic energy.
According to the kinetic molecular theory, the molecules of two gases at the same temperature will possess the same average kinetic energy. If v 1 and v 2 are the average velocities of the two kinds of molecules, then at any given temperature KE 1 = KE 2 and
\[\dfrac{m_1v_1^2}{2} = \dfrac{m_2v_2^2}{2}\]
or, in terms of molar masses \(M\),
\[ \color{red} { \dfrac{v_1}{v_2} = \sqrt{\dfrac{M_2}{M_1}}}\]
Thus the average velocity of the lighter molecules must be greater than those of the heavier molecules, and the ratio of these velocities will be given by the inverse ratio of square roots of the molecular weights. Although Graham's law applies exactly only when a gas diffuses into a vacuum, the law gives useful estimates of relative diffusion rates under more practical conditions, and it provides insight into a wide range of phenomena that depend on the relative average velocities of molecules of different masses.
The glass tube shown above has cotton plugs inserted at either end. The plug on the left is moistened with a few drops of aqueous ammonia, from which \(NH_3\) gas slowly escapes. The plug on the right is similarly moistened with a strong solution of hydrochloric acid, from which gaseous \(HCl\) escapes. The gases diffuse in opposite directions within the tube; at the point where they meet, they combine to form solid ammonium chloride, which appears first as a white fog and then begins to coat the inside of the tube.
The reaction is
\[NH_{3(g)} + HCl_{(g)} \rightarrow NH_4Cl_{(s)}\]
- In what part of the tube (left, right, center) will the NH 4 Cl first be observed?
- If the distance between the two ends of the tube is 100 cm, how many cm from the left end of the tube will the NH 4 Cl first form?
Solution
a) The lighter ammonia molecules will diffuse more rapidly, so the point where the two gases meet will be somewhere in the right half of the tube.
b) The ratio of the diffusion velocities of ammonia ( v 1 )and hydrogen chloride ( v 2 ) can be estimated from Graham's law:
\[ \dfrac{v_1}{v_2} = \sqrt{\dfrac{36.5}{17}} = 1.46\]
We can therefore assign relative velocities of the two gases as \(v_1 = 1.46\) and \(v_2 = 1\). Clearly, the meeting point will be directly proportional to v 1 . It will, in fact, be proportional to the ratio v 1 /( v 1 + v 2 )*:
\[ \dfrac{v_1}{v_1+v_2} \times 100\; cm = \dfrac{1.46}{1.46 + 1.00} \times 100\, cm = 59 \;cm \]
*In order to see how this ratio was deduced, consider what would happen in the three special cases in which v 1 =0, v 2 =0, and v 1 = v 2 , for which the distances (from the left end) would be 0, 50, and 100 cm, respectively. It should be clear that the simpler ratio v 1 / v 2 would lead to absurd results.
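The meeting-point formula above translates directly into a few lines of Python (the function name is an illustrative choice; speeds are computed only as ratios, so units cancel):

```python
import math

# Graham's-law estimate of where two counter-diffusing gases meet in a tube.
def meeting_point(M_left, M_right, length_cm):
    v_left = math.sqrt(1 / M_left)    # relative speed of the gas entering at the left
    v_right = math.sqrt(1 / M_right)  # relative speed of the gas entering at the right
    return length_cm * v_left / (v_left + v_right)

# NH3 (17 g/mol) from the left, HCl (36.5 g/mol) from the right, 100-cm tube
print(round(meeting_point(17.0, 36.5, 100)))  # ~59 cm from the NH3 end
```

The special cases in the footnote fall out automatically; for instance, equal molar masses give exactly the 50-cm midpoint.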
Note that the above calculation is only an estimate. Graham's law is strictly valid only under special conditions, the most important one being that no other gases are present. Contrary to what is written in some textbooks and is often taught, Graham's law does not accurately predict the relative rates of escape of the different components of a gaseous mixture into the outside air, nor does it give the rates at which two gases will diffuse through another gas such as air. See Misuse of Graham's Laws by Stephen J. Hawkes, J. Chem. Education 1993 70(10) 836-837
Uranium enrichment
One application of this principle that was originally suggested by Graham himself but was not realized on a practical basis until a century later is the separation of isotopes. The most important example is the enrichment of uranium in the production of nuclear fission fuel.
The K-25 Gaseous Diffusion Plant was one of the major sources of enriched uranium during World War II. It was completed in 1945 and employed 12,000 workers. Owing to the secrecy of the Manhattan Project, the women who operated the system were unaware of the purpose of the plant; they were trained to simply watch the gauges and turn the dials for what they were told was a "government project".
Uranium consists mostly of U 238 , with only 0.7% of the fissionable isotope U 235 . Uranium is of course a metal, but it reacts with fluorine to form a gaseous hexafluoride, UF 6 . In the very successful gaseous diffusion process the UF 6 diffuses repeatedly through a porous wall. Each time, the lighter isotope passes through a bit more rapidly than the heavier one, yielding a mixture that is minutely richer in U 235 . The process must be repeated over a thousand times to achieve the desired degree of enrichment. The development of a large-scale diffusion plant was a key part of the U.S. development of the first atomic bomb in 1945. This process is now obsolete, having been replaced by other methods.
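The need for so many stages follows from Graham's law: the ideal single-stage separation factor for the two hexafluorides is only \(\sqrt{352/349} \approx 1.0043\). The sketch below is an idealized estimate, assuming every stage achieves this full factor (real cascades do worse, so actual plants needed even more stages); the target enrichment of 90% is chosen only as an illustrative weapons-grade figure:

```python
import math

# Idealized gaseous-diffusion cascade: each stage multiplies the
# U-235 : U-238 abundance ratio by alpha = sqrt(M_heavy / M_light).
M_235UF6, M_238UF6 = 349.0, 352.0        # g/mol (235 + 6*19 and 238 + 6*19)
alpha = math.sqrt(M_238UF6 / M_235UF6)   # ~1.0043 enrichment per stage

def stages_needed(x_start, x_target):
    r0 = x_start / (1 - x_start)         # abundance ratio before
    r1 = x_target / (1 - x_target)       # abundance ratio after
    return math.ceil(math.log(r1 / r0) / math.log(alpha))

print(stages_needed(0.007, 0.90))  # on the order of 1700 ideal stages
```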
Density fluctuations: Why is the sky blue?
Diffusion ensures that molecules will quickly distribute themselves throughout the volume occupied by the gas in a thoroughly uniform manner. The chances are virtually zero that sufficiently more molecules might momentarily find themselves near one side of a container than the other to result in an observable temporary density or pressure difference. This is a result of simple statistics. But statistical predictions are only valid when the sample population is large.
Consider what would happen if we consider extremely small volumes of space: cubes that are about 10 –7 cm on each side, for example. Such a cell would contain only a few molecules, and at any one instant we would expect to find some containing more or less than others, although in time they would average out to the same value. The effect of this statistical behavior is to give rise to random fluctuations in the density of a gas over distances comparable to the dimensions of visible light waves. When light passes through a medium whose density is non-uniform, some of the light is scattered . The kind of scattering due to random density fluctuations is called Rayleigh scattering , and it has the property of affecting (scattering) shorter wavelengths more effectively than longer wavelengths. The clear sky appears blue in color because the blue (shorter wavelength) component of sunlight is scattered more. The longer wavelengths remain in the path of the sunlight, available to delight us at sunrise or sunset.
What we have been discussing is a form of what is known as fluctuation phenomena . As the animation shows, the random fluctuations in pressure of a gas on either side do not always completely cancel when the density of molecules (i.e., pressures ) are quite small.
Incandescent light bulbs
An interesting application involving several aspects of the kinetic molecular behavior of gases is the use of a gas, usually argon, to extend the lifetime of incandescent lamp bulbs. As a light bulb is used, tungsten atoms evaporate from the filament and condense on the cooler inner wall of the bulb, blackening it and reducing light output. As the filament gets thinner in certain spots, the increased electrical resistance results in a higher local power dissipation, more rapid evaporation, and eventually the filament breaks.
The pressure inside a lamp bulb must be sufficiently low for the mean free path of the gas molecules to be fairly long; otherwise heat would be conducted from the filament too rapidly, and the bulb would melt. (Thermal conduction depends on intermolecular collisions, and a longer mean free path means a lower collision frequency). A complete vacuum would minimize heat conduction, but this would result in such a long mean free path that the tungsten atoms would rapidly migrate to the walls, resulting in a very short filament life and extensive bulb blackening.
Around 1910, the General Electric Company hired Irving Langmuir as one of the first chemists to be employed as an industrial scientist in North America. Langmuir quickly saw that bulb blackening was a consequence of the long mean free path of vaporized tungsten atoms, and he showed that the addition of a small amount of argon will reduce the mean free path, increasing the probability that an outward-moving tungsten atom will collide with an argon atom. A certain proportion of these will eventually find their way back to the filament, partially reconstituting it.
Krypton would be a better choice of gas than argon, since its greater mass would be more effective in changing the direction of the rather heavy tungsten atom. Unfortunately, krypton, being a rarer gas, is around 50 times as expensive as argon, so it is used only in “premium” light bulbs. The more recently-developed halogen-cycle lamp is an interesting chemistry-based method of prolonging the life of a tungsten-filament lamp.
Viscosity of gases
Gases, like all fluids, exhibit a resistance to flow, a property known as viscosity . The basic cause of viscosity is the random nature of thermally-induced molecular motion. In order to force a fluid through a pipe or tube, an additional non-random translational motion must be superimposed on the thermal motion.
There is a slight problem, however. Molecules flowing near the center of the pipe collide mostly with molecules moving in the same direction at about the same velocity, but those that happen to find themselves near the wall will experience frequent collisions with the wall. Since the molecules in the wall of the pipe are not moving in the direction of the flow, they will tend to absorb more kinetic energy than they return, with the result that the gas molecules closest to the wall of the pipe lose some of their forward momentum. Their random thermal motion will eventually take them deeper into the stream, where they will collide with other flowing molecules and slow them down. This gives rise to a resistance to flow known as viscosity ; this is the reason why long gas transmission pipelines need to have pumping stations every 100 km or so.
As you know, liquids such as syrup or honey exhibit smaller viscosities at higher temperatures as the increased thermal energy reduces the influence of intermolecular attractions, thus allowing the molecules to slip around each other more easily. Gases, however, behave in just the opposite way; gas viscosity arises from collision-induced transfer of momentum from rapidly-moving molecules to slow ones that have been released from the boundary layer. The higher the temperature, the more rapidly the molecules move and collide with each other, so the higher the viscosity.
Distribution of gas molecules in a gravitational field
Everyone knows that the air pressure decreases with altitude. This effect is easily understood qualitatively through the kinetic molecular theory. Random thermal motion tends to move gas molecules in all directions equally. In the presence of a gravitational field, however, motions in a downward direction are slightly favored. This causes the concentration, and thus the pressure of a gas to be greater at lower elevations and to decrease without limit at higher elevations.
The pressure at any elevation in a vertical column of a fluid is due to the weight of the fluid above it. This causes the pressure to decrease exponentially with height.
The exact functional relationship between pressure and altitude is known as the barometric distribution law . It is easily derived using first-year calculus. For air at 25°C the pressure P h at any altitude is given by
\[P_h = P_o e^{-0.11h}\]

in which \(P_o\) is the pressure at sea level and \(h\) is the altitude in kilometers.
This is a form of the very common exponential decay law which we will encounter in several different contexts in this course. An exponential decay (or growth) law describes any quantity whose rate of change is directly proportional to its current value, such as the amount of money in a compound-interest savings account or the density of a column of gas at any altitude. The most important feature of any quantity described by this law is that the fractional rate of change of the quantity in question (in this case, Δ P /P or in calculus, dP/P) is a constant. This means that the increase in altitude required to reduce the pressure by half is also a constant, about 6 km in the Earth's case.
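The exponential form of this law is easy to verify numerically. The short Python sketch below (not part of the original lesson; the decay constant 0.11 per km is the value implied by the 6-km halving distance quoted above) evaluates the pressure ratio \(P_h/P_o\):

```python
import math

K = 0.11  # decay constant per km for air near 25 degrees C (implied by ~6 km half-height)

def pressure_ratio(h_km):
    """Fraction of sea-level pressure remaining at altitude h_km (kilometers)."""
    return math.exp(-K * h_km)

half_height = math.log(2) / K        # altitude at which the pressure halves
print(round(half_height, 1))         # ~6.3 km, consistent with "about 6 km"
print(round(pressure_ratio(half_height), 3))  # 0.5 by construction
```

Doubling the altitude to two halving heights leaves one quarter of the sea-level pressure, which is the defining property of exponential decay.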
Because heavier molecules will be more strongly affected by gravity, their concentrations will fall off more rapidly with elevation. For this reason the partial pressures of the various components of the atmosphere will tend to vary with altitude. The difference in pressure is also affected by the temperature; at higher temperatures there is more thermal motion, and hence a less rapid fall-off of pressure with altitude. Owing to atmospheric convection and turbulence, these effects are not observed in the lower part of the atmosphere, but in the uppermost parts of the atmosphere the heavier molecules do tend to drift downward.
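The mass dependence can be made quantitative for an idealized isothermal column, where the halving height is \(\ln 2 \cdot RT/(Mg)\). The Python sketch below is illustrative only (it assumes 298 K and standard gravity, values not taken from the text):

```python
import math

R, g, T = 8.314, 9.81, 298.0   # J mol^-1 K^-1, m s^-2, K (assumed values)

def half_height_km(M):
    """Altitude gain (km) that halves the partial pressure of a gas of molar
    mass M (kg/mol) in an isothermal column: ln(2) * R * T / (M * g)."""
    return math.log(2) * R * T / (M * g) / 1000.0

print(round(half_height_km(0.02896), 1))   # mean air: ~6 km, as in the text
print(round(half_height_km(0.032), 1))     # O2: falls off slightly faster than air
print(round(half_height_km(0.002016), 1))  # H2: falls off far more slowly
```

The comparison shows why, in the absence of mixing, the lighter gases would become relatively enriched at high altitude.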
The ionosphere and radio communication
At very low pressures, mean free paths are sufficiently great that collisions between molecules become rather infrequent. Under these conditions, highly reactive species such as ions, atoms, and molecular fragments that would ordinarily be destroyed on every collision, can persist for appreciable periods of time.
The most important example of this occurs at the top of the Earth's atmosphere, at an altitude of 200 km, at a pressure of about 10 –7 atm. Here the mean free path will be 10 7 times its value at 1 atm, or about 1 m. In this part of the atmosphere, known as the thermosphere , the chemistry is dominated by species such as O, O 2 + and HO, which are formed by the action of intense solar ultraviolet light on the normal atmospheric gases near the top of the stratosphere. The high concentrations of electrically charged species in these regions (sometimes also called the ionosphere ) reflect radio waves and are responsible for around-the-world transmission of mid-frequency radio signals.
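Since the number of molecules per unit volume is proportional to pressure at constant temperature, the mean free path simply scales as \(1/P\). A one-line Python check (using the 1-atm value of about 10⁻⁷ m implied above) reproduces the thermosphere estimate:

```python
LAMBDA_1ATM = 1e-7   # mean free path at 1 atm, ~100 nm (from the text)

def mean_free_path(p_atm):
    """At constant temperature n is proportional to P, so the mean
    free path scales inversely with pressure."""
    return LAMBDA_1ATM / p_atm

print(mean_free_path(1e-7))  # ~1 m at 10^-7 atm, the thermosphere figure quoted above
```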
The ion density in the lower part of the ionosphere (about 80 km altitude) is so great that the radiation from broadcast-band radio stations is absorbed in this region before these waves can reach the reflective high-altitude layers. However, the pressure in this region (known as the D-layer ) is great enough that the ions recombine soon after local sunset, causing the D-layer to disappear and allowing the waves to reflect off of the upper (F-layer) part of the ionosphere. This is the reason that distant broadcast stations can only be heard at night.
6.5: More on Kinetic Molecular Theory
Make sure you thoroughly understand the following essential ideas that are presented below. It is especially important that you know the precise meanings of all the italicized terms in the context of this topic.
- You should be able to sketch the general shape of the Maxwell-Boltzmann plot showing the distribution of molecular velocities. You should also show how these plots are affected by the temperature and by the molar mass.
- Although there is no need for you to be able to derive the ideal gas equation of state , you should understand that the equation PV = nRT can be derived from the principles of kinetic molecular theory, as outlined below.
- Explain the concept of the mean free path of a gas molecule (but no need to reproduce the mathematics.)
In this section, we look in more detail at some aspects of the kinetic-molecular model and how it relates to our empirical knowledge of gases. For most students, this will be the first application of algebra to the development of a chemical model; this should be educational in itself, and may help bring that subject back to life for you! As before, your emphasis should be on understanding these models and the ideas behind them; there is no need to memorize any of the formulas.
The Velocities of Gas Molecules
At temperatures above absolute zero, all molecules are in motion. In the case of a gas, this motion consists of straight-line jumps whose lengths are quite great compared to the dimensions of the molecule. Although we can never predict the velocity of a particular individual molecule, the fact that we are usually dealing with a huge number of them allows us to know what fraction of the molecules have kinetic energies (and hence velocities) that lie within any given range.
The trajectory of an individual gas molecule consists of a series of straight-line paths interrupted by collisions. What happens when two molecules collide depends on their relative kinetic energies; in general, a faster or heavier molecule will impart some of its kinetic energy to a slower or lighter one. Two molecules having identical masses and moving in opposite directions at the same speed will momentarily remain motionless after their collision.
If we could measure the instantaneous velocities of all the molecules in a sample of a gas at some fixed temperature, we would obtain a wide range of values. A few would be near zero, and a few would be very high, but the majority would fall into a more or less well-defined range. We might be tempted to define an average velocity for a collection of molecules, but here we would need to be careful: molecules moving in opposite directions have velocities of opposite signs. Because the molecules in a gas are in random thermal motion, there will be just about as many molecules moving in one direction as in the opposite direction, so the velocity vectors of opposite signs would all cancel and the average velocity would come out to zero. Since this answer is not very useful, we need to do our averaging in a slightly different way.
The proper treatment is to average the squares of the velocities, and then take the square root of this value. The resulting quantity is known as the root mean square , or RMS velocity
\[ v_{rms} = \sqrt{\dfrac{\sum v^2}{n}}\]
which we will denote simply by \(\bar{v}\). The formula relating the RMS velocity to the temperature and molar mass is surprisingly simple, considering the great complexity of the events it represents:
\[ \bar{v}= v_{rms} = \sqrt{\dfrac{3RT}{m}}\]
in which \(m\) is the molar mass in kg mol –1 . (The same relation can be written per molecule by replacing \(R\) with \(k = R \div 6.02 \times 10^{23}\), the “gas constant per molecule,” known as the Boltzmann constant , and \(m\) with the mass of a single molecule.)
What is the average velocity of nitrogen molecules at 300 K?
Solution
The molar mass of N 2 is 28.01 g. Substituting in the above equation and expressing R in energy units, we obtain
\[v^{2}=\frac{3 \times 8.31 \mathrm{J} \mathrm{mol}^{-1} \mathrm{K}^{-1} \times 300 \mathrm{K}}{28.01 \times 10^{-3} \mathrm{kg} \mathrm{mol}^{-1}}=2.67 \times 10^{5} \mathrm{J} \mathrm{kg}^{-1} \nonumber\]
Recalling the definition of the joule (1 J = 1 kg m 2 s –2 ) and taking the square root,
\[\overline{v}=\sqrt{2.67 \times 10^{5} \mathrm{J} \mathrm{kg}^{-1} \times \frac{1 \mathrm{kg} \mathrm{m}^{2} \mathrm{s}^{-2}}{1 \mathrm{J}}}=517 \mathrm{ms}^{-1} \nonumber\]
or
\[517 \mathrm{m} \mathrm{s}^{-1} \times \frac{1 \mathrm{km}}{10^{3} \mathrm{m}} \times \frac{3600 \mathrm{s}}{1 \mathrm{h}}=1860 \mathrm{km} \mathrm{h}^{-1} \nonumber\]
Comment: this is fast! The velocity of a rifle bullet is typically 300-500 m s –1 ; convert to common units to see the comparison for yourself.
A simpler formula for estimating average molecular velocities is
\[v=157 \sqrt{\dfrac{T}{m}}\]
in which \(v\) is in units of meters/sec, \(T\) is the absolute temperature and \(m\) the molar mass in grams.
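As a quick check of both formulas, the following Python sketch (constants rounded; the code itself is not from the original text) reproduces the 517 m s⁻¹ result for N₂ and shows that the shortcut formula agrees to about 1%:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def v_rms(T, molar_mass_g):
    """Exact rms speed sqrt(3RT/M), with M converted from g/mol to kg/mol."""
    return math.sqrt(3 * R * T / (molar_mass_g / 1000.0))

def v_quick(T, molar_mass_g):
    """The shortcut formula 157*sqrt(T/m) from the text (m in grams)."""
    return 157 * math.sqrt(T / molar_mass_g)

print(round(v_rms(300, 28.01)))    # 517 m/s, matching the worked N2 example
print(round(v_quick(300, 28.01)))  # 514 m/s; the shortcut is good to about 1%
```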
The Boltzmann Distribution
If we were to plot the number of molecules whose velocities fall within a series of narrow ranges, we would obtain a slightly asymmetric curve known as a velocity distribution . The peak of this curve would correspond to the most probable velocity. This velocity distribution curve is known as the Maxwell-Boltzmann distribution , but is frequently referred to only by Boltzmann's name. The Maxwell-Boltzmann distribution law was first worked out around 1859 by the great Scottish physicist, James Clerk Maxwell (1831-1879), who is better known for discovering the laws of electromagnetic radiation. Later, the Austrian physicist Ludwig Boltzmann (1844-1906) put the relation on a sounder theoretical basis and simplified the mathematics somewhat. Boltzmann pioneered the application of statistics to the physics and thermodynamics of matter, and was an ardent supporter of the atomic theory of matter at a time when it was still not accepted by many of his contemporaries.
The derivation of the Boltzmann curve is a bit too complicated to go into here, but its physical basis is easy to understand. Consider a large population of molecules having some fixed amount of kinetic energy. As long as the temperature remains constant, this total energy will remain unchanged, but it can be distributed among the molecules in many different ways, and this distribution will change continually as the molecules collide with each other and with the walls of the container.
It turns out, however, that kinetic energy is acquired and handed around only in discrete amounts which are known as quanta . Once the molecule has a given number of kinetic energy quanta, these can be apportioned amongst the three directions of motion in many different ways, each resulting in a distinct total velocity state for the molecule. The greater the number of quanta, (that is, the greater the total kinetic energy of the molecule) the greater the number of possible velocity states. If we assume that all velocity states are equally probable, then simple statistics predicts that higher velocities will be more favored simply because there are so many more of them .
Although the number of possible higher-energy states is greater, the lower-energy states are more likely to be occupied . This is because there is only so much kinetic energy available to the gas as a whole; every molecule that acquires kinetic energy in a collision leaves behind another molecule having less. This tends to even out the kinetic energies in a collection of molecules, and ensures that there are always some molecules whose instantaneous velocity is near zero. The net effect of these two opposing tendencies, one favoring high kinetic energies and the other favoring low ones, is the peaked curve seen above. Notice that because of the asymmetry of this curve, the mean (rms average) velocity is not the same as the most probable velocity, which is defined by the peak of the curve.
At higher temperatures (or with lighter molecules) the latter constraint becomes less important, and the mean velocity increases. But with a wider velocity distribution, the number of molecules having any one velocity diminishes, so the curve tends to flatten out.
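The different characteristic speeds of the distribution are easy to compute. The Python sketch below is illustrative: it uses the standard Maxwell-Boltzmann expressions for the most probable and mean speeds, \(\sqrt{2RT/M}\) and \(\sqrt{8RT/\pi M}\), which the text does not derive, and compares them with the rms speed for N₂ at 300 K:

```python
import math

R, T, M = 8.314, 300.0, 0.02801   # N2 at 300 K; M in kg/mol

v_peak = math.sqrt(2 * R * T / M)               # most probable speed (curve peak)
v_mean = math.sqrt(8 * R * T / (math.pi * M))   # arithmetic mean speed
v_rms  = math.sqrt(3 * R * T / M)               # root-mean-square speed

print(round(v_peak), round(v_mean), round(v_rms))  # 422 476 517 (m/s)
```

The ordering v_peak < v_mean < v_rms is a direct consequence of the asymmetry of the curve discussed above.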
Velocity Distributions Depend on Temperature and Mass
Higher temperatures allow a larger fraction of molecules to acquire greater amounts of kinetic energy, causing the Boltzmann plots to spread out.
Notice how the left ends of the plots are anchored at zero velocity (there will always be a few molecules that happen to be at rest.) As a consequence, the curves flatten out as the higher temperatures make additional higher-velocity states of motion more accessible. The area under each plot is the same for a constant number of molecules.
All gases have the same average kinetic energy ( mv 2 /2) per molecule at the same temperature, so the fraction of molecules with higher velocities will increase as m , and thus the molecular weight, decreases.
Boltzmann Distribution and Planetary Atmospheres
The ability of a planet to retain an atmospheric gas depends on the average velocity (and thus on the temperature and mass) of the gas molecules and on the planet's mass, which determines its gravity and thus the escape velocity. In order to retain a gas for the age of the solar system, the average velocity of the gas molecules should not exceed about one-sixth of the escape velocity. The escape velocity from the Earth is 11.2 km/s, and 1/6 of this is about 2 km/s. Examination of the above plot reveals that hydrogen molecules can easily achieve this velocity, and this is the reason that hydrogen, the most abundant element in the universe, is almost absent from Earth's atmosphere.
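A rough numerical comparison makes the point. The Python sketch below uses 300 K for illustration (the upper atmosphere where escape actually occurs is much hotter, which only strengthens the conclusion for hydrogen):

```python
import math

R = 8.314                # J mol^-1 K^-1
threshold = 11.2e3 / 6   # one-sixth of Earth's escape velocity, ~1.87 km/s

def v_rms(T, M):
    """rms speed sqrt(3RT/M), with M in kg/mol."""
    return math.sqrt(3 * R * T / M)

for name, M in [("H2", 2.016e-3), ("N2", 28.01e-3)]:
    v = v_rms(300.0, M)
    status = "lost over geologic time" if v > threshold else "retained"
    print(name, round(v), "m/s:", status)
```

Even at ground-level temperatures, the rms speed of H₂ already exceeds the retention threshold, while N₂ falls well below it.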
Although hydrogen is not a significant atmospheric component, water vapor is. A very small amount of this diffuses to the upper part of the atmosphere, where intense solar radiation breaks down the H 2 O into H 2 . Escape of this hydrogen from the upper atmosphere amounts to about 2.5 × 10 10 g/year.
Derivation of the Ideal Gas Equation
The ideal gas equation of state came about by combining the empirically determined ("ABC") laws of Avogadro , Boyle , and Charles , but one of the triumphs of the kinetic molecular theory was the derivation of this equation from simple mechanics in the late nineteenth century. This is a beautiful example of how the principles of elementary mechanics can be applied to a simple model to develop a useful description of the behavior of macroscopic matter. We begin by recalling that the pressure of a gas arises from the force exerted when molecules collide with the walls of the container. This force can be found from Newton's law
\[f = ma = m\dfrac{dv}{dt} \label{2.1}\]
in which \(v\) is the velocity component of the molecule in the direction perpendicular to the wall and \(m\) is its mass.
To evaluate the derivative in Equation \ref{2.1}, which is the velocity change per unit time, consider a single molecule of a gas contained in a cubic box of length \( l\) . For simplicity, assume that the molecule is moving along the x -axis which is perpendicular to a pair of walls, so that it is continually bouncing back and forth between the same pair of walls. When the molecule of mass \( m\) strikes the wall at velocity \( +v\) (and thus with a momentum \( mv\) ) it will rebound elastically and end up moving in the opposite direction with –v . The total change in velocity per collision is thus 2 v and the change in momentum is \(2mv\).
The Frequency of Collisions
After the collision the molecule must travel a distance l to the opposite wall, and then back across this same distance before colliding again with the wall in question. This determines the time between successive collisions with a given wall; the number of collisions per second will be \(v/2l\). The force \(F\) exerted on the wall is the rate of change of the momentum, given by the product of the momentum change per collision and the collision frequency:
\[F = \dfrac{d(mv_x)}{dt} = (2mv_x) \times \left( \dfrac{v_x}{2l} \right) = \dfrac{m v_x^2}{l} \label{2-2}\]
Pressure is force per unit area, so the pressure \(P\) exerted by the molecule on the wall of cross-section \(l^2\) becomes
\[ P = \dfrac{mv_x^2}{l^3} = \dfrac{mv_x^2}{V} \label{2-3}\]
in which \(V\) is the volume of the box.
The pressure produced by N molecules
As noted near the beginning of this unit, any given molecule will make about the same number of moves in the positive and negative directions, so taking a simple average would yield zero. To avoid this embarrassment, we square the velocities before averaging them, and then take the square root of the average. This result is known as the root mean square (rms) velocity.
We have calculated the pressure due to a single molecule moving at a constant velocity in a direction perpendicular to a wall. If we now introduce more molecules, we must interpret \(v^2\) as an average value which we will denote by \(\bar{v^2}\). Also, since the molecules are moving randomly in all directions, only one-third of their total velocity will be directed along any one Cartesian axis, so the total pressure exerted by \(N\) molecules becomes
\[ P=\dfrac{N}{3}\dfrac{m \bar{v^2}}{V} \label{2.4}\]
The above statement that "one-third of the total velocity (of all the molecules together)..." does not mean that 1/3 of the molecules themselves are moving in each of these three directions; each individual particle is free to travel in any possible direction between collisions. However, any random trajectory can be regarded as composed of three components that correspond to these three axes.
The red arrow in the illustration depicts the path of a single molecule as it travels from the point of its last collision at the origin (lower left corner). The length of the arrow (which you may recognize as a vector ) is proportional to its velocity. The three components of the molecule's velocity are indicated by the small green arrows. It should be clearly apparent that the trajectory lies mainly along the x and z axes. In the section that follows, Equation \ref{2-5} contains another 1/3 factor that similarly divides the kinetic energy into components along the three axes. This makes sense because the kinetic energy depends on the square of the velocity, which is the sum of the squares of its three components.
The temperature of a gas is a measure of the average translational kinetic energy of its molecules, so we begin by calculating the latter. Recalling that \(m\bar{v^2}/2\) is the average translational kinetic energy \(\epsilon\), we can rewrite Equation \ref{2.4} as
\[PV = \dfrac{1}{3} N m \bar{v^2} = \dfrac{2}{3} N \epsilon \label{2-5}\]
The 2/3 factor in the proportionality reflects the fact that the velocity component along each of the three directions contributes ½ kT to the kinetic energy of the particle. The average translational kinetic energy is directly proportional to temperature:
\[\epsilon = \dfrac{3}{2} kT \label{2.6}\]
in which the proportionality constant k is known as the Boltzmann constant . Substituting this into Equation \ref{2-5} yields
\[ PV = \left( \dfrac{2}{3}N \right) \left( \dfrac{3}{2}kT \right) =NkT \label{2.7}\]
Notice that Equation \ref{2.7} looks very much like the ideal gas equation
\[PV = nRT \nonumber \]
but is not quite the same, however; we have been using capital \(N\) to denote the number of molecules , whereas \(n\) stands for the number of moles . And of course, the proportionality factor is not the gas constant \(R\), but rather the Boltzmann constant, 1.381 × 10 –23 J K –1 . If we multiply \(k\) by Avogadro's number (\(N_A\)), we obtain

\[(1.381 \times 10^{-23}\, \mathrm{J\, K}^{-1}) (6.022 \times 10^{23}\, \mathrm{mol}^{-1}) = 8.314 \,\mathrm{J\, K}^{-1}\,\mathrm{mol}^{-1}\]

Hence, the Boltzmann constant \(k\) is just the gas constant per molecule. So for \(n\) moles of particles, Equation \ref{2.7} turns into our old friend
\[ P V = n R T \label{2.8}\]
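The equivalence of the molecular and molar forms is easy to verify numerically. Here is a short Python sketch with rounded constants (the small residual difference is just rounding of the constants):

```python
N_A = 6.022e23     # Avogadro's number, mol^-1
k   = 1.381e-23    # Boltzmann constant, J K^-1
R   = 8.314        # gas constant, J mol^-1 K^-1

n, T = 2.0, 300.0        # 2 mol of gas at 300 K
N = n * N_A              # the same amount counted in molecules

PV_from_NkT = N * k * T  # Equation 2-7, per-molecule form
PV_from_nRT = n * R * T  # the familiar molar form

print(PV_from_NkT, PV_from_nRT)  # agree to within rounding of the constants
```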
RT has the dimensions of energy
Since the product \(PV\) has the dimensions of energy, so does RT , and this quantity in fact represents the average translational kinetic energy per mole of molecular particles. The relationship between these two energy units can be obtained by recalling that 1 atm is \(1.013\times 10^{5}\, N\, m^{–2}\), so that
\[1\, \mathrm{L\, atm} = 1000\, \mathrm{cm}^{3}\left(\frac{1\, \mathrm{m}^{3}}{10^{6}\, \mathrm{cm}^{3}}\right) \times 1.01325 \times 10^{5}\, \mathrm{N\, m}^{-2} = 101.325\, \mathrm{J}\]
The gas constant \( R\) is one of the most important fundamental constants relating to the macroscopic behavior of matter. It is commonly expressed in both pressure-volume and in energy units:
R = 0.082057 L atm mol –1 K –1 = 8.314 J mol –1 K –1
That is, R expresses the amount of energy per mole per kelvin. As noted above, the Boltzmann constant k , which appears in many expressions relating to the statistical treatment of molecules, is just
R ÷ 6.02E23 = 1.3807 × 10 –23 J K –1 ,
the "gas constant per molecule".
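These unit relationships can be checked in a couple of lines of Python (a sketch with rounded constants, not part of the original lesson):

```python
R_latm = 0.082057        # gas constant in L atm mol^-1 K^-1
J_per_L_atm = 101.325    # 1 L atm in joules (1 atm = 1.01325e5 Pa, 1 L = 1e-3 m^3)

R_joules = R_latm * J_per_L_atm
print(round(R_joules, 3))        # 8.314 J mol^-1 K^-1, as quoted above

k = R_joules / 6.022e23          # the gas constant per molecule
print(k)                         # ~1.3807e-23 J K^-1, the Boltzmann constant
```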
How Far does a Molecule travel between Collisions?
Molecular velocities tend to be very high by our everyday standards (typically around 500 meters per sec), but even in gases, they bump into each other so frequently that their paths are continually being deflected in a random manner, so that the net movement ( diffusion ) of a molecule from one location to another occurs rather slowly. How close can two molecules get?
The average distance a molecule moves between such collisions is called the mean free path (\(\lambda\)), which depends on the number of molecules per unit volume and on their size. To avoid collision, a molecule of diameter σ must trace out a path corresponding to the axis of an imaginary cylinder whose cross-section is \(\pi \sigma^2\). Eventually it will encounter another molecule (extreme right in the diagram below) that has intruded into this cylinder and defines the terminus of its free motion.
The volume of the cylinder is \(\pi \sigma^2 \lambda\). On average, this swept-out volume will contain just one other molecule, so if there are \(n\) molecules per unit volume, then \(n \pi \sigma^2 \lambda = 1\). At each collision the molecule is diverted to a new path and traces out a new exclusion cylinder. Solving for \(\lambda\) and applying a correction factor of \(\sqrt{2}\) to take into account the relative motion of the colliding molecules (the detailed argument for this is too complicated to go into here), we obtain

\[\lambda = \dfrac{1}{\sqrt{2}\, \pi n \sigma^2} \label{3.1}\]
Small molecules such as He, H 2 and CH 4 typically have diameters of a few hundred picometers (roughly 250-400 pm). At STP the value of \(n\), the number of molecules per cubic meter, is
\[\dfrac{6.022 \times 10^{23}\; mol^{-1}}{22.4 \times 10^{-3}\; m^3 \; mol^{-1}} = 2.69 \times 10^{25} \; m^{-3}\]
Substitution into Equation \(\ref{3.1}\) yields a value of around \(10^{ –7}\; m (100\; nm)\) for the mean free path of most molecules under these conditions. Although this may seem like a very small distance, it typically amounts to 100 molecular diameters, and more importantly, about 30 times the average distance between molecules. This explains why so many gases conform very closely to the ideal gas law at ordinary temperatures and pressures.
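The arithmetic behind these estimates can be checked directly. The Python sketch below assumes a molecular diameter of 350 pm (an illustrative value, not taken from the text):

```python
import math

n_stp = 6.022e23 / 22.4e-3   # molecules per m^3 at STP, ~2.69e25
sigma = 3.5e-10              # assumed molecular diameter, ~350 pm

# Mean free path, lambda = 1 / (sqrt(2) * pi * n * sigma^2)
lam = 1.0 / (math.sqrt(2) * math.pi * n_stp * sigma**2)
print(lam)          # ~7e-8 m: order 10^-7 m, as stated in the text
print(lam / sigma)  # ~200 molecular diameters for this assumed sigma
```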
On the other hand, at each collision the molecule can be expected to change direction. Because these changes are random, the net change in location a molecule experiences during a period of one second is typically rather small. Thus in spite of the high molecular velocities, the speed of molecular diffusion in a gas is usually quite small.
6.6: Real Gases and Critical Phenomena
Make sure you thoroughly understand the following essential concepts that are presented below.
- Real gases are subject to the effects of molecular volume (intermolecular repulsive force) and intermolecular attractive forces.
- The behavior of a real gas approximates that of an ideal gas as the pressure approaches zero.
- The effects of non-ideal behavior are best seen when the PV product is plotted as a function of P . You should be able to identify the regions of such a plot in which attractive and repulsive forces dominate.
- Each real gas has its own unique equation of state. Various general equations of state have been devised in which adjustable constants are used to approximate the behavior of a particular gas.
- The most well-known equation of state is that of van der Waals. Although you need not memorize this equation, you should be able to explain the significance of its terms.
The "ideal gas laws" as we know them do a remarkably good job of describing the behavior of a huge number of chemically diverse substances as they exist in the gaseous state under ordinary environmental conditions, roughly around 1 atm pressure and a temperature of 300 K. But when the temperature is reduced, or the pressure is raised, the relation PV = constant (at a given temperature) that defines the ideal gas begins to break down, and its properties become unpredictable; eventually the gas condenses into a liquid. Why is this important? It is of obvious interest to a chemical engineer who needs to predict the properties of the gases involved in a chemical reaction carried out at several hundred atmospheres pressure. This is especially so when we consider that some of the basic tenets of the ideal gas model have to be abandoned in order to explain such properties as
- the average distance between collisions (the molecules really do take up space!)
- at sufficiently high pressures and low temperatures, intermolecular attractions assume control and the gas condenses to a liquid;
- the viscosity of a gas flowing through a pipe (the molecules do get temporarily "stuck" on the pipe surface, and are therefore affected by intermolecular attractive forces.)
Even so, many of the common laws such as Boyle's and Charles' continue to describe these gases quite well even under conditions where these phenomena are evident. Under ordinary environmental conditions (moderate pressures and above 0°C), the isotherms of substances we normally think of as gases don't appear to differ very greatly from the hyperbolic form
\[\dfrac{PV}{RT} = \text{constant} \label{6.6.1}\]
However, over a wider range of conditions, things begin to get more complicated. Thus isopentane (Figure \(\PageIndex{1}\)) behaves in a reasonably ideal manner above 210 K, but below this temperature the isotherms become somewhat distorted, and at 185 K and below they cease to be continuous, showing peculiar horizontal segments in which reducing the volume does not change the pressure.
Within this region, any attempt to compress the gas simply "squeezes" some of it into the liquid state whose greater density exactly compensates for the smaller volume, thus maintaining the pressure at a constant value. It turns out that real gases eventually begin to follow their own unique equations of state, and ultimately even cease to be gases. In this unit we will see why this occurs, what the consequences are, and how we might modify the ideal gas equation of state to extend its usefulness over a greater range of temperatures and pressures.
Effects of Intermolecular Forces
According to Boyle's law , the product PV is a constant at any given temperature, so a plot of PV as a function of the pressure of an ideal gas yields a horizontal straight line. This implies that any increase in the pressure of the gas is exactly counteracted by a decrease in the volume as the molecules are crowded closer together. But we know that the molecules themselves are finite objects having volumes of their own, and this must place a lower limit on the volume into which they can be squeezed. So we must reformulate the ideal gas equation of state as a relation that is true only in the limiting case of zero pressure:
\[\lim_{P \rightarrow 0} PV=nRT \label{6.6.2}\]
So what happens when a real gas is subjected to a very high pressure? The outcome varies with both the molar mass of the gas and its temperature, but in general we can see the effects of both repulsive and attractive intermolecular forces:
- Repulsive forces : As a gas is compressed, the individual molecules begin to get in each other's way, giving rise to a very strong repulsive force that acts to oppose any further volume decrease. We would therefore expect the PV vs P line to curve upward at high pressures, and this is in fact what is observed for all gases at sufficiently high pressures.
- Attractive forces : At very close distances, all molecules repel each other as their electron clouds come into contact. At greater distances, however, brief statistical fluctuations in the distribution of these electron clouds give rise to a universal attractive force between all molecules. The more electrons in the molecule (and thus the greater the molecular weight), the greater is this attractive force. As long as the energy of thermal motion dominates this attractive force, the substance remains in the gaseous state, but at sufficiently low temperatures the attractions dominate and the substance condenses to a liquid or solid.
The universal attractive force described above is known as the dispersion , or London force. There may also be additional (and usually stronger) attractive forces related to charge imbalance in the molecule or to hydrogen bonding. These various attractive forces are often referred to collectively as van der Waals forces . A plot of PV/RT as a function of pressure is a very sensitive indicator of deviations from ideal behavior, since such a plot is just a horizontal line for an ideal gas. Figures \(\PageIndex{2}\) and \(\PageIndex{3}\) demonstrate how these plots vary with the nature of the gas and with temperature, respectively.
Intermolecular attractions , which generally increase with molecular weight, cause the PV product to decrease as higher pressures bring the molecules closer together and thus within the range of these attractive forces; the effect is to cause the volume to decrease more rapidly than it otherwise would. The repulsive forces always eventually win out. However, as the molecules begin to intrude on each other's territory, the stronger repulsive forces cause the curve to bend upward.
The temperature makes a big difference! At higher temperatures, increased thermal motions overcome the effects of intermolecular attractions which normally dominate at lower pressures (Figure \(\PageIndex{3}\)). So all gases behave more ideally at higher temperatures. For any gas, there is a special temperature (the Boyle temperature ) at which attractive and repulsive forces exactly balance each other at zero pressure. As you can see in this plot for methane, some of this balance does remain as the pressure is increased.
The van der Waals Equation of State
How might we modify the ideal gas equation of state to take into account the effects of intermolecular interactions? The first and most well known answer to this question was offered by the Dutch scientist J.D. van der Waals (1837-1923) in 1873. The ideal gas model assumes that the gas molecules are merely points that occupy no volume; the " V " term in the equation is the volume of the container and is independent of the nature of the gas.
van der Waals recognized that the molecules themselves take up space that subtracts from the volume of the container (Figure \(\PageIndex{4}\)), so that the “volume of the gas” V in the ideal gas equation should be replaced by the term (\(V–b\)), in which \(b\) relates to the excluded volume , typically of the order of 20-100 cm 3 mol –1 . The excluded volume surrounding any molecule defines the closest possible approach of any two molecules during collision. Note that the excluded volume is greater than the volume of the molecule, its radius being half again as great as that of a spherical molecule.
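Because \(b\) scales with the actual volume occupied by a mole of molecules, it offers a back-of-the-envelope route to molecular size. The sketch below (Python; it assumes the common textbook approximation \(b = 4 N_A \cdot \frac{4}{3}\pi r^3\), i.e. an excluded volume of four times the molecular volume, and takes the value of \(b\) for water from the table of van der Waals constants later in this section) illustrates the estimate:

```python
import math

N_A = 6.022e23  # Avogadro's number, mol^-1

def radius_from_b(b_L_per_mol):
    """Estimate a molecular radius (m) from the van der Waals constant b.

    Assumes b = 4 * N_A * (4/3) * pi * r**3 (excluded volume taken as
    four times the molecular volume, a common textbook approximation).
    """
    b_m3 = b_L_per_mol * 1e-3  # L/mol -> m^3/mol
    return (3.0 * b_m3 / (16.0 * math.pi * N_A)) ** (1.0 / 3.0)

# b for water: 0.0305 L/mol (from the table of van der Waals constants)
r = radius_from_b(0.0305)
print(f"estimated radius of H2O: {r * 1e12:.0f} pm")  # about 145 pm
```

The result, roughly 145 pm, is of the right order of magnitude for a small molecule, which is the point: \(b\) really is an indirect window on molecular size.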
The other effect that van der Waals needed to correct for is that of the intermolecular attractive forces. These are ignored in the ideal gas model, but in real gases they exert a small cohesive force between the molecules, thus helping to hold the gas together and reducing the pressure it exerts on the walls of the container.
Because this pressure depends on both the frequency and the intensity of collisions with the walls, the reduction in pressure is proportional to the square of the number of molecules per volume of space, and thus for a fixed number of molecules such as one mole, the reduction in pressure is inversely proportional to the square of the volume of the gas. The smaller the volume, the closer are the molecules and the greater will be the effect. The van der Waals equation replaces the \(P\) term in the ideal gas equation with \(P + (a / V^2)\), in which the magnitude of the constant \(a\) increases with the strength of the intermolecular attractive forces.
The complete van der Waals equation of state can be written, for one mole of gas, as

\[\left(P + \dfrac{a}{V^2}\right)(V - b) = RT \nonumber\]
Although most students are not required to memorize this equation, you are expected to understand it and to explain the significance of the terms it contains. You should also understand that the van der Waals constants \(a\) and \(b\) must be determined empirically for every gas. This can be done by plotting the P-V behavior of the gas and adjusting the values of \(a\) and \(b\) until the van der Waals equation results in an identical plot. The constant \(b\) is related in a simple way to the molecular radius; thus the determination of \(b\) constitutes an indirect measurement of an important microscopic quantity.
| Substance | molar mass (g) | a (L²·atm·mol⁻²) | b (L·mol⁻¹) |
|---|---|---|---|
| hydrogen H 2 | 2 | 0.244 | 0.0266 |
| helium He | 4 | 0.034 | 0.0237 |
| methane CH 4 | 16 | 2.25 | 0.0428 |
| water H 2 O | 18 | 5.46 | 0.0305 |
| nitrogen N 2 | 28 | 1.39 | 0.0391 |
| carbon dioxide CO 2 | 44 | 3.59 | 0.0427 |
| carbon tetrachloride CCl 4 | 154 | 20.4 | 0.1383 |
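To see how the two correction terms trade off in practice, here is a minimal Python sketch (illustrative only) comparing the ideal-gas pressure with the van der Waals pressure for one mole of CO 2 confined to 1.00 L at 298 K, using the constants from the table above:

```python
R = 0.08206  # gas constant, L·atm·K⁻¹·mol⁻¹

def p_ideal(n, V, T):
    """Ideal-gas pressure, atm (V in L, T in K)."""
    return n * R * T / V

def p_vdw(n, V, T, a, b):
    """van der Waals pressure: P = nRT/(V - nb) - a*n^2/V^2."""
    return n * R * T / (V - n * b) - a * n ** 2 / V ** 2

# CO2 constants from the table above
a, b = 3.59, 0.0427
Pi = p_ideal(1.0, 1.0, 298.0)
Pw = p_vdw(1.0, 1.0, 298.0, a, b)
print(f"ideal: {Pi:.1f} atm, van der Waals: {Pw:.1f} atm")
# ideal: 24.5 atm, van der Waals: 22.0 atm
```

At this density the attraction term \(a/V^2\) outweighs the excluded-volume correction, so the predicted pressure is lower than the ideal value; at much smaller volumes the \((V-b)\) term takes over and the inequality reverses, just as the PV/RT plots suggest.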
The van der Waals equation is only one of many equations of state for real gases. More elaborate equations are required to describe the behavior of gases over wider pressure ranges. These generally take account of higher-order nonlinear attractive forces, and require the use of more empirical constants. Although we will make no use of them in this course, they are widely employed in chemical engineering work in which the behavior of gases at high pressures must be accurately predicted.
Condensation and the Critical Point
The most striking feature of real gases is that they cease to remain gases as the temperature is lowered and the pressure is increased. Figure \(\PageIndex{6}\) illustrates this behavior; as the volume is decreased, the lower-temperature isotherms suddenly change into straight lines. Under these conditions, the pressure remains constant as the volume is reduced. This can only mean that the gas is “disappearing” as we squeeze the system down to a smaller volume. In its place, we obtain a new state of matter, the liquid. In the green-shaded region, two phases, liquid and gas, are simultaneously present. Finally, at very small volumes all the gas has disappeared and only the liquid phase remains. At this point the isotherms bend strongly upward, reflecting our common experience that a liquid is practically incompressible.
To better understand this plot, follow one of the lower-temperature isotherms. As the gas is first compressed, the pressure rises in much the same way as Boyle's law predicts. Beyond a certain volume, however, further compression does not cause any rise in the pressure. What happens instead is that some of the gas condenses to a liquid. Once the substance is entirely in its liquid state, the isotherm rises very steeply, corresponding to our ordinary experience that liquids have very low compressibilities. The range of volumes possible for the liquid diminishes as the critical temperature is approached.
The Critical Point
Liquid and gas can coexist only within the regions indicated by the green-shaded area in the diagram above. As the temperature and pressure rise, this region becomes narrower, finally shrinking to zero width at the critical point . The values of P, T , and V at this juncture are known as the critical constants P c , T c , and V c . The isotherm that passes through the critical point is called the critical isotherm . Beyond this isotherm, gas and liquid become indistinguishable; there is only a single fluid phase, sometimes referred to as a supercritical fluid (Figure \(\PageIndex{7}\)).
At temperatures above 31°C (the critical temperature ), CO 2 acts somewhat like an ideal gas even at rather high pressures. Below 31°, an attempt to compress the gas to a smaller volume eventually causes condensation to begin. Thus at 21°C, at a pressure of about 62 atm, the volume can be reduced from 200 cm³ to about 55 cm³ without any further rise in the pressure. Instead of the gas being compressed, it is replaced with the far more compact liquid as the gas is essentially being "squeezed" into its liquid phase. After all of the gas has disappeared, the pressure rises very rapidly because now all that remains is an almost incompressible liquid. Above the critical isotherm, CO 2 exists only as a supercritical fluid .
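The van der Waals model actually predicts the critical constants: requiring that the critical isotherm have a horizontal inflection at the critical point (\(\partial P/\partial V = \partial^2 P/\partial V^2 = 0\)) yields the standard results \(V_c = 3b\), \(T_c = 8a/(27Rb)\), and \(P_c = a/(27b^2)\). A quick numerical check in Python (a sketch, using the CO 2 constants from the table of van der Waals constants above) comes close to the 31°C critical temperature quoted here:

```python
R = 0.08206  # gas constant, L·atm·K⁻¹·mol⁻¹

def critical_constants(a, b):
    """Critical constants predicted by the van der Waals equation:
    Vc = 3b, Tc = 8a/(27*R*b), Pc = a/(27*b^2)."""
    Vc = 3.0 * b                   # L/mol
    Tc = 8.0 * a / (27.0 * R * b)  # K
    Pc = a / (27.0 * b ** 2)       # atm
    return Vc, Tc, Pc

# CO2 constants from the table above
Vc, Tc, Pc = critical_constants(3.59, 0.0427)
print(f"Tc = {Tc - 273.15:.1f} °C, Pc = {Pc:.1f} atm, Vc = {Vc * 1000:.0f} cm³/mol")
# Tc = 30.4 °C, Pc = 72.9 atm, Vc = 128 cm³/mol
```

The predicted T c and P c agree well with the measured values for CO 2 (31°C, 72.8 atm); the predicted V c is the least accurate of the three, a known limitation of the model.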
What happens if you have some liquid carbon dioxide in a transparent cylinder at just under its critical pressure, and you then compress it slightly? Nothing very dramatic, until you notice that the meniscus has disappeared. By successively reducing and increasing the pressure, you can "turn the meniscus on and off".
One intriguing consequence of the very limited bounds of the liquid state is that you could start with a gas at large volume and low temperature, raise the temperature, reduce the volume, and then reduce the temperature so as to arrive at the liquid region at the lower left, without ever passing through the two-phase region, and thus without undergoing condensation!
Supercritical fluids
The supercritical state of matter, as the fluid above the critical point is often called, possesses the flow properties of a gas and the solvent properties of a liquid. The density of a supercritical fluid can be changed over a wide range by adjusting the pressure; this, in turn, changes its solubility, which can thus be optimized for a particular application. The picture at the right shows a commercial laboratory device used for carrying out chemical reactions under supercritical conditions.
Supercritical carbon dioxide is widely used to dissolve the caffeine out of coffee beans and as a dry-cleaning solvent. Supercritical water has recently attracted interest as a medium for chemically decomposing dangerous environmental pollutants such as PCBs. Supercritical fluids are increasingly employed as substitutes for organic solvents (so-called "green chemistry") in a range of industrial and laboratory processes. Applications that involve supercritical fluids include extractions, nanoparticle and nanostructured film formation, supercritical drying, carbon capture and storage, and enhanced oil recovery.
7: Solids and Liquids
The pages present an overview of the condensed states of matter. Although there is more detail than can be found in standard textbooks, the level is still suitable for first-year college and advanced high school courses. These pages should also be helpful as review material for students in more advanced courses in chemistry, geology, and materials science.
- 7.1: Matter under the Microscope
  Gases, liquids, and especially solids surround us and give form to our world. Chemistry at its most fundamental level is about atoms and the forces that act between them to form larger structural units. But the matter that we experience with our senses is far removed from this level. This unit will help you see how these macroscopic properties of matter depend on the microscopic particles of which it is composed.
- 7.2: Intermolecular Interactions
  Liquids and solids differ from gases in that they are held together by forces that act between the individual molecular units of which they are composed. In this lesson we will take a closer look at these forces so that you can more easily understand, and in many cases predict, the diverse physical properties of the many kinds of solids and liquids we encounter in the world.
- 7.4: Liquids and their Interfaces
  The molecular units of a liquid, like those of solids, are in direct contact, but never for any length of time and in the same locations. Rapid chemical change requires intimate contact between the agents undergoing reaction, but these agents, along with the reaction products, must be free to move away to allow new contacts and further reaction to take place. This is why so much of what we do with chemistry takes place in the liquid phase.
- 7.5: Changes of State
  A given substance will exist in the form of a solid, liquid, or gas, depending on the temperature and pressure. In this unit, we will learn what common factors govern the preferred state of matter under a particular set of conditions, and we will examine the way in which one phase gives way to another when these conditions change.
- 7.6: Introduction to Crystals
  Crystallography is of importance not only to chemists and physicists, but also to geologists, amateur mineralogists and "rock-hounds". In this lesson we will see how the external shape of a crystal can reveal much about the underlying arrangement of its constituent atoms, ions, or molecules.
- 7.7: Ionic and Ion-Derived Solids
  In this section we deal mainly with a very small but important class of solids that are commonly regarded as composed of ions. We will see how the relative sizes of the ions determine the energetics of such compounds. And finally, we will point out that not all solids that are formally derived from ions can really be considered "ionic" at all.
- 7.8: Cubic Lattices and Close Packing
  When substances form solids, they tend to pack together to form ordered arrays of atoms, ions, or molecules that we call crystals. Why does this order arise, and what kinds of arrangements are possible? We will limit our discussion to cubic crystals, which form the simplest and most symmetric of all the lattice types. Cubic lattices are also very common; they are formed by many metallic crystals, and also by most of the alkali halides, several of which we will study as examples.
- 7.9: Polymers and Plastics
  Synthetic polymers, which include the large group known as plastics, came into prominence in the early twentieth century. Chemists' ability to engineer them to yield a desired set of properties (strength, stiffness, density, heat resistance, electrical conductivity) has greatly expanded the many roles they play in the modern industrial economy. This module deals mostly with synthetic polymers, but includes a synopsis of some of the more important natural polymers.
- 7.10: Colloids and their Uses
  Colloids occupy an intermediate place between particulate suspensions and solutions, both in terms of their observable properties and particle size. In a sense, they bridge the microscopic and the macroscopic. As such, they possess some of the properties of both, which makes colloidal matter highly adaptable to specific uses and functions. Colloid science is central to biology, food science and numerous consumer products.
7.1: Matter under the Microscope
- State the major feature that characterizes a condensed state of matter.
- Describe some of the major observable properties that distinguish gases, liquid and solids, and state their relative magnitudes in these three states of matter.
- Describe the dominant forces and the resulting physical properties that distinguish ionic, covalent, metallic, and molecular solids.
- Explain the difference between crystalline and amorphous solids, and cite some examples of each.
- Name some of the basic molecular units from which solids of different type can be composed.
- What is meant by an "extended" or "infinite-molecule" solid?
- Describe some of the special properties of graphite and their structural basis.
Gases, liquids, and especially solids surround us and give form to our world. Chemistry at its most fundamental level is about atoms and the forces that act between them to form larger structural units. But the matter that we experience with our senses is far removed from this level. This unit will help you see how these macroscopic properties of matter depend on the microscopic particles of which it is composed.
Solids, Liquids and Gases
What distinguishes solids, liquids, and gases– the three major states of matter — from each other? Let us begin at the microscopic level, by reviewing what we know about gases, the simplest state in which matter can exist. At ordinary pressures, the molecules of a gas are so far apart that intermolecular forces have an insignificant effect on the random thermal motions of the individual particles. As the temperature decreases and the pressure increases, intermolecular attractions become more important, and there will be an increasing tendency for molecules to form temporary clusters. These are so short-lived, however, that even under extreme conditions, gases cannot be said to possess “structure” in the usual sense.
The contrast at the microscopic level between solids, liquids and gases is most clearly seen in the simplified schematic views above. The molecular units of crystalline solids tend to be highly ordered, with each unit occupying a fixed position with respect to the others. In liquids, the molecules are able to slip around each other, introducing an element of disorder and creating some void spaces that decrease the density. Gases present a picture of almost total disorder, with practically no restrictions on where any one molecule can be.
Having lived our lives in a world composed of solids, liquids, and gases, few of us ever have any difficulty deciding into which of these categories a given sample of matter falls. Our decision is most commonly based on purely visual cues:
- a gas has no definite boundaries other than those that might be imposed by the walls of a confining vessel.
- Liquids and solids possess a clearly delineated phase boundary that gives solids their definite shapes and whose light-reflecting properties enable us to distinguish one phase from another.
- Solids can have any conceivable shape, and their surfaces are usually too irregular to show specular (mirror-like) reflection of light. Liquids, on the other hand, are mobile ; except when in the form of tiny droplets, liquids have no inherent shape of their own, but assume the shape of their container and show an approximately flat upper surface.
Our experience also tells us that these categories are quite distinct; a phase , which you will recall is a region of matter having uniform intensive properties, is either a gas, a liquid, or a solid. Thus the three states of matter are not simply three points on a continuum; when an ordinary solid melts, it usually does so at a definite temperature, without apparently passing through any states that are intermediate between a solid and a liquid.
Limiting Behavior
Although these common-sense perceptions are usually correct, they are not infallible. There are solids, such as glasses and many plastics, that do not have sharp melting points but instead undergo a gradual transition from solid to liquid known as softening; and when subjected to enough pressure, solids can exhibit something of the flow properties of liquids (glacial ice, for example).
A more scientific approach would be to compare the macroscopic physical properties of the three states of matter, but even here we run into difficulty. It is true, for example, that the density of a gas is usually about a thousandth of that of the liquid or solid at the same temperature and pressure; thus one gram of water vapor at 100°C and 1 atm pressure occupies a volume of 1671 mL; when it condenses to liquid water at the same temperature, it occupies only 1.043 mL.
| Phase | Molar volume of neon |
|---|---|
| gas (0 °C, 1 atm) | 22,400 cm³/mol total volume (42 cm³/mol excluded volume) |
| liquid | 16.8 cm³/mol |
| solid | 13.9 cm³/mol |
Table \(\PageIndex{1}\) compares the molar volumes of neon in its three states. For the gaseous state, P = 1 atm and T = 0°C. The excluded volume is the volume actually taken up by the neon atoms according to the van der Waals equation of state model. It is this extreme contrast with the gaseous states that leads to the appellation “ condensed states of matter ” for liquids and solids. However, gases at very high pressures can have densities that exceed those of other solid and liquid substances, so density alone is not a sufficiently comprehensive criterion for distinguishing between the gaseous and condensed states of matter. Similarly, the density of a solid is usually greater than that of the corresponding liquid at the same temperature and pressure, but not always: you have certainly seen ice floating on water!
Compare the density of gaseous xenon (molar mass 131 g) at 100 atm and 0°C with that of a hydrocarbon liquid for which \(ρ = 0.104\, g/mL\) at the same temperature.
Solution
For simplicity, we will assume that xenon approximates an ideal gas under these conditions, which it really does not.
The ideal molar volume at 0°C and 1 atm is 22.4 L; at 100 atm, this would be reduced to 0.224 L or 224 mL, giving a density
\[ρ = \dfrac{131\, g}{224\, mL} = 0.58\, g/mL. \nonumber\]
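The arithmetic of this example is easily checked with a few lines of Python (treating xenon as an ideal gas, as the example does):

```python
R = 0.08206             # gas constant, L·atm·K⁻¹·mol⁻¹

M = 131.0               # molar mass of xenon, g/mol
P, T = 100.0, 273.15    # 100 atm, 0 °C
V = R * T / P           # ideal molar volume, L
rho = M / (V * 1000.0)  # density, g/mL
print(f"molar volume: {V * 1000:.0f} mL, density: {rho:.2f} g/mL")
# molar volume: 224 mL, density: 0.58 g/mL
```

The compressed gas is thus several times denser than the hydrocarbon liquid quoted in the example, which is the point of the comparison.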
In his autobiographical Uncle Tungsten, the physician/author Oliver Sacks describes his experience with xenon-filled balloons of "astonishing density — as near to 'lead balloons' as could be [imagined]. If one twirled these xenon balloons in one's hand, then stopped, the heavy gas, by its own momentum, would continue rotating for a minute, almost as if it were a liquid."
Other physical properties, such as the compressibility, surface tension, and viscosity, are somewhat more useful for distinguishing between the different states of matter. Even these, however, provide no well-defined dividing lines between the various states. Rather than try to develop a strict scheme for classifying the three states of matter, it will be more useful to simply present a few generalizations.
| property | gas | liquid | solid |
|---|---|---|---|
| density | very small | large | large |
| thermal expansion coefficient | large (= R/P) | small | small |
| cohesiveness | nil | small | large |
| surface tension | nil | medium | very large |
| viscosity | small | medium | very large |
| kinetic energy per molecule | large | small | smaller |
| disorder | random | medium | small |
Condensed States of Matter
Some of these deal with macroscopic properties (that is, properties such as the density that relate to bulk matter ), and others with microscopic properties that refer to the individual molecular units. Even the most casual inspection of the above table shows that solids and liquids possess an important commonality that distinguishes them from gases: in solids and liquids, the molecules are in contact with their neighbors . As a consequence, these condensed states of matter generally possess much higher densities than gases.
In our study of gases, we showed that the macroscopic properties of a gas (the pressure, volume, and temperature) are related through an equation of state , and that for the limiting case of an ideal gas, this equation of state can be derived from the relatively small set of assumptions of the kinetic molecular theory. Because a volume of gas consists mostly of empty space, intermolecular interactions can be largely neglected, all gases have very similar properties, and simple equations of state work well. In condensed matter, these interactions dominate, and they tend to be unique to each particular substance, so there is no such thing as a generally useful equation of state for liquids and solids.
Is there a somewhat more elaborate theory that can predict the behavior of the other two principal states of matter, liquids and solids? Very simply, the answer is "no"; despite much effort, no one has yet been able to derive a general equation of state for condensed states of matter. The best one can do is to construct models based on the imagined interplay of attractive and repulsive forces, and then test these models by computer simulation. Nevertheless, the very factors that would seem to make an equation of state for liquids and solids impossibly complicated also give rise to new effects that are easily observed, and which ultimately define the distinguishing characteristics of the gaseous, liquid, and solid states of matter. In this unit, we will try to learn something about these distinctions, and how they are affected by the chemical constitution of a substance.
Liquids
Crystalline solids and gases stand at the two extremes of the spectrum of perfect order and complete chaos. Liquids display elements of both qualities, though in limited and imperfect ways. Like solids, liquids have their molecular units in direct contact, as discussed in the previous section on condensed states of matter. At the same time, liquids, like gases, are fluids, meaning that their molecular units can move more or less independently of each other. But whereas the volume of a gas depends entirely on the pressure (and thus generally on the volume within which it is confined), the volume of a liquid is largely independent of the pressure. Here we offer just enough to help you see how liquids relate to the other major states of matter.
Solids
Of the four ancient elements of "fire, air, earth and water", it is the many forms of solids ("earths") that we encounter in daily life and which give form, color and variety to our visual world. The solid state, being the form of any substance that prevails at lower temperatures, is one in which thermal motion plays an even smaller role than in liquids. The thermal kinetic energy that the individual molecular units do have at temperatures below their melting points allows them to oscillate around a fixed center whose location is determined by the balance between local forces of attraction and repulsion due to neighboring units, but only very rarely will a molecule jump out of the fixed space allotted to it in the lattice. Thus solids, unlike liquids, exhibit long-range order , cohesiveness, and rigidity , and possess definite shapes .
Classification of solids
Most people who have lived in the world long enough to read this have already developed a rough way of categorizing solids on the basis of macroscopic properties they can easily observe; everyone knows that a piece of metal is fundamentally different from a rock or a chunk of wood. Unfortunately, nature's ingenuity is far too versatile to fit into any simple system of classifying solids, especially those composed of more than a single chemical substance.
Classification Scheme 1: According to bond type
The most commonly used classification is based on the kinds of forces that join the molecular units of a solid together. We can usually distinguish four major categories on the basis of properties such as general appearance, hardness, and melting point.
| type of solid | molecular units | binding forces | typical properties |
|---|---|---|---|
| ionic | ions | coulombic | high-melting, hard, brittle |
| covalent | atoms of electronegative elements | chemical bonds | non-melting (decompose), extremely hard |
| metallic | atoms of electropositive elements | mobile electrons | moderate-to-high melting, deformable, conductive, metallic luster |
| molecular | molecules | van der Waals | low-to-moderate mp, low hardness |
It's important to understand that these four categories are in a sense idealizations that fail to reflect the diversity found in nature. The triangular diagram shown here illustrates this very nicely by displaying examples of binary compounds whose properties suggest that they fall somewhere other than at a vertex of the triangle.
The triangle shown above excludes what is probably the largest category: molecular solids that are bound by van der Waals forces. One way of including these is to expand the triangle to a tetrahedron (the so-called Laing tetrahedron ). Although this illustrates the concept, it is visually awkward to include many examples of the intermediate cases.
Classification Scheme 2: By type of Molecular Unit
Solids, like the other states of matter, can be classified according to whether their fundamental molecular units are atoms, electrically-neutral molecules, or ions. But solids possess an additional property that gases and liquids do not: an enduring structural arrangement of their molecular units. Over-simplifying only a bit, we can draw up a rough classification of solids according to the following scheme:
| structural arrangement | atoms as units | molecules as units |
|---|---|---|
| array of discrete units | noble gas solids, metals | molecular solids |
| array of linked units | metals and covalent solids | "extended molecule" compounds |
| disordered arrangement | alternative forms of some elements (e.g. S, Se) | polymers, glasses |
Classification Scheme 3: Classification by Dominant Attractive Force
Notice how the boiling points in the following selected examples reflect the major type of attractive force that binds the molecular units together. Bear in mind, however, that more than one type of attractive force can be operative in many substances.
| substance | bp °C | molecular units | dominant attractive force | separation distance (pm) | attraction energy (kJ/mol) |
|---|---|---|---|---|---|
| sodium fluoride | 990 | Na + F – | coulombic | 18.8 | 657 |
| sodium hydroxide | 318 | Na + OH – | ion-dipole | 21.4 | 90.4 |
| water | 100 | H 2 O | dipole-dipole | 23.7 | 20.2 |
| neon | –249 | Ne | dispersion | 33.0 | 0.26 |
Crystalline Solids
In a solid comprised of identical molecular units, the most favored (lowest potential energy) locations occur at regular intervals in space. If each of these locations is actually occupied, the solid is known as a perfect crystal. What really defines a crystalline solid is that its structure is composed of repeating unit cells each containing a small number of molecular units bearing a fixed geometric relation to one another. The resulting long-range order defines a three-dimensional geometric framework known as a lattice.
Geometric theory shows that only fourteen different types of lattices are possible in three dimensions, and that just six different unit cell arrangements can generate these lattices. The regularity of the external faces of crystals, which in fact correspond to lattice planes, reflects the long-range order inherent in the underlying structure. Perfection is no more attainable in a crystal than in anything else; real crystals contain defects of various kinds, such as lattice positions that are either vacant or occupied by impurities, or by abrupt displacements or dislocations of the lattice structure. Most pure substances, including the metallic elements, form crystalline solids. But there are some important exceptions.
Metallic Solids
In metals the valence electrons are free to wander throughout the solid, instead of being localized on one atom and shared with a neighboring one. The valence electrons behave very much like a mobile fluid in which the fixed lattice of atoms is immersed. This provides the ultimate in electron sharing, and creates a very strong binding effect in solids composed of elements that have the requisite number of electrons in their valence shells. The characteristic physical properties of metals such as their ability to bend and deform without breaking, their high thermal and electrical conductivities and their metallic sheen are all due to the fluid-like behavior of the valence electrons.
Molecular solids
Recall that a "molecule" is defined as a discrete aggregate of atoms bound together sufficiently tightly (that is, by directed covalent forces) to allow it to retain its individuality when the substance is dissolved, melted, or vaporized.
The two words italicized in the preceding sentence are important; covalent bonding implies that the forces acting between atoms within the molecule are much stronger than those acting between molecules, and the directional property of covalent bonding confers on each molecule a distinctive shape which affects a number of its properties. Most compounds of carbon, and therefore most chemical substances, fall into this category.
Many simpler compounds also form molecules; H 2 O, NH 3 , CO 2 , and PCl 5 are familiar examples. Some of the elements, such as H 2 , O 2 , O 3 , P 4 and S 8 also occur as discrete molecules. Liquids and solids that are composed of molecules are held together by van der Waals forces , and many of their properties reflect this weak binding. Thus molecular solids tend to be soft or deformable, have low melting points, and are often sufficiently volatile to evaporate (sublime) directly into the gas phase; the latter property often gives such solids a distinctive odor.
Iodine
Iodine is a good example of a volatile molecular crystal. The solid (mp 114 °C, bp 184 °C) consists of I 2 molecules bound together only by dispersion forces. If you have ever worked with solid iodine in the laboratory, you will probably recall the smell and sight of its purple vapor, which is easily seen in a closed container.
Because dispersion forces and the other van der Waals forces increase with the number of atoms, larger molecules are generally less volatile, and have higher melting points, than do the smaller ones. Also, as one moves down a column in the periodic table, the outer electrons are more loosely bound to the nucleus, increasing the polarizability of the atom and thus its susceptibility to van der Waals-type interactions. This effect is particularly apparent in the progression of the boiling points of the successively heavier noble gas elements.
Covalent Solids
These are a class of extended-lattice compounds (see Section 6 below) in which each atom is covalently bonded to its nearest neighbors. This means that the entire crystal is in effect one super-giant “molecule”. The extraordinarily strong binding forces that join all adjacent atoms account for the extreme hardness of such substances; these solids cannot be broken or abraded without cleaving a large number of covalent chemical bonds. Similarly, a covalent solid cannot “melt” in the usual sense, since the entire crystal is its own giant molecule. When heated to very high temperatures, these solids usually decompose into their elements.
Diamond
Diamond is the hardest material known, defining the upper end of the 1-10 scale known as Mohs Hardness . Diamond cannot be melted; above 1700°C it is converted to graphite, the more stable form of carbon.
The diamond unit cell is face-centered cubic and contains 8 carbon atoms. The four darkly shaded ones are contained within the cell and are completely bonded to other members of the cell. The other carbon atoms (6 in faces and 8 at corners) have some bonds that extend to atoms in other cells. (Two of the carbons nearest the viewer are shown as open circles in order to more clearly reveal the bonding arrangement.)
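The bookkeeping behind the count of 8 atoms per cell can be sketched in a few lines, using the standard sharing fractions for a cubic cell (a corner atom is shared among 8 cells, a face atom between 2, an interior atom belongs to 1):

```python
# Count the carbon atoms belonging to one conventional diamond unit cell.
# Sharing fractions: corner atom -> 1/8, face atom -> 1/2, interior atom -> 1.
corner, face, interior = 8, 6, 4   # atom positions in a cubic diamond cell

atoms_per_cell = corner * (1/8) + face * (1/2) + interior * 1
print(atoms_per_cell)   # 8.0
```

The same accounting applied to a plain face-centered cubic lattice (no interior atoms) gives the familiar 4 atoms per cell.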
Other covalent solids
Boron nitride BN is similar to carbon in that it exists as a diamond-like cubic polymorph as well as in a hexagonal form analogous to graphite. Cubic BN is the second hardest material after diamond, and finds use in industrial abrasives and cutting tools. Recent interest in BN has centered on its carbon-like ability to form nanotubes and related nanostructures.
Silicon carbide SiC is also known as carborundum . Its structure is very much like that of diamond with every other carbon replaced by silicon. On heating at atmospheric pressure, it decomposes at 2700°C, but has never been observed to melt. Structurally, it is very complex; at least 70 crystalline forms have been identified. Its extreme hardness and ease of synthesis have led to a diversity of applications — in cutting tools and abrasives, high-temperature semiconductors and other high-temperature applications, manufacture of specialty steels, jewelry, and many more. Silicon carbide is an extremely rare mineral on the earth, and comes mostly from meteorites which are believed to have their origins in carbonaceous stars. The first synthetic SiC was made accidentally by E.G. Acheson in 1891, who immediately recognized its industrial prospects and founded the Carborundum Co.
Tungsten carbide WC is probably the most widely-encountered covalent solid owing to its use in "carbide" cutting tools and as the material used to make the rotating balls in ball-point pens. Its high-melting (2870°C) form has a structure similar to that of diamond and is only slightly less hard. In many of its applications it is embedded in a softer matrix of cobalt or coated with titanium compounds.
Amorphous Solids
In some solids there is so little long-range order that the substance cannot be considered crystalline at all; such a solid is said to be amorphous . Amorphous solids possess short-range order but are devoid of any organized structure over longer distances; in this respect they resemble liquids. However, their rigidity and cohesiveness allow them to retain a definite shape, so for most practical purposes they can be considered to be solids.
The term glasses refers generally to solids, formed from their melts, that do not return to their crystalline forms on cooling but instead set into hard, often transparent amorphous solids. Although some organic substances such as sugar can form glasses ("rock candy"), the term more commonly describes inorganic compounds, especially those based on silica , SiO 2 . Natural silica-based glasses, known as obsidian , are formed when certain volcanic magmas cool rapidly.
Ordinary glass is composed mostly of SiO 2 , which usually exists in nature in a crystalline form known as quartz . If quartz (in the form of sand) is melted and allowed to cool, it becomes so viscous that the molecules are unable to move to the low potential energy positions they would occupy in the crystal lattice, so that the disorder present in the liquid gets “frozen into” the solid. In a sense, glass can be regarded as a supercooled liquid. Glasses are transparent because the distances over which disorder appears are small compared to the wavelength of visible light, so there is nothing to scatter the light and produce cloudiness.
Ordinary glass is made by melting silica sand to which has been added some calcium and sodium carbonates. These additives reduce the melting point and make it more difficult for the SiO 2 molecules to arrange themselves into crystalline order as the mixture cools. Glass is believed to have first been made in the Middle East at least as early as 3000 BCE. Its workability and ease of coloring have made it one of mankind's most important and versatile materials.
Types of molecular units
Molecules
Molecules, not surprisingly, are the most common building blocks of pure substances. Most of the 15-million-plus chemical substances presently known exist as distinct molecules. Chemists commonly divide molecular compounds into "small" and "large-molecule" types, the latter usually falling into the class of polymers (see below.) The dividing line between the two categories is not very well defined, and tends to be based more on the properties of the substance and how it is isolated and purified.
Atoms
We usually think of atoms as the building blocks of molecules, so the only pure substances that consist of plain atoms are those of some of the elements — mostly the metallic elements, and also the noble-gas elements. The latter do form liquids and crystalline solids, but only at very low temperatures. Although the metallic elements form crystalline solids that are essentially atomic in nature, the special properties that give rise to their "metallic" nature puts them into a category of their own. Most of the non-metallic elements exist under ordinary conditions as small molecules such as O 2 or S 6 , or as extended structures that can have a somewhat polymeric nature. Many of these elements can form more than one kind of structure, each one stable under different ranges of temperature and pressure. Multiple structures of the same element are known as allotropes , although the more general term polymorph is now preferred.
Ions
Ions, you will recall, are atoms or molecules that have one or more electrons missing (positive ions) or in excess (negative ions), and therefore possess an electric charge. A basic law of nature, the electroneutrality principle , states that bulk matter cannot acquire more than a trifling (and chemically insignificant) net electric charge. So one important thing to know about ions is that in ordinary matter, whether in the solid, liquid, or gaseous state, any positive ions must be accompanied by a compensating number of negative ions. Ionic substances such as sodium chloride form crystalline solids that can be regarded as made of ions. These solids tend to be quite hard and have high melting points, reflecting the strong forces between oppositely-charged ions. Solid metal oxides such as CaO and MgO have even higher melting points (above 2500°C), reflecting the still stronger attractions between their doubly-charged ions.
Polymers
Plastics and natural materials such as rubber or cellulose are composed of very large molecules called polymers ; many important biomolecules are also polymeric in nature. Owing to their great length, these molecules tend to become entangled in the liquid state, and are unable to separate to form a crystal lattice on cooling. In general, it is very difficult to get such substances to form anything other than amorphous solids.
Extended solids
Many substances actually exist in their solid forms as linked assemblies of these basic units, arranged in chains or layers that extend indefinitely in one, two, or three dimensions. Thus the very simple models of chemical bonding that apply to the isolated molecules in gaseous form must be modified to account for bonding in some of these solids. The terms "one-dimensional" and "two-dimensional", commonly employed in this context, should more accurately be prefixed by " quasi- "; after all, even a single atom occupies three-dimensional space!
One-dimensional solids
Atoms of some elements such as sulfur and selenium can bond together in long chains of indefinite length, thus forming polymeric, amorphous solids. The best known of these is the amorphous "plastic sulfur" formed when molten sulfur is cooled rapidly by pouring it into water; the rubber-like strands revert to ordinary crystalline sulfur after a few days. These are never the most common (or stable) forms of these elements, which prefer to form discrete molecules.

But small molecules can also form extended chains. Sulfur trioxide is a gas above room temperature, but when it freezes at 17°C the solid forms long chains in which each S atom is coordinated to four oxygen atoms.
Multi-dimensional solids
Many inorganic substances form crystalline solids which are built up from parallel chains in which the basic formula units are linked by weak bonds involving dipole-dipole and dipole-induced dipole interactions. Neighboring chains are bound mainly by dispersion forces.
Layer or sheet-like structures
Solid cadmium chloride is a good example of a layer structure. The Cd and Cl atoms occupy separate layers; each of these layers extends out in a third dimension to form a sheet . The CdCl 2 crystal is built up from stacks of these layers held together by van der Waals forces.
It's worth pointing out that although salts such as CuCl 2 and CdCl 2 are dissociated into ions when in aqueous solution, the solids themselves should not be regarded as "ionic solids". See also this section of the lesson on ionic solids.
Graphite
Graphite is a polymorph of carbon and its most stable form. It consists of sheets of fused benzene rings stacked in layers. The spacing between layers is sufficient to admit molecules of water vapor and other atmospheric gases which become absorbed in the interlamellar spaces and act as lubricants, allowing the layers to slip along each other. Thus graphite itself often has a flake-like character and is commonly used as a solid lubricant, although it loses this property in a vacuum.
As would be expected from its anistropic structure, the electric and thermal conductivity of graphite are much greater in directions parallel to the layers than across the layers. The melting point of 4700-5000°C makes graphite useful as a high-temperature refractory material.
Graphite is the most common form of relatively pure carbon found in nature. Its name comes from the same root as the Greek word for "write" or "draw", reflecting its use as pencil "lead" since the 16th century. (The misnomer, which survives in common use, is due to its misidentification as an ore of the metallic element of the same name at a time long before modern chemistry had developed.)
Graphene
Graphene is a two-dimensional material consisting of a single layer of graphite — essentially "chicken wire made of carbon" that was discovered in 2004. Small fragments of graphene can be obtained by several methods; one is to attach a piece of Scotch Tape™ to a piece of graphite and then carefully pull it off (a process known as exfoliation .) Fragments of graphene are probably produced whenever one writes with a pencil.
Graphene has properties that are uniquely different from all other solids. It is the strongest known material, and it exhibits extremely high electrical conductivity due to its massless electrons which are apparently able to travel at relativistic velocities through the layer. | libretexts | 2025-03-17T19:53:09.628692 | 2013-10-03T01:38:10 | {
7.2: Intermolecular Interactions
- Name five kinds of molecular units that condensed matter can be composed of.
- Sketch out a potential energy curve , showing clearly the equilibrium separation and potential energy minimum.
- State the difference between bonded and non-bonded attractions.
- Explain the meaning and significance of the dipole moment of a molecule.
- Define induced dipole and polarizability .
- State the six kinds of intermolecular attractive forces and their relative strengths.
Liquids and solids differ from gases in that they are held together by forces that act between the individual molecular units of which they are composed. In this lesson we will take a closer look at these forces so that you can more easily understand, and in many cases predict, the diverse physical properties of the many kinds of solids and liquids we encounter in the world.
The very existence of condensed states of matter suggests that there are attractive forces acting between the basic molecular units of solids and liquids. The term molecular unit refers to the smallest discrete structural unit that makes up the liquid or solid. In most of the over 15 million chemical substances that are presently known, these structural units are actual molecules — that is, aggregates of atoms that have their own distinguishing properties, formulas, and molecular weights. But the molecular units can also be individual atoms , ions and more extended units. As with most artificial classifications, these distinctions tend to break down in extreme cases: most artificial polymers ("plastics") are composed of molecules of various sizes and shapes, some metal alloys contain identifiable molecular units, and it is not too much of a stretch to regard a diamond or a crystal of NaCl as a "molecule" in itself.
Potential Energy Curves
On the atomic or molecular scale, all particles exert both attractive and repulsive forces on each other. If the attractive forces between two or more atoms are strong enough to make them into an enduring unit with its own observable properties, we call the result a "molecule" and refer to the force as a "chemical bond".
The two diatomic molecules depicted in Figure \(\PageIndex{1}\) have come into close contact with each other, but the attractive force that acts between them is not strong enough to bind them into a new molecular unit, so we call this force a non-bonding attraction . In the absence of these non-bonding attractions, all matter would exist in the gaseous state only; there would be no condensed phases.
The distinction between bonding- and non-bonding attractions can be seen by comparing the potential energy plot for a pair of hydrogen atoms with that for two argon atoms (Figure \(\PageIndex{2}\)). As two hydrogen atoms are brought together, the potential energy falls to a minimum and then rises rapidly as the two electron clouds begin to repel each other. The potential energy minimum defines the energy and the average length of the H–H bond — two of its unique measurable properties.
The potential energy of a pair of argon atoms also falls as they are brought together, but not enough to hold them together (the laws of quantum mechanics do not allow this noble gas element to form stable \(Ar_2\) molecules). However, these non-bonding attractions enable argon to exist as a liquid and solid at low temperatures, but are unable to withstand disruptions caused by thermal energy at ordinary temperatures, so we commonly know argon as a gas.
Thermal Effects
In the classical picture, at temperatures above absolute zero, all molecular-scale particles possess thermal energy that keeps them in constant motion (in the quantum picture, motion does not cease even at absolute zero, owing to the Heisenberg uncertainty principle). The average thermal energy is given by the product of the gas constant R and the absolute temperature. At 25°C, this works out to
\[RT = (8.314 \,J \,K^{–1} mol^{–1}) (298\, K) = 2,480\, J\, mol^{–1} \approx 2.5\, kJ\, mol^{–1}\]
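As a quick check of the arithmetic above:

```python
R = 8.314    # gas constant, J K^-1 mol^-1
T = 298.0    # 25 °C expressed in kelvins

RT = R * T   # average thermal energy per mole at 25 °C
print(f"RT = {RT:.0f} J/mol, about {RT/1000:.1f} kJ/mol")
```

Comparing this ~2.5 kJ/mol against a typical bond energy of 100+ kJ/mol shows why thermal motion merely vibrates covalent bonds at room temperature.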
A functional chemical bond is much stronger than this (typically over 100 kJ/mol), so the effect of thermal motion is simply to cause the bond to vibrate; only at higher temperatures (where the value of RT is larger) will most bonds begin to break. Non-bonding attractive forces between pairs of atoms are generally too weak to sustain even a single vibration. In addition to unique distinguishing properties such as bond energy, bond length and stretching frequencies, covalent bonds usually have directional properties that depend on the orbital structures of the component atoms. The much-weaker non-bonding attractions possess none of these properties.
The shape of a potential energy curve (often approximated as a "Morse" curve) shows how repulsive and attractive forces affect the potential energy in opposite ways: repulsions always raise this energy, and attractions reduce it. The curve passes through a minimum when the attractive and repulsive forces are exactly in balance. As we stated above, all particles exert both kinds of forces on one another; these forces are all basically electrical in nature and they manifest themselves in various ways and with different strengths.
The distance corresponding to the minimum potential energy is known as the equilibrium distance . This is the average distance that will be maintained by the two particles if there are no other forces acting on them, such as might arise from the presence of other particles nearby. A general empirical expression for the interaction potential energy curve between two particles can be written as
\[ E = Ar^{-n} + Br^{-m} \label{7.2.1}\]
\(A\) and \(B\) are proportionality constants and \(n\) and \(m\) are integers. This expression is sometimes referred to as the Mie equation . The first term, in which \(A\) is always positive, corresponds to repulsion, and \(n\) must be larger than \(m\), reflecting the fact that repulsion always dominates at small separations. The \(B\) coefficient is negative for attractive forces, but it will become positive for electrostatic repulsion between like charges. The larger the value of one of these exponents, the closer the particles must come before the force becomes significant. Table \(\PageIndex{1}\) lists the exponents for the types of interactions we will describe in this lesson.
| interacting units | type of interaction | n (repulsion) | m (attraction) |
|---|---|---|---|
| ions | Coulombic | - | 1 |
| ion - polar molecule | ion-dipole | - | 2 |
| two polar molecules | dipole-dipole | - | 3 |
| ion - nonpolar molecule | ion - induced dipole | - | 4 |
| polar and nonpolar molecule | dipole - induced dipole | - | 6 |
| nonpolar molecules | dispersion | - | 6 |
| repulsions | quantum | 9 | - |

Note: the dipole-dipole, dipole - induced dipole, and dispersion interactions are known collectively as van der Waals interactions.
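The interplay of the two terms in the Mie equation can be made concrete with a small numerical sketch. The exponents n = 9 (quantum repulsion) and m = 6 (dispersion) are taken from the table above, while A and B are arbitrary illustrative constants, not parameters fitted to any real substance:

```python
# Mie-type potential E(r) = A*r**-n + B*r**-m with a repulsive term
# (A > 0, n = 9) and an attractive term (B < 0, m = 6).
# A and B are arbitrary illustrative values.
n, m = 9, 6
A, B = 1.0, -2.0

def E(r):
    return A * r**-n + B * r**-m

# Setting dE/dr = 0 gives the equilibrium distance analytically:
#   -n*A*r**-(n+1) - m*B*r**-(m+1) = 0  =>  r_eq = (n*A / (-m*B))**(1/(n-m))
r_eq = (n * A / (-m * B)) ** (1 / (n - m))

# The energy is higher on either side of r_eq, confirming a minimum there.
assert E(r_eq) < E(0.9 * r_eq) and E(r_eq) < E(1.1 * r_eq)
print(f"equilibrium distance = {r_eq:.4f}, well depth = {E(r_eq):.4f}")
```

At distances shorter than the equilibrium separation the r⁻⁹ term dominates and the energy climbs steeply; at longer distances the shallower r⁻⁶ attraction takes over, exactly the shape of the potential energy curves discussed above.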
The Universal Repulsive Force
The value of \(n\) for the repulsive force in Figure \(\PageIndex{3}\) is 9; this may be the highest inverse-power law to be found in nature. The magnitude of such a force is negligible until the particles are almost in direct contact, but once it kicks in, it becomes very strong; if you want to get a feel for it, try banging your head into a concrete wall. Because the repulsive force is what prevents two atoms from occupying the same space, this is just what you would expect. If the repulsive force did not always win out against all attractive forces, all matter would collapse into one huge glob! The universal repulsive force arises directly from two main aspects of quantum theory.
- First, the Heisenberg uncertainty principle tells us that the electrons situated within the confines of an atom possess kinetic energy that would exert an outward pressure were it not for the compensating attractive force of the positively-charged nucleus. But even the very slight decrease in volume that would result from squeezing the atom into a smaller space will raise this pressure so as to effectively resist this change in volume. This is the basic reason that condensed states of matter have extremely small compressibilities.
- Working in concert with this is the Pauli exclusion principle, which requires that each electron have a different set of quantum numbers. So as two particles begin to intrude upon each other, the volume their electrons occupy gets divided up between each spin-pair, and the ones forced into higher quantum states would normally occupy even greater volumes. The effect is again to massively raise the potential energy as the particles begin to squeeze too close together.
In a wonderful article ( Science 187, 605–612, 1975 ), the physicist Victor Weisskopf showed how these considerations, combined with a few fundamental constants, lead to realistic estimates of such things as the hardness and compressibility of solids, the heights of mountains, the lengths of ocean waves, and the sizes of stars.
Ion-Ion Interactions
Electrostatic attraction between electrically-charged particles is the strongest of all the intermolecular forces. These Coulombic forces (as they are often called) cause opposite charges to attract and like charges to repel.
Coulombic forces are involved in all forms of chemical bonding; when they act between separate charged particles (ion-ion interactions) they are especially strong. Thus the energy required to pull a mole of Na + and Cl – ions apart in the sodium chloride crystal is greater than that needed to break the covalent bond in \(H_2\) (Figure \(\PageIndex{1}\)). The effects of ion-ion attraction are seen most directly in solids such as NaCl which consist of oppositely-charged ions arranged in two inter-penetrating crystal lattices.
According to Coulomb's Law the force between two charged particles is given by
\[ F= \dfrac{q_1q_2}{4\pi\epsilon_0 r^2} \label{7.2.2}\]
Instead of using SI units, chemists often prefer to express atomic-scale distances in picometers and charges in units of the electron charge (±1, ±2, etc.) Using these units, the proportionality constant (which now absorbs the factor \(e^2\) along with \(1/4\pi\epsilon_0\)) works out to \(2.31 \times 10^{–16}\; J\; pm\). The sign of \(F\) determines whether the force will be attractive (–) or repulsive (+); notice that the latter is the case whenever the two q 's have the same sign.
Equation \(\ref{7.2.2}\) is an example of an inverse square law ; the force falls off as the square of the distance. A similar law governs the manner in which the illumination falls off as you move away from a point light source; recall this the next time you walk away from a street light at night, and you will have some feeling for what an inverse square law means.
The stronger the attractive force acting between two particles, the greater the amount of work required to separate them. Work represents a flow of energy, so the foregoing statement is another way of saying that when two particles move in response to a force, their potential energy is lowered. This work, as you may recall if you have studied elementary mechanics, is found by integrating the negative force with respect to distance over the distance moved. Thus the energy that must be supplied in order to completely separate two oppositely-charged particles initially at a distance r 0 is given by
\[ w= - \int _{r_o} ^{\infty} \dfrac{q_1q_2}{4\pi\epsilon_0 r^2}dr =- \dfrac{q_1q_2}{4\pi\epsilon_0 r_o} \label{7.2.3}\]
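The closed-form result can be verified by brute-force numerical integration of the inverse-square force. This is only a sketch; it uses the atomic-scale Coulomb constant of 2.31 × 10⁻¹⁶ J pm (for charges in units of e and distances in pm, note the negative exponent) and a 276 pm separation of a +1 / –1 ion pair:

```python
# Numerically integrate the Coulomb force from r0 out to a large distance
# and compare with the closed form w = -q1*q2*k / r0.
k = 2.31e-16          # J pm; Coulomb constant for charges in e, distances in pm
q1, q2 = +1, -1       # e.g. Na+ and Cl-
r0 = 276.0            # pm; initial separation of the ion pair

def force(r):
    return k * q1 * q2 / r**2      # inverse-square Coulomb force

# Midpoint-rule integration of -F(r) dr from r0 to an effectively infinite r.
steps, r_max = 100_000, 1.0e6      # pm
dr = (r_max - r0) / steps
w_numeric = -sum(force(r0 + (i + 0.5) * dr) * dr for i in range(steps))

w_exact = -k * q1 * q2 / r0        # the closed-form result above
print(w_numeric, w_exact)          # both positive: work must be supplied
```

The two values agree to a fraction of a percent; the small residual comes from the finite step size and the truncation of the integral at r_max.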
When sodium chloride is melted, some of the ion pairs vaporize and form neutral NaCl molecules. How much energy would be released when one mole of Na + and Cl – ions are brought together in this way?
Solution
The energy released will be equal in magnitude to the work required to separate the ions to an infinite distance:

\[ \begin{align*} E &= \dfrac{(2.31 \times 10^{–16}\; J\; pm) (+1) (–1)}{276\; pm} \\[4pt] &= –8.37 \times 10^{–19}\; J \end{align*} \]

This is the energy per ion pair; multiplying by Avogadro's number gives about –504 kJ per mole of NaCl ion pairs.
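The same arithmetic, including the scale-up from a single ion pair to a mole of pairs, takes only a few lines:

```python
AVOGADRO = 6.022e23   # mol^-1
k = 2.31e-16          # J pm; Coulomb constant for charges in e, distances in pm

# Energy released when one Na+ / Cl- pair comes together to 276 pm:
E_pair = k * (+1) * (-1) / 276.0
print(f"{E_pair:.3g} J per ion pair")      # about -8.37e-19 J

# Scaling up to one mole of ion pairs:
E_molar = E_pair * AVOGADRO / 1000         # kJ/mol
print(f"{E_molar:.0f} kJ/mol")             # about -504 kJ/mol
```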
The ion-ion interaction is the simplest of electrostatic interactions and other higher order interactions exists as discussed below.
Dipoles
According to Coulomb's law (Equation \(\ref{7.2.2}\)), the electrostatic force between an ion and an uncharged particle (\(q = 0\)) should be zero. Bear in mind, however, that this formula assumes that the two particles are point charges having zero radii. A real particle such as an atom or a molecule occupies a certain volume of space. Even if the electric charges of the protons and electrons cancel out (as they will in any neutral atom or molecule), it is possible that the spatial distribution of the electron cloud representing the most loosely-bound [valence] electrons might be asymmetrical, giving rise to an electric dipole moment . There are two kinds of dipole moments:
- Permanent electric dipole moments can arise when bonding occurs between elements of differing electronegativities.
- Induced (temporary) dipole moments are created when an external electric field distorts the electron cloud of a neutral molecule.
An electric dipole refers to a separation of electric charge. An idealized electric dipole consists of two point charges of magnitude + q and – q separated by a distance r . Even though the overall system is electrically neutral, the charge separation gives rise to an electrostatic effect whose strength is expressed by the electric dipole moment given by
\[ \mu = q \times r \label{7.2.4}\]
Dipole moments possess both magnitude and direction, and are thus vectorial quantities; they are conventionally represented by arrows whose heads are at the negative end.
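The definition μ = q × r can be evaluated directly for an idealized dipole. In this sketch, one elementary charge separated by 100 pm is an arbitrary illustrative geometry, and the result is also expressed in the debye (D), the customary unit for molecular dipole moments:

```python
# Dipole moment mu = q * r for an idealized pair of point charges.
e = 1.602e-19          # C, elementary charge
r = 100e-12            # m (100 pm); an assumed charge separation

mu = e * r             # dipole moment in C m
DEBYE = 3.336e-30      # 1 debye expressed in C m
print(f"{mu:.3g} C m = {mu / DEBYE:.2f} D")
```

The result, about 4.8 D, sets a useful scale: real polar molecules, whose charge separations are only partial, have smaller moments (water's is about 1.85 D).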
Permanent dipole moments
These are commonly referred to simply as "dipole moments". The most well-known molecule having a dipole moment is ordinary water. The charge imbalance arises because oxygen, with its nuclear charge of 8, pulls the electron cloud that comprises each O–H bond toward itself. These two "bond moments" add vectorially to produce the permanent dipole moment denoted by the red arrow. Note the use of the δ (Greek delta ) symbol to denote the positive and negative ends of the dipoles.
When an electric dipole is subjected to an external electric field, it will tend to orient itself so as to minimize the potential energy; that is, its negative end will tend to point toward the higher (more positive) electric potential. In liquids, thermal motions will act to disrupt this ordering, so the overall effect depends on the temperature. In condensed phases the local fields due to nearby ions or dipoles in a substance play an important role in determining the physical properties of the substance, and it is in this context that dipolar interactions are of interest to us here. We will discuss each kind of interaction in order of decreasing strength.
Induced dipoles
Even if a molecule is electrically neutral and possesses no permanent dipole moment, it can still be affected by an external electric field. Because all atoms and molecules are composed of charged particles (nuclei and electrons), the electric field of a nearby ion will cause the centers of positive and negative charges to shift in opposite directions. This effect, which is called polarization , results in the creation of a temporary, or induced dipole moment. The induced dipole then interacts with the species that produced it, resulting in a net attraction between the two particles.
The larger an atom or ion, the more loosely held are its outer electrons, and the more readily will the electron cloud be distorted by an external field. A quantity known as the polarizability expresses the magnitude of the temporary dipole that can be induced in it by a nearby charge.
Ion-Dipole interactions
A dipole that is close to a positive or negative ion will orient itself so that the end whose partial charge is opposite to the ion charge will point toward the ion. This kind of interaction is very important in aqueous solutions of ionic substances; H 2 O is a highly polar molecule, so that in a solution of sodium chloride, for example, the Na + ions will be enveloped by a shell of water molecules with their oxygen-ends pointing toward these ions, while H 2 O molecules surrounding the Cl – ions will have their hydrogen ends directed inward. As a consequence of ion-dipole interactions, all ionic species in aqueous solution are hydrated; this is what is denoted by the suffix in formulas such as K + (aq), etc.
The strength of ion-dipole attraction depends on the magnitude of the dipole moment and on the charge density of the ion. This latter quantity is just the charge of the ion divided by its volume. Owing to their smaller sizes, positive ions tend to have larger charge densities than negative ions, and they should be more strongly hydrated in aqueous solution. The hydrogen ion, being nothing more than a bare proton of extremely small volume, has the highest charge density of any ion; it is for this reason that it exists entirely in its hydrated form H 3 O + in water.
Dipole-dipole interactions
As two dipoles approach each other, they will tend to orient themselves so that their oppositely-charged ends are adjacent. Two such arrangements are possible: the dipoles can be side by side but pointing in opposite directions, or they can be end to end. It can be shown that the end-to-end arrangement gives a lower potential energy.
Dipole-dipole attraction is weaker than ion-dipole attraction, but it can still have significant effects if the dipole moments are large. The most important example of dipole-dipole attraction is hydrogen bonding.
Ion-induced dipole Interactions
The most significant induced dipole effects result from nearby ions, particularly cations (positive ions). Nearby ions can distort the electron clouds even in polar molecules, thus temporarily changing their dipole moments. The larger ions (especially negative ones such as SO 4 2– and ClO 4 – ) are highly polarizable, and the dipole moments induced in them by a cation can play a dominant role in compound formation.
Dipole-induced dipole interactions
A permanent dipole can induce a temporary one in a species that is normally nonpolar, and thus produce a net attractive force between the two particles (Figure \(\PageIndex{9}\)). This attraction is usually rather weak, but in a few cases it can lead to the formation of loosely-bound compounds. This effect explains the otherwise surprising observation that a wide variety of neutral molecules such as hydrocarbons, and even some of the noble gas elements, form stable hydrate compounds with water.
Dispersion (London) Forces
The fact that noble gas elements and completely non-polar molecules such as H 2 and N 2 can be condensed to liquids or solids tells us that there must be yet another source of attraction between particles that does not depend on the existence of permanent dipole moments in either particle (Figure \(\PageIndex{10}\)). To understand the origin of this effect, it is necessary to realize that when we say a molecule is “nonpolar”, we really mean that the time-averaged dipole moment is zero. This is the same kind of averaging we do when we draw a picture of an orbital, which represents all the locations in space in which an electron can be found with a certain minimum probability. On a very short time scale, however, the electron must be increasingly localized; not even quantum mechanics allows it to be in more than one place at any given instant. As a consequence, there is no guarantee that the distribution of negative charge around the center of an atom will be perfectly symmetrical at every instant; every atom therefore has a weak, fluctuating dipole moment that is continually disappearing and reappearing in another direction.
Dispersion or London forces can be considered to be "spontaneous dipole - induced dipole" interactions.
Although these extremely short-lived fluctuations quickly average out to zero, they can still induce new dipoles in a neighboring atom or molecule, which helps sustain the original dipole and gives rise to a weak attractive force known as the dispersion or London force . Although dispersion forces are the weakest of all the intermolecular attractions, they are universally present . Their strength depends to a large measure on the number of electrons in a molecule. This can clearly be seen by looking at the noble gas elements in Table \(\PageIndex{2}\), whose ability to condense to liquids and freeze to solids is entirely dependent on dispersion forces.
| element | He | Ne | Ar | Kr | Xe |
|---|---|---|---|---|---|
| atomic number | 2 | 10 | 18 | 36 | 54 |
| boiling point, K | 4.2 | 27 | 87 | 120 | 165 |
| critical temperature, K | 5 | 44 | 151 | 209.5 | 290 |
| heat of vaporization, kJ mol –1 | 0.08 | 1.76 | 6.51 | 9.0 | 12.6 |
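The trend in Table \(\PageIndex{2}\) can be checked programmatically. A minimal Python sketch, using approximate literature values (note that helium boils at 4.2 K):

```python
# Dispersion forces grow with the number of electrons, so the bulk
# properties in Table 2 should rise monotonically from He to Xe.
# Numbers are approximate literature values.
noble_gases = ["He", "Ne", "Ar", "Kr", "Xe"]
boiling_point_K = [4.2, 27, 87, 120, 165]
heat_of_vap_kJ = [0.08, 1.76, 6.51, 9.0, 12.6]

for prop in (boiling_point_K, heat_of_vap_kJ):
    # monotonic increase mirrors the He -> Xe growth in electron count
    assert all(a < b for a, b in zip(prop, prop[1:]))

print("Both properties rise monotonically from He to Xe.")
```

Both sequences rise steadily from He to Xe, just as the growing electron count (and hence polarizability) of the heavier atoms would lead us to expect.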
It is important to note that dispersion forces are additive ; if two elongated molecules find themselves side by side, dispersion force attractions will exist all along the regions where the two molecules are close. This can produce quite strong attractions between large polymeric molecules even in the absence of any stronger attractive forces.
"van der Waals" forces: a catch-all term
Although nonpolar molecules are by no means uncommon, many kinds of molecules possess permanent dipole moments, so liquids and solids composed of these species will be held together by a combination of dipole-dipole, dipole-induced dipole, and dispersion forces. These weaker forces (that is, those other than Coulombic attractions) are known collectively as van der Waals forces. These include attractions and repulsions between atoms, molecules, and surfaces, as well as other intermolecular forces. The term includes:
- force between two permanent dipoles and higher order moments like quadrupole
- force between a permanent dipole and a corresponding induced dipole
- force between two instantaneously induced dipoles (dispersion forces)
Table \(\PageIndex{3}\) shows some estimates of the contributions of the various types of van der Waals forces that act between several different types of molecules. Note particularly how important dispersion forces are in all of these examples, and how this, in turn, depends on the polarizability .
| substance | boiling point (°C) | dipole moment (D) | polarizability | % dipole-induced dipole | % dipole-dipole | % dispersion |
|---|---|---|---|---|---|---|
| Ar | –186 | 0 | 1.6 | 0 | 0 | 100 |
| CO | –190 | 0.1 | 2.0 | 0 | 0 | 100 |
| HCl | –84 | 1.0 | 2.6 | 4.2 | 14.4 | 81.4 |
| HBr | –67 | 0.8 | 3.6 | 2.2 | 3.3 | 94.5 |
| HI | –35 | 0.4 | 5.4 | 0.4 | 0.1 | 99.5 |
| NH 3 | –33 | 1.5 | 2.6 | 5.4 | 44.6 | 50.0 |
| H 2 O | 100 | 1.8 | 1.5 | 4.0 | 77.0 | 19.0 |
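A quick sanity check on Table \(\PageIndex{3}\) in Python. The column interpretation here is our own reading of the table (the three rightmost columns are percentage contributions that should sum to 100):

```python
# Table 3 rows, read as (boiling point °C, dipole moment D, polarizability,
# % dipole-induced dipole, % dipole-dipole, % dispersion).
table3 = {
    "Ar":  (-186, 0.0, 1.6, 0.0, 0.0, 100.0),
    "CO":  (-190, 0.1, 2.0, 0.0, 0.0, 100.0),
    "HCl": (-84, 1.0, 2.6, 4.2, 14.4, 81.4),
    "HBr": (-67, 0.8, 3.6, 2.2, 3.3, 94.5),
    "HI":  (-35, 0.4, 5.4, 0.4, 0.1, 99.5),
    "NH3": (-33, 1.5, 2.6, 5.4, 44.6, 50.0),
    "H2O": (100, 1.8, 1.5, 4.0, 77.0, 19.0),
}
for name, (bp, mu, alpha, ind, dd, disp) in table3.items():
    assert abs(ind + dd + disp - 100.0) < 0.5  # contributions sum to 100%
    if mu < 1.5:
        assert disp > 50  # dispersion dominates for weakly polar molecules
print("percentages consistent")
```

Only for the strongly dipolar NH 3 and H 2 O does the dipole-dipole term overtake dispersion.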
Source: "7.2: Intermolecular Interactions" by Stephen Lower, licensed CC BY 3.0, https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/07%3A_Solids_and_Liquids/7.02%3A_Intermolecular_Interactions

7.3: Hydrogen-Bonding and Water
- Identify three special properties of water that make it unusual for a molecule of its size, and explain how these result from hydrogen bonding.
- Explain what is meant by hydrogen bonding and the molecular structural features that bring it about.
- Describe the "structure", such as it is, of liquid water.
- Sketch out structural examples of hydrogen bonding in three small molecules other than H 2 O.
- Describe the roles of hydrogen bonding in proteins and in DNA.
Most students of chemistry quickly learn to relate the structure of a molecule to its general properties. Thus we generally expect small molecules to form gases or liquids, and large ones to exist as solids under ordinary conditions. And then we come to H 2 O, and are shocked to find that many of the predictions are way off, and that water (and by implication, life itself) should not even exist on our planet! In this section we will learn why this tiny combination of three nuclei and ten electrons possesses special properties that make it unique among the more than 15 million chemical species we presently know.
In water, each hydrogen nucleus is covalently bound to the central oxygen atom by a pair of electrons that are shared between them. In H 2 O, only two of the six outer-shell electrons of oxygen are used for this purpose, leaving four electrons which are organized into two non-bonding pairs. The four electron pairs surrounding the oxygen tend to arrange themselves as far from each other as possible in order to minimize repulsions between these clouds of negative charge. This would ordinarily result in a tetrahedral geometry in which the angle between electron pairs (and therefore the H-O-H bond angle ) is 109.5°. However, because the two non-bonding pairs remain closer to the oxygen atom, these exert a stronger repulsion against the two covalent bonding pairs, effectively pushing the two hydrogen atoms closer together. The result is a distorted tetrahedral arrangement in which the H—O—H angle is 104.5°.
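The 109.5° figure quoted above is the ideal tetrahedral angle, which follows from geometry as arccos(–1/3). A quick check:

```python
import math

# The ideal tetrahedral angle between electron pairs is arccos(-1/3).
tetrahedral = math.degrees(math.acos(-1 / 3))
print(f"{tetrahedral:.1f} degrees")  # 109.5 degrees
```

The measured H—O—H angle of 104.5° is smaller than this ideal value because the two lone pairs repel the bonding pairs more strongly.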
Water's large dipole moment leads to hydrogen bonding
The H 2 O molecule is electrically neutral, but the positive and negative charges are not distributed uniformly. This is illustrated by the gradation in color in the schematic diagram here. The electronic (negative) charge is concentrated at the oxygen end of the molecule, owing partly to the nonbonding electrons (solid blue circles), and to oxygen's high nuclear charge which exerts stronger attractions on the electrons. This charge displacement constitutes an electric dipole , represented by the arrow at the bottom; you can think of this dipole as the electrical "image" of a water molecule.
Opposite charges attract, so it is not surprising that the negative end of one water molecule will tend to orient itself so as to be close to the positive end of another molecule that happens to be nearby. The strength of this dipole-dipole attraction is less than that of a normal chemical bond, and so it is completely overwhelmed by ordinary thermal motions in the gas phase. However, when the H 2 O molecules are crowded together in the liquid, these attractive forces exert a very noticeable effect, which we call (somewhat misleadingly) hydrogen bonding . And at temperatures low enough to turn off the disruptive effects of thermal motions, water freezes into ice in which the hydrogen bonds form a rigid and stable network.
Notice that the hydrogen bond (shown by the dashed green line) is somewhat longer than the covalent O—H bond. It is also much weaker , about 23 kJ mol –1 compared to the O–H covalent bond strength of 492 kJ mol –1 .
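These energies can be put in perspective by comparing them with the thermal energy RT at room temperature; the RT comparison is our illustration, not part of the text above:

```python
R = 8.314e-3        # gas constant, kJ mol-1 K-1
RT = R * 298        # thermal energy at room temperature, ~2.5 kJ mol-1
h_bond = 23.0       # hydrogen bond strength from the text, kJ mol-1
covalent = 492.0    # O-H covalent bond strength from the text, kJ mol-1

print(f"covalent / hydrogen bond: {covalent / h_bond:.0f}x")  # ~21x
print(f"hydrogen bond / RT:       {h_bond / RT:.0f}x")        # ~9x
```

A hydrogen bond is roughly 21 times weaker than the covalent O–H bond, yet still about 9 times RT, which is why hydrogen bonds persist in the liquid while being continually broken and re-formed by thermal motion.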
One well-known compilation lists "forty-one anomalies of water", some of them rather esoteric.
Water has long been known to exhibit many physical properties that distinguish it from other small molecules of comparable mass. Although chemists refer to these as the "anomalous" properties of water, they are by no means mysterious; all are entirely predictable consequences of the way the size and nuclear charge of the oxygen atom conspire to distort the electronic charge clouds of the atoms of other elements when these are chemically bonded to the oxygen.
Boiling point
The most apparent peculiarity of water is its very high boiling point for such a light molecule. Liquid methane CH 4 (molecular weight 16) boils at –161°C. As you can see from this diagram, extrapolation of the boiling points of the various Group 16 hydrogen compounds to H 2 O suggests that this substance should be a gas under normal conditions.
Surface Tension
Compared to most other liquids, water also has a high surface tension . Have you ever watched an insect walk across the surface of a pond? The water strider takes advantage of the fact that the water surface acts like an elastic film that resists deformation when a small weight is placed on it. (If you are careful, you can also "float" a small paper clip or steel staple on the surface of water in a cup.) This is all due to the surface tension of the water. A molecule within the bulk of a liquid experiences attractions to neighboring molecules in all directions, but since these average out to zero, there is no net force on the molecule. For a molecule that finds itself at the surface, the situation is quite different; it experiences forces only sideways and downward, and this is what creates the stretched-membrane effect.
The distinction between molecules located at the surface and those deep inside is especially prominent in H 2 O, owing to the strong hydrogen-bonding forces. The difference between the forces experienced by a molecule at the surface and one in the bulk liquid gives rise to the liquid's surface tension. This drawing highlights two H 2 O molecules, one at the surface, and the other in the bulk of the liquid. The surface molecule is attracted to its neighbors below and to either side, but there are no attractions pointing in the 180° solid angle above the surface. As a consequence, a molecule at the surface will tend to be drawn into the bulk of the liquid. But since there must always be some surface, the overall effect is to minimize the surface area of a liquid.
The geometric shape that has the smallest ratio of surface area to volume is the sphere , so very small quantities of liquids tend to form spherical drops. As the drops get bigger, their weight deforms them into the typical tear shape.
Ice floats on water
The most energetically favorable configuration of H 2 O molecules is one in which each molecule is hydrogen-bonded to four neighboring molecules. Owing to the thermal motions described above, this ideal is never achieved in the liquid, but when water freezes to ice, the molecules settle into exactly this kind of an arrangement in the ice crystal. This arrangement requires that the molecules be somewhat farther apart than would otherwise be the case; as a consequence, ice, in which hydrogen bonding is at its maximum, has a more open structure, and thus a lower density than water.
Here are three-dimensional views of a typical local structure of water (left) and ice (right.) Notice the greater openness of the ice structure which is necessary to ensure the strongest degree of hydrogen bonding in a uniform, extended crystal lattice. The more crowded and jumbled arrangement in liquid water can be sustained only by the greater amount of thermal energy available above the freezing point.
When ice melts, the more vigorous thermal motion disrupts much of the hydrogen-bonded structure, allowing the molecules to pack more closely. Water is thus one of the very few substances whose solid form has a lower density than the liquid at the freezing point. Localized clusters of hydrogen bonds still remain, however; these are continually breaking and reforming as the thermal motions jiggle and shove the individual molecules. As the temperature of the water is raised above freezing, the extent and lifetimes of these clusters diminish, so the density of the water increases.
At higher temperatures, another effect, common to all substances, begins to dominate: as the temperature increases, so does the amplitude of thermal motions. This more vigorous jostling causes the average distance between the molecules to increase, reducing the density of the liquid; this is ordinary thermal expansion.
Because each of the two competing effects (the open hydrogen-bonded structure at low temperatures, and thermal expansion at higher temperatures) acts to reduce the density, there must be some intermediate temperature at which the density of water passes through a maximum. This temperature is 3.98 °C, commonly rounded to 4 °C; this is the temperature of the water you will find at the bottom of an ice-covered lake, where this most dense of all water has displaced the colder water and pushed it nearer to the surface.
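The density maximum can be located directly from tabulated densities; the values below are approximate literature values, not from the text:

```python
# Approximate densities of liquid water in g/cm3 (literature values).
density = {0: 0.99984, 2: 0.99994, 4: 0.99997, 6: 0.99994,
           10: 0.99970, 20: 0.99820, 25: 0.99705}

# The temperature with the highest density marks the crossover between
# hydrogen-bond structure breakdown and ordinary thermal expansion.
t_max = max(density, key=density.get)
print(f"density maximum near {t_max} degrees C")
```

The maximum falls at 4 °C: density rises from 0 °C to 4 °C as hydrogen-bonded clusters break down, then falls above 4 °C as thermal expansion takes over.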
Structure of Liquid Water
The nature of liquid water and how the H 2 O molecules within it are organized and interact are questions that have attracted the interest of chemists for many years. There is probably no liquid that has received more intensive study, and there is now a huge literature on this subject. The following facts are well established:
- H 2 O molecules attract each other through the special type of dipole-dipole interaction known as hydrogen bonding
- a hydrogen-bonded cluster in which four H 2 Os are located at the corners of an imaginary tetrahedron is an especially favorable (low-potential energy) configuration, but ...
- the molecules undergo rapid thermal motions on a time scale of picoseconds (10 –12 second), so the lifetime of any specific clustered configuration will be fleetingly brief.
A variety of techniques including infrared absorption, neutron scattering, and nuclear magnetic resonance have been used to probe the microscopic structure of water. The information garnered from these experiments and from theoretical calculations has led to the development of around twenty "models" that attempt to explain the structure and behavior of water. More recently, computer simulations of various kinds have been employed to explore how well these models are able to predict the observed physical properties of water.
This work has led to a gradual refinement of our views about the structure of liquid water, but it has not produced any definitive answer. There are several reasons for this, but the principal one is that the very concept of "structure" (and of water "clusters") depends on both the time frame and volume under consideration. Thus, questions of the following kinds are still open:
- How do you distinguish the members of a "cluster" from adjacent molecules that are not in that cluster?
- Since individual hydrogen bonds are continually breaking and re-forming on a picosecond time scale, do water clusters have any meaningful existence over longer periods of time? In other words, clusters are transient, whereas "structure" implies a molecular arrangement that is more enduring. Can we then legitimately use the term "clusters" in describing the structure of water?
- The possible locations of neighboring molecules around a given H 2 O are limited by energetic and geometric considerations, thus giving rise to a certain amount of "structure" within any small volume element. It is not clear, however, to what extent these structures interact as the size of the volume element is enlarged. And as mentioned above, to what extent are these structures maintained for periods longer than a few picoseconds?
In the 1950's it was assumed that liquid water consists of a mixture of hydrogen-bonded clusters (H 2 O) n in which n can have a variety of values, but little evidence for the existence of such aggregates was ever found. The present view, supported by computer-modeling and spectroscopy, is that on a very short time scale, water is more like a "gel" consisting of a single, huge hydrogen-bonded cluster. On a 10 –12 -10 –9 sec time scale, rotations and other thermal motions cause individual hydrogen bonds to break and re-form in new configurations, inducing ever-changing local discontinuities whose extent and influence depends on the temperature and pressure.
Ice
Ice, like all solids, has a well-defined structure; each water molecule is surrounded by four neighboring H 2 Os: two of these are hydrogen-bonded to the oxygen atom on the central H 2 O molecule, and each of the two hydrogen atoms is similarly bonded to another neighboring H 2 O.
Ice forms crystals having a hexagonal lattice structure, which in their full development would tend to form hexagonal prisms very similar to those sometimes seen in quartz. This does occasionally happen, and anyone who has done much winter mountaineering has likely seen needle-shaped prisms of ice crystals floating in the air. Under most conditions, however, the snowflake crystals we see are flattened into the beautiful fractal-like hexagonal structures that are commonly observed.
Snowflakes
The H 2 O molecules that make up the top and bottom plane faces of the prism are packed very closely and linked (through hydrogen bonding) to the molecules inside. In contrast to this, the molecules that make up the sides of the prism, and especially those at the hexagonal corners, are much more exposed, so that atmospheric H 2 O molecules that come into contact with most places on the crystal surface attach very loosely and migrate along it until they are able to form hydrogen-bonded attachments to these corners, thus becoming part of the solid and extending the structure along these six directions. This process perpetuates itself as the new extensions themselves acquire a hexagonal structure.
Why is ice slippery?
At temperatures as low as 200 K, the surface of ice is highly disordered and water-like. As the temperature approaches the freezing point, this region of disorder extends farther down from the surface and acts as a lubricant.
The illustration is taken from an article in the April 7, 2008 issue of C&EN honoring the physical chemist Gabor Somorjai, who pioneered modern methods of studying surfaces.
"Pure" water
To a chemist, the term "pure" has meaning only in the context of a particular application or process. The distilled or de-ionized water we use in the laboratory contains dissolved atmospheric gases and occasionally some silica, but their small amounts and relative inertness make these impurities insignificant for most purposes. When water of the highest obtainable purity is required for certain types of exacting measurements, it is commonly filtered, de-ionized, and triple-vacuum distilled. But even this "chemically pure" water is a mixture of isotopic species: there are two stable isotopes of both hydrogen (H 1 and H 2 , the latter often denoted by D) and oxygen (O 16 and O 18 ) which give rise to combinations such as H 2 O 18 , HDO 16 , etc., all of which are readily identifiable in the infrared spectra of water vapor. And to top this off, the two hydrogen atoms in water contain protons whose magnetic moments can be parallel or antiparallel, giving rise to ortho- and para- water, respectively. The two forms are normally present in a o/p ratio of 3:1.
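The combinatorics of the isotopic species mentioned above is easy to enumerate. Restricting ourselves to the isotopes named in the text (H, D, O-16, O-18), and ignoring the ortho/para distinction:

```python
from itertools import combinations_with_replacement

hydrogens = ["H", "D"]        # 1H and 2H (deuterium), as named in the text
oxygens = ["O-16", "O-18"]    # the two oxygen isotopes named in the text

# The two hydrogen positions are interchangeable, so HD and DH count once.
species = {(hh, o)
           for hh in combinations_with_replacement(hydrogens, 2)
           for o in oxygens}
print(len(species))  # 6: H2O, HDO, D2O, each with either oxygen isotope
```

The three hydrogen combinations (H 2 , HD, D 2 ) times two oxygen isotopes give six distinct isotopologues, such as H 2 O 18 and HDO 16 .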
The amount of the rare isotopes of oxygen and hydrogen in water varies enough from place to place that it is now possible to determine the age and source of a particular water sample with some precision. These differences are reflected in the H and O isotopic profiles of organisms. Thus the isotopic analysis of human hair can be a useful tool for crime investigations and anthropology research.
More about hydrogen bonding
Hydrogen bonds form when the electron cloud of a hydrogen atom that is attached to one of the more electronegative atoms is distorted by that atom, leaving a partial positive charge on the hydrogen. Owing to the very small size of the hydrogen atom, the density of this partial charge is large enough to allow it to interact with the lone-pair electrons on a nearby electronegative atom. Although hydrogen bonding is commonly described as a form of dipole-dipole attraction, it is now clear that it involves a certain measure of electron-sharing (between the external non-bonding electrons and the hydrogen) as well, so these bonds possess some covalent character.
Hydrogen bonds are longer than ordinary covalent bonds, and they are also weaker. The experimental evidence for hydrogen bonding usually comes from X-ray diffraction studies on solids that reveal shorter-than-normal distances between hydrogen and other atoms.
Hydrogen bonding in small molecules
The following examples show something of the wide scope of hydrogen bonding in molecules.
| Ammonia (mp –78, bp –33°C) is hydrogen-bonded in the liquid and solid states. | |
| Hydrogen bonding is responsible for ammonia 's remarkably high solubility in water. | |
| Many organic (carboxylic) acids form hydrogen-bonded dimers in the solid state. | |
| Here the hydrogen bond acceptor is the π electron cloud of a benzene ring. This type of interaction is important in maintaining the shape of proteins. | |
| Hydrogen fluoride (mp –83, bp 20°C) is another common substance that is strongly hydrogen-bonded in its condensed phases. | |
| The bifluoride ion (for which no proper Lewis structure can be written) can be regarded as a complex ion held together by the strongest hydrogen bond known: about 155 kJ mol –1 . | |
| " As slow as molasses in the winter! " Multiple hydroxyl groups provide lots of opportunities for hydrogen bonding and lead to the high viscosities of substances such as glycerine and sugar syrups . |
Hydrogen bonding in biopolymers
Hydrogen bonding plays an essential role in natural polymers of biological origin in several ways:
- Hydrogen bonding between adjacent polymer chains (intermolecular bonding);
- Hydrogen bonding between different parts of the same chain (intramolecular bonding);
- Hydrogen bonding of water molecules to –OH groups on the polymer chain ("bound water") that helps maintain the shape of the polymer.
The examples that follow are representative of several types of biopolymers.
Cellulose
Cellulose is a linear polymer of glucose (see above), containing 300 to over 10,000 units, depending on the source. As the principal structural component of plants (along with lignin in trees), cellulose is the most abundant organic substance on the earth. The role of hydrogen bonding is to cross-link individual molecules to build up sheets as shown here. These sheets then stack up in a staggered array held together by van der Waals forces. Further hydrogen-bonding of adjacent stacks bundles them together into a stronger and more rigid structure.
Proteins
These polymers made from amino acids R—CH(NH 2 )COOH depend on intramolecular hydrogen bonding to maintain their shape (secondary and tertiary structure) which is essential for their important function as biological catalysts (enzymes). Hydrogen-bonded water molecules embedded in the protein are also important for their structural integrity.
The principal hydrogen bonding in proteins is between the -N—H groups of the "amino" parts with the -C=O groups of the "acid" parts. These interactions give rise to the two major types of the secondary structure which refers to the arrangement of the amino acid polymer chain:
| alpha-helix | beta-sheet |
|---|---|
Although carbon is not usually considered particularly electronegative, C—H----X hydrogen bonds are also now known to be significant in proteins.
DNA (Deoxyribonucleic acid)
Who you are is totally dependent on hydrogen bonds! DNA, as you probably know, is the most famous of the biopolymers owing to its central role in defining the structure and function of all living organisms. Each strand of DNA is built from a sequence of four different nucleotide monomers consisting of a deoxyribose sugar, phosphate groups , and a nitrogenous base conventionally identified by the letters A,T, C and G. DNA itself consists of two of these polynucleotide chains that are coiled around a common axis in a configuration something like the protein alpha helix depicted above. The sugar-and-phosphate backbones are on the outside so that the nucleotide bases are on the inside and facing each other. The two strands are held together by hydrogen bonds that link a nitrogen atom of a nucleotide in one chain with a nitrogen or oxygen on the nucleotide that is across from it on the other chain.
Efficient hydrogen bonding within this configuration can only occur between the pairs A-T and C-G, so these two complementary pairs constitute the "alphabet" that encodes the genetic information that gets transcribed whenever new protein molecules are built. Water molecules, hydrogen-bonded to the outer parts of the DNA helix, help stabilize it.
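The complementary-pairing rule described above maps directly onto a lookup table. A minimal sketch (the function name is our own):

```python
# Watson-Crick pairing from the text: only A-T and C-G hydrogen-bond
# efficiently, so each base determines its partner on the opposite strand.
pair = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Base sequence the opposite strand must carry, position by position."""
    return "".join(pair[base] for base in strand)

print(complement("ATCG"))  # TAGC
```

By biological convention the complementary strand is actually read antiparallel (i.e., reversed), but the position-by-position pairing shown here is exactly what the hydrogen bonds enforce.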
Source: "7.3: Hydrogen-Bonding and Water" by Stephen Lower, licensed CC BY 3.0, https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/07%3A_Solids_and_Liquids/7.03%3A_Hydrogen-Bonding_and_Water

7.4: Liquids and their Interfaces
- Liquids are both fluids and condensed phases . Explain what this tells us about liquids, and what other states of matter fit into each of these categories.
- Define viscosity , and comment on the molecular properties that correlate with viscosity.
- Define surface tension and explain its cause.
- State the major factors that determine the extent to which a liquid will wet a solid surface.
- Explain what a surfactant is, and how it reduces the surface tension of water and aids in cleaning.
- Explain the origins of capillary rise and indicate the major factors that affect it.
- Describe the structure of a soap bubble , and comment on the role of the "soap" molecules in stabilizing it.
- Comment on the applicability of the term "structure" when describing a pure liquid phase.
The molecular units of a liquid, like those of solids, are in direct contact, but never for any length of time and in the same locations. Whereas the molecules or ions of a solid maintain the same average positions, those of liquids are continually jumping and sliding to new ones, giving liquids something of the mobility of gases. From the standpoint of chemistry, this represents the best of two worlds; rapid chemical change requires intimate contact between the agents undergoing reaction, but these agents, along with the reaction products, must be free to move away to allow new contacts and further reaction to take place. This is why so much of what we do with chemistry takes place in the liquid phase.
Liquids occupy a rather peculiar place in the trinity of solid, liquid and gas. A liquid is the preferred state of a substance at temperatures intermediate between the realms of the solid and the gas. However, if one looks at the melting and boiling points of a variety of substances (Figure \(\PageIndex{1}\)), you will notice that the temperature range within which many liquids can exist tends to be rather small. In this, and in a number of other ways, the liquid state appears to be somewhat tenuous and insecure, as if it had no clear right to exist at all, and only does so as an oversight of Nature. Certainly the liquid state is the most complicated of the three states of matter to analyze and to understand. But just as people whose personalities are more complicated and enigmatic are often the most interesting ones to know, it is these same features that make the liquid state of matter the most fascinating to study.
Anyone can usually tell if a substance is a liquid simply by looking at it. What special physical properties do liquids possess that make them so easy to recognize? One obvious property is their mobility , which refers to their ability to move around, to change their shape to conform to that of a container, to flow in response to a pressure gradient, and to be displaced by other objects. But these properties are shared by gases, the other member of the two fluid states of matter. The real giveaway is that a liquid occupies a fixed volume, with the consequence that a liquid possesses a definite surface . Gases, of course, do not; the volume and shape of a gas are simply those of the container in which it is confined. The higher density of a liquid also plays a role here; it is only because of the large density difference between a liquid and the space above it that we can see the surface at all. (What we are really seeing are the effects of reflection and refraction that occur when light passes across the boundary between two phases differing in density, or more precisely, in their refractive indexes .)
Viscosity: Resistance to Flow
The term viscosity is a measure of resistance to flow. It can be measured by observing the time required for a given volume of liquid to flow through the narrow part of a viscometer tube. The viscosity of a substance is related to the strength of the forces acting between its molecular units. In the case of water, these forces are primarily due to hydrogen bonding. Liquids such as syrups and honey are much more viscous because the sugars they contain are studded with hydroxyl groups (–OH) which can form multiple hydrogen bonds with water and with each other, producing a sticky disordered network.
| liquid | viscosity (cP) |
|---|---|
| water H(OH) | 1.00 |
| diethyl ether (CH 3 -CH 2 ) 2 O | 0.23 |
| benzene C 6 H 6 | 0.65 |
| glycerin C 3 H 5 (OH) 3 | 280 |
| mercury | 1.5 |
| motor oil, SAE30 | 200 |
| honey | ~10,000 |
| molasses | ~5000 |
| pancake syrup | ~3000 |
Even in the absence of hydrogen bonding, dispersion forces are universally present (as in mercury). Because these forces are additive, they can be very significant in long carbon-chain molecules such as those found in oils used in cooking and for lubrication. Most "straight-chain" molecules are really bent into complex shapes, and dispersion forces tend to preserve their spaghetti-like entanglements with their neighbors.
The temperature dependence of the viscosity of liquids is well known to anyone who has tried to pour cold syrup on a pancake. Because the forces that give rise to viscosity are weak, they are easily overcome by thermal motions, so it is no surprise that viscosity decreases as the temperature rises.
| T/°C | 0 | 10 | 20 | 40 | 60 | 80 | 100 |
|---|---|---|---|---|---|---|---|
| viscosity/cP | 1.8 | 1.3 | 1.0 | 0.65 | 0.47 | 0.36 | 0.28 |
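If we assume an Arrhenius-type temperature dependence, with viscosity proportional to exp(Ea/RT), a standard model though not stated in the text, the endpoints of the table above give a rough activation energy for viscous flow in water:

```python
import math

# Two-point estimate assuming eta proportional to exp(Ea/RT), using the
# 0 degree C and 100 degree C entries of the viscosity table.
R = 8.314                 # gas constant, J mol-1 K-1
T1, eta1 = 273.15, 1.8    # 0 degrees C
T2, eta2 = 373.15, 0.28   # 100 degrees C

Ea = R * math.log(eta1 / eta2) / (1 / T1 - 1 / T2)
print(f"Ea = {Ea / 1000:.0f} kJ/mol")  # roughly 16 kJ/mol
```

The result, roughly 16 kJ/mol, is comparable to the hydrogen bond energy quoted earlier, consistent with the idea that flow requires molecules to break free of their hydrogen-bonded neighbors.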
Motor Oil
Automotive lubricating oils can be too viscous at low temperatures (making it harder for your car to operate on a cold day), while losing so much viscosity at engine operating temperatures that their lubricating properties become impaired. These engine oils are sold in a wide range of viscosities; the higher-viscosity oils are used in warmer weather and the lower-viscosity oils in colder weather. The idea is to achieve a fairly constant viscosity that is ideal for the particular application. By blending in certain ingredients, lubricant manufacturers are able to formulate “multigrade” oils whose viscosities are less sensitive to temperatures, thus making a single product useful over a much wider temperature range.
The next time you pour a viscous liquid over a surface, notice how different parts of the liquid move at different rates and sometimes in different directions. To flow freely, the particles making up a fluid must be able to move independently. Intermolecular attractive forces work against this, making it difficult for one molecule to pull away from its neighbors and force its way in between new neighbors.
The pressure drop that is observed when a liquid flows through a pipe is a direct consequence of viscosity. Those molecules that happen to find themselves near the inner walls of a tube tend to spend much of their time attached to the walls by intermolecular forces, and thus move forward very slowly. Movement of the next layer of molecules is impeded as they slip and slide over the slow-movers; this process continues across successive layers of molecules as we move toward the center of the tube, where the velocity is greatest. This effect is called viscous drag , and is directly responsible for the pressure drop that can be quite noticeable when you are taking a shower bath and someone else in the house suddenly turns on the water in the kitchen.
Liquids and gases are both fluids and exhibit resistance to flow through a confined space. However, it is interesting (and not often appreciated) that their viscosities have entirely different origins, and that they vary with temperature in opposite ways. Why should the viscosity of a gas increase with temperature?
Surface Tension
A molecule within the bulk of a liquid experiences attractions to neighboring molecules in all directions, but since these average out to zero, there is no net force on the molecule because it is, on the average, as energetically comfortable in one location within the liquid as in another. Liquids ordinarily do have surfaces, however, and a molecule that finds itself in such a location is attracted to its neighbors below and to either side, but there is no attraction operating in the 180° solid angle above the surface. As a consequence, a molecule at the surface will tend to be drawn into the bulk of the liquid. Conversely, work must be done in order to move a molecule within a liquid to its surface.
Clearly there must always be some molecules at the surface, but the smaller the surface area, the lower the potential energy. Thus intermolecular attractive forces act to minimize the surface area of a liquid . The geometric shape that has the smallest ratio of surface area to volume is the sphere , so very small quantities of liquids tend to form spherical drops. As the drops get bigger, their weight deforms them into the typical tear shape.
Think of a bubble as a hollow drop. Surface tension acts to minimize the surface, and thus the radius of the spherical shell of liquid, but this is opposed by the pressure of vapor trapped within the bubble.
The imbalance of forces near the upper surface of a liquid has the effect of an elastic film stretched across the surface. You have probably seen water striders and other insects take advantage of this when they walk across a pond. Similarly, you can carefully "float" a light object such as a steel paperclip on the surface of water in a cup.
Surface tension is defined as the amount of work that must be done in order to create unit area of surface. The SI units are J m –2 (or N m –1 ), but values are more commonly expressed in mN m –1 or in cgs units of dyn cm –1 or erg cm –2 . Table \(\PageIndex{3}\) compares the surface tensions of several liquids at room temperature. Note especially that:
- hydrocarbons and non-polar liquids such as ether have rather low values
- one of the main functions of soaps and other surfactants is to reduce the surface tension of water
- mercury has the highest surface tension of any liquid at room temperature. It is so high that mercury does not flow in the ordinary way, but breaks into small droplets that roll independently.
| liquid (20°C unless noted) | surface tension (dyn/cm) |
|---|---|
| water H(OH) | 72.7 |
| diethyl ether (CH 3 -CH 2 ) 2 O | 17.0 |
| benzene C 6 H 6 | 40.0 |
| glycerin C 3 H 5 (OH) 3 | 63 |
| mercury (15°C) | 487 |
| n -octane | 21.8 |
| sodium chloride solution (6 M in water) | 82.5 |
| sucrose solution (85% in water) | 76.4 |
| sodium oleate (soap) solution in water | 25 |
Surface tension and viscosity are not directly related, as you can verify by noting the disparate values of these two quantities for mercury. Viscosity depends on intermolecular forces within the liquid, whereas surface tension arises from the difference in the magnitudes of these forces within the liquid and at the surface. Surface tension is also affected by the electrostatic charge of a body, as is dramatically illustrated by the famous "mercury beating heart" demonstration.
Surface tension always decreases with temperature as thermal motions reduce the effect of intermolecular attractions (Table \(\PageIndex{4}\)). This is one reason why washing with warm water is more effective; the lower surface tension allows water to more readily penetrate a fabric.
| temperature (°C) | surface tension of water (dyn/cm) |
|---|---|
| 0 | 75.9 |
| 20 | 72.7 |
| 50 | 67.9 |
| 100 | 58.9 |
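Between the tabulated temperatures, the surface tension of water can be estimated well enough by linear interpolation, since the decrease is nearly linear over modest ranges. A minimal sketch using the table above:

```python
# (temperature C, surface tension dyn/cm) pairs from the table above
data = [(0, 75.9), (20, 72.7), (50, 67.9), (100, 58.9)]

def surface_tension(t):
    """Linearly interpolate the tabulated surface tension of water."""
    for (t1, g1), (t2, g2) in zip(data, data[1:]):
        if t1 <= t <= t2:
            return g1 + (g2 - g1) * (t - t1) / (t2 - t1)
    raise ValueError("temperature outside tabulated range")

print(f"{surface_tension(37):.1f} dyn/cm at body temperature")  # ~70.0
```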
"Tears" in a wine glass: effects of a surface tension gradient
Why do "tears" form inside a wine glass? You have undoubtedly noticed this; pour some wine into a glass, and after a few minutes, droplets of clear liquid can be seen forming on the inside walls of the glass about a centimeter above the level of the wine. This happens even when the wine and the glass are at room temperature, so it has nothing to do with condensation.
The explanation involves Raoult's law , hydrogen bonding, adsorption, and surface tension, so this phenomenon makes a good review of much you have learned about liquids and solutions. The tendency of a surface tension gradient to draw liquid into the region of higher surface tension is known as the Marangoni effect .
First, remember that both water and alcohol are hydrogen-bonding liquids; as such, they are both strongly attracted to the oxygen atoms and -OH groups on the surface of the glass. This causes the liquid film to creep up the walls of the glass. Alcohol, the more volatile of the two liquids, vaporizes more readily, causing the upper (and thinnest) part of the liquid film to become enriched in water. Because of its stronger hydrogen bonding, water has a larger surface tension than alcohol, so as the alcohol evaporates, the surface tension of the upper part of the liquid film increases. This causes that part of the film to draw up more liquid and assume a spherical shape, which gravity distorts into a "tear". The tear eventually grows so large that gravity wins out over adsorption, and the drop falls back into the liquid, soon to be replaced by another.
Interfacial effects in liquids
The surface tension discussed immediately above is an attribute of a liquid in contact with a gas (ordinarily the air or vapor) or a vacuum. But if you think about it, the molecules in the part of a liquid that is in contact with any other phase (liquid or solid) will experience a different balance of forces than the molecules within the bulk of the liquid. Thus surface tension is a special case of the more general interfacial tension which is defined by the work associated with moving a molecule from within the bulk liquid to the interface with any other phase.
Wetting
Take a plastic mixing bowl from your kitchen, and splash some water around in it. You will probably observe that the water does not cover the inside surface uniformly, but remains dispersed into drops. The same effect is seen on a dirty windshield; running the wipers simply breaks hundreds of drops into thousands. By contrast, water poured over a clean glass surface will wet it, leaving a uniform film.
When a molecule of a liquid is in contact with another phase, its behavior depends on the relative attractive strengths of its neighbors on the two sides of the phase boundary. If the molecule is more strongly attracted to its own kind, then interfacial tension will act to minimize the area of contact by increasing the curvature of the surface. This is what happens at the interface between water and a hydrophobic surface such as a plastic mixing bowl or a windshield coated with oily material. A liquid will wet a surface if the angle at which it makes contact with the surface is less than 90°. The value of this contact angle can be predicted from the properties of the liquid and solid separately.
A clean glass surface, by contrast, has –OH groups sticking out of it which readily attach to water molecules through hydrogen bonding; the lowest potential energy now occurs when the contact area between the glass and water is maximized. This causes the water to spread out evenly over the surface, or to wet it.
Surfactants
The surface tension of water can be reduced to about one-third of its normal value by adding some soap or synthetic detergent. These substances, known collectively as surfactants , are generally hydrocarbon molecules having an ionic group on one end. The ionic group, being highly polar, is strongly attracted to water molecules; we say it is hydrophilic . The hydrocarbon ( hydrophobic ) portion is just the opposite; inserting it into water would break up the local hydrogen-bonding forces and is therefore energetically unfavorable. What happens, then, is that the surfactant molecules migrate to the surface with their hydrophobic ends sticking out, effectively creating a new surface. Because hydrocarbons interact only through very weak dispersion forces, this new surface has a greatly reduced surface tension.
Washing
How do soaps and detergents help get things clean? There are two main mechanisms. First, by reducing water's surface tension, the water can more readily penetrate fabrics (see the illustration under "Water repellency" below.) Secondly, much of what we call "dirt" consists of non-water soluble oils and greasy materials which the hydrophobic ends of surfactant molecules can penetrate. When they do so in sufficient numbers and with their polar ends sticking out, the resulting aggregate can hydrogen-bond to water and becomes "solubilized".
Washing is usually more effective in warm water; higher temperatures reduce the surface tension of the water and make it easier for the surfactant molecules to penetrate the material to be removed.
Can magnets reduce the surface tension of water?
The answer is no, but claims that they can are widely circulated in promotions of dubious products such as "magnetic laundry disks" which are supposed to reduce the need for detergents.
Water repellency
In Gore-Tex, one of the more successful water-proof fabrics, the fibers are made non-wettable by coating them with a Teflon-like fluoropolymer.
Water is quite strongly attracted to many natural fibers such as cotton and linen through hydrogen-bonding to their cellulosic hydroxyl groups. A droplet that falls on such a material will flatten out and be drawn through the fabric. One way to prevent this is to coat the fibers with a polymeric material that is not readily wetted. The water tends to curve away from the fibers so as to minimize the area of contact, so the droplets are supported on the gridwork of the fabric but tend not to fall through.
Capillary rise
If the walls of a narrow tube can be efficiently wetted by a liquid, then the liquid will be drawn up into the tube by capillary action . This effect is only noticeable in narrow containers (such as burettes) and especially in small-diameter capillary tubes . The smaller the diameter of the tube, the higher will be the capillary rise. A clean glass surface is highly attractive to most molecules, so most liquids display a concave meniscus in a glass tube.
To help you understand capillary rise, imagine a glass tube of small cross-section inserted into an open container of water. The attraction of the water to the inner wall of the tube pulls the edges of the water up, creating a curved meniscus whose surface area is smaller than the cross-section area of the tube. The surface tension of the water acts against this enlargement of its surface by attempting to reduce the curvature, stretching the surface into a flatter shape by pulling the liquid farther up into the tube. This process continues until the weight of the liquid column becomes equal to the surface tension force, and the system reaches mechanical equilibrium.
Capillary rise results from a combination of two effects: the tendency of the liquid to wet (bind to) the surface of the tube (measured by the value of the contact angle), and the action of the liquid's surface tension to minimize its surface area.
The capillary rise is given by the following formula (which you need not memorize!):

\[ h = \dfrac{2\gamma \cos \theta}{\rho g r}\]

in which

- h = elevation of the liquid (m)
- γ = surface tension (N m –1 )
- θ = contact angle (radians)
- ρ = density of liquid (kg m –3 )
- g = acceleration of gravity (m s –2 )
- r = radius of tube (m)
The contact angle between water and ordinary soda-lime glass is essentially zero; since the cosine of 0 radians is unity, its capillary rise is especially noticeable. In general, water can be drawn very effectively into narrow openings such as the channels between fibers in a fabric and into porous materials such as soils.
Note that if θ is greater than 90° (π/2 radians), the capillary "rise" will be negative — meaning that the molecules of the liquid are more strongly attracted to each other than to the surface. This is readily seen with mercury in a glass container, in which the meniscus is upwardly convex instead of concave.
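Plugging representative handbook values into the capillary-rise formula h = 2γcosθ/(ρgr) illustrates both behaviors: water rising in a clean glass capillary (θ ≈ 0) and mercury being depressed (contact angle often quoted near 140°):

```python
import math

def capillary_rise(gamma, theta_deg, density, radius, g=9.81):
    """Capillary rise h = 2*gamma*cos(theta) / (rho*g*r), in metres.
    Negative h means the liquid is depressed below the outside level."""
    return 2 * gamma * math.cos(math.radians(theta_deg)) / (density * g * radius)

r = 1.0e-4  # 0.1 mm tube radius

h_water = capillary_rise(0.0727, 0, 1000, r)       # water wets glass: theta ~ 0
h_mercury = capillary_rise(0.487, 140, 13546, r)   # mercury on glass: theta ~ 140 deg

print(f"water:   {h_water * 100:+.1f} cm")    # rises about +15 cm
print(f"mercury: {h_mercury * 100:+.1f} cm")  # depressed several cm
```

Note how cos θ turns negative once θ exceeds 90°, reproducing the negative "rise" of mercury described above.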
Capillary action and trees
Capillary rise is the principal mechanism by which water is able to reach the highest parts of trees. Water strongly bonds to the narrow (25 μm) cellulose channels in the xylem. (Osmotic pressure and "suction" produced by loss of water vapor through the leaves also contribute to this effect, and are the main drivers of water flow in smaller plants.)
Bubbles
Bubbles can be thought of as "negative drops" — spherical spaces within a liquid containing a gas, often just the vapor of the liquid. Bubbles within pure liquids such as water (which we see when water boils) are inherently unstable because the liquid's surface tension causes them to collapse. But in the presence of a surfactant, bubbles can be stabilized and given an independent if evanescent existence.
The pressure of the gas inside a bubble P in must be sufficient to oppose the pressure outside of it ( P out , the atmospheric pressure plus the hydrostatic pressure of any other fluid in which the bubble is immersed). But the force caused by the surface tension γ of the liquid boundary also tends to collapse the bubble, so P in must exceed P out by the amount of this force, which for a soap bubble (which has two gas-liquid surfaces) is 4γ/ r :

\[ P_{in}=P_{out} + \dfrac{4\gamma}{r}\]

(For a cavity bounded by a single surface within a liquid, the corresponding excess pressure is 2γ/ r .)
The most important feature of this relationship (known as Laplace's law ) is that the pressure required to maintain the bubble is inversely proportional to its radius. This means that the smallest bubbles have the greatest internal gas pressures! This might seem counterintuitive, but if you are an experienced soap-bubble blower, or have blown up a rubber balloon (in which the elasticity of the rubber has an effect similar to the surface tension in a liquid), you will have noticed that you need to puff harder to begin the expansion.
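The inverse-radius relationship is easy to check numerically. Using an assumed surface tension of about 25 mN/m for a soap solution, the excess pressure in a 1 mm bubble is ten times that in a 1 cm bubble:

```python
def excess_pressure(gamma, radius, n_surfaces=2):
    """Excess pressure inside a bubble by Laplace's law.
    A soap bubble has two gas-liquid surfaces (4*gamma/r);
    a gas cavity within a liquid has one (2*gamma/r)."""
    return 2 * n_surfaces * gamma / radius

gamma_soap = 0.025  # N/m, assumed value for a soap solution

dp_small = excess_pressure(gamma_soap, 0.001)  # 1 mm radius
dp_large = excess_pressure(gamma_soap, 0.01)   # 1 cm radius

print(f"1 mm bubble: {dp_small:.0f} Pa excess pressure")
print(f"1 cm bubble: {dp_large:.0f} Pa excess pressure")
# the smaller bubble requires the higher internal pressure
```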
Soap bubbles
All of us at one time or another have enjoyed the fascination of creating soap bubbles and admiring their intense and varied colors as they drift around in the air, seemingly aloof from the constraints that govern the behavior of ordinary objects — but only for a while! Their life eventually comes to an abrupt end as they fall to the ground or pop in mid-flight.
The walls of these bubbles consist of a thin layer of water molecules sandwiched between two layers of surfactant molecules. Their spherical shape is of course the result of water's surface tension. Although the surfactant (soap) initially reduces the surface tension, expansion of the bubble spreads the water into a thinner layer and spreads the surfactant molecules over a wider area, decreasing their concentration. This, in turn, allows the water molecules to interact more strongly, increasing the surface tension and stabilizing the bubble as it expands.
The bright colors we see in bubbles arise from interference between light waves reflected back from the inner and outer surfaces of the film, indicating that the thickness of the water layer is comparable to the wavelengths of visible light (roughly 400-700 nm).
Once the bubble is released, it can endure until it strikes a solid surface or collapses owing to loss of the water layer by evaporation. The latter process can be slowed by adding a bit of glycerine to the liquid. A variety of recipes and commercial "bubble-making solutions" are available; some of the latter employ special liquid polymers which slow evaporation and greatly extend the bubble lifetimes. Bubbles blown at very low temperatures can be frozen, but these eventually collapse as the gas diffuses out.
Bubbles, surface tension, and breathing
The sites of gas exchange with the blood in mammalian lungs are tiny sacs known as alveoli . In humans there are about 150 million of these, with a total surface area about the size of a tennis court. Each alveolus is about 0.25 mm in diameter, and its inner surface is coated with a film of water, whose high surface tension not only resists inflation, but would ordinarily cause the thin-walled alveoli to collapse. To counteract this effect, special cells in the alveolar wall secrete a phospholipid pulmonary surfactant that reduces the surface tension of the water film to about 35% of its normal value. But there is another problem: physically, the alveoli can be regarded as a huge collection of interconnected bubbles of varying sizes. As noted above, the surface tension of a surfactant-stabilized bubble increases with its size. So by making it easier for the smaller alveoli to expand while inhibiting the expansion of the larger ones, the surfactant helps to equalize the volume changes of all the alveoli as one inhales and exhales.
Pulmonary surfactant is produced only in the later stages of fetal development, so premature infants often do not have enough and are subject to respiratory distress syndrome which can be fatal.
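Laplace's law makes the role of surfactant concrete. Using a single-surface cavity (ΔP = 2γ/r), an assumed alveolar radius of 0.125 mm, and the roughly one-third reduction in surface tension quoted above, the pressure needed to keep an alveolus inflated drops by about two-thirds:

```python
def alveolar_pressure(gamma, radius):
    """Excess pressure (Pa) to keep open a spherical, liquid-lined
    cavity with a single gas-liquid surface: dP = 2*gamma/r."""
    return 2 * gamma / radius

r = 1.25e-4                        # radius of a ~0.25 mm diameter alveolus
gamma_water = 0.070                # N/m, an uncovered water film
gamma_surf = 0.35 * gamma_water    # reduced to ~35% of normal by surfactant

p_no = alveolar_pressure(gamma_water, r)
p_yes = alveolar_pressure(gamma_surf, r)

print(f"without surfactant: {p_no:.0f} Pa")
print(f"with surfactant:    {p_yes:.0f} Pa")
```

These are illustrative numbers rather than physiological measurements, but they show why inflating surfactant-deficient lungs requires so much more effort.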
Structure of liquids
You can think of a simple liquid such as argon or methane as a collection of loosely-packed marbles that can assume various shapes. Although the overall arrangement of the individual molecular units is entirely random, there is a certain amount of short-range order: the presence of one molecule at a given spot means that the neighboring molecules must be at least as far away as the sum of the two radii, and this in turn affects the possible locations of more distant concentric shells of molecules.
An important consequence of the disordered arrangement of molecules in a liquid is the presence of void spaces. These, together with the increased kinetic energy of colliding molecules which helps push them apart, are responsible for the approximately 15-percent decrease in density that is observed when solids based on simple spherical molecules such as Ne and Hg melt into liquids. These void spaces are believed to be the key to the flow properties of liquids; the more “holes” there are in the liquid, the more easily the molecules can slip and slide over one another.
As the temperature rises, thermal motions of the molecules increase and the local structure begins to deteriorate, as shown in the plots below.
This plot shows the relative probability of finding a mercury atom at a given distance from another atom located at distance 0. You can see that as thermal motions increase, the probabilities even out at greater distances. It is very difficult to design experiments that yield the kind of information required to define the microscopic arrangement of molecules in the liquid state.
Many of our current ideas on the subject come from computer simulations based on hypothetical models. In a typical experiment, the paths of about 1000 molecules in a volume of space are calculated. The molecules are initially given random kinetic energies whose distribution is consistent with the Boltzmann distribution for a given temperature. The trajectories of all the molecules are followed as they change with time due to collisions and other interactions; these interactions must be calculated according to an assumed potential energy-vs.-distance function that is part of the particular model being investigated.
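The assumed potential-energy-vs-distance function in such simulations is very commonly the Lennard-Jones 12-6 potential (the text does not name a specific model, so take this as a representative example). In reduced units it has a steep repulsive wall, a shallow attractive well of depth ε, and a minimum at r = 2^(1/6)σ:

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones 12-6 pair potential in reduced units:
    V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6 * sr6 - sr6)

# The minimum lies at r = 2**(1/6)*sigma, where V = -epsilon
r_min = 2 ** (1 / 6)
print(f"r_min = {r_min:.4f} sigma, V(r_min) = {lennard_jones(r_min):.4f} eps")

# Repulsion dominates at short range, weak attraction at long range
print(f"V(0.95 sigma) = {lennard_jones(0.95):+.3f} eps")
print(f"V(1.50 sigma) = {lennard_jones(1.50):+.3f} eps")
```

The steepness of the r⁻¹² wall is what makes the repulsive forces dominate the local packing in these simulations, as the next paragraph describes.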
These computer experiments suggest that whatever structure simple liquids do possess is determined mainly by the repulsive forces between the molecules; the attractive forces act in a rather nondirectional, general way to hold the liquid together. It is also found that if spherical molecules are packed together as closely as geometry allows (in which each molecule would be in contact with twelve nearest neighbors), the collection will have a long-range order characteristic of a solid until the density is decreased by about ten percent, at which point the molecules can slide around and move past one another, thus preserving only short-range order. In recent years, experimental studies based on ultra-short laser flashes have revealed that local structures in liquids have extremely short lifetimes, of the order of picoseconds to nanoseconds.
It has long been suspected that the region of a liquid that bounds a solid surface is more ordered than the bulk liquid. This has been confirmed for the case of water in contact with silicon, in which the liquid forms ordered layers similar to those found in liquid crystals.
7.5: Changes of State
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the terms in the context of this topic.
- Describe what is meant by the escaping tendency of molecules from a solid, liquid, or gas. What experimentally-observable quantity serves as its measure?
- The terms " vapor pressure " and " pressure of the vapor above a solid or liquid" are easily confused. Explain the difference between them, and state under what conditions they will have identical values.
- Define relative humidity and calculate its value, given the partial pressure of water vapor and a suitable vapor pressure table or plot for water.
- Explain the difference between evaporation and boiling, and why liquids may not begin to boil until the temperature exceeds the boiling point.
- Given a phase diagram of a pure substance, label all of the lines and the regions they enclose, identify the normal melting and boiling points , the triple point and the critical point , and state the physical significance of the latter two.
- Conversely, sketch out a properly-labeled phase diagram for a pure substance, given the parameters mentioned above, along with information about the relative densities of the solid and liquid phases.
A given substance will exist in the form of a solid, liquid, or gas, depending on the temperature and pressure. In this unit, we will learn what common factors govern the preferred state of matter under a particular set of conditions, and we will examine the way in which one phase gives way to another when these conditions change.
Phase Stability
Earlier in the morning, the droplets of water in Figure \(\PageIndex{1}\) were tiny crystals of ice, but even though the air temperature is still around 0°C and will remain so all day, the sun's warmth has rendered them into liquid form, bound up by surface tension into reflective spheres. By late afternoon, most of the drops will be gone, their H 2 O molecules now dispersed as a tenuous atmospheric gas.
Solid, liquid, and gas — these are the basic three states, or phases , in which the majority of small-molecule substances can exist. At most combinations of pressure and temperature, only one of these phases will be favored; this is the phase that is most thermodynamically stable under these conditions. A proper explanation of why most substances have well-defined melting and boiling points needs to invoke some principles of thermodynamics and quantum mechanics. A full explanation of this would go beyond the scope of what most students who see this lesson are familiar with, but the following greatly over-simplified explanation should convince you that it is something more than black magic.
All atoms and molecules at temperatures above absolute zero possess thermal energy that keeps them in constant states of motion. A fundamental law of nature mandates that this energy tends to spread out and be shared as widely as possible. Within a single molecular unit, this spreading and sharing can occur by dispersing the energy into the many allowed states of motion (translation, vibration, rotation) of the molecules of the substance itself. There are a huge number of such states, and they are quantized, meaning that they all require different amounts of thermal energy to come into action. Temperature is a measure of the intensity of thermal energy, so the higher the temperature, the greater will be the number of states that can be active, and the more extensively will the energy be dispersed among these allowed states.
In solids, the molecular units are bound into fixed locations, so the kinds of motion (and thus the number of states) that can be thermally activated is relatively small. Because the molecules of solids possess the lowest potential energies, solids are the most stable states at low temperatures. At the other extreme are gas molecules which are not only free to vibrate and rotate, but are in constant translational motion. The corresponding number of quantum states is hugely greater for gases, providing a nearly-endless opportunity to spread energy. But this can only happen if the temperature is high enough to populate this new multitude of states. Once it does, the gaseous state wins out by a landslide.
Escaping Tendency and Vapor Pressure
Escaping tendency is more formally known as free energy . Bear in mind also that changes in state always involve changes in enthalpy and internal energy. In much the same way that tea spreads out from a tea bag into the larger space of the water in which it is immersed, molecules that are confined within a phase (liquid, solid, or gas) will tend to spread themselves (and the thermal energy they carry with them) as widely as possible. This fundamental law of nature is manifested in what we will call the escaping tendency of the molecules from the phase. The escaping tendency is a quantity of fundamental importance in understanding all chemical equilibria and transformations. We need not define the term in a formal way at this point. What is important for now is how we can observe and compare escaping tendencies.
Think first of a gas : what property of the gas constitutes the best measure of its tendency to escape from a container? It does not require much reflection to conclude that the greater the pressure of the gas, the more frequently will its molecules collide with the walls of the container and possibly find their way through an opening to the outside. Thus the pressure confining a gas is a direct measure of the tendency of molecules to escape from a gaseous phase.
What about liquids and solids? Although we think of the molecules of condensed phases as permanently confined within them, these molecules still possess some thermal energy, and there is always a chance that one that is near the surface will occasionally fly loose and escape into the space outside the solid or liquid. We can observe the tendency of molecules to escape into the gas phase from a solid or liquid by placing the substance in a closed, evacuated container connected to a manometer for measuring gas pressure (Figure \(\PageIndex{2}\)).
If we do this for water (Figure \(\PageIndex{3}\)), the partial pressure of water P w in the vapor space will initially be zero ( 1 ). Gradually, P w will rise as molecules escape from the substance and enter the vapor phase. But at the same time, some of the vapor molecules will "escape" back into the liquid phase ( 2 ). But because this latter process is less favorable (at the particular temperature represented here), P w continues to rise. Eventually a balance is reached between the two processes ( 3 ), and P w eventually stabilizes at a fixed value P vap that depends on the substance and on the temperature and is known as the equilibrium vapor pressure , or simply as the “vapor pressure” of the liquid or solid. The vapor pressure is a direct measure of the escaping tendency of molecules from a condensed state of matter.
Note carefully that if the container is left open to the air, it is unlikely that many of the molecules in the vapor phase will return to the liquid phase. They will simply escape from the entire system and the partial pressure of water vapor P w will never reach P vap ; the liquid will simply evaporate without any kind of equilibrium ever being achieved.
The escaping tendency of molecules from a phase always increases with the temperature; therefore the vapor pressure of a liquid or solid will be greater at higher temperatures. As Figure \(\PageIndex{4}\) shows, the variation of the vapor pressure with the temperature is not linear.
It's important that you be able to interpret vapor pressure plots such as the three shown here. Take special note of how boiling points can be found from these plots. You will recall that the normal boiling point is the temperature at which the liquid is in equilibrium with its vapor at a partial pressure of 1 atm (760 torr). Thus the intercepts of each curve with the blue dashed 760-torr line indicate the normal boiling points of each liquid. Similarly, you can easily estimate the boiling points these liquids would have in Denver, Colorado where the atmospheric pressure is 630 torr by simply constructing a horizontal line corresponding to this pressure.
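Reading boiling points off the plot can be checked numerically with the Antoine equation, a standard empirical fit for vapor pressure. The constants below are a commonly quoted set for water between roughly 1 and 100 °C (an outside assumption, not from the text; verify them before relying on the result):

```python
import math

# Antoine constants for water: P in torr, T in Celsius (valid ~1-100 C)
A, B, C = 8.07131, 1730.63, 233.426

def vapor_pressure(t_celsius):
    """Antoine equation: log10(P/torr) = A - B/(C + T)."""
    return 10 ** (A - B / (C + t_celsius))

def boiling_point(p_torr):
    """Invert the Antoine equation for the temperature at which
    the vapor pressure equals the given external pressure."""
    return B / (A - math.log10(p_torr)) - C

bp760 = boiling_point(760)  # normal boiling point
bp630 = boiling_point(630)  # Denver's typical atmospheric pressure

print(f"normal boiling point (760 torr): {bp760:.1f} C")
print(f"boiling point in Denver (630 torr): {bp630:.1f} C")  # roughly 95 C
```

The result reproduces the graphical construction: drawing a 630-torr horizontal line across the water curve intersects it near 95 °C.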
The normal boiling point is the temperature at which the liquid is in equilibrium with its vapor at a partial pressure of 1 atm; that is, the temperature at which its vapor pressure equals standard atmospheric pressure.
Vapor pressure of water
The great importance of H 2 O in our world merits a more detailed look at its vapor pressure properties. Because the vapor pressure of water varies greatly over the range of temperatures in which the liquid can exist, it is plotted on two scales in Figure \(\PageIndex{5}\).
The larger plot in Figure \(\PageIndex{5}\) covers the lowest temperatures, while the inset shows the complete range of pressure values. Note particularly that
- The normal boiling point is the temperature at which the vapor pressure is the same as that of the standard atmosphere, 760 torr.
- The boiling point at any other pressure can be found by dropping a vertical line from the curve to the temperature axis.
- As seen on the inset plot, the vapor pressure curve of water ends at the critical point .
Relative humidity
The vapor pressure of water at 22°C is about 20 torr, or around 0.026 atm (2.7 kPa). This is the partial pressure of H 2 O that will be found in the vapor space within a closed container of water at this temperature; the air in this space is said to be saturated with water vapor. Humid air is sometimes described as "heavy", but this is misleading; the average molar mass of dry air is 29, but that of water is only 18, so humid air is actually less dense. The feeling of "heaviness" probably relates to the reduced ability of perspiration to evaporate in humid air. In ordinary air, the partial pressure of water vapor is normally less than its saturation or equilibrium value. The ratio of the partial pressure of H 2 O in the air to its (equilibrium) vapor pressure at any given temperature is known as the relative humidity . Water enters the atmosphere through evaporation from the ocean and other bodies of water, and from water-saturated soils. The resulting vapor tends to get dissipated and diluted by atmospheric circulation, so the relative humidity rarely reaches 100 percent. When it does and the weather is warm, we are very uncomfortable because vaporization of water from the skin is inhibited; if the air is already saturated with water, then there is no place for our perspiration to go, other than to drip down our face.
Because the vapor pressure increases with temperature, a parcel of air containing a fixed partial pressure of water vapor will have a larger relative humidity at low temperatures than at high temperatures. Thus when cold air enters a heated house, its water content remains unchanged but the relative humidity drops. In climates with cold winters, this promotes increased moisture loss from house plants and from mucous membranes, leading to wilting of the former and irritation of the latter.
The vapor pressure of water is 3.9 torr at –2°C and 20 torr at 22°C. What will be the relative humidity inside a house maintained at 22°C when the outside air temperature is –2°C and the relative humidity is 70%?
Solution
At 70 percent relative humidity, the partial pressure of water vapor in the –2°C air is (0.70 × 3.9 torr) = 2.7 torr. When this air enters the house and warms to 22°C, its water content is unchanged, so its relative humidity becomes (2.7 torr)/(20 torr) = 0.14, or 14%.
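The arithmetic of this example can be sketched in a few lines of Python (the function name is ours, not from the text):

```python
# Worked example: outdoor air at -2 °C and 70% RH warmed to 22 °C.
# Vapor pressures from the text: 3.9 torr at -2 °C, 20 torr at 22 °C.

def indoor_rh(outdoor_rh, p_sat_outdoor, p_sat_indoor):
    """The partial pressure of H2O is unchanged on warming;
    only the saturation pressure changes."""
    p_partial = outdoor_rh * p_sat_outdoor   # torr
    return p_partial / p_sat_indoor

rh = indoor_rh(0.70, 3.9, 20.0)
print(f"indoor relative humidity: {rh:.0%}")   # 14%
```

The same function works for any pair of temperatures once the two saturation vapor pressures are known.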
In the evening, especially on clear nights, solid objects (even spider webs!) lose heat to the sky more rapidly than does the air. It is often important to know what temperature such objects must drop to so that atmospheric moisture will condense out on them (Figure \(\PageIndex{1}\)). The dew point is the temperature at which the relative humidity is 100 percent — that is, the temperature at which the vapor pressure of water becomes equal to its partial pressure at a given [higher] temperature and relative humidity. For water to condense directly out of the atmosphere as rain, the air must be at or below the dew point, but this is not of itself the only requirement for the formation of rain, as we will see shortly.
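A dew point can be estimated by inverting an empirical fit for water's saturation vapor pressure. The sketch below uses the Magnus approximation, which is an assumption for illustration (it is not developed in this section); the constants A and B are one commonly quoted parameter set:

```python
import math

# Dew point from air temperature and relative humidity, via the
# Magnus approximation for water's saturation vapor pressure.
# A and B are one commonly used empirical fit (an assumption here).
A, B = 17.625, 243.04   # B in °C

def dew_point(t_celsius, rh):
    """Temperature at which air at t_celsius and fractional RH rh
    would reach 100% relative humidity."""
    gamma = math.log(rh) + A * t_celsius / (B + t_celsius)
    return B * gamma / (A - gamma)

print(f"{dew_point(22.0, 0.70):.1f} °C")   # about 16.3 °C
```

As a sanity check, at 100% relative humidity the formula returns the air temperature itself, as the definition of the dew point requires.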
Vapor pressures of Solid Hydrates
Many solid salts incorporate water molecules into their crystal lattices; the resulting compounds are known as hydrates . These solid hydrates possess definite vapor pressures that correspond to an equilibrium between the hydrated and anhydrous compounds and water vapor. For example, strontium chloride hexahydrate:

\[ \ce{SrCl2 \cdot 6H2O(s) <=> SrCl2(s) + 6H2O(g)} \label{1}\]

The vapor pressure of this hydrate is 8.4 torr at 25°C. Only at this unique partial pressure of water vapor can the two solids coexist at 25°C. If the partial pressure of water in the air is greater than 8.4 torr, a sample of anhydrous SrCl2 will absorb moisture from the air and change into the hydrate. In fact, when the salt is fully hydrated, water accounts for about 40% of the hydrate's mass.
What will be the relative humidity of air in an enclosed vessel containing solid SrCl 2 ·6H 2 O at 25°C?
Solution
The relative humidity is the ratio of the vapor pressure of the hydrate (8.4 torr) to the vapor pressure of pure water at this temperature (23.8 torr): 8.4/23.8 = 0.35, so the relative humidity is 35%.
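Both numbers quoted for this hydrate can be checked in a short script. The vapor pressures are the text's values; the molar masses below are standard values, not given in this section:

```python
# Check the two numbers quoted for SrCl2·6H2O.

# 1. Relative humidity fixed by the hydrate at 25 °C:
p_hydrate = 8.4     # torr, vapor pressure of SrCl2·6H2O
p_water   = 23.8    # torr, vapor pressure of pure water at 25 °C
rh = p_hydrate / p_water
print(f"relative humidity: {rh:.0%}")        # 35%

# 2. Mass fraction of water in the fully hydrated salt
#    (molar masses are standard values, not from the text):
m_SrCl2 = 87.62 + 2 * 35.45                  # g/mol
m_H2O   = 18.02                              # g/mol
frac = 6 * m_H2O / (m_SrCl2 + 6 * m_H2O)
print(f"water mass fraction: {frac:.1%}")    # 40.5%, i.e. about 40%
```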
If the partial pressure of H 2 O in the air is less than the vapor pressure of the hydrate, the latter will tend to lose moisture and revert back to its anhydrous form. This process is sometimes accompanied by a breakup of the crystal into a powdery form, an effect known as efflorescence .
Condensation and boiling: Nucleation
Evaporation and boiling of a liquid, and condensation of a gas (vapor) are such ordinary parts of our daily life that we hardly give them a thought. Every time we boil water to make a pot of tea and see the cloud of steam above the teapot, we are observing this most common of all phase changes. How can we understand these changes in terms of vapor pressure?
Figure \(\PageIndex{6}\) plots vapor pressure as a function of temperature; the curve can represent water or any other liquid. When we say this is a vapor-pressure plot, we mean that each point on the curve represents a combination of temperature and vapor pressure at which the liquid (green) and the vapor (blue) can coexist. Thus at the normal boiling point, defined as the temperature at which the vapor pressure is 1 atm, the state of the system corresponds to the point labeled 3 .
Suppose that we select an arbitrary point 1 at a temperature and pressure at which only the gaseous state is stable. We then decrease the temperature so as to move the state point toward point 2 in the liquid region. When the state point falls on the vapor pressure line, the two phases can coexist and we would expect some liquid to condense. Once the state point moves to the left of the vapor pressure line, the substance will be entirely in the liquid phase. This is supposedly what happens when "steam" (actually tiny water drops) forms above a pot of boiling water.
The reverse process should work the same way: starting at a temperature in the liquid region, nothing happens until we reach the vapor-pressure line, at which point the liquid begins to change into vapor; at higher temperatures, only the vapor remains. This is the theory , but it is incomplete: in fact, a vapor will generally not condense to a liquid at the boiling point (also called the condensation point or dew point), and a liquid will generally not boil at its boiling point.
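The liquid-vapor line itself can be sketched with the Clausius-Clapeyron equation, anchored at the normal boiling point. The heat of vaporization used below (40.7 kJ/mol) is a typical textbook value, an assumption not given in this section:

```python
import math

# Clausius-Clapeyron sketch of water's liquid-vapor line, anchored
# at the normal boiling point (373.15 K, 1 atm).
R = 8.314          # J/(mol·K)
DH_VAP = 40.7e3    # J/mol, assumed heat of vaporization of water
T_B = 373.15       # K, normal boiling point

def vapor_pressure_atm(t_kelvin):
    """ln(P / 1 atm) = -(ΔHvap / R) * (1/T - 1/Tb)"""
    return math.exp(-DH_VAP / R * (1.0 / t_kelvin - 1.0 / T_B))

for t_c in (25, 50, 100):
    p = vapor_pressure_atm(t_c + 273.15)
    print(f"{t_c:3d} °C  {p:7.4f} atm  ({p * 760:6.1f} torr)")
```

Because ΔHvap actually varies somewhat with temperature, this constant-ΔHvap model slightly overestimates the room-temperature vapor pressure, but it reproduces the strong exponential rise of the curve.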
Bubbles and Drops
The reason for the discrepancy is that the vapor pressure, as we normally use the term and as it is depicted by the liquid-vapor line on the phase diagram, refers to the partial pressure of vapor in equilibrium with a liquid whose surface is reasonably flat, as it would be in a partially filled container. In a drop of liquid or in a bubble of vapor within a liquid, the surface of the liquid is not flat, but curved. For drops or bubbles that are of reasonable size, this does not make much difference, but these drops and bubbles must grow from smaller ones, and these from tinier ones still. Eventually, one gets down to the primordial drops and bubbles having only a few molecular dimensions, and it is here that we run into a problem: this is the problem of nucleation — the formation and growth of the first tiny drop (in the vapor) or of a bubble (in a liquid).
The vapor pressure of a liquid is determined by the attractive forces that act over a 180° solid angle at the surface of a liquid. In a very small drop, the liquid surface is curved in such a way that each molecule experiences fewer nearest-neighbor attractions than is the case for the bulk liquid. The outermost molecules of the liquid are bound to the droplet less tightly, and the drop has a larger vapor pressure than does the bulk liquid. If the vapor pressure of the drop is greater than the partial pressure of vapor in the gas phase, the drop will evaporate. Thus it is highly unlikely that a droplet will ever form within a vapor as it is cooled.
A bubble, like a drop, must start small and grow larger, but there is a difficulty here that is similar to the one with drops. A bubble is a hole in a liquid; molecules at the liquid boundary are curved inward, so that they experience nearest-neighbor attractions over a solid angle greater than 180°. As a consequence, the vapor pressure of the liquid facing into a bubble is always less than that of the bulk liquid at the same temperature. When the bulk liquid is at its boiling point (that is, when its vapor pressure is 1 atm), the pressure of the vapor within the bubble will be less than 1 atm, so the bubble will tend to collapse. Also, since the bubble is formed within the liquid, the hydrostatic pressure of the overlying liquid adds to this effect. For both of these reasons, a liquid will not boil until the temperature is raised slightly above the boiling point, a phenomenon known as superheating . Once boiling begins, it will continue at the liquid's proper boiling point.
Plots of the ratio of the actual vapor pressure P to P o (the vapor pressure of a flat surface) show how, in the case of water, the vapor pressure of a very small bubble or drop varies with its radius of curvature.
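The dependence described here is conventionally captured by the Kelvin equation. The sketch below assumes that relation applies, and uses assumed round room-temperature values for water's surface tension and molar volume (neither number is from the text):

```python
import math

# Kelvin-equation sketch: vapor-pressure ratio P/P0 over a curved
# water surface of radius r.  Positive r = convex droplet (P > P0);
# negative r = concave surface facing into a bubble (P < P0).
GAMMA = 0.072     # N/m, assumed surface tension of water at ~25 °C
V_M = 1.8e-5      # m^3/mol, assumed molar volume of liquid water
R, T = 8.314, 298.15

def kelvin_ratio(r_meters):
    return math.exp(2 * GAMMA * V_M / (r_meters * R * T))

for r in (1e-6, 1e-8, 1e-9):
    print(f"r = {r:.0e} m   P/P0 = {kelvin_ratio(r):.3f}")
```

The ratio is essentially 1 for micron-sized drops but grows rapidly as the radius approaches molecular dimensions, which is why the very first droplets are so unstable.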
Condensation of Liquids
If the tiniest of drops are destined to self-destruct, why do vapors ever condense (e.g., why does it rain)?
- If you cool a vapor in a container, condensation takes place not within the vapor itself, but on the inner surface of the container. What happens here is that intermolecular attractions between the solid surface will cause vapor molecules to adsorb to the surface and stabilize the incipient drop until it grows to a size at which it can be self-sustaining. This is the origin of the condensation on the outside of a cool drink, or of the dew that appears on the grass.
- In the case of the cloud of steam you see over the boiling water, the first few droplets form on tiny dust particles in the air — the ones you can see by scattered light when a sunbeam shines through a darkened room.
Clouds and precipitation

In the region of the atmosphere where rain forms, there are large numbers of solid particles, mostly of microscopic size. Some of these are particles of salt produced by evaporation of spray from the ocean surface. Many condensation nuclei are of biological origin; these include bacteria, spores, and particles of ammonium sulfate. There is volcanic and meteor dust, and of course there is dust and smoke due to the activities of humans. These particles tend to adsorb water vapor, and some may even dissolve to form a droplet of concentrated solution. In either case, the vapor pressure of the water is reduced below its equilibrium value, thus stabilizing the aggregate until it can grow to self-sustaining size and become fog, rain, or snow.
This, by the way, is why fog is an irritant to the nose and throat; each fog droplet carries within it a particle of dust or (in air polluted by the burning of sulfur-containing fossil fuels) a droplet of sulfuric acid, which it efficiently deposits on your sensitive mucous membranes. If you own a car that is left outside on a foggy night, you may have noticed how dirty the windshield is in the morning.
Superheating and boiling of liquids
What is the difference between the evaporation and boiling of a liquid? When a liquid evaporates at a temperature below its boiling point, the molecules that enter the vapor phase do so directly from the surface. When a liquid boils, bubbles of vapor form in the interior of the liquid, and are propelled to the surface by their lower density (buoyancy). As they rise, the diminishing hydrostatic pressure causes the bubbles to expand, reducing their density (and increasing their buoyancy) even more.
But as we explained above, getting that first bubble to form and survive is often sufficiently difficult that liquids commonly superheat before they begin to boil. If you have had experience in an organic chemistry laboratory, you probably know this as “bumping”, and have been taught to take precautions against it. In large quantities, superheated liquids can be very dangerous, because the introduction of an impurity (such as release of an air bubble from the container surface) or even a mechanical disturbance can trigger nucleation and cause boiling to occur suddenly and almost explosively (Video \(\PageIndex{1}\)).
Many people have been seriously burned after attempting to boil water in a microwave oven, or after having added powdered material such as instant coffee to such water. When water is heated on a stove, the bottom of the container superheats only the thin layer of water immediately in contact with it, producing localized "microexplosions" that you can hear just before regular smooth boiling begins; these bubbles quickly disperse and serve as nucleation centers for regular boiling. In a microwave oven, however, the energy is absorbed by the water itself, so that the entire bulk of the water can become superheated. If this happens, the slightest disturbance can produce an explosive flash into vapor.
Sublimation
Some solids have such high vapor pressures that heating leads to a substantial amount of direct vaporization even before the melting point is reached. This is the case for solid iodine, for example: I2, which melts at 115°C and boils at 183°C, is easily sublimed at temperatures around 100°C. Even ice has a measurable vapor pressure near its freezing point, as evidenced by the tendency of snow to evaporate in cold dry weather. Other solids have vapor pressures that overtake that of the liquid before melting can occur; such substances sublime without melting. A common example is solid carbon dioxide ("Dry Ice") at 1 atm (see the CO2 phase diagram below).
Phase Diagrams
The temperatures and pressures at which a given phase of a substance is stable (that is, from which the molecules have the lowest escaping tendency) is an important property of any substance. Because both the temperature and pressure are factors, it is customary to plot the regions of stability of the various phases in P - T coordinates, as in this generic phase diagram for a hypothetical substance.
Because pressures and temperatures can vary over very wide ranges, it is common practice to draw phase diagrams with non-linear or distorted coordinates. This enables us to express a lot of information in a compact way and to visualize changes that could not be represented on a linearly-scaled plot. It is important that you be able to interpret a phase diagram, or alternatively, construct a rough one when given the appropriate data. Take special note of the following points:
- The three colored regions on the diagram are the ranges of pressure and temperature at which the corresponding phase is the only stable one.
- The three lines that bound these regions define all values of ( P,T ) at which two phases can coexist (i.e., be in equilibrium). Notice that one of these lines is the vapor pressure curve of the liquid as described above. The " sublimation curve " is just a vapor pressure curve of the solid . The slope of the line depends on the difference in density of the two phases.
- In order to depict the important features of a phase diagram over the very wide range of pressures and temperatures they encompass, the axes are not usually drawn to scale, and are usually highly distorted. This is the reason that the "melting curve" looks like a straight line in most of these diagrams.
- Where the three named curves intersect, all three phases can coexist. This condition can only occur at a unique value of ( P,T ) known as the triple point . Since all three phases are in equilibrium at the triple point, their vapor pressures will be identical at this temperature.
- The line that separates the liquid and vapor regions ends at the critical point . At temperatures and pressures greater than the critical temperature and pressure, no separate liquid phase exists. We refer to this state simply as a fluid , although the term supercritical fluid is also commonly used.
The best way of making sure you understand a phase diagram is to imagine that you are starting at a certain temperature and pressure, and then change just one of these parameters, keeping the other constant. You will be traversing a horizontal or vertical path on the phase diagram, and there will be a change in state every time your path crosses a line. Of special importance is the horizontal path (shown by the blue line on the diagram above) corresponding to a pressure of 1 atmosphere; this line defines the normal melting and boiling temperatures of a substance.
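As a toy illustration of traversing the 1-atm horizontal path for water, a classifier that changes phase each time the path crosses a boundary line (a deliberately simplified sketch that ignores metastable states and treats the boundaries as exact):

```python
# Toy traversal of the 1-atm horizontal path on water's phase
# diagram: the stable phase changes each time the path crosses the
# melting curve (0 °C) or the vapor-pressure curve (100 °C).

def water_phase_at_1atm(t_celsius):
    if t_celsius < 0.0:
        return "solid"
    if t_celsius < 100.0:
        return "liquid"
    return "vapor"

for t in (-10, 25, 120):
    print(f"{t:4d} °C -> {water_phase_at_1atm(t)}")
```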
Notice the following features for the phase diagram of water (Figure \(\PageIndex{11}\)):
- The slope of the line 2 separating the solid and liquid regions is negative; this reflects the unusual property that the density of the liquid is greater than that of the solid, and it means that the melting point of ice decreases as the pressure increases. Thus if ice at 0°C is subjected to a high pressure, it will find itself above its melting point and it will melt. (Contrary to what is sometimes said, however, this is not the reason that ice melts under the pressure of ice skates or skis, providing a lubricating film which makes these modes of transportation so enjoyable. The melting in these cases arises from frictional heating).
- The dashed line 1 is the extension of the liquid vapor pressure line below the freezing point. This represents the vapor pressure of supercooled water — a metastable state of water which can temporarily exist down to about –20°C. (If you live in a region subject to "freezing rain", you will have encountered supercooled water!)
- 3 The triple point ( TP ) of water is just 0.0075° above the freezing point; only at this temperature and pressure can all three phases of water coexist indefinitely.
- 4 Above the critical point ( CP ) temperature of 374°C, no separate liquid phase of water exists.
Dry ice, solid carbon dioxide, is widely used as a refrigerant and the phase diagram in Figure \(\PageIndex{12}\) shows why it is “dry”. The triple point pressure is at 5.11 atm, so below this pressure, liquid CO 2 cannot exist; the solid can only sublime directly to vapor. Gaseous carbon dioxide at a partial pressure of 1 atm is in equilibrium with the solid at 195K (−79 °C, 1 ); this is the normal sublimation temperature of carbon dioxide. The surface temperature of dry ice will be slightly less than this, since the partial pressure of CO 2 in contact with the solid will usually be less than 1 atm. Notice also that the critical temperature of CO 2 is only 31°C. This means that on a very warm day, the CO 2 in a fire extinguisher will be entirely vaporized; the vessel must therefore be strong enough to withstand a pressure of 73 atm.
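The key fact from the CO2 diagram, that no liquid phase exists below the triple-point pressure, can be expressed as a one-line rule (a simplified sketch, considering only what happens as solid CO2 is warmed):

```python
# Simplified rule from the CO2 phase diagram: below the triple-point
# pressure of 5.11 atm no liquid phase exists, so warming solid CO2
# takes it directly to vapor (sublimation).
TRIPLE_POINT_ATM = 5.11

def co2_warming_behavior(p_atm):
    return "sublimes" if p_atm < TRIPLE_POINT_ATM else "melts first"

print(co2_warming_behavior(1.0))    # dry ice at ambient pressure
print(co2_warming_behavior(10.0))   # pressurized vessel
```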
This view of the carbon dioxide phase diagram employs a logarithmic pressure scale and thus encompasses a much wider range of pressures, revealing the upper boundary of the fluid phase (liquid and supercritical). Supercritical carbon dioxide (CO 2 above its critical temperature) possesses the solvent properties of a liquid and the penetrating properties of a gas; one major use is to remove caffeine from coffee beans.
Elemental iodine, I 2 , forms dark gray crystals that have an almost metallic appearance. It is often used in chemistry classes as an example of a solid that is easily sublimed; if you have seen such a demonstration or experimented with it in the lab, its phase diagram might be of interest.
The most notable feature of iodine's phase behavior is the small difference (less than a degree) between the temperatures of its triple point 1 and melting point 2 . Contrary to the impression many people have, there is nothing really special about iodine's tendency to sublime, which is shared by many molecular crystals including ice and naphthalene ("moth balls"). The vapor pressure of iodine at room temperature is really quite small, only about 0.3 torr (40 Pa). The fact that solid iodine has a strong odor and is surrounded by a purple vapor in a closed container is mainly a consequence of its strong ability to absorb green light (leaving blue and red, which combine to make purple) and the high sensitivity of our noses to its vapor.
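To put the 0.3-torr figure in perspective, the ideal gas law gives the molar concentration of I2 vapor in equilibrium with the solid (taking 25°C as an assumed "room temperature"):

```python
# Molar concentration of I2 vapor over solid iodine, from its
# ~0.3 torr room-temperature vapor pressure and the ideal gas law.
R = 8.314                 # J/(mol·K)
T = 298.15                # K, assumed room temperature
P = 0.3 * 133.322         # torr -> Pa (about 40 Pa, as in the text)
molar_conc = P / (R * T)  # mol/m^3
print(f"{molar_conc:.4f} mol/m^3")   # about 0.016 mol/m^3
```

A concentration this small is still plenty for the nose to detect, which is the point made in the text.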
Phase diagram of Sulfur
Sulfur exhibits a very complicated phase behavior that has puzzled chemists for over a century; what you see here is the greatly simplified phase diagram shown in most textbooks. The difficulty arises from the tendency of S 8 molecules to break up into chains (especially in the liquid above 159°C) or to rearrange into rings of various sizes (S 6 to S 20 ). Even the vapor can contain a mixture of species S 2 through S 10 .
The phase diagram of sulfur contains a new feature: there are two solid phases, rhombic and monoclinic . The names refer to the crystal structures in which the S 8 molecules arrange themselves. This gives rise to three triple points , indicated by the numbers on the diagram.
From the phase diagram in Figure \(\PageIndex{14}\), identify one combination of three phases of sulfur (\(S_8\)) that can never coexist. (Hint: there are several correct answers.)
When rhombic sulfur (the stable low-temperature phase) is heated slowly, it changes to the monoclinic form at 114°C, which then melts at 119°. But if the monoclinic form is heated rapidly the molecules do not have time to rearrange themselves, so the rhombic arrangement persists as a metastable phase until it melts at 119-120°. Formation of more than one solid phase is not uncommon — in fact, if one explores into the very high pressures (see below), it seems to be the rule.
Extreme Pressures and Temperatures
We tend to think of the properties of substances as they exist under the conditions we encounter in everyday life, forgetting that most of the matter that makes up our world is situated inside the Earth, where pressures are orders of magnitude higher (Figure \(\PageIndex{15}\)). Geochemists and planetary scientists need to know about the phase behavior of substances at high temperatures and pressures to develop useful models to test their theories about the structure and evolution of the Earth and of the solar system.
What ranges of temperatures and pressures are likely to be of interest — and more importantly, are experimentally accessible?
Figure \(\PageIndex{16}\) shows several scales (all of which, please note, are logarithmic) that cover respectively the temperature range for the universe; the low temperatures of importance to chemistry (note the green line indicating the temperatures at which liquid water can exist); the higher temperatures, showing the melting and boiling points of several elements for reference. The highest temperatures that can be produced in the laboratory are achieved (but only for very short time intervals) by light pulses from laser or synchrotron radiation.
The study of low temperatures is limited by the laws of physics that prohibit reaching absolute zero. But the fact that there is no limit to how close one can approach 0 K has encouraged a great deal of creative experimentation.
The study of matter at high pressures is not an easy task. The general techniques were pioneered between 1908 and 1960 by P. W. Bridgman of Harvard University, whose work won him the 1946 Nobel Prize in Physics. The more recent development of the diamond anvil cell has greatly extended the range of pressures attainable and the kinds of observations that can be made. Shock-wave techniques have made possible the production of short-lived pressures in the TPa range.
High pressure laboratory studies have revealed that many molecular substances such as hydrogen and water change to solid phases having melting points well above room temperature at very high pressures; there is a solid form of ice that remains frozen even at 100°C. At still higher pressures, many of these substances become metals. It is believed that much of the inner portion of the largest planets consists of metallic hydrogen — and, in fact, that all substances can become metallic at sufficiently high pressures.
Figure \(\PageIndex{19}\): Phase diagram of carbon: diamond and graphite
Graphite is the stable form of solid carbon at low pressures; diamond is only stable above about \(10^4\) atm. But once carbon is in this form, the rate at which diamond converts back to graphite is immeasurably slow under ordinary environmental conditions; there is simply not enough thermal energy available to break all of those carbon-carbon bonds. The diamonds we admire in jewelry and pay dearly for are therefore said to be metastable.
Synthetic Diamonds
To transform graphite into diamond at a reasonable rate, a pressure of 200,000 atm and a temperature of about 4000 K would be required. Since no apparatus can survive these conditions, the process, known as high-pressure high-temperature (HPHT) synthesis, is carried out commercially at 70,000 atm and 2300 K in a solution of molten nickel, which also acts as a catalyst. Traces of Ni in the finished product serve to distinguish synthetic diamonds from natural ones. However, most synthetic diamonds are too small (only a few millimeters) and too flawed for gem quality, and are used mainly to fabricate industrial grinding and cutting tools.
Figure \(\PageIndex{20}\): Phase diagram of carbon: diamond and graphite
More recently, thin diamond films have become important for engineering applications and semiconductor fabrication. These are most commonly made by condensation of gaseous carbon onto a suitable substrate (chemical vapor deposition, CVD). The conditions under which synthetic diamonds are made are depicted on the above phase diagram from Bristol University.
Video \(\PageIndex{2}\) : Chemist Roy Gat explains how he uses phase diagrams to synthesize synthetic diamonds at low pressure and temperatures.
Helium Phase weirdness
Helium is unique in that quantum phenomena, which normally apply only to tiny objects such as atoms and electrons, extend to and dominate its macroscopic properties. A glance at the phase diagram of \(\ce{^4He}\) reveals some of this quantum weirdness. The main points to notice are:
- Helium can be frozen only at high pressures;
- The solid and gas cannot coexist (be in equilibrium) under any conditions;
- There are two liquid phases , helium I (an ordinary liquid) and helium II (a highly unusual ordered liquid );
- The λ ( lambda ) line represents the ( P,T ) values at which the two phases can coexist;
- Helium-II behaves as a superfluid — essentially a quantum liquid .
Why can't liquid helium freeze at low pressures? The low mass of the He atoms and their close confinement in the solid give them a very high zero-point energy (the Heisenberg uncertainty principle in action!) that allows them to vibrate with such amplitude that they overcome the dispersion forces that would otherwise hold the solid together, keeping the atoms too far apart to form a solid. Only by applying a high pressure (25 atm) can this effect be overcome.
Helium-II, quantum liquids and superfluidity
We usually need quantum theory only to describe the properties of tiny objects such as electrons, but with liquid He-II, it extends to the macroscopic scale of the bulk liquid. \(\ce{^4He}\) atoms (99.99+ percent of natural helium) are bosons , which means that at low temperatures, they can all occupy the same quantum state (all other normal atoms are fermions and are subject to the Pauli exclusion principle ).
All together, now
Objects that occupy the same quantum state all possess the same momentum. Thus when one atom moves, they all move together. In a sense, this means that the entire bulk of liquid He-II acts as a single entity. This property, known as superfluidity , gives rise to a number of remarkable effects, most notably:
- The liquid can flow through a narrow channel without friction, up to a critical velocity that depends on the ratio of flow rate to channel width;
- When placed in a container, the liquid forms a film that climbs up the walls and down the outside, seeking the same level outside the container as inside, if this is possible;
- A small molecule dissolved in helium-II behaves the same as it would in a vacuum.
In liquid helium-II, only about 10% of the atoms are in such a state, but it is enough to give the liquid some of the weird properties of a quantum liquid.
\(\ce{^3He}\) exhibits similar properties, but owing to its low natural abundance, it was not extensively studied until the 1940s, when large amounts became available as a byproduct of nuclear weapons manufacture. Its superfluidity, which sets in at about 2 mK, was discovered in 1972 (work recognized by the 1996 Nobel Prize in Physics). Although \(\ce{^3He}\) atoms are fermions, only those that pair up (and thus assume the properties of bosons) give rise to superfluidity.
Water at high pressures
Water, like most other substances, exhibits many solid forms at higher pressures (Figure \(\PageIndex{22}\)). So far, fifteen distinct ice phases have been identified; these are designated by Roman numerals ice-I through ice-XV. Ice-I can exist in two modifications: the crystal lattice of ice-Ic is cubic, while that of ice-Ih is hexagonal. The latter corresponds to the ordinary ice that we all know. It is interesting to note that several high-pressure phases of ice can exist at temperatures in excess of 100°C.
(The material above is from "7.5: Changes of State" by Stephen Lower, Chem1, licensed under CC BY 3.0.)

7.6: Introduction to Crystals
- Identify the three kinds of rotational symmetry axes of a cube.
- State what is meant by a crystal's habit , and identify some factors that might affect it.
- Explain why the angles between adjacent faces (of even a broken crystal) tend to have the same small set of values.
- What is a unit cell , and how does it relate to a crystal lattice?
- What are Bravais lattices , and why are they important?
- Find the Miller index of a line or plane in a unit cell, or sketch the line or plane having a given Miller index.
The delicately faceted surfaces of large crystals that occur in nature have always been a source of fascination and delight. In some ways they seem to represent a degree of perfection that is not apparent in other forms of matter. But in the realm of pure solid substances, crystals are the rule rather than the exception, although this may not be apparent unless they are observed under a hand-lens or a microscope. It is remarkable that the visual examination of crystals was able to establish a fairly mature science of crystallography (applied mainly to the study of minerals) by the end of the 19th century, even before the atomic theory of matter had been universally accepted. Today this aspect of crystallography is of importance not only to chemists and physicists, but also to geologists, amateur mineralogists and "rock-hounds" who maintain some of the best Web resources on crystals. In this lesson we will see how the external shape of a crystal can reveal much about the underlying arrangement of its constituent atoms, ions, or molecules.
The first thing we notice about a crystal is the presence of planes — called faces — which constitute the external boundaries of the solid. Of course, any solid, including non-crystalline glass, can be carved, molded or machined to display planar faces; examples of these can be found in any "dollar store" display of costume jewelry. What distinguishes and defines a true crystal is that these faces develop spontaneously and naturally as the solid forms from a melt or from solution. The multiple faces invariably display certain geometrical relationships to one another, resulting in a symmetry that attracts our attention and delights the eye.
Symmetry elements
One of the most apparent elements of this geometrical regularity are the sets of parallel faces that many crystals display. Nowhere is this more apparent than in the cubes that develop when sodium chloride crystallizes from solution. We usually think of a cubic shape in terms of the equality of its edge lengths and the 90° angles between its sides, but there is a more fundamental way of classifying shapes that chemists find very useful. This is to look at what geometric transformations (such as rotations around an axis) we can perform that leave the appearance unchanged.
Cubic symmetry
For example, you can rotate a cube 90° around an axis perpendicular to any pair of its six faces without making any apparent change to it. We say that the cube possesses three mutually perpendicular four-fold rotational axes , abbreviated C 4 axes. But if you think about it, a cube can also be rotated around an axis that extends between opposite corners; in this case, it takes three 120° rotations to go through a complete circle, so these axes (also four in number) are three-fold or C 3 axes. And finally, there are two-fold (C 2 ) axes that pass diagonally through the centers of the six pairs of opposite edges.
In addition, there are imaginary symmetry planes that mirror the portions of the cube that lie on either side of them. Three of these are parallel to the three major axes of the crystal, and an additional six pass diagonally through opposite edges.
All told, there are 13 rotational axes and 9 mirror planes (only a few of which are shown above) that define cubic symmetry. Why is this important? Although anyone can recognize a cube when they see one, it turns out that many crystals, both natural and synthetic, are for one reason or another unable to develop all of their faces equally. Thus a crystal that forms on the bottom of a container will be unable to grow any faces that project downward for the simple reason that there is no supply of ions or molecules from that direction. The same effect occurs when a mineral crystal tries to grow in contact with other solids. Finally, the presence of certain impurities that selectively adsorb to one or more faces can block the addition of more material to them, thus either completely inhibiting their formation or forcing them to grow at slower rates. These alternative shapes that can develop from a single basic crystal type are known as habits .
Crystal habits
Sodium chloride grown from pure aqueous solution forms simple cubes, but the addition of various impurities can result in habits that can be regarded as cubes that have been truncated along planes normal to some of the symmetry axes. (The same effects can sometimes be seen as a crystal slowly dissolves and material is released more rapidly from some directions than others.)
In this example, the perfect cube 1 develops triangular faces at the corners 2 . If these enlarge beyond their maximum size 3 , the triangular faces meet in a new set of edges that are hexagonal 4 . Eventually we are left with the eight faces of what is obviously a regular octahedron 5 . One might think that these five shapes bear no relationship to one another, but in fact they all possess the same set of symmetry elements as the simple cube and are thus various habits of the same underlying cubic structure and belong to the cubic crystal system described further below.
Even though a given crystal may be distorted or broken, the angles between corresponding faces remain the same. Thus you can crush a crystal underfoot or break it up with a hammer, but you will always find that the fragments possess a limited set of interfacial angles.
This fundamental law, discovered by Nicholas Steno in 1669, was a key development in crystallography. About 100 years later, a protractor-like device (the contact goniometer ) was invented to enable more accurate measurements than the rather crude ones that had formerly been traced out on paper.
Cleavage Planes
When a crystal is broken by applying a force in certain directions (as opposed to being pulverized by a hammer) it will often be seen to break cleanly into two pieces along what are known as cleavage planes. The new faces thus formed always correspond to the symmetry planes associated with a particular crystal type, and of course make constant angles with any other faces that may be present.
Scientific crystallography began with an accident
Cleavage planes were first described in the late 17 th century, but nothing much was thought about their significance until about a hundred years later when the Abbé Haüy accidentally dropped a friend's sample of calcite and noticed how cleanly it broke. Further experimentation showed that other calcite crystals, even ones of different initial shapes (habits), displayed similar rhombohedral shapes upon cleavage, and that these in turn produced similar shapes when they were cleaved. This led Haüy to suggest that continued cleavages would ultimately lead to the smallest possible unit which would be the fundamental building block of the crystal. (Remember that the atomic theory of matter had not developed at this time.)
Haüy's elaborately drawn figures (published in 1784) showed how external faces of a crystal could be produced by stacking the units in various ways. For example, by omitting rows from a cubic stack of primal cubelets, one could arrive at the various stages between the cube and the octahedra for sodium chloride that we saw earlier on this page.
The modern interpretation of these observations replaces Haüy's primal shapes with atoms or molecules, or more generally with points in space that define the possible locations of atoms or molecules. It is easy to see how plane faces can develop along some directions and not others if one assumes that the new faces must follow a linear sequence of points.
The five two-dimensional lattice types: square, parallelogram (oblique), rectangular, rhombic (centered-rectangular: x = y , with angles neither 60° nor 90°), and hexagonal.
Although everyone has seen and admired the huge variety of patterns on printed fabrics or wallpapers, few are aware that these are all based on one of five types of two-dimensional "unit cells" that form the basis for these infinitely-extendable patterns. One of the most remarkable uses of this principle is in the work of the Dutch artist Maurits Escher (1898-1972).
Shown below are two-dimensional views of the unit cells for two very common types of crystal lattices, one having cubic symmetry and the other being hexagonal. Although we could use a hexagon for the second of these lattices, the rhombus is preferred because it is simpler.
Notice that in both of these lattices, the corners of the unit cells are centered on a lattice point. This means that an atom or molecule located on this point in a real crystal lattice is shared with its neighboring cells. As is shown more clearly here for a two-dimensional square-packed lattice, a single unit cell can claim "ownership" of only one-quarter of each molecule, and thus "contains" 4 × ¼ = 1 molecule.
The unit cell of the graphite form of carbon is also a rhombus, in keeping with the hexagonal symmetry of this arrangement. Notice that to generate this structure from the unit cell, we need to shift the cell in both the x - and y - directions in order to leave empty spaces at the correct spots. We could alternatively use regular hexagons as the unit cells, but the x + y shifts would still be required, so the simpler rhombus is usually preferred.
This image nicely illustrates the relations between the unit cell, the lattice structure, and the actual packing of atoms in a typical crystal.
Crystal systems and Bravais lattices
We saw above that five basic cell shapes can reproduce any design motif in two dimensions. If we go to the three-dimensional world of crystals, there are just seven possible basic lattice types, known as crystal systems , that can produce an infinite lattice by successive translations in three-dimensional space so that each lattice point has an identical environment. Each system is defined by the relations between the axis lengths and angles of its unit cell. For example, if the three edge lengths are identical and all corner angles are 90°, a crystal belongs to the cubic system.
The simplest possible cube is defined by the eight lattice points at its corner, but variants are also possible in which additional lattice points exist in the faces ("face-centered cubic") or in the center of the cube ("body-centered cubic"). If variants of this kind are taken into account, the total number of possible lattices is fourteen; these are known as the fourteen Bravais lattices .
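The "ownership" bookkeeping behind these variants (a corner lattice point is shared by eight cells, a face point by two, a body-center point by one) can be sketched as a tiny helper function; the function name is my own, not from the text:

```python
def atoms_per_cell(corners=0, edges=0, faces=0, body=0):
    """Lattice points 'owned' by one unit cell: a corner point is shared by
    8 cells, an edge point by 4, a face point by 2, a body point by 1."""
    return corners / 8 + edges / 4 + faces / 2 + body

print(atoms_per_cell(corners=8))           # simple (primitive) cubic: 1.0
print(atoms_per_cell(corners=8, body=1))   # body-centered cubic: 2.0
print(atoms_per_cell(corners=8, faces=6))  # face-centered cubic: 4.0
```

This is the three-dimensional analogue of the 4 × ¼ = 1 count made earlier for the two-dimensional square lattice.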
| System | Axes | Description |
|---|---|---|
| cubic | a = b = c | The F cell corresponds to closest cubical packing , a very common and important structure. |
| tetragonal | a = b ≠ c | A cube that has been extended in one direction, creating a unique c -axis. An F cell would simply be a network of joined I cells. |
| orthorhombic | a ≠ b ≠ c | Three unequal axes at right angles. The "C" form has atoms in the two faces that cut the c -axis. |
| hexagonal | a = b ≠ c | Just as in the 2-dimensional examples given above, the unit cell of the hexagonal lattice has a rhombic cross-section; the entire hexagonal unit is built from three of these rhombic prisms. |
| trigonal (rhombohedral) | a = b = c | Think of this as a cube that has been skewed or distorted to one side so that opposite faces remain parallel to each other. This can also be regarded as a special case of the hexagonal system, and is often classified as such by U.S. mineralogists who recognize only six crystal systems. The rhombohedral form of the hexagonal system is difficult to visualize. |
| monoclinic | a ≠ b ≠ c | Two 90° angles, one > 90°, with all sides of different lengths. A C cell (also seen in the orthorhombic class) has additional points in the center of each end. Monoclinic I and F cells can be constructed from C cells. |
| triclinic | a ≠ b ≠ c | The most generalized of the crystal systems, with all lengths and angles unequal, and no right angles. |
Notes on the above diagrams:
- The labels a,b,c along the unit cell axes represent the dimensions of the unit cell. Visual examination of a crystal does not allow us to determine their actual values, but merely to know whether any two (or all three) are the same.
- When a = b , both axes may be given " a " labels, since neither is unique; similarly, in the cube all three axes are a axes.
- The angles α, β and γ are those between the b-c , a-c , and a-b axes, respectively.
Lines and planes in unit cells: the Miller index
In any kind of repeating pattern, it is useful to have a convenient way of specifying the orientation of elements relative to the unit cell. This is done by assigning to each such element a set of integer numbers known as its Miller index .
Indexing lines in two-dimensions
To understand indexing, it will be easier to begin with a unit cell plane that we are viewing from above, along the [invisible] z -axis. The drawing shows such a plane with three lines crossing it at various slopes. The index of each line is found by first determining the points where it intersects the x and y axes as fractions of the unit cell parameters a and b . Thus in the above example:
- Line 1 starts at the origin and extends to the lower right-hand corner, which corresponds to intersections of the x and y axes at one unit cell length each — that is, at 1× a and 1× b . We can abbreviate this set of intersections as [1,1].
- Line 2 intersects the x axis at one-half the unit cell distance a , or at a /2. This line is parallel to the y axis, so it never intersects it; this is mathematically the same as saying that it intersects it at infinity, or in terms of unit cell increments, at ∞× b . We can describe this line in terms of the unit cell intercepts by the pair of values [½,∞].
- Line 3 starts at the upper right corner of the cell which corresponds to the coordinates (0,1), but this is equivalent to the origin (0,0) of the neighboring unit cell on the right. So in terms of our coordinate system (which repeats for each unit cell), this line extends in the negative- y direction and intersects this axis at – b /2. The x-intercept is a . These intercepts correspond to [1,–½].
The Miller indices of the lines are given by the reciprocals of these values:
| Line 1 [1,1] → (11) | Line 2 [½,∞] → (20) | Line 3 [1,–½] → (12̄) |

Miller indices are written in parentheses with no spaces between the numbers. Negative values are indicated by an overbar, as in (12̄).

Indexing planes in three-dimensions
We proceed in exactly the same way, except that we now have 3-digit Miller indices corresponding to the axes a, b and c .
It is important to note that multiple parallel planes that repeat at the same interval have identical Miller indices. This simply reflects the fact that we can repeat the coordinate axes at any regular interval.
Identifying crystal faces
We mentioned previously that the plane faces of crystals are their most important visually-distinctive property, so it is important to have a convenient way of referring to any given face. First, we define a set of reference directions ( x,y,z ) which are known as the crystallographic axes . In most cases these axes correspond to directions that are fairly apparent on visual examination of one or more crystals of a given kind. They are parallel to actual or possible edges of the crystal, and they are not necessarily orthogonal. We now know, as Haüy first suggested, that these directions correspond to rows of lattice points in the underlying structure of the crystal.
We also define three lattice parameters ( a,b,c ) which mark out the boundaries of the unit cell along the crystallographic axes. The index of a particular face is determined by the fractional values of (a,b,c) at which the face intersects the axes ( x,y,z ). Study the examples shown below for three different habits of a cubic lattice.
Below is a more complicated example of one particular habit of an orthorhombic crystal. The figure at the right shows how the (113) face is indexed.
In this case, the plane at the top of the crystal is extended downward to the ( x,y ) plane. This extended plane cuts the (x,y,z ) axes at (2 a , 2 b , 2/3 c ). The corresponding inverses would be (½,½,3/2). In order to make them into proper Miller indices (which should always be integers) we multiply everything by 2, yielding (113).
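The inversion-and-clearing arithmetic above (both for the 2-D line examples and for the (113) face) can be captured in a short helper. The function name `miller` and the use of Python's `Fraction` type are illustrative choices of mine, not from the text:

```python
from fractions import Fraction
from functools import reduce
from math import gcd, inf

def miller(*intercepts):
    """Convert axis intercepts (in unit-cell units; inf means the plane is
    parallel to that axis) into an integer Miller index: take reciprocals,
    then multiply through by the least common denominator."""
    recips = [Fraction(0) if x == inf else 1 / Fraction(x) for x in intercepts]
    common = reduce(lambda a, b: a * b // gcd(a, b),
                    (f.denominator for f in recips), 1)
    return tuple(int(f * common) for f in recips)

print(miller(2, 2, Fraction(2, 3)))   # the orthorhombic face above: (1, 1, 3)
print(miller(Fraction(1, 2), inf))    # Line 2 in 2-D: (2, 0)
print(miller(1, Fraction(-1, 2)))     # Line 3 in 2-D: (1, -2), written (12-bar)
```

Note that an intercept at infinity correctly contributes a zero index, since 1/∞ = 0.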
Why do Miller indices use mostly small numbers?
It is remarkable that the faces that bound real crystals generally have small Miller indices. The low values for the indices suggest that a given lattice plane has a high density of lattice points per unit area, a logical consequence of each molecule being surrounded and held by its closely-packed neighbors. In the 2-dimensional projection below, compare the facial lattice-point density in the (11) plane with that of the (31) plane.
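For a square lattice the density comparison can be made quantitative: along a row with index (hk) (h and k coprime), consecutive lattice points differ by the vector (k, –h), so their spacing is a·√(h² + k²) and low-index rows pack the most points per unit length. A quick check (the helper function is my own sketch):

```python
from math import sqrt

def row_spacing(h, k, a=1.0):
    """Spacing of lattice points along an (hk) row of a square lattice with
    parameter a (h, k assumed coprime): consecutive points differ by the
    vector (k, -h), of length a*sqrt(h**2 + k**2)."""
    return a * sqrt(h * h + k * k)

# (11) rows carry a lattice point every ~1.41a; (31) rows only every ~3.16a,
# so the (11) plane presents the higher density of lattice points.
print(round(row_spacing(1, 1), 2), round(row_spacing(3, 1), 2))  # 1.41 3.16
```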
Crystals with a single long unit-cell axis tend to form planes with the long axis normal to the plane, so that the major faces of the crystal are planes containing the short-axis translations. Similarly, crystals with a single, short unit-cell axis tend to be needles. The main faces, on the sides of the needles, contain the short lattice translation — a high density of lattice points. In general, it is found that crystals have linear dimensions that mirror the reciprocals of the lattice parameters.
Factors affecting crystal growth habits
Faces having a lower density of lattice points (as in the (31) face shown above) can acquire new layers more rapidly, and thus grow more rapidly than faces having a high lattice-point density. The faces that can potentially develop in a crystal are determined entirely by the symmetry properties of the underlying lattice. But the faces that actually develop under specific conditions — and thus the overall shape of the crystal — is determined by the relative rates of growth of the various faces. The slower the growth rate, the larger the face.
This relation can be understood by noting that faces that grow normal to shorter unit cell axes (as in the needle-shaped crystal shown above) present a larger density of lattice points to the surface (that is, more points per unit surface area.) This means that more time is required for diffusion of enough new particles to build out a new layer on such a surface.
An interesting experiment is to grind a large crystal of salt into a spherical shape and immerse it in a saturated solution of sodium chloride. At first, the most disturbed and exposed parts on the surface dissolve, revealing a large variety of underlying plane faces. As growth resumes, the smaller of these are rapidly replaced by larger faces. Eventually, the fast-growing faces eliminate themselves and the high-lattice point density faces that correspond to the sides of the cube win out.
In addition to these structural effects, the conditions under which a crystal is grown can affect its habit. Temperature, degree of supersaturation, nature of the solvent all have their effects, and these may affect the growth of different faces in different ways. The presence of impurities in the solution can radically alter the habit of a crystal, as seen in the following table for the growth of sodium chloride:
| no impurity | Fe(CN) 6 4– | formamide | Pb 2 + , Cd 2 + | polyvinyl alcohol |
|---|---|---|---|---|
| cubes | dendrites | large crystals | needles |
These effects presumably come about because these substances preferentially adsorb to certain faces, impeding their growth.

(Source: Chem1 Virtual Textbook, "Introduction to Crystals," by Stephen Lower, CC BY 3.0.)
7.7: Ionic and Ion-Derived Solids
Make sure you thoroughly understand the following essential ideas which have been presented above.
- What is an ionic solid, what are its typical physical properties, and what kinds of elements does it contain?
- Define the lattice energy of an ionic solid in terms of the energetic properties of its component elements.
- Make a rough sketch that describes the structure of solid sodium chloride.
- Describe the role that the relative ionic radii play in contributing to the stability of an ionic solid.
- Give examples of some solids that can form when ionic solutions are evaporated, but which do not fall into the category of "ionic" solids.
In this section we deal mainly with a very small but important class of solids that are commonly regarded as composed of ions. We will see how the relative sizes of the ions determine the energetics of such compounds. And finally, we will point out that not all solids that are formally derived from ions can really be considered "ionic" at all.
Ionic Solids
The idealized ionic solid consists of two interpenetrating lattices of oppositely-charged point charges that are held in place by a balance of coulombic forces. But because real ions occupy space, no such "perfect" ionic solid exists in nature. Nevertheless, this model serves as a useful starting point for understanding the structure and properties of a small group of compounds between elements having large differences in electronegativity.
Chemists usually apply the term "ionic solid" to binary compounds of the metallic elements of Groups 1-2 with one of the halogen elements or oxygen. As can be seen from the diagram, the differences in electronegativity between the elements of Groups 1-2 and those of Group 17 (as well as oxygen in Group 16) are sufficiently great that the binding in these solids is usually dominated by Coulombic forces and the crystals can be regarded as built up by aggregation of oppositely-charged ions.
Sodium Chloride (rock-salt) Structure
The most well known ionic solid is sodium chloride, also known by its geological names as rock-salt or halite . We can look at this compound in both structural and energetic terms.
Structurally, each ion in sodium chloride is surrounded and held in tension by six neighboring ions of opposite charge. The resulting crystal lattice is of a type known as simple cubic , meaning that the lattice points are equally spaced in all three dimensions and all cell angles are 90°.
In Figure \(\PageIndex{2}\), we have drawn two imaginary octahedra centered on ions of different kinds and extending partially into regions outside of the diagram. (We could equally well have drawn them at any of the lattice points, but show only two in order to reduce clutter.) Our object in doing this is to show that each ion is surrounded by six other ions of opposite charge; this is known as (6,6) coordination . Another way of stating this is that each ion resides in an octahedral hole within the cubic lattice.
How can one sodium ion surrounded by six chloride ions (or vice versa ) be consistent with the simplest formula NaCl? The answer is that each of those six chloride ions also sits at the center of its own octahedron defined by another six sodium ions. You might think that this corresponds to Na 6 Cl 6 , but note that the central sodium ion shown in the diagram can claim only a one-sixth share of each of its chloride ion neighbors, so the formula NaCl is not just the simplest formula, but correctly reflects the 1:1 stoichiometry of the compound. But of course, as in all ionic structures, there are no distinguishable "molecular" units that correspond to the NaCl simplest formula. Bear in mind that the large amount of empty space in diagrams depicting a crystal lattice structure can be misleading, and that the ions are really in direct contact with each other to the extent that this is geometrically possible.
Sodium Chloride Energetics
Sodium chloride, like virtually all salts, is a more energetically favored configuration of sodium and chlorine than are these elements themselves; in other words, the reaction
\[Na_{(s)} + ½Cl_{2(g)} \rightarrow NaCl_{(s)}\]
is accompanied by a release of energy in the form of heat. How much heat, and why? To help us understand, we can imagine the formation of one mole of sodium chloride from its elements proceeding in these hypothetical steps in which we show the energies explicitly:
Step 1: Atomization of sodium (breaking one mole of metallic sodium into isolated sodium atoms)
\[\ce{ Na(s) + 108 kJ → Na(g)} \label{Step1}\]
Step 2: Same thing with chlorine. This requires more energy because it involves breaking a covalent bond.
\[\ce{ ½Cl2(g) + 127\, kJ → Cl(g)} \label{Step2}\]
Step 3: We strip an electron from one mole of sodium atoms (this costs a lot of energy!)
\[\ce{ Na(g) + 496\, kJ → Na^{+}(g) + e^{–}} \label{Step3}\]
Step 4: Feeding these electrons to the chlorine atoms gives most of this energy back.
\[\ce{ Cl(g) + e^{–} → Cl^{–}(g) + 348\, kJ}\label{Step4}\]
Step 5: Finally, we bring one mole of the ions together to make the crystal lattice — with a huge release of energy.
\[\ce{ Na^{+}(g) + Cl^{–}(g) → NaCl(s) + 787\, kJ} \label{Step5}\]
If we add all of these equations together, we get
\[\ce{Na(s) + 1/2Cl2(g) → NaCl(s)} + 404\; kJ\]
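The bookkeeping in Steps 1–5 is easy to verify directly (values in kJ per mole, with positive numbers meaning energy absorbed, as in the equations above):

```python
# Step enthalpies from the five equations above, kJ per mole
# (positive = energy absorbed by the system)
steps = {
    "atomize Na(s)":                +108,
    "atomize 1/2 Cl2(g)":           +127,
    "ionize Na(g)":                 +496,
    "electron attachment to Cl(g)": -348,
    "assemble the NaCl(s) lattice": -787,
}
total = sum(steps.values())
print(total)  # -404: the overall formation of NaCl(s) releases 404 kJ/mol
```

The negative total confirms that the lattice-formation step dominates and that the overall reaction is exothermic.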
In other words, the formation of solid sodium chloride from its elements is highly exothermic . As this energy is released in the form of heat, it spreads out into the environment and will remain unavailable to push the reaction in reverse. We express this by saying that "sodium chloride is more stable than its elements".
Looking at the equations above, you can see that Equation \ref{Step5} constitutes the big payoff in energy. The 787 kJ/mol noted there is known as the NaCl lattice energy . Its large magnitude should be no surprise, given the strength of the coulombic force between ions of opposite charge.
It turns out that it is the lattice energy that renders the gift of stability to all ionic solids. Note that this lattice energy, while due principally to coulombic attraction between each ion and its six nearest neighbors, is really the sum of all the interactions within the crystal. Lattice energies cannot be measured directly, but they can be estimated fairly well from the energies of the other processes described in the steps immediately above.
How Geometry and Periodic Properties Interact
The most energetically stable arrangements of solids made up of identical molecular units (as in the noble gas elements and pure metals) are generally those in which there is a minimum of empty space; these are known as close-packed structures, and there are several kinds. In the case of ionic solids of even the simplest 1:1 stoichiometry, the positive and negative ions usually differ so much in size that packing is often much less efficient. This may cause the solid to assume lattice geometries that differ from the one illustrated above for sodium chloride.
By way of illustration, consider the structure of cesium chloride (the spelling cæsium is also used), CsCl. The radius of the Cs + ion is 168 pm compared to 98 pm for Na + and cannot possibly fit into the octahedral hole of a simple cubic lattice of chloride ions. The CsCl lattice therefore assumes a different arrangement.
Figure \(\PageIndex{3}\) focuses on two of these cubic lattice elements whose tops and bottoms are shaded for clarity. It should be easy to see that each cesium ion now has eight nearest-neighbor chloride ions. Each chloride ion is also surrounded by eight cesium ions, so all the lattice points are still geometrically equivalent. We therefore describe this structure as having (8,8) coordination .
The two kinds of lattice arrangements exemplified by NaCl ("rock salt") and CsCl are found in a large number of other 1:1 ionic solids, and these names are used generically to describe the structures of these other compounds. There are of course many other fundamental lattice arrangements (not all of them cubic), but the two we have described here are sufficient to illustrate the point that the radius ratio (the ratio of the radii of the positive to the negative ion) plays an important role in the structures of simple ionic solids.
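The radius-ratio idea can be sketched as a simple classifier. The thresholds 0.414 (= √2 − 1) and 0.732 (= √3 − 1) are the standard geometric limits for octahedral and cubic coordination; the function itself is an illustrative sketch of the (simplified) rule, not something given in the text:

```python
def coordination_from_radius_ratio(r_cation, r_anion):
    """Predicted coordination number from the simplified radius-ratio rule:
    >= 0.732 -> 8 (CsCl-type), >= 0.414 -> 6 (NaCl-type), else 4."""
    ratio = r_cation / r_anion
    if ratio >= 0.732:
        return 8   # cubic (8,8) coordination
    if ratio >= 0.414:
        return 6   # octahedral (6,6) coordination
    return 4       # tetrahedral coordination

# Ionic radii in pm, using the values quoted above and Cl- at about 181 pm
print(coordination_from_radius_ratio(98, 181))   # Na+: 6
print(coordination_from_radius_ratio(168, 181))  # Cs+: 8
```

This reproduces the observed structures: Na+/Cl– (ratio ≈ 0.54) gives the rock-salt arrangement, while Cs+/Cl– (ratio ≈ 0.93) gives the CsCl arrangement.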
The Alkali Halides
The interaction of the many atomic properties that influence ionic binding are nicely illustrated by looking at a series of alkali halides, especially those involving extreme differences in atomic radii. The latter are all drawn to the same scale. On the energetic plots at the right, the lattice energies are shown in green. We will start with the one you already know very well.
Sodium chloride, NaCl ("rock-salt"): mp/bp 801/1413 °C; coordination (6,6).

Lithium fluoride, LiF: mp/bp 846/1676 °C; rock-salt lattice structure (6,6). Tiny-tiny makes strong-strong! This is the most "ionic" of the alkali halides, with the largest lattice energy and highest melting and boiling points. The small size of these ions (and consequent high charge densities) together with the large electronegativity difference between the two elements places a lot of electronic charge between the atoms. Even in this highly ionic solid, the electron that is "lost" by the lithium atom turns out to be closer to the Li nucleus than when it resides in the 2 s shell of the neutral atom.

Cesium fluoride, CsF: mp/bp 703/1231 °C; (8,8) coordination. With five shells of electrons shielding its nucleus, the Cs + ion with its low charge density resembles a big puff-ball which can be distorted by the highly polarizing fluoride ion. The resulting ion-induced dipoles account for much of the lattice energy here. The reverse of this would be a tiny metal ion trying to hold onto four relatively huge iodide ions, as in lithium iodide.

Lithium iodide, LiI: mp/bp 745/1410 °C. Negative ions can make even bigger puff-balls. The tiny lithium ion can't get very close to any of the iodides to generate very strong coulombic binding, but does polarize them to create an ion-induced dipole component. It does not help that the negative ions are in contact with each other. The structural geometry is the same (6,6) coordination as NaCl.

Cesium iodide, CsI: mp/bp 626/1280 °C. Even with the (8,8) coordination afforded by the CsCl structure, this is a pretty sorry combination owing to the low charge densities. The weakness of coulombic forces relative to van der Waals interactions makes this the least "ionic" of all the alkali halide solids.
Conclusion: Many of the alkali halide solids are not all that "ionic" in the sense that coulombic forces are the predominant actors; in many, such as the CsI illustrated above, ion-induced dipole forces are more important.
Some Properties of Ionic Solids
As noted above, ionic solids are generally hard and brittle. Both of these properties reflect the strength of the coulombic force. Hardness measures resistance to deformation . Because the ions are tightly bound to their oppositely-charged neighbors, a mechanical force exerted on one part of the solid is resisted by the electrostatic forces operating over an extended volume of the crystal.
But by applying sufficient force, one layer of ions can be made to slip over another; this is the origin of brittleness . This slippage quickly propagates along a plane of the crystal (more readily in some directions than in others), weakening their attraction and leading to physical cleavage . Because the "ions" in ionic solids lack mobility , the solids themselves are electrical insulators .
Not all ion-derived solids are "ionic".
Even within the alkali halides, the role of coulombic attraction diminishes as the ions become larger and more polarizable or differ greatly in radii. This is especially true of the anions, which tend to be larger and whose electron clouds are more easily distorted. In solids composed of polyatomic ions such as (NH 4 ) 2 SO 4 , Sr(ClO 4 ) 2 , and (NH 4 ) 2 CO 3 , ion-dipole and ion-induced dipole forces may actually be stronger than the coulombic force. Higher ionic charges help, especially if the ions are relatively small. This is especially evident in the extremely high melting points of the Group 2 and higher oxides:
| MgO (magnesia) | CaO (lime) | SrO (strontia) | Al2O3 (alumina) | ZrO2 (zirconia) |
|---|---|---|---|---|
| 2830 °C | 2610 °C | 2430 °C | 2050 °C | 2715 °C |
These substances are known as refractories , meaning that they retain their essential properties at high temperatures. Magnesia, for example, is used to insulate electric heating elements and, in the form of fire bricks, to line high-temperature furnaces. No boiling points have been observed for these compounds; on further heating, they simply dissociate into their elements. Their crystal structures can be very complex, and some (notably Al 2 O 3 ) can have several solid forms. Even in the most highly ionic solids there is some electron sharing, so the idea of a “pure” ionic bond is an abstraction.
Many solids that are formally derived from ions cannot really be said to form "ionic" solids at all. For example, anhydrous copper(II) chloride consists of chains of copper atoms, each surrounded by four chlorine atoms in a square arrangement. Neighboring chains are offset so as to create an octahedral coordination of each copper atom. Similar structures are commonly encountered for other salts of transition metals. Similarly, most oxides and sulfides of metals beyond Group 2 tend to have structures dominated by other than ion-ion attractions.
The trihalides of aluminum offer another example of the dangers of assuming ionic character of solids that are formally derived from ions. Aqueous solutions of what we assume to be AlF 3 , AlCl 3 , AlBr 3 , and AlI 3 all exhibit the normal properties of ionic solutions (they are electrically conductive, for example), but the solids are quite different: the melting point of AlF 3 is 1290°C, suggesting that it is indeed ionic. But AlCl 3 melts at 192°C — hardly consistent with ionic bonding, and the other two halides are also rather low-melting. Structural studies show that when AlCl 3 vaporizes or dissolves in a non-polar solvent it forms a dimer Al 2 Cl 6 . The two other halides exist only as dimers in all states.
The structural formula of the Al 2 Cl 6 molecule shows that the aluminum atoms are bonded to four chlorines, two of which are shared between the two metal atoms. The arrows represent coordinate covalent bonds in which the bonding electrons both come from the same atom (chlorine in this case.)
As shown at the right above, the aluminum atoms can be considered to be located at the centers of two tetrahedra that possess one edge in common.
7.8: Cubic Lattices and Close Packing
Make sure you thoroughly understand the following essential ideas:
- The difference between square and hexagonal packing in two dimensions.
- The definition and significance of the unit cell.
- Sketch the three Bravais lattices of the cubic system, and calculate the number of atoms contained in each of these unit cells.
- Show how alternative ways of stacking three close-packed layers can lead to the hexagonal or cubic close packed structures.
- Explain the origin and significance of octahedral and tetrahedral holes in stacked close-packed layers, and show how they can arise.
Close-Packing of Identical Spheres
Crystals are of course three-dimensional objects, but we will begin by exploring the properties of arrays in two-dimensional space. This will make it easier to develop some of the basic ideas without the added complication of getting you to visualize in 3-D — something that often requires a bit of practice. Suppose you have a dozen or so marbles. How can you arrange them in a single compact layer on a table top? Obviously, they must be in contact with each other in order to minimize the area they cover. It turns out that there are two efficient ways of achieving this:
The essential difference here is that any marble within the interior of the square-packed array is in contact with four other marbles, while this number rises to six in the hexagonal-packed arrangement. It should also be apparent that the latter scheme covers a smaller area (contains less empty space) and is therefore a more efficient packing arrangement. If you are good at geometry, you can show that square packing covers 78 percent of the area, while hexagonal packing yields 91 percent coverage.
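These coverage figures follow from simple geometry. The short calculation below (added here for illustration, using unit-radius circles) reproduces them:

```python
import math

# 2-D packing fractions for circles of radius r = 1.
# Square packing: one full circle per square cell of side 2r.
square_fraction = math.pi * 1**2 / (2 * 1) ** 2  # pi/4

# Hexagonal packing: the rhombic unit cell of side 2r and 60 degree angle
# has area (2r)^2 * sin(60) and also contains exactly one circle.
hex_fraction = math.pi * 1**2 / ((2 * 1) ** 2 * math.sin(math.radians(60)))  # pi/(2*sqrt(3))

print(f"square:    {square_fraction:.1%}")   # about 78.5%
print(f"hexagonal: {hex_fraction:.1%}")      # about 90.7%
```

The exact values are π/4 ≈ 78.5 percent and π/(2√3) ≈ 90.7 percent, matching the rounded figures quoted above.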
If we go from the world of marbles to that of atoms, which kind of packing would the atoms of a given element prefer?
If the atoms are identical and are bound together mainly by dispersion forces which are completely non-directional, they will favor a structure in which as many atoms can be in direct contact as possible. This will, of course, be the hexagonal arrangement.
Directed chemical bonds between atoms have a major effect on the packing. The version of hexagonal packing shown at the right occurs in the form of carbon known as graphite which forms 2-dimensional sheets. Each carbon atom within a sheet is bonded to three other carbon atoms. The result is just the basic hexagonal structure with some atoms missing.
The coordination number of 3 reflects the sp 2 -hybridization of carbon in graphite, resulting in plane-trigonal bonding and thus the sheet structure. Adjacent sheets are bound by weak dispersion forces, allowing the sheets to slip over one another and giving rise to the lubricating and flaking properties of graphite.
Lattices
The underlying order of a crystalline solid can be represented by an array of regularly spaced points that indicate the locations of the crystal's basic structural units. This array is called a crystal lattice. Crystal lattices can be thought of as being built up from repeating units containing just a few atoms. These repeating units act much as a rubber stamp: press it on the paper, move ("translate") it by an amount equal to the lattice spacing, and stamp the paper again.
The gray circles represent a square array of lattice points. The orange square is the simplest unit cell that can be used to define the 2-dimensional lattice. Building out the lattice by moving ("translating") the unit cell in a series of steps eventually covers the entire plane.
Although real crystals do not actually grow in this manner, this process is conceptually important because it allows us to classify a lattice type in terms of the simple repeating unit that is used to "build" it. We call this shape the unit cell . Any number of primitive shapes can be used to define the unit cell of a given crystal lattice. The one that is actually used is largely a matter of convenience, and it may contain a lattice point in its center, as you see in two of the unit cells shown here. In general, the best unit cell is the simplest one that is capable of building out the lattice.
Shown above are unit cells for the close-packed square and hexagonal lattices we discussed near the start of this lesson. Although we could use a hexagon for the second of these lattices, the rhombus is preferred because it is simpler.
Notice that in both of these lattices, the corners of the unit cells are centered on a lattice point. This means that an atom or molecule located on this point in a real crystal lattice is shared with its neighboring cells. As is shown more clearly here for a two-dimensional square-packed lattice, a single unit cell can claim "ownership" of only one-quarter of each molecule, and thus "contains" 4 × ¼ = 1 molecule.
The unit cell of the graphite form of carbon is also a rhombus, in keeping with the hexagonal symmetry of this arrangement. Notice that to generate this structure from the unit cell, we need to shift the cell in both the x - and y - directions in order to leave empty spaces at the correct spots. We could alternatively use regular hexagons as the unit cells, but the x + y shifts would still be required, so the simpler rhombus is usually preferred. As you will see in the next section, the empty spaces within these unit cells play an important role when we move from two- to three-dimensional lattices.
Cubic crystals
In order to keep this lesson within reasonable bounds, we are limiting it mostly to crystals belonging to the so-called cubic system. In doing so, we can develop the major concepts that are useful for understanding more complicated structures (as if there are not enough complications in cubics alone!) But in addition, it happens that cubic crystals are very commonly encountered; most metallic elements have cubic structures, and so does ordinary salt, sodium chloride.
We usually think of a cubic shape in terms of the equality of its edge lengths and the 90° angles between its sides, but there is another way of classifying shapes that chemists find very useful. This is to look at what geometric transformations (such as rotations around an axis) we can perform that leave the appearance unchanged. For example, you can rotate a cube 90° around an axis perpendicular to any pair of its six faces without making any apparent change to it. We say that the cube possesses three mutually perpendicular four-fold rotational axes , abbreviated C 4 axes. But if you think about it, a cube can also be rotated around the axes that extend between opposite corners; in this case, it takes three 120° rotations to go through a complete circle, so these axes (of which there are four, one for each pair of opposite corners) are three-fold or C 3 axes.
Cubic crystals belong to one of the seven crystal systems whose lattice points can be extended indefinitely to fill three-dimensional space and which can be constructed by successive translations (movements) of a primitive unit cell in three dimensions. As we will see below, the cubic system, as well as some of the others, can have variants in which additional lattice points can be placed at the center of the unit or at the center of each face.
The three types of cubic lattices
The three Bravais lattices which form the cubic crystal system are shown here.
Structural examples of all three are known, with body- and face-centered (BCC and FCC) being much more common; most metallic elements crystallize in one of these latter forms. But although the simple cubic structure is uncommon by itself, it turns out that many BCC and FCC structures composed of ions can be regarded as interpenetrating combinations of two simple cubic lattices, one made up of positive ions and the other of negative ions. Notice that only the FCC structure, which we will describe below, is a close-packed lattice within the cubic system.
Close-packed lattices in three dimensions
Close-packed lattices allow the maximum amount of interaction between atoms. If these interactions are mainly attractive, then close-packing usually leads to more energetically stable structures. These lattice geometries are widely seen in metallic, atomic, and simple ionic crystals.
As we pointed out above, hexagonal packing of a single layer is more efficient than square-packing, so this is where we begin. Imagine that we start with the single layer of green atoms shown below. We will call this the A layer. If we place a second layer of atoms (orange) on top of the A-layer, we would expect the atoms of the new layer to nestle in the hollows in the first layer. But if all the atoms are identical, only some of these void spaces will be accessible.
In the diagram on the left, notice that there are two classes of void spaces between the A atoms; one set (colored blue) has a vertex pointing up, while the other set (not colored) has down-pointing vertices. Each void space constitutes a depression in which atoms of a second layer (the B-layer) can nest. The two sets of void spaces are completely equivalent, but only one of these sets can be occupied by a second layer of atoms whose size is similar to those in the bottom layer. In the illustration on the right above we have arbitrarily placed the B-layer atoms in the blue voids, but could just as well have selected the white ones.
Two choices for the third layer lead to two different close-packed lattice types
Now consider what happens when we lay down a third layer of atoms. These will fit into the void spaces within the B-layer. As before, there are two sets of these positions, but unlike the case described above, they are not equivalent.
The atoms in the third layer are represented by open blue circles in order to avoid obscuring the layers underneath. In the illustration on the left, this third layer is placed on the B-layer at locations that are directly above the atoms of the A-layer, so our third layer is just another A layer. If we add still more layers, the vertical sequence A-B-A-B-A-B-A... repeats indefinitely.
In the diagram on the right above, the blue atoms have been placed above the white (unoccupied) void spaces in layer A. Because this third layer is displaced horizontally (in our view) from layer A, we will call it layer C. As we add more layers of atoms, the sequence of layers is A-B-C-A-B-C-A-B-C..., so we call it ABC packing.
These two diagrams that show exploded views of the vertical stacking further illustrate the rather small fundamental difference between these two arrangements— but, as you will see below, they have widely divergent structural consequences. Note the opposite orientations of the A and C layers
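The two stacking rules can be sketched with a toy sequence generator (illustrative code added here; the layer labels follow the A/B/C convention used in the text):

```python
# Toy generator for close-packed stacking sequences.
def stack(pattern, n_layers):
    """Repeat a stacking motif ('AB' or 'ABC') for n_layers layers."""
    return [pattern[i % len(pattern)] for i in range(n_layers)]

hcp = stack("AB", 9)    # A-B-A-B-... : hexagonal close-packed
ccp = stack("ABC", 9)   # A-B-C-A-B-C-... : cubic close-packed

# In either scheme, no two adjacent layers may occupy the same position:
assert all(a != b for a, b in zip(hcp, hcp[1:]))
assert all(a != b for a, b in zip(ccp, ccp[1:]))

print("".join(hcp))  # ABABABABA
print("".join(ccp))  # ABCABCABC
```

The only difference between the two lattices is the repeat period of the stacking motif: two layers for HCP, three for CCP.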
The hexagonal close-packed structure
The HCP stacking shown on the left just above takes us out of the cubic crystal system into the hexagonal system, so we will not say much more about it here except to point out that each atom has 12 nearest neighbors: six in its own layer, and three in each layer above and below it.
The cubic close-packed structure
Below we reproduce the FCC structure that was shown above.
You will notice that the B-layer atoms form a hexagon, but this is a cubic structure. How can this be? The answer is that the FCC stack is inclined with respect to the faces of the cube, and is in fact coincident with one of the three-fold axes that passes through opposite corners. It requires a bit of study to see the relationship, and we have provided two views to help you. The one on the left shows the cube in the normal isometric projection; the one on the right looks down upon the top of the cube at a slightly inclined angle.
Both the CCP and HCP structures fill 74 percent of the available space when the atoms have the same size. You should see that the two shaded planes cutting along diagonals within the interior of the cube contain atoms of different colors, meaning that they belong to different layers of the CCP stack. Each plane contains three atoms from the B layer and three from the C layer, thus reducing the symmetry to C 3 , which a cubic lattice must have.
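The 74 percent figure can be verified from the FCC geometry, in which atoms touch along a face diagonal (a quick numerical check, added for illustration):

```python
import math

r = 1.0
# In FCC, atoms touch along the face diagonal: 4r = a * sqrt(2).
a = 2 * math.sqrt(2) * r

# An FCC cell contains 4 atoms (8 corners at 1/8 each + 6 faces at 1/2 each).
atoms_per_cell = 4
fraction = atoms_per_cell * (4 / 3) * math.pi * r**3 / a**3

print(f"{fraction:.1%}")  # 74.0% -- the pi/(3*sqrt(2)) close-packing limit
```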
The FCC unit cell
The figure below shows the face-centered cubic unit cell of a cubic-close packed lattice.
How many atoms are contained in a unit cell? Each corner atom is shared with eight adjacent unit cells and so a single unit cell can claim only 1/8 of each of the eight corner atoms. Similarly, each of the six atoms centered on a face is only half-owned by the cell. The grand total is then (8 × 1/8) + (6 × ½) = 4 atoms per unit cell.
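The same shared-atom bookkeeping works for all three cubic lattices (an illustrative helper function, not from the original text):

```python
# Shared-atom bookkeeping for cubic unit cells:
# corner atoms count 1/8, face atoms 1/2, edge atoms 1/4, body atoms 1.
def atoms_per_cell(corners=0, faces=0, edges=0, body=0):
    return corners / 8 + faces / 2 + edges / 4 + body

simple_cubic  = atoms_per_cell(corners=8)           # 1 atom per cell
body_centered = atoms_per_cell(corners=8, body=1)   # 2 atoms per cell
face_centered = atoms_per_cell(corners=8, faces=6)  # 4 atoms per cell

print(simple_cubic, body_centered, face_centered)   # 1.0 2.0 4.0
```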
Interstitial Void Spaces
The atoms in each layer in these close-packing stacks sit in a depression in the layer below it. As we explained above, these void spaces are not completely filled. (It is geometrically impossible for more than two identical spheres to be in contact at a single point.) We will see later that these interstitial void spaces can sometimes accommodate additional (but generally smaller) atoms or ions.
If we look down on top of two layers of close-packed spheres, we can pick out two classes of void spaces which we call tetrahedral and octahedral holes .
Tetrahedral holes
If we direct our attention to a region in the above diagram where a single atom is in contact with the three atoms in the layers directly below it, the void space is known as a tetrahedral hole . A similar space will be found between this single atom and the three atoms (not shown) that would lie on top of it in an extended lattice. Any interstitial atom that might occupy this site will interact with the four atoms surrounding it, so this is also called a four-coordinate interstitial space .
Don't be misled by this name; the boundaries of the void space are spherical sections, not tetrahedra. The tetrahedron is just an imaginary construction whose four corners point to the centers of the four atoms that are in contact.
Octahedral holes
Similarly, when two sets of three trigonally-oriented spheres are in close-packed contact, they will be oriented 60° apart and the centers of the spheres will define the six corners of an imaginary octahedron centered in the void space between the two layers, so we call these octahedral holes or six-coordinate interstitial sites . Octahedral sites are larger than tetrahedral sites.
An octahedron has six corners and eight faces. We usually draw octahedra as a double square pyramid standing on one corner (left), but in order to visualize the octahedral shape in a close-packed lattice, it is better to think of the octahedron as lying on one of its faces (right).
Each sphere in a close-packed lattice is associated with one octahedral site, whereas there are twice as many tetrahedral sites. This can be seen in this diagram that shows the central atom in the B layer in alignment with the hollows in the C and A layers above and below.
The face-centered cubic unit cell contains a single octahedral hole within itself, but octahedral holes shared with adjacent cells exist at the centers of each edge. Each of these twelve edge-located sites is shared with four adjacent cells, and thus contributes (12 × ¼) = 3 atoms to the cell. Added to the single hole contained in the middle of the cell, this makes a total of 4 octahedral sites per unit cell. This is the same as the number we calculated above for the number of atoms in the cell.
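This site counting can be verified numerically (an illustrative check; the counts follow the sharing rules described above):

```python
# Interstitial-site bookkeeping for one FCC unit cell.
octahedral = 12 * 0.25 + 1   # 12 edge-center sites shared 4 ways, plus 1 at the body center
tetrahedral = 8 * 1.0        # 8 sites lying wholly inside the cell
atoms = 8 / 8 + 6 / 2        # 4 host atoms per cell (corners + faces)

print(octahedral, tetrahedral, atoms)  # 4.0 8.0 4.0

assert octahedral == atoms        # one octahedral hole per host atom
assert tetrahedral == 2 * atoms   # two tetrahedral holes per host atom
```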
Common cubic close-packed structures
It can be shown from elementary trigonometry that an atom will fit exactly into an octahedral site if its radius is 0.414 times that of the host atoms. The corresponding figure for the smaller tetrahedral holes is 0.225.
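Both ratios follow from the geometry of touching spheres; here is one way to sketch the calculation (unit host radius assumed):

```python
import math

# Hole sizes between touching host spheres of radius R = 1.
R = 1.0

# Octahedral hole: host centers lie at the corners of a square of side 2R
# around the hole, so the center-to-center distance is sqrt(2)*R and
r_oct = math.sqrt(2) * R - R       # sqrt(2) - 1  ~ 0.414

# Tetrahedral hole: host centers lie at the corners of a tetrahedron of
# edge 2R; its circumradius is 2R*sqrt(3/8) = sqrt(3/2)*R, so
r_tet = math.sqrt(3 / 2) * R - R   # sqrt(3/2) - 1  ~ 0.225

print(f"{r_oct:.3f} {r_tet:.3f}")  # 0.414 0.225
```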
Many pure metals and compounds form face-centered cubic (cubic close- packed) structures. The existence of tetrahedral and octahedral holes in these lattices presents an opportunity for "foreign" atoms to occupy some or all of these interstitial sites. In order to retain close-packing, the interstitial atoms must be small enough to fit into these holes without disrupting the host CCP lattice. When these atoms are too large, which is commonly the case in ionic compounds, the atoms in the interstitial sites will push the host atoms apart so that the face-centered cubic lattice is somewhat opened up and loses its close-packing character.
The rock-salt structure
Alkali halides that crystallize with the "rock-salt" structure exemplified by sodium chloride can be regarded either as a FCC structure of one kind of ion in which the octahedral holes are occupied by ions of opposite charge, or as two interpenetrating FCC lattices made up of the two kinds of ions. The two shaded octahedra illustrate the identical coordination of the two kinds of ions; each atom or ion of a given kind is surrounded by six of the opposite kind, resulting in a coordination expressed as (6:6).
How many NaCl units are contained in the unit cell? If we ignore the atoms that were placed outside the cell in order to construct the octahedra, you should be able to count fourteen "orange" atoms and thirteen "blue" ones. But many of these are shared with adjacent unit cells.
An atom at the corner of the cube is shared by eight adjacent cubes, and thus makes a 1/8 contribution to any one cell. Similarly, the center of an edge is common to four other cells, and an atom centered in a face is shared with two cells. Taking all this into consideration, you should be able to confirm the following tally showing that there are four AB units in a unit cell of this kind.
| Orange | Blue |
|---|---|
| 8 at corners: 8 x 1/8 = 1 | 12 at edge centers: 12 x ¼ = 3 |
| 6 at face centers: 6 x ½ = 3 | 1 at body center = 1 |
| total: 4 | total: 4 |
If we take into consideration the actual sizes of the ions (Na + = 116 pm, Cl – = 167 pm), it is apparent that neither ion will fit into the octahedral holes within a CCP lattice composed of the other ion, so the actual structure of NaCl is somewhat expanded beyond the close-packed model.
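A one-line check with the radii quoted above makes the point (illustrative only):

```python
# Ionic radii as quoted in the text, in pm.
r_na, r_cl = 116, 167
ratio = r_na / r_cl

print(f"{ratio:.3f}")  # ~0.695, well above the 0.414 octahedral-hole limit

# Because the cation is too large for the hole, the chloride lattice
# must open up beyond true close packing.
assert ratio > 0.414
```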
The space-filling model on the right depicts a face-centered cubic unit cell of chloride ions (purple), with the sodium ions (green) occupying the octahedral sites.
The zinc-blende structure: using some tetrahedral holes
Since there are two tetrahedral sites for every atom in a close-packed lattice, we can have binary compounds of 1:1 or 1:2 stoichiometry depending on whether half or all of the tetrahedral holes are occupied. Zinc-blende is the mineralogical name for zinc sulfide, ZnS. An impure form known as sphalerite is the major ore from which zinc is obtained.
This structure consists essentially of a FCC (CCP) lattice of sulfur atoms (orange) (equivalent to the lattice of chloride ions in NaCl) in which zinc ions (green) occupy half of the tetrahedral sites. As with any FCC lattice, there are four atoms of sulfur per unit cell, and the four zinc atoms are totally contained in the unit cell. Each atom in this structure has four nearest neighbors, and is thus tetrahedrally coordinated.
It is interesting to note that if all the atoms are replaced with carbon, this would correspond to the diamond structure.
The fluorite structure: all tetrahedral sites occupied
Fluorite, CaF 2 , having twice as many ions of fluoride as of calcium, makes use of all eight tetrahedral holes in the CCP lattice of calcium ions (orange) depicted here. To help you understand this structure, we have shown some of the octahedral sites in the next cell on the right; you can see that the calcium ion at A is surrounded by eight fluoride ions, and this is of course the case for all of the calcium sites. Since each fluoride ion has four nearest-neighbor calcium ions, the coordination in this structure is described as (8:4).
Although the radii of the two ions (F – = 117 pm, Ca 2+ = 126 pm) do not allow true close packing, they are similar enough that one could just as well describe the structure as a FCC lattice of fluoride ions with calcium ions in the octahedral holes.
Simple- and body-centered cubic structures
In Section 4 we saw that the only cubic lattice that can allow close packing is the face-centered cubic structure. The simplest of the three cubic lattice types, the simple cubic lattice , lacks the hexagonally-arranged layers that are required for close packing. But as shown in this exploded view, the void space between the two square-packed layers of this cell constitutes an octahedral hole that can accommodate another atom, yielding a packing arrangement that in favorable cases can approximate true close-packing. Each second-layer B atom (blue) resides within the unit cell defined by the A layers above and below it.
The A and B atoms can be of the same kind or they can be different. If they are the same, we have a body-centered cubic lattice . If they are different, and especially if they are oppositely-charged ions (as in the CsCl structure), there are size restrictions: if the B atom is too large to fit into the interstitial space, or if it is so small that the A layers (which all carry the same electric charge) come into contact without sufficient A-B coulombic attractions, this structural arrangement may not be stable.
The cesium chloride structure
CsCl is the common model for the BCC structure. As with so many other structures involving two different atoms or ions, we can regard the same basic structure in different ways. Thus if we look beyond a single unit cell, we see that CsCl can be represented as two interpenetrating simple cubic lattices in which each atom occupies an octahedral hole within the cubes of the other lattice.

7.9: Polymers and Plastics
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the bold terms in the context of this topic.
- Aside from their high molar masses, how do synthetic polymers differ from ordinary molecular solids?
- Polymers can be classified according to their chemical composition, their physical properties, and their general application. For each of these three categories, name two examples that might be considered when adapting a polymer to a particular end-use.
- Distinguish between a thermoplastic and a thermoset , and comment on the molecular basis for their different properties, including crystallinity.
- Describe the two general methods of polymer synthesis.
- Name two kinds each of commercially-important synthetic thermoplastics and thermosets, and specify some of their principal uses.
- Name two kinds of commercially-important natural polymers.
- Describe some of the concerns and sources of small-molecule release from polymers.
- What are some of the problems connected with recycling or re-use of polymeric materials?
Plastics and natural materials such as rubber or cellulose are composed of very large molecules called polymers. Polymers are constructed from relatively small molecular fragments known as monomers that are joined together. Wool, cotton, silk, wood and leather are examples of natural polymers that have been known and used since ancient times. This group includes biopolymers such as proteins and carbohydrates that are constituents of all living organisms.
Synthetic polymers, which includes the large group known as plastics, came into prominence in the early twentieth century. Chemists' ability to engineer them to yield a desired set of properties (strength, stiffness, density, heat resistance, electrical conductivity) has greatly expanded the many roles they play in the modern industrial economy. This Module deals mostly with synthetic polymers, but will include a synopsis of some of the more important natural polymers. It will close with a summary of some of the very significant environmental problems created by the wide use of plastics.
Polymers and "pure substances"
Let's begin by looking at an artificial polymer that is known to everyone in the form of flexible, transparent plastic bags : polyethylene . It is also the simplest polymer, consisting of random-length (but generally very long) chains made up of two-carbon units.
You will notice some "fuzziness" in the way that the polyethylene structures are represented above. The squiggly lines at the ends of the long structure indicate that the same pattern extends indefinitely. The more compact notation on the right shows the minimal repeating unit enclosed in brackets overprinted with a dash; this means the same thing and is the preferred way of depicting polymer structures.
In most areas of chemistry, a "pure substance" has a definite structure, molar mass, and properties. It turns out, however, that few polymeric substances are uniform in this way. This is especially the case with synthetic polymers, whose molecular weights cover a range of values, as may the sequence, orientation, and connectivity of the individual monomers. So most synthetic polymers are really mixtures rather than pure substances in the ordinary chemical sense of the term. Their molecular weights are typically distributed over a wide range.
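The breadth of such a distribution is usually summarized by the number-average and weight-average molar masses. The sketch below uses an invented three-component sample purely for illustration:

```python
# Average molar masses for a hypothetical polymer sample.
# The chain counts and masses below are invented for illustration.
chains = {10_000: 50, 50_000: 30, 100_000: 20}  # molar mass (g/mol) -> number of chains

n_total = sum(chains.values())
mass_total = sum(m * n for m, n in chains.items())

M_n = mass_total / n_total                                    # number average
M_w = sum(m * m * n for m, n in chains.items()) / mass_total  # weight average
dispersity = M_w / M_n  # equals 1.0 only for a perfectly uniform sample

print(round(M_n), round(M_w), round(dispersity, 2))  # 40000 70000 1.75
```

The two averages coincide only when every chain has the same length; the larger their ratio, the broader the distribution.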
Figure \(\PageIndex{1}\): Polymers
Don't be misled by chemical formulas that depict polymers such as polyethylene as reasonably straight chains of substituted carbon atoms. Free rotation around C—C bonds allows long polymer molecules to curl up and tangle very much like spaghetti (Figure \(\PageIndex{2}\)). Thus polymers generally form amorphous solids. There are, however, ways in which certain polymers can be partially oriented.
Classification of polymers
Polymers can be classified in ways that reflect their chemical makeup, or perhaps more importantly, their properties and applications. Many of these factors are strongly interdependent, and most are discussed in much more detail in subsequent sections of this page.
Chemistry
- Nature of the monomeric units
- Average chain length and molecular weight
- Homopolymers (one kind of monomeric unit) or copolymers ;
- Chain topology: how the monomeric units are connected
- Presence or absence of cross-branching
- Method of polymerization
Properties
- Density
- Thermal properties — can they soften or melt when heated?
- Degree of crystallinity
- Physical properties such as hardness, strength, machineability.
- Solubility, permeability to gases
Applications
- molded and formed objects ("plastics")
- sheets and films
- elastomers (i.e., elastic polymers such as rubber)
- adhesives
- coatings, paints, inks
- fibers and yarns
Physical properties of polymers
The physical properties of a polymer such as its strength and flexibility depend on:
- chain length - in general, the longer the chains the stronger the polymer;
- side groups - polar side groups (including those that lead to hydrogen bonding) give stronger attraction between polymer chains, making the polymer stronger;
- branching - straight, unbranched chains can pack together more closely than highly branched chains, giving polymers that have higher density, are more crystalline and therefore stronger;
- cross-linking - if polymer chains are linked together extensively by covalent bonds, the polymer is harder and more difficult to melt.
Amorphous and crystalline polymers
The spaghetti-like entanglements of polymer molecules tend to produce amorphous solids, but it often happens that some parts can become sufficiently aligned to produce a region exhibiting crystal-like order, so it is not uncommon for some polymeric solids to consist of a random mixture of amorphous and crystalline regions. As might be expected, shorter and less-branched polymer chains can more easily organize themselves into ordered layers than can long chains. Hydrogen-bonding between adjacent chains also helps, and is very important in fiber-forming polymers both synthetic (Nylon 6.6) and natural (cotton cellulose).
Pure crystalline solids have definite melting points, but polymers, if they melt at all, exhibit a more complex behavior. At low temperatures, the tangled polymer chains tend to behave as rigid glasses. For example, the natural polymer that we call rubber becomes hard and brittle when cooled to liquid nitrogen temperature. Many synthetic polymers remain in this state to well above room temperature.
The melting of a crystalline compound corresponds to a sudden loss of long-range order; this is the fundamental reason that such solids exhibit definite melting points, and it is why there is no intermediate form between the liquid and the solid states. In amorphous solids there is no long-range order, so there is no melting point in the usual sense. Such solids simply become less and less viscous as the temperature is raised.
In some polymers (known as thermoplastics ) there is a fairly definite softening point that is observed when the thermal kinetic energy becomes high enough to allow internal rotation to occur within the bonds and to allow the individual molecules to slide independently of their neighbors, thus rendering them more flexible and deformable. This defines the glass transition temperature t g .
Depending on the degree of crystallinity, there will be a higher temperature, the melting point t m , at which the crystalline regions come apart and the material becomes a viscous liquid. Such liquids can easily be injected into molds to manufacture objects of various shapes, or extruded into sheets or fibers. Other polymers (generally those that are highly cross-linked) do not melt at all; these are known as thermosets . If they are to be made into molded objects, the polymerization reaction must take place within the molds — a far more complicated process. About 20% of the commercially-produced polymers are thermosets; the remainder are thermoplastics.
Thermoplastic polymer structures
Homopolymers and heteropolymers
Copolymerization is an invaluable tool for "tuning" polymers so that they have the right combination of properties for an application. For example, homopolymeric polystyrene is a rigid and very brittle transparent thermoplastic with a glass transition temperature of 97°C. Copolymerizing it with acrylonitrile yields an alternating "SAN" copolymer in which t g is raised to 107°, making it useable for transparent drink containers.
A polymer that is composed of identical monomeric units (as is polyethylene) is called a homopolymer . Heteropolymers are built up from more than one type of monomer. Artificial heteropolymers are more commonly known as copolymers.
Chain topology
Polymers may also be classified as straight-chained or branched, leading to forms such as these:
The monomers can be joined end-to-end, and they can also be cross-linked to provide a harder material:
If the cross-links are fairly long and flexible, adjacent chains can move with respect to each other, producing an elastic polymer or elastomer
Chain configuration and tacticity
In a linear polymer such as polyethylene, rotations around carbon-carbon single bonds can allow the chains to bend or curl up in various ways, resulting in the spaghetti-like mixture of these different conformations we alluded to above. But if one of the hydrogen atoms is replaced by some other entity such as a methyl group, the relative orientations of the individual monomer units that make up a linear section of any carbon chain becomes an important characteristic of the polymer.
Cis-trans isomerism occurs because rotation around carbon-carbon double bonds is not possible — unlike the case for single bonds. Any pair of unlike substituents attached to the two carbons is permanently locked into being on the same side ( cis ) or opposite sides ( trans ) of the double bond.
If the carbon chain contains double bonds, then cis-trans isomerism becomes possible, giving rise to two different possible configurations (known as diastereomers) at each unit of the chain. This seemingly small variable can profoundly affect the nature of the polymer. For example, the latex in natural rubber is made mostly of cis -polyisoprene, whereas the trans isomer (known as gutta percha latex) has very different (and generally inferior) properties.
Chirality
The tetrahedral nature of carbon bonding has an important consequence that is not revealed by simple two-dimensional structural formulas: atoms attached to the carbon can be on one side or on the other, and these will not be geometrically equivalent if all four of the groups attached to a single carbon atom are different. Such carbons (and the groups attached to them) are said to be chiral, and can exist in two different three-dimensional forms known as enantiomers.
For an individual carbon atom in a polymer chain, two of its attached groups will ordinarily be the chain segments on either side of the carbon. If the two remaining groups are different (say one hydrogen and the other methyl), then the above conditions are satisfied and this part of the chain can give rise to two enantiomeric forms.
A chain in which each repeating unit carries two different groups (represented in the figure by orange and green circles) will have multiple chiral centers, giving rise to a huge number of possible enantiomers. In practice, it is usually sufficient to classify chiral polymers into the following three classes of stereoregularity , usually referred to as tacticity .
The tacticity of a polymer chain can have a major influence on its properties. Atactic polymers, for example, being more disordered, cannot crystallize.
One of the major breakthroughs in polymer chemistry occurred in the early 1950s when the German chemist Karl Ziegler discovered a group of catalysts that could efficiently polymerize ethylene. At about the same time, the Italian chemist Giulio Natta used such catalysts to make the first isotactic (and crystalline) polypropylene. The Ziegler-Natta catalysts revolutionized polymer chemistry by making it possible to control the stereoregularity of these giant molecules. The two shared the 1963 Nobel Prize in Chemistry.
3 How polymers are made
Polymers are made by joining small molecules into large ones. But most of these monomeric molecules are perfectly stable as they are, so chemists have devised two general methods to make them react with each other, building up the backbone chain as the reaction proceeds.
Condensation-elimination polymerization
This method (also known as step-growth ) requires that the monomers possess two or more kinds of functional groups that are able to react with each other in such a way that parts of these groups combine to form a small molecule (often H 2 O) which is eliminated from the two pieces. The now-empty bonding positions on the two monomers can then join together.
This occurs, for example, in the synthesis of the Nylon family of polymers in which the eliminated H 2 O molecule comes from the hydroxyl group of the acid and one of the amino hydrogens:
Note that the monomeric units that make up the polymer are not identical with the starting components.
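The sensitivity of step-growth polymerization to reaction completeness can be made quantitative with the Carothers equation (added here for illustration; it does not appear in the text above). If p is the fraction of functional groups that have reacted, the number-average degree of polymerization is

```latex
\bar{X}_n = \frac{1}{1-p}
```

so even 99% conversion (p = 0.99) gives chains averaging only 100 monomer units, and p = 0.999 is needed to reach an average of 1000. This is why step-growth monomers must be exceptionally pure and present in close to exact stoichiometric balance.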
Addition polymerization
Addition or chain-growth polymerization involves the rearrangement of bonds within the monomer in such a way that the monomers link up directly with each other:
In order to make this happen, a chemically active molecule (called an initiator ) is needed to start what is known as a chain reaction . The manufacture of polyethylene is a very common example of such a process. It employs a free-radical initiator that donates its unpaired electron to the monomer, making the latter highly reactive and able to form a bond with another monomer at this site.
In theory, only a single chain-initiation process needs to take place, and the chain-propagation step then repeats itself indefinitely, but in practice multiple initiation steps are required, and eventually two radicals react ( chain termination ) to bring the polymerization to a halt.
As with all polymerizations, chains having a range of molecular weights are produced, and this range can be altered by controlling the pressure and temperature of the process.
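The statement that a range of chain lengths is produced can be illustrated with a toy Monte-Carlo sketch (not from the original text; the propagation probability and sample size are arbitrary illustrative choices). Each growing chain adds a monomer with probability p and terminates otherwise, so chain lengths follow a geometric distribution with mean 1/(1 − p):

```python
import random

def grow_chain(p_propagate, rng):
    """Length of one chain: add monomers until a random termination event."""
    n = 1  # the initiated monomer
    while rng.random() < p_propagate:
        n += 1
    return n

rng = random.Random(42)
p = 0.999  # high propagation probability favors long chains
lengths = [grow_chain(p, rng) for _ in range(10_000)]
mean_len = sum(lengths) / len(lengths)
# Theoretical mean chain length is 1/(1 - p) = 1000 for p = 0.999;
# the sampled mean scatters around that value.
print(round(mean_len))
```

Raising p (for example, by keeping the radical concentration low so that termination is rarer) shifts the whole distribution toward longer chains, mirroring the control by pressure and temperature mentioned above.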
4 Gallery of common synthetic polymers
Thermoplastics
Note: each entry below lists the polymer name and synonyms, glass transition temperature, melting point/decomposition temperature, and (where applicable) the resin identification symbol used to facilitate recycling, followed by a brief description.
Polycarbonate (Lexan®)
T g = 145°C, T m = 225°C.
This polymer was discovered independently in Germany and the U.S. in 1953. Lexan is exceptionally hard and strong; we see it most commonly in the form of compact disks. It was once widely used in water bottles, but concerns about leaching of unreacted monomer (bisphenol-A) have largely suppressed this market.
Polyethylene terephthalate (PET, Mylar)
T g = 76°C, T m = 250°C.
Thin and very strong films of this material are made by drawing out the molten polymer in both directions, thus orienting the molecules into a highly crystalline state that becomes "locked-in" on cooling. Its many applications include food packaging (in foil-laminated drink containers and microwaveable frozen-food containers), overhead-projector film, weather balloons, and aluminum-coated reflective material in spacecraft.
Nylon (a polyamide)
T g = 50°C, T m = 255°C.
Nylon has a fascinating history, both scientific and cultural. It was invented by DuPont chemist Wallace Carothers (1896-1937). The common form Nylon 6.6 has six carbon atoms in both parts of its chain; there are several other kinds. Notice that the two copolymer sub-units are held together by peptide bonds, the same kind that join amino acids into proteins. Nylon 6.6 has good abrasion resistance and is self-lubricating, which makes it a good engineering material. It is also widely used as a fiber in carpeting, clothing, and tire cord. For an interesting account of the development of Nylon, see Enough for One Lifetime: Wallace Carothers, Inventor of Nylon by Ann Gaines (1971).
Polyacrylonitrile (Orlon, Acrilan, "acrylic" fiber)
T g = 85°C, T m = 318°C.
Used in the form of fibers in rugs, blankets, and clothing, especially cashmere-like sweaters. The fabric is very soft, but tends to "pill" — i.e., produce fuzz-like blobs. Owing to its low glass transition temperature, it requires careful treatment in cleaning and ironing.
Polyethylene
T g = –78°C, T m = 100°C.
Control of polymerization by means of catalysts and additives has led to a large variety of materials based on polyethylene that exhibit differences in densities, degrees of chain branching and crystallinity, and cross-linking. Some major types are low-density (LDPE), linear low density (LLDPE), and high-density (HDPE). LDPE was the first commercial form (1933) and is used mostly for ordinary "plastic bags", but also for food containers and in six-pack soda can rings. Its low density is due to long-chain branching that inhibits close packing. LLDPE has less branching; its greater toughness allows its use in those annoyingly-thin plastic bags often found in food markets. A "very low density" form (VLDPE) with extensive short-chain branching is now used for plastic stretch wrap (replacing the original component of Saran Wrap) and in flexible tubing. HDPE has mostly straight chains and is therefore stronger. It is widely used in milk jugs and similar containers, garbage containers, and as an "engineering plastic" for machine parts.
Polymethylmethacrylate (Plexiglass, Lucite, Perspex)
T g = 114°C, T m = 130-140°C.
This clear, colorless polymer is widely used in place of glass, where its greater impact resistance, lighter weight, and machineability are advantages. It is normally copolymerized with other substances to improve its properties. Aircraft windows, plastic signs, and lighting panels are very common applications. Its compatibility with human tissues has led to various medical applications, such as replacement lenses for cataract patients.
Polypropylene
T g = –10°C, T m = 173°C.
Polypropylene is used alone or as a copolymer, usually with ethylene. These polymers have an exceptionally wide range of uses — rope, binder covers, plastic bottles, staple yarns, non-woven fabrics, electric kettles. When uncolored, it is translucent but not transparent. Its resistance to fatigue makes it useful for food containers and their lids, and flip-top lids on bottled products such as ketchup.
Polystyrene
T g = 95°C, T m = 240°C. PS
Polystyrene is transparent but rather brittle, and yellows under uv light. Widely used for inexpensive packaging materials and "take-out trays", foam "packaging peanuts", CD cases, foam-walled drink cups, and other thin-walled and moldable parts.
Polyvinyl acetate
T g = 30°C
PVA is too soft and low-melting to be used by itself; it is commonly employed as a water-based emulsion in paints, wood glue and other adhesives.
Polyvinyl chloride ("vinyl", PVC)
T g = 85°C, T m = 240°C. PVC
This is one of the world's most widely used polymers. By itself it is quite rigid and is used in construction materials such as pipes, house siding, and flooring. Addition of plasticizers makes it soft and flexible for use in upholstery, electrical insulation, shower curtains and waterproof fabrics. There is some effort being made to phase out this polymer owing to environmental concerns (see below).
Synthetic rubbers

Neoprene (polychloroprene) and polybutadiene
Polybutadiene T g < –90°C
Neoprene, invented in 1930, was the first mass-produced synthetic rubber. It is used for such things as roofing membranes and wet suits. Polybutadiene substitutes a hydrogen for the chlorine; it is the major component (usually admixed with other rubbers) of tires. Synthetic rubbers played a crucial role in World War II. SBS (styrene-butadiene-styrene) rubber is a block copolymer whose special durability makes it valued for tire treads.
Polytetrafluoroethylene (Teflon, PTFE)
Decomposes above 350°C.
This highly-crystalline fluorocarbon is exceptionally inert to chemicals and solvents. Water and oils do not wet it, which accounts for its use in cooking ware and other anti-stick applications, including personal care products. These properties — non-adhesion to other materials, non-wettability, and a very low coefficient of friction ("slipperiness") — have their origin in the highly electronegative nature of fluorine, whose atoms partly shield the carbon chain. Fluorine's outer electrons are so strongly attracted to its nucleus that they are less available to participate in London (dispersion force) interactions.
Polyaramid (Kevlar)
Sublimation temperature 450°C.
Kevlar is known for its ability to be spun into fibers that have five times the tensile strength of steel. It was first used in the 1970s to replace steel tire cords. Bullet-proof vests are one of its more colorful uses, but other applications include boat hulls, drum heads, sports equipment, and as a replacement for asbestos in brake pads. It is often combined with carbon or glass fibers in composite materials. The high tensile strength is due in part to the extensive hydrogen bonding between adjacent chains. Kevlar also has the distinction of having been invented by a woman chemist, Stephanie Kwolek.
Thermosets
The thermoplastic materials described above are chains based on relatively simple monomeric units having varying degrees of polymerization, branching, bending, cross-linking and crystallinity, but with each molecular chain being a discrete unit. In thermosets, the concept of an individual molecular unit is largely lost; the material becomes more like a gigantic extended molecule of its own — hence the lack of anything like a glass transition temperature or a melting point.
These properties have their origins in the nature of the monomers used to produce them. The most important feature is the presence of multiple reactive sites that are able to form what amount to cross-links at every center. The phenolic resins, typified by the reaction of phenol with formaldehyde, illustrate the multiplicity of linkages that can be built.
Phenolic resins
- These are made by condensing one or more types of phenols (hydroxy-substituted benzene rings) with formaldehyde, as illustrated above. This was the first commercialized synthetic molding plastic. It was developed in 1907-1909 by the Belgian chemist Leo Baekeland, hence the common name bakelite. The brown material (usually bulked up with wood powder) was valued for its electrical insulating properties (light fixtures, outlets and other wiring devices) as well as for consumer items prior to the mid-century. Since that time, more recently developed polymers have largely displaced these uses. Phenolics are still extensively used as adhesives in plywood manufacture, and for making paints and varnishes.
- Urea resins
- Condensation of formaldehyde with urea yields lighter-colored and less expensive materials than phenolics. The major use of urea-formaldehyde resins is in bonding wood particles into particle board. Other uses are as baked-on enamel coatings for kitchen appliances and to coat cotton and rayon fibers to impart wrinkle-, water-, and stain-resistance to the finished fabrics.
- Melamine resins
- Melamine, with even more amino (–NH 2 ) groups than urea, reacts with formaldehyde to form colorless solids that are harder than urea resins. They are most widely encountered in dinnerware (plastic plates, cups and serving bowls) and in plastic laminates such as Formica.
- Alkyd-polyester resins
- An ester is the product of the reaction of an organic acid with an alcohol, so polyesters result when multifunctional acids such as phthalic acid react with polyhydric alcohols such as glycerol. The term alkyd derives from the two words alcohol and acid.
- Alkyd resins were first made by Berzelius in 1847, and they were first commercialized as Glyptal (glycerine + phthalic acid) varnishes for the paint industry in 1902.
- The later development of other polyesters greatly expanded their uses into a wide variety of fibers and molded products, ranging from clothing fabrics and pillow fillings to glass-reinforced plastics.
- Epoxy resins
- This large and industrially-important group of resins typically starts by condensing bisphenol-A with epichlorohydrin in the presence of a catalyst. (The epi- prefix refers to the epoxide group, in which an oxygen atom bridges two carbons.) These resins are usually combined with others to produce the desired properties. Epoxies are especially valued as glues and adhesives, since their setting does not depend on evaporation and the setting time can be varied over a wide range. In the two-part resins commonly sold for home use, the unpolymerized mixture and the hardener catalyst are packaged separately for mixing just prior to use. In some formulations the polymerization is initiated by heat ("heat curing"). Epoxy dental fillings are cured by irradiation with uv light.
- Polyurethanes
- Organic isocyanates R–NCO react with multifunctional alcohols to form polymeric carbamates , commonly referred to as polyurethanes . Their major use is in plastic foams for thermal insulation and upholstery, but they have a very large number of other applications, including paints and varnishes, and the plastic wheels used in fork-lift trucks, shopping carts and skateboards.
- Silicones
- Polysiloxanes (–Si–O–Si–) are the most important of the small class of inorganic polymers . The commercial silicone polymers usually contain attached organic side groups that aid cross-linking. Silicones can be made in a wide variety of forms; those having lower molecular weights are liquids, while the more highly polymerized materials are rubbery solids. These polymers have a similarly wide variety of applications: lubricants, caulking materials and sealants, medical implants, non-stick cookware coatings, hair-conditioners and other personal-care products.
Natural Polymers
Polymers derived from plants have been essential components of human existence for thousands of years. In this survey we will look at only those that have major industrial uses, so we will not be discussing the very important biopolymers, proteins and nucleic acids .
Polysaccharides
Polysaccharides are polymers of sugars ; they play essential roles in energy storage, signaling, and as structural components in all living organisms. The only ones we will be concerned with here are those composed of glucose , the most important of the six-carbon hexoses . Glucose serves as the primary fuel of most organisms.
Glucose, however, is highly soluble and cannot be easily stored, so organisms make polymeric forms of glucose to set aside as reserve storage , from which glucose molecules can be withdrawn as needed.
Glycogen
In humans and higher animals, the reserve storage polymer is glycogen . It consists of roughly 60,000 glucose units in a highly branched configuration. Glycogen is made mostly in the liver under the influence of the hormone insulin which triggers a process in which digested glucose is polymerized and stored mostly in that organ. A few hours after a meal, the glucose content of the blood begins to fall, and glycogen begins to be broken down in order to maintain the body's required glucose level.
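The figure of roughly 60,000 glucose units implies an enormous molar mass, since each condensed glucose residue contributes about 162 g/mol (180 g/mol for free glucose, minus 18 g/mol of water eliminated per glycosidic link). A quick back-of-envelope check (the unit count is the approximate value quoted above):

```python
glucose_mw = 180.16                  # g/mol, free glucose
water_mw = 18.02                     # g/mol eliminated per glycosidic bond
residue_mw = glucose_mw - water_mw   # ~162 g/mol per polymerized unit
n_units = 60_000                     # approximate residues per glycogen molecule
glycogen_mw = n_units * residue_mw
print(f"{glycogen_mw:.2e} g/mol")    # on the order of 1e7 g/mol
```

A single glycogen molecule thus weighs in at roughly ten million g/mol, which is why it behaves as a colloidal particle rather than a dissolved small molecule.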
Starch
In plants, these glucose-polymer reserves are known as starch . Starch granules are stored in seeds or tubers to provide glucose for the energy needs of newly-germinated plants, and in the twigs of deciduous plants to tide them over during the winter when photosynthesis (the process in which glucose is synthesized from CO 2 and H 2 O) does not take place. The starches in food grains such as rice and wheat, and in tubers such as potatoes, are a major nutritional source for humans.
Plant starches are mixtures of two principal forms, amylose and amylopectin . Amylose is a largely-unbranched polymer of 500 to 20,000 glucose molecules that curls up into a helical form that is stabilized by internal hydrogen bonding. Amylopectin is a much larger polymer having up to two million glucose residues arranged into branches of 20 to 30 units.
Cellulose and its derivatives
Cellulose is the most abundant organic compound on the earth. Extensive hydrogen bonding between the chains causes native cellulose to be about 70% crystalline. It also raises the melting point (>280°C) to above its combustion temperature. The structures of starch and cellulose appear to be very similar; in the latter, every other glucose molecule is "upside-down". But the consequences of this are far-reaching; starch can dissolve in water and can be digested by higher animals including humans, whereas cellulose is insoluble and indigestible. Cellulose serves as the principal structural component of green plants and (along with lignin) in wood.
Cotton is one of the purest forms of cellulose and has been cultivated since ancient times. Its ability to absorb water (which increases its strength) makes cotton fabrics especially useful for clothing in very warm climates.
Cotton also serves (along with treated wood pulp) as the source for the industrial production of cellulose-derived materials, which were the first "plastic" materials of commercial importance.
- Nitrocellulose was developed in the latter part of the 19 th Century. It is prepared by treating cotton with nitric acid, which reacts with the hydroxyl groups in the cellulose chain. It was first used to make molded objects, and was the first material used by Eastman Kodak as a photographic film base. Its extreme flammability posed considerable danger in movie theaters, and its spontaneous slow decomposition over time had seriously degraded many early films before they were transferred to more stable media. Nitrocellulose was also used as an explosive and propellant, for which applications it is known as guncotton .
- Cellulose acetate was developed in the early 1900s and became the first artificial fiber that was woven into fabrics that became prized for their lustrous appearance and wearing comfort. Kodak developed it as a "safety film" base in the 1930's to replace nitrocellulose, but it did not come into wide use for this purpose until 1948. A few years later, it became the base material for magnetic recording tape.
- Viscose is the general term for "regenerated" forms of cellulose made from solutions of the polymer in certain strong solvents. When extruded into a thin film it becomes cellophane which has been used as a food wrapping since 1912 and is the base for transparent adhesive tapes such as Scotch Tape. Viscose solutions extruded through a spinneret produce fibers known as rayon. Rayon was the first "artificial silk" and has been used for tire cord, apparel, and carpets. It was popular for women's stockings before Nylon became available for this purpose.
Rubber
A variety of plants produce a sap consisting of a colloidal dispersion of cis -polyisoprene. This milky fluid is especially abundant in the rubber tree ( Hevea ), from which it drips when the bark is wounded. After collection, the latex is coagulated to obtain the solid rubber. Natural rubber is thermoplastic, with a glass transition temperature of –70°C.
cis-polyisoprene
Raw natural rubber tends to be sticky when warm and brittle when cold, so it was little more than a novelty material when first introduced to Europe around 1770. It did not become generally useful until the mid-nineteenth century when Charles Goodyear found that heating it with sulfur — a process he called vulcanization — could greatly improve its properties.
Vulcanization creates disulfide cross-links that prevent the polyisoprene chains from sliding over each other. The degree of cross-linking can be controlled to produce a rubber having the desired elasticity and hardness. More recently, other kinds of chemical treatment (such as epoxidation) have been developed to produce rubbers for special purposes.
Polymers and the environment

"Better things for better living... through chemistry" is a famous commercial slogan that captured the attitude of the public around 1940, when synthetic polymers were beginning to make a major impact on people's lives. What was not realized at the time, however, were some of the problems these materials would create as their uses multiplied and the world became more wary of "chemicals". (DuPont dropped the "through chemistry" part in 1982.)
Small-molecule release
Many kinds of polymers contain small molecules — either unreacted monomers, or substances specifically added (plasticizers, uv absorbers, flame retardants, etc.) to modify their properties. Many of these smaller molecules are able to diffuse through the material and be released into any liquid or air in contact with the plastic — and eventually into the aquatic environment. Those that are used for building materials (in mobile homes, for example) can build up in closed environments and contribute to indoor air pollution.
Residual monomer
Formation of long polymer chains is a complicated and somewhat random process that is never perfectly stoichiometric. It is therefore not uncommon for some unreacted monomer to remain in the finished product. Some of these monomers, such as formaldehyde, styrene (from polystyrene, including polystyrene foam food take-out containers), vinyl chloride, and bisphenol-A (from polycarbonates) are known carcinogens. Although there is little evidence that the small quantities that diffuse into the air or leach out into fluids pose a quantifiable health risk, people are understandably reluctant to tolerate these exposures, and public policy is gradually beginning to regulate them.
Perfluorooctanoic acid (PFOA), long used as a processing aid in the manufacture of Teflon, was the subject of a 2004 lawsuit against a DuPont factory that contaminated groundwater. Small amounts of PFOA have also been detected in gaseous emissions from hot fluorocarbon products.
Plasticizers
These substances are compounded into certain types of plastics to render them more flexible by lowering the glass transition temperature. They accomplish this by taking up space between the polymer chains and acting as lubricants to enable the chains to more readily slip over each other. Many (but not all) are small enough to be diffusible and a potential source of health problems.
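The magnitude of this T g depression is often estimated with the Fox equation (added here for illustration; it is not part of the original text). Treating the plasticized material as a blend of polymer (weight fraction w 1) and diluent (w 2), with all temperatures in kelvins:

```latex
\frac{1}{T_g} \;=\; \frac{w_1}{T_{g,1}} + \frac{w_2}{T_{g,2}}
```

For example, taking PVC (T g = 85°C = 358 K, the value given in the table above) and a hypothetical plasticizer with T g = 188 K at 30 wt%, the equation gives 1/T g = 0.70/358 + 0.30/188 ≈ 3.55 × 10⁻³ K⁻¹, or T g ≈ 282 K (about 9°C) — consistent with rigid PVC becoming flexible at room temperature once plasticized.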
Polyvinyl chloride polymers are one of the most widely-plasticized types, and the odors often associated with flexible vinyl materials such as garden hoses, waterbeds, cheap shower curtains, raincoats and upholstery are testament to their ability to migrate into the environment.
The well-known "new car smell" is largely due to plasticizer release from upholstery and internal trim.
There is now an active movement to develop non-diffusible and "green" plasticizers that do not present these dangers.
Endocrine disrupters
To complicate matters even further, many of these small molecules have been found to be physiologically active owing to their ability to mimic the action of hormones or other signaling molecules, probably by fitting into and binding with the specialized receptor sites present in many tissues. The evidence that many of these chemicals are able to act in this way at the cellular level is fairly clear, but there is still some dispute whether many of these pose actual health risks to adult humans at the relatively low concentrations in which they commonly occur in the environment.
There is, however, some concern about the effects of these substances on non-adults and especially on fetuses, given that endocrines are intimately connected with sexual differentiation and neurological development which continues up through the late teens.
Decomposition products
Most commonly-used polymers are not readily biodegradable, particularly under the anaerobic conditions of most landfills. And what decomposition does occur will combine with rainwater to form leachates that can contaminate nearby streams and groundwater supplies. Partial photodecomposition, initiated by exposure to sunlight, is a more likely long-term fate for exposed plastics, resulting in tiny broken-up fragments. Many of these materials are less dense than seawater, and once they enter the oceans through coastal sewage outfalls or from marine vessel wastes, they tend to remain there indefinitely.
Open burning of polymeric materials containing chlorine (polyvinyl chloride, for example) is known to release compounds such as dioxins that persist in the environment. Incineration under the right conditions can effectively eliminate this hazard.
Disposed products containing fluorocarbons (Teflon-coated ware, some personal-care, waterproofing and anti-stick materials) break down into perfluorooctane sulfonate which has been shown to damage aquatic animals.
Hazards to animals
There are two general types of hazards that polymers can introduce into the aquatic environment. One of these relates to the release of small molecules that act as hormone disrupters as described above. It is well established that small aquatic animals such as fish are being seriously affected by such substances in many rivers and estuarine systems, but details of the sources and identities of these molecules have not been identified. One confounding factor is the release of sewage water containing human birth-control drugs (which have a feminizing effect on sexual development) into many waterways.
The other hazard relates to pieces of plastic waste that aquatic animals mistake for food or become entangled in.
This plastic bag (probably mistaken for a jellyfish, the sea turtle's only food) cannot be regurgitated and leads to intestinal blockage and a slow death.

Remains of an albatross that mistook bits of plastic junk for food.
These dangers occur throughout the ocean, but are greatly accentuated in regions known as gyres. These are regions of the ocean in which a combination of ocean currents drives permanent vortices that tend to collect and concentrate floating materials. The most notorious of these are the Great Pacific Gyres that have accumulated astounding quantities of plastic waste.
Recycling
The huge quantity (one estimate is 10⁸ metric tons per year) of plastic materials produced for consumer and industrial use has created a gigantic problem of what to do with plastic waste, which is difficult to incinerate safely and which, being largely non-biodegradable, threatens to overwhelm the capacity of landfills. An additional consideration is that de novo production of most of the major polymers consumes non-renewable hydrocarbon resources.
Plastic water bottles (left) present a special recycling problem because of their widespread use in away-from-home locations.
Plastics recycling has become a major industry, greatly aided by enlightened trash management policies in the major developed nations. However, it is plagued with some special problems of its own:
- Recycling is only profitable when there is a market for the regenerated material. Such markets vary with the economic cycle (they practically disappeared during the recession that commenced in 2008).
- The energy-related costs of collecting and transporting plastic waste, and especially of processing it for re-use, are frequently the deciding factor in assessing the practicability of recycling.
- Collection of plastic wastes from diverse sources and locations and their transport to processing centers consumes energy and presents numerous operational problems.
- Most recycling processes are optimized for particular classes of polymers. The diversity of plastic types necessitates their separation into different waste streams — usually requiring manual (i.e., low-cost) labor. This in turn encourages shipment of these wastes to low-wage countries, thus reducing the availability of recycled materials in the countries in which the plastics originated.
Some of the major recycling processes include
- Thermal decomposition processes that can accommodate mixed kinds of plastics and render them into fuel oil, but the large inputs of energy they require have been a problem.
- A very small number of condensation polymers can be depolymerized so that the monomers can be recovered and re-used.
- Thermopolymers can be melted and pelletized, but those of widely differing types must be treated separately to avoid incompatibility problems.
- Thermosets are usually shredded and used as filler material in recycled thermopolymers.
In order to facilitate efficient recycling, a set of seven resin identification codes has been established (the seventh, not shown below, is "other").
These codes are stamped on the bottoms of many containers of widely-distributed products. Not all categories are accepted by all local recycling authorities, so residents need to be informed about which kinds should be placed in recycling containers and which should be combined with ordinary trash.
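The code-to-polymer assignments are the standard ones (they are not listed explicitly in the text above); a small lookup table makes the scheme concrete, using the polymers discussed earlier in this chapter:

```python
# Standard resin identification codes; code 7 is the catch-all "other".
RESIN_CODES = {
    1: "PET (polyethylene terephthalate)",
    2: "HDPE (high-density polyethylene)",
    3: "PVC (polyvinyl chloride)",
    4: "LDPE (low-density polyethylene)",
    5: "PP (polypropylene)",
    6: "PS (polystyrene)",
    7: "Other (e.g. polycarbonate, acrylic)",
}

def resin_name(code: int) -> str:
    """Return the polymer corresponding to a stamped recycling code."""
    return RESIN_CODES.get(code, "unknown code")

print(resin_name(6))  # PS (polystyrene)
```

A local recycling authority's accepted list can then be expressed as a subset of these seven codes, which is exactly the decision residents face at the curb.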
Tire recycling
The large number of rubber tires that are disposed of, together with the increasing reluctance of landfills to accept them, has stimulated considerable innovation in the re-use of this material, especially in the construction industry.

(Adapted from "7.9: Polymers and Plastics" in Chem1 by Stephen Lower, licensed CC BY 3.0.)
7.10: Colloids and their Uses
- Summarize the principal distinguishing properties of solutions, colloidal dispersions, and suspensions.
- For the various dispersion types (emulsion, gel, sol, foam, etc.), name the type (gas, liquid, or solid) of both the dispersed phase and the dispersion medium.
- Describe the origins of Brownian motion and how it can be observed.
- Describe the electric double layer that surrounds many colloidal particles.
- Explain the mechanisms responsible for the stability of lyophilic and lyophobic colloidal dispersions.
- Define: surfactant, detergent, emulsifier, micelle.
- Give some examples of how colloidal dispersions can be made.
- Explain why freezing or addition of an electrolyte can result in the coagulation of an emulsion.
- Describe some of the colloid-related principles involved in food chemistry, such as the stabilization of milk and mayonnaise, the preparation of butter, and the various ways of cooking eggs.
- Describe the role of colloids in wastewater treatment.
Sand, salt, and chalk dust are made up of chunks of solid particles, each containing huge numbers of molecules. You can usually see the individual particles directly, although the smallest ones might require some magnification. At the opposite end of the size scale, we have individual molecules which dissolve in liquids to form homogeneous solutions. There is, however, a vast but largely hidden world in between: particles so tiny that they cannot be resolved by an optical microscope, or molecules so large that they begin to constitute a phase of their own when they are suspended in a liquid. This is the world of colloids which we will survey in this lesson. As you will see, we encounter colloids in the food we eat, the consumer products we buy... and we ourselves are built largely of colloidal materials.
Introducing Colloids
Colloids occupy an intermediate place between [particulate] suspensions and solutions, both in terms of their observable properties and particle size. In a sense, they bridge the microscopic and the macroscopic. As such, they possess some of the properties of both, which makes colloidal matter highly adaptable to specific uses and functions. Colloid science is central to biology, food science and numerous consumer products.
- Solutions are homogeneous mixtures whose component particles are individual molecules whose smallest dimension is generally less than 1 nm. Within this size range, thermal motions maintain homogeneity by overcoming the effects of gravitational attraction.
- Colloidal dispersions appear to be homogeneous, and the colloidal particles they contain are small enough (generally between 1-1000 nm) to exhibit Brownian motion, cannot be separated by filtration, and do not readily settle out. But these dispersions are inherently unstable and under certain circumstances, most colloidal dispersions can be "broken" and will "flocculate" or settle out.
- Suspensions are heterogeneous mixtures in which the suspended particles are sufficiently large (> 1000 nm in their smallest dimension) to settle out under the influence of gravity or centrifugal force. The particles that form suspensions are sometimes classified into various size ranges.
Colloidal particles need not fall within the indicated size range in all three dimensions; thus fibrous colloids such as many biopolymers may be greatly extended along one direction.
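The three size regimes can be captured in a short sketch. This is a minimal Python illustration; the 1 nm and 1000 nm cutoffs are the approximate boundaries quoted above, and `classify_mixture` is a name invented here:

```python
# Rough classifier for a mixture based on the smallest particle dimension,
# using the approximate size ranges given in the text.
def classify_mixture(size_nm: float) -> str:
    """Classify a dispersion by its particle size in nanometres."""
    if size_nm < 1:
        return "solution"
    elif size_nm <= 1000:
        return "colloidal dispersion"
    else:
        return "suspension"

print(classify_mixture(0.5))    # sugar molecule in water -> solution
print(classify_mixture(100))    # butterfat droplet in milk -> colloidal dispersion
print(classify_mixture(50000))  # sand grain in water -> suspension
```

Keep in mind that the boundaries are fuzzy in practice; as the note above says, a fibrous biopolymer may be colloidal in two dimensions and macroscopic in the third.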
The nature of colloidal particles
To begin, you need to recall two important definitions:
- a phase is defined as a region of matter in which the composition and physical properties are uniform. Thus ice and liquid water, although two forms of the single substance H 2 O, constitute two separate phases within a heterogeneous mixture .
- A solution is a homogeneous mixture of two or more substances consisting of a single phase. (Think of sugar dissolved in water).
But imagine that you are able to shrink your view of a solution of sugar in water down to the sub-microscopic level at which individual molecules can be resolved: you would see some regions of space occupied by H 2 O molecules, others by sugar molecules, and likely still others in which sugar and H 2 O molecules are momentarily linked together by hydrogen bonding— not to mention the void spaces that are continually appearing and disappearing between molecules as they are jumbled about by thermal motions. As with so many simple definitions, the concept of homogeneity (and thus of a solution ) breaks down as we move from the macro-scale into the molecular scale. And it is the region in between these two extremes that constitutes the realm of the colloid.
Smaller is bigger
What makes colloidal particles so special is not so much their sizes as it is the manner in which their surface areas increase as their sizes decrease. If we take a sample of matter and cut it up into smaller and smaller chunks, the total surface area will increase very rapidly. Although mass is conserved, surface area is not; as a solid is sliced up into smaller bits, more surfaces are created. These new surfaces are smaller, but there are many more of them; the ratio of surface area to mass can become extremely large.
- Consider a cube of material having a length of exactly 1 cm. What will be the surface area of this cube?
- Now let us cut this cube into smaller cubes by making 10 slices in each direction. How many smaller cubes will this make, and what will be the total surface area?
Solution
- A cube possesses six square surfaces, so the total surface area is 6 × (1 cm 2 ) = 6 cm 2 .
- Each new cube has a face length of 0.10 cm, and thus a surface area of 6 × (0.01 cm 2 ) = 0.06 cm 2 . But there are 10 3 of these smaller cubes, so the total surface area is now 60 cm 2 -- quite a bit larger than it was originally!
The total surface area increases in inverse proportion to the face length, so as we make our slices still smaller, the total surface area grows rapidly. In practical situations with real colloids, surface areas can reach hectares (or acres) per mole!
| number of slices per cube face | length of each face (cm) | surface area per face | number of cubes | total surface area |
|---|---|---|---|---|
| 0 | 1 | 1 cm 2 | 1 | 6 cm 2 |
| 10 | 0.1 | 0.01 cm 2 | 1000 | 60 cm 2 |
| 100 | 0.01 | 10 –4 cm 2 | 10 6 | 600 cm 2 |
| 1000 | 10 –3 | 10 –6 cm 2 | 10 9 | 0.6 m 2 |
| n | 1/ n | n –2 cm 2 | n 3 | 6 n cm 2 |
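The arithmetic in the table is easy to reproduce. The sketch below (a hypothetical helper, not from the text) slices a 1 cm cube into n pieces along each edge and reports the number of small cubes and their total surface area:

```python
# Slicing a 1 cm cube into n pieces per edge: mass is conserved,
# but the total surface area grows as 6n cm^2.
def sliced_cube_area(n_slices: int) -> tuple[int, float]:
    """Return (number of cubes, total surface area in cm^2) after slicing
    a 1 cm cube into n_slices pieces along each edge."""
    n = max(n_slices, 1)                 # 0 slices leaves the original cube
    face = 1.0 / n                       # edge length of each small cube, cm
    n_cubes = n ** 3
    total_area = n_cubes * 6 * face ** 2  # = 6 * n cm^2
    return n_cubes, total_area

for n in (1, 10, 100, 1000):
    cubes, area = sliced_cube_area(n)
    print(f"{n:>5} slices: {cubes:>12} cubes, total area {area:>8.1f} cm^2")
```

At n = 1000 the faces are only 10 µm across and the total area is already 0.6 m²; real colloidal particles, a hundred times smaller still, multiply the area accordingly.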
Why do we focus so much attention on surface area? The general answer is that surfaces (or more generally, interfaces between phases) possess physical and chemical properties of their own. In particular,
- Surfaces can exert van der Waals attractive forces on other molecules near them, and thus loosely bind other particles by adsorption
- Interfaces between different phases usually give rise to imbalances in electrical charge which can cause them to interact with nearby ions.
- The surfaces of many solids present "broken bonds" which are chemically active.
In normal "bulk" matter, these properties are mostly hidden from us owing to the small amount of surface area in relation to the quantity of matter. But as the particle size diminishes, surface phenomena begin to dominate. The small sizes of colloidal solids allow the properties of their surfaces to dominate their behavior.
Colloidal Dispersions
Colloidal matter commonly exists in the form of colloidal-sized phases of solids, liquids, or gases that are uniformly dispersed in a separate medium (sometimes called the dispersion medium ) which may itself be a solid, liquid, or gas. Colloids are often classified and given special names according to the particular kinds of phases involved.
| dispersed phase | medium | dispersion type | examples |
|---|---|---|---|
| gas | liquid | foam | whipped cream |
| gas | solid | solid foam | pumice 1 , aerogels 2 |
| liquid | gas | liquid aerosol | fog, clouds |
| liquid | liquid | emulsion | milk 3 , mayonnaise, salad dressing |
| liquid | solid | gel | Jell-O, lubricating greases, opal 4 |
| solid | gas | solid aerosol | smoke |
| solid | liquid | sol | paints, some inks, blood |
| solid | solid | solid sol | bone, colored glass, many alloys |
Notes on this table:
- Pumice is a volcanic rock formed by the rapid depressurization and cooling of molten lava. The sudden release of pressure as the lava is ejected from the volcano allows dissolved gases to expand, producing tiny bubbles that get frozen into the matrix. Pumice is distinguished from other rocks by its very low density.
- Aerogels are manufactured rigid solids made by removing the liquid from gels, leaving a solid, porous matrix that can have remarkable and useful physical properties. Aerogels based on silica, carbon, alumina and other substances are available.
- Milk is basically an emulsion of butterfat droplets dispersed in an aqueous solution of carbohydrates.
- Opal consists of droplets of liquid water dispersed in a silica (SiO 2 ) matrix.
Large molecules can behave as colloids
Very large polymeric molecules such as proteins, starches and other biological polymers, as well as many natural polymers, exhibit colloidal behavior. There is no clear point at which a molecule becomes sufficiently large to behave as a colloidal particle.
Macroscopic or microscopic?
Colloidal dispersions behave very much like solutions in that they appear to be homogeneous on a macroscopic scale. They are often said to be microheterogeneous . The most important feature that distinguishes them from other particulate matter is that:
Colloids dispersed in liquids or gases are sufficiently small that they do not settle out under the influence of gravity. This, together with their ability to pass through most filters, makes it difficult to separate colloidal matter from the phase in which it is dispersed.
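One way to make the "do not settle out" claim quantitative is Stokes' law for the terminal velocity of a small sphere in a viscous fluid, v = 2 r² (ρp − ρf) g / (9 η). The sketch below uses illustrative numbers for silica particles in water; the density and viscosity values are assumptions, not figures from the text:

```python
# Stokes settling velocity for a sphere in a viscous fluid:
#   v = 2 r^2 (rho_p - rho_f) g / (9 eta)
# Illustrative numbers (silica in water) -- assumptions, not from the text.
G = 9.81          # m s^-2, gravitational acceleration
ETA = 1.0e-3      # Pa s, viscosity of water near room temperature
RHO_F = 1000.0    # kg m^-3, density of water
RHO_P = 2200.0    # kg m^-3, density of silica

def settling_velocity(radius_m: float) -> float:
    """Stokes terminal velocity in m/s (valid only in the laminar regime)."""
    return 2 * radius_m**2 * (RHO_P - RHO_F) * G / (9 * ETA)

for radius, label in [(50e-9, "100 nm colloid"), (5e-6, "10 um suspension particle")]:
    v = settling_velocity(radius)
    print(f"{label}: {v:.2e} m/s  (~{v * 86400 * 1000:.2f} mm/day)")
```

The 100 nm colloid settles well under a millimetre per day, slow enough that Brownian motion and convection keep it suspended indefinitely, while the 10 µm particle settles tens of thousands of times faster.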
Optical properties of colloidal dispersions
Colloidal dispersions are distinguished from true solutions by their light-scattering properties. The nature of this scattering depends on the ratio of the particle size to the wavelength of the light. A collimated beam of light passing through a true solution of ordinary molecules tends to retain its shape, but when such a beam is directed through a colloidal dispersion, it spreads out.
John Tyndall discovered this effect in 1869. Tyndall scattering (as it is commonly known) scatters all wavelengths equally. This is in contrast to Rayleigh scattering , which scatters shorter wavelengths more strongly, bringing us blue skies and red sunsets. Tyndall scattering can be seen even in dispersions that are transparent. As the density of particles (or the particle size) increases, the light scattering may become great enough to produce a "cloudy" effect, as in a smoke-filled room. This is the reason that milk, fog, and clouds themselves appear to be white. The individual water droplets in clouds (or the butterfat droplets in milk) are actually transparent, but the intense light scattering disperses the light in all directions, preventing us from seeing through them.
Colloidal particles are, like molecules, too small to be visible through an ordinary optical microscope. However, if one looks in a direction perpendicular to the light beam, a colloidal particle will "appear" over a dark background as a tiny speck due to the Tyndall scattering. A microscope specially designed for this application is known as an ultramicroscope . Bear in mind that the ultramicroscope (invented in Austria in 1902) does not really allow us to "see" the particle; the scattered light merely indicates where it is at any given instant.
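The λ⁻⁴ dependence of Rayleigh scattering mentioned above is easy to quantify. The short sketch below compares the relative scattered intensity of blue and red light; the specific wavelengths (450 nm and 700 nm) are illustrative choices, not values from the text:

```python
# Rayleigh scattering intensity scales as 1/lambda^4, so short (blue)
# wavelengths scatter far more strongly than long (red) ones.
def rayleigh_ratio(lam_short_nm: float, lam_long_nm: float) -> float:
    """Relative scattered intensity of the shorter vs. the longer wavelength."""
    return (lam_long_nm / lam_short_nm) ** 4

ratio = rayleigh_ratio(450, 700)
print(f"Blue (450 nm) is Rayleigh-scattered {ratio:.1f}x more strongly than red (700 nm)")
```

Tyndall scattering from larger colloidal particles lacks this strong wavelength bias, which is why clouds and milk look white rather than blue.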
Brownian motion
If you observe a single colloidal particle through the ultramicroscope, you will notice that it is continually jumping around in an irregular manner. These movements are known as Brownian motion. Scottish botanist Robert Brown discovered this effect in 1827 when observing pollen particles floating in water through a microscope. (Pollen particles are larger than colloids, but they are still small enough to exhibit some Brownian motion.)
It is worth noting that Albert Einstein's 1905 analysis of Brownian motion provided key quantitative evidence for the molecular theory of matter. Brownian motion arises from collisions of the liquid molecules with the solid particle. For large particles, the millions of collisions from different directions cancel out, so they remain stationary. The smaller the particle, the smaller the number of surrounding molecules able to collide with it, and the more likely that random fluctuations will occur in the number of collisions from different sides. Simple statistics predicts that every once in a while, the imbalance in collisions from different directions will become great enough to give the particle a real kick!
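The statistical picture above can be illustrated with a toy simulation (an invented sketch, not the author's treatment): a particle takes unit steps in random directions, and the mean squared displacement, averaged over many walkers, grows linearly with the number of steps, in line with Einstein's analysis:

```python
import random

# Toy 2-D random walk: the mean squared displacement (MSD) of a Brownian
# particle grows linearly with the number of steps, per Einstein's analysis.
def mean_squared_displacement(n_steps: int, n_particles: int = 1000,
                              seed: int = 42) -> float:
    """Average squared distance from the origin after n_steps unit steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_particles):
        x = y = 0.0
        for _ in range(n_steps):
            # each molecular "kick" moves the particle one unit along a random axis
            axis, sign = rng.randrange(2), rng.choice((-1.0, 1.0))
            if axis == 0:
                x += sign
            else:
                y += sign
        total += x * x + y * y
    return total / n_particles

for n in (10, 100, 1000):
    print(f"{n:>5} steps: MSD ~ {mean_squared_displacement(n):.1f}")
```

Each individual walk is erratic and unpredictable, but the ensemble average settles onto MSD ≈ n, the linear-in-time growth that distinguishes diffusion from ballistic motion.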
Electrical Properties of Colloids
In general, differences in electric potential exist between all phase boundaries. If you have studied electrochemistry, you will know that two dissimilar metals in contact exhibit a "contact potential", and that similar potential differences exist between a metal and a solution in which it is immersed. But this principle extends well beyond ordinary electrochemistry; there are small potential differences even at the water-glass interface in a drinking glass, and the water-air interface above it.
Colloids are no exception to this rule; there is always a difference in electric potential between the colloid "phase" and that of the surrounding liquid. Even if the liquid consists of pure water, the polar H 2 O molecules at the colloid's surface are likely to be predominantly oriented with either their oxygen (negative) or hydrogen (positive) ends facing the interface, depending on the electrical properties of the colloid particle itself.
Interfacial electrical potential differences can have a variety of origins:
- Particles composed of ionic or ionizable substances usually have surface charges due to adsorption of an ion (usually an anion) from the solution, or to selective loss of one kind of ion from the crystal surface. For example, Ag + ions on the surface of a silver iodide crystal go into solution more readily than the I – ions, leaving a negatively-charged surface.
- The charges of amphiprotic groups such as those on the surfaces of metal oxides and hydroxides will vary with the pH of the aqueous medium. Thus a particle of a metal oxide M–O will become positive in acidic solution due to formation of M–OH + , while that of a sparingly soluble hydroxide M–OH will become negative at high pH as it changes to M–O – . Colloidal-sized protein molecules can behave in a similar manner owing to the behavior of amphiprotic carboxylate-, amino- and sulfhydryl groups.
- Non-ionic particles or droplets such as oils or latex will tend to selectively adsorb positive or negative ions present in solution, thus "coating themselves" with electrical charge.
- In clays and other complex structures, isomorphous replacement of one ion by another having a different charge will leave a net electric charge on the particle. Thus particles of kaolinite clay become negatively charged due to replacement of some of the Si 4 + ions by Al 3 + .
Charged colloidal particles will attract an excess of oppositely-charged counter-ions to their vicinity from the bulk solution, forming a localized "cloud" of compensating charge around each particle. The entire assembly is called an electric double layer. Electric double layers of one kind or another exist at all phase boundaries, but those associated with colloids are especially important.
Stability of colloidal dispersions
What keeps the colloidal particles suspended in the dispersion medium? How can we force the particles to settle out? These are very important practical matters:
- Colloidal products such as paints and many foods (e.g., milk) must remain in dispersed form if they are to be useful;
- Other dispersions, often those formed as by-products of operations such as mining, water treatment, paper manufacture, or combustion, are environmental nuisances. The only practical way of disposing of them is to separate the colloidal material from the much greater volume of the dispersion medium (most commonly water). Simple evaporation of the water is usually not a practical option; it is generally too slow, or too expensive if forced by heating.
You will recall that weak attractive forces act between matter of all kinds. These are known generally as van der Waals and dispersion forces, and they only "take hold" at very close distances. Countering these is the universal repulsive force that acts at even shorter distances, but is far stronger; it is the basic reason why two atoms cannot occupy the same space. For very small atomic and molecular sized particles, another thing that keeps them apart is thermal motion. Thus when two molecules in a gas collide, they do so with more than enough kinetic energy to overcome the weak attractive forces between them. As the temperature of the gas is reduced, so is the collisional energy; below its boiling point, the attractive forces dominate and the gas condenses into a liquid.
Electrical forces help keep colloids dispersed
When particles of colloidal dimension suspended in a liquid collide with each other, they do so with much smaller kinetic energies than is the case for gases, so in the absence of any compensating repulsion forces, we might expect van der Waals or dispersion attractions to win out. This would quickly result in the growth of aggregates sufficiently large to exceed colloidal size and to fall to the bottom of the container. This process is called coagulation .
So how do stable dispersions such as sols manage to survive? In the preceding section, we saw that each particle with its double layer is more or less electrically neutral. However, when two particles approach each other, each one "sees" mainly the outer part of the double layer of the other. These will always have the same charge sign (which depends on the type of colloid and the nature of the medium), so there will be an electrostatic repulsive force that opposes the dispersion force attractions.
Electrostatic (coulombic) forces have a strong advantage in this respect because they act over much greater distances than do van der Waals forces. But as we will see further on, electrostatic repulsion can lose its effectiveness if the ionic concentration of the medium is too great, or if the medium freezes. Under these conditions, there are other mechanisms that can stabilize colloidal dispersions.
Interactions with the solvent
Colloids can be divided into two general classes according to how the particles interact with the dispersion medium (often referred to as the "solvent").
Lyophilic colloids
In one class of colloids, called lyophilic ("solvent loving") colloids, the particles contain chemical groups that interact strongly with the solvent, creating a sheath of solvent molecules that physically prevent the particles from coming together. Ordinary gelatine is a common example of a lyophilic colloid. It is in fact hydrophilic , since it forms strong hydrogen bonds with water. When you mix Jell-O or tapioca powder to make a gelatine dessert, the material takes up water and forms a stable colloidal gel. Lyophilic (hydrophilic) colloids are very common in biological systems and in foods.
Lyophobic colloids
Most of the colloids in manufactured products exhibit very little attraction to water: think of oil emulsions or glacially-produced rock dust in river water. These colloids are said to be lyophobic . Lyophobic colloids are all inherently unstable; they will eventually coagulate . However, "eventually" can be a very long time (the settling time for some clay colloids in the ocean is 200-600 years!).
In the absence of added stabilizers, dispersions of these colloids rely on the electrostatic repulsion between the electric double layers surrounding the particles, which we discussed in the preceding section. For systems in which coagulation proceeds too rapidly, the process can be slowed down by adding a stabilizer. Stabilizers can act by coating the particles with a protective layer such as a polymer, as described immediately below, or by providing an ion that is selectively adsorbed by the particle, thereby surrounding it with a charged sheath that will repel similar particles it collides with.
Stabilization by cloaking
"Stabilization by stealth" has unwittingly been employed since ancient times through the use of natural gums to stabilize pigment particles in inks, paints, and pottery glazes. These gums are also widely used to stabilize foods and personal care products. A lyophobic colloid can be made to masquerade as lyophilic by coating it with something that itself possesses suitable lyophilic properties.
Steric stabilization
Alternatively, attaching a lyophobic material to a colloid of any type can surround the particles with a protective shield that physically prevents the particles from approaching close enough to join together. This method usually employs synthetic polymers and is often referred to as steric stabilization .
Synthetic polymers , which can be tailor-made for specific applications, are now widely employed for both purposes. The polymer can be attached to the central particle either by simple adsorption or by chemical bond formation.
Surfactants and micelle formation
Surfactants and detergents are basically the same thing. Surfactants that serve as cleaning agents are commonly called detergents (from L. detergere "to wipe away, cleanse"). Surfactants are molecules consisting of a hydrophilic "head" connected to a hydrophobic chain. Because such molecules can interact with both "oil" and water phases, they are often said to be amphiphilic . Typical of these is the well-known cleaning detergent sodium dodecyl sulfate ("sodium lauryl sulfate") CH 3 (CH 2 ) 11 OSO 3 – Na + .
Amphiphiles possess the very important property of being able to span an oil-water interface. By doing so, they can stabilize emulsions of both the water-in-oil and oil-in-water types. Such molecules are essential components of the lipid bilayers that surround the cells and cellular organelles of living organisms.
Emulsions are inherently unstable; left alone, they tend to separate into "oil" and "water" phases. Think of a simple salad dressing made by shaking vegetable oil and vinegar. When a detergent-like molecule is employed to stabilize an emulsion, it is often referred to as an emulsifier . The resulting structure is known as a micelle .
Emulsifiers are essential components of many foods. They are widely employed in pharmaceuticals, consumer goods such as lotions and other personal care products, paints and printing inks, and numerous industrial processes.
How detergents remove "dirt"
The "dirt" we are trying to remove consists of oily or greasy materials whose hydrophobic nature makes them resistant to the action of pure water. If the water contains amphiphilic molecules such as soaps or cleaning detergents that can embed their hydrophobic ends in the particles, the latter will present a hydrophilic interface to the water and will thus become "solubilized".
Soaps and detergents can also disrupt the cell membranes of many types of bacteria, for which they serve as disinfectants . However, they are generally ineffective against viruses, which do not possess cell membranes.
Bile: your body's own detergent
Oils and fats are important components of our diets, but being insoluble in water, they are unable to mix intimately with the aqueous fluid in the digestive tract in which the digestive enzymes are dissolved. In order to enable the lipase enzymes (produced by the pancreas) to break down these lipids into their component fatty acids, our livers produce a mixture of surfactants known as bile . The great surface area of the micelles in the resulting emulsion enables efficient contact between the lipase enzymes and the lipid materials.
The liver of the average adult produces about 500 mL of bile per day. Most of this is stored in the gall bladder, where it is concentrated five-fold by removal of water. As partially-digested material exits the stomach, the gall bladder squeezes bile into the top of the small intestine (the duodenum ).
In addition to its action as a detergent (which also aids in the destruction of bacteria that may have survived the high acidity of the gastric fluid), the alkaline nature of the bile salts neutralizes the acidity of the stomach exudate. The bile itself consists of salts of a variety of bile acids, all of which are derived from cholesterol. The cholesterol-like part of the structure is hydrophobic, while the charged end of the salt is hydrophilic.
Microemulsions
Ordinary emulsions are inherently unstable; they do not form spontaneously, and once formed, the drop sizes are sufficiently large to scatter light, producing a milky appearance. As time passes, the average drop size tends to increase, eventually resulting in gravitational separation of the phases.
Microemulsions, in contrast, are thermodynamically stable and can form spontaneously. The drop radii are at the very low end of the colloidal scale, often 100 nm or smaller. This is too small to appreciably scatter visible light, so microemulsions appear visually to be homogeneous systems.
Microemulsions require the presence of one or more surfactants which increase the flexibility and stability of the boundary regions. This allows them to form smaller micelles than surface tension forces would ordinarily allow; in some cases they can form sponge-like bicontinuous mixtures in which "oil" and "water" phases extend throughout the mixture, affording more contact area between the phases.
The uses of microemulsions are quite wide-ranging, with drug delivery, polymer synthesis, enzyme-assisted synthesis, coatings, and enhanced oil recovery being especially prominent.
Making and breaking colloidal dispersions
Particles of colloidal size can be made in two general ways:
- Start with larger particles and break them down into smaller ones ( Dispersion ).
- Build up molecular-sized particles (atoms, ions, or small molecules) into aggregates within the colloidal size range. ( Condensation )
Dispersion processes all require an input of energy as new surfaces are created. For solid particles , this is usually accomplished by some kind of grinding process such as in a ball- or roller-mill. Solids and liquids can also be broken into colloidal dimensions by injecting them into the narrow space between a rapidly revolving shaft and its enclosure, thus subjecting them to a strong shearing force that tends to pull the two sides of a particle in opposite directions.
The application of ultrasound (at about 20 kHz) to a mixture of two immiscible liquids can create liquid-in-liquid dispersions; the process is comparable to what we do when we shake a vinegar-and-oil salad dressing in order to create a more uniform distribution of the two liquids.
Condensation
Numerous methods exist for building colloidal particles from sub-colloidal entities.
- Dissolution followed by precipitation
- This method is useful for dispersing hydrophobic organic substances in water. For example, a sample of paraffin wax is dissolved in ethanol, and the resulting solution is carefully added to a container of boiling water.
- Formation of precipitates under controlled conditions
- The trick here is to prevent the initial colloidal particles of the newly-formed compound from coalescing into an ordinary precipitate, as will ordinarily occur when solutions of two dissolved salts are combined directly. An alternative that is sometimes useful is to form the sol by a chemical process that proceeds more slowly than direct precipitation:
- Sulfur sols are readily formed by oxidation of thiosulfate ions in acidic solution:

\[S_2O_3^{2–} + H_2O \rightarrow S + SO_4^{2–} + 2 H^+ + 2 e^– \]

- Sols of oxides or hydrous oxides of transition metals can often be formed by boiling a soluble salt in water under slightly acidic conditions to prevent formation of insoluble hydroxides:

\[2 Fe^{3+} + 3 H_2O \rightarrow Fe_2O_3 + 6 H^+\]

\[Fe^{3+} + 2 H_2O \rightarrow FeO(OH) + 3 H^+\]

- Addition of a dispersant (usually a surfactant) can sometimes prevent colloidal particles from precipitating. Thus barium sulfate sols can be prepared from barium thiocyanate and (NH 4 ) 2 SO 4 in the presence of potassium citrate.
- Ionic solids can often selectively adsorb cations or anions from solutions containing the same kinds of ions that are present in the crystal lattice, thus coating the particles with protective electric charges. This is probably what happens in the example of Fe 3 + -ion hydrolysis given above.
- Similarly, if a solution of AgNO 3 is added to a dilute solution of potassium iodide, the AgI will form as a negatively-charged sol (AgI)·I – . But if the AgI is precipitated by adding KI to a solution of AgNO 3 , the excess Ag + will adsorb to the new particles, giving a positively-charged sol of (AgI)·Ag + .
How Dispersions are Broken
That oil-in-vinegar salad dressing you served at dinner the other day has now mostly separated into two layers, with unsightly globs of one phase floating in the other. This is surface chemistry in action! Emulsions are fundamentally unstable because molecules near surfaces (i.e., interfaces between phases) are no longer surrounded by their own kind on all sides. The resulting imbalance of intermolecular attractions at the interface exacts an energetic cost that must eventually be repaid through processes that reduce the interfacial area.
The consequent breakup of the emulsion can proceed through various stages:
- Coalescence - smaller drops join together to form larger ones;
- Flocculation - the small drops stick together without fully coalescing;
- Creaming - Most oils have lower densities than water, so the drops float to the surface, but may not completely coalesce;
- Breaking - the ultimate thermodynamic fate and end result of the preceding steps.
The time required for these processes to take place is highly variable, and can be extended by the presence of stabilizer substances. Thus milk , an emulsion of butterfat in water, is stabilized by some of its natural components.
Coagulation and flocculation
The processes described above that allow colloids to remain suspended sometimes fail when conditions change, or equally troublesome, they work entirely too well and make it impossible to separate the colloidal particles from the medium; this is an especially serious problem in wastewater settling basins associated with sewage treatment and operations such as mining and the pulp-and-paper industries.
Coagulation is the general term that refers to the "breaking" of dispersions so that the colloidal particles can be collected, usually by settling out. The term flocculation is often used as a synonym for coagulation, but it is more properly reserved for a special method of effecting coagulation which is described further on. Most coagulation processes act by disrupting the outer (diffuse) part of the electric double layer that gives rise to the electrostatic repulsion between the particles.
"Do not freeze"
Have you ever encountered milk that had previously been frozen? Not likely something you would want to drink! You will see "Do not freeze" labels on many foodstuffs and on colloidal consumer products such as latex house paint. Freezing disrupts the double layer by causing the ions within it to combine into neutral species so that the particles can now approach closely enough for attractive forces to take over, and once they do so, they never let go: coagulation is definitely an irreversible process!
Addition of an electrolyte
Coagulation of water-suspended dispersions can be brought about by raising the ionic concentration of the medium. The added ions will migrate to the oppositely-charged regions of the double layer, thus neutralizing its charges; this effectively reduces the thickness of the double layer, eventually allowing the attractive forces to prevail.
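The "thickness of the double layer" referred to above is quantified by the Debye screening length, which for a 1:1 electrolyte in water falls off as the inverse square root of the ionic strength. The sketch below evaluates the standard expression κ² = 2 n e² / (ε_r ε₀ k_B T); the permittivity and temperature values are conventional assumptions, not figures from the text:

```python
import math

# Debye screening length: the diffuse double layer shrinks as the ionic
# strength of the medium rises, which is why added electrolyte lets
# colloidal particles approach closely enough to coagulate.
EPS0 = 8.854e-12   # F/m, vacuum permittivity
EPS_R = 78.5       # relative permittivity of water at 25 degC (assumed)
KB = 1.381e-23     # J/K, Boltzmann constant
T = 298.15         # K
NA = 6.022e23      # 1/mol, Avogadro constant
E = 1.602e-19      # C, elementary charge

def debye_length_nm(ionic_strength_mol_L: float) -> float:
    """Debye length (nm) for a 1:1 electrolyte in water at 25 degC."""
    n = ionic_strength_mol_L * 1000 * NA           # ions per m^3 (per species)
    kappa_sq = 2 * n * E**2 / (EPS_R * EPS0 * KB * T)
    return 1e9 / math.sqrt(kappa_sq)

for c in (1e-4, 1e-3, 1e-2, 1e-1):
    print(f"{c:.0e} mol/L -> Debye length {debye_length_nm(c):.2f} nm")
```

Going from millimolar to 0.1 molar salt compresses the double layer from roughly 10 nm to under 1 nm, short enough for the van der Waals attractions to take over when particles collide.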
Rivers carry millions of tons of colloidal clay into the oceans. If you fly over the mouth of a river such as the Mississippi, you can sometimes see the difference in color as the clay colloids coagulate due to the action of the salt water.
The coagulated clay accumulates as sediments which eventually form a geographical feature called a river delta.
Gels
A liquid phase dispersed in a solid medium is known as a gel , but this formal definition does not always convey the full sense of the nature of the "solid". The latter may start out as a powdery or granulated material such as natural gelatin or a hydrophilic polymer, but once the gel has formed, the "solid" part is less a "phase" than a cross-linked network that extends throughout the volume of the liquid, whose quantity largely defines the volume of the entire gel.
Hydrogels can contain up to 90% water by weight
Most of the gels we commonly encounter have water as the liquid phase, and thus are called hydrogels ; ordinary gelatin desserts are well known examples.
The "solid" components of hydrogels are usually polymeric materials that have an abundance of hydrophilic groups such as hydroxyl (–OH) that readily hydrogen-bond to water and also to each other, creating an irregular, flexible, and greatly-extendable network. These polymers are sometimes synthesized for this purpose, but are more commonly formed by processing natural materials, including natural polymers such as cellulose.
- Gelatine is a protein-like material made by breaking down the connective tissues of animal skins, organs, and bones. The many polar groups on the resulting protein fragments bind them together, along with water molecules, to form a gel.
- A number of so-called super-absorbent polymers derived from cellulose, polyvinyl alcohol, and other materials can absorb huge quantities of water, and have found uses in products such as disposable diapers, environmental spill control, water-retention media for plants, surgical pads and wound dressings, and protective inner coatings and water-blockers in fiber-optic and electrical cables.
Gels are essential components of a huge variety of consumer products ranging from thickening agents in foods and personal care products to cushioning agents in running shoes.
Gels can be fragile!
You may have noticed that a newly-opened container of yogurt or sour cream appears to be smooth and firm, but once some of the material has been spooned out, little puddles of liquid appear in the hollowed-out depressions.
As the spoon is plunged into the material, it pulls the nearby layers of the gel along with it, creating a shearing action that breaks it apart, releasing the liquid. Anyone who has attacked an egg yolk with a cook's whisk, written with a ball-point pen, or spread latex paint on a wall has made use of this phenomenon which is known as shear thinning .
Our bodies are mostly gels
The interior (the cytoplasm) of each cell in the soft tissues of our bodies consists of a variety of inclusions (organelles) suspended in a gel-like liquid phase called the cytosol. Dissolved in the cytosol are a variety of ions and molecules varying from the small to the large; among the latter, proteins and carbohydrates make up the "solid" portion of the gel structure.
Embedded within the cytosol is the filament-like cytoskeleton which controls the overall shape of the cell and holds the organelles in place.
(In free-living cells such as the amoeba, changes in the cytoskeleton enable the organism to alter its shape and move around to engulf food particles.)
Be thankful for the gels in your body; without them, you would be little more than a bag of gunge-filled liquid, likely to end up as a puddle on the floor!
The individual cells are bound into tissues by the extracellular matrix (ECM) which, on a much larger scale, holds us together and confers an overall structure and shape on the body. The ECM is made of a variety of structural fibers (collagens, elastins) embedded in a gel-like matrix.
6 Applications of colloids
Thickening agents
The usefulness of many industrial and consumer products is strongly dependent on their viscosity and flow properties. Toothpastes, lotions, lubricants, coatings are common examples. Most of the additives that confer desirable flow properties on these products are colloidal in nature; in many cases, they also provide stabilization and prevent phase separation. Since ancient times, various natural gums have been employed for such purposes, and many remain in use today.
More recently, manufactured materials whose properties can be tailored for specific applications have become widely available. Examples are colloidal microcrystalline cellulose, carboxymethyl cellulose, and fumed silica.
Fumed silica is a fine (5-50 nm), powdery form of SiO 2 of exceptionally low bulk density (as little as 0.002 g cm –3 ); the total surface area of one kg can be as great as 60 hectares (148 acres). It is made by spraying SiCl 4 (a liquid) into a flame. It is used as a filler, for viscosity and flow control, as a gelling agent, and as an additive for strengthening concrete.
Food colloids
Most of the foods we eat are largely colloidal in nature. The function of food colloids generally has less to do with nutritional value than appearance, texture, and "mouth feel". The latter two terms relate to the flow properties of the material, such as spreadability and the ability to "melt" (transform from gel to liquid emulsion) on contact with the warmth of the mouth.
Dairy products
Milk is basically an emulsion of lipid oils ("butterfat") dispersed in water and stabilized by phospholipids and proteins. Most of the protein content of milk consists of a group known as caseins which aggregate into a complex micellar structure which is bound together by calcium phosphate units.
Homogenizer
The stabilizers present in fresh milk will maintain its uniformity for 12-24 hours, but after this time the butterfat globules begin to coalesce and float to the top ("creaming"). In order to retard this process, most milk sold after the early 1940's undergoes homogenization in which the oil particles are forced through a narrow space under high pressure. This breaks up the oil droplets into much smaller ones which remain suspended for the useful shelf life of the milk.
Before homogenization became common, milk bottles often had enlarged tops to make it easier to skim off the cream that would separate out.
The structures of cream, yogurt and ice cream are dominated by the casein aggregates mentioned above.
Ice cream is a complex mixture of several colloid types:
- an emulsion (of butterfat globules in a highly viscous aquatic phase);
- a semisolid foam consisting of small (100 μm) air bubbles which are beaten into the mixture as it is frozen. Without these bubbles, the frozen mixture would be too hard to eat conveniently;
- a gel in which a network of tiny (50 μm) ice crystals is dispersed in a semi-glassy aqueous phase containing sugars and dissolved macromolecules.
Whereas milk is an oil (butterfat)-in-water dispersion, butter and margarine have a "reversed" (water-in-oil) arrangement. This transformation is accomplished by subjecting the butterfat droplets in cream to violent agitation ( churning ) which forces the droplets to coalesce into a semisolid mass within which remnants of the water phase are embedded. The greater part of this phase ends up as the by-product buttermilk .
Eggs: colloids for breakfast, lunch, and dessert
A detailed study of eggs and their many roles in cooking can amount to a mini-course in colloid chemistry in itself. There is something almost magical in the way that the clear, viscous "white" of the egg can be transformed into a white, opaque semi-solid by brief heating, or rendered into more intricate forms by poaching, frying, scrambling, or baking into custards, soufflés, and meringues, not to mention tasty omelettes, quiches, and more exotic delights such as the eggah (Arabic) and kuku (Persian) dishes of the Middle-East.
The raw eggwhite is basically a colloidal sol of long-chain protein molecules, all curled up into compact folded forms due to hydrogen bonding between different parts of the same molecule. Upon heating, these bonds are broken, allowing the proteins to unfold. The denuded chains can now tangle and bind to each other, transforming the sol into a cross-linked hydrogel, now so dense that scattered light changes its appearance to opaque white.
What happens next depends very much on the skill of the cook. The idea is to drive out enough of the water entrapped within the gel network to achieve the desired density while retaining enough gel structure to prevent it from forming a rubbery mass, as usually happens with hard-boiled eggs. This is especially important when the egg structure is to be incorporated into other food components as in baked dishes.
The key to all this is temperature control; the eggwhite proteins begin to coagulate at 65°C, and if yolk proteins are present, the mixture is nicely set at about 73°; by 80° the principal (albumin) protein has set, and much above this the gel network will collapse into an overcooked mass. The temperature limit required to avoid this disaster can be raised by adding milk or sugar; the water part of the milk dilutes the proteins, while sugar molecules hydrogen-bond to them, forming a protective shield that keeps the protein strands separated. This is essential when baking custards, but incorporating a bit of cream into scrambled eggs can similarly help them retain their softness.
Whipped cream and meringues
The other colloidal personalities eggs can display are liquid and solid foams. Instead of applying heat to unfold the proteins, we "beat" them; the shearing force of a whisk or egg-beater helps pull them apart, and the air bubbles that get entrapped in the mixture attract the hydrophobic parts of the unfolded proteins and help hold them in place. Sugar will stabilize the foam by raising its viscosity, but will interfere with protein unfolding if added before the foam is fully formed. Sugar also binds the residual water during cooking, retarding its evaporation until the unfolded proteins can be thermally coagulated.
Paints and inks
Paints have been used since ancient times for both protective and decorative purposes. They consist basically of pigment particles dispersed in a vehicle, a liquid capable of forming a stable solid film as the paint "dries".
The earliest protective coatings were made by dissolving plant-derived natural polymers (resins) in an oil such as that of linseed. The double bonds in these oils tend to oxidize when exposed to air, causing the oil to polymerize into an impervious film. The colloidal pigments were stabilized with naturally-occurring surfactants such as polysaccharide gums.
Present-day paints are highly-engineered products specialized for particular industrial or architectural coatings and for marine or domestic use. For environmental reasons, water-based ("latex") vehicles are now preferred.
Inks
The most critical properties of inks relate to their drying and surface properties; they must be able to flow properly and attach to the surface without penetrating it — the latter is especially critical when printing on a porous material such as paper.
Many inks consist of organic dyes dissolved in a water-based solvent, and are not colloidal at all. The ink used in printing newspapers employs colloidal carbon black dispersed in an oil vehicle. The pressure applied by the printing press forces the vehicle into the pores of the paper, leaving most of the pigment particles on the surface.
The inks employed in ball-point pens are gels, formulated in such a way that the ink will only flow over the ball and onto the paper when the shearing action of the ball (which rotates as it moves across the paper) "breaks" the gel into a liquid; the resulting liquid coats the ball and is transferred to the paper. As in conventional printing, the pigment particles remain on the paper surface, while the liquid is pressed into the pores and gradually evaporates.
Water and wastewater treatment
Turbidities of 5, 50, and 500 units. [WikiMedia]
Water intended for drinking, as well as wastewaters such as sewage and effluents from industrial operations such as pulp-and-paper manufacture (most of which are likely to end up being re-used elsewhere), usually contains colloidal matter that cannot be removed by ordinary sand filters, as evidenced by its turbidity. Even "pristine" surface waters often contain suspended soil sediments that can harbor infectious organisms and may provide them with partial protection from standard disinfection treatments.
The usual method of removing turbidity is to add a flocculating agent (flocculant). These are most often metallic salts that can form gel-like hydroxide precipitates, often with the aid of added calcium hydroxide (slaked lime) if the pH of the water must be raised. The sulfates of aluminum (alum) and of iron(III) have long been widely employed for this purpose; synthetic polymers tailored specifically for these applications have more recently come into use.
The flocculant salts neutralize the surface charges of the colloids, thus enabling them to coagulate; these are engulfed and trapped by fragments of gelatinous precipitate, which are drawn together into larger aggregates by gentle agitation until they become sufficiently large to form flocs which can be separated by settling or filtration.
Soil colloids
The four major components of soils are mineral sediments, organic matter, water, and air. The water is primarily adsorbed to the mineral and organic materials, but may also share pore spaces with air; pore spaces constitute about half the bulk volume of a typical soil.
The principal colloidal components of soils are mineral sediments in the form of clays, and the humic materials in the organic matter. In addition to influencing the consistency of soil by binding water molecules, soil colloids play an essential role in storing and exchanging the mineral ions required by plants.
Most soil colloids are negatively charged, and therefore attract cations such as Ca 2 + , Mg 2 + , and K + into the outer parts of their double layers. Because these ions are loosely bound, they constitute a source from which plant roots can draw these essential nutrients. Conversely, they can serve as a sink for these same ions when they are released after the plant dies.
Clays
These are layered structures based on alumino-silicates or hydrous oxides, mostly of iron or aluminum. Each layer is built of two or three sheets of extended silica or alumina structures linked together by shared oxygen atoms. These layers generally have an overall negative charge owing to the occasional replacement of a Si 4 + ion by one of Al 3 + .
Adjacent layers are separated by a region of adsorbed cations (to neutralize the negative charges) and water molecules, and thus are held together relatively loosely. It is these interlayer regions that enable clays to work their magic by exchanging ions with both the soil water and the roots of plants.
Humic substances
The principal organic components of soil are complex substances of indeterminate structure that present –OH and –COOH groups which become increasingly dissociated as the pH increases. This allows them to bind and exchange cations in much the same way as described above.

{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/07%3A_Solids_and_Liquids/7.10%3A_Colloids_and_their_Uses",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "7.10: Colloids and their Uses",
"author": "Stephen Lower"
} |
8: Solutions
Solutions are homogeneous (single-phase) mixtures of two or more components . For convenience, we often refer to the majority component as the solvent ; minority components are solutes ; there is really no fundamental distinction between them. Solutions play a very important role in Chemistry because they allow intimate and varied encounters between molecules of different kinds, a condition that is essential for rapid chemical reactions to occur.
- 8.1: Solutions and their Concentrations
- Concentration is a general term that expresses the quantity of solute contained in a given amount of solution. Various ways of expressing concentration are in use; the choice is usually a matter of convenience in a particular application. You should become familiar with all of them.
- 8.6: Reverse Osmosis
- Applying a hydrostatic pressure greater than this to the high-solute side of an osmotic cell will force water to flow back into the fresh-water side. This process, known as reverse osmosis, is now the major technology employed to desalinate ocean water and to reclaim "used" water from power plants, runoff, and even from sewage. It is also widely used to deionize ordinary water and to purify it for industrial uses (especially beverage and food manufacture) and drinking purposes.
- 8.7: Colligative Properties and Entropy
- All four colligative properties result from “dilution” of the solvent by the added solute. More specifically, these all result from the effect of dilution of the solvent on its entropy, and thus in the increase in the density of energy states of the system in the solution compared to that in the pure liquid.
- 8.8: Ideal vs. Real Solutions
- One might expect the vapor pressure of a solution of ethanol and water to be directly proportional to the sums of the values predicted by Raoult's law for the two liquids individually, but in general, this does not happen. The reason for this can be understood if you recall that Raoult's law reflects a single effect: the smaller proportion of vaporizable molecules (and thus their reduced escaping tendency) when the liquid is diluted by otherwise "inert" (non-volatile) substance.
- 8.9: Distillation
- Distillation is a process whereby a mixture of liquids having different vapor pressures is separated into its components. Since distillation depends on the different vapor pressures of the components to be separated, let's first consider the vapor pressure vs. composition plots for a hypothetical mixture at some arbitrary temperature at which both liquid and gas phases can exist, depending on the total pressure.
- 8.10: Ions and Electrolytes
- Electrolytic solutions are those that are capable of conducting an electric current. A substance that, when added to water, renders it conductive, is known as an electrolyte. A common example of an electrolyte is ordinary salt, sodium chloride. Solid NaCl and pure water are both non-conductive, but a solution of salt in water is readily conductive. A solution of sugar in water, by contrast, is incapable of conducting a current; sugar is therefore a non-electrolyte.
8.1: Solutions and their Concentrations
Make sure you thoroughly understand the following essential ideas:
- Describe the major reasons that solutions are so important in the practical aspects of chemistry.
- Explain why expressing a concentration as " x -percent" can be ambiguous.
- Explain why the molarity of a solution will vary with its temperature, whereas molality and mole fraction do not.
- Given the necessary data, convert (in either direction) between any two concentration units, e.g. molarity - mole fraction.
- Show how one can prepare a given volume of a solution of a certain molarity, molality, or percent concentration from a solution that is more concentrated (expressed in the same units.)
- Calculate the concentration of a solution prepared by mixing given volumes of two solutions whose concentrations are expressed in the same units.
Solutions are homogeneous (single-phase) mixtures of two or more components . For convenience, we often refer to the majority component as the solvent ; minority components are solutes ; there is really no fundamental distinction between them. Solutions play a very important role in Chemistry because they allow intimate and varied encounters between molecules of different kinds, a condition that is essential for rapid chemical reactions to occur. Several more explicit reasons can be cited for devoting a significant amount of effort to the subject of solutions:
- For the reason stated above, most chemical reactions that are carried out in the laboratory and in industry, and that occur in living organisms, take place in solution.
- Solutions are so common; very few pure substances are found in nature.
- Solutions provide a convenient and accurate means of introducing known small amounts of a substance to a reaction system. Advantage is taken of this in the process of titration, for example.
- The physical properties of solutions are sensitively influenced by the balance between the intermolecular forces of like and unlike (solvent and solute) molecules. The physical properties of solutions thus serve as useful experimental probes of these intermolecular forces.
We usually think of a solution as a liquid made by adding a gas, a solid, or another liquid solute to a liquid solvent . Actually, solutions can exist as gases and solids as well.
Solid solutions are very common; most natural minerals and many metallic alloys are solid solutions.
Still, it is liquid solutions that we most frequently encounter and must deal with. Experience has taught us that sugar and salt dissolve readily in water, but that “oil and water don’t mix”. Actually, this is not strictly correct, since all substances have at least a slight tendency to dissolve in each other. This raises two important and related questions: why do solutions tend to form in the first place, and what factors limit their mutual solubilities?
Understanding Concentrations
Concentration is a general term that expresses the quantity of solute contained in a given amount of solution. Various ways of expressing concentration are in use; the choice is usually a matter of convenience in a particular application. You should become familiar with all of them.
Parts-per concentration
In the consumer and industrial world, the most common method of expressing the concentration is based on the quantity of solute in a fixed quantity of solution. The “quantities” referred to here can be expressed in weight, in volume, or both (i.e., the weight of solute in a given volume of solution.) In order to distinguish among these possibilities, the abbreviations (w/w), (v/v) and (w/v) are used.
In most applied fields of Chemistry, (w/w) measure is often used, and is commonly expressed as weight-percent concentration, or simply "percent concentration". For example, a solution made by dissolving 10 g of salt in 200 g of water contains "1 part of salt per 20 parts of water" by weight.
The normal saline solution used in medicine for nasal irrigation, wound cleaning and intravenous drips is a 0.91% (w/v) solution of sodium chloride in water. How would you prepare 1.5 L of this solution?
Solution
The solution will contain 0.91 g of NaCl per 100 mL of solution, or 9.1 g per liter. Thus you will dissolve (1.5 × 9.1 g) = 13.6 g of NaCl in enough water to make 1.5 L of solution.
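Since a (w/v) percentage is grams of solute per 100 mL of solution, the grams-per-liter figure is simply ten times the percentage. A quick numeric check (the function name is our own, not from the text):

```python
def mass_for_wv_percent(percent_wv, volume_L):
    """Grams of solute needed to prepare volume_L of a (w/v)-percent solution."""
    grams_per_liter = percent_wv * 10.0   # e.g. 0.91 g per 100 mL -> 9.1 g/L
    return grams_per_liter * volume_L

print(round(mass_for_wv_percent(0.91, 1.5), 2))  # 13.65 g of NaCl for 1.5 L of normal saline
```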
Percent means parts per 100; we can also use parts per thousand (ppt) for expressing concentrations in grams of solute per kilogram of solution. For more dilute solutions, parts per million (ppm) and parts per billion (10 9 ; ppb) are used. These terms are widely employed to express the amounts of trace pollutants in the environment.
Describe how you would prepare 30 g of a 20 percent (w/w) solution of KCl in water.
Solution

The weight of potassium chloride required is 20% of the total weight of the solution, or 0.20 × (30 g) = 6.0 g of KCl. The remainder of the solution (30 – 6 = 24 g) consists of water. Thus you would dissolve 6.0 g of KCl in 24 g of water.
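The same split into solute and solvent masses can be scripted; this helper (our own naming) returns both for any (w/w)-percent recipe:

```python
def ww_recipe(percent_ww, total_mass_g):
    """Return (solute_g, solvent_g) for a (w/w)-percent solution of given total mass."""
    solute = percent_ww / 100.0 * total_mass_g
    return solute, total_mass_g - solute

print(ww_recipe(20, 30))  # (6.0, 24.0): 6.0 g of KCl dissolved in 24 g of water
```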
Weight/volume and volume/volume basis
It is sometimes convenient to base concentration on a fixed volume, either of the solution itself, or of the solvent alone. In most instances, a 5% by volume solution of a solid will mean 5 g of the solute dissolved in 100 ml of the solvent.
Fish, like all animals, need a supply of oxygen, which they obtain from oxygen dissolved in the water. The minimum oxygen concentration needed to support most fish is around 5 ppm (w/v). How many moles of O 2 per liter of water does this correspond to?
Solution
5 ppm (w/v) means 5 grams of oxygen in one million mL (1000 L) of water, or 5 mg per liter. This is equivalent to (0.005 g) / (32.0 g mol –1 ) = 1.6 × 10 –4 mol.
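In (w/v) terms, 1 ppm corresponds to 1 mg of solute per liter, so the conversion to molarity is a single division by the molar mass:

```python
ppm_wv = 5.0          # mg of dissolved O2 per liter of water
molar_mass_O2 = 32.0  # g/mol
molarity = ppm_wv / 1000.0 / molar_mass_O2   # mg/L -> g/L -> mol/L
print(f"{molarity:.1e}")  # 1.6e-04 mol/L
```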
If the solute is itself a liquid, volume/volume measure usually refers to the volume of solute contained in a fixed volume of solution (not solvent ). The latter distinction is important because volumes of mixed substances are not strictly additive. These kinds of concentration measure are mostly used in commercial and industrial applications. The "proof" of an alcoholic beverage is the (v/v)-percent, multiplied by two; thus a 100-proof vodka has the same alcohol concentration as a solution made by adding sufficient water to 50 ml of alcohol to give 100 ml of solution.
Molarity: mole/volume basis
This is the method most used by chemists to express concentration, and it is the one most important for you to master . Molar concentration (molarity) is the number of moles of solute per liter of solution.
The important point to remember is that the volume of the solution is different from the volume of the solvent ; the latter quantity can be found from the molarity only if the densities of both the solution and of the pure solvent are known. Similarly, calculation of the weight-percentage concentration from the molarity requires density information; you are expected to be able to carry out these kinds of calculations, which are covered in most texts.
How would you make 120 mL of a 0.10 M solution of potassium hydroxide in water?
Solution
The amount of KOH required is
(0.120 L) × (0.10 mol L –1 ) = 0.012 mol.
The molar mass of KOH is 56.1 g, so the weight of KOH required is
\[(0.012\; mol) \times (56.1\; g \;mol^{-1}) = 0.67\; g\]
We would dissolve this weight of KOH in a volume of water that is less than 120 mL, and then add sufficient water to bring the volume of the solution up to 120 mL.
Note : if we had simply added the KOH to 120 mL of water, the molarity of the resulting solution would not be the same. This is because volumes of different substances are not strictly additive when they are mixed. Without actually measuring the volume of the resulting solution, its molarity would not be known.
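The two-step arithmetic (moles from volume × molarity, then grams from molar mass) is easily verified:

```python
volume_L = 0.120
molarity = 0.10          # mol/L of KOH wanted
molar_mass_KOH = 56.1    # g/mol

moles_needed = volume_L * molarity            # 0.012 mol
grams_needed = moles_needed * molar_mass_KOH
print(round(grams_needed, 2))  # 0.67 g of KOH
```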
Calculate the molarity of a 60-% (w/w) solution of ethanol (C 2 H 5 OH) in water whose density is 0.8937 g mL –1 .
Solution
One liter of this solution has a mass of 893.7 g, of which
\[0.60 \times (893.7\; g) = 536.2\; g\]
consists of ethanol. The molecular weight of C 2 H 5 OH is 46.0, so the number of moles of ethanol present in one liter (that is, the molarity) will be
\[ \dfrac{\dfrac{536.2\;g}{46.0\;g\;mol^{-1}}}{1 L} =11.6\; mol\,L^{-1}\]
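The same calculation in code, using the slightly more precise molar mass of 46.07 g/mol for ethanol:

```python
density = 0.8937            # g/mL of the 60% (w/w) solution
mass_1L = density * 1000.0  # 893.7 g of solution per liter
mass_ethanol = 0.60 * mass_1L        # 536.2 g of ethanol in that liter
molar_mass_ethanol = 46.07           # g/mol
molarity = mass_ethanol / molar_mass_ethanol
print(round(molarity, 1))  # 11.6 mol/L
```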
Normality and Equivalents
Normality is a now-obsolete concentration measure based on the number of equivalents per liter of solution. Although the latter term is now also officially obsolete, it still finds some use in clinical- and environmental chemistry and in electrochemistry. Both terms are widely encountered in pre-1970 textbooks and articles.
The equivalent weight of an acid is its molecular weight divided by the number of titratable hydrogens it carries. Thus for sulfuric acid H 2 SO 4 , one mole has a mass of 98 g, but because both hydrogens can be neutralized by strong base, its equivalent weight is 98/2 = 49 g. A solution of 49 g of H 2 SO 4 per liter of water is 0.5 molar, but also "1 normal" (1 N = 1 eq/L). Such a solution is "equivalent" to a 1 M solution of HCl in the sense that each can be neutralized by 1 mol of strong base.
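Since normality is just molarity multiplied by the number of equivalents per mole, the H 2 SO 4 example reduces to one line; the helper below is our own illustration:

```python
def normality(molarity, equiv_per_mol):
    """Equivalents per liter: molarity times equivalents per mole."""
    return molarity * equiv_per_mol

# 49 g/L of H2SO4 (molar mass ~98 g/mol) is 0.5 M; with 2 titratable H per molecule:
print(normality(49.0 / 98.0, 2))  # 1.0, i.e. a 1 N solution
```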
A solution of FeCl 3 is said to be "3 normal" (3 N ) because it dissociates into three moles/L of chloride ions.

Although molar concentration is widely employed, it suffers from one serious defect: since volumes are temperature-dependent (substances expand on heating), so are molarities; a 0.100 M solution at 0° C will have a smaller concentration at 50° C. For this reason, molarity is not the preferred concentration measure in applications where the physical properties of solutions, and the effect of temperature on these properties, are of importance.
Mole fraction: mole/mole basis
This is the most fundamental of all methods of concentration measure, since it makes no assumptions at all about volumes. The mole fraction of substance i in a mixture is defined as
\[ X_i= \dfrac{n_i}{\sum_j n_j}\]
in which n j is the number of moles of substance j , and the summation is over all substances in the solution. Mole fractions run from zero (substance not present) to unity (the pure substance). The sum of all mole fractions in a solution is, by definition, unity:
\[\sum_i X_i=1\]
What fraction of the molecules in a 60-% (w/w) solution of ethanol in water consist of H 2 O?
Solution
From the previous problem, we know that one liter of this solution contains 536.2 g (11.6 mol) of C 2 H 5 OH. The number of moles of H 2 O is
( (893.7 – 536.2) g) / (18.0 g mol –1 ) = 19.9 mol.
The mole fraction of water is thus
\[\dfrac{19.9}{19.9+11.6} = 0.63\]
Thus 63% of the molecules in this solution consist of water, and 37% are ethanol.
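Mole fractions are straightforward to compute, and summing them gives a built-in sanity check, since by definition they must total unity. A sketch using the ethanol/water numbers above:

```python
def mole_fractions(moles):
    """Map each component's mole count to its mole fraction."""
    total = sum(moles.values())
    return {name: n / total for name, n in moles.items()}

x = mole_fractions({"ethanol": 11.6, "water": 19.9})
print(round(x["water"], 2))            # 0.63
print(round(sum(x.values()), 10))      # 1.0, always, by definition
```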
In the case of ionic solutions, each kind of ion acts as a separate component.
Find the mole fraction of water in a solution prepared by dissolving 4.5 g of CaBr 2 in 84.0 mL of water.
Solution
The molar mass of CaBr 2 is 200 g, and 84.0 mL of H 2 O has a mass of very close to 84.0 g at its assumed density of 1.00 g mL –1 . Thus the number of moles of CaBr 2 in the solution is
\[\dfrac{4.50\; g}{200\; g/mol} = 0.0225 \;mol\]
Because this salt is completely dissociated in solution, the solution will contain 0.0225 mol of Ca 2 + and (2 × 0.0225) = 0.045 mol of Br – . The number of moles of water is

(84 g) / (18 g mol –1 ) = 4.67 mol.

The mole fraction of water is then

\[\dfrac{4.67\; mol}{(0.0225 + 0.045 + 4.67)\; mol} = \dfrac{4.67}{4.74} = 0.986\]

Thus H 2 O constitutes about 99 out of every 100 particles in the solution.
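Treating each ion as a separate component, the result for 4.5 g of CaBr 2 in 84 g of water can be recomputed directly (molar masses rounded as in the text):

```python
n_CaBr2 = 4.50 / 200.0             # 0.0225 mol of formula units
n_Ca, n_Br = n_CaBr2, 2 * n_CaBr2  # complete dissociation: one Ca2+ and two Br- each
n_water = 84.0 / 18.0              # 4.67 mol
x_water = n_water / (n_Ca + n_Br + n_water)
print(round(x_water, 3))  # 0.986
```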
Molality: mole/weight basis
A 1-molal solution contains one mole of solute per 1 kg of solvent. Molality is a hybrid concentration unit, retaining the convenience of mole measure for the solute, but expressing it in relation to a temperature-independent mass rather than a volume. Molality, like mole fraction, is used in applications dealing with certain physical properties of solutions; we will see some of these in the next lesson.
Calculate the molality of a 60-% (w/w) solution of ethanol in water.
Solution
From the above problems, we know that one liter of this solution contains 11.6 mol of ethanol in
(893.7 – 536.2) = 357.5 g
of water. The molality of ethanol in the solution is therefore
(11.6 mol) / (0.3575 kg) = 32.4 mol kg –1 .
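The same numbers in code; note that the denominator is kilograms of solvent, not of solution:

```python
n_ethanol = 11.6                         # mol of ethanol in one liter of the 60% solution
mass_water_kg = (893.7 - 536.2) / 1000.0 # 0.3575 kg of solvent in that liter
molality = n_ethanol / mass_water_kg
print(round(molality, 1))  # 32.4 mol/kg
```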
Conversion between Concentration Measures
Anyone doing practical chemistry must be able to convert one kind of concentration measure into another. The important point to remember is that any conversion involving molarity requires a knowledge of the density of the solution.
A solution prepared by dissolving 66.0 g of urea (NH 2 ) 2 CO in 950 g of water had a density of 1.018 g mL –1 . Express the concentration of urea in
- weight-percent
- mole fraction
- molarity
- molality
Solution
a) The weight-percent of solute is (66.0 g) / (66.0 + 950 g) × 100% = 6.5%
The molar mass of urea is 60, so the number of moles is
(66 g) /(60 g mol –1 ) = 1.1 mol.
The number of moles of H 2 O is
(950 g) / (18 g mol –1 ) = 52.8 mol.
b) Mole fraction of urea:
(1.1 mol) / (1.1 + 52.8 mol) = 0.020
c) molarity of urea: the volume of the solution is
(66 + 950)g / (1018 g L –1 )= 998 mL.
The number of moles of urea (from a) is 1.1 mol.
Its molarity is then
(1.1 mol) / (0.998 L) = 1.1 mol L –1 .
d) The molality of urea is (1.1 mol) / (0.950 kg of water) = 1.16 mol kg –1 .
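All four measures for the urea solution follow from the definitions above; note that the weight-percent divides by the total mass of the solution (solute plus solvent), and the molality by the mass of the solvent alone:

```python
m_urea, m_water = 66.0, 950.0   # g
density = 1.018                 # g/mL of the solution
M_urea, M_water = 60.0, 18.0    # g/mol

n_urea = m_urea / M_urea        # 1.1 mol
n_water = m_water / M_water     # 52.8 mol
wt_percent = 100.0 * m_urea / (m_urea + m_water)
x_urea = n_urea / (n_urea + n_water)
volume_L = (m_urea + m_water) / density / 1000.0   # 0.998 L of solution

print(f"wt%: {wt_percent:.1f}")                             # 6.5
print(f"mole fraction: {x_urea:.3f}")                       # 0.020
print(f"molarity: {n_urea / volume_L:.2f} mol/L")           # 1.10
print(f"molality: {n_urea / (m_water / 1000):.2f} mol/kg")  # 1.16
```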
Ordinary dry air contains 21% (v/v) oxygen. About how many moles of O 2 can be inhaled into the lungs of a typical adult woman with a lung capacity of 4.0 L?
Solution
The number of molecules (and thus the number of moles) in a gas is directly proportional to its volume ( Avogadro's law ), so the mole fraction of O 2 is 0.21. The molar volume of a gas at 25° C is

(298/273) × 22.4 L mol –1 = 24.4 L mol –1

so 4.0 L of air contains (4.0 L) / (24.4 L mol –1 ) = 0.16 mol of air, of which

0.21 × (0.16 mol) = 0.034 mol is O 2 .
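As a numeric check of the mole count (4.0 L of air holds roughly 0.16 mol at 25 °C, of which 0.21 is oxygen):

```python
lung_volume_L = 4.0
x_O2 = 0.21
molar_volume_25C = (298.0 / 273.0) * 22.4   # L/mol, ideal gas scaled from 0 °C to 25 °C
n_air = lung_volume_L / molar_volume_25C    # ~0.16 mol of air
n_O2 = x_O2 * n_air
print(round(n_O2, 3))  # 0.034 mol of O2
```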
Dilution calculations
These kinds of calculations arise frequently in both laboratory and practical applications. If you have a thorough understanding of concentration definitions, they are easily tackled. The most important things to bear in mind are
- Concentration is inversely proportional to volume;
- Molarity is expressed in mol L –1 , so it is usually more convenient to express volumes in liters rather than in mL;
- Use the principles of unit cancellation to determine what to divide by what.
Commercial hydrochloric acid is available as a 10.17 molar solution. How would you use this to prepare 500 mL of a 4.00 molar solution?
Solution
The desired solution requires (0.500 L) × (4.00 mol L –1 ) = 2.00 mol of HCl. This quantity of HCl is contained in (2.00 mol) / (10.17 mol L –1 ) = 0.197 L of the concentrated acid. So one would measure out 197 mL of the concentrated acid and then add water to bring the total volume to 500 mL.
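Dilution problems of this kind reduce to the bookkeeping n = C × V. A minimal sketch (variable names are mine):

```python
# How much 10.17 M stock HCl is needed to make 500 mL of 4.00 M acid?
C_stock = 10.17                    # mol/L, concentrated acid
C_target, V_target = 4.00, 0.500   # mol/L and L of the desired solution

n_HCl = C_target * V_target        # 2.00 mol of HCl required
V_stock = n_HCl / C_stock          # ≈ 0.197 L of stock to measure out
print(f"take {V_stock * 1000:.0f} mL of stock, dilute to {V_target * 1000:.0f} mL")
```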
Calculate the molarity of the solution produced by adding 120 mL of 6.0 M HCl to 150 mL of 0.15 M HCl. What important assumption must be made here?
Solution
The assumption, of course, is that the density of HCl solutions changes little enough over this concentration range that the two volumes are additive.
Moles of HCl in first solution:
(0.120 L) × (6.0 mol L –1 ) = 0.72 mol HCl
Moles of HCl in second solution:
(0.150 L) × (0.15 mol L –1 ) = 0.02 mol HCl
Molarity of mixture:
(0.72 + 0.02) mol / (0.120 + 0.150) L = 2.7 mol L –1 .
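The mixing calculation, assuming (as above) that the volumes are additive, can be sketched as:

```python
# Molarity after mixing 120 mL of 6.0 M HCl with 150 mL of 0.15 M HCl.
V1, C1 = 0.120, 6.0            # L, mol/L of the first solution
V2, C2 = 0.150, 0.15           # L, mol/L of the second solution
n_total = V1 * C1 + V2 * C2    # 0.72 + 0.0225 = 0.74 mol HCl
V_total = V1 + V2              # 0.270 L, assuming additive volumes
molarity = n_total / V_total
print(f"{molarity:.2f} mol/L") # ≈ 2.75 mol/L
```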
(The material above is adapted from "8.1: Solutions and their Concentrations" in Chem1 by Stephen Lower, licensed under a Creative Commons Attribution 3.0 license.)
8.2: Thermodynamics of Solutions
Make sure you thoroughly understand the following essential ideas:
- Describe the two fundamental processes that must occur whenever a solute dissolves in a solvent, and discuss the effects of the absorption or release of energy on the extent of these processes.
- Another factor entering into the process of solution formation is the increase (or occasionally, the decrease) in the entropy — that is, the degree to which thermal energy is dispersed or "diluted". Explain this in your own terms.
- Explain how the adage "like dissolves like" reflects the effects mentioned above. What is the principal physical property of a molecule that defines this "likeness"?
- What do we mean when we describe a liquid such as water as "associated"? Explain how this relates to the solubility of solutes in such liquids.
You may recall that in the earlier unit on phase equilibria, we pointed out that aggregations of molecules that are more disordered tend to be the ones that are favored at higher temperature, whereas those that possess the lowest potential energy are favored at lower temperatures. This is a general principle that applies throughout the world of matter; the stable form at any given temperature will always be that which leads to the best balance between low potential energy and high molecular disorder. To see how these considerations are applied to solutions, think about the individual steps that must be carried out when a solute is dissolved in a solvent:
- If the solute is a solid or liquid, it must first be dispersed — that is, its molecular units must be pulled apart. This requires energy, and so this step always works against solution formation.
- The solute must then be introduced into the solvent. Whether this is energetically favorable or unfavorable depends on the nature of the solute and solvent. If the solute is A and the solvent is B, then what is important is the strength of the attractive forces between A-A and B-B molecules, compared to those between A-B pairs; if the latter are greater, then the potential energy will be lower when the substances are mixed and solution formation will be favored.
If step 2 releases more energy than is consumed in step 1 , this will favor solution formation, and we can generally expect the solute to be soluble in the solvent. Even if the dissolution process is slightly endothermic, there is a third important factor, the entropy increase, that will very often favor the dissolved state.
Entropy of Solution
As anyone who has shuffled a deck of cards knows, disordered arrangements of objects are statistically more favored simply because there are more ways in which they can be realized. As the number of objects increases, statistics increasingly governs their most likely arrangements. The number of objects (molecules) we deal with in chemistry is so huge that their tendency to become as spread out as possible becomes overwhelming. In spreading out, however, the thermal energy the molecules carry with them is also dispersed, so the availability of this energy, as measured by the temperature, is also of importance. Chemists use the term "entropy" to denote this aspect of molecular randomness.
Readers of this section who have had some exposure to thermodynamics will know that solubility, like all equilibria, is governed by the Gibbs free energy change for the process, which incorporates the entropy change at a fundamental level. A proper understanding of these considerations requires some familiarity with thermodynamics, which most students do not encounter until well into their second semester of Chemistry. If you are not there yet, do not despair; you are hereby granted temporary permission to think of molecular "disorder" and entropy simply in terms of "spread-outedness".
Thus in the very common case in which a small quantity of solid or liquid dissolves in a much larger volume of solvent, the solute becomes more spread out in space, and the number of equivalent ways in which the solute can be distributed within this volume is greatly increased. This is the same as saying that the entropy of the solute increases.
If the energetics of dissolution are favorable, this increase in entropy means that the conditions for solubility will always be met. Even if the energetics are slightly endothermic, the entropy effect can still allow the solution to form, although perhaps limiting the maximum concentration that can be achieved. In such a case, we may describe the solute as being slightly soluble in a certain solvent. What this means is that a greater volume of solvent will be required to completely dissolve a given mass of solute.
Enthalpy of Solution
Polar molecules are those in which electric charge is distributed asymmetrically. The most familiar example is ordinary water, in which the highly electronegative oxygen atom pulls part of the electric charge cloud associated with each O–H bond closer to itself. Although the H 2 O molecule is electrically neutral overall, this charge imbalance gives rise to a permanent electric dipole moment .
Chemists use the term "Associated" liquids to refer to liquids in which the effects of hydrogen bonding dominate the local structure. Water is the most important of these, but ammonia NH 3 and hydrogen cyanide HCN are other common examples.
Thus liquid water consists of an extended network of H 2 O molecules linked together by dipole-dipole attractions that we call hydrogen bonds . Because these are much weaker than ordinary chemical bonds, they are continually being disrupted by thermal forces. As a result, the extended structure is highly disordered (in contrast to that of solid ice) and continually changing.
When a solute molecule is introduced into an associated liquid, a certain amount of energy must be expended in order to break the local hydrogen-bond structure and make space for the new molecule. If the solute is itself an ion or a polar molecule, new ion-dipole or dipole-dipole attractions come into play. In favorable cases these may release sufficient potential energy to largely compensate for the energy required to incorporate the solute into the structure.
An extreme example of this occurs when ammonia dissolves in water. Each NH 3 molecule can form three hydrogen bonds, so the resulting solution is even more hydrogen-bonded than is pure water — accounting for the considerable amount of heat released in the process and the extraordinarily large solubility of ammonia in water.
Nonpolar solutes are Sparingly Soluble in Water: The Hydrophobic effect
When a nonpolar solute such as oxygen or hexane is introduced into an associated liquid, we might expect that the energy required to break the hydrogen bonds to make space for the new molecule is not compensated by the formation of new attractive interactions, suggesting that the process will be energetically unfavorable. We can therefore predict that solutes of these kinds will be only sparingly soluble in water, and this is indeed the case.
It turns out, however, that this is not an entirely correct explanation for the small solubility of non polar solutes in water. It is now known that the H 2 O molecules that surround a non-polar intruder and find themselves unable to form energy-lowering polar or hydrogen-bonded interactions with it will rearrange themselves into a configuration that maximizes the hydrogen bonding between the water molecules themselves. In doing so, this creates a cage-like shell around the solute molecule. In terms of the energetics of the process, these new H 2 O-H 2 O interactions largely compensate for the lack of solute-H 2 O interactions.
However, this shell of highly organized water molecules exacts its own toll on the solubility by reducing the entropy of the system. Dissolution of a solute normally increases the entropy by spreading the solute molecules (and the thermal energy they contain) through the larger volume of the solvent. But in this case, the H 2 O molecules within the highly structured shell surrounding the solute molecule are themselves constrained to this location, and their number is sufficiently great to reduce the entropy by far more than the dissolved solute increases it.
The implications of the hydrophobic effect extend far beyond the topic of solubility: it governs the way proteins fold, soap bubbles form, and cell membranes assemble. The small solubility of a nonpolar solute in an associated liquid such as water thus results more from the negative entropy change than from energetic considerations. This phenomenon is known as the hydrophobic effect . In the next section, we will explore the ways in which these energy-and-entropy considerations come together in various kinds of solutions.
8.2.2A: Solutions of Gaseous Solutes in Gaseous Solvents
Make sure you thoroughly understand the following essential ideas:
- Why are gases the only state of matter that never fail to form solutions?
Mixtures of gases are really solutions, but we tend not to think of them this way because they mix together freely and with no limits to their compositions; we say that gases are miscible in all proportions .
To the extent that gases behave ideally (because they consist mostly of empty space), their mixing does not involve energy changes at all; the mixing of gases is driven entirely by the increase in entropy ( S ) as each kind of molecule occupies and shares the space and kinetic energy of the other. Your nose can be a remarkably sensitive instrument for detecting components of gaseous solutions, even at the parts-per-million level. The olfactory experiences resulting from cooking cabbage, eating asparagus, and bodily emanations that are not mentionable in polite society are well known.
Can solids or liquids "dissolve" in a gaseous solvent? In a very narrow sense they can, but only to a very small extent. Dissolution of a condensed phase of matter into a gas is formally equivalent to evaporation (of a liquid) or sublimation (of a solid), so the process really amounts to the mixing of gases.
The energy required to remove molecules from their neighbors in a liquid or solid and into the gaseous phase is generally too great to be compensated by the greater entropy they enjoy in the larger volume of the mixture, so solids tend to have relatively low vapor pressures. The same is true of liquids at temperatures well below their boiling points. These two cases of gaseous solutions can be summarized as follows:
| gaseous solvent, solute → | gas | liquid or solid |
|---|---|---|
| energy to disperse solute | nil | large |
| energy to introduce into gas | nil | nil |
| increase in entropy | large | large |
| miscibility | complete | very limited |
8.2.2B: Solutions of Gaseous Solutes in Liquid Solvents
Make sure you thoroughly understand the following essential ideas:
- Explain why, in contrast, gases tend to be only slightly soluble in liquids and solids. And why are some combinations, such as the dissolution of ammonia or hydrogen chloride in water, significant exceptions to this rule?
- State and explain Henry's law
- Why do fish in rivers and streams sometimes become asphyxiated (oxygen-starved) in hot weather?
Solutions of Gases in Liquids
Gases dissolve in liquids, but usually only to a small extent. When a gas dissolves in a liquid, the ability of the gas molecules to move freely throughout the volume of the solvent is greatly restricted. If this latter volume is small, as is often the case, the gas is effectively being compressed. Both of these effects amount to a decrease in the entropy of the gas that is not usually compensated by the entropy increase due to mixing of the two kinds of molecules. Such processes greatly restrict the solubility of gases in liquids.
| liquid solvent, solute → | gas |
|---|---|
| energy to disperse solute | nil |
| energy to introduce into solvent | medium to large |
| increase in entropy | negative |
| miscibility | usually very limited |
Solubility of gases in water
One important consequence of the entropy decrease when a gas dissolves in a liquid is that the solubility of a gas decreases at higher temperatures; this is in contrast to most other situations, where a rise in temperature usually leads to increased solubility. Bringing a liquid to its boiling point will completely remove a gaseous solute. Some typical gas solubilities, expressed in the number of moles of gas at 1 atm pressure that will dissolve in a liter of water at 25° C, are given below:
| solute | formula | solubility, mol L –1 atm –1 |
|---|---|---|
| ammonia | NH 3 | 57 |
| carbon dioxide | CO 2 | 0.0308 |
| methane | CH 4 | 0.00129 |
| nitrogen | N 2 | 0.000661 |
| oxygen | O 2 | 0.00126 |
| sulfur dioxide | SO 2 | 1.25 |
As we indicated above, the only gases that are readily soluble in water are those whose polar character allows them to interact strongly with it.
Ammonia is remarkably soluble in water
Inspection of the above table reveals that ammonia is a champion in this regard. At 0° C, one liter of water will dissolve about 90 g (5.3 mol) of ammonia. The reaction of ammonia with water according to
\[\ce{NH3 + H2O <=> NH4+ + OH-}\]
makes no significant contribution to its solubility; the equilibrium lies heavily on the left side (as evidenced by the strong odor of ammonia solutions). Only about four out of every 1000 NH 3 molecules are in the form of ammonium ions at equilibrium. This is truly impressive when one calculates that this quantity of NH 3 would occupy (5.3 mol) × (22.4 L mol –1 ) = 119 L at STP. Thus one volume of water will dissolve over 100 volumes of this gas. It is even more impressive when you realize that in order to compress 119 L of an ideal gas into a volume of 1 L, a pressure of 119 atm would need to be applied! This, together with the observation that dissolution of ammonia is accompanied by the liberation of a considerable amount of heat, tells us that the high solubility of ammonia is due to the formation of more hydrogen bonds (to H 2 O) than are broken within the water structure in order to accommodate the NH 3 molecule.
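The back-of-the-envelope numbers in this paragraph can be reproduced as follows (a sketch of mine; the 17.03 g/mol molar mass of NH3 is an assumed value):

```python
# Volume that 90 g of NH3 (the amount 1 L of water dissolves at 0 °C)
# would occupy as an ideal gas at STP.
mass_NH3 = 90.0                 # g dissolved per liter of water
M_NH3 = 17.03                   # g/mol (assumed)
n_NH3 = mass_NH3 / M_NH3        # ≈ 5.3 mol
V_STP = n_NH3 * 22.4            # ≈ 118 L of gas at STP
# Pressure needed to squeeze that gas into 1 L, if it remained ideal:
P_atm = V_STP / 1.0             # ≈ 118 atm
print(f"{n_NH3:.1f} mol, {V_STP:.0f} L at STP, ~{P_atm:.0f} atm")
```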
If we actually compress 90 g of pure NH 3 gas to 1 L, it will liquefy, and the vapor pressure of the liquid would be about 9 atm. In other words, the escaping tendency of NH 3 molecules from H 2 O is only about 1/9th of what it is from liquid NH 3 . One way of interpreting this is that the strong intermolecular (dipole-dipole) attractions between NH 3 and the solvent H 2 O give rise to a force that has the effect of a negative pressure of 9 atm.
Solubility of gases decreases with Temperature
Recall that entropy is a measure of the ability of thermal energy to spread and be shared and exchanged by molecules in the system. Higher temperature exerts a kind of multiplying effect on a positive entropy change by increasing the amount of thermal energy available for sharing. Have you ever noticed the tiny bubbles that form near the bottom of a container of water when it is placed on a hot stove? These bubbles contain air that was previously dissolved in the water, but reaches its solubility limit as the water is warmed. You can completely rid a liquid of any dissolved gases (including unwanted ones such as Cl 2 or H 2 S) by boiling it in an open container.
This is quite different from the behavior of most (but not all) solutions of solid or liquid solutes in liquid solvents. The reason for this behavior is the very large entropy increase that gases undergo when they are released from the confines of a condensed phase .
Solubility of Oxygen in water
Fresh water at sea level dissolves 14.6 mg of oxygen per liter at 0°C and 8.2 mg/L at 25°C. These saturation levels ensure that fish and other gilled aquatic animals are able to extract sufficient oxygen to meet their respiratory needs. But in actual aquatic environments, the presence of decaying organic matter or nitrogenous runoff can reduce these levels far below saturation. The health and survival of these organisms is severely curtailed when oxygen concentrations fall to around 5 mg/L.
The temperature dependence of the solubility of oxygen in water is an important consideration for the well-being of aquatic life; thermal pollution of natural waters (due to the influx of cooling water from power plants) has been known to reduce the dissolved oxygen concentration to levels low enough to kill fish. The advent of summer temperatures in a river can have the same effect if the oxygen concentration has already been partially depleted by reaction with organic pollutants.
Solubility of gases increases with pressure: Henry's Law
The pressure of a gas is a measure of its "escaping tendency" from a phase. So it stands to reason that raising the pressure of a gas in contact with a solvent will cause a larger fraction of it to "escape" into the solvent phase. The direct-proportionality of gas solubility to pressure was discovered by William Henry (1775-1836) and is known as Henry's Law . It is usually written as
\[P = k_H C \label{7b.2.1}\]
with
- \(P\) is the partial pressure of the gas above the liquid,
- \(C\) is the concentration of gas dissolved in the liquid, and
- \(k_H\) is the Henry's law constant , which can be expressed in various units, and in some instances is defined in different ways, so be very careful to note these units when using published values.
In the table below, k H is given as
\[ k_H = \dfrac{\text{partial pressure of gas in atm}}{\text{concentration in liquid in mol L}^{-1}}\]
| gas | He | N 2 | O 2 | CO 2 | CH 4 | NH 3 |
|---|---|---|---|---|---|---|
| k H | 2703 | 1639 | 769 | 29.4 | ~775 | ~0.018 |
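Henry's law is easy to apply mechanically. Here is a sketch using a few of the constants above (the helper function is my own, not a standard library call):

```python
# C = P / kH for gases dissolving in water at 25 °C.
K_H = {"He": 2703, "N2": 1639, "O2": 769, "CO2": 29.4}  # atm·L/mol

def dissolved_conc(gas: str, P_atm: float) -> float:
    """Equilibrium concentration (mol/L) of `gas` at partial pressure P_atm."""
    return P_atm / K_H[gas]

# O2 dissolved from ordinary air, where its partial pressure is ~0.21 atm:
print(f"{dissolved_conc('O2', 0.21):.2e} mol/L")   # ≈ 2.7e-4 mol/L
```

A small k H thus means a very soluble gas, which is why NH 3 and CO 2 stand apart from the sparingly soluble gases in the table.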
Some vendors of bottled waters sell pressurized "oxygenated water" that is (falsely) purported to enhance health and athletic performance by supplying more oxygen to the body.
- How many moles of O 2 will be in equilibrium with one liter of water at 25° C when the partial pressure of O 2 above the water is 2.0 atm?
- How many mL of air (21% O 2 v/v) must you inhale in order to introduce an equivalent quantity of O 2 into the lungs (where it might actually do some good?)
Solution:
- Solving Henry's law for the concentration, we get
\[C = \dfrac{P}{k_H} = \dfrac{2.0\; atm}{769\; atm\; L\; mol^{-1}} = 0.0026\; mol\; L^{-1}\]
- At 25° C, 0.0026 mol of O 2 occupies (22.4 L mol –1 ) × (0.0026 mol) × (298/273) = 0.064 L. The equivalent volume of air would be (0.064 L) / (0.21) = 0.30 L. Given that the average tidal volume of the human lung is around 400 mL, this means that taking one extra breath would take in more O 2 than is present in 1 L of "oxygenated water".
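Both parts of the problem in sequence, as a Python sketch of the arithmetic above:

```python
# (a) O2 dissolved in 1 L of water under 2.0 atm of O2 at 25 °C,
# (b) the volume of air carrying the same amount of O2.
k_H_O2 = 769                     # atm·L/mol, from the table above
C = 2.0 / k_H_O2                 # ≈ 0.0026 mol/L dissolved
V_O2 = C * 22.4 * 298.0 / 273.0  # ≈ 0.064 L of O2 gas at 25 °C
V_air = V_O2 / 0.21              # ≈ 0.30 L of air (21% O2 v/v)
print(f"{C:.4f} mol/L -> {V_air * 1000:.0f} mL of air")
```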
Carbonated beverages: the history of "Fizz-ics"
Artificially carbonated water was first prepared by Joseph Priestley (who later discovered oxygen) in 1767 and was commercialized in 1783 by Johann Jacob Schweppe, a German-Swiss jeweler. Naturally carbonated spring waters have long been reputed to have curative values, and these springs became popular tourist destinations in the 19th century. The term "seltzer water" derives from one such spring in Niederselters, Germany. Of course, carbonation produced by fermentation has been known since ancient times. The tingling sensation that carbonated beverages produce in the mouth comes from the carbonic acid produced when bubbles of carbon dioxide come into contact with the mucous membranes of the mouth and tongue:
\[\ce{CO2 + H2O <=> H2CO3}\]
8.2.2C: Solutions of Liquid Solutes in Liquid Solvents
Whereas all gases will mix to form solutions regardless of the proportions, liquids are much more fussy. Some liquids, such as ethyl alcohol and water, are miscible in all proportions. Others, like the proverbial oil and water, are not; each liquid has only a limited solubility in the other, and once either of these limits is exceeded, the mixture separates into two phases.
| solute → | liquid |
|---|---|
| energy to disperse solute | varies |
| energy to introduce into solvent | varies |
| increase in entropy | moderate |
| miscibility | "like dissolves like" |
The reason for this variability is apparent from the table. Mixing of two liquids can be exothermic, endothermic, or without thermal effect, depending on the particular substances. Whatever the case, the energy factors are not usually very large, but neither is the increase in randomness; the two factors are frequently sufficiently balanced to produce limited miscibility.
The range of possibilities can be described in terms of the mole fractions X of two liquids A and B. If A and B are only slightly miscible, they separate into two layers according to their relative densities. Note that when one takes into account trace levels, no two liquids are totally immiscible.
Like Dissolves Like
A useful general rule is that liquids are completely miscible when their intermolecular forces are very similar in nature; “like dissolves like”. Thus water is miscible with other liquids that can engage in hydrogen bonding, whereas a hydrocarbon liquid in which London or dispersion forces are the only significant intermolecular effect will only be completely miscible with similar kinds of liquids.
Substances such as the alcohols, CH 3 (CH 2 ) n OH, which are hydrogen-bonding (and thus hydrophilic) at one end and hydrophobic at the other, tend to be at least partially miscible with both kinds of solvents. If n is large, the hydrocarbon properties dominate and the alcohol has only a limited solubility in water. Very small values of n allow the –OH group to dominate, so miscibility in water increases and becomes unlimited in ethanol ( n = 1) and methanol ( n = 0), but miscibility with hydrocarbons decreases owing to the energy required to break alcohol-alcohol hydrogen bonds when the non polar liquid is added.
These considerations have become quite important in the development of alternative automotive fuels based on mixing these alcohols with gasoline. At ordinary temperatures the increased entropy of the mixture is great enough that the unfavorable energy factor is entirely overcome, and the mixture is completely miscible. At low temperatures, the entropy factor becomes less dominant, and the fuel mixture may separate into two phases, presenting severe problems to the fuel filter and carburetor.
8.2.2D: Solutions of Solid Solutes in Liquid Solvents
Make sure you thoroughly understand the following essential ideas:
- A key to understanding the solubility of ionic solids in water are the concepts of lattice energy and hydration energy. Explain the meaning of these terms, and sketch out a diagram that shows how these are related to the "heat of solution".
Molecular Solids in Liquid Solvents
The stronger intermolecular forces in solids require more input of energy to disperse the molecular units into a liquid solution, but there is also a considerable increase in entropy that can more than compensate if the intermolecular forces are not too strong, and if the solvent has no strong hydrogen bonds that must be broken in order to introduce the solute into the liquid.
| solvent → | non polar liquid | polar liquid |
|---|---|---|
| energy to disperse solute | moderate | moderate |
| energy to introduce into solvent | small | moderate |
| increase in entropy | moderate | moderate |
| miscibility | moderate | small |
For example, at 25° C and 1 atm pressure, 20 g of iodine crystals will dissolve in 100 mL of ethyl alcohol, but the same quantity of water will dissolve only 0.30 g of iodine. As the molecular weight of the solid increases, the intermolecular forces holding the solid together also increase, and solubilities tend to fall off; thus the solid linear hydrocarbons CH 3 (CH 2 ) n CH 3 ( n > 20) show diminishing solubilities in hydrocarbon liquids.
Ionic Solids in Liquid Solvents
Since the Coulombic forces that bind ions and highly polar molecules into solids are quite strong, we might expect these solids to be insoluble in just about any solvent. Ionic solids are insoluble in most non-aqueous solvents, but the high solubility of some (including NaCl) in water suggests the need for some further explanation.
| solvent → | non polar | polar (water) |
|---|---|---|
| energy to disperse solute | large | large (endothermic) |
| energy to introduce into liquid | small | highly negative (exothermic) |
| increase in entropy | moderate | moderate to slightly negative |
| miscibility | very small | small to large |
The key factor here turns out to be the interaction of the ions with the solvent. The electrically-charged ions exert a strong coulombic attraction on the end of the water molecule that has the opposite partial charge.
As a consequence, ions in solution are always hydrated ; that is, they are quite tightly bound to water molecules through ion-dipole interaction. The number of water molecules contained in the primary hydration shell varies with the radius and charge of the ion.
Figure \(\PageIndex{1}\): Hydration shells around some ions in a sodium chloride solution. The average time an ion spends in a shell is about 2-4 nanoseconds. But this is about two orders of magnitude longer than the lifetime of an individual \(H_2O–H_2O\) hydrogen bond.
Lattice and Hydration Energies
The dissolution of an ionic solid \(M\)X in water can be thought of as a sequence of two (hypothetical) steps:
\[MX(s) \rightarrow M^+(g) + X^-(g)\]
\[M^+(g) + X^-(g) \xrightarrow{\,H_2O\,} M^+(aq) + X^-(aq)\]
The enthalpy change of the first step is the lattice energy, which is always positive; that of the second step is the hydration energy, which is always negative.
- The first reaction is always endothermic; it takes a lot of work to break up an ionic crystal lattice (Table \(\PageIndex{2}\)).
- The hydration step is always exothermic as H 2 O molecules are attracted into the electrostatic field of the ion (Table \(\PageIndex{1}\)).
- The heat (enthalpy) of solution is the sum of the lattice and hydration energies, and can have either sign.
| ion | hydration energy, kJ mol –1 | ion | hydration energy, kJ mol –1 |
|---|---|---|---|
| H + (g) | –1075 | F – (g) | –503 |
| Li + (g) | –515 | Cl – (g) | –369 |
| Na + (g) | –405 | Br – (g) | –336 |
| K + (g) | –321 | I – (g) | –398 |
| Mg 2 + (g) | –1922 | OH – (g) | –460 |
| Ca 2 + (g) | –1592 | NO 3 – | –328 |
| Sr 2 + (g) | –1445 | SO 4 2– | –1145 |
Single-ion hydration energies (Table \(\PageIndex{1}\)) cannot be observed directly, but are obtained from the differences in hydration energies of salts having the given ion in common. When you encounter tables such as the above in which numeric values are related to different elements, you should always stop and see if you can make sense of any obvious trends. In this case, the things to look for are the size and charge of the ions as they would affect the electrostatic interaction between two ions or between an ion and a [polar] water molecule.
| | F – | Cl – | Br – | I – |
|---|---|---|---|---|
| Li + | +1031 | +848 | +803 | +759 |
| Na + | +918 | +780 | +742 | +705 |
| K + | +817 | +711 | +679 | +651 |
| Mg 2 + | +2957 | +2526 | +2440 | +2327 |
| Ca 2 + | +2630 | +2258 | +2176 | +2074 |
| Sr 2 + | +2492 | +2156 | +2075 | +1963 |
Lattice energies are not measured directly, but are estimates based on electrostatic calculations, which are reliable only for simple salts. Enthalpies of solution are observable, either directly or (for sparingly soluble salts) indirectly. Hydration energies are not measurable; they are estimated from the difference between the other two quantities. It follows that any uncertainty in the lattice energies is reflected in the hydration energies. For this reason, tabulated values of the latter will vary depending on the source.
When calcium chloride, CaCl 2 , is dissolved in water, will the temperature immediately after mixing rise or fall?
Solution:
Estimate the heat of solution of CaCl 2 .
- lattice energy of solid CaCl 2 : +2258 kJ mol –1
- hydration energy of the three gaseous ions (Table \(\PageIndex{1}\)): (–1592 – 369 – 369) = –2330 kJ mol –1
- heat of solution:
(2258 – 2330) kJ mol –1 = –72 kJ mol –1
Since the process is exothermic, this heat will be released to warm the solution.
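The bookkeeping in this example can be expressed as a short Python sketch (the numbers are taken from the two tables above; the ion labels and function name are our own, and note that tabulated hydration energies vary somewhat between sources):

```python
# Heat of solution = lattice energy (positive: energy needed to break the
# crystal into gaseous ions) + hydration energies of the ions (negative:
# energy released as the ions are hydrated).
hydration = {"Ca2+": -1592, "Cl-": -369, "Na+": -405}  # kJ/mol (Table 1)
lattice = {"CaCl2": 2258, "NaCl": 778}                 # kJ/mol (Table 2)

def heat_of_solution(salt, ions):
    """Enthalpy of solution in kJ/mol; a negative value means exothermic."""
    return lattice[salt] + sum(hydration[ion] for ion in ions)

dH_CaCl2 = heat_of_solution("CaCl2", ["Ca2+", "Cl-", "Cl-"])  # exothermic
dH_NaCl = heat_of_solution("NaCl", ["Na+", "Cl-"])            # slightly endothermic
print(dH_CaCl2, dH_NaCl)
```

The NaCl result (+4 kJ/mol) agrees with the enthalpy-of-solution table further below, a nice consistency check between the two data sets.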
As often happens for a quantity that is the sum of two large terms having opposite signs, the overall dissolution process can come out as either endothermic or exothermic, and examples of both kinds are common.
| substance → | LiF | NaI | KBr | CsI | LiCl | NaCl | KCl | AgCl |
|---|---|---|---|---|---|---|---|---|
| lattice energy | 1021 | 682 | 669 | 586 | 846 | 778 | 707 | 910 |
| hydration energy | 1017 | 686 | 649 | 552 | 884 | 774 | 690 | 844 |
| enthalpy of solution | +3 | –4 | +20 | +34 | –38 | +4 | +17 | +66 |
The table above illustrates the contrast between exothermic and endothermic heats of solution of ionic solids; both kinds are common.
Hydration Entropy can make a Difference!
Hydration shells form around ions in a sodium chloride solution. The average time an ion spends in a shell is about 2-4 nanoseconds, which is nevertheless about two orders of magnitude longer than the lifetime of an individual \(H_2O\)–\(H_2O\) hydrogen bond. The balance between the lattice energy and hydration energy is a major factor in determining the solubility of an ionic crystal in water, but there is another factor to consider as well. We generally assume that there is a rather large increase in the entropy when a solid is dispersed into the liquid phase. However, in the case of ionic solids, each ion ends up surrounded by a shell of oriented water molecules. These water molecules, being constrained within the hydration shell, are unable to participate in the spreading of thermal energy throughout the solution, and so reduce the entropy. In some cases this effect predominates, so that dissolution of the salt leads to a net decrease in entropy. Recall that any process in which the entropy diminishes becomes less probable as the temperature increases; this explains why the solubilities of some salts decrease with temperature.
Source: "8.2.2D: Solutions of Solid Solutes in Liquid Solvents," Chem1 by Stephen Lower (CC BY 3.0), https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.02%3A_Thermodynamics_of_Solutions/8.2.2D%3A_8.2.2D%3A_Solutions_of_Solid_Solutes_in_Liquid_Solvents
8.3: Colligative Properties - Raoult's Law
Make sure you thoroughly understand the following essential ideas:
- State Raoult's law in your own words, and explain why it makes sense.
- What do we mean by the escaping tendency of a molecule from a phase? How might we be able to observe or measure it?
- Explain why boiling point elevation follows naturally from Raoult's law.
- Explaining freezing point depression is admittedly a bit more difficult, but you should nevertheless be able to explain how the application of salt on an ice-covered road can cause the ice to melt.
The tendency of molecules to escape from a liquid phase into the gas phase depends in part on how much of an increase in entropy can be achieved in doing so. Evaporation of solvent molecules from the liquid always leads to a large increase in entropy because of the greater volume occupied by the molecules in the gaseous state. But if the liquid solvent is initially "diluted" with solute, its entropy is already larger to start with, so the amount by which it can increase on entering the gas phase will be less. There will accordingly be less tendency for the solvent molecules to enter the gas phase, and so the vapor pressure of the solution diminishes as the concentration of solute increases and that of solvent decreases.
The number 55.5 mol L –1 (= 1000 g L –1 ÷ 18 g mol –1 ) is a useful one to remember if you are dealing a lot with aqueous solutions; this represents the concentration of water in pure water. (Strictly speaking, this is the molal concentration of H 2 O; it is only the molar concentration at temperatures around 4° C, where the density of water is closest to 1.000 g cm –3 .)
Diagram 1 (above left) represents pure water whose concentration in the liquid is 55.5 M. A tiny fraction of the H 2 O molecules will escape into the vapor space, and if the top of the container is closed, the pressure of water vapor builds up until equilibrium is achieved. Once this happens, water molecules continue to pass between the liquid and vapor in both directions, but at equal rates, so the partial pressure of H 2 O in the vapor remains constant at a value known as the vapor pressure of water at the particular temperature.
In Figure \(\PageIndex{1}\), we have replaced a fraction of the water molecules with a substance that has zero or negligible vapor pressure — a nonvolatile solute such as salt or sugar. This has the effect of diluting the water, reducing its escaping tendency and thus its vapor pressure.
What's important to remember is that the reduction in the vapor pressure of a solution of this kind is directly proportional to the mole fraction of the solute; equivalently, the vapor pressure of the solution is proportional to the mole fraction of the solvent. This is Raoult's law (1886):

\[P = \chi_{solvent}\, P^{\circ} = (1 - \chi_{solute})\, P^{\circ}\]

in which \(P^{\circ}\) is the vapor pressure of the pure solvent.
Estimate the vapor pressure of a 40 % (w/w) solution of ordinary cane sugar (C 12 H 22 O 11 , 342 g mol –1 ) in water. The vapor pressure of pure water at this particular temperature is 26.0 torr.
Solution
100 g of solution contains (40 g) ÷ (342 g mol –1 ) = 0.12 mol of sugar and (60 g) ÷ (18 g mol –1 ) = 3.3 mol of water. The mole fraction of water in the solution is
\[ \dfrac{3.3}{3.3 + 0.12} = 0.96\]
and its vapor pressure will be 0.96 × 26.0 torr = 25.1 torr.
The vapor pressure of water at 10° C is 9.2 torr. Estimate the vapor pressure at this temperature of a solution prepared by dissolving 1 mole of CaCl 2 in 1 L of water.
Solution
Each mole of CaCl 2 dissociates into one mole of Ca 2 + and two moles of Cl – , giving a total of three moles of solute particles. The mole fraction of water in the solution will be
\[ \dfrac{55.5}{3 + 55.5} = 0.95\]
The vapor pressure will be 0.95 × 9.2 torr = 8.7 torr.
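Both examples follow the same two-step recipe: count the moles of solute particles and of solvent, then scale the pure-solvent vapor pressure by the solvent's mole fraction. A Python sketch makes this explicit (the numbers are those of the two examples above; the function name is our own):

```python
def raoult(n_solvent, n_solute_particles, p_pure):
    """Vapor pressure of a solution = (mole fraction of solvent) * P(pure)."""
    x_solvent = n_solvent / (n_solvent + n_solute_particles)
    return x_solvent * p_pure

# 40% w/w sucrose: per 100 g of solution, 60/18 mol water and 40/342 mol sugar
p_sugar = raoult(60 / 18, 40 / 342, 26.0)   # ~25.1 torr

# 1 mol CaCl2 per L of water: 3 mol of ions vs. 55.5 mol of H2O
p_cacl2 = raoult(55.5, 3 * 1.0, 9.2)        # ~8.7 torr
print(round(p_sugar, 1), round(p_cacl2, 1))
```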
Since the sum of all mole fractions in a mixture must be unity, it follows that the more moles of solute, the smaller will be the mole fraction of the solvent. Also, if the solute is a salt that dissociates into ions, then the proportion of solvent molecules will be even smaller.
8.4: Colligative Properties - Boiling Point Elevation and Freezing Point Depression
The colligative properties really depend on the escaping tendency of solvent molecules from the liquid phase. You will recall that the vapor pressure is a direct measure of escaping tendency, so we can use these terms more or less interchangeably.
Boiling Point Elevation
If addition of a nonvolatile solute lowers the vapor pressure of the solution via Raoult's law , then it follows that the temperature must be raised to restore the vapor pressure to the value corresponding to the pure solvent. In particular, the temperature at which the vapor pressure is 1 atm will be higher than the normal boiling point by an amount known as the boiling point elevation . The exact relation between the boiling point of the solution and the mole fraction of the solvent is rather complicated, but for dilute solutions the elevation of the boiling point is directly proportional to the molal concentration of the solute:

\[ \Delta T_b = K_b \dfrac{\text{moles of solute}}{\text{kg of solvent}}\]
Bear in mind that the proportionality constant K B is a property of the solvent because this is the only component that contributes to the vapor pressure in the model we are considering in this section.
| solvent | normal bp, °C | K b , K mol –1 kg |
|---|---|---|
| water | 100 | 0.514 |
| ethanol | 79 | 1.19 |
| acetic acid | 118 | 2.93 |
| carbon tetrachloride | 76.5 | 5.03 |
Sucrose (C 12 H 22 O 11 , 342 g mol –1 ), like many sugars, is highly soluble in water; almost 2000 g will dissolve in 1 L of water, giving rise to what amounts to pancake syrup. Estimate the boiling point of such a sugar solution.
Solution
moles of sucrose:
\[ \dfrac{2000\, g}{342\, g\, mol^{–1}} = 5.8\; mol\]
mass of water: assume 1000 g (we must know the density of the solution to find its exact value)
The molality of the solution is (5.8 mol) ÷ (1.0 kg) = 5.8 m.
Using the value of K b from the table, the boiling point will be raised by (0.514 K mol –1 kg) × (5.8 mol kg –1 ) = 3.0 K, so the boiling point will be 103° C.
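The same calculation in code (a Python sketch; Kb for water is taken from the table above, and the function name is our own):

```python
def bp_elevation(Kb, mol_solute, kg_solvent):
    """Boiling point elevation of a dilute solution: dT = Kb * molality."""
    return Kb * mol_solute / kg_solvent

Kb_water = 0.514            # K kg/mol, from the table above
n_sucrose = 2000 / 342      # ~5.8 mol of sucrose
dT = bp_elevation(Kb_water, n_sucrose, 1.0)   # per ~1 kg of water
bp = 100 + dT               # ~103 C
print(round(dT, 1), round(bp))
```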
Freezing Point Depression
The freezing point of a substance is the temperature at which the solid and liquid forms can coexist indefinitely — that is, they are in equilibrium. Under these conditions molecules pass between the two phases at equal rates because their escaping tendencies from the two phases are identical. Suppose that a liquid solvent and its solid (water and ice, for example) are in equilibrium, and we add a non-volatile solute (such as salt, sugar, or automotive antifreeze liquid) to the water. This will have the effect of reducing the mole fraction of H 2 O molecules in the liquid phase, and thus reduce the tendency of these molecules to escape from it, not only into the vapor phase (as we saw above), but also into the solid (ice) phase. This will have no effect on the rate at which H 2 O molecules escape from the ice into the water phase, so the system will no longer be in equilibrium and the ice will begin to melt.
If we wish to keep the solid from melting, the escaping tendency of molecules from the solid must be reduced. This can be accomplished by reducing the temperature ; this lowers the escaping tendency of molecules from both phases, but it affects those in the solid more than those in the liquid, so we eventually reach the new, lower freezing point where the two quantities are again in exact balance and both phases can coexist .
If you prefer to think in terms of vapor pressures, you can use the same argument if you bear in mind that the vapor pressures of the solid and liquid must be the same at the freezing point. Dilution of the liquid (the solvent) by the nonvolatile solute reduces the vapor pressure of the solvent according to Raoult’s law, thus reducing the temperature at which the vapor pressures of the liquid and frozen forms of the solution will be equal. As with boiling point elevation, in dilute solutions there is a simple linear relation between the freezing point depression and the molality of the solute:
\[ \Delta T_f = K_f \dfrac{\text{moles of solute}}{\text{kg of solvent}}\]
Note that K f values are all negative !
| Solvent | Normal Freezing Point (°C) | K f (K mol –1 kg) |
|---|---|---|
| water | 0.0 | –1.86 |
| acetic acid | 16.7 | –3.90 |
| benzene | 5.5 | –5.10 |
| camphor | 180 | –40.0 |
| cyclohexane | 6.5 | –20.2 |
| phenol | 40 | –7.3 |
The use of salt to de-ice roads is a common application of this principle. The solution formed when some of the salt dissolves in the moist ice reduces the freezing point of the ice. If the freezing point falls below the ambient temperature, the ice melts. In very cold weather, the ambient temperature may be below that of the salt solution, and the salt will have no effect. The effectiveness of a de-icing salt depends on the number of particles it releases on dissociation and on its solubility in water:
| name | Formula | lowest practical T, °C |
|---|---|---|
| ammonium sulfate | (NH 4 ) 2 SO 4 | –7 |
| calcium chloride | CaCl 2 | –29 |
| potassium chloride | KCl | –15 |
| sodium chloride | NaCl | –9 |
| urea | (NH 2 ) 2 CO | –7 |
Automotive radiator antifreezes are mostly based on ethylene glycol, (CH 2 OH) 2 . Owing to the strong hydrogen-bonding properties of this double alcohol, this substance is miscible with water in all proportions, and contributes only a very small vapor pressure of its own. Besides lowering the freezing point, antifreeze also raises the boiling point, increasing the operating range of the cooling system. The pure glycol freezes at –12.9°C and boils at 197°C, allowing water-glycol mixtures to be tailored to a wide range of conditions.
Estimate the freezing point of an antifreeze mixture made up by combining one volume of ethylene glycol (MW = 62, density 1.11 g cm –3 ) with two volumes of water.
Solution
Assume that we use 1 L of glycol and 2 L of water (the actual volumes do not matter as long as their ratios are as given.) The mass of the glycol will be 1.11 kg and that of the water will be 2.0 kg, so the total mass of the solution is 3.11 kg. We then have:
- number of moles of glycol: (1110 g) ÷ (62 g mol –1 ) = 17.9 mol
- molality of glycol: (17.9 mol) ÷ (2.00 kg) = 8.95 mol kg –1
- freezing point depression: Δ T F = (–1.86 K kg –1 mol) × (8.95 mol kg –1 ) = –16.6 K so the solution will freeze at about –17°C.
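The antifreeze arithmetic as a Python sketch (glycol density, molar mass, and Kf are taken from the text above; the function name is our own):

```python
def fp_change(Kf, mol_solute, kg_solvent):
    """Freezing point change; Kf is negative by the convention used here."""
    return Kf * mol_solute / kg_solvent

Kf_water = -1.86              # K kg/mol, from the table above
n_glycol = 1.11 * 1000 / 62   # 1 L of glycol at 1.11 g/cm3, MW 62 -> ~17.9 mol
dTf = fp_change(Kf_water, n_glycol, 2.00)   # 2 L of water = 2.00 kg
print(dTf)   # about -16.6 K, so the mixture freezes near -17 C
```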
Any ionic species formed by dissociation will also contribute to the freezing point depression. This can serve as a useful means of determining the fraction of a solute that is dissociated.
An aqueous solution of nitrous acid (HNO 2 , MW = 47) freezes at –0.198 °C. If the solution was prepared by adding 0.100 mole of the acid to 1000 g of water, what percentage of the HNO 2 is dissociated in the solution?
Solution
The nominal molality of the solution is (0.100 mol) ÷ (1.00 kg) = 0.100 mol kg –1 .
But the effective molality according to the observed Δ T F value is given by
Δ T F ÷ K F = (–0.198 K) ÷ (–1.86 K kg mol –1 ) = 0.1065 mol kg –1 ; this is the total molality of all solute species present after the dissociation reaction HNO 2 → H + + NO 2 – has occurred. If we let x = [H + ] = [NO 2 – ], then by stoichiometry [HNO 2 ] = 0.100 – x, and the total molality is (0.100 – x) + 2x = 0.100 + x = 0.1065, so x = 0.0065. The fraction of HNO 2 that is dissociated is 0.0065 ÷ 0.100 = 0.065, corresponding to about 6.5% dissociation of the acid.
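The arithmetic can be checked with a few lines of Python (a sketch; the key bookkeeping step is that the total particle molality after dissociation is (0.100 − x) + 2x = 0.100 + x):

```python
Kf = -1.86             # K kg/mol for water
dTf_observed = -0.198  # K, the measured freezing point depression
nominal = 0.100        # mol HNO2 per kg of water, as prepared

effective = dTf_observed / Kf   # total particle molality, ~0.1065 mol/kg
x = effective - nominal         # mol/kg of HNO2 that dissociated
alpha = x / nominal             # fraction of the acid dissociated
print(round(100 * alpha, 1))    # roughly 6.5 % dissociation
```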
Another Perspective of Freezing Point Depression and Boiling Point Elevation
A simple phase diagram can provide more insight into these phenomena. You may already be familiar with the phase map for water below.
The one shown below expands on this by plotting lines for both pure water and for its "diluted" state produced by the introduction of a non-volatile solute.
The normal boiling point of the pure solvent is indicated by the point where the vapor pressure curve intersects the 1-atm line — that is, where the escaping tendency of solvent molecules from the liquid is equivalent to 1 atmosphere pressure. Addition of a non-volatile solute reduces the vapor pressures to the values given by the blue line. This shifts the boiling point to the right , corresponding to the increase in temperature Δ T b required to raise the escaping tendency of the H 2 O molecules back up to 1 atm.
To understand freezing point depression, notice that the vapor pressure line of the solution intersects the curved black vapor pressure line of the solid (ice) at a new triple point, at which all three phases (ice, water vapor, and liquid water) are in equilibrium and thus exhibit equal escaping tendencies. This point is by definition the origin of the freezing (solid-liquid) line, which intersects the 1-atm line at a freezing point lowered by Δ T f .
Note that the above analysis assumes that the solute is soluble only in the liquid solvent, but not in its solid form. This is generally more or less true. For example, when arctic ice forms from seawater, the salts get mostly "squeezed" out. This has the interesting effect of making the water that remains more saline, and hence more dense, causing it to sink to the bottom part of the ocean where it gets taken up by the south-flowing deep current.
A Thermodynamics Perspective on Freezing and Boiling
Those readers who have some knowledge of thermodynamics will recognize that what we have been referring to as "escaping" tendency is really a manifestation of the Gibbs Energy . This schematic plot shows how the G 's for the solid, liquid, and gas phases of a typical substance vary with the temperature.
The rule is simple: the phase with the most negative free energy prevails.
The phase that is most stable (and which therefore is the only one that exists) is always the one having the most negative free energy (indicated here by the thicker portions of the plotted lines.) The melting and boiling points correspond to the respective temperatures where the solid and liquid and liquid and vapor have identical free energies.
As we saw above, adding a solute to the liquid dilutes it, making its free energy more negative, with the result that the freezing and boiling points are shifted to the left and right, respectively.
The relationships shown in these plots depend on the differing slopes of the lines representing the free energies of the phases as the temperature changes. These slopes are proportional to the entropy of each phase. Because gases have the highest entropies, the slope of the "gaseous solvent" line is much greater than that of the others. Note that this plot is not to scale.
8.5: Colligative Properties - Osmotic Pressure
- Define a semipermeable membrane in the context of osmotic flow.
- Explain, in simple terms, what fundamental process "drives" osmotic flow.
- What is osmotic pressure, and how is it measured?
- Osmotic pressure can be a useful means of estimating the molecular weight of a substance, particularly if its molecular weight is quite large. Explain in your own words how this works.
- What is reverse osmosis, and what is its principal application?
- Explain the role of osmotic pressure in food preservation, and give an example.
- Describe the role osmosis plays in the rise of water in plants (where is the semipermeable membrane?), and why it cannot be the only cause in very tall trees.
Osmosis is the process in which a liquid passes through a membrane whose pores permit the passage of solvent molecules but are too small for the larger solute molecules to pass through.
Semipermeable Membranes and Osmotic flow
Figure \(\PageIndex{1}\) shows a simple osmotic cell. Both compartments contain water, but the one on the right also contains a solute whose molecules (represented by green circles) are too large to pass through the membrane. Many artificial and natural substances are capable of acting as semi-permeable membranes. The walls of most plant and animal cells fall into this category.
If the cell is set up so that the liquid level is initially the same in both compartments, you will soon notice that the liquid rises in the left compartment and falls in the right side, indicating that water molecules from the right compartment are migrating through the semipermeable membrane and into the left compartment. This migration of the solvent is known as osmotic flow, or simply osmosis.
What is the force that drives the molecules through the membrane? This is a misleading question, because there is no real “force” in the physical sense other than the thermal energies all molecules possess. Osmosis is a consequence of simple statistics: the randomly directed motions of a collection of molecules will cause more to leave a region of high concentration than return to it; the escaping tendency of a substance from a phase increases with its concentration in the phase.
Diffusion and Osmotic Flow
Suppose you drop a lump of sugar into a cup of tea, without stirring. Initially there will be a very high concentration of dissolved sugar at the bottom of the cup, and a very low concentration near the top. Since the molecules are in random motion, there will be more sugar molecules moving from the high concentration region to the low concentration region than in the opposite direction. The motion of a substance from a region of high concentration to one of low concentration is known as diffusion . Diffusion is a consequence of a concentration gradient (which is a measure of the difference in escaping tendency of the substance in different regions of the solution).
There is really no special force on the individual molecules; diffusion is purely a consequence of statistics. Osmotic flow is simply diffusion of a solvent through a membrane impermeable to solute molecules. Now take two solutions of differing solvent concentration, and separate them by a semipermeable membrane (Figure \(\PageIndex{2}\)). Being semipermeable, the membrane is essentially invisible to the solvent molecules, so they diffuse from the high concentration region to the low concentration region just as before. This flow of solvent constitutes osmotic flow , or osmosis .
Figure \(\PageIndex{2}\): Osmotic flow. (a) Two sugar-water solutions of different concentrations, separated by a semipermeable membrane that passes water but not sugar. Osmosis will be to the right, since water is less concentrated there. (b) The fluid level rises until the back pressure \(\rho g h\) equals the relative osmotic pressure; then the net transfer of water is zero. (CC-BY; OpenStax).
Figure \(\PageIndex{2}\) shows water molecules (blue) passing freely in both directions through the semipermeable membrane, while the larger solute molecules remain trapped in the left compartment, diluting the water and reducing its escaping tendency from this cell, compared to the water in the right side. This results in a net osmotic flow of water from the right side which continues until the increased hydrostatic pressure on the left side raises the escaping tendency of the diluted water to that of the pure water at 1 atm, at which point osmotic equilibrium is achieved.
Osmotic flow is simply diffusion of a solvent through a membrane impermeable to solute molecules.
In the absence of the semipermeable membrane, diffusion would continue until the concentrations of all substances are uniform throughout the liquid phase. With the semipermeable membrane in place, and if one compartment contains the pure solvent, this can never happen; no matter how much liquid flows through the membrane, the solvent in the right side will always be more concentrated than that in the left side. Osmosis will continue indefinitely until we run out of solvent, or something else stops it.
Osmotic equilibrium and osmotic pressure
One way to stop osmosis is to raise the hydrostatic pressure on the solution side of the membrane. This pressure squeezes the solvent molecules closer together, raising their escaping tendency from the phase. If we apply enough pressure (or let the pressure build up by osmotic flow of liquid into an enclosed region), the escaping tendency of solvent molecules from the solution will eventually rise to that of the molecules in the pure solvent, and osmotic flow will cease. The pressure required to achieve osmotic equilibrium is known as the osmotic pressure . Note that the osmotic pressure is the pressure required to stop osmosis, not to sustain it.
Osmotic pressure is the pressure required to stop osmotic flow. It is common usage to say that a solution “has” an osmotic pressure of "x atmospheres". It is important to understand that this means nothing more than that a pressure of this value must be applied to the solution to prevent flow of pure solvent into this solution through a semipermeable membrane separating the two liquids.
Osmotic Pressure and Solute Concentration
The Dutch scientist Jacobus van 't Hoff (1852-1911) was one of the giants of physical chemistry. He discovered this equation after a chance encounter with a botanist friend during a walk in a park in Amsterdam; the botanist had learned that the osmotic pressure increases by about 1/273 for each degree of temperature increase. van 't Hoff immediately grasped the analogy to the ideal gas law. The osmotic pressure \(\Pi\) of a solution containing \(n\) moles of solute particles in a solution of volume \(V\) is given by the van 't Hoff equation :
\[\Pi = \dfrac{nRT}{V} \label{8.4.3}\]
in which
- \(R\) is the gas constant (0.0821 L atm mol –1 K –1 ) and
- \(T\) is the absolute temperature.
In contrast to the need to employ solute molality to calculate the effects of a non-volatile solute on changes in the freezing and boiling points of a solution, we can use solute molarity to calculate osmotic pressures.
Note that the fraction \(n/V\) corresponds to the molarity (\(M\)) of a solution of a non-dissociating solute, or to twice the molarity of a totally-dissociated solute such as \(NaCl\). In this context, molarity refers to the summed total of the concentrations of all solute species. Hence, Equation \ref{8.4.3} can be expressed as
\[\Pi =MRT \label{8.4.3B}\]
Recalling that \(\Pi\) is the Greek equivalent of P , the re-arranged form \(\Pi V = nRT\) of the above equation should look familiar. Much effort was expended around the end of the 19th century to explain the similarity between this relation and the ideal gas law , but in fact, the Van’t Hoff equation turns out to be only a very rough approximation of the real osmotic pressure law, which is considerably more complicated and was derived after van 't Hoff's formulation. As such, this equation gives valid results only for extremely dilute ("ideal") solutions.
According to the Van't Hoff equation, an ideal solution containing 1 mole of dissolved particles per liter of solvent at 0° C will have an osmotic pressure of 22.4 atm.
Sea water contains dissolved salts at a total ionic concentration of about 1.13 mol L –1 . What pressure must be applied to prevent osmotic flow of pure water into sea water through a membrane permeable only to water molecules?
Solution
This is a simple application of Equation \ref{8.4.3B}.
\[ \begin{align*} \Pi &= MRT \\[4pt] &= (1.13\; mol /L)(0.0821\; L \,atm \,mol^{–1}\; K^{–1})(298\; K) \\[4pt] &= 27.6\; atm \end{align*}\]
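Equation \ref{8.4.3B} is a one-liner in code. The Python sketch below (variable names are our own; M must be the total molarity of all solute particles) reproduces both the sea-water example and the 22.4-atm figure quoted earlier:

```python
R = 0.0821  # gas constant, L atm / (mol K)

def osmotic_pressure(M_particles, T_kelvin):
    """van 't Hoff equation: Pi = M R T (valid only for dilute solutions)."""
    return M_particles * R * T_kelvin

pi_sea = osmotic_pressure(1.13, 298)    # sea water: ~27.6 atm
pi_ideal = osmotic_pressure(1.00, 273)  # 1 mol of particles per L at 0 C: ~22.4 atm
print(round(pi_sea, 1), round(pi_ideal, 1))
```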
Molecular Weight Determination by Osmotic Pressure
Since all of the colligative properties of solutions depend on the concentration of the solvent, their measurement can serve as a convenient experimental tool for determining the concentration, and thus the molecular weight, of a solute. Osmotic pressure is especially useful in this regard, because a small amount of solute will produce a much larger change in this quantity than in the boiling point, freezing point, or vapor pressure; even a 10 –6 molar solution would have a measurable osmotic pressure. Molecular weight determinations are very frequently made on proteins or other high molecular weight polymers. These substances, owing to their large molecular size, tend to be only sparingly soluble in most solvents, so measurement of osmotic pressure is often the only practical way of determining their molecular weights.
The osmotic pressure of a benzene solution containing 5.0 g of polystyrene per liter was found to be 7.6 torr at 25°C. Estimate the average molecular weight of the polystyrene in this sample.
Solution:
osmotic pressure:
\[ \begin{align*} \Pi &= \dfrac{7.6\, torr}{760\, torr\, atm^{–1}} \\[4pt] &= 0.0100 \,atm \end{align*} \]
Using the form of the van 't Hoff equation (Equation \ref{8.4.3}), \(\Pi V = nRT\), the number of moles of polystyrene is
n = (0.0100 atm)(1 L) ÷ (0.0821 L atm mol –1 K –1 )(298 K) = 4.09 x 10 –4 mol
Molar mass of the polystyrene:
(5.0 g) ÷ (4.09 x 10 –4 mol) = 12200 g mol –1 .
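The same steps can be wrapped in a short Python sketch (a re-run of the polystyrene example above; the function name is our own):

```python
R = 0.0821  # gas constant, L atm / (mol K)

def molar_mass_from_osmotic(mass_g, volume_L, pi_torr, T_kelvin):
    """Estimate molar mass: n = Pi V / (R T), then M = mass / n."""
    pi_atm = pi_torr / 760          # convert torr -> atm
    n = pi_atm * volume_L / (R * T_kelvin)
    return mass_g / n

mw = molar_mass_from_osmotic(5.0, 1.0, 7.6, 298)
print(round(mw, -2))   # ~12200 g/mol
```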
The experiment to demonstrate this is quite simple: pure solvent is introduced into one side of a cell that is separated into two parts by a semipermeable membrane. The polymer solution is placed in the other side, which is enclosed and connected to a manometer or some other kind of pressure gauge. As solvent molecules diffuse into the solution cell the pressure builds up; eventually this pressure matches the osmotic pressure of the solution and the system is in osmotic equilibrium. The osmotic pressure is read from the measuring device and substituted into the van’t Hoff equation to find the number of moles of solute.
8.6: Reverse Osmosis
If it takes a pressure of \(Π\) atm to bring about osmotic equilibrium, then it follows that applying a hydrostatic pressure greater than this to the high-solute side of an osmotic cell will force water to flow back into the fresh-water side. This process, known as reverse osmosis , is now the major technology employed to desalinate ocean water and to reclaim "used" water from power plants, runoff, and even from sewage. It is also widely used to deionize ordinary water and to purify it for industrial uses (especially beverage and food manufacture) and drinking purposes.
Pre-treatment commonly employs activated-carbon filtration to remove organics and chlorine (which tends to damage RO membranes). Although bacteria are unable to pass through semipermeable membranes, the latter can develop pinhole leaks, so some form of disinfection is often advised. The efficiency and cost of RO is critically dependent on the properties of the semipermeable membrane.
Osmotic Generation of Electric Power
The osmotic pressure of seawater is almost 26 atm. Since a pressure of 1 atm will support a column of water 10.3 m high, this means that osmotic flow of fresh water through a semipermeable membrane into seawater could in principle support a column of the latter roughly 26 × 10.3 ≈ 270 m (nearly 900 ft) high!
So imagine an osmotic cell in which one side is supplied with fresh water from a river, and the other side with seawater. Osmotic flow of fresh water into the seawater side forces the latter up through a riser containing a turbine connected to a generator, thus providing a constant and fuel-less source of electricity. The key component of such a scheme, first proposed by an Israeli scientist in 1973 and known as pressure-retarded osmosis (PRO) is of course a semipermeable membrane capable of passing water at a sufficiently high rate.
The world's first experimental PRO plant was opened in 2009 in Norway. Its capacity is only 4 kW, but it serves as proof-in-principle of a scheme that is estimated capable of supplying up to 2000 terawatt-hours of energy worldwide. The semipermeable membrane operates at a pressure of about 10 atm and passes 10 L of water per second, generating about 1 watt per m 2 of membrane. PRO is but one form of salinity gradient power that depends on the difference between the salt concentrations in different bodies of water.
1 atm is equivalent to 1034 g cm –2 , so from the density of water we get (1034 g cm –2 ) ÷ (1 g cm –3 ) = 1034 cm ≈ 10.3 m.
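The column-height arithmetic above is easy to check numerically. The short Python sketch below uses standard values for water density, gravity, and the pascal-equivalent of one atmosphere; the function name is our own.

```python
# Height of a water column supported by a given pressure: h = P / (rho * g).
RHO_WATER = 1000.0   # kg/m^3, density of fresh water (approximate)
G = 9.81             # m/s^2, standard gravity
ATM = 101325.0       # Pa per atmosphere

def column_height_m(pressure_atm):
    """Height (m) of a fresh-water column supported by the given pressure."""
    return pressure_atm * ATM / (RHO_WATER * G)

print(round(column_height_m(1.0), 1))   # ~10.3 m per atmosphere
print(round(column_height_m(26.0)))     # ~269 m for seawater's ~26 atm
```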
Osmosis in Biology and Physiology
Because many plant and animal cell membranes and tissues tend to be permeable to water and other small molecules, osmotic flow plays an essential role in many physiological processes.
Normal saline solution
The interiors of cells contain salts and other solutes that dilute the intracellular water. If the cell membrane is permeable to water, placing the cell in contact with pure water will draw water into the cell, tending to rupture it. This is easily and dramatically seen if red blood cells are placed in a drop of water and observed through a microscope as they burst. This is the reason that "normal saline solution", rather than pure water, is administered in order to maintain blood volume or to infuse therapeutic agents during medical procedures.
In order to prevent irritation of sensitive membranes, one should always add some salt to water used to irrigate the eyes, nose, throat or bowel. Normal saline contains 0.91% w/v of sodium chloride, corresponding to 0.154 M, making its osmotic pressure close to that of blood.
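The claim that 0.154 M saline matches the osmotic pressure of blood can be estimated with the van't Hoff relation \(Π = iMRT\). In the sketch below, the dissociation factor \(i = 2\) for NaCl and the body-temperature value of 310 K are assumptions we supply; the result lands close to the ~7.6 atm osmotic pressure of blood.

```python
# Van't Hoff estimate of osmotic pressure: Pi = i * M * R * T.
R = 0.08206  # L·atm·mol⁻¹·K⁻¹, gas constant

def osmotic_pressure_atm(molarity, i=1, temp_k=310.0):
    """Osmotic pressure (atm), assuming an ideal solution."""
    return i * molarity * R * temp_k

# Normal saline: 0.154 M NaCl, assumed fully dissociated (i = 2), at 310 K.
pi_saline = osmotic_pressure_atm(0.154, i=2)
print(round(pi_saline, 1))  # ~7.8 atm, close to that of blood
```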
Food preservation
The drying of fruit, the use of sugar to preserve jams and jellies, and the use of salt to preserve certain meats, are age-old methods of preserving food. The idea is to reduce the water concentration to a level below that in living organisms. Any bacterial cell that wanders into such a medium will have water osmotically drawn out of it, and will die of dehydration. A similar effect is noticed by anyone who holds a hard sugar candy against the inner wall of the mouth for an extended time; the affected surface becomes dehydrated and noticeably rough when touched by the tongue.
In the food industry, what is known as water activity is measured on a scale of 0 to 1, where 0 indicates no water and 1 indicates pure water. Food spoilage micro-organisms, in general, are inhibited in food where the water activity is below 0.6. However, if the pH of the food is less than 4.6, micro-organisms are inhibited (but not immediately killed) when the water activity is below 0.85.
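The two-threshold rule just stated can be written as a small predicate. This is purely an illustration of the stated thresholds (0.60, 0.85, pH 4.6), not food-safety guidance.

```python
# The water-activity inhibition rule stated above, as a predicate.
def growth_inhibited(water_activity, ph):
    """True if spoilage micro-organisms are inhibited per the stated rule."""
    threshold = 0.85 if ph < 4.6 else 0.60
    return water_activity < threshold

print(growth_inhibited(0.80, ph=4.0))  # True: acidic food, a_w below 0.85
print(growth_inhibited(0.80, ph=6.5))  # False: a_w not below 0.60
```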
Diarrhea
The presence of excessive solutes in the bowel draws water from the intestinal walls, giving rise to diarrhea. This can occur when a food is eaten that cannot be properly digested (as, for example, milk in lactose-intolerant people). The undigested material contributes to the solute concentration, raising its osmotic pressure. The situation is made even worse if the material undergoes bacterial fermentation which results in the formation of methane and carbon dioxide, producing a frothy discharge.
Water Transport in Plants
Osmotic flow plays an important role in the transport of water from its source in the soil to its release by transpiration from the leaves; it is helped along by hydrogen-bonding forces between the water molecules. Capillary rise is not believed to be a significant factor.
Water enters the roots via osmosis, driven by the low water concentration inside the roots that is maintained by both the active (non-osmotic) transport of ionic nutrients from the soil and by the supply of sugars that are photosynthesized in the leaves. This generates a certain amount of root pressure which sends the water molecules on their way up through the vascular channels of the stem or trunk. But the maximum root pressures that have been measured can push water up only about 20 meters, whereas the tallest trees exceed 100 meters. Root pressure can be the sole driver of water transport in short plants, or even in tall ones such as trees that are not in leaf. Anyone who has seen apparently tender and fragile plants pushing their way up through asphalt pavement cannot help but be impressed!
But when taller plants are actively transpiring (losing water to the atmosphere), osmosis gets a boost from what plant physiologists call cohesion tension or transpirational pull . As each H 2 O molecule emerges from the opening in the leaf it pulls along the chain of molecules beneath it. So hydrogen-bonding is no less important than osmosis in the overall water transport process. If the soil becomes dry or saline, the osmotic pressure outside the root becomes greater than that inside the plant, and the plant suffers from “water tension”, i.e., wilting.
Do fish drink water? Do they Urinate?
The following section is a bit long, but for those who are interested in biology it offers a beautiful example of how the constraints imposed by osmosis have guided the evolution of ocean-living creatures into fresh-water species . It concerns ammonia NH 3 , a product of protein metabolism that is generated within all animals, but is highly toxic and must be eliminated.
Marine invertebrates (those that live in seawater) are covered in membranes that are fairly permeable to water and to small molecules such as ammonia. So water can diffuse in either direction as required, and ammonia can diffuse out as quickly as it forms. Nothing special here.
Invertebrates that live in fresh water do have a problem: the salt concentrations within their bodies are around 1%, much greater than in fresh water. For this reason they have evolved surrounding membranes that are largely impermeable to salts (to prevent their diffusion out of the body) and to water (to prevent osmotic flow in). But these organisms must also be able to exchange oxygen and carbon dioxide with their environment. The special respiratory organs (gills) that mediate this process, as a consequence of being permeable to these two gases, will also allow water molecules (whose sizes are comparable to those of the respiratory gases) to pass through. In order to protect fresh-water invertebrates from the disastrous effects of unlimited water inflow through the gill membranes, these animals possess special excretory organs that expel excess water back into the environment. Thus in such animals, there is a constant flow of water passing through the body. Ammonia and other substances that need to be excreted are taken up by this stream, which constitutes a continual flow of dilute urine.
Fishes fall into two general classes: most fish have bony skeletons and are known as teleosts. Sharks and rays have cartilage instead of bones, and are called elasmobranchs. For the teleosts that live in fresh water, the situation is very much the same as with fresh-water invertebrates; they take in and excrete water continuously. The fact that an animal lives in the water does not mean that it enjoys an unlimited supply of water. Marine teleosts have a more difficult problem. Their gills are permeable to water, as are those of marine invertebrates. But the salt content of seawater (about 3%), being higher than the about 1% in the fish’s blood, would draw water out of the fish. Thus these animals are constantly losing water, and would be liable to desiccation if water could freely pass out of their gills. Some does, of course, and with it goes most of its nitrogen in the form of NH 3 .
Thus most of the waste nitrogen exits not through the usual excretory organs as with most vertebrates, but through the gills. But in order to prevent excessive loss of water, the gills have reduced permeability to this water, and with it, to comparably-sized NH 3 . So in order to prevent ammonia toxicity, the remainder of it is converted to a non-toxic substance (trimethylamine oxide (CH 3 ) 3 NO) which is excreted via the kidneys.
The marine elasmobranchs solve the loss-of-water problem in another way: they convert waste ammonia to urea (NH 2 ) 2 CO, which is highly soluble and non-toxic. Their kidneys are able to control the quantity of urea excreted so that their blood retains about 2-2.5 percent of this substance. Combined with the 1 percent of salts and other substances in their blood, this raises the osmotic pressure within the animal to slightly above that of seawater. Thus the same mechanism that protects them from ammonia poisoning also ensures them an adequate water supply. | libretexts | 2025-03-17T19:53:11.661361 | 2017-02-17T01:39:52 | {
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.06%3A__Reverse_Osmosis",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "8.6: Reverse Osmosis",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.07%3A_Colligative_Properties_and_Entropy | 8.7: Colligative Properties and Entropy
All four solution effects (reduced vapor pressure, freezing point depression, boiling point elevation, osmotic pressure) result from “dilution” of the solvent by the added solute. Because of this commonality they are referred to as colligative properties (Lat. colligare , “to bind together”). The key role of the solvent concentration is obscured by the greatly-simplified expressions used to calculate the magnitude of these effects, in which only the solute concentration appears. The details of how to carry out these calculations and the many important applications of colligative properties are covered elsewhere. Our purpose here is to offer a more complete explanation of why these phenomena occur.
Basically, these all result from the effect of dilution of the solvent on its entropy, and thus from the increase in the density of energy states of the solvent in the solution compared to that in the pure liquid. Equilibrium between two phases (liquid-gas for boiling and solid-liquid for freezing) occurs when the energy states in each phase can be populated at equal densities. The temperatures at which this occurs are depicted by the shading.
- Dilution of the solvent adds new energy states to the liquid, but does not affect the vapor phase . This raises the temperature required to make equal numbers of microstates accessible in the two phases.
- Dilution of the solvent adds new energy states to the liquid, but does not affect the solid phase . This reduces the temperature required to make equal numbers of states accessible in the two phases.
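The magnitudes of these shifts are given by the "greatly-simplified expressions" mentioned earlier, \(\Delta T = K \cdot m\), in which only the solute molality \(m\) appears. A minimal sketch for aqueous solutions, using the standard constants for water (\(K_f\) = 1.86, \(K_b\) = 0.512 °C·kg·mol⁻¹) and assuming an ideal, non-dissociating solute:

```python
# Simplified colligative expressions: delta_T = K * m (m = solute molality).
# Constants are for water; an ideal, non-dissociating solute is assumed.
KF_WATER = 1.86    # °C·kg·mol⁻¹, freezing-point depression constant
KB_WATER = 0.512   # °C·kg·mol⁻¹, boiling-point elevation constant

def freezing_point_depression(molality, kf=KF_WATER):
    return kf * molality

def boiling_point_elevation(molality, kb=KB_WATER):
    return kb * molality

# 1.0 mol/kg of sucrose in water:
print(freezing_point_depression(1.0))  # 1.86 °C lowering of the freezing point
print(boiling_point_elevation(1.0))    # 0.512 °C raising of the boiling point
```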
Effects of pressure on the entropy: Osmotic Pressure
When a liquid is subjected to hydrostatic pressure (for example, by an inert, non-dissolving gas that occupies the vapor space above the surface), the vapor pressure of the liquid is raised. The pressure acts to compress the liquid very slightly, effectively narrowing the potential energy well in which the individual molecules reside and thus increasing their tendency to escape from the liquid phase. (Because liquids are not very compressible, the effect is quite small; a 100-atm applied pressure will raise the vapor pressure of water at 25°C by only about 2 torr.) In terms of the entropy, we can say that the applied pressure reduces the dimensions of the "box" within which the principal translational motions of the molecules are confined within the liquid, thus reducing the density of energy states in the liquid phase.
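The "about 2 torr" figure can be checked with the Poynting relation, \(\ln(P/P^o) = V_m\,\Delta P / RT\), which follows from the free-energy argument above. The molar volume and 25 °C vapor pressure of water used below are approximate literature values:

```python
import math

# Poynting correction: applied pressure raises a liquid's vapor pressure by
# ln(P/P0) = Vm * dP / (R * T). Values for water at 25 °C are approximate.
VM_WATER = 18.07e-6   # m^3/mol, molar volume of liquid water
R = 8.314             # J/(mol·K)
P0_TORR = 23.76       # equilibrium vapor pressure of water at 25 °C, torr

def vapor_pressure_under_load(applied_atm, temp_k=298.15):
    """Vapor pressure (torr) of water under the given applied pressure."""
    dp = applied_atm * 101325.0  # convert atm to Pa
    return P0_TORR * math.exp(VM_WATER * dp / (R * temp_k))

rise = vapor_pressure_under_load(100.0) - P0_TORR
print(round(rise, 1))  # ~1.8 torr, consistent with "about 2 torr"
```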
Applying hydrostatic pressure to a liquid increases the spacing of its microstates, so that the number of energetically accessible states in the gas, although unchanged, is relatively greater— thus increasing the tendency of molecules to escape into the vapor phase. In terms of free energy, the higher pressure raises the free energy of the liquid, but does not affect that of the gas phase.
This phenomenon can explain osmotic pressure . Osmotic pressure, students must be reminded, is not what drives osmosis, but is rather the hydrostatic pressure that must be applied to the more concentrated solution (more dilute solvent) in order to stop osmotic flow of solvent into the solution. The effect of this pressure \(\Pi\) is to slightly increase the spacing of solvent energy states on the high-pressure (dilute-solvent) side of the membrane to match that of the pure solvent, restoring osmotic equilibrium.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.07%3A_Colligative_Properties_and_Entropy",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "8.7: Colligative Properties and Entropy",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.08%3A_Ideal_vs._Real_Solutions | 8.8: Ideal vs. Real Solutions
Make sure you thoroughly understand the following essential ideas:
- Describe the physical reasons that a binary liquid solution might exhibit non-ideal behavior
The popular liquor vodka consists mainly of ethanol (ethyl alcohol) and water in roughly equal portions. Ethanol and water both have substantial vapor pressures, so both components contribute to the total pressure of the gas phase above the liquid in a closed container of the two liquids. One might expect the vapor pressure of a solution of ethanol and water to be equal to the sum of the values predicted by Raoult's law for the two liquids individually, but in general this does not happen. The reason can be understood if you recall that Raoult's law reflects a single effect: the smaller proportion of vaporizable molecules (and thus their reduced escaping tendency) when the liquid is diluted by an otherwise "inert" (non-volatile) substance.
Ideal Solutions
There are some solutions whose components follow Raoult's law quite closely. An example of such a solution is one composed of hexane C 6 H 14 and heptane C 7 H 16 . The total vapor pressure of this solution varies in a straight-line manner with the mole fraction composition of the mixture.
Note that the mole fraction scales at the top and bottom run in opposite directions, since by definition,
\[\chi_{hexane} = 1 – \chi_{heptane}\]
If this solution behaves ideally, the total vapor pressure is the sum of the Raoult's law plots for the two pure compounds:
\[P_{total} = P_{ heptane } + P_{ hexane }\]
An ideal solution is one whose vapor pressure follows Raoult's law throughout its range of compositions. Experience has shown solutions that approximate ideal behavior are composed of molecules having very similar structures. Thus hexane and heptane are both linear hydrocarbons that differ only by a single –CH 2 group. This provides a direct clue to the underlying cause of non-ideal behavior in solutions of volatile liquids. In an ideal solution, the interactions are there, but they are all energetically identical. Thus in an ideal solution of molecules A and B, A—A and B—B attractions are the same as A—B attractions. This is the case only when the two components are chemically and structurally very similar.
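The straight-line behavior described above is easy to reproduce numerically. In the sketch below, the pure-component vapor pressures at 25 °C (hexane ~151 torr, heptane ~46 torr) are approximate literature values, so treat them as assumptions:

```python
# Raoult's-law total vapor pressure of an ideal binary hexane/heptane solution.
P0_HEXANE = 151.0   # torr at 25 °C (approximate)
P0_HEPTANE = 46.0   # torr at 25 °C (approximate)

def total_pressure(x_hexane, p0_a=P0_HEXANE, p0_b=P0_HEPTANE):
    """P_total = x_A * P0_A + x_B * P0_B, with x_B = 1 - x_A."""
    return x_hexane * p0_a + (1.0 - x_hexane) * p0_b

print(total_pressure(0.0))   # pure heptane: 46.0 torr
print(total_pressure(0.5))   # equimolar mixture: 98.5 torr
print(total_pressure(1.0))   # pure hexane: 151.0 torr
```

Note that the result varies linearly between the two pure-component values, exactly the straight-line plot the text describes.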
Ideal Solutions vs. Ideal Gases
The ideal solution differs in a fundamental way from the definition of an ideal gas , defined as a hypothetical substance that follows the ideal gas law. The kinetic molecular theory that explains ideal gas behavior assumes that the molecules occupy no space and that intermolecular attractions are totally absent.
The definition of an ideal gas is clearly inapplicable to liquids, whose volumes directly reflect the volumes of their component molecules. And of course, the very ability of the molecules to form a condensed phase is due to the attractive forces between the molecules. So the most we can say about an ideal solution is that the attractions between all of its molecules are identical — that is, A-type molecules are as strongly attracted to other A molecules as to B-type molecules. Ideal solutions are perfectly democratic: there are no favorites.
Real Solutions
Real solutions are more like real societies, in which some members are "more equal than others." Suppose, for example, that unlike molecules are more strongly attracted to each other than are like molecules. This will cause A–B pairs that find themselves adjacent to each other to be energetically more stable than A–A and B–B pairs. At compositions in which significant numbers of both kind of molecules are present, their tendencies to escape the solution — and thus the vapor pressure of the solution, will fall below what it would be if the interactions between all the molecules were identical. This gives rise to a negative deviation from Raoult's law. The chloroform-acetone system, illustrated above, is a good example.
Conversely, if like molecules of each kind are more attracted to each other than to unlike ones, then the molecules that happen to be close to their own kind will be stabilized. At compositions approaching 50 mole-percent, A and B molecules near each other will more readily escape the solution, which will therefore exhibit a higher vapor pressure than would otherwise be the case. It should not be surprising that molecules as different as benzene and \(CS_2\) should interact more strongly with their own kind, hence the positive deviation illustrated here.
You will recall that all gases approach ideal behavior as their pressures approach zero. In the same way, as the mole fraction of either component approaches unity, the behavior of the solution approaches ideality. This is a simple consequence of the fact that at these limits, each molecule is surrounded mainly by its own kind, and the few A-B interactions will have little effect. Raoult's law is therefore a limiting law:
\[\lim_{\chi_i \rightarrow 1} \frac{P_i}{\chi_i} = P_i^o\]
it gives the partial pressure of a substance in equilibrium with the solution more and more closely as the mole fraction of that substance approaches unity. | libretexts | 2025-03-17T19:53:11.786136 | 2017-02-17T03:42:46 | {
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.08%3A_Ideal_vs._Real_Solutions",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "8.8: Ideal vs. Real Solutions",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.09%3A_Distillation | 8.9: Distillation
Make sure you thoroughly understand the following essential ideas:
- Sketch out a typical boiling point diagram for a binary liquid solution, and use this to show how a simple one-stage distillation works.
- Explain the role of the lever rule in fractional distillation
- Describe the purpose and function of a fractionating column
- Sketch out boiling point diagrams for high- and low-boiling azeotropes
- Describe the role of distillation in crude oil refining , and explain, in a very general way, how further processing is used to increase the yield of gasoline motor fuel.
Distillation is a process whereby a mixture of liquids having different vapor pressures is separated into its components. At first one might think that this would be quite simple: if you have a solution consisting of liquid A that boils at 50°C and liquid B with a boiling point of 90°C, all that would be necessary would be to heat the mixture to some temperature between these two values; this would boil off all the A (whose vapor could then be condensed back into pure liquid A), leaving pure liquid B in the pot. But that overlooks the fact that these liquids will have substantial vapor pressures at all temperatures, not only at their boiling points.
Vapor Pressure vs. Composition Phase Diagrams
To fully understand distillation, we will consider an ideal binary liquid mixture of \(\ce{A}\) and \(\ce{B}\). If the mole fraction of \(A\) in the mixture is \(\chi_A\), then by the definition of mole fraction, that of \(B\) is
\[\chi_B = 1 – \chi_A\]
Since distillation depends on the different vapor pressures of the components to be separated, let's first consider the vapor pressure vs. composition plots (Figure \(\PageIndex{1}\)) for a hypothetical mixture at some arbitrary temperature at which both liquid and gas phases can exist, depending on the total pressure.
In Figure \(\PageIndex{2}\), all states of the system (i.e., combinations of pressure and composition) in which the solution exists solely as a liquid are shaded in green. Since liquids are more stable at higher pressures, these states occupy the upper part of the diagram. At any given total vapor pressure such as at , the composition of the vapor in equilibrium with the liquid (designated by \(x_A\)) corresponds to the intercept with the diagonal equilibrium line at . The diagonal line is just an expression of the linearity between vapor pressure and composition according to Raoult's law .
The two liquid-vapor equilibrium lines (one curved, the other straight) now enclose an area in which liquid and vapor can coexist ; outside of this region, the mixture will consist entirely of liquid or of vapor. At this particular pressure , the intercept with the upper boundary of the two-phase region gives the mole fractions of A and B in the liquid phase, while the intercept with the lower boundary gives the mole fractions of the two components in the vapor.
Take a moment to study Figure \(\PageIndex{5}\) and to confirm that
- because both intercepts occur on equilibrium lines, they describe the compositions of the liquid and vapor that can simultaneously exist;
- the compositions of the vapor and liquid are not the same;
- in the vapor, the mole fraction of \(\ce{B}\) (the more volatile component of the solution) is greater than that in the liquid;
- in the liquid, the mole fraction of \(\ce{A}\) (the less volatile component) is smaller than that of the vapor.
The vapor in equilibrium with a solution of two or more liquids is always richer in the more volatile component.
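This rule follows directly from combining Raoult's law with Dalton's law: the mole fraction of a component in the vapor is \(y_i = \chi_i P_i^o / P_{total}\). The sketch below uses arbitrary illustrative pure-component pressures (with B the more volatile) to show that the vapor always comes out richer in B than the liquid:

```python
# Vapor composition over an ideal binary solution, from Raoult + Dalton:
# y_B = x_B * P0_B / P_total. Pure-component pressures are illustrative.
def vapor_mole_fraction_b(x_b, p0_a=50.0, p0_b=150.0):
    """Mole fraction of the more volatile component B in the vapor."""
    p_total = (1.0 - x_b) * p0_a + x_b * p0_b
    return x_b * p0_b / p_total

x_b = 0.50
y_b = vapor_mole_fraction_b(x_b)
print(round(y_b, 2))  # 0.75: the vapor is richer in B than the liquid (0.50)
```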
Temperatures vs. Composition Phase Diagrams (Boiling Point Diagrams)
The rule shown above suggests that if we heat a mixture sufficiently to bring its total vapor pressure into the two-phase region, we will have a means of separating the mixture into two portions which will be enriched in the more volatile and less volatile components respectively. This is the principle on which distillation is based. But what temperature is required to achieve this? Again, we will spare you the mathematical details, but it is possible to construct a plot similar to the Figure \(\PageIndex{4}\) except that the vertical axis represents temperature rather than pressure. This kind of plot is called a boiling point diagram .
Important Properties of Boiling Point Diagrams
Some important things to understand about Figure \(\PageIndex{6}\):
- The shape of the two-phase region is bi-convex, as opposed to the half-convex shape of the pressure-composition plot.
- The slope of the two-phase region is opposite to what we saw in the previous plot, and the areas corresponding to the single-phase regions are reversed. This simply reflects the fact that liquids having a higher vapor pressure boil at lower temperatures, and vice versa .
- The horizontal line that defines the temperature is called the tie line. Its intercepts with the two equilibrium curves specify the composition of the liquid and vapor in equilibrium with the mixture at the given temperature.
- The vapor composition line is also known as the dew point line — the temperature at which condensation begins on cooling.
- The liquid composition line is also called the bubble point line — the temperature at which boiling begins on heating.
The tie line shown in Figure \(\PageIndex{6}\) is for one particular temperature. But when we heat a liquid to its boiling point, the composition will change as the more volatile component (\(\ce{B}\) in these examples) is selectively removed as vapor. The remaining liquid will be enriched in the less volatile component, and its boiling point will consequently rise. To understand this process more thoroughly, let us consider the situation at several points during the distillation of an equimolar solution of \(\ce{A}\) and \(\ce{B}\).
Figure \(\PageIndex{5A}\): We begin with the liquid at T 1 , below its boiling point. When the temperature rises to T 2 , boiling begins and the first vapor (and thus the first drop of condensate) will have the composition y 2 .
Figure \(\PageIndex{5B}\): As the more volatile component B is boiled off, the liquid and vapor/condensate compositions shift to the left (orange arrows).
Figure \(\PageIndex{5C}\): At T 4 , the last trace of liquid disappears. The system is now entirely vapor, of composition y 4 .
Notice that the vertical green system composition line remains in the same location in the three plots because the "system" is defined as consisting of both the liquid in the "pot" and that in the receiving container which was condensed from the vapor. The principal ideas you should take away from this are that
- distillation can never completely separate two volatile liquids;
- the composition of the vapor and thus of the condensed distillate changes continually as each drop forms, starting at y 2 and ending at y 4 in this example;
- if the liquid is completely boiled away, the composition of the distillate will be the same as that of the original solution.
Laboratory Distillation Setup
The apparatus used for a simple laboratory batch distillation is shown here. The purpose of the thermometer is to follow the progress of the distillation; as a rough rule of thumb, the distillation should be stopped when the temperature rises to about half-way between the boiling points of the two pure liquids, which should be at least 20-30 C° apart (if they are closer, then fractional distillation , described below, becomes necessary).
Fractional Distillation
Although distillation can never achieve complete separation of volatile liquids, it can in principle be carried out in such a way that any desired degree of separation can be achieved if the solution behaves ideally and one is willing to go to the trouble. The general procedure is to distill only a fraction of the liquid, the smaller the better. The condensate, now enriched in the more volatile component, is then collected and re-distilled (again, only a small fraction), thus obtaining a condensate even-more-enriched in the more volatile component. If we repeat this sequence many times, we can eventually obtain almost-pure, if minute, samples of the two components.
But since this would hardly be practical, there is a better way. In order to understand it, you need to know about the lever rule , which provides a simple way of determining the relative quantities (not just the compositions) of two phases in equilibrium. The lever rule is easily derived from Raoult's and Dalton's laws , but we will simply illustrate it graphically (Figure \(\PageIndex{7}\)). The plot shows the boiling point diagram of a simple binary mixture of composition . At the temperature corresponding to the tie line, the composition of the liquid corresponds to and that of the vapor to .
So now for the lever rule: the relative quantities of the liquid and the vapor we identified above are given by the lengths of the tie-line segments labeled a and b . Thus in this particular example, in which b is about four times longer than a , we can say that the mole ratio of vapor (of composition ) to liquid (composition ) is 4.
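Stated algebraically, if the overall (system) composition is \(x_0\) on a tie line whose ends are the liquid composition \(x_{liq}\) and the vapor composition \(x_{vap}\), the fraction of the total moles present as vapor is \((x_0 - x_{liq})/(x_{vap} - x_{liq})\). A minimal sketch with assumed tie-line values chosen to reproduce the 4:1 ratio above:

```python
# Lever rule for a two-phase region: the mole fraction of the system that is
# vapor equals the liquid-side tie-line segment over the full tie-line length.
def vapor_fraction(x0, x_liq, x_vap):
    """Fraction of total moles in the vapor phase (lever rule)."""
    return (x0 - x_liq) / (x_vap - x_liq)

# Illustrative (assumed) tie line: liquid at 0.30, vapor at 0.55,
# overall composition 0.50 in the more volatile component.
f_vap = vapor_fraction(0.50, x_liq=0.30, x_vap=0.55)
print(round(f_vap / (1 - f_vap), 1))  # vapor:liquid mole ratio = 4.0
```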
Steps in Fractional Distillation
It is not practical to carry out an almost-infinite number of distillation steps to obtain nearly-infinitesimal quantities of the two pure liquids we wish to separate. So instead of collecting each drop of condensate and re-distilling it, we will distil half of the mixture in each step. Suppose you want to separate a liquid mixture composed of 20 mole-% B and 80 mole-% A, with B being the more volatile.
As we heat the mixture whose overall composition is indicated by , the first vapor is formed at T 0 and has the composition y 0 , found by extending the horizontal dashed line until it meets the vapor curve. This vapor is clearly enriched in B; if it is condensed, the resulting liquid will have a mole fraction x B approaching that of A in the original liquid. But this is only the first drop; we don't want to stop there!
As the liquid continues to boil, the boiling temperature rises. When it reaches T 1 , we will have boiled away half of the liquid. At this point, the "system" composition (liquid plus vapor) is still the same ( ), but is now equally divided between the liquid, which we call "residue" R 1 , and the condensed vapor, the distillate D 1 .
How do we know it is equally divided? We have picked T 1 so that the tie line is centered on the system concentration, so by the lever rule, R 1 and D 1 contain equal numbers of moles.
We now take the condensed liquid D 1 having the composition , and distill half of it, obtaining distillate of composition D 2 .
…and then carry out two more distillations, using D 2 and then D 3 as our feedstocks.
Our four-stage fractionation has enriched the more volatile solute from 20 to slightly over 80 mole-percent in D 4 . The less volatile component A is most concentrated in R 1 . R 2 through R 4 are thrown away (but not down the sink, please!)
This may be sufficient for some purposes, but we might wish to do much better, using perhaps 1000 stages instead of just 4. What could be more tedious?
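The stepwise enrichment above can be sketched by iterating an equilibrium-vapor relation. Assuming (our simplification, not from the text) a constant relative volatility \(\alpha\), each stage takes the vapor composition \(y = \alpha x / (1 + (\alpha - 1)x)\) as the feed for the next:

```python
# Successive equilibrium vaporizations with an assumed constant relative
# volatility alpha; each stage's vapor becomes the next stage's feed.
def equilibrium_vapor(x, alpha):
    """Mole fraction of the volatile component in the equilibrium vapor."""
    return alpha * x / (1.0 + (alpha - 1.0) * x)

x = 0.20       # start at 20 mole-% of the volatile component
alpha = 2.5    # assumed relative volatility
for stage in range(1, 5):
    x = equilibrium_vapor(x, alpha)
    print(stage, round(x, 2))
# Four stages carry the distillate from 0.20 to roughly 0.9, the same kind
# of enrichment as the four-step fractionation described above.
```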
Fractionation with reflux
Not to worry! The multiple successive distillations can be carried out "virtually" by inserting a fractionating column between the boiling flask and the condenser.
These columns are made with indentations or are filled with materials that provide a large surface area extending through the vertical temperature gradient (higher temperature near the bottom, lower temperature at the top.) The idea is that hot vapors condense at various levels in the column and the resulting liquid drips down ( refluxes ) to a lower level where it is vaporized, which corresponds roughly to a re-distillation.
Vigreux columns having multiple indentations are widely used (above right). Simple columns can be made by filling a glass tube with beads, short glass tubes, or even stainless steel kitchen-type scouring pads. More elaborate ones have spinning steel ribbons.
Separation efficiency: theoretical plates
The operation of fractionating columns can best be understood by reference to a bubble-cap column. The one shown here consists of four sections, or "plates" through which hot vapors rise and bubble up through pools of condensate that collect on each plate. The intimate contact between vapor and liquid promotes equilibration and re-distillation at successively higher temperatures at each higher plate in the column. Unlike the case of the step-wise fractional distillation we discussed above, none of the intermediate residues is thrown away; they simply drip down back into the pot where their fractionation journey begins again, always leading to a further concentration of the less-volatile component in the remaining liquid. At the same time, the vapor emerging from the top plate (5) provides a continuing flow of volatile-enriched condensate, although in diminishing quantities as it is depleted in the boiling pot.
If complete equilibrium is attained between the liquid and vapor at each stage, then we can describe the system illustrated above as providing "five theoretical plates" of separation (remember that the pot represents the first theoretical plate.) Equilibrium at each stage requires a steady-state condition in which the quantity of vapor moving upward at each stage is equal to the quantity of liquid draining downward — in other words, the column should be operating in total reflux, with no net removal of distillate. But a column operating at total reflux delivers no product, so any real distillation process must be operated at a reflux ratio that provides optimum separation in a reasonable period of time.
Some of the more advanced laboratory-type devices (such as some spinning-steel band columns) are said to offer up to around 200 theoretical plates of separating power.
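The number of theoretical plates needed for a given separation at total reflux can be estimated with the Fenske equation, a standard result of distillation theory that is not derived in this text. The purity targets and α below are purely illustrative:

```python
import math

def fenske_min_stages(xd, xb, alpha):
    """Minimum number of theoretical stages at total reflux (Fenske
    equation) for a binary mixture.
    xd, xb -- mole fraction of the light component in the distillate
              and in the bottoms, respectively
    alpha  -- relative volatility (assumed constant up the column)
    """
    return math.log((xd / (1 - xd)) * ((1 - xb) / xb)) / math.log(alpha)

# Illustrative numbers: enriching from 5% to 95% with alpha = 2
print(round(fenske_min_stages(0.95, 0.05, 2.0), 1))  # → 8.5
```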
Azeotropes: the Limits of Distillation
The boiling point diagrams presented in the foregoing section apply to solutions that behave in a reasonably ideal manner — that is, to solutions that do not deviate too far from Raoult's law. As we explained above, mixtures of liquids whose intermolecular interactions are widely different do not behave ideally, and may be impossible to separate by ordinary distillation. The reason for this is that under certain conditions, the compositions of the liquid and of the vapor in equilibrium with it become identical, precluding any further separation. These cross-over points appear as "kinks" in the boiling point diagrams.
High- and low-boiling azeotropes
Thus in this boiling point diagram for a mixture exhibiting a positive deviation from Raoult's law, successive fractionations of mixtures on either side of the azeotrope bring the distillation closer to the azeotropic composition indicated by the dashed vertical line. Once this point is reached, further distillation simply yields more of the same "low-boiling" azeotrope.

Distillation of a mixture having a negative deviation from Raoult's law leads to a similar stalemate, in this case yielding a "high-boiling" azeotrope. High- and low-boiling azeotropes are commonly referred to as constant-boiling mixtures , and they are more common than most people think.
"Breaking" an azeotrope
There are four general ways of dealing with azeotropes. The first two of these are known collectively as azeotropic distillation .
- Addition of a third substance that alters the intermolecular attractions is the most common trick. The drawback is that another procedure is usually needed to remove this other substance.
- Pressure-swing distillation takes advantage of the fact that boiling point ( T,X ) diagrams are two-dimensional slices of a ( T,X,P ) diagram in which the pressure is the third variable. This means that the azeotropic composition depends on the pressure, so distillation at some pressure other than 1 atm may allow one to "jump" the azeotrope.
- Use of a molecular sieve — a porous material that selectively adsorbs one of the liquids, most commonly water when the latter is present at a low concentration.
- Give up. It often happens that the azeotropic composition is sufficiently useful that it's not ordinarily worth the trouble of obtaining a more pure product. This accounts for the concentrations of many commercial chemicals such as mineral acids.
| mixture (with water) | azeotrope (wt %, type, boiling point) |
|---|---|
| Ethanol | 95.6%, low, 78.1°C |
| Hydrochloric acid | 20.2%, high, 108.6°C |
| Hydrofluoric acid | 35.6%, high, 111.3°C |
| Nitric acid | 68%, high, 120.5°C |
| Sulfuric acid | 98.3%, high, 338°C |
Distillation of Ethanol
Ethanol is one of the major industrial chemicals, and is of course the essential component of beverages that have been a part of civilization throughout recorded history. Most ethanol is produced by fermentation of the starch present in food grains, or of sugars formed by the enzymatic degradation of cellulose. Because ethanol is toxic to the organisms whose enzymes mediate the fermentation process, the ethanol concentration in the fermented mixture is usually limited to about 15%. The liquid phase of the mixture is then separated and distilled.
For applications requiring anhydrous ethanol ("absolute ethanol"), the most common method is the use of zeolite-based molecular sieves to adsorb the remaining water. Addition of benzene can break the azeotrope, and this was the most common production method in earlier years. For certain critical uses where the purest ethanol is required, it is synthesized directly from ethylene.
Special Distillation Methods
Here we briefly discuss two distillation methods that students are likely to encounter in more advanced organic lab courses.
Vacuum distillation : Many organic substances become unstable at high temperatures, tending to decompose, polymerize or react with other substances at temperatures around 200° C or higher. A liquid will boil when its vapor pressure becomes equal to the pressure of the gas above it, which is ordinarily that of the atmosphere. If this pressure is reduced, boiling can take place at a lower temperature. (Even pure water will boil at room temperature under a partial vacuum.) "Vacuum distillation" is of course a misnomer; a more accurate term would be "reduced-pressure distillation". Vacuum distillation is very commonly carried out in the laboratory and will be familiar to students who take more advanced organic lab courses. It is also sometimes employed on a large industrial scale.
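The drop in boiling temperature with pressure can be estimated from the integrated Clausius-Clapeyron equation. The sketch below uses an approximate heat of vaporization for water and assumes it is constant over the temperature range, which is only roughly true:

```python
import math

R = 8.314             # gas constant, J mol-1 K-1
DH_VAP_WATER = 40700  # approximate heat of vaporization of water, J mol-1

def boiling_T(p_atm, t1=373.15, p1=1.0, dh=DH_VAP_WATER):
    """Estimate the boiling temperature (K) at pressure p_atm from the
    integrated Clausius-Clapeyron equation,
        ln(p2/p1) = -(dH/R) (1/T2 - 1/T1),
    assuming dH_vap is constant (a rough approximation)."""
    return 1.0 / (1.0 / t1 - R * math.log(p_atm / p1) / dh)

# At about 0.03 atm (roughly 23 torr), water boils near room
# temperature, as the text notes:
print(round(boiling_T(0.03) - 273.15))  # → 21 (°C)
```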
The vacuum distillation setup is similar to that employed in ordinary distillation, with a few additions:
- The vacuum line is connected to the bent adaptor above the receiving flask.
- In order to avoid uneven boiling and superheating ("bumping"), the boiling flask is usually provided with a fine capillary ("ebulliator") through which an air leak produces bubbles that nucleate the boiling liquid.
- The vacuum is usually supplied by a mechanical pump, or less commonly by a water aspirator or a "house vacuum" line.
- The boiling flask is preferably heated by a water- or steam bath, which provides more efficient heat transfer to the flask and avoids localized overheating. Prior to about 1960, open flames were commonly used in student laboratories, resulting in occasional fires that enlivened the afternoon, but detracted from the student's lab marks.
- A Claisen-type distillation head (below) provides a convenient means of accessing the boiling flask for inserting an air leak capillary or introducing additional liquid through a separatory funnel. This Claisen-Vigreux head includes a fractionation column.
Steam Distillation : Strictly speaking, this topic does not belong in this unit, since steam distillation is used to separate immiscible liquids rather than solutions. But because immiscible liquid mixtures are not treated in elementary courses, we present a brief description of steam distillation here for the benefit of students who may encounter it in an organic lab course. A mixture of immiscible liquids will boil when their combined vapor pressures reach atmospheric pressure. This combined vapor pressure is just the sum of the vapor pressures of each liquid individually, and is independent of the quantities of each phase present .
Because water boils at 100° C, a mixture of water and an immiscible liquid (an "oil"), even one that has a high boiling point, is guaranteed to boil below 100°, so this method is especially valuable for separating high boiling liquids from mixtures containing non-volatile impurities. Of course the water-oil mixture in the receiving flask must itself be separated, but this is usually easily accomplished by means of a separatory funnel since their densities are ordinarily different.
There is a catch, however: the lower the vapor pressure of the oil, the greater is the quantity of water that co-distills with it. This is the reason for using steam: it provides a source of water able to continually restore that which is lost from the boiling flask. Distillation of a water-oil mixture without the introduction of additional steam will also work, and is actually used for some special purposes, but the yield of product will be very limited. Steam distillation is widely used in industries such as petroleum refining (where it is often called "steam stripping") and in the flavors-and-perfumes industry for the isolation of essential oils.
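Because each immiscible liquid contributes its own full vapor pressure, the mole ratio of water to oil in the distillate is simply P_water/P_oil at the boiling point. The sketch below quantifies the catch just described; the oil's vapor pressure and molar mass are invented illustrative values, not data from the text:

```python
def codistilled_water_mass(p_oil_torr, m_oil_g, p_total_torr=760.0,
                           mw_oil=154.0, mw_water=18.0):
    """Grams of water that co-distill with m_oil_g grams of oil in a
    steam distillation.  The oil's vapor pressure (p_oil_torr) and
    molar mass (mw_oil) are hypothetical illustrative inputs."""
    # At the boiling point the two partial pressures sum to the total:
    p_water = p_total_torr - p_oil_torr
    mol_oil = m_oil_g / mw_oil
    # vapor-phase mole ratio equals the ratio of vapor pressures:
    mol_water = mol_oil * p_water / p_oil_torr
    return mol_water * mw_water

# An oil with only 5 torr vapor pressure drags over a lot of water:
print(round(codistilled_water_mass(5.0, 10.0), 1))  # → 176.5 (g)
```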
The term essential oil refers to the aromas ("essences") of these [mostly simple] organic liquids which occur naturally in plants, from which they are isolated by steam distillation or solvent extraction. Steam distillation was invented in the 13th century by Ibn al-Baitar, one of the greatest of the scientists and physicians of the Islamic Golden Age in Andalusia.
Industrial-scale distillation and Petroleum fractionation
Distillation is one of the major "unit operations" of the chemical process industries, especially those connected with petroleum and biofuel refining, liquid air separation, and brewing. Laboratory distillations are typically batch operations and employ relatively simple fractionating columns to obtain a pure product. In contrast, industrial distillations are most often designed to produce mixtures having a desired boiling range rather than pure products.
Industrial operations commonly employ bubble-cap fractionating columns (seldom seen in laboratories), although packed columns are sometimes used. Perhaps the most distinctive feature of large scale industrial distillations is that they usually operate on a continuous basis in which the crude mixture is preheated in a furnace and fed into the fractionating column at some intermediate point. A reboiler unit maintains the bottom temperature at a constant value. The higher-boiling components then move down to a level at which they vaporize, while the lighter (lower-boiling) material moves upward to condense at an appropriate point.
Petroleum is a complex mixture of many types of organic molecules, mostly hydrocarbons, that were formed by the effects of heat and pressure on plant materials (mostly algae) that grew in regions that the earth's tectonic movements buried over periods of millions of years. This mixture of liquid and gases migrates up through porous rock until it is trapped by an impermeable layer of sedimentary rock. The molecular composition of crude oil (the liquid fraction of petroleum) is highly variable, although its overall elemental makeup generally reflects that of typical plants.
| element | carbon | hydrogen | nitrogen | oxygen | sulfur | metals |
|---|---|---|---|---|---|---|
| amount | 83-87% | 10-14% | 0.1-2% | 0.1-1.5% | 0.5-6% | |
The principal molecular constituents of crude oil are
- Alkanes: Also known as paraffins , these are saturated linear- or branched-chain molecules having the general formula C n H 2 n +2 in which n is mostly between 5 and 40.
- Unsaturated aliphatics: Linear- or branched-chain molecules containing one or more double or triple bonds (alkenes or alkynes).
- Cycloalkanes: Also known as naphthenes , these are saturated hydrocarbons C n H 2 n containing one or more ring structures.
- Aromatic hydrocarbons: These contain one or more benzene rings, often fused, and often bearing hydrocarbon side-chains.
The word gasoline predates its use as a motor fuel; it was first used as a topical medicine to rid people of head lice, and to remove grease spots and stains from clothing. The first major step of refining is to fractionate the crude oil into various boiling ranges.
| boiling range | fraction name | further processing |
|---|---|---|
| < 30° | butane and propane | gas processing |
| 30 - 210° | straight-run gasoline | blending into motor gasoline |
| 100 - 200° | naphtha | reforming into gasoline components |
| 150 - 250° | kerosene | jet fuel blending |
| 160 -400° | light gas oil | distillate fuel blending into diesel or fuel oil |
| 315 - 540° | heavy gas oil | catalytic cracking: large molecules are broken up into smaller ones and recycled |
| >450° | asphalts, bottoms | may be vacuum-distilled into more fractions |
Further processing and blending
About 16% of crude oil is diverted to the petrochemical industry where it is used to make ethylene and other feedstocks for plastics and similar products. Because the fraction of straight-run gasoline is inadequate to meet demand, some of the lighter fractions undergo reforming and the heavier ones cracking and are recycled into the gasoline stream. These processes necessitate a great amount of recycling and blending, into which must be built a considerable amount of flexibility in order to meet seasonal needs (more volatile gasolines and heating fuel oil in winter, more total gasoline volumes in the summer.) | libretexts | 2025-03-17T19:53:11.980480 | 2013-10-03T01:38:05 | {
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.09%3A_Distillation",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "8.9: Distillation",
"author": "Stephen Lower"
} |
8.10: Ions and Electrolytes
Electrolytic solutions are those that are capable of conducting an electric current. A substance that, when added to water, renders it conductive, is known as an electrolyte . A common example of an electrolyte is ordinary salt, sodium chloride. Solid NaCl and pure water are both non-conductive, but a solution of salt in water is readily conductive. A solution of sugar in water, by contrast, is incapable of conducting a current; sugar is therefore a non-electrolyte .
8.10.9A: Electrolytes and Electrolytic Solutions
Make sure you thoroughly understand the following essential ideas:
- Describe the properties of water that make it an ideal electrolytic solvent.
- Describe the general structure of ionic hydration shells.
- Explain why all cations act as acids in water.
- Describe some of the major ways in which the conduction of electricity through a solution differs from metallic conduction.
- Define resistance, resistivity, conductance, and conductivity.
- Define molar conductivity and explain its significance.
- Explain the major factors that cause molar conductivity to diminish as electrolyte concentrations increase.
- Describe the contrasting behavior of strong, intermediate, and weak electrolytes.
- Explain the distinction between ionic diffusion and ionic migration.
- Define the limiting ionic conductivity, and comment on some of its uses.
- Explain why hydrogen- and hydroxide ions exhibit exceptionally large ionic mobilities.
Electrolytic solutions are those that are capable of conducting an electric current. A substance that, when added to water, renders it conductive, is known as an electrolyte . A common example of an electrolyte is ordinary salt, sodium chloride. Solid NaCl and pure water are both non-conductive, but a solution of salt in water is readily conductive. A solution of sugar in water, by contrast, is incapable of conducting a current; sugar is therefore a non-electrolyte .
These facts have been known since 1800 when it was discovered that an electric current can decompose the water in an electrolytic solution into its elements (a process known as electrolysis ). By mid-century, Michael Faraday had made the first systematic study of electrolytic solutions. Faraday recognized that for a sample of matter to conduct electricity, two requirements must be met:
- The matter must be composed of, or contain, electrically charged particles.
- These particles must be mobile ; that is, they must be free to move under the influence of an external applied electric field.
In metallic solids, the charge carriers are electrons rather than ions; their mobility is a consequence of the quantum-mechanical uncertainty principle which promotes the escape of the electrons from the confines of their local atomic environment. In the case of electrolytic solutions, Faraday called these charge carriers ions (after the Greek word for "wanderer"). His most important finding was that each kind of ion (which he regarded as an electrically-charged atom) carries a definite amount of charge, most commonly in the range of ±1-3 units.
The fact that the smallest charges observed had magnitudes of ±1 unit suggested an "atomic" nature for electricity itself, and led in 1891 to the concept of the "electron" as the unit of electric charge — although the identification of this unit charge with the particle we now know as the electron was not made until 1897.
An ionic solid such as NaCl is composed of charged particles, but these are held so tightly in the crystal lattice that they are unable to move about, so the second requirement mentioned above is not met and solid salt is not a conductor. If the salt is melted or dissolved in water, the ions can move freely and the molten liquid or the solution becomes a conductor.
Since positively-charged ions are attracted to a negative electrode that is traditionally known as the cathode , these are often referred to as cations . Similarly, negatively-charged ions, being attracted to the positive electrode, or anode , are called anions . (These terms were all coined by Faraday.)
The role of the solvent: what's special about water
Although we tend to think of the solvent (usually water) as a purely passive medium within which ions drift around, it is important to understand that electrolytic solutions would not exist without the active involvement of the solvent in reducing the strong attractive forces that hold solid salts and molecules such as HCl together. Once the ions are released, they are stabilized by interactions with the solvent molecules. Water is not the only liquid capable of forming electrolytic solutions, but it is by far the most important. It is therefore essential to understand those properties of water that influence the stability of ions in aqueous solution.
According to Coulomb's law, the force between two charged particles is directly proportional to the product of the two charges, and inversely proportional to the square of the distance between them:

\[F = \dfrac{q_1 q_2}{D r^2}\]

The proportionality constant D is the dimensionless dielectric constant . Its value in empty space is unity, but in other media it will be larger. Since D appears in the denominator, this means that the force between two charged particles within a gas or liquid will be less than if the particles were in a vacuum. Water has one of the highest dielectric constants of any known liquid; the exact value varies with the temperature, but 80 is a good round number to remember. When two oppositely-charged ions are immersed in water, the force acting between them is only 1/80 as great as it would be between the two gaseous ions at the same distance. It can be shown that in order to separate one mole of Na + and Cl – ions at their normal distance of 236 pm in solid sodium chloride, the work required will be 586 kJ in a vacuum, but only 7.3 kJ in water.
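The figures just quoted can be checked directly from Coulomb's law. With the physical constants used below (standard values supplied from memory), the vacuum result comes out near 589 kJ, within rounding of the quoted 586 kJ:

```python
import math

E = 1.602e-19     # elementary charge, C
EPS0 = 8.854e-12  # vacuum permittivity, F m-1
NA = 6.022e23     # Avogadro's number, mol-1

def separation_work_kj(r_m, dielectric=1.0):
    """Work (kJ per mole of ion pairs) to pull singly charged ions
    apart from distance r_m to infinity:
        w = N_A * e^2 / (4 pi eps0 * D * r)."""
    w_pair = E**2 / (4 * math.pi * EPS0 * dielectric * r_m)
    return NA * w_pair / 1000.0

r = 236e-12  # Na+ -- Cl- distance in the NaCl lattice, m
print(round(separation_work_kj(r)))         # → 589 (kJ, vacuum)
print(round(separation_work_kj(r, 80), 1))  # → 7.4 (kJ, in water)
```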
Dielectric Constant
The dielectric constant is a bulk property of matter, rather than being a property of the molecule itself, as is the dipole moment. It is a cooperative effect of all the molecules in the liquid, and is a measure of the extent to which an applied electric field will cause the molecules to line up with the negative ends of their dipoles pointing toward the positive direction of the electric field. The high dielectric constant of water is a consequence of the small size of the H 2 O molecule in relation to its large dipole moment.
When one molecule is reoriented by the action of an external electric field, local hydrogen bonding tends to pull neighboring molecules into the same alignment, thus producing an additive effect that would be absent if the molecules were all acting independently.
Water's dipole moment stabilizes dissolved ions through hydrogen bonding
When an ion is introduced into a solvent, the attractive interactions between the solvent molecules must be disrupted to create space for the ion. This costs energy and would by itself tend to inhibit dissolution. However, if the solvent has a high permanent dipole moment, the energy cost is more than recouped by the ion-dipole attractions between the ion and the surrounding solvent molecules.
Water, as you know, has a sizeable dipole moment that is the source of the hydrogen bonding that holds the liquid together. The greater strength of ion-dipole attractions compared to hydrogen-bonding (dipole-dipole) attractions stabilizes the dissolved ion.
Water is not the only electrolytic solvent, but it is by far the best. For some purposes, chemists occasionally need to employ non-aqueous solvents when studying electrolytes. Here are a few examples:
| solvent | melting point (°C) | boiling point (°C) | dielectric constant D | dipole moment (debye) | specific conductivity (S/cm) |
|---|---|---|---|---|---|
| water | 0 | 100 | 80.1 | 1.87 | 5.5 × 10 –8 |
| methanol | –98 | 64.7 | 32.7 | 1.7 | 1.5× 10 –7 |
| ethanol | –114 | 78.3 | 24.5 | 1.66 | 1.35 × 10 –9 |
| acetonitrile | –43.8 | 81.6 | 37.5 | 3.45 | 7 × 10 –6 |
| dimethyl sulfoxide | 18.5 | 189 | 46.7 | 3.96 | 3 × 10 –8 |
| ethylene carbonate | 36.4 | 238 | 89.6 | 4.87 | < 1 × 10 –9 |
8.10.9B: The nature of ions in aqueous solution
The kinds of ions we will consider in this lesson are mostly those found in solutions of common acids or salts. As is evident from the image below, most of the metallic elements form monatomic cations, but the number of monatomic anions is much smaller. This reflects the fact that many single-atom anions such as hydride H – , oxide O 2– , sulfide S 2– and those in Groups 15 and 16 , are unstable in ( i.e. , react with) water, and their major forms are those in which they are combined with other elements, particularly oxygen. Some of the more familiar oxyanions are hydroxide OH – , carbonate CO 3 2– , nitrate NO 3 – , sulfate SO 4 2– , perchlorate ClO 4 – , and arsenate AsO 4 3– .
The purest water that can be prepared is known as "conductivity water", with κ = 0.043 × 10 –6 S cm –1 at 18°C. Ordinary distilled water in equilibrium with atmospheric CO 2 has a conductivity that is 16 times greater.
It is now known that ordinary distillation cannot entirely remove all impurities from water. Ionic impurities get entrained into the fog created by breaking bubbles and are carried over into the distillate by capillary flow along the walls of the apparatus. Organic materials tend to be steam-volatile ("steam-distilled").
The best current practice is to employ a special still made of fused silica in which the water is volatilized from its surface without boiling. Complete removal of organic materials is accomplished by passing the water vapor through a column packed with platinum gauze heated to around 800°C through which pure oxygen gas is passed to ensure complete oxidation of carbon compounds.
Conductance measurements are widely used to gauge water quality, especially in industrial settings in which concentrations of dissolved solids must be monitored in order to schedule maintenance of boilers and cooling towers.
The conductance of a solution depends on 1) the concentration of the ions it contains, 2) on the number of charges carried by each ion, and 3) on the mobilities of these ions. The latter term refers to the ability of the ion to make its way through the solution, either by ordinary thermal diffusion or in response to an electric potential gradient.
The first step in comparing the conductances of different solutes is to reduce them to a common concentration. For this, we define the conductance per unit concentration which is known as the molar conductivity , denoted by the upper-case Greek lambda :
\[Λ = \dfrac{κ}{c}\]
When κ is expressed in S cm –1 , c should be in mol cm –3 , so Λ will have the units S cm 2 mol –1 . This is best visualized as the conductance of a cell having 1-cm 2 electrodes spaced 1 cm apart — that is, of a 1 cm cube of solution. But because chemists generally prefer to express concentrations in mol L –1 or mol dm –3 (mol/1000 cm 3 ) , it is common to write the expression for molar conductivity as
\[Λ = \dfrac{1000κ}{c}\]
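In code this conversion is a one-liner. The κ value for 0.10 mol/L KCl used below is a standard conductivity-calibration figure quoted from memory, not taken from this text:

```python
def molar_conductivity(kappa_S_per_cm, c_mol_per_L):
    """Molar conductivity (S cm2 mol-1) from the measured conductivity
    kappa (S/cm) and the molar concentration: Lambda = 1000*kappa/c."""
    return 1000.0 * kappa_S_per_cm / c_mol_per_L

# 0.10 mol/L KCl has kappa ≈ 0.0129 S/cm at 25 °C (a common
# calibration value, supplied here from memory):
print(round(molar_conductivity(0.0129, 0.10)))  # → 129
```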
With c in mol L –1 , this corresponds to the conductance of a 1000-cm 3 portion of the solution held between 1000-cm 2 electrodes, separated again by 1 cm. But if c is the concentration in moles per liter, this will still not fairly compare two salts having different stoichiometries, such as AgNO 3 and FeCl 3 , for example. If we assume that both salts dissociate completely in solution, each mole of AgNO 3 yields two moles of charges, while FeCl 3 releases six (i.e., one Fe 3+ ion, and three Cl – ions.) So if one neglects the [rather small] differences in the ionic mobilities, the molar conductivity of FeCl 3 would be three times that of AgNO 3 .
The most obvious way of getting around this is to note that one mole of a 1:1 salt such as AgNO 3 is "equivalent" (in this sense) to 1/3 of a mole of FeCl 3 , and to ½ a mole of MgBr 2 . To find the number of equivalents that correspond to a given quantity of a salt, just multiply the number of moles by the total number of positive charges in the formula unit. (If you like, you can use the number of negative charges instead; because these substances are electrically neutral, the numbers will be identical.)
We can refer to equivalent concentrations of individual ions as well as of neutral salts. Also, since acids can be regarded as salts of H + , we can apply the concept to them; thus a 1 mol L –1 solution of sulfuric acid H 2 SO 4 has a concentration of 2 eq L –1 .
The following diagram summarizes the relation between moles and equivalents for CuCl 2 :
What is the concentration, in equivalents per liter, of a solution made by dissolving 4.2 g of chromium(III) sulfate pentahydrate Cr 2 (SO 4 ) 3 ·5H 2 O in sufficient water to give a total volume of 500 mL? (The molar mass of the hydrate is 482 g mol –1 .)
Solution
The salt dissociates to yield 6 positive charges per formula unit (two Cr 3+ ions) and 6 negative charges (three SO 4 2– ions).
- Number of moles of the salt: (4.2 g) / (482 g mol –1 ) = 0.00871 mol
- Number of equivalents: (0.00871 mol) × (6 eq mol –1 ) = 0.0523 eq
- Equivalent concentration: (0.0523 eq) / (0.5 L) = 0.105 eq L –1
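The same bookkeeping in Python — equivalents are moles times the number of positive charges per formula unit, illustrated with the chromium(III) sulfate figures worked above:

```python
def equivalent_concentration(mass_g, molar_mass, charges_per_formula, volume_L):
    """Equivalents per liter: moles of salt times the number of
    positive charges per formula unit, divided by solution volume."""
    moles = mass_g / molar_mass
    return moles * charges_per_formula / volume_L

# Chromium(III) sulfate pentahydrate: 2 Cr3+ = 6 positive charges
print(round(equivalent_concentration(4.2, 482.0, 6, 0.5), 3))  # → 0.105
```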
The concept of equivalent concentration allows us to compare the conductances of different salts in a meaningful way. Equivalent conductivity is defined similarly to molar conductivity
\[Λ = \dfrac{κ}{c}\]
except that the concentration term is now expressed in equivalents per liter instead of moles per liter. (In other words, the equivalent conductivity of an electrolyte is the conductance per equivalent per liter.)
8.10.9C: Weak and Strong Electrolytes
The serious study of electrolytic solutions began in the latter part of the 19th century, mostly in Germany — and before the details of dissociation and ionization were well understood. These studies revealed that the equivalent conductivities of electrolytes all diminish with concentration (or more accurately, with the square root of the concentration), but they do so in several distinct ways that are distinguished by their behaviors at very small concentrations. This led to the classification of electrolytes as weak, intermediate, and strong.
You will notice that plots of conductivities vs. √ c start at c =0. It is of course impossible to measure the conductance of an electrolyte at vanishingly small concentrations (not to mention zero!), but for strong and intermediate electrolytes, one can extrapolate a series of observations to zero. The resulting values are known as limiting equivalent conductances or sometimes as "equivalent conductances at infinite dilution", designated by Λ°.
- Strong electrolytes
-
These well-behaved systems include many simple salts such as NaCl, as well as all strong acids.
The Λ vs. √c plots closely follow the linear relation - Λ = Λ° – b √ c
- Intermediate electrolytes
- These "not-so-strong" salts can't quite conform to the linear equation above, but their conductivities can be extrapolated to infinite dilution.
- Weak electrolytes
- "Less is more" for these oddities, which approach complete dissociation only at extreme dilution. Their conductivity plots rise so steeply at very small concentrations that Λ° cannot be estimated by extrapolation, but there is a clever work-around.
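For strong electrolytes, the extrapolation to Λ° is just a linear fit of Λ against √c. The data below are synthetic, generated from the Kohlrausch relation with values loosely mimicking a strong 1:1 salt such as KCl (illustrative only, not measurements from the text):

```python
import math

# Synthetic data: Lambda = Lambda0 - b*sqrt(c), with Lambda0 = 149.9
# S cm2/mol and b = 95 (roughly KCl-like values, from memory):
conc = [0.0005, 0.001, 0.005, 0.01, 0.02]            # mol/L
lam  = [149.9 - 95.0 * math.sqrt(c) for c in conc]   # S cm2/mol

# Ordinary least-squares fit of Lambda vs sqrt(c); the intercept is
# the extrapolated limiting conductivity Lambda0.
x = [math.sqrt(c) for c in conc]
n = len(x)
xbar, ybar = sum(x) / n, sum(lam) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, lam))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar
print(round(intercept, 1), round(slope, 1))  # → 149.9 -95.0
```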
Conductivity diminishes as concentrations increase
Since ions are the charge carriers, we might expect the conductivity of a solution to be directly proportional to their concentrations in the solution. So if the electrolyte is totally dissociated, the conductivity should be directly proportional to the electrolyte concentration. But this ideal behavior is never observed; instead, the conductivity of electrolytes of all kinds diminishes as the concentration rises.
The non-ideality of electrolytic solutions is also reflected in their colligative properties , especially freezing-point depression and osmotic pressure . The primary cause of this is the presence of the ionic atmosphere that was introduced above. To the extent that ions having opposite charge signs are more likely to be closer together, we would expect their charges to partially cancel, reducing their tendency to migrate in response to an applied potential gradient.
A secondary effect arises from the fact that as an ion migrates through the solution, its counter-ion cloud does not keep up with it. Instead, new counter-ions are continually acquired on the leading edge of the motion, while existing ones are left behind on the opposite side. It takes some time for the lost counter-ions to dissipate, so there are always more counter-ions on the trailing edge. The resulting asymmetry of the counter-ion field exerts a retarding effect on the central ion, reducing its rate of migration, and thus its contribution to the conductivity of the solution.
The quantitative treatment of these effects was first worked out by P. Debye and E. Hückel in the early 1920's, and was improved upon by L. Onsager a few years later. This work represented one of the major advances in physical chemistry in the first half of the 20th Century, and put the behavior of electrolytic solutions on a sound theoretical basis. Even so, the Debye-Hückel theory breaks down for concentrations in excess of about 10 –3 mol L –1 for most ions.
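The Debye-Hückel-Onsager treatment predicts the slope of the Kohlrausch line, Λ = Λ° − (A + BΛ°)√c, from Λ° itself. For 1:1 electrolytes in water at 25 °C the constants A ≈ 60.2 and B ≈ 0.229 are commonly quoted values, supplied here from memory rather than from this text:

```python
def onsager_slope(lambda0, A=60.2, B=0.229):
    """Theoretical Kohlrausch slope (A + B*Lambda0) for a 1:1
    electrolyte in water at 25 °C.  A (electrophoretic term) and B
    (relaxation term) are commonly quoted literature values."""
    return A + B * lambda0

# For KCl (Lambda0 ≈ 149.9 S cm2/mol) the predicted slope is close
# to the experimentally observed value near 95:
print(round(onsager_slope(149.9), 1))  # → 94.5
```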
Not all Electrolytes Totally Dissociate in Solution
The slight curvature of the plots for strong electrolytes is largely explained by the effects discussed immediately above. The existence of intermediate electrolytes served as the first indication that many salts are not completely ionized in water; this was soon confirmed by measurements of their colligative properties. The curvature of the plots for intermediate electrolytes is a simple consequence of the Le Chatelier principle, which predicts that the equilibrium
\[MX_{(aq)} \rightleftharpoons M^+_{(aq)} + X^-_{(aq)}\]
will shift to the left as the concentration of the "free" ions increases. In more dilute solutions, the actual concentrations of these ions are smaller, but their fractional abundance in relation to the undissociated form is greater. As the solution approaches zero concentration, virtually all of the \(MX_{(aq)}\) becomes dissociated, and the conductivity reaches its limiting value.
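This approach of complete dissociation can be made concrete with the equilibrium expression K = cα²/(1 − α) for the dissociated fraction α of MX at analytical concentration c. A short Python sketch (the value of K here is purely illustrative, not taken from the text) shows α climbing toward unity on dilution:

```python
import math

def alpha(K, c):
    """Fraction of MX dissociated at analytical concentration c (mol/L),
    from K = c*alpha^2 / (1 - alpha); positive root of c*a^2 + K*a - K = 0."""
    return (-K + math.sqrt(K * K + 4 * K * c)) / (2 * c)

K = 10.0  # illustrative constant for an "intermediate" electrolyte
for c in (1.0, 0.1, 0.01, 0.001):
    print(f"c = {c:6.3f} M   alpha = {alpha(K, c):.4f}")
```

With K = 10, roughly 92% of the salt is dissociated at 1 M, rising past 99.9% by 0.001 M, in line with the argument above.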
Weak electrolytes are dissociated only at extremely high dilution
| weak electrolyte | formula | dissociation constant |
|---|---|---|
| hydrofluoric acid | HF | K a = 10 –3.2 |
| acetic acid | CH 3 COOH | K a = 10 –4.8 |
| bicarbonate ion | HCO 3 – | K a = 10 –10.3 |
| ammonia | NH 3 | K b = 10 –4.7 |
Dissociation, of course, is a matter of degree. The equilibrium constants for the dissociation of an intermediate electrolyte salt MX are typically in the range of 1-200. This stands in contrast to the large number of weak acids (as well as weak bases) whose dissociation constants typically range from 10 –3 to smaller than 10 –10 .
These weak electrolytes, like the intermediate ones, will be totally dissociated at the limit of zero concentration; if the scale of the weak-electrolyte plot (blue) shown above were magnified by many orders of magnitude, the curve would resemble that for the intermediate electrolyte above it, and a value for Λ° could be found by extrapolation. But at such a high dilution, the conductivity would be so minute that it would be masked by that of water itself (that is, by the H + and OH – ions in equilibrium with the massive 55.6 mol L –1 concentration of water), making values of Λ in this region virtually unmeasurable.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.10%3A_Ions_and_Electrolytes/8.10.9C%3A_8.10.9C%3A__Weak_and_Strong_Electrolytes",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "8.10.9C: Weak and Strong Electrolytes",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.10%3A_Ions_and_Electrolytes/8.10.9D%3A_8.10.9D%3A_Ionic_migration | 8.10.9D: Ionic migration
The motion of ions in solution is mainly random
The conductance of an electrolytic solution results from the movement of the ions it contains as they migrate toward the appropriate electrodes. But the picture we tend to have in our minds of these ions moving in an orderly, direct march toward an electrode is wildly mistaken. The thermally-induced random motions of molecules are known as diffusion. The term migration refers specifically to the movement of ions due to an externally-applied electrostatic field.
The average thermal energy at temperatures within water's liquid range (given by RT ) is sufficiently large to dominate the movement of ions even in the presence of an applied electric field. This means that the ions, together with the water molecules surrounding them, are engaged in a wild dance as they are buffeted about by thermal motions (which include Brownian motion).
If we now apply an external electric field to the solution, the chaotic motion of each ion is supplemented by an occasional jump in the direction dictated by the interaction between the ionic charge and the field. But this is really a surprisingly tiny effect:
It can be shown that in a typical electric field of 1 volt/cm, a given ion will experience only about one field-directed (non-random) jump for every 10 5 random jumps. This translates into an average migration velocity of roughly 10 –7 m sec –1 (10 –4 mm sec –1 ). Given that the radius of the H 2 O molecule is close to 10 –10 m, it follows that about 1000 such jumps are required to advance beyond a single solvent molecule!
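A one-dimensional random-walk simulation (an illustrative sketch, not from the text) makes the point vividly: with a field-directed bias of one jump in 10 5, the expected drift after 100,000 jumps is only about one lattice unit, utterly buried in the ±√N statistical scatter of roughly 300 units:

```python
import random

random.seed(1)                 # reproducible run
steps = 100_000
bias = 1e-5                    # one field-directed jump per 10^5 (from the text)

# 1-D walk: each thermal jump moves the ion +1 or -1 lattice unit;
# the applied field adds only a minuscule probability excess toward +1.
pos = sum(1 if random.random() < 0.5 + bias / 2 else -1 for _ in range(steps))

print(f"net displacement after {steps} jumps: {pos} lattice units")
print(f"expected drift: {steps * bias:.0f} unit; random scatter: ~{int(steps ** 0.5)} units")
```

The net displacement on any single run is dominated by the random scatter, which is exactly why ionic migration is such a tiny perturbation on top of diffusion.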
The ions migrate Independently
All ionic solutions contain at least two kinds of ions (a cation and an anion), but may contain others as well. In the late 1870's, the physicist Friedrich Kohlrausch noticed that the limiting equivalent conductivities of salts that share a common ion exhibit constant differences.
| electrolyte pair | Λ 0 (25°C) | difference | electrolyte pair | Λ 0 (25°C) | difference |
|---|---|---|---|---|---|
| KCl / LiCl | 149.9 / 115.0 | 34.9 | HCl / HNO 3 | 426.2 / 421.3 | 4.9 |
| KNO 3 / LiNO 3 | 145.0 / 110.1 | 34.9 | LiCl / LiNO 3 | 115.0 / 110.1 | 4.9 |
These differences represent the differences in the conductivities of the ions that are not shared between the two salts. The fact that these differences are identical for two pairs of salts such as KCl/LiCl and KNO 3 /LiNO 3 tells us that the mobilities of the non-common ions K + and Li + are not affected by the accompanying anions.
Kohlrausch's law greatly simplifies estimates of Λ 0
This principle is known as Kohlrausch's law of independent migration , which states that in the limit of infinite dilution ,
Each ionic species makes a contribution to the conductivity of the solution that depends only on the nature of that particular ion, and is independent of the other ions present.
Kohlrausch's law can be expressed as
\[\Lambda^0 = \sum \lambda^0_+ + \sum \lambda^0_-\]
This means that we can assign a limiting equivalent conductivity λ 0 to each kind of ion:
| cation | H 3 O + | NH 4 + | K + | Ba 2 + | Ag + | Ca 2 + | Sr 2 + | Mg 2 + | Na + | Li + |
|---|---|---|---|---|---|---|---|---|---|---|
| λ 0 | 349.98 | 73.57 | 73.49 | 63.61 | 61.87 | 59.47 | 59.43 | 53.93 | 50.89 | 38.66 |
| anion | OH – | SO 4 2– | Br – | I – | Cl – | NO 3 – | ClO 3 – | CH 3 COO – | C 2 H 5 COO – | C 3 H 7 COO – |
| λ 0 | 197.60 | 80.71 | 78.41 | 76.86 | 76.30 | 71.80 | 67.29 | 40.83 | 35.79 | 32.57 |
Just as a compact table of thermodynamic data enables us to predict the chemical properties of a very large number of compounds, this compilation of equivalent conductivities of twenty different species yields reliable estimates of the Λ 0 values for five times that number of salts.
We can now estimate weak electrolyte limiting conductivities
One useful application of Kohlrausch's law is to estimate the limiting equivalent conductivities of weak electrolytes which, as we observed above, cannot be found by extrapolation. Thus for acetic acid CH 3 COOH ("HAc"), we combine the λ 0 values for H 3 O + and CH 3 COO – given in the above table:
\[\Lambda^0_{HAc} = \lambda^0_{H^+} + \lambda^0_{Ac^-}\]
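Using the λ 0 values tabulated above, this additivity is easy to check numerically (a quick Python sketch):

```python
# Limiting equivalent conductivities (S cm^2 mol^-1), from the tables above
lam0 = {"H3O+": 349.98, "CH3COO-": 40.83, "K+": 73.49, "Cl-": 76.30}

lam_HAc = lam0["H3O+"] + lam0["CH3COO-"]   # Kohlrausch additivity
lam_KCl = lam0["K+"] + lam0["Cl-"]

print(f"Lambda0(acetic acid) = {lam_HAc:.2f} S cm^2/mol")   # 390.81
print(f"Lambda0(KCl)         = {lam_KCl:.2f} S cm^2/mol")   # 149.79
```

The KCl sum, 149.79, agrees with the directly measured 149.9 quoted earlier, which is the real test of Kohlrausch's law; the acetic acid sum is the quantity that cannot be obtained by extrapolation.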
How fast do ions migrate in solution?
Movement of a migrating ion through the solution is brought about by a force exerted by the applied electric field. This force is proportional to the field strength and to the ionic charge. Calculations of the frictional drag are based on the premise that the ions are spherical (not always true) and the medium is continuous (never true) as opposed to being composed of discrete molecules. Nevertheless, the results generally seem to be realistic enough to be useful.
According to Newton's law, a constant force exerted on a particle will accelerate it, causing it to move faster and faster unless it is restrained by an opposing force. In the case of electrolytic conductance, the opposing force is frictional drag as the ion makes its way through the medium. The magnitude of this force depends on the radius of the ion and its primary hydration shell, and on the viscosity of the solution.
Eventually these two forces come into balance and the ion assumes a constant average velocity which is reflected in the values of λ 0 tabulated in the table above.
The relation between λ 0 and the velocity (known as the ionic mobility μ 0 ) is easily derived, but we will skip the details here, and simply present the results:
Anions are conventionally assigned negative μ 0 values because they move in opposite directions to the cations; the values shown here are absolute values |μ 0 |. Note also that the units are cm/sec per volt/cm, hence the cm 2 term.
| cation | H 3 O + | NH 4 + | K + | Ba 2+ | Ag + | Ca 2+ | Sr 2+ | Mg 2+ | Na + | Li + |
|---|---|---|---|---|---|---|---|---|---|---|
| μ 0 | 0.362 | 0.0762 | 0.0762 | 0.0659 | 0.0642 | 0.0616 | 0.0616 | 0.0550 | 0.0520 | 0.0388 |
| anion | OH – | SO 4 2– | Br – | I – | Cl – | NO 3 – | ClO 3 – | CH 3 COO – | C 2 H 5 COO – | C 3 H 7 COO – |
| μ 0 | 0.2050 | 0.0827 | 0.0812 | 0.0796 | 0.0791 | 0.0740 | 0.0705 | 0.0461 | 0.0424 | 0.0411 |
As with the limiting conductivities, the trends in the mobilities can be roughly correlated with the charge and size of the ion. (Recall that negative ions tend to be larger than positive ions.)
Cations and anions carry different fractions of the current
In electrolytic conduction, ions having different charge signs move in opposite directions. Conductivity measurements give only the sum of the positive and negative ionic conductivities according to Kohlrausch's law, but they do not reveal how much of the charge is carried by each kind of ion. Unless their mobilities are the same, cations and anions do not contribute equally to the total electric current flowing through the cell.
Recall that an electric current is defined as a flow of electric charges; the current in amperes is the number of coulombs of charge moving through the cell per second. Because ionic solutions contain equal quantities of positive and negative charges, it follows that the current passing through the cell consists of positive charges moving toward the cathode, and negative charges moving toward the anode. But owing to mobility differences, cations and anions do not usually carry identical fractions of the charge.
The fraction of charge carried by a given kind of ion is known as the transference number \(t_{\pm}\). (Transference numbers are often referred to as transport numbers; either term is acceptable in the context of electrochemistry.) For a solution of a simple binary salt,
\[ t_+ = \dfrac{\lambda_+}{\lambda_+ + \lambda_-}\]

and

\[ t_- = \dfrac{\lambda_-}{\lambda_+ + \lambda_-}\]
By definition,
\[t_+ + t_– = 1.\]
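As a quick numerical sketch (using the λ 0 values from the table above), the transference numbers for HCl and KCl work out as follows:

```python
lam0 = {"H3O+": 349.98, "K+": 73.49, "Cl-": 76.30}  # S cm^2 mol^-1, from the table

def transference(lam_plus, lam_minus):
    """Cation and anion transference numbers for a 1:1 electrolyte."""
    t_plus = lam_plus / (lam_plus + lam_minus)
    return t_plus, 1.0 - t_plus

t_plus, t_minus = transference(lam0["H3O+"], lam0["Cl-"])   # HCl
print(f"HCl: t+ = {t_plus:.3f}, t- = {t_minus:.3f}")        # t+ ~ 0.82
t_plus, t_minus = transference(lam0["K+"], lam0["Cl-"])     # KCl
print(f"KCl: t+ = {t_plus:.3f}, t- = {t_minus:.3f}")        # nearly equal
```

The very large t+ of HCl reflects the anomalous mobility of the hydrogen ion, discussed further below; in KCl the two ions carry almost equal shares of the current.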
To help you visualize the effects of non-identical transference numbers, consider a solution of M + X – in which t + = 0.75 and t – = 0.25. Let the cell be divided into three [imaginary] sections as we examine the distribution of cations and anions at three different stages of current flow.
- Initially, the concentrations of M + and X – are the same in all parts of the cell.
- After 4 faradays of charge have passed through the cell, 3 eq of cations and 1 eq of anions have crossed any given plane parallel to the electrodes. Equal numbers of cations and anions are discharged at the cathode and anode, respectively.
- In the absence of diffusion, the ratio of the ionic concentrations near the electrodes equals the ratio of their transport numbers.
Transference numbers can be determined experimentally by observing the movement of the boundary between electrolyte solutions having an ion in common, such as LiCl and KCl:
In this example, K + has a higher transference number than Li + , but don't try to understand why the KCl boundary moves to the left; the details of how this works are rather complicated and not important for the purposes of this course.
H + and OH – ions "migrate" without moving, and rapidly!
You may have noticed from the tables above that the hydrogen and hydroxide ions have extraordinarily high equivalent conductivities and mobilities. This is a consequence of the fact that unlike other ions, which need to bump and nudge their way through the network of hydrogen-bonded water molecules, these ions are participants in this network. By simply changing the H 2 O partners they hydrogen-bond with, they can migrate "virtually". In effect, what migrates is the pattern of hydrogen bonds, rather than the physical masses of the ions themselves.
This process is known as the Grotthuss mechanism . The shifting of the hydrogen bonds occurs when the rapid thermal motions of adjacent molecules brings a particular pair into a more favorable configuration for hydrogen bonding within the local molecular network. Bear in mind that what we refer to as "hydrogen ions" H + (aq) are really hydronium ions H 3 O + . It has been proposed that the larger aggregates H 5 O 2 + and H 9 O 4 + are important intermediates in this process.
It is remarkable that this virtual migration process was proposed by Theodor Grotthuss in 1805 — just five years after the discovery of electrolysis, and he didn't even know the correct formula for water; he thought its structure was H–O–O–H.
These two diagrams will help you visualize the process. The successive downward rows show the first few "hops" made by the virtual H + and OH – ions as they move in opposite directions toward the appropriate electrodes. (Of course, the same mechanism is operative in the absence of an external electric field, in which case all of the hops will be in random directions.)
Covalent bonds are represented by black lines, and hydrogen bonds by gray lines.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.10%3A_Ions_and_Electrolytes/8.10.9D%3A_8.10.9D%3A_Ionic_migration",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "8.10.9D: Ionic migration",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.10%3A_Ions_and_Electrolytes/8.10.9E%3A_8.10.9E%3A_Some_applications_of_electrolytic_conduction | 8.10.9E: Some applications of electrolytic conduction
From the chemist's standpoint, the most important examples of conduction are in connection with electrochemical cells, electrolysis and batteries.
Determination of equilibrium constants
Owing to their high sensitivity, conductivity measurements are well adapted to the measurement of equilibrium constants for processes that involve very small ion concentrations. These include
- K s values for sparingly soluble solids
- Autoprotolysis constants for solvents (such as K w )
- Acid dissociation constants for weak acids
Because the ion concentrations are so low, their values can be taken as activities, and the limiting equivalent conductivities Λ 0 can be used directly.
Ion product of water
The very small conductivity of pure water makes it rather difficult to obtain a precise value for K w ; better values are obtained by measuring the potential of an appropriate galvanic cell. But the principle illustrated here might be applicable to other autoprotolytic solvents such as H 2 SO 4 .
Use the appropriate limiting molar ionic conductivities to estimate the autoprotolysis constant K w of water at 25° C. Use the reaction equation
2 H 2 O → H 3 O + + OH – .
Solution: The data we need are λ H + = 349.6 and λ OH– = 199.1 S cm 2 mol –1 .
The conductivity of water is κ = [H + ] λ H + + [OH – ] λ OH– , whose units work out to (mol cm –3 ) (S cm 2 mol –1 ). In order to express the ionic concentrations in molarities, we multiply the cm –3 term by (1000 cm 3 / 1 L), yielding

1000 κ = [H + ] λ H + + [OH – ] λ OH– with units S cm 2 L –1 .
Recalling that in pure water, [H + ] = [OH – ] = K w ½ , we obtain
1000 κ = ( K w ½ )(λ H + + λ OH– ) = ( K w ½ )(548.7 S cm 2 mol –1 ).
Solving for K w :
K w = (1000 κ / 548.7 S cm 2 mol –1 ) 2
Substituting Kohlrausch's water conductivity value of 0.043 × 10 –6 S cm –1 ) for κ gives
K w = (1000 × 0.043 × 10 –6 S cm –1 / 548.7 S cm 2 mol –1 ) 2 = 0.614 × 10 –14 mol 2 L –2 .
The accepted value for the autoprotolysis constant of water at 25° C is K w = 1.008 × 10 –14 mol 2 L –2 .
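The arithmetic of this worked example can be verified with a few lines of Python:

```python
# Reproducing the Kw estimate from the conductivity of pure water
lam_H, lam_OH = 349.6, 199.1       # S cm^2 mol^-1, the values used in the example
kappa = 0.043e-6                   # S cm^-1, Kohlrausch's conductivity of pure water

# Kw^(1/2) = 1000*kappa / (lam_H + lam_OH), so:
Kw = (1000 * kappa / (lam_H + lam_OH)) ** 2
print(f"Kw ~ {Kw:.3e} mol^2 L^-2")   # ~6.14e-15, i.e. 0.614e-14
```

The result falls somewhat below the accepted 1.008 × 10 –14 , as noted above, mainly because of the difficulty of measuring the conductivity of truly pure water.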
A saturated solution of silver chloride AgCl has a conductance 2.28 x 10 –6 S cm –1 at 25°C. The conductance of the water used to make up this solution is 0.116 x 10 –6 S cm –1 . The limiting ionic conductivities of the two ions are λ Ag + = 61.9 and λ Cl– = 76.3 S cm 2 mol –1 . Use this information to estimate the molar solubility of AgCl.
The limiting molar conductivity of the solution is
Λ o = λ Ag + + λ Cl– = 138.2 S cm 2 mol –1 .
The conductance due to the dissociated salt alone is the difference

(2.28 – 0.116) × 10 –6 = 2.16 x 10 –6 S cm –1 .

Substituting into the expression Λ = 1000κ/ c and solving for c yields

c = 1000κ/Λ = (1000 × 2.16 x 10 –6 ) / 138.2 = 1.56 x 10 –5 mol L –1 .

Conductometric titration
A chemical reaction in which there is a significant change in the number or mobilities of ionic species can be followed by monitoring the change in conductance. Many acid-base reactions fall into this category. In conductometric titration , conductometry is employed to detect the end-point of a titration.
Consider, for example, the titration of the strong acid HCl by the strong base NaOH. In ionic terms, the process can be represented as
H + + Cl – + Na + + OH – → H 2 O + Na + + Cl –
At the end point, only two ionic species remain, compared to the four during the initial stages of the titration, so the conductivity will be at a minimum. Beyond the end point, continued addition of base causes the conductivity to rise again. The very large mobilities of the H + and OH – ions cause the conductivity to rise very sharply on either side of the end point, making it quite easy to locate.
The plot on the left depicts the conductivities due to the individual ionic species. But of course the conductivity we measure is just the sum of all of these (Kohlrausch's law of independent migration), so the plot on the right corresponds to what you actually see when the titration is carried out. The factor ( V a + V b )/ V a compensates for the dilution of the solution as more base is added.
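A minimal numerical sketch of such a titration curve (the concentrations and volumes are illustrative, and dilution is handled directly rather than through a correction factor) reproduces the V-shaped conductance minimum at the equivalence point:

```python
# Limiting ionic conductivities (S cm^2 mol^-1) from the tables above
lam = {"H+": 349.98, "OH-": 197.60, "Na+": 50.89, "Cl-": 76.30}

c_acid, V_a = 0.01, 100.0    # 0.01 M HCl, 100 mL initial volume (illustrative)
c_base = 0.01                # 0.01 M NaOH titrant

curve = {}
for V_b in (0, 25, 50, 75, 100, 125, 150):   # mL of base added
    n_acid = c_acid * V_a    # mmol of HCl originally present
    n_base = c_base * V_b    # mmol of NaOH added
    V = V_a + V_b            # total volume, mL
    conc = {                 # ionic concentrations, mmol/mL = mol/L
        "H+":  max(n_acid - n_base, 0) / V,
        "OH-": max(n_base - n_acid, 0) / V,
        "Na+": n_base / V,
        "Cl-": n_acid / V,
    }
    curve[V_b] = sum(conc[ion] * lam[ion] for ion in lam)
    print(f"V_b = {V_b:5.1f} mL   conductivity ~ {curve[V_b]:6.3f} (arbitrary units)")
```

The computed conductivity falls steeply as the highly mobile H + is consumed, passes through a minimum at the equivalence point (100 mL here), and rises again as excess OH – accumulates.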
For most ordinary acid-base titrations, conductometry rarely offers any special advantage over regular volumetric analysis or potentiometry. But in some special cases such as those illustrated here, it is the only method capable of yielding useful results.
Ground- and soil conduction
Most people think of electrolytic conduction as something that takes place mainly in batteries, electrochemical plants and in laboratories, but by far the biggest and most important electrolytic system is the earth itself, or at least the thin veneer of soil sediments that coat much of its surface.
Soils are composed of sand and humic solids within which are embedded pore spaces containing gases (air and microbial respiration products) and water. This water, both that which fills the pores as well as that which is adsorbed to soil solids, contains numerous dissolved ions on which soil conductivity mainly depends. These ions include exchangeable cations present on the surfaces of the clay components of soils. Although these electrolyte channels are small and tortuously connected, they are present in huge numbers and offer an abundance of parallel conduction paths, so the ground itself acts as a surprisingly efficient conductor.
There is no better illustration of this than the use of a ground path to transmit a current of up to 1800 amperes along the 1360-km path of the Pacific DC Intertie that extends from the Celilo Converter Station (view below) in northern Oregon to Los Angeles. This system normally employs a two-conductor power line that operates at ±500,000 volts DC, but when one of these conductors is out of service, the ground path acts as a substitute conductor. This alternative path is said to have a lower resistance than the normal metallic conductor!
From an electrochemical standpoint, the most interesting aspect of this system is the manner in which the current flows into or out of the ground. The grounding system at Celilo is composed of over 1000 cast-iron electrodes buried in a circular 3.3-km trench of petroleum coke which acts as the working electrode. At the Los Angeles end, grounding is achieved by means of 24 silicon-iron alloy electrodes submerged in the nearby Pacific Ocean.
On a much smaller scale, single-wire earth return systems are often employed to supply regular single-phase ac power to remote regions, or as the return path for high-voltage DC submarine cables. For direct current submarine systems, a copper cable laid on the bottom is suitable for the cathode. The anodes are normally graphite rods surrounded by coke.
You may have noticed that the pole-mounted step-down transformers used to distribute single-phase ac power in residential neighborhoods are connected to the high-voltage (10 Kv or so) primary line by only a single wire. The return circuit back to the local substation is completed by a ground connection which runs down the pole to a buried electrode.
The Celilo Converter Station is located at The Dalles, Oregon
Other applications of ground conductivity
- Ground-wave radio propagation
- During the daytime, radio transmission at frequencies below about 5 MHz (such as in the old standard AM broadcast band) depends entirely on so-called ground waves that follow the curvature of the earth. This occurs because the portion of the vertically-polarized wavefronts in contact with the earth induces an electrolytic current in the ground, causing their lower portions to travel more slowly and bending their pathways in toward the earth.
- Agricultural and environmental soils assessment
- Conductivity has long been used as a tool to assess the salinity of agricultural soils — a serious problem in irrigated regions, where evaporation of irrigation water over the years can raise salinity to levels that can reduce crop yields. Because other soil characteristics (moisture content, density, and mineralogy, fertilization) also play important roles, some care is required to correctly interpret measurements. Measuring devices that can be drawn behind a tractor and are equipped with GPS receivers allow the production of conductivity maps for entire fields.
- Archaeological exploration
- Buried artifacts such as stone walls and foundations and large metallic objects can be located by a series of conductivity measurements at archaeological sites.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.10%3A_Ions_and_Electrolytes/8.10.9E%3A_8.10.9E%3A_Some_applications_of_electrolytic_conduction",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "8.10.9E: Some applications of electrolytic conduction",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/09%3A_Chemical_Bonding_and_Molecular_Structure | 9: Chemical Bonding and Molecular Structure
Why do some atoms join together to form molecules, but others do not? Why is the CO 2 molecule linear whereas H 2 O is bent? How can we tell? How does hemoglobin carry oxygen through our bloodstream? There is no topic more fundamental to Chemistry than the nature of the chemical bond, and the introduction you find here will provide you with an overview of the fundamentals and a basis for further study.
- 9.2: Molecules - Properties of Bonded Atoms
- The concept of chemical bonding lies at the very core of Chemistry; it is what enables about one hundred elements to form the more than fifty million known chemical substances that make up our physical world. Exactly what is a chemical bond? And what observable properties can we use to distinguish one kind of bond from another? This is the first of ten lessons that will help familiarize you with the fundamental concepts of this very broad subject.
- 9.3: Models of Chemical Bonding
- Why do atoms bind together— sometimes? The answer to this question would ideally be a simple, easily understood theory that would not only explain why atoms bind together to form molecules, but would also predict the three-dimensional structures of the resulting compounds as well as the energies and other properties of the bonds themselves. Unfortunately, no one theory exists that accomplishes these goals in a satisfactory way for all of the many categories of compounds that are known.
- 9.4: Polar Covalence
- The electrons constituting a chemical bond are simultaneously attracted by the electrostatic fields of the nuclei of the two bonded atoms. In a homonuclear molecule such as O2 the bonding electrons will be shared equally by the two atoms. In general, however, differences in the sizes and nuclear charges of the atoms will cause one of them to exert a greater attraction on the bonding pair, causing the electron cloud to be displaced toward the more strongly-attracting atom.
- 9.5: Molecular Geometry
- The Lewis electron-dot structures you have learned to draw have no geometrical significance other than depicting the order in which the various atoms are connected to one another. Nevertheless, a slight extension of the simple shared-electron pair concept is capable of rationalizing and predicting the geometry of the bonds around a given atom in a wide variety of situations.
- 9.6: The Hybrid Orbital Model
- As useful and appealing as the concept of the shared-electron pair bond is, it raises a somewhat troubling question that we must sooner or later face: what is the nature of the orbitals in which the shared electrons are contained? Up until now, we have been tacitly assuming that each valence electron occupies the same kind of atomic orbital as it did in the isolated atom. As we shall see below, his assumption very quickly leads us into difficulties.
- 9.7: The Hybrid Orbital Model II
- This is a continuation of the previous page which introduced the hybrid orbital model and illustrated its use in explaining how valence electrons from atomic orbitals of s and p types can combine into equivalent shared-electron pairs known as hybrid orbitals. In this lesson, we extend this idea to compounds containing double and triple bonds, and to those in which atomic d electrons are involved (and which do not follow the octet rule.)
- 9.8: Molecular Orbital Theory
- The molecular orbital model is by far the most productive of the various models of chemical bonding, and serves as the basis for most quantiative calculations, including those that lead to many of the computer-generated images that you have seen elsewhere in these units. In its full development, molecular orbital theory involves a lot of complicated mathematics, but the fundamental ideas behind it are quite easily understood, and this is all we will try to accomplish in this lesson.
- 9.9: Bonding in Coordination Complexes
- Coordination complexes have been known and studied since the mid-nineteenth century. and their structures had been mostly worked out by 1900. Although the hybrid orbital model was able to explain how neutral molecules such as water or ammonia could bond to a transition metal ion, it failed to explain many of the special properties of these complexes. Ligand field theory was developed that is able to organize and explain most of the observed properties of these compounds.
- 9.10: Bonding in Metals
- The simplest picture of metals, which regards them as a lattice of positive ions immersed in a "sea of electrons" which can freely migrate throughout the solid. In effect the electropositive nature of the metallic atoms allows their valence electrons to exist as a mobile fluid which can be displaced by an applied electric field, hence giving rise to their high electrical conductivities.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/09%3A_Chemical_Bonding_and_Molecular_Structure",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "9: Chemical Bonding and Molecular Structure",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/09%3A_Chemical_Bonding_and_Molecular_Structure/9.01%3A_Three_Views_of_Chemical_Bonding | 9.1: Three Views of Chemical Bonding
Chemical bonds form when electrons can be simultaneously close to two or more nuclei, but beyond this, there is no simple, easily understood theory that would not only explain why atoms bind together to form molecules, but would also predict the three-dimensional structures of the resulting compounds as well as the energies and other properties of the bonds themselves. Unfortunately, no one theory exists that accomplishes these goals in a satisfactory way for all of the many categories of compounds that are known. Moreover, it seems likely that if such a theory does ever come into being, it will be far from simple.
When we are faced with a scientific problem of this complexity, experience has shown that it is often more useful to concentrate instead on developing models . A scientific model is something like a theory in that it should be able to explain observed phenomena and to make useful predictions. But whereas a theory can be discredited by a single contradictory case, a model can be useful even if it does not encompass all instances of the phenomena it attempts to explain. We do not even require that a model be a credible representation of reality; all we ask is that it be able to explain the behavior of those cases to which it is applicable in terms that are consistent with the model itself. An example of a model that you may already know about is the kinetic molecular theory of gases. Despite its name, this is really a model (at least at the level that beginning students use it) because it does not even try to explain the observed behavior of real gases. Nevertheless, it serves as a tool for developing our understanding of gases, and as a starting point for more elaborate treatments.
Given the extraordinary variety of ways in which atoms combine into aggregates, it should come as no surprise that a number of useful bonding models have been developed. Most of them apply only to certain classes of compounds, or attempt to explain only a restricted range of phenomena. In this section we will provide brief descriptions of some of the bonding models; the more important of these will be treated in much more detail in later parts of this chapter.
Classical models
By classical, we mean models that do not take into account the quantum behavior of small particles, notably the electron. These models generally assume that electrons and ions behave as point charges which attract and repel according to the laws of electrostatics. Although this completely ignores what has been learned about the nature of the electron since the development of quantum theory in the 1920's, these classical models have not only proven extremely useful, but the major ones also serve as the basis for the chemist's general classification of compounds into "covalent" and "ionic" categories.
Electrostatic (Ionic Bonding)
Ever since the discovery early in the 19th century that solutions of salts and other electrolytes conduct electric current, there has been general agreement that the forces that hold atoms together must be electrical in nature. Electrolytic solutions contain ions having opposite electrical charges, opposite charges attract, so perhaps the substances from which these ions come consist of positive and negatively charged atoms held together by electrostatic attraction.
It turns out that this is not true generally, but a model built on this assumption does a fairly good job of explaining a rather small but important class of compounds that are called ionic solids. The best-known example of such a compound is sodium chloride, which consists of two interpenetrating lattices of Na + and Cl – ions arranged in such a way that every ion of one type is surrounded (in three-dimensional space) by six ions of opposite charge.
The main limitation of this model is that it applies really well only to the small class of solids composed of Group 1 and 2 elements with highly electronegative elements such as the halogens. Although compounds such as CuCl 2 dissociate into ions when they dissolve in water, the fundamental units making up the solid are more like polymeric chains of covalently-bound CuCl 2 molecules that have little ionic character.
Shared-Electrons (Covalent Bonding)
This model originated with the theory developed by G.N. Lewis in 1916, and it remains the most widely-used model of chemical bonding. The essential elements of this model can best be understood by examining the simplest possible molecule. This is the hydrogen molecule ion H 2 + , which consists of two nuclei and one electron. First, however, think what would happen if we tried to make the even simpler molecule H 2 2+ . Since this would consist only of two protons whose electrostatic charges would repel each other at all distances, it is clear that such a molecule cannot exist; something more than two nuclei is required for bonding to occur.
In the hydrogen molecule ion H 2 + we have a third particle, an electron. The effect of this electron will depend on its location with respect to the two nuclei. If the electron is in the space between the two nuclei, it will attract both protons toward itself, and thus toward each other. If the total attraction energy exceeds the internuclear repulsion, there will be a net bonding effect and the molecule will be stable. If, on the other hand, the electron is off to one side, it will attract both nuclei, but it will attract the closer one much more strongly, owing to the inverse-square nature of Coulomb's law. As a consequence, the electron will now help the electrostatic repulsion to push the two nuclei apart.
We see, then, that the electron is an essential component of a chemical bond, but that it must be in the right place: between the two nuclei. Coulomb's law can be used to calculate the forces experienced by the two nuclei for various positions of the electron. This allows us to define two regions of space about the nuclei, as shown in the figure. One region, the binding region, depicts locations at which the electron exerts a net binding effect on the two nuclei. Outside of this, in the antibinding region, the electron will actually work against binding.
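The force bookkeeping behind the binding and antibinding regions can be sketched numerically. This is a deliberately minimal, illustrative model (not from the text's own figure): units are chosen so that k·e² = 1, the two protons sit at ±0.5 on the x-axis, and the electron is confined to that axis. The electron binds the nuclei only when it pushes each proton toward the other.

```python
# Net axial Coulomb force on each proton of H2+ for a given electron
# position, in units where k*e^2 = 1 (all numbers are illustrative).

def net_forces(x_e, d=0.5):
    """Return (F_left, F_right): net force on the protons at -d and +d
    from their mutual repulsion plus attraction to an electron at x_e.
    A positive force points in the +x direction."""
    def coulomb(x_from, x_to, q1q2):
        # Force on the charge at x_to exerted by the charge at x_from:
        # positive q1q2 repels (pushes along +r), negative attracts.
        r = x_to - x_from
        return q1q2 * (1 if r > 0 else -1) / r**2
    f_left  = coulomb(+d, -d, +1) + coulomb(x_e, -d, -1)
    f_right = coulomb(-d, +d, +1) + coulomb(x_e, +d, -1)
    return f_left, f_right

def is_binding(x_e, d=0.5):
    """The electron binds the nuclei if it pushes each proton toward the other."""
    f_left, f_right = net_forces(x_e, d)
    return f_left > 0 and f_right < 0

print(is_binding(0.0))   # True: electron midway between the nuclei binds them
print(is_binding(1.5))   # False: electron off to one side pulls them apart
```

With the electron at the midpoint, each proton feels an attraction toward the center (magnitude 4) that overwhelms the internuclear repulsion (magnitude 1); with the electron off to one side, the nearer proton is pulled away far more strongly than the farther one, so the separation grows.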
This simple picture illustrates the number one rule of chemical bonding: chemical bonds form when electrons can be simultaneously close to two or more nuclei. It should be pointed out that this principle applies also to the ionic model; as will be explained later in this chapter, the electron that is "lost" by a positive ion ends up being closer to more nuclei (including the one from whose electron cloud it came) in the compound.
- The polar covalent model : A purely covalent bond can only be guaranteed when the electronegativities (electron-attracting powers) of the two atoms are identical. When atoms having different electronegativities are joined, the electrons shared between them will be displaced toward the more electronegative atom, conferring a polarity on the bond which can be described in terms of percent ionic character. The polar covalent model is thus a generalization of covalent bonding to include a very wide range of behavior.
- The Coulombic model : This is an extension of the ionic model to compounds that are ordinarily considered to be non-ionic. Combined hydrogen is always considered to exist as the hydride ion H – , so that methane can be treated as if it were C 4 + H –4 . This is not as bizarre as it might seem at first if you recall that the proton has negligible size, so that it is essentially embedded in an electron pair when it is joined to another atom in a covalent bond. This model, which is not as well known as it deserves to be, has considerable predictive power, both as to bond energies and structures.
- The VSEPR model : The "valence shell electron pair repulsion" model is not so much a model of chemical bonding as a scheme for explaining the shapes of molecules. It is based on the quantum mechanical view that bonds represent electron clouds: physical regions of negative electric charge that repel each other and thus try to stay as far apart as possible.
Quantum Models
Quantum models of bonding take into account the fact that a particle as light as the electron cannot really be said to be in any single location. The best we can do is define a region of space in which the probability of finding the electron has some arbitrary value which will always be less than unity. The shape of this volume of space is called an orbital and is defined by a mathematical function that relates the probability to the (x,y,z) coordinates of the molecule. Like other models of bonding, the quantum models attempt to show how more electrons can be simultaneously close to more nuclei. Instead of doing so through purely geometrical arguments, they attempt this by predicting the nature of the orbitals which the valence electrons occupy in joined atoms.
- The hybrid orbital model : This was developed by Linus Pauling in 1931 and was the first quantum-based model of bonding. It is based on the premise that if the atomic s, p, and d orbitals occupied by the valence electrons of adjacent atoms are combined in a suitable way, the hybrid orbitals that result will have the character and directional properties that are consistent with the bonding pattern in the molecule. The rules for bringing about these combinations turn out to be remarkably simple, so once they were worked out it became possible to use this model to predict the bonding behavior in a wide variety of molecules. The hybrid orbital model is most usefully applied to the p-block elements of the first two rows of the periodic table, and is especially important in organic chemistry.
- The molecular orbital model : This model takes a more fundamental approach by regarding a molecule as a collection of valence electrons and positive cores. Just as the nature of atomic orbitals derives from the spherical symmetry of the atom, so will the properties of these new molecular orbitals be controlled by the interaction of the valence electrons with the multiple positive centers of these atomic cores. These new orbitals, unlike those of the hybrid model, are delocalized ; that is, they do not "belong" to any one atom but extend over the entire region of space that encompasses the bonded atoms. The available (valence) electrons then fill these orbitals from the lowest to the highest, very much as in the Aufbau principle that you learned for working out atomic electron configurations. For small molecules (which are the only ones we will consider here), there are simple rules that govern the way that atomic orbitals transform themselves into molecular orbitals as the separate atoms are brought together. The real power of molecular orbital theory, however, comes from its mathematical formalism, which lends itself to detailed predictions of bond energies and other properties.
- The electron-tunneling model : A common theme uniting all of the models we have discussed is that bonding depends on the fall in potential energy that occurs when opposite charges are brought together. In the case of covalent bonds, the shared electron pair acts as a kind of "electron glue" between the joined nuclei. In 1962, however, it was shown that this assumption is not strictly correct, and that instead of being concentrated in the space between the nuclei, the electron orbitals become even more concentrated around the bonded nuclei. At the same time however, they are free to "move" between the two nuclei by a process known as tunneling . This refers to a well-known quantum mechanical effect that allows electrons (or other particles small enough to exhibit wavelike properties) to pass ("tunnel") through a barrier separating two closely adjacent regions of low potential energy. One result of this is that the effective volume of space available to the electron is increased, and according to the uncertainty principle this will reduce the kinetic energy of the electron.
The electron-tunneling model
According to this model, the bonding electrons act as a kind of fluid that concentrates in the region of each nucleus (lowering the potential energy) and at the same time is able to freely flow between them (reducing the kinetic energy). A summary of the concept, showing its application to a simple molecule, is shown on the next page. Despite its conceptual simplicity and full acknowledgment of the laws of quantum mechanics, this model is not widely known and is rarely taught.
Chemical bonding occurs when one or more electrons can be simultaneously close to two nuclei. But how can this be arranged? The conventional picture of the shared electron bond places the bonding electrons in the region between the two nuclei. This makes a nice picture, but it is not consistent with the principle that opposite charges attract. This would imply that the electrons would be "happiest" (at the lowest potential energy) when they are very close to a nucleus, not half a bond-length away from two of them!
This plot shows how the potential energy of an electron in the hydrogen atom varies with its distance from the nucleus. Notice how the energy falls without limit as the electron approaches the nucleus, represented here as a proton, \(H^+\). If potential energy were the only consideration, the electron would fall right into the nucleus where its potential energy would be minus infinity.
When an electron is added to the proton to make a neutral hydrogen atom, it tries to get as close to the nucleus as possible. The Heisenberg uncertainty principle requires the total energy of the electron to increase as the volume of space it occupies diminishes. As the electron gets closer to the nucleus, the nuclear charge confines the electron to such a tiny volume of space that its energy rises, allowing it to "float" slightly away from the nucleus without ever falling into it.
The shaded region above shows the range of energies and distances from the nucleus the electron is able to assume within the 1 s orbital. The electron can thus be regarded as a fluid that occupies a vessel whose walls conform to the red potential energy curves shown above. Note that as the potential energy falls, the kinetic energy increases, but only half as fast (this is called the virial theorem ). Thus close to the nucleus, the kinetic energy is large and so is the electron's effective velocity. The top of the shaded area defines the work required to raise its potential energy to zero, thus removing it from the atom; this corresponds, of course, to the ionization energy.
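The virial-theorem bookkeeping described here is easy to check numerically. Using standard tabulated physical constants (not values from the text), the hydrogen electron at the Bohr radius has a potential energy of about -27.2 eV; the virial theorem then gives a kinetic energy of about +13.6 eV, so the total energy is about -13.6 eV, the negative of the ionization energy.

```python
# Verify KE = -PE/2 (virial theorem) for hydrogen at the Bohr radius,
# using standard physical constants.

K_COULOMB = 8.9875517923e9    # Coulomb constant, N m^2 C^-2
E_CHARGE  = 1.602176634e-19   # elementary charge, C
A0        = 5.29177210903e-11 # Bohr radius, m

pe = -K_COULOMB * E_CHARGE**2 / A0   # potential energy, J (about -27.2 eV)
ke = -pe / 2                         # kinetic energy from the virial theorem
total_eV = (pe + ke) / E_CHARGE      # total energy in eV

print(round(total_eV, 2))            # about -13.61: minus the ionization energy
```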
The Tunneling Effect
A quantum particle can be described by a waveform which is the plot of a mathematical function related to the probability of finding the particle at a given location at any time. If the particle is confined to a box, it turns out that the wave does not fall to zero at the walls of the box, but has a finite probability of being found outside it. This means that a quantum particle is able to penetrate, or "tunnel through" its confining boundaries. This remarkable property is called the tunnel effect .
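The strength of the tunnel effect can be sketched with the standard rectangular-barrier transmission estimate T ≈ exp(-2κL), where κ = sqrt(2m(V-E))/ħ. The 1 eV barrier height and 1 Å width used below are illustrative assumptions, not values from the text; the comparison with a proton shows why tunneling matters only for very light particles.

```python
import math

# Rough transmission probability through a rectangular barrier of height
# (V - E) above the particle's energy and width L. Barrier numbers below
# are illustrative assumptions.

HBAR = 1.054571817e-34    # reduced Planck constant, J s
M_E  = 9.1093837015e-31   # electron mass, kg
EV   = 1.602176634e-19    # joules per eV

def transmission(mass_kg, barrier_eV, width_m):
    kappa = math.sqrt(2 * mass_kg * barrier_eV * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# A 1 eV barrier 1 Angstrom wide: the electron tunnels readily...
print(transmission(M_E, 1.0, 1e-10))         # about 0.36
# ...but a proton (about 1836 times heavier) essentially cannot:
print(transmission(1836 * M_E, 1.0, 1e-10))  # about 1e-19
```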
In terms of the electron fluid model introduced above, the fluid is able to "leak out" of the atom if another low-energy location can be found nearby.
Electron tunneling in the simplest molecule
Suppose we now bring a bare proton up close to a hydrogen atom. Each nucleus has its own potential well, but only that of the hydrogen atom is filled, as indicated by the shading in the leftmost potential well.
But the electron fluid is able to tunnel through the potential energy barrier separating the two wells; like any liquid, it will seek a common level in the two sides of the container as shown below. The electron is now "simultaneously close to two nuclei" while never being in between them. Bear in mind that this would be physically impossible for a real liquid composed of real molecules; this is purely a quantum effect that is restricted to a low-mass particle such as the electron.
Because the same amount of electron fluid is now shared between the two wells, its level in both is lower. The difference between what it is now and what is was before corresponds to the bond energy of the hydrogen molecule ion.
The dihydrogen molecule
Now let's make a molecule of dihydrogen . We start with two hydrogen atoms, each with one electron. But there is a problem here: both potential energy wells are already filled with electron fluid; there is no room for any more without pushing the energy way up.
But quantum theory again comes to the rescue! If the two electrons have opposite spins, the two fluids are able to interpenetrate each other, very much as two gases are able to occupy the same container. This is depicted by the double shading in the diagram below.
When the two hydrogen atoms are within tunneling distance, half of the electron fluid (really the probability of finding the electron) from each well flows into the other well. Because the two fluids are able to interpenetrate, the level is not much different from what it was in the H 2 + ion, but the greater density of the electron-fluid between the two nuclei makes H 2 a strongly bound molecule.
So why does dihelium not exist?
If we tried to join two helium atoms in this way, we would be in trouble. The electron well of He already contains two electrons of opposite spin. There is no room for more electron fluid (without raising the energy), and thus no way the electrons in either He atom can be simultaneously close to two nuclei.
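This electron-counting argument is the same one that molecular orbital theory formalizes as bond order. A minimal sketch for molecules built from two 1 s atomic orbitals (which combine into one bonding and one antibonding level, each holding at most two electrons of opposite spin):

```python
# Bond order = (bonding electrons - antibonding electrons) / 2 for a
# diatomic built from two 1s orbitals.

def bond_order(total_electrons):
    bonding = min(total_electrons, 2)          # sigma(1s) fills first
    antibonding = max(total_electrons - 2, 0)  # sigma*(1s) takes the overflow
    return (bonding - antibonding) / 2

print(bond_order(1))  # H2+ : 0.5 -> weakly bound
print(bond_order(2))  # H2  : 1.0 -> strongly bound
print(bond_order(4))  # He2 : 0.0 -> no net bond; dihelium does not form
```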
Ionic Bonding
Even before G.N. Lewis developed his theory of the shared electron pair bond, it was believed that bonding in many solid salts could be satisfactorily explained on the basis of simple electrostatic forces between the positive and negative ions which are assumed to be the basic molecular units of these compounds. Lewis himself continued to recognize this distinction, which has continued to be a part of the tradition of chemistry; the shared electron pair bond is known as the covalent bond, while the other type is the ionic or electrovalent bond.
The covalent bond is formed when two atoms are able to share electrons:
whereas the ionic bond is formed when the "sharing" is so unequal that an electron from atom A is completely lost to atom B, resulting in a pair of ions:
The two extremes of electron sharing represented by the covalent and ionic models appear to be generally consistent with the observed properties of molecular and ionic solids and liquids. But does this mean that there are really two kinds of chemical bonds, ionic and covalent?
Bonding in ionic solids
According to the ionic electrostatic model, solids such as NaCl consist of positive and negative ions arranged in a crystal lattice. Each ion is attracted to neighboring ions of opposite charge, and is repelled by ions of like charge; this combination of attractions and repulsions, acting in all directions, causes the ion to be tightly fixed in its own location in the crystal lattice.
Since electrostatic forces are nondirectional, the structure of an ionic solid is determined purely by geometry: two kinds of ions, each with its own radius, will fall into whatever repeating pattern will achieve the lowest possible potential energy. Surprisingly, there are only a small number of possible structures.
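One traditional way to make this "determined purely by geometry" idea quantitative is the radius-ratio rule of thumb, which is not developed in the text above: the ratio r(cation)/r(anion) suggests how many anions can pack around each cation. The thresholds and the ionic radii (in Å) used below are standard textbook values, assumed here for illustration.

```python
# Predict a cation's coordination number from the radius-ratio rule.
# Thresholds are the classic hard-sphere packing limits.

def coordination_number(r_cation, r_anion):
    ratio = r_cation / r_anion
    if ratio >= 0.732:
        return 8   # cubic (CsCl-type) coordination
    if ratio >= 0.414:
        return 6   # octahedral (rock-salt-type) coordination
    if ratio >= 0.225:
        return 4   # tetrahedral (zinc-blende-type) coordination
    return 3       # trigonal coordination

print(coordination_number(1.02, 1.81))  # Na+/Cl-: 6, as in rock salt
print(coordination_number(1.67, 1.81))  # Cs+/Cl-: 8, the CsCl structure
```

The Na+/Cl- result of six neighbors matches the sodium chloride structure described earlier in this chapter.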
Is there such a thing as a "purely" ionic bond?
When two elements form an ionic compound, is an electron really lost by one atom and transferred to the other one? In order to deal with this question, consider the data on the ionic solid LiF. The average radius of the neutral Li atom is about 2.52Å.
Now if this Li atom reacts with an atom of F to form LiF, what is the average distance between the Li nucleus and the electron it has "lost" to the fluorine atom? The answer is 1.56Å; the electron is now closer to the lithium nucleus than it was in neutral lithium! So the answer to the above question is both yes and no: yes, the electron that was in the 2 s orbital of Li is now within the grasp of a fluorine 2 p orbital, but no, the electron is now even closer to the Li nucleus than before, so how can it be "lost"? The one thing that is inarguably true about LiF is that there are more electrons closer to positive nuclei than there are in the separated Li and F atoms. But this is just the condition that gives rise to all forms of chemical bonding:
Chemical bonds form when electrons can be simultaneously near two or more nuclei
It is obvious that the electron-pair bond brings about this situation, and this is the reason for the stability of the covalent bond. What is not so obvious (until you look at the numbers such as were quoted for LiF above) is that the "ionic" bond results in the same condition; even in the most highly ionic compounds, both electrons are close to both nuclei, and the resulting mutual attractions bind the nuclei together. This being the case, is there really any fundamental difference between the ionic and covalent bond?
The answer, according to modern chemical thinking is probably "no"; in fact, there is some question as to whether it is realistic to consider that these solids consist of "ions" in the usual sense. The preferred picture that seems to be emerging is one in which the electron orbitals of adjacent atom pairs are simply skewed so as to place more electron density around the "negative" element than around the "positive" one.
This being said, it must be reiterated that the ionic model of bonding is a useful one for many purposes, and there is nothing wrong with using the term "ionic bond" to describe the interactions between the atoms in "ionic solids" such as LiF and NaCl.
Polar covalence
If there is no such thing as a "completely ionic" bond, can we have one that is completely covalent? The answer is yes, if the two nuclei have equal electron-attracting powers. This situation is guaranteed to be the case with homonuclear diatomic molecules, those consisting of two identical atoms. Thus in Cl 2 , O 2 , and H 2 , electron sharing between the two identical atoms must be exactly even; in such molecules, the center of positive charge corresponds exactly to the center of negative charge: halfway between the two nuclei.
Electronegativity
This term was introduced earlier in the course to denote the relative electron-attracting power of an atom. The electronegativity is not the same as the electron affinity; the latter measures the amount of energy released when an electron from an external source "falls into" a vacancy within the outermost orbital of the atom to yield an isolated negative ion.
The products of bond formation, in contrast, are not ions and they are not isolated; the two nuclei are now drawn closely together by attraction to the region of high electron density between them. Any shift of electron density toward one atom takes place at the energetic expense of stealing it from the other atom.
It is important to understand that electronegativity is not a measurable property of an atom in the sense that ionization energies and electron affinity are; electronegativity is a property that an atom displays when it is bonded to another. Any measurement one does make must necessarily depend on the properties of both of the atoms.
By convention, electronegativities are measured on a scale on which the highest value, 4.0, is arbitrarily assigned to fluorine. A number of electronegativity scales have been proposed, each based on slightly different criteria. The most well known of these is due to Linus Pauling, and is based on a study of bond energies in a variety of compounds.
The periodic trends in electronegativity are about what one would expect; the higher the nuclear charge and the smaller the atom, the more strongly attractive will it be to an outer-shell electron of an atom within binding distance. The division between the metallic and nonmetallic elements is largely that between those that have Pauling electronegativities greater than about 1.7, and those that have smaller electronegativities.
The greater the electronegativity difference between two elements A and B , the more polar will be their molecule AB . It is important to point out, however, that the pairs having the greatest electronegativity differences, the alkali halides, are solids under ordinary conditions and exist as molecules only in the rarefied conditions of the gas phase. Even these ionic solids possess a certain amount of covalent character, so, as discussed above, there is no such thing as a "purely ionic" bond. It has become more common to place binary compounds on a scale something like that shown here, in which the degree of shading is a rough indication of the number of compounds at any point on the covalent-ionic scale.
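Pauling's empirical formula gives a rough percent ionic character for a bond A-B from the electronegativity difference: f = 1 - exp(-(chi_A - chi_B)^2 / 4). The electronegativity values below are common Pauling-scale figures, assumed here for illustration.

```python
import math

# Percent ionic character of a bond from Pauling's empirical relation.
# Electronegativities are typical Pauling-scale values (assumed).

def percent_ionic(chi_a, chi_b):
    return 100 * (1 - math.exp(-(chi_a - chi_b)**2 / 4))

print(round(percent_ionic(2.20, 2.20)))  # H-H : 0, purely covalent
print(round(percent_ionic(3.98, 2.20)))  # H-F : polar covalent, roughly half ionic
print(round(percent_ionic(3.98, 0.98)))  # Li-F: largely, but not purely, ionic
```

Note how even the most electronegativity-mismatched pair falls well short of 100%, consistent with the point above that there is no "purely ionic" bond.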
Covalent or ionic: a false dichotomy
The covalent-ionic continuum described above is certainly an improvement over the old covalent-versus-ionic dichotomy that existed only in the textbook and classroom, but it is still only a one-dimensional view of a multidimensional world, and thus a view that hides more than it reveals. The main thing missing is any allowance for the type of bonding that occurs between more pairs of elements than any other: metallic bonding. Intermetallic compounds are rarely even mentioned in introductory courses, but since most of the elements are metals, there are a lot of them, and many play an important role in metallurgy. In metallic bonding, the valence electrons lose their association with individual atoms; they form what amounts to a mobile "electron fluid" that fills the space between the crystal lattice positions occupied by the atoms (now essentially positive ions). The more readily this electron delocalization occurs, the more "metallic" the element.
Thus instead of the one-dimensional chart shown above, we can construct a triangular diagram whose corners represent the three extremes of "pure" covalent, ionic, and metallic bonding.
We can take this a step farther by taking into account a collection of weaker binding effects known generally as van der Waals forces. Contrary to what is often implied in introductory textbooks, these are the major binding forces in most of the common salts that are not alkali halides; these include NaOH, CaCl 2 , and MgSO 4 . They are also significant in solids such as CuCl 2 and solid SO 3 in which infinite covalently-bound chains are held together by ion-induced dipole and similar forces.
The only way to represent this four-dimensional bonding-type space in two dimensions is to draw a projection of a tetrahedron, each of its four corners representing the "pure" case of one type of bonding.
Note that some of the entries on this diagram (ice, CH 4 , and the two parts of NH 4 ClO 4 ) are covalently bound units, and their placement refers to the binding between these units. Thus the H 2 O molecules in ice are held together mainly by hydrogen bonding, which is a van der Waals force, with only a small covalent contribution.
Note: the triangular and tetrahedral diagrams above were adapted from those in the excellent article by William B. Jensen, "Logic, history and the chemistry textbook", Part II, J. Chemical Education 1998: 817-828.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/09%3A_Chemical_Bonding_and_Molecular_Structure/9.01%3A_Three_Views_of_Chemical_Bonding",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "9.1: Three Views of Chemical Bonding",
"author": "Stephen Lower"
} |
9.2: Molecules - Properties of Bonded Atoms
Make sure you thoroughly understand the following essential ideas which are presented below.
- How would you define a chemical bond ?
- What is meant by the connectivity of a molecule? What additional information might be needed in order to specify its structure?
- Explain the difference between stability and reactivity , and how these factors might prevent a given structure from existing long enough to qualify as a molecule .
- Sketch out a potential energy curve for a typical diatomic molecule, and show how it illustrates the bond energy and bond length .
- Explain how the heat released or absorbed in a chemical reaction can be related to the bond energies of the reactants and products.
- State the major factors that determine the distance between two bonded atoms.
- Describe, in a general way, how the infrared spectrum of a substance can reveal details about its molecular structure.
The concept of chemical bonding lies at the very core of Chemistry; it is what enables about one hundred elements to form the more than fifty million known chemical substances that make up our physical world. Before we get into the theory of chemical bonding, we need to define what we are talking about: Exactly what is a chemical bond? And what observable properties can we use to distinguish one kind of bond from another? This is the first of ten lessons that will help familiarize you with the fundamental concepts of this very broad subject.
What is a chemical bond?
You probably learned some time ago that chemical bonds are what hold atoms together to form the more complicated aggregates that we know as molecules and extended solids. Chemists talk about bonds all the time, and draw pictures of them as lines joining atom symbols. Teachers often identify them as the little sticks that connect the spheres that represent atoms in a plastic molecular model. So it's not surprising that we sometimes tend to think of chemical bonds as “things”. But no one has ever seen a chemical bond, and there is no reason to believe that they really even exist as physical objects.
"SOMETIMES IT SEEMS to me that a bond between two atoms has become so real, so tangible, so friendly, that I can almost see it. Then I awake with a little shock, for a chemical bond is not a real thing. It does not exist. No one has ever seen one. No one ever can. It is a figment of our own imagination." C.A. Coulson (1910-1974) was an English theoretical chemist who played a central role in the development of quantum theories of chemical bonding.
It is more useful to regard a chemical bond as an effect that causes certain atoms to join together to form enduring structures that have unique physical and chemical properties.
So although the "chemical bond" is no more than a convenient fiction, chemical bonding , which leads to the near-infinity of substances (31 million in mid-2007), lies at the very core of chemistry. The forces that hold bonded atoms together are basically just the same kinds of electrostatic attractions that bind the electrons of an atom to its positively-charged nucleus:
Chemical Bonds
Chemical bonding occurs when one or more electrons are simultaneously attracted to two (or more) nuclei.
This is the most important fact about chemical bonding that you should know, but it is not of itself a workable theory of bonding because it does not describe the conditions under which bonding occurs, nor does it make useful predictions about the properties of the bonded atoms.
What is a molecule?
Even at the end of the 19th century, when compounds and their formulas had long been in use, some prominent chemists doubted that molecules (or atoms) were any more than a convenient model. Molecules suddenly became real in 1905, when Albert Einstein showed that Brownian motion, the irregular microscopic movements of tiny pollen grains floating in water, could be directly attributed to collisions with molecule-sized particles.
Most people think of molecules as the particles that result when atoms become joined together in some way. This conveys the general picture, but a somewhat better definition that we will use in these lessons is:
A molecule is an aggregate of atoms that possesses distinctive observable properties. A more restrictive definition distinguishes between a "true" molecule that exists as an independent particle, and an extended solid that can only be represented by its simplest formula. Methane, CH 4 , is an example of the former, while sodium chloride, which does not contain any discrete NaCl units, is the most widely-known extended solid. But because we want to look at chemical bonding in the most general way, we will avoid making this distinction here except in a few special cases. In order to emphasize this "aggregate of atoms" definition, we will often use terms such as "chemical species" and "structures" in place of "molecules" in this lesson.
The definition written above is an operational one; that is, it depends on our ability to observe and measure the molecule's properties. Clearly, this means that the molecule must retain its identity for a period of time long enough to carry out the measurements. For most of the molecules that chemistry deals with, this presents no difficulty. But it does happen that some structures that we can write formulas for, such as He 2 , have such brief lives that no significant properties have been observed. So to some extent, what we consider to be a molecule depends on the technology we use to observe them, and this will necessarily change with time.
Structure, structure, structure!
And what are those properties that characterize a particular kind of molecule and distinguish it from others? Just as real estate is judged by "location, location, location", the identity of a chemical species is defined by its structure . In its most fundamental sense, the structure of a molecule is specified by the identity of its constituent atoms and the sequence in which they are joined together, that is, by the bonding connectivity . This, in turn, defines the bonding geometry — the spatial relationship between the bonded atoms.
The importance of bonding connectivity is nicely illustrated by the structures of the two compounds ethanol and dimethyl ether , both of which have the simplest formula C 2 H 6 O. The structural formulas reveal the very different connectivities of these two molecules whose physical and chemical properties are quite different:
Structures without molecules: stability and reactivity
The precise definition of bonding energy is described in another lesson and is not important here. For the moment you only need to know that in any stable structure, the potential energy of its atoms is lower than that of the individual isolated atoms. Thus the formation of methane from its gaseous atoms (a reaction that cannot be observed under ordinary conditions but for which the energetics are known from indirect evidence)
\[ \ce{ 4 H(g) + C(g) → CH4}\]
is accompanied by the release of heat, and is thus an exothermic process. The quantity of heat released is related to the stability of the molecule. The smaller the amount of energy released, the more easily can the molecule absorb thermal energy from the environment, driving the above reaction in reverse and leading to the molecule's decomposition. A highly stable molecule such as methane must be subjected to temperatures of more than 1000°C for significant decomposition to occur. But the noble-gas molecule KrF 2 is so weakly bound that it decomposes even at 0°C, and the structure He 2 has never been observed. If a particular arrangement of atoms is too unstable to reveal its properties at any achievable temperature, then it does not qualify to be called a molecule.
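The link between stability and bond energies can be sketched with rough numbers. The bond-energy values below (~413 kJ/mol for an average C-H bond, ~50 kJ/mol for Kr-F) are approximate tabulated averages assumed for illustration, not figures from the text.

```python
# Heat released (kJ/mol) when a molecule forms from its gaseous atoms,
# approximated as the sum of its bond energies. Bond energies are assumed
# rough averages.

def atomization_release(n_bonds, bond_energy_kj_mol):
    return n_bonds * bond_energy_kj_mol

# CH4: four strong C-H bonds -> a deep energy well, stable past 1000 degrees C
print(atomization_release(4, 413))   # 1652 kJ/mol
# KrF2: two weak Kr-F bonds -> a shallow well, decomposes even near 0 degrees C
print(atomization_release(2, 50))    # 100 kJ/mol
```

The roughly sixteen-fold difference in well depth is what separates a molecule that survives a flame from one that falls apart in an ice bath.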
There are many molecules that are energetically stable enough to meet the above criterion, but are so reactive that their lifetimes are too brief to make their observation possible. The molecule CH 3 , methyl , is a good example: it can be formed by electrical discharge in gaseous CH 4 , but it is so reactive that it combines with almost any molecule it strikes (even another CH 3 ) within a few collisions. It was not until the development of spectroscopic methods (in which a molecule is characterized by the wavelengths of light that it absorbs or emits) that methyl was recognized as a stable albeit shamelessly promiscuous molecule that is an important intermediate in many chemical processes ranging from flames to atmospheric chemistry.
How we Depict Chemical Structures
Chemical species are traditionally represented by structural formulas such as the one for ascorbic acid (vitamin C) which we show here. The lines, of course, represent the "chemical bonds" of the molecule. More importantly, the structural formula of a molecule defines its connectivity , as was illustrated in the comparison of ethanol and dimethyl ether shown above.
Three common ways of depicting a molecule, illustrated here for methane (CH₄):

- Ordinary structural formula, showing connectivity only.
- Ball-and-stick model, showing the "chemical bonds" and bonding geometry, but with the individual atoms unrealistically separated.
- Space-filling model, showing the relative sizes of the atoms and the general shape of the molecule (note how this shows CH₄ to be roughly spherical), but with not all atoms visible and no obvious "chemical bonds".
One limitation of such formulas is that they are drawn on a two-dimensional paper or screen, whereas most molecules have a three-dimensional shape. The wedge-shaped lines in the structural formula are one way of indicating which bonds extend above or below the viewing plane. You will probably be spared having to learn this convention until you get into second-year Chemistry. Three-dimensional models (either real plastic ones or images that incorporate perspective and shading) reveal much more about a molecule's structure. The ball-and-stick and space-filling renditions are widely employed, but each has its limitations, as seen in the examples above.
But what would a molecule "really" look like if you could view it through a magical microscope of some kind? A possible answer would be this computer-generated view of nicotine. At first you might think it looks more like a piece of abstract sculpture than a molecule, but it does reveal the shape of the negative charge-cloud that envelops the collection of atom cores and nuclei hidden within. This can be very important for understanding how the molecule interacts with the similar charge-clouds that clothe solvent and bioreceptor molecules.
Finally, we get to see one! In 2009, IBM scientists in Switzerland succeeded in imaging a real molecule, using a technique known as atomic force microscopy in which an atomically thin metallic probe is drawn ever-so-slightly above the surface of an immobilized pentacene molecule cooled to nearly absolute zero. In order to improve the image quality, a molecule of carbon monoxide was placed on the end of the probe. The image produced by the AFM probe is shown at the very bottom. What is actually being imaged is the surface of the electron clouds of the molecule, which consists of five fused hexagonal rings of carbon atoms with hydrogens on its periphery. The tiny bumps that correspond to these hydrogen atoms attest to the remarkable resolution of this experiment.
Visualization of molecular structures
The purpose of rendering a molecular structure in a particular way is not to achieve "realism" (whatever that might be), but rather to convey useful information of some kind. Modern computer rendering software takes its basic data from various kinds of standard structural databases which are compiled either from experimental X-ray scattering data, or are calculated from theory.
As was mentioned above, it is often desirable to show the "molecular surface"— the veil of negative charge that originates in the valence electrons of the atoms but which tends to be spread over the entire molecule to a distance that can significantly affect van der Waals interactions with neighboring molecules. It is often helpful to superimpose images representing the atoms within the molecule, scaled to their average covalent radii, and to draw the "bonding lines" expressing their connectivity.
Knowing the properties of molecular surfaces is vitally important to understanding any process that depends on one molecule remaining in physical contact with another. Catalysis is one example, but one of the main interests at the present time is biological signaling , in which a relatively small molecule binds to or "docks" with a receptor site on a much larger one, often a protein. Sophisticated molecular modeling software such as was used to produce these images is now a major tool in many areas of research biology.
Visualizing very large molecules such as carbohydrates and proteins that may contain tens of thousands of atoms presents obvious problems. The usual technique is to simplify the major parts of the molecule, representing major kinds of extended structural units by shapes such as ribbons or tubes which are twisted or bent to approximate their conformations. These are then gathered to reveal the geometrical relations of the various units within the overall structure. Individual atoms, if shown at all, are restricted to those of special interest.
Study of the surface properties of large molecules is crucial for understanding how proteins, carbohydrates, and DNA interact with smaller molecules, especially those involved in the transport of ions and small molecules across cell membranes, immune-system behavior, and signal transduction processes such as the "turning on" of genes.
Observable Properties of Bonded Atom Pairs
When we talk about the properties of a particular chemical bond, we are really discussing the relationship between two adjacent atoms that are part of the molecule. Diatomic molecules are of course the easiest to study, and the information we derive from them helps us interpret various kinds of experiments we carry out on more complicated molecules.
It is important to bear in mind that the exact properties of a specific kind of bond will be determined in part by the nature of the other bonds in the molecule; thus the energy and length of the C–H bond will be somewhat dependent on what other atoms are connected to the carbon atom. Similarly, the C–H bond length can vary by as much as 4 percent between different molecules. For this reason, the values listed in tables of bond energy and bond length are usually averages taken over a variety of compounds that contain a specific atom pair.
In some cases, such as C—O and C—C, the variations can be much greater, approaching 20 percent. In these cases, the values fall into groups which we interpret as representative of single and multiple (double and triple) bonds.
Potential energy curves
The energy of a system of two atoms depends on the distance between them. At large distances the energy is zero, meaning “no interaction”. At distances of several atomic diameters attractive forces dominate, whereas at very close approaches the force is repulsive, causing the energy to rise. The attractive and repulsive effects are balanced at the minimum point in the curve. Plots that illustrate this relationship are known as Morse curves , and they are quite useful in defining certain properties of a chemical bond.
The internuclear distance at which the potential energy minimum occurs defines the bond length . This is more correctly known as the equilibrium bond length, because thermal motion causes the two atoms to vibrate about this distance. In general, the stronger the bond, the smaller will be the bond length.
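The shape of a Morse curve is easy to sketch numerically. In the minimal Python sketch below, the well depth De, width parameter a, and equilibrium distance re are assumed illustrative values, chosen to be roughly those of H₂; the point is simply that the energy minimum of the curve falls at the equilibrium bond length:

```python
import math

def morse(r, De=436.0, a=19.3, re=0.074):
    """Morse potential V(r) = De*(1 - exp(-a*(r - re)))**2 - De,
    zeroed at infinite separation.  Illustrative H2-like parameters
    (assumed): De in kJ/mol, re in nm, a in 1/nm."""
    return De * (1.0 - math.exp(-a * (r - re)))**2 - De

# Scan a grid of internuclear distances; the minimum of the curve
# defines the equilibrium bond length.
rs = [0.03 + 0.0005 * i for i in range(400)]
r_min = min(rs, key=morse)
print(round(r_min, 3))   # minimum falls at re = 0.074 nm (74 pm)
```

At the minimum the energy is −De (the "depth of the well"), and at large separations it climbs back to zero, i.e. "no interaction", just as the text describes.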
Attractive forces operate between all atoms, but unless the potential energy minimum is at least of the order of RT, the two atoms will not be able to withstand the disruptive influence of thermal energy long enough to result in an identifiable molecule. Thus we can say that a chemical bond exists between the two atoms in \(\ce{H2}\). The weak attraction between argon atoms does not allow \(\ce{Ar2}\) to exist as a molecule, but it does give rise to the van der Waals force that holds argon atoms together in its liquid and solid forms.
Potential energy and kinetic energy

Quantum theory tells us that an electron in an atom possesses kinetic energy K as well as potential energy P, so the total energy E is always the sum of the two: E = P + K. The relation between them is surprisingly simple: K = –0.5 P. This means that when a chemical bond forms (an exothermic process with ΔE < 0), the decrease in potential energy is accompanied by an increase in the kinetic energy (embodied in the momentum of the bonding electrons), but the magnitude of the latter change is only half as much, so the change in potential energy always dominates. The bond energy –ΔE has half the magnitude of the fall in potential energy.
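In numbers: a hypothetical bond whose formation drops the electrons' potential energy by 872 kJ/mol (an assumed illustrative value) gains half that back as kinetic energy, leaving a bond energy of 436 kJ/mol:

```python
# Virial-theorem bookkeeping for bond formation (illustrative numbers):
delta_P = -872.0          # kJ/mol fall in potential energy (assumed value)
delta_K = -0.5 * delta_P  # K = -P/2, so the kinetic energy rises by +436
delta_E = delta_P + delta_K
print(delta_E)            # -436.0: the bond energy is half the P-drop
```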
Bond energies
The bond energy is the amount of work that must be done to pull two atoms completely apart; in other words, it is the same as the depth of the “well” in the potential energy curve shown above. This is almost, but not quite the same as the bond dissociation energy actually required to break the chemical bond; the difference is the very small zero-point energy. This relationship will be clarified below in the section on bond vibrational frequencies. Bond energies are usually determined indirectly from thermodynamic data, but there are two main experimental ways of measuring them directly:
1. The direct thermochemical method involves separating the two atoms by an electrical discharge or some other means, and then measuring the heat given off when they recombine. Thus the energy of the C—C single bond can be estimated from the heat of the recombination reaction between methyl radicals, yielding ethane: \[CH_3 + CH_3 → H_3C–CH_3\] Although this method is simple in principle, it is not easy to carry out experimentally. The highly reactive components must be prepared in high purity and in a stream of moving gas.
2. The spectroscopic method is based on the principle that absorption of light whose wavelength corresponds to the bond energy will often lead to the breaking of the bond and dissociation of the molecule. For some bonds, this light falls into the green and blue regions of the spectrum, but for most bonds ultraviolet light is required. The experiment is carried out by observing the absorption of light by the substance being studied as the wavelength is decreased. When the wavelength is sufficiently small to break the bond, a characteristic change in the absorption pattern is observed.
Spectroscopy is quite easily carried out and can yield highly precise results, but this method is only applicable to a relatively small number of simple molecules. The major problem is that the light must first be absorbed by the molecule, and relatively few molecules happen to absorb light of a wavelength that corresponds to a bond energy.
Experiments carried out on diatomic molecules such as O₂ and CS yield unambiguous values of bond energy, but for more complex molecules there are complications. For example, the heat given off in the CH₃ combination reaction written above will also include a small component that represents the differences in the energies of the C–H bonds in methyl and in ethane. These can be corrected for by experimental data on reactions such as
\[CH_3 + H → CH_4\]
\[CH_2 + H → CH_3\]
By assembling a large amount of experimental information of this kind, a consistent set of average bond energies can be obtained (see table below.) The energies of double bonds are greater than those of single bonds, and those of triple bonds are higher still.
Use of bond energies in estimating heats of reaction
One can often get a very good idea of how much heat will be absorbed or given off in a reaction by simply finding the difference in the total bond energies contained in the reactants and products. The strength of an individual bond such as O–H depends to some extent on its environment in a molecule (that is, in this example, on what other atom is connected to the oxygen atom), but tables of "average" energies of the various common bond types are widely available and can provide useful estimates of the quantity of heat absorbed or released in many chemical reactions.
Average bond energies (kJ/mol):

| Single bonds | kJ/mol | Single bonds | kJ/mol | Single bonds | kJ/mol | Multiple bonds | kJ/mol |
|---|---|---|---|---|---|---|---|
| H—H | 432 | N—H | 391 | I—I | 149 | C=C | 614 |
| H—F | 565 | N—N | 160 | I—Cl | 208 | C≡C | 839 |
| H—Cl | 427 | N—F | 272 | I—Br | 175 | O=O | 495 |
| H—Br | 363 | N—Cl | 200 | | | C=O* | 745 |
| H—I | 295 | N—Br | 243 | S—H | 347 | C≡O | 1072 |
| | | N—O | 201 | S—F | 327 | N=O | 607 |
| C—H | 413 | O—H | 467 | S—Cl | 253 | N=N | 418 |
| C—C | 347 | O—O | 146 | S—Br | 218 | N≡N | 941 |
| C—N | 305 | O—F | 190 | S—S | 266 | C≡N | 891 |
| C—O | 358 | O—Cl | 203 | | | C=N | 615 |
| C—F | 485 | O—I | 234 | Si—Si | 340 | | |
| C—Cl | 339 | | | Si—H | 393 | | |
| C—Br | 276 | F—F | 154 | Si—C | 360 | | |
| C—I | 240 | F—Cl | 253 | Si—O | 452 | | |
| C—S | 259 | F—Br | 237 | | | | |
| Cl—Cl | 239 | | | | | | |
| Cl—Br | 218 | | | | | | |
| Br—Br | 193 | | | | | | |

*For the C=O bond in CO₂, 799 kJ/mol.
Average bond energies are the averages of bond dissociation energies (see Table T3 for more complete list). For example the average bond energy of O-H in H 2 O is 464 kJ/mol. This is due to the fact that the H-OH bond requires 498.7 kJ/mol to dissociate, while the O-H bond needs 428 kJ/mol. \[\dfrac{498.7\; kJ/mol + 428\; kJ/mol}{2}=464\; kJ/mol\]
Consider the reaction of chlorine with methane to produce dichloromethane and hydrogen chloride:
\[\ce{CH4(g) + 2 Cl2(g) → CH2Cl2(g) + 2 HCl(g)}\]
In this reaction, two C–H bonds and two Cl–Cl bonds are broken, and two new C–Cl and H–Cl bonds are formed. The net change associated with the reaction is
2(C–H) + 2(Cl–Cl) – 2(C–Cl) – 2(H–Cl) = (830 + 486 –660 – 864) kJ
which comes to –208 kJ per mole of methane; this agrees quite well with the observed heat of reaction, which is –202 kJ/mol.
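This bookkeeping is easy to automate. The minimal Python sketch below redoes the estimate using the average bond energies from the table above; because different tables quote slightly different averages, it comes out near –228 kJ rather than the –208 kJ computed above, still reasonably close to the observed –202 kJ/mol:

```python
# Estimate the heat of CH4 + 2 Cl2 -> CH2Cl2 + 2 HCl from average bond
# energies (kJ/mol, values taken from the table above).
E = {"C-H": 413, "Cl-Cl": 239, "C-Cl": 339, "H-Cl": 427}

broken = {"C-H": 2, "Cl-Cl": 2}   # bonds broken (costs energy)
formed = {"C-Cl": 2, "H-Cl": 2}   # bonds formed (releases energy)

dH = (sum(n * E[b] for b, n in broken.items())
      - sum(n * E[b] for b, n in formed.items()))
print(dH)   # -228 kJ per mole of CH4
```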
Bond lengths and angles
The length of a chemical bond is the distance between the centers of the two bonded atoms (the internuclear distance). Bond lengths have traditionally been expressed in Ångstrom units, but picometers are now preferred (1 Å = 10⁻⁸ cm = 100 pm). Bond lengths are typically in the range 1–2 Å, or 100–200 pm. Even though the bond is vibrating, equilibrium bond lengths can be determined experimentally to within ±1 pm.
Bond lengths depend mainly on the sizes of the atoms, and secondarily on the bond strengths, the stronger bonds tending to be shorter. Bonds involving hydrogen can be quite short; the shortest bond of all, H–H, is only 74 pm. Multiply-bonded atoms are closer together than singly-bonded ones; this is a major criterion for experimentally determining the multiplicity of a bond. This trend is clearly evident in the above plot, which depicts the sequence of carbon-carbon single, double, and triple bonds.
The most common method of measuring bond lengths in solids is by analysis of the diffraction or scattering of X-rays when they pass through the regularly-spaced atoms in the crystal. For gaseous molecules, neutron- or electron-diffraction can also be used.
The complete structure of a molecule requires a specification of the coordinates of each of its atoms in three-dimensional space. This data can then be used by computer programs to construct visualizations of the molecule as discussed above. One such visualization of the water molecule, with bond distances and the HOH bond angle superimposed on a space-filling model, is shown here. (It is taken from an excellent reference source on water). The colors show the results of calculations that depict the way in which electron charge is distributed around the three nuclei.
Bond stretching and infrared absorption
When an atom is displaced from its equilibrium position in a molecule, it is subject to a restoring force which increases with the displacement. A spring follows the same law (Hooke’s law); a chemical bond is therefore formally similar to a spring that has weights (atoms) attached to its two ends. A mechanical system of this kind possesses a natural vibrational frequency which depends on the masses of the weights and the stiffness of the spring. These vibrations are initiated by the thermal energy of the surroundings; chemically-bonded atoms are never at rest at temperatures above absolute zero.
On the atomic scale in which all motions are quantized , a vibrating system can possess a series of vibrational frequencies, or states . These are depicted by the horizontal lines in the potential energy curve shown here. Notice that the very bottom of the curve does not correspond to an allowed state because at this point the positions of the atoms are precisely specified, which would violate the uncertainty principle. The lowest-allowed, or ground vibrational state is the one denoted by 0, and it is normally the only state that is significantly populated in most molecules at room temperature. In order to jump to a higher state, the molecule must absorb a photon whose energy is equal to the distance between the two states.
For ordinary chemical bonds, the energy differences between these natural vibrational frequencies correspond to those of infrared light . Each wavelength of infrared light that excites the vibrational motion of a particular bond will be absorbed by the molecule. In general, the stronger the bond and the lighter the atoms it connects, the higher will be its natural stretching frequency and the shorter the wavelength of light absorbed by it. Studies on a wide variety of molecules have made it possible to determine the wavelengths absorbed by each kind of bond. By plotting the degree of absorption as a function of wavelength, one obtains the infrared spectrum of the molecule which allows one to "see" what kinds of bonds are present.
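The mass-and-stiffness dependence can be estimated with the harmonic-oscillator formula for the stretching wavenumber, ν̃ = (1/2πc)√(k/μ). In the sketch below the C–H force constant k ≈ 500 N/m is an assumed, typical literature value, not a quantity from this text:

```python
import math

amu = 1.6605e-27   # kg per atomic mass unit
c   = 2.998e10     # speed of light in cm/s (gives wavenumbers in cm^-1)
k   = 500.0        # N/m, assumed C-H stretching force constant

# Reduced mass of the C-H atom pair
mu = (12.0 * 1.008) / (12.0 + 1.008) * amu

# Harmonic-oscillator stretching wavenumber: (1 / 2*pi*c) * sqrt(k / mu)
nu_tilde = math.sqrt(k / mu) / (2 * math.pi * c)
print(round(nu_tilde))   # ~3000 cm^-1, in the C-H stretch region
```

The answer lands near 3000 cm⁻¹, consistent with the C–H stretching bands in the ethanol spectrum discussed below; the light reduced mass of the C–H pair is what pushes hydrogen stretches to such high frequencies.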
Infrared spectrum of ethanol

The low points in the plot below indicate the frequencies of infrared light that are absorbed by ethanol (ethyl alcohol), CH₃CH₂OH. Notice how stretching frequencies involving hydrogen are higher, reflecting the smaller mass of that atom. Only the most prominent absorption bands are noted here.
Actual infrared spectra are complicated by the presence of more complex motions (stretches involving more than two atoms, wagging, etc.), and absorption to higher quantum states (overtones), so infrared spectra can become quite complex. This is not necessarily a disadvantage, however, because such spectra can serve as a "fingerprint" that is unique to a particular molecule and can be helpful in identifying it. Largely for this reason, infrared spectrometers are standard equipment in most chemistry laboratories. Now that you know something about bond stretching vibrations, you can impress your friends by telling them why water is blue!
Infrared Absorption and Global Warming
The aspect of bond stretching and bending frequencies that impacts our lives most directly is the way that some of the gases of the atmosphere absorb infrared light and thus affect the heat balance of the Earth. Owing to their symmetrical shapes, the principal atmospheric components N₂ and O₂ do not absorb infrared light, but the minor components water vapor and carbon dioxide are strong absorbers, especially in the long-wavelength region of the infrared. Absorption of infrared light by a gas causes its temperature to rise, so any source of infrared light will tend to warm the atmosphere; this phenomenon is known as the greenhouse effect .
The incoming radiation from the Sun (which contains relatively little long-wave infrared light) passes freely through the atmosphere and is absorbed by the Earth's surface, warming it up and causing it to re-emit some of this energy as long-wavelength infrared. Most of the latter is absorbed by H₂O and CO₂, the major greenhouse gases in the unpolluted atmosphere, effectively trapping the radiation as heat. Thus the atmosphere is heated by the Earth, rather than by direct sunlight. Without the “ greenhouse gases ” in the atmosphere, the Earth's heat would be radiated away into space, and our planet would be too cold for life.
Radiation balance of the Earth

In order to maintain a constant average temperature, the quantity of radiation (sunlight) absorbed by the surface must be exactly balanced by the quantity of long-wavelength infrared emitted by the surface and atmosphere and radiated back into space. Atmospheric gases that absorb this infrared light partially block this emission and become warmer, raising the Earth's temperature. This diagram is from a U. of Oregon Web page.
Since the beginning of the Industrial Revolution in the 19th century, huge quantities of additional greenhouse gases have been accumulating in the atmosphere. Carbon dioxide from fossil fuel combustion has been the principal source, but intensive agriculture also contributes significant quantities of methane (CH₄) and nitrous oxide (N₂O) which are also efficient far-infrared absorbers. The measurable increase in these gases is believed by many to be responsible for the increase in the average temperature of the Earth that has been noted over the past 50 years— a trend that could initiate widespread flooding and other disasters if it continues.

Attribution: "9.2: Molecules - Properties of Bonded Atoms" by Stephen Lower (Chem1), licensed under a Creative Commons Attribution 3.0 License.
9.3: Models of Chemical Bonding
Make sure you thoroughly understand the following essential ideas which have been presented.
- Comment on the distinction between a theory and a model in the context of chemical bonding.
- What is meant by a classical model of chemical bonding?
Why do atoms bind together— sometimes? The answer to this question would ideally be a simple, easily understood theory that would not only explain why atoms bind together to form molecules, but would also predict the three-dimensional structures of the resulting compounds as well as the energies and other properties of the bonds themselves. Unfortunately, no one theory exists that accomplishes these goals in a satisfactory way for all of the many categories of compounds that are known. Moreover, it seems likely that if such a theory does ever come into being, it will be far from simple.
About Models in Science
When we are faced with the need to find a scientific explanation for a complex phenomenon such as bonding, experience has shown that it is often best to begin by developing a model . A scientific model is something like a theory in that it should be able to explain observations and to make useful predictions. But whereas a theory can be discredited by a single contradictory case, a model can be useful even if it does not encompass all instances of the effects it attempts to explain. We do not even require that a model be a credible representation of reality; all we ask is that it be able to explain the behavior of those cases to which it is applicable in terms that are consistent with the model itself.
An example of a model that you may already know about is the kinetic molecular theory of gases. Despite its name, this is really a model (at least at the level that beginning students use it) because it does not even try to explain the observed behavior of real gases. Nevertheless, it serves as a tool for developing our understanding of gases, and as an essential starting point for more elaborate treatments.
One thing is clear: chemical bonding is basically electrical in nature, the result of attraction between bodies of opposite charge; bonding occurs when outer-shell electrons are simultaneously attracted to the positively-charged nuclei of two or more nearby atoms. The need for models arises when we try to understand why
- Not all pairs of atoms can form stable bonds
- Different elements can form different numbers of bonds (this is expressed as "combining power" or "valence".)
- The geometric arrangement of the bonds ("bonding geometry") around a given kind of atom is a property of the element.
Given the extraordinary variety of ways in which atoms combine into aggregates, it should come as no surprise that a number of useful bonding models have been developed. Most of them apply only to certain classes of compounds or attempt to explain only a restricted range of phenomena. In this section we will provide brief descriptions of some of the bonding models; the more important of these will be treated in much more detail in later lessons in this unit.
Some early views of chemical bonding
Intense speculation about “chemical affinity” began in the 18th century. Some likened the tendency of one atom to “close” with another to a human-like kind of affection. Others attributed bonding to magnetic-like forces, or to varying numbers of “hooks” on different kinds of atoms. The latter constituted a primitive (and extremely limited) way of explaining the different combining powers ( valences ) of the different elements.
"There are no such things..."
Napoleon's definition of history as a set of lies agreed on by historians seems to have a parallel with chemical bonding and chemists. At least in Chemistry, we can call the various explanations "models" and get away with it even if they are demonstrably wrong, as long as we find them useful. In a provocative article ( J Chem Educ 1990 67(4) 280-298), J. F. Ogilvie tells us that there are no such things as orbitals, or, for that matter, non-bonding electrons, bonds, or even uniquely identifiable atoms within molecules. This idea disturbed a lot of people (teachers and textbook authors preferred to ignore it) and prompted a spirited rejoinder ( J Chem Ed 1992 69(6) 519-521) from Linus Pauling, father of the modern quantum-mechanical view of the chemical bond.
But the idea has never quite gone away. Richard Bader of McMaster University has developed a quantitative "atoms in molecules" model that depicts molecules as a collection of point-like nuclei embedded in a diffuse cloud of electrons. There are no "bonds" in this model, but only "bond paths" that correspond to higher values of electron density along certain directions that are governed by the manner in which the positive nuclei generate localized distortions of the electron cloud.
Classical models of the chemical bond
By classical , we mean models that do not take into account the quantum behavior of small particles, notably the electron. These models generally assume that electrons and ions behave as point charges which attract and repel according to the laws of electrostatics. Although this completely ignores what has been learned about the nature of the electron since the development of quantum theory in the 1920’s, these classical models have not only proven extremely useful, but the major ones also serve as the basis for the chemist’s general classification of compounds into “covalent” and “ionic” categories.
The Ionic Model
Ever since the discovery early in the 19th century that solutions of salts and other electrolytes conduct electric current, there has been general agreement that the forces that hold atoms together must be electrical in nature. Electrolytic solutions contain ions having opposite electrical charges; opposite charges attract, so perhaps the substances from which these ions come consist of positive and negatively charged atoms held together by electrostatic attraction.
It turns out that this is not true generally, but a model built on this assumption does a fairly good job of explaining a rather small but important class of compounds that are called ionic solids . The most well known example of such a compound is sodium chloride, which consists of two interpenetrating lattices of Na + and Cl – ions arranged in such as way that every ion of one type is surrounded (in three dimensional space) by six ions of opposite charge.
One can envision the formation of a solid NaCl unit by a sequence of events in which one mole of gaseous Na atoms lose electrons to one mole of Cl atoms, followed by condensation of the resulting ions into a crystal lattice:
| Process | Energy | |
|---|---|---|
| Na(g) → Na⁺(g) + e⁻ | +494 kJ | ionization energy |
| Cl(g) + e⁻ → Cl⁻(g) | –368 kJ | electron affinity |
| Na⁺(g) + Cl⁻(g) → NaCl(s) | –498 kJ | lattice energy |
| Na(g) + Cl(g) → NaCl(s) | –372 kJ | sum: Na–Cl bond energy |
Note: positive energy values denote endothermic processes, while negative ones are exothermic.
Since the first two energies are known experimentally, as is the energy of the sum of the three processes, the lattice energy can be found by difference. It can also be calculated by averaging the electrostatic forces exerted on each ion over the various directions in the solid, and this calculation is generally in good agreement with observation, thus lending credence to the model. The sum of the three energy terms is clearly negative, and corresponds to the liberation of heat in the net reaction (bottom row of the table), which defines the Na–Cl “bond” energy.
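The energy bookkeeping in the table can be checked in a couple of lines; a minimal sketch using the values quoted above:

```python
# Energy bookkeeping for Na(g) + Cl(g) -> NaCl(s), in kJ/mol.
# Positive values are endothermic, negative values exothermic.
ionization_energy = +494   # Na(g) -> Na+(g) + e-
electron_affinity = -368   # Cl(g) + e- -> Cl-(g)
lattice_energy    = -498   # Na+(g) + Cl-(g) -> NaCl(s)

na_cl_bond_energy = ionization_energy + electron_affinity + lattice_energy
print(na_cl_bond_energy)   # -372: the net process liberates heat
```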
The ionic solid is more stable than the equivalent number of gaseous atoms simply because the three-dimensional NaCl structure allows more electrons to be closer to more nuclei. This is the criterion for the stability of any kind of molecule; all that is special about the “ionic” bond is that we can employ a conceptually simple electrostatic model to predict the bond strength.
The main limitation of this model is that it applies really well only to the small class of solids composed of Group 1 and 2 elements with highly electronegative elements such as the halogens. Although compounds such as CuCl₂ dissociate into ions when they dissolve in water, the fundamental units making up the solid are more like polymeric chains of covalently-bound CuCl₂ molecules that have little ionic character.
Shared-electron (covalent) model
This model originated with the theory developed by G.N. Lewis in 1916, and it remains the most widely-used model of chemical bonding. It is founded on the idea that a pair of electrons shared between two atoms can create a mutual attraction, and thus a chemical bond.
Usually each atom contributes one electron (one of its valence electrons ) to the pair, but in some cases both electrons come from one of the atoms. For example, the bond between hydrogen and chlorine in the hydrogen chloride molecule is made up of the single 1s electron of hydrogen paired up with one of chlorine's seven valence (3p) electrons. The stability afforded by this sharing is thought to derive from the noble gas configurations (helium for hydrogen, argon for chlorine) that surround the bound atoms.
The origin of the electrostatic binding forces in this model can best be understood by examining the simplest possible molecule. This is the hydrogen molecule ion H₂⁺, which consists of two nuclei and one electron.
First, however, think what would happen if we tried to make the even simpler molecule H₂²⁺. Since this would consist only of two protons whose electrostatic charges would repel each other at all distances, it is clear that such a molecule cannot exist; something more than two nuclei is required for bonding to occur.
In H₂⁺ we have a third particle, the electron. The effect of this electron will depend on its location with respect to the two nuclei. If the electron is in the space between the two nuclei (the binding region in the diagram), it will attract both protons toward itself, and thus toward each other. If the total attraction energy exceeds the internuclear repulsion, there will be a net bonding effect and the molecule will be stable. If, on the other hand, the electron is off to one side (in an antibinding region ), it will attract both nuclei, but it will attract the closer one much more strongly, owing to the inverse-square nature of Coulomb’s law. As a consequence, the electron will now actively work against bonding by helping to push the two nuclei apart.

Polar covalent model
A purely covalent bond can only be guaranteed when the electronegativities (electron-attracting powers) of the two atoms are identical. When atoms having different electronegativities are joined, the electrons shared between them will be displaced toward the more electronegative atom, conferring a polarity on the bond which can be described in terms of percent ionic character.
Coulombic model
This is an extension of the ionic model to compounds that are ordinarily considered to be non-ionic. Combined hydrogen is always considered to exist as the hydride ion H – , so that methane can be treated as if it were C 4+ H – 4 .
This is not as bizarre as it might seem at first if you recall that the proton has almost no significant size, so that it is essentially embedded in an electron pair when it is joined to another atom in a covalent bond. This model, which is not as well known as it deserves to be, has surprisingly good predictive power, both as to bond energies and structures.
VSEPR model
The “valence shell electron pair repulsion” (VSEPR) model is not so much a model of chemical bonding as a scheme for explaining the shapes of molecules. It is based on the quantum mechanical view that bonds represent electron clouds — physical regions of negative electric charge that repel each other and thus try to stay as far apart as possible. We will explore this concept in much greater detail in a later unit.
Quantum-mechanical models
These models of bonding take into account the fact that a particle as light as the electron cannot really be said to be in any single location. The best we can do is define a region of space in which the probability of finding the electron has some arbitrary value which will always be less than unity. The shape of this volume of space is called an orbital and is defined by a mathematical function that relates the probability to the ( x,y,z) coordinates of the molecule.
Like other models of bonding, the quantum models attempt to show how more electrons can be simultaneously close to more nuclei. Instead of doing so through purely geometrical arguments, they attempt this by predicting the nature of the orbitals which the valence electrons occupy in joined atoms.
The hybrid orbital model
This was developed by Linus Pauling in 1931 and was the first quantum-based model of bonding. It is based on the premise that if the atomic s , p , and d orbitals occupied by the valence electrons of adjacent atoms are combined in a suitable way, the hybrid orbitals that result will have the character and directional properties that are consistent with the bonding pattern in the molecule. The rules for bringing about these combinations turn out to be remarkably simple, so once they were worked out it became possible to use this model to predict the bonding behavior in a wide variety of molecules. The hybrid orbital model is most usefully applied to the p -block elements in the first few rows of the periodic table, and is especially important in organic chemistry.
The molecular orbital model
This model takes a more fundamental approach by regarding a molecule as a collection of valence electrons and positive cores. Just as the nature of atomic orbitals derives from the spherical symmetry of the atom, so will the properties of these new molecular orbitals be controlled by the interaction of the valence electrons with the multiple positive centers of these atomic cores.
These new orbitals, unlike those of the hybrid model, are delocalized; that is, they do not “belong” to any one atom but extend over the entire region of space that encompasses the bonded atoms. The available (valence) electrons then fill these orbitals from the lowest to the highest, very much as in the Aufbau principle that you learned for working out atomic electron configurations. For small molecules (which are the only ones we will consider here), there are simple rules that govern the way that atomic orbitals transform themselves into molecular orbitals as the separate atoms are brought together. The real power of molecular orbital theory, however, comes from its mathematical formulation which lends itself to detailed predictions of bond energies and other properties.
The electron-tunneling model
A common theme uniting all of the models we have discussed is that bonding depends on the fall in potential energy that occurs when opposite charges are brought together. In the case of covalent bonds, the shared electron pair acts as a kind of “electron glue” between the joined nuclei. In 1962, however, it was shown that this assumption is not strictly correct, and that instead of being concentrated in the space between the nuclei, the electron orbitals become even more concentrated around the bonded nuclei. At the same time however, they are free to “move” between the two nuclei by a process known as tunneling .
This refers to a well-known quantum mechanical effect that allows electrons (or other particles small enough to exhibit wavelike properties) to pass (“tunnel”) through a barrier separating two closely adjacent regions of low potential energy. One result of this is that the effective volume of space available to the electron is increased, and according to the uncertainty principle this will reduce the kinetic energy of the electron.
According to this model, the bonding electrons act as a kind of fluid that concentrates in the region of each nucleus (lowering the potential energy) and at the same time is able to flow freely between them (reducing the kinetic energy). Despite its conceptual simplicity and full acknowledgment of the laws of quantum mechanics, this model is less well known than it deserves to be and is unfortunately absent from most textbooks.

(Source: “9.3: Models of Chemical Bonding,” Chem1 by Stephen Lower, CC BY 3.0.)
9.4: Polar Covalence
Make sure you thoroughly understand the following essential ideas:
- Define electronegativity, and describe the general way in which electronegativities depend on the location of an element in the periodic table.
- Explain how electronegativity values relate to the polar nature of a chemical bond.
- What two factors determine the magnitude of an electric dipole moment ?
- Sketch structural diagrams that illustrate how the presence of polar bonds in a molecule can either increase or diminish the magnitude of the molecular dipole moment.
- Calculate the formal charge of each atom in a structure, and comment on its significance in relation to the polarity of that structure.
- Select the more likely of two or more electron-dot structures for a given species.
- Explain the difference between formal charge and oxidation number.
- Describe the limiting cases of covalent and ionic bonds. Explain what both categories of bonds have in common.
The electrons constituting a chemical bond are simultaneously attracted by the electrostatic fields of the nuclei of the two bonded atoms. In a homonuclear molecule such as O 2 the bonding electrons will be shared equally by the two atoms. In general, however, differences in the sizes and nuclear charges of the atoms will cause one of them to exert a greater attraction on the bonding pair, causing the electron cloud to be displaced toward the more strongly-attracting atom.
Electronegativity
The electronegativity of an atom denotes its relative electron-attracting power in a chemical bond. It is important to understand that electronegativity is not a measurable property of an atom in the sense that ionization energies and electron affinities are, although it can be correlated with both of these properties. The actual electron-attracting power of an atom depends in part on its chemical environment (that is, on what other atoms are bonded to it), so tabulated electronegativities should be regarded as no more than predictors of the behavior of electrons, especially in more complicated molecules. There are several ways of computing electronegativities, which are expressed on an arbitrary scale. The concept of electronegativity was introduced by Linus Pauling and his 0-4 scale continues to be the one most widely used.
The 0-4 electronegativity scale of Pauling is the best known of several arbitrary scales of this kind. Electronegativity values are not directly observable, but are derived from measurable atomic properties such as ionization energy and electron affinity. The place of any atom on this scale provides a good indication of its ability to compete with another atom in attracting a shared electron pair, but the presence of bonds to other atoms, and of multiple or nonbonding electron pairs, may make predictions about the nature of a given bond less reliable.
An atom that has a small electronegativity is said to be electropositive . As the diagram shows, the metallic elements are generally electropositive. The position of hydrogen in this regard is worth noting; although physically a nonmetal, much of its chemistry is metal-like.
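Pauling also proposed a rough empirical rule linking the electronegativity difference Δχ of two bonded atoms to the percent ionic character of the bond, approximately 100 × (1 − e^(−Δχ²/4)). The sketch below uses that rule; the electronegativity values in the comments are common Pauling-scale figures and should be checked against a current table.

```python
import math

def percent_ionic(chi_a, chi_b):
    """Pauling's rough estimate of percent ionic character
    from the electronegativity difference of the bonded atoms."""
    d = abs(chi_a - chi_b)
    return 100.0 * (1.0 - math.exp(-d * d / 4.0))

# Approximate Pauling-scale values: H 2.20, F 3.98, Cl 3.16, Na 0.93
print(round(percent_ionic(2.20, 3.98), 1))   # H-F: strongly polar
print(round(percent_ionic(0.93, 3.16), 1))   # Na-Cl: largely "ionic"
print(round(percent_ionic(3.16, 3.16), 1))   # Cl-Cl: 0.0, purely covalent
```

The identical-atom case comes out exactly zero, matching the statement above that a purely covalent bond requires identical electronegativities.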
Dipole moments
When non-identical atoms are joined in a covalent bond, the electron pair will be attracted more strongly to the atom that has the higher electronegativity. As a consequence, the electrons will not be shared equally; the center of the negative charges in the molecule will be displaced from the center of positive charge. Such bonds are said to be polar and to possess partial ionic character , and they may confer a polar nature on the molecule as a whole.
A polar molecule acts as an electric dipole which can interact with electric fields that are created artificially or that arise from nearby ions or polar molecules. Dipoles are conventionally represented as arrows pointing in the direction of the negative end. The magnitude of interaction with the electric field is given by the permanent electric dipole moment of the molecule. The dipole moment corresponding to an individual bond (or to a diatomic molecule) is given by the product of the quantity of charge displaced q and the bond length r :
\[μ = q \times r\]
In SI units, q is expressed in coulombs and r in meters, so μ has the dimensions of \(C \cdot m\). If one entire electron charge is displaced by 100 pm (a typical bond length), then
\[μ = (1.6022 \times 10^{–19}\; C) \times (10^{–10}\; m) = 1.6 \times 10^{–29}\; C \cdot m = 4.8 \;D\]
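The same arithmetic in code form; the constants are the standard elementary charge and the debye conversion factor (1 D ≈ 3.3356 × 10⁻³⁰ C·m).

```python
# Reproduce the calculation above: one electron charge displaced by 100 pm.
E_CHARGE = 1.6022e-19      # elementary charge, C
DEBYE = 3.3356e-30         # C*m per debye

q = E_CHARGE
r = 100e-12                # 100 pm expressed in meters
mu = q * r                 # dipole moment in C*m
print(mu)                  # ~1.6e-29 C*m
print(round(mu / DEBYE, 2))  # ~4.8 D, as in the text
```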
The quantity denoted by D, the Debye unit, is still commonly used to express dipole moments. It was named after Peter Debye (1884-1966), the Dutch-American physicist who pioneered the study of dipole moments and of electrical interactions between particles; he won the Nobel Prize for Chemistry in 1936.
How dipole moments are measured
When a solution of polar molecules is placed between two oppositely-charged plates, they will tend to align themselves along the direction of the field. This process consumes energy which is returned to the electrical circuit when the field is switched off, an effect known as electrical capacitance .
Measurement of the capacitance of a gas or solution is easy to carry out and serves as a means of determining the magnitude of the dipole moment of a substance.
Estimate the percent ionic character of the bond in the hydrogen fluoride molecule from its measured dipole moment and bond length.
Solution
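The figure containing the experimental data has not been reproduced here, so this sketch uses commonly quoted literature values for HF; treat μ ≈ 1.82 D and r ≈ 92 pm as assumptions to be checked. The percent ionic character is the ratio of the observed dipole moment to the moment a full electron charge displaced through the bond length would produce.

```python
# Literature values for HF (assumed, not taken from the missing figure):
E_CHARGE = 1.6022e-19          # elementary charge, C
DEBYE = 3.3356e-30             # C*m per debye

mu_observed = 1.82 * DEBYE     # measured dipole moment of HF, ~1.82 D
r = 92e-12                     # H-F bond length, ~92 pm

mu_if_fully_ionic = E_CHARGE * r            # full charge e displaced by r
pct_ionic = 100 * mu_observed / mu_if_fully_ionic
print(round(pct_ionic, 1))                  # roughly 41% ionic character
```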
Dipole moments as vector sums
In molecules containing more than one polar bond, the molecular dipole moment is just the vector combination of what can be regarded as individual "bond dipole moments". Being vectors, these can reinforce or cancel each other, depending on the geometry of the molecule; it is therefore not uncommon for molecules containing polar bonds to be nonpolar overall, as in the example of carbon dioxide:
The zero dipole moment of CO 2 is one of the simplest experimental methods of demonstrating the linear shape of this molecule.
H 2 O, by contrast, has a very large dipole moment which results from the two polar H–O components oriented at an angle of 104.5°. The nonbonding pairs on oxygen are a contributing factor to the high polarity of the water molecule. In molecules containing nonbonding electrons or multiple bonds, the electronegativity difference may not correctly predict the bond polarity. A good example of this is carbon monoxide, in which the partial negative charge resides on the carbon, as predicted by its negative formal charge (below).
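The vector-sum argument for two equal bond dipoles reduces to simple trigonometry: the resultant magnitude is 2μ·cos(θ/2), where θ is the angle between the bonds. The bond-dipole magnitude below is a placeholder; only the geometry matters here.

```python
import math

def resultant_dipole(mu_bond, angle_deg):
    """Magnitude of the vector sum of two equal bond dipoles
    separated by the given angle (planar sketch)."""
    return 2.0 * mu_bond * math.cos(math.radians(angle_deg) / 2.0)

# CO2: two equal C=O bond dipoles at 180 degrees cancel exactly.
print(round(resultant_dipole(1.0, 180.0), 6))   # 0.0

# H2O: two O-H bond dipoles at 104.5 degrees reinforce strongly.
print(round(resultant_dipole(1.0, 104.5), 3))   # nonzero resultant
```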
Electron densities in a molecule (and the dipole moments that unbalanced electron distributions can produce) are now easily calculated by molecular modeling programs. In this example, for methanol CH 3 OH, the blue area centered on hydrogen represents a positive charge, the red area centered where we expect the lone pairs to be located represents a negative charge, while the light green around methyl is approximately neutral. The manner in which the individual bonds contribute to the dipole moment of the molecule is nicely illustrated by the series of chloromethanes:
(Bear in mind that all four positions around the carbon atom are equivalent in this tetrahedral molecule, so there are only four chloromethanes.)
Formal charge and oxidation number
Although the total number of valence electrons in a molecule is easily calculated, there is not always a simple and unambiguous way of determining how many reside in a particular bond or as non-bonding pairs on a particular atom. For example, one can write valid Lewis octet structures for carbon monoxide showing either a double or triple bond between the two atoms, depending on how many nonbonding pairs are placed on each: C::O::: and :C:::O: (see Problem Example 3 below). The choice between structures such as these is usually easy to make on the principle that the more electronegative atom tends to surround itself with the greater number of electrons. In cases where the distinction between competing structures is not all that clear, an arbitrarily-calculated quantity known as the formal charge can often serve as a guide.
The formal charge on an atom is the electric charge it would have if all bonding electrons were shared equally with its bonded neighbors.
The formal charge on an atom is calculated by the following formula:

\[FC = (\text{core charge}) - (\text{number of nonbonding electrons}) - (\text{number of bonds})\]

in which the core charge is the electric charge the atom would have if all its valence electrons were removed, and each bond counts as the atom's half-share of one shared electron pair. In simple cases, the formal charge can be worked out visually directly from the Lewis structure, as is illustrated farther on.
Find the formal charges of all the atoms in the sulfuric acid structure shown here.
Solution
The atoms here are hydrogen, sulfur, and double- and single-bonded oxygens. Remember that a double bond is made up of two electron-pairs.
- hydrogen: FC = 1 – 0 – 1 = 0
- sulfur: FC = 6 – 0 – 6 = 0
- hydroxyl oxygen: FC = 6 – 4 – 2 = 0
- double-bonded oxygen: FC = 6 – 4 – 2 = 0
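The bookkeeping above is mechanical enough to put into a small helper; the three arguments follow the formula given earlier (core charge, nonbonding electrons, number of bonds).

```python
def formal_charge(core_charge, nonbonding_electrons, bonds):
    """FC = core charge - nonbonding electrons - number of bonds,
    where one bond is the atom's half-share of a shared pair."""
    return core_charge - nonbonding_electrons - bonds

# The four atom types in the H2SO4 structure above:
print(formal_charge(1, 0, 1))   # hydrogen: 0
print(formal_charge(6, 0, 6))   # sulfur (two single + two double bonds): 0
print(formal_charge(6, 4, 2))   # hydroxyl oxygen: 0
print(formal_charge(6, 4, 2))   # double-bonded oxygen: 0
```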
Using formal charge to select the best Lewis structure
The general rule for choosing between alternative structures is that the one involving the smallest formal charges is most favored, although the following example shows that this is not always the case.
Write out some structures for carbon monoxide CO, both those that do and do not obey the octet rule, and select the "best" on the basis of the formal charges.
Solution
Structure that obeys the octet rule:
a) For :C:::O: Carbon: 4 – 2 – 3 = –1 ; Oxygen: 6 – 2 – 3 = +1
Structures that do not obey the octet rule (for carbon):
b) For :C:O::: Carbon: 4 – 2 – 1 = +1 ; Oxygen: 6 – 6 – 1 = –1
c) For :C::O:: Carbon: 4 – 2 – 2 = 0 ; Oxygen: 6 – 4 – 2 = 0
Comment : All three structures are acceptable (because the formal charges add up to zero for this neutral molecule) and contribute to the overall structure of carbon monoxide, although not equally. Both experiment and more advanced models show that the triple-bonded form (a) predominates. Formal charge, which is no more than a bookkeeping scheme for electrons, is by itself unable to predict this fact.
In a species such as the thiocyanate ion \(SCN^-\) in which two structures having the same minimal formal charges can be written, we would expect the one in which the negative charge is on the more electronegative atom to predominate.
The electrons in the structures of the top row are the valence electrons for each atom; an additional electron (purple) completes the nitrogen octet in this negative ion. The electrons in the bottom row are divided equally between the bonded atoms; the difference between these numbers and those above gives the formal charges.
Formal charge can also help answer the question “where is the charge located?” that is frequently asked about polyatomic ions. Thus by writing out the Lewis structure for the ammonium ion NH 4 + , you should be able to convince yourself that the nitrogen atom has a formal charge of +1 and each of the hydrogens has 0, so we can say that the positive charge is localized on the central atom.
Oxidation number
This is another arbitrary way of characterizing atoms in molecules. In contrast to formal charge, in which the electrons in a bond are assumed to be shared equally, oxidation number is the electric charge an atom would have if the bonding electrons were assigned exclusively to the more electronegative atom. Oxidation number serves mainly as a tool for keeping track of electrons in reactions in which they are exchanged between reactants, and for characterizing the “combining power” of an atom in a molecule or ion.
The following diagram compares the way electrons are assigned to atoms in calculating formal charge and oxidation number in carbon monoxide.
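The comparison in the diagram can also be made numerically for triple-bonded CO. The electron counts follow the :C:::O: structure discussed above (six bonding electrons, one lone pair on each atom); the function names are illustrative only.

```python
def split_equally(valence, lone, shared):
    """Formal charge: bonding electrons divided equally between the atoms."""
    return valence - lone - shared // 2

def all_to_more_electronegative(valence, lone, shared, wins):
    """Oxidation number: bonding electrons all assigned to the
    more electronegative atom (wins=True for that atom)."""
    return valence - lone - (shared if wins else 0)

# Triple-bonded CO: C has valence 4, O has 6; oxygen is more electronegative.
print(split_equally(4, 2, 6), split_equally(6, 2, 6))   # formal charges: -1 +1
print(all_to_more_electronegative(4, 2, 6, False),
      all_to_more_electronegative(6, 2, 6, True))       # oxidation numbers: +2 -2
```

The two schemes disagree even in sign for carbon, which underlines that both are bookkeeping conventions rather than physical charges.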
Ionic compounds
The shared-electron pair model introduced by G.N. Lewis showed how chemical bonds could form in the absence of electrostatic attraction between oppositely-charged ions. As such, it has become the most popular and generally useful model of bonding in all substances other than metals. A chemical bond occurs when electrons are simultaneously attracted to two nuclei, thus acting to bind them together in an energetically-stable arrangement. The covalent bond is formed when two atoms are able to share a pair of electrons:
In general, however, different kinds of atoms exert different degrees of attraction on their electrons, so in most cases the sharing will not be equal. One can even imagine an extreme case in which the sharing is so unequal that the resulting "molecule" is simply a pair of ions:
The resulting substance is sometimes said to contain an ionic bond . Indeed, the properties of a number of compounds can be adequately explained using the ionic model. But does this mean that there are really two kinds of chemical bonds, ionic and covalent? According to the ionic electrostatic model, solids such as NaCl consist of positive and negative ions arranged in a crystal lattice. Each ion is attracted to neighboring ions of opposite charge, and is repelled by ions of like charge; this combination of attractions and repulsions, acting in all directions, causes the ion to be tightly fixed in its own location in the crystal lattice.
Since electrostatic forces are nondirectional, the structure of an ionic solid is determined purely by geometry: two kinds of ions, each with its own radius, will fall into whatever repeating pattern will achieve the lowest possible potential energy. Surprisingly, there are only a small number of possible structures; one of the most common of these, the simple cubic lattice of NaCl, is shown here.
Is there such a thing as an ionic bond?
When two elements form an ionic compound, is an electron really lost by one atom and transferred to the other one? In order to deal with this question, consider the data on the ionic solid LiF. The average radius of the neutral Li atom is about 2.52Å. Now if this Li atom reacts with an atom of F to form LiF, what is the average distance between the Li nucleus and the electron it has “lost” to the fluorine atom? The answer is 1.56Å; the electron is now closer to the lithium nucleus than it was in neutral lithium!
So the answer to the above question is both yes and no: yes, the electron that was in the 2 s orbital of Li is now within the grasp of a fluorine 2 p orbital, but no, the electron is now even closer to the Li nucleus than before, so how can it be “lost”? The one thing that is inarguably true about LiF is that there are more electrons closer to positive nuclei than there are in the separated Li and F atoms. But this is just the rule we stated at the beginning of this unit: chemical bonds form when electrons can be simultaneously near two or more nuclei.
It is obvious that the electron-pair bond brings about this situation, and this is the reason for the stability of the covalent bond. What is not so obvious (until you look at the numbers such as are quoted for LiF above) is that the “ionic” bond results in the same condition; even in the most highly ionic compounds, both electrons are close to both nuclei, and the resulting mutual attractions bind the nuclei together. This being the case, is there really any fundamental difference between the ionic and covalent bond?
The answer, according to modern chemical thinking is probably “no”; in fact, there is some question as to whether it is realistic to consider that these solids consist of “ions” in the usual sense. The preferred picture that seems to be emerging is one in which the electron orbitals of adjacent atom pairs are simply skewed so as to place more electron density around the “negative” element than around the “positive” one.
This being said, it must be reiterated that the ionic model of bonding is a useful one for many purposes, and there is nothing wrong with using the term “ionic bond” to describe the interactions between the atoms in the very small class of “ionic solids” such as LiF and NaCl.
"Covalent, ionic or metallic" is an oversimplification!
If there is no such thing as a “completely ionic” bond, can we have one that is completely covalent? The answer is yes, if the two nuclei have equal electron attracting powers . This situation is guaranteed to be the case with homonuclear diatomic molecules-- molecules consisting of two identical atoms. Thus in Cl 2 , O 2 , and H 2 , electron sharing between the two identical atoms must be exactly even; in such molecules, the center of positive charge corresponds exactly to the center of negative charge: halfway between the two nuclei.
Categorizing all chemical bonds as either ionic, covalent, or metallic is a gross oversimplification; as this diagram shows, there are examples of substances that exhibit varying degrees of all three bonding characteristics.
Dative (coordinate) covalent bonds
In most covalent bonds, we think of the electron pair as having a dual parentage, one electron being contributed by each atom. There are, however, many cases in which both electrons come from only one atom. This can happen if the donor atom has a non-bonding pair of electrons and the acceptor atom has a completely empty orbital that can accommodate them. These are called dative or coordinate covalent bonds.
This is the case, for example, with boron trifluoride and ammonia. In BF 3 , one of the 2 p orbitals is unoccupied and can accommodate the lone pair on the nitrogen atom of ammonia. The electron acceptor, BF 3 , acts as a Lewis acid here, and NH 3 is the Lewis base . Bonds of this type tend to be rather weak (usually 50–200 kJ/mol); in many cases the two joined units retain sufficient individuality to justify writing the formula as a molecular complex or adduct .

(Source: “9.4: Polar Covalence,” Chem1 by Stephen Lower, CC BY 3.0.)
9.5: Molecular Geometry
Make sure you thoroughly understand the following essential ideas:
- Describe the manner in which repulsion between electron-pairs affects the orientation of the regions that contain them.
- Define coordination geometry, and describe the particular geometry associated with electron-pair repulsion between two, three, four, five, or six identical bonding regions.
- Explain the distinction between coordination geometry and molecular geometry, and provide an illustration based on the structure of water or ammonia.
- Draw a diagram of a tetrahedral or octahedral molecule.
The Lewis electron-dot structures you have learned to draw have no geometrical significance other than depicting the order in which the various atoms are connected to one another. Nevertheless, a slight extension of the simple shared-electron pair concept is capable of rationalizing and predicting the geometry of the bonds around a given atom in a wide variety of situations.
Electron-pair repulsion
The valence shell electron pair repulsion ( VSEPR ) model that we describe here focuses on the bonding and nonbonding electron pairs present in the outermost (“valence”) shell of an atom that connects with two or more other atoms. Like all electrons, these occupy regions of space which we can visualize as electron clouds — regions of negative electric charge, also known as orbitals — whose precise character can be left to more detailed theories.
The covalent model of chemical bonding assumes that the electron pairs responsible for bonding are concentrated into the region of space between the bonded atoms. The fundamental idea of VSEPR theory is that these regions of negative electric charge will repel each other, causing them (and thus the chemical bonds that they form) to stay as far apart as possible. Thus the two electron clouds contained in a simple triatomic molecule AX 2 will extend out in opposite directions; an angular separation of 180° places the two bonding orbitals as far away from each other as they can get. We therefore expect the two chemical bonds to extend in opposite directions, producing a linear molecule.
If the central atom also contains one or more pairs of nonbonding electrons, these additional regions of negative charge will behave very much like those associated with the bonded atoms. The orbitals containing the various bonding and nonbonding pairs in the valence shell will extend out from the central atom in directions that minimize their mutual repulsions. If the central atom possesses partially occupied d -orbitals, it may be able to accommodate five or six electron pairs, forming what is sometimes called an “ expanded octet ”.
Digonal and trigonal coordination
Linear molecules
As we stated above, a simple triatomic molecule of the type \(AX_2\) has its two bonding orbitals 180° apart, producing a molecule that we describe as having linear geometry. Examples of triatomic molecules for which VSEPR theory predicts a linear shape are BeCl 2 (which, you will notice, doesn't possess enough electrons to conform to the octet rule) and CO 2 . If you write out the electron dot formula for carbon dioxide, you will see that the C-O bonds are double bonds. This makes no difference to VSEPR theory; the central carbon atom is still joined to two other atoms, and the electron clouds that connect the two oxygen atoms are 180° apart.
Trigonal molecules
In an AX 3 molecule such as BF 3 , there are three regions of electron density extending out from the central atom. The repulsion between these will be at a minimum when the angle between any two is (360° ÷ 3) = 120°. This requires that all four atoms be in the same plane; the resulting shape is called trigonal planar , or simply trigonal .
Tetrahedral coordination
Methane, CH 4 , contains a carbon atom bonded to four hydrogens. What bond angle would lead to the greatest possible separation between the electron clouds associated with these bonds? In analogy with the preceding two cases, where the bond angles were 360°/2=180° and 360°/3=120°, you might guess 360°/4=90°; if so, you would be wrong. The latter calculation would be correct if all the atoms were constrained to be in the same plane (we will see cases where this happens later), but here there is no such restriction. Consequently, the four equivalent bonds will point in four geometrically equivalent directions in three dimensions corresponding to the four corners of a tetrahedron centered on the carbon atom. The angle between any two bonds will be 109.5°. This is called tetrahedral coordination .
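The 109.5° figure comes from solid geometry rather than from dividing up 360°: for four equivalent directions in three dimensions, the angle between any two is arccos(−1/3). A quick check:

```python
import math

# Tetrahedral bond angle: arccos(-1/3), not 360/4 = 90 degrees.
tetrahedral = math.degrees(math.acos(-1.0 / 3.0))
print(round(tetrahedral, 1))   # 109.5

# Compare the planar cases treated earlier in this section:
print(360 / 2)   # linear AX2: 180 degrees
print(360 / 3)   # trigonal planar AX3: 120 degrees
```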
This is the most important coordination geometry in Chemistry: it is imperative that you be able to sketch at least a crude perspective view of a tetrahedral molecule.
It is interesting to note that the tetrahedral coordination of carbon in most of its organic compounds was worked out in the nineteenth century on purely geometrical grounds and chemical evidence, long before direct methods of determining molecular shapes were developed. For example, it was noted that there is only one dichloromethane, CH 2 Cl 2 .
If the coordination around the carbon were square, then there would have to be two isomers of CH 2 Cl 2 , as shown in the pair of structures here. The distances between the two chlorine atoms would be different, giving rise to differences in physical properties that would allow the two isomers to be distinguished and separated.
The existence of only one kind of CH 2 Cl 2 molecule means that all four positions surrounding the carbon atom are geometrically equivalent, which requires a tetrahedral coordination geometry. If you study the tetrahedral figure closely, you may be able to convince yourself that it represents the connectivity shown on both of the "square" structures at the top. A three-dimensional ball-and-stick mechanical model would illustrate this very clearly.
Tetrahedrally-coordinated carbon chains
Carbon chains such as those in the alkanes can be regarded as a series of tetrahedra joined end-to-end.
Similar alkane chains having the general formula H 3 C–(CH 2 ) n –CH 3 (or C n H 2 n +2 ) can be built up; a view of pentane , C 5 H 12 , is shown below.
Notice that these "straight chain hydrocarbons" (as they are often known) have a carbon "backbone" structure that is not really straight, as is illustrated by the zig-zag figure that is frequently used to denote hydrocarbon structures.
Coordination geometry and molecular geometry
Coordination number refers to the number of electron pairs that surround a given atom; we often refer to this atom as the central atom even if this atom is not really located at the geometrical center of the molecule. If all of the electron pairs surrounding the central atom are shared with neighboring atoms, then the coordination geometry is the same as the molecular geometry . The application of VSEPR theory then reduces to the simple problem of naming (and visualizing) the geometric shapes associated with various numbers of points surrounding a central point (the central atom) at the greatest possible angles. Both classes of geometry are named after the shapes of the imaginary geometric figures (mostly regular solid polygons) that would be centered on the central atom and would have an electron pair at each vertex.
If one or more of the electron pairs surrounding the central atom is not shared with a neighboring atom (that is, if it is a lone pair), then the molecular geometry is simpler than the coordination geometry, and it can be worked out by inspecting a sketch of the coordination geometry figure.
Tetrahedral coordination with lone pairs
In the examples we have discussed so far, the shape of the molecule is defined by the coordination geometry; thus the carbon in methane is tetrahedrally coordinated, and there is a hydrogen at each corner of the tetrahedron, so the molecular shape is also tetrahedral.
The AXE Method
It is common practice to represent bonding patterns by "generic" formulas such as \(AX_4\), \(AX_2E_2\), etc., in which "X" stands for bonding pairs and "E" denotes lone pairs. This convention is known as the "AXE Method."
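The AXE classification lends itself to a simple lookup table. The sketch below (the names `VSEPR_SHAPES` and `molecular_shape` are ours, not part of the text) maps bonding-pair and lone-pair counts to the molecular shapes discussed in this chapter:

```python
# Molecular shapes predicted by VSEPR for common AX_nE_m bonding patterns.
# Keys are (bonding pairs X, lone pairs E); shape names follow this chapter.
VSEPR_SHAPES = {
    (2, 0): "linear",
    (3, 0): "trigonal planar",
    (2, 1): "bent",
    (4, 0): "tetrahedral",
    (3, 1): "trigonal pyramidal",
    (2, 2): "bent",
    (5, 0): "trigonal bipyramidal",
    (4, 1): "see-saw",
    (6, 0): "octahedral",
    (5, 1): "square pyramidal",
    (4, 2): "square planar",
}

def molecular_shape(x, e):
    """Return the molecular shape for x bonding pairs and e lone pairs."""
    return VSEPR_SHAPES[(x, e)]

print(molecular_shape(4, 0))  # methane CH4  -> tetrahedral
print(molecular_shape(2, 2))  # water H2O    -> bent
print(molecular_shape(3, 1))  # ammonia NH3  -> trigonal pyramidal
```

Note that the same molecular shape ("bent") arises from two different coordination geometries, AX2E (trigonal) and AX2E2 (tetrahedral), which is why the table is keyed on both counts.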
The bonding geometry will not be tetrahedral when the valence shell of the central atom contains nonbonding electrons, however. The reason is that the nonbonding electrons are also in orbitals that occupy space and repel the other orbitals. This means that in figuring the coordination number around the central atom, we must count both the bonded atoms and the nonbonding pairs.
The water molecule: \(AX_2E_2\)
In the water molecule, the central atom is O, and the Lewis electron dot formula predicts that there will be two pairs of nonbonding electrons. The oxygen atom will therefore be tetrahedrally coordinated, meaning that it sits at the center of the tetrahedron as shown below.
Two of the coordination positions are occupied by the shared electron-pairs that constitute the O–H bonds, and the other two by the non-bonding pairs. Thus although the oxygen atom is tetrahedrally coordinated, the bonding geometry (shape) of the H 2 O molecule is described as bent .
There is an important difference between bonding and non-bonding electron orbitals. Because a nonbonding orbital has no atomic nucleus at its far end to draw the electron cloud toward it, the charge in such an orbital will be concentrated closer to the central atom. As a consequence, nonbonding orbitals exert more repulsion on other orbitals than do bonding orbitals. Thus in H 2 O, the two nonbonding orbitals push the bonding orbitals closer together, making the H–O–H angle 104.5° instead of the tetrahedral angle of 109.5°.
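The ideal tetrahedral angle of 109.5° quoted here is not arbitrary: vectors drawn from the center of a regular tetrahedron to its vertices meet at an angle whose cosine is exactly −1/3. A quick numerical check (our own illustration, using alternate corners of a cube as the tetrahedron's vertices):

```python
import math

# Four vertices of a regular tetrahedron, taken as alternate corners of a
# cube centered on the origin (the "central atom").
vertices = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def angle_deg(u, v):
    """Angle in degrees between two vectors drawn from the origin."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / norm))

theta = angle_deg(vertices[0], vertices[1])
print(round(theta, 1))  # 109.5 -- since cos(theta) = -1/3
```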
Ammonia: \(AX_3E\)
The electron-dot structure of NH 3 places one pair of nonbonding electrons in the valence shell of the nitrogen atom. This means that there are three bonded atoms and one lone pair, for a coordination number of four around the nitrogen, the same as occurs in H 2 O. We can therefore predict that the three hydrogen atoms will lie at the corners of a tetrahedron centered on the nitrogen atom. The lone pair orbital will point toward the fourth corner of the tetrahedron, but since that position will be vacant, the NH 3 molecule itself cannot be tetrahedral. Instead, it assumes a pyramidal shape. More precisely, the shape is that of a trigonal pyramid (i.e., a pyramid having a triangular base). The hydrogen atoms are all in the same plane, with the nitrogen above (or below, or to the side; molecules of course don’t know anything about “above” or “below”!) The fatter orbital containing the non-bonding electrons pushes the bonding orbitals together slightly, making the H–N–H bond angles about 107°.
Computer-generated image of the NH 3 molecule showing the electrostatic potential (red = +, blue = –).
Central atoms with five bonds
Compounds of the type AX 5 are formed by some of the elements in Group 15 of the periodic table; PCl 5 and AsF 5 are examples.
In what directions can five electron pairs arrange themselves in space so as to minimize their mutual repulsions? In the cases of coordination numbers 2, 3, 4, and 6, we could imagine that the electron pairs distributed themselves as far apart as possible on the surface of a sphere; for the two higher numbers, the resulting shapes correspond to the regular polyhedron having the same number of vertices. The problem with coordination number 5 is that there is no such thing as a regular polyhedron with five vertices.
Regular Polyhedra
In 1758, the great mathematician Euler proved that there are only five regular convex polyhedra, known as the Platonic solids: tetrahedron (4 triangular faces), octahedron (8 triangular faces), icosahedron (20 triangular faces), cube (6 square faces), and dodecahedron (12 pentagonal faces). Chemical examples of all are known; the first icosahedral molecule, \(LaC_{60}\) (in which the La atom has 20 nearest C neighbors) was prepared in 1986.
Besides the five regular solids, there can be 15 semi-regular isogonal solids in which the faces have different shapes, but the vertex angles are all the same. These geometrical principles are quite important in modern structural chemistry.
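All five Platonic solids obey Euler's polyhedron formula V − E + F = 2 relating vertices, edges, and faces of any convex polyhedron. A small verification (the vertex/edge/face counts are standard values, added here for illustration):

```python
# (vertices, edges, faces) for the five Platonic solids.
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (v, e, f) in platonic.items():
    # Euler's formula V - E + F = 2 holds for every convex polyhedron.
    assert v - e + f == 2, name
    print(f"{name}: V - E + F = {v} - {e} + {f} = 2")
```

The octahedron's 6 vertices are what make it the natural coordination figure for six electron pairs, just as the tetrahedron's 4 vertices serve four pairs.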
The shape of PCl 5 and similar molecules is a trigonal bipyramid . This consists simply of two triangular-base pyramids joined base-to-base. Three of the chlorine atoms are in the plane of the central phosphorus atom ( equatorial positions), while the other two atoms are above and below this plane ( axial positions). Equatorial and axial atoms have different geometrical relationships to their neighbors, and thus differ slightly in their chemical behavior.
In 5-coordinated molecules containing lone pairs, these non-bonding orbitals (which, you will recall, are concentrated closer to the central atom and thus exert greater repulsion on other orbitals) will preferentially reside in the equatorial plane. This places them at 90° angles with respect to no more than two axially-oriented bonding orbitals.
Using this reasoning, we can predict that an AX 4 E molecule (that is, a molecule in which the central atom A is coordinated to four other atoms “X” and to one nonbonding electron pair) such as SF 4 will have a “see-saw” shape; substitution of more nonbonding pairs for bonded atoms reduces the triangular bipyramid coordination to even simpler molecular shapes, as shown below.
Octahedral coordination
Just as four electron pairs experience the minimum repulsion when they are directed toward the corners of a tetrahedron, six electron pairs will try to point toward the corners of an octahedron . An octahedron is not as complex a shape as its name might imply; it is simply two square-based pyramids joined base to base. You should be able to sketch this shape as well as that of the tetrahedron.
The shaded plane shown in this octahedrally-coordinated molecule is only one of three equivalent planes defined by a four-fold symmetry axis. All the ligands are geometrically equivalent; there are no separate axial and equatorial positions in an AX 6 molecule.
At first, you might think that a coordination number of six is highly unusual; it certainly violates the octet rule, and there are only a few molecules (SF 6 is one) where the central atom is hexavalent. It turns out, however, that this is one of the most commonly encountered coordination numbers in inorganic chemistry. There are two main reasons for this:
- Many transition metal ions form coordinate covalent bonds with lone-pair electron donor atoms such as N (in NH 3 ) and O (in H 2 O). Since transition elements can have an outer configuration of d 10 s 2 , up to six electron pairs can be accommodated around the central atom. A coordination number of 6 is therefore quite common in transition metal hydrates, such as Fe(H 2 O) 6 3+ .
- Although the central atom of most molecules is bonded to fewer than six other atoms, there is often a sufficient number of lone pair electrons to bring the total number of electron pairs to six.
Octahedral coordination with lone pairs
There are well known examples of 6-coordinate central atoms with 1, 2, and 3 lone pairs. Thus all three of the molecules whose shapes are depicted below possess octahedral coordination around the central atom. Note also that the orientation of the shaded planes shown in the two rightmost images is arbitrary; since all six vertices of an octahedron are identical, the planes could just as well be drawn in any of the three possible vertical orientations.
Summary of VSEPR theory
The VSEPR model is an extraordinarily powerful one, considering its great simplicity. Its application to predicting molecular structures can be summarized as follows:
- 1. Electron pairs surrounding a central atom repel each other; this repulsion will be minimized if the orbitals containing these electron pairs point as far away from each other as possible.
- 2. The coordination geometry around the central atom corresponds to the polyhedron whose number of vertices is equal to the number of surrounding electron pairs (coordination number). Except for the special case of 5, and the trivial cases of 2 and 3, the shape will be one of the regular polyhedra.
- 3. If some of the electron pairs are nonbonding, the shape of the molecule will be simpler than that of the coordination polyhedron.
- 4. Orbitals that contain nonbonding electrons are more concentrated near the central atom, and therefore offer more repulsion than bonding pairs to other orbitals.
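For simple octet-rule molecules held together by single bonds, the four rules above reduce to a short calculation: each bond uses one of the central atom's valence electrons, and the remainder pair up as lone pairs. A hedged Python sketch (function and dictionary names are ours; it covers only the tetrahedral-coordination cases treated above):

```python
def predict_shape(valence_electrons, bonded_atoms):
    """Predict the molecular shape of a singly-bonded octet-rule molecule.

    Each single bond uses one of the central atom's valence electrons;
    the electrons left over pair up as lone pairs.
    """
    lone_pairs = (valence_electrons - bonded_atoms) // 2
    shapes = {
        (4, 0): "tetrahedral",
        (3, 1): "trigonal pyramidal",
        (2, 2): "bent",
    }
    return shapes[(bonded_atoms, lone_pairs)]

print(predict_shape(4, 4))  # C in CH4 -> tetrahedral
print(predict_shape(5, 3))  # N in NH3 -> trigonal pyramidal
print(predict_shape(6, 2))  # O in H2O -> bent
```

All three molecules share the same tetrahedral coordination geometry; only the molecular shape differs as bonded atoms are replaced by lone pairs.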
While VSEPR theory is quite good at predicting the general shapes of most molecules, it cannot yield exact details. For example, it does not explain why the bond angle in H 2 O is 104.5°, but that in H 2 S is about 90°. This is not surprising, considering that the emphasis is on electronic repulsions, without regard to the detailed nature of the orbitals containing the electrons, and thus of the bonds themselves.
The Valence Shell Electron Pair Repulsion (VSEPR) theory was developed in the 1950s by Ronald Gillespie of McMaster University (Hamilton, Ontario, Canada) and Ronald Nyholm (University College, London). It is remarkable that what seems to be a logical extension of the 1916 Lewis shared-electron pair model of bonding took so long to be formulated; it was first presented in the authors' classic article Inorganic Stereochemistry published in the 1957 Chemical Society of London Quarterly Reviews (Vol 11, pg 339). Although it post-dates the more complete quantum mechanical models, it is easy to grasp and within a decade had become a staple of every first-year college chemistry course.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/09%3A_Chemical_Bonding_and_Molecular_Structure/9.05%3A_Molecular_Geometry",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "9.5: Molecular Geometry",
"author": "Stephen Lower"
} |
9.6: The Hybrid Orbital Model
- Explain why the sharing of atomic orbitals (as implied in the Lewis model) cannot adequately account for the observed bonding patterns in simple molecules.
- Sketch out a diagram illustrating how the plots of atomic s - and p - orbital wave functions give rise to a pair of hybrid orbitals.
- Draw "orbital box" diagrams showing how combinations of an atomic s orbital and various numbers of p orbitals create sp , sp 2 , and sp 3 hybrid orbitals.
- Show how hybrid orbitals are involved in the molecules methane, water, and ammonia.
As useful and appealing as the concept of the shared-electron pair bond is, it raises a somewhat troubling question that we must sooner or later face: what is the nature of the orbitals in which the shared electrons are contained? Up until now, we have been tacitly assuming that each valence electron occupies the same kind of atomic orbital as it did in the isolated atom. As we shall see below, this assumption very quickly leads us into difficulties.
Atomic orbitals alone do not work for Molecules
Consider how we might explain the bonding in a compound of divalent beryllium, such as beryllium hydride, BeH 2 . The beryllium atom, with only four electrons, has a configuration of 1 s 2 2 s 2 . Note that the two electrons in the 2 s orbital have opposite spins and constitute a stable pair that has no tendency to interact with unpaired electrons on other atoms.
The only way that we can obtain two unpaired electrons for bonding in beryllium is to promote one of the 2 s electrons to the 2 p level. However, the energy required to produce this excited-state atom would be sufficiently great to discourage bond formation. Nevertheless, it is observed that Be does form reasonably stable bonds with other atoms. Moreover, the two bonds in BeH 2 and similar molecules are completely equivalent; this would not be the case if the electrons in the two bonds shared Be orbitals of different types, as in the "excited state" diagram above.
These facts suggest that it is incorrect to assume that the distribution of valence electrons that are shared with other atoms can be described by atomic-type s, p , and d orbitals at all.
Remember that these different orbitals arise in the first place from the interaction of the electron with the single central electrostatic force field associated with the positive nucleus. An outer-shell electron in a bonded atom will be under the influence of a force field emanating from two positive nuclei, so we would expect the orbitals in the bonded atoms to have a somewhat different character from those in free atoms. In fact, as far as valence electrons are concerned, we can throw out the concept of atomic orbital altogether and reassign the electrons to a new set of molecular orbitals that are characteristic of each molecular configuration. This approach is indeed valid, but we will defer a discussion of it until a later unit.
For now, we will look at a less-radical model that starts out with the familiar valence-shell atomic orbitals, and allows them to combine to form hybrid orbitals whose shapes conform quite well to the bonding geometry that we observe in a wide variety of molecules.
What are hybrid orbitals?
First, recall that the electron, being a quantum particle, cannot have a distinct location; the most we can do is define the region of space around the nucleus in which the probability of finding the electron exceeds some arbitrary value, such as 90% or 99%. This region of space is the orbital . Because of the wavelike character of matter, the orbital corresponds to a standing wave pattern in 3-dimensional space which we can often represent more clearly in 2-dimensional cross section. The quantity that is varying (“waving”) is a number denoted by ψ ( psi ) whose value varies from point to point according to the wave function for that particular orbital.
Orbitals of all types are simply mathematical functions that describe particular standing-wave patterns that can be plotted on a graph but have no physical reality of their own. Because of their wavelike nature, two or more orbitals (i.e., two or more functions ψ) can be combined both in-phase and out-of-phase to yield a pair of resultant orbitals which, to be useful, must have squares that describe actual electron distributions in the atom or molecule.
The s,p,d and f orbitals that you are familiar with are the most convenient ones for describing the electron distribution in isolated atoms because assignment of electrons to them according to the usual rules always yields an overall function Ψ 2 that predicts a spherically symmetric electron distribution, consistent with all physical evidence that atoms are in fact spherical. For atoms having more than one electron, however, the s,p,d , f basis set is only one of many possible ways of arriving at the same observed electron distribution. We use it not because it is unique, but because it is the simplest.
In the case of a molecule such as BeH 2 , we know from experimental evidence that the molecule is linear and therefore the electron density surrounding the central atom is no longer spherical, but must be concentrated along two directions 180° apart, and we need to construct a function Ψ 2 having these geometrical properties. There are any number of ways of doing this, but it is convenient is to use a particular set of functions ψ (which we call hybrid orbitals ) that are constructed by combining the atomic s,p,d, and f functions that are already familiar to us.
You should understand that hybridization is not a physical phenomenon ; it is merely a mathematical operation that combines the atomic orbitals we are familiar with in such a way that the new (hybrid) orbitals possess the geometric and other properties that are reasonably consistent with what we observe in a wide range (but certainly not in all) molecules. In other words, hybrid orbitals are abstractions that describe reality fairly well in certain classes of molecules (and fortunately, in much of the very large class of organic substances) and are therefore a useful means of organizing a large body of chemical knowledge... but they are far from infallible.
Hybridization is not a physical phenomenon ; it is merely a mathematical operation that combines the atomic orbitals we are familiar with in such a way that the new (hybrid) orbitals possess the geometric and other properties that are reasonably consistent with what we observe in a wide range (but certainly not in all) molecules.
This approach, which assumes that the orbitals remain more or less localized on one central atom, is the basis of the theory which was developed in the early 1930s, mainly by Linus Pauling .
Linus Pauling (1901-1994) was the most famous American chemist of the 20th century and the author of the classic book The Nature of the Chemical Bond . His early work pioneered the application of X-ray diffraction to determine the structure of complex molecules; he then went on to apply quantum theory to explain these observations and predict the bonding patterns and energies of new molecules. Pauling, who spent most of his career at Cal Tech, won the Nobel Prize for Chemistry in 1954 and the Peace Prize in 1962.
"In December 1930 Pauling had his famous 'breakthrough' where, in a rush of inspiration, he ' stayed up all night, making, writing out, solving the equations, which were so simple that I could solve them in a few minutes '. This flurry of calculations would eventually become the first of Pauling's germinal series of papers on the nature of the chemical bond. ' I just kept getting more and more euphorious as time went by ', Pauling would recall. "
Although the hybrid orbital approach has proven very powerful (especially in organic chemistry), it does have its limitations. For example, it predicts that both H 2 O and H 2 S will be tetrahedrally coordinated bent molecules with bond angles slightly smaller than the tetrahedral angle of 109.5° owing to greater repulsion by the nonbonding pair. This description fits water (104.5°) quite well, but the bond angle in hydrogen sulfide is only 92°, suggesting that atomic p orbitals (which are 90° apart) provide a better description of the electron distribution about the sulfur atom than do sp 3 hybrid orbitals.
The hybrid orbital model is fairly simple to apply and understand, but it is best regarded as one special way of looking at a molecule that can often be misleading. Another viewpoint, called the molecular orbital theory , offers us a complementary perspective that it is important to have if we wish to develop a really thorough understanding of chemical bonding in a wider range of molecules.
Constructing hybrid orbitals
Below: "Constructive" and "destructive" combinations of 2 p and 2 s wave functions (line plots) give rise to the sp hybrid function shown at the right. The solid figures depict the corresponding probability functions ψ 2 .
Hybrid orbitals are constructed by combining the ψ functions for atomic orbitals. Because wave patterns can combine both constructively and destructively, a pair of atomic wave functions such as the s - and p - orbitals shown at the left can combine in two ways, yielding the sp hybrids shown.
From an energy standpoint, we can represent the transition from atomic s - and p -orbitals to an sp hybrid orbital in this way:
Notice here that 1) the total number of occupied orbitals is conserved, and 2) the two sp hybrid orbitals are intermediate in energy between their parent atomic orbitals. In terms of plots of the actual orbital functions ψ we can represent the process as follows:
The probability of finding the electron at any location is given not by ψ , but by ψ 2 , whose form is roughly conveyed by the solid figures in this illustration.
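These in-phase and out-of-phase combinations can be checked numerically. Representing the normalized s and p functions as orthogonal unit vectors, the two sp hybrids (s ± p)/√2 come out normalized and mutually orthogonal, and the orbital count is conserved: two atomic functions in, two hybrid functions out. A sketch (our own illustration of the arithmetic, not from the text):

```python
import math

# Represent the orthonormal atomic functions s and p as basis coefficients.
s = (1.0, 0.0)
p = (0.0, 1.0)

inv_sqrt2 = 1.0 / math.sqrt(2.0)

# Constructive and destructive combinations -> the two sp hybrids.
sp_plus  = tuple(inv_sqrt2 * (a + b) for a, b in zip(s, p))
sp_minus = tuple(inv_sqrt2 * (a - b) for a, b in zip(s, p))

def dot(u, v):
    """Overlap integral in this finite basis."""
    return sum(a * b for a, b in zip(u, v))

print(round(dot(sp_plus, sp_plus), 10))   # 1.0  (normalized)
print(round(dot(sp_minus, sp_minus), 10)) # 1.0  (normalized)
print(round(dot(sp_plus, sp_minus), 10))  # 0.0  (mutually orthogonal)
```

The 1/√2 factor is what keeps the squared function ψ² (an electron probability) properly normalized.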
Hybrids derived from atomic s - and p orbitals
Digonal bonding: sp-hybrid orbitals
Returning to the example of BeH 2 , we can compare the valence orbitals in the free atoms with those in the beryllium hydride molecule as shown here. It is, of course, the overlap between the hydrogen-1 s orbitals and the two lobes of the beryllium sp -hybrid orbitals that constitutes the two Be—H "bonds" in this molecule. Notice that whereas a single p -orbital has lobes on both sides of the atom, a single sp -hybrid has most of its electron density on one side, with a minor and more spherical lobe on the other side. This minor lobe is centered on the central atom (some textbook illustrations don't get this right.)
As far as the shape of the molecule is concerned, the result is exactly the same as predicted by the VSEPR model (although hybrid orbital theory predicts the same result in a more fundamental way.) We can expect any central atom that uses sp -hybridization in bonding to exhibit linear geometry when incorporated into a molecule.
Trigonal (sp 2 ) hybridization
We can now go on to apply the same ideas to some other simple molecules. In boron trifluoride , for example, we start with the boron atom, which has three outer-shell electrons in its normal or ground state, and three fluorine atoms, each with seven outer electrons. As is shown in this configuration diagram, one of the three boron electrons is unpaired in the ground state. In order to explain the trivalent bonding of boron, we postulate that the atomic s - and p - orbitals in the outer shell of boron mix to form three equivalent hybrid orbitals. These particular orbitals are called sp 2 hybrids , meaning that this set of orbitals is derived from one s- orbital and two p-orbitals of the free atom.
This illustration shows how an s -orbital mixes with two p orbitals to form a set of three sp 2 hybrid orbitals. Notice again how the three atomic orbitals yield the same number of hybrid orbitals.
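The same bookkeeping works for sp 2 : each hybrid carries one-third s character and two-thirds p character, and the three hybrid directions come out 120° apart in a plane. A numerical sketch (the coefficients are the standard textbook ones, supplied here for illustration):

```python
import math

# sp2 hybrid: h_i = sqrt(1/3) s + sqrt(2/3) (cos(theta_i) px + sin(theta_i) py)
# with theta = 0, 120, 240 degrees.  Basis order: (s, px, py).
def sp2_hybrid(theta_deg):
    t = math.radians(theta_deg)
    return (math.sqrt(1 / 3),
            math.sqrt(2 / 3) * math.cos(t),
            math.sqrt(2 / 3) * math.sin(t))

hybrids = [sp2_hybrid(a) for a in (0, 120, 240)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

for h in hybrids:
    assert abs(dot(h, h) - 1.0) < 1e-12            # each hybrid is normalized
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(dot(hybrids[i], hybrids[j])) < 1e-12  # mutually orthogonal
print("three orthonormal sp2 hybrids, directed 120 degrees apart")
```

The cross terms vanish precisely because 1/3 + (2/3)·cos 120° = 0, which is why trigonal planar geometry and sp 2 hybridization go together.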
Boron Trifluoride BF 3 is a common example of sp 2 hybridization. The molecule has plane trigonal geometry.
Tetrahedral (sp 3 ) hybridization
Let us now look at several tetravalent molecules, and see what kind of hybridization might be involved when four outer atoms are bonded to a central atom. Perhaps the commonest and most important example of this bond type is methane, CH 4 .
In methane, the one 2 s and three 2 p orbitals of carbon mix into four sp 3 hybrid orbitals which are chemically and geometrically identical; the latter condition implies that the four hybrid orbitals extend toward the corners of a tetrahedron centered on the carbon atom.
Methane is the simplest hydrocarbon; the molecule is approximately spherical, as is shown in the space-filling model:
By replacing one or more of the hydrogen atoms in CH 4 with another sp 3 hybridized carbon fragments, hydrocarbon chains of any degree of complexity can be built up. The simplest of these is ethane:
This shows how an sp 3 orbital on each of the two carbon atoms joins (overlaps) to form a carbon-carbon bond, and then the six remaining carbon sp 3 orbitals (three on each carbon) overlap with six hydrogen 1s orbitals to form the ethane molecule.
Lone pair electrons in hybrid orbitals
If lone pair electrons are present on the central atom, these can occupy one or more of the sp 3 orbitals. This causes the molecular geometry to be different from the coordination geometry, which remains tetrahedral. In the ammonia molecule, for example, the nitrogen atom normally has three unpaired p electrons, but by mixing the 2 s and 2 p orbitals, we can create four sp 3 -hybrid orbitals just as in carbon. Three of these can form shared-electron bonds with hydrogen, resulting in ammonia, NH 3 . The fourth of the sp 3 hybrid orbitals contains the two remaining outer-shell electrons of nitrogen which form a non-bonding lone pair. In acidic solutions these can coordinate with a hydrogen ion, forming the ammonium ion NH 4 + .
Although no bonds are formed by the lone pair in NH 3 , these electrons do give rise to a charge cloud that takes up space just like any other orbital.
In the water molecule, the oxygen atom can form four sp 3 orbitals. Two of these are occupied by the two lone pairs on the oxygen atom, while the other two are used for bonding. The observed H-O-H bond angle in water (104.5°) is less than the tetrahedral angle (109.5°); one explanation for this is that the non-bonding electrons tend to remain closer to the central atom and thus exert greater repulsion on the other orbitals, thus pushing the two bonding orbitals closer together.
Molecular ions
Hybridization can also help explain the existence and structure of many inorganic molecular ions . Consider, for example, electron configurations of zinc in the compounds in the illustrations below. The tetrachlorozinc ion (top row) is another structure derived from zinc and chlorine. As we might expect, this ion is tetrahedral; there are four chloride ions surrounding the central zinc ion. The zinc ion has a charge of +2, and each chloride ion is –1, so the net charge of the complex ion is –2.
At the bottom is shown the electron configuration of atomic zinc, and just above it, of the divalent zinc ion. Notice that this ion has no electrons at all in its 4-shell. In zinc chloride, shown in the next row up, there are two equivalent chlorine atoms bonded to the zinc. The bonding orbitals are of sp character; that is, they are hybrids of the 4s and one 4p orbital of the zinc atom. Since these orbitals are empty in the isolated zinc ion, the bonding electrons themselves are all contributed by the chlorine atoms, or rather, the chloride ions, for it is these that are the bonded species here. Each chloride ion possesses a complete octet of electrons, and two of these electrons occupy each sp bond orbital in the zinc chloride complex ion. This is an example of a coordinate covalent bond , in which the bonded atom contributes both of the electrons that make up the shared pair.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/09%3A_Chemical_Bonding_and_Molecular_Structure/9.06%3A_The_Hybrid_Orbital_Model",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "9.6: The Hybrid Orbital Model",
"author": "Stephen Lower"
} |
9.7: The Hybrid Orbital Model II
Make sure you thoroughly understand the following essential ideas:
- Sketch out diagrams showing the hybridization and bonding in compounds containing single, double, and triple carbon-carbon bonds.
- Define sigma and pi bonds.
- Describe the hybridization and bonding in the benzene molecule.
This is a continuation of the previous page which introduced the hybrid orbital model and illustrated its use in explaining how valence electrons from atomic orbitals of s and p types can combine into equivalent shared-electron pairs known as sp , sp 2 , and sp 3 hybrid orbitals. In this lesson, we extend this idea to compounds containing double and triple bonds, and to those in which atomic d electrons are involved (and which do not follow the octet rule.)
Hybrid types and Multiple bonds
We have already seen how sp 3 hybridization in carbon leads to its combining power of four in the methane molecule. Two such tetrahedrally coordinated carbons can link up together to form the molecule ethane C 2 H 6 . In this molecule, each carbon is bonded in the same way as the other; each is linked to four other atoms, three hydrogens and one carbon. The ability of carbon-to-carbon linkages to extend themselves indefinitely and through all coordination positions accounts for the millions of organic molecules that are known.
Trigonal hybridization in carbon: the double bond
Carbon and hydrogen can also form a compound ethylene (ethene) in which each carbon atom is linked to only three other atoms. Here, we can regard carbon as being trivalent. We can explain this trivalence by supposing that the orbital hybridization in carbon is in this case not sp 3 , but is sp 2 instead; in other words, only two of the three p orbitals of carbon mix with the 2 s orbital to form hybrids; the remaining p z orbital remains unhybridized. Each carbon is bonded to three other atoms in the same kind of plane trigonal configuration that we saw in the case of boron trifluoride, where the same kind of hybridization occurs. Notice that the bond angles around each carbon are all 120°.
This alternative hybridization scheme explains how carbon can combine with four atoms in some of its compounds and with three other atoms in other compounds. You may be aware of the conventional way of depicting carbon as being tetravalent in all its compounds; it is often stated that carbon always forms four bonds, but that sometimes, as in the case of ethylene, one of these may be a double bond. This concept of the multiple bond preserves the idea of tetravalent carbon while admitting the existence of molecules in which carbon is clearly combined with fewer than four other atoms.
These three views of the ethylene molecule emphasize different aspects of the disposition of shared electron pairs in the various bonding orbitals of ethene (ethylene). (a) The "backbone" structure consisting of σ ( sigma ) bonds formed from the three sp 2 -hybridized orbitals on each carbon. (b) The π ( pi ) bonding system formed by overlap of the unhybridized p z orbital on each carbon. The π orbital has two regions of electron density extending above and below the plane of the molecule. (c) A cutaway view of the combined σ and π system.
In addition to its three sp 2 hybrids, each carbon atom retains a half-occupied p z orbital that is perpendicular to the molecular plane. These two parallel p z orbitals will interact with each other; the two orbitals merge, forming a sausage-like charge cloud (the π bond) that extends both above and below the plane of the molecule. It is the pair of electrons that occupy this new extended orbital that constitutes the “fourth” bond to each carbon, and thus the “other half” of the double bond in the molecule.

More about sigma and pi bonds
The σ ( sigma ) bond has its maximum electron density along the line-of-centers joining the two atoms (below left). Viewed end-on, the σ bond is cylindrically symmetrical about the line-of-centers. It is this symmetry, rather than its parentage, that defines the σ bond, which can be formed from the overlap of two s -orbitals, from two p -orbitals arranged end-to-end, or from an s - and a p -orbital. They can also form when sp hybrid orbitals on two atoms overlap end-to-end.
Pi orbitals, on the other hand, require the presence of two atomic p orbitals on adjacent atoms. Most important, the charge density in the π orbital is concentrated above and below the molecular plane; it is almost zero along the line-of-centers between the two atoms. It is this perpendicular orientation with respect to the molecular plane (and the consequent lack of cylindrical symmetry) that defines the π orbital. The combination of a σ bond and a π bond extending between the same pair of atoms constitutes the double bond in molecules such as ethylene.
Carbon-carbon triple bonds: sp hybridization in acetylene
We have not yet completed our overview of multiple bonding, however. Carbon and hydrogen can form yet another compound, acetylene (ethyne), in which each carbon is connected to only two other atoms: a carbon and a hydrogen. This can be regarded as an example of divalent carbon, but is usually rationalized by writing a triple bond between the two carbon atoms.
We assume here that since two geometrically equivalent bonds are formed by each carbon, this atom must be sp -hybridized in acetylene. On each carbon, one sp hybrid bonds to a hydrogen and the other bonds to the other carbon atom, forming the σ bond skeleton of the molecule. In addition to the sp hybrids, each carbon atom has two half-occupied p orbitals oriented at right angles to each other and to the interatomic axis. These two sets of parallel and adjacent p orbitals can thus merge into two sets of π orbitals.
The triple bond in acetylene is seen to consist of one σ bond along the line-of-centers between the two carbon atoms, and two π bonds whose lobes of electron density lie in mutually perpendicular planes. The acetylene molecule is of course linear, since the angle between the two sp hybrid orbitals that produce the σ skeleton of the molecule is 180°.
Multiple bonds between unlike atoms
Multiple bonds can also occur between dissimilar atoms. For example, in carbon dioxide each carbon atom has two unhybridized atomic p orbitals, and each oxygen atom still has one p orbital available. When the two O-atoms are brought up to opposite sides of the carbon atom, one of the p orbitals on each oxygen forms a π bond with one of the carbon p -orbitals. In this case, sp -hybridization is seen to lead to two double bonds. Notice that the two C–O π bonds are mutually perpendicular.
Similarly, in hydrogen cyanide , HCN, we assume that the carbon is sp -hybridized, since it is joined to only two other atoms, and is hence in a divalent state. One of the sp -hybrid orbitals overlaps with the hydrogen 1 s orbital, while the other overlaps end-to-end with one of the three unhybridized p orbitals of the nitrogen atom. This leaves us with two nitrogen p -orbitals which form two mutually perpendicular π bonds to the two atomic p orbitals on the carbon. Hydrogen cyanide thus contains one single and one triple bond. The latter consists of a σ bond from the overlap of a carbon sp hybrid orbital with a nitrogen p orbital, plus two mutually perpendicular π bonds deriving from parallel atomic p orbitals on the carbon and nitrogen atoms.
The nitrate ion
Pi bond delocalization furnishes a means of expressing the structures of other molecules that require more than one electron-dot or structural formula for their accurate representation. A good example is the nitrate ion, which contains 24 electrons:
The electron-dot formula shown above is only one of three equivalent resonance structures that are needed to describe the trigonal symmetry of this ion.
Nitrogen has three half-occupied p orbitals available for bonding, all perpendicular to one another. Since the nitrate ion is known to be planar, we are forced to assume that the nitrogen outer electrons are sp 2 -hybridized. The addition of an extra electron fills all three hybrid orbitals completely. Each of these filled sp 2 orbitals forms a σ bond by overlap with an empty oxygen 2 p z orbital; this, you will recall, is an example of coordinate covalent bonding , in which one of the atoms contributes both of the bonding electrons. The empty oxygen 2 p orbital is made available when the oxygen electrons themselves become sp 2 -hybridized; we get three filled sp 2 hybrid orbitals, and an empty 2 p atomic orbital, just as in the case of nitrogen.

The π bonding system arises from the interaction of one of the occupied oxygen sp 2 orbitals with the unoccupied 2 p x orbital of the nitrogen. Notice that this, again, is a coordinate covalent sharing, except that in this instance it is the oxygen atom that donates both electrons.
Pi bonds can form in this way between the nitrogen atom and any of the three oxygens; there are thus three equivalent π bonds possible, but since nitrogen can only form one complete π bond at a time, the π bonding is divided up three ways, so that each N–O bond has a bond order of 4/3.
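The 4/3 figure follows directly from averaging over the three equivalent resonance structures. A minimal sketch (the helper name `average_bond_order` is ours, purely illustrative):

```python
from fractions import Fraction

def average_bond_order(orders_in_each_structure):
    """Bond order of one particular N-O bond, averaged over all
    equivalent resonance structures (illustrative helper)."""
    return sum(Fraction(o) for o in orders_in_each_structure) / len(orders_in_each_structure)

# A given N-O bond is double (order 2) in one of the three equivalent
# structures for the nitrate ion, and single (order 1) in the other two:
print(average_bond_order([2, 1, 1]))  # -> 4/3
```

Equivalently, each N–O link carries one full σ bond plus one-third of the single π bond: 1 + 1/3 = 4/3.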
Conjugated Double Bonds
We have seen that the π bonding orbital is distinctly different in shape and symmetry from the σ bond. There is another important feature of the π bond that is of far-reaching consequence, particularly in organic and coordination chemistry. Consider, for example, an extended hydrocarbon molecule in which alternate pairs of carbon atoms are connected by double and single bonds. Each non-terminal carbon atom forms σ bonds to two other carbons and to a hydrogen (not shown). This molecule can be viewed as a series of ethylene units joined together end-to-end. Each carbon, being sp 2 -hybridized, still has a half-filled atomic p orbital. Since these p orbitals on adjacent carbons are all parallel, we can expect them to interact with each other to form π bonds between alternate pairs of carbon atoms, as shown below.
But since each carbon atom possesses a half-filled p orbital, there is nothing unique about this π bond arrangement; an equally likely arrangement might be one in which the π bonding orbitals are shifted to neighboring pairs of carbons (middle illustration above). You will recall that when there are two equivalent choices for the arrangement of single and double bonds in a molecule, we generally consider the structure to be a resonance hybrid . In keeping with this idea, we would expect the electron density in a π system of this kind to be extended or shared out evenly along the entire molecular framework, as shown in the bottom figure.
A system of alternating single and double bonds, as we have here, is called a conjugated system . Chemists say that the π bonds in a conjugated system are delocalized ; they are, in effect, "smeared out" over the entire length of the conjugated part of the molecule. Each pair of adjacent carbon atoms is joined by a σ bond and "half" of a π bond, resulting in a C–C bond order of 1.5. An even higher degree of conjugation exists in compounds containing extended (C=C) n chains. These compounds, known as cumulenes , exhibit interesting electrical properties, and their derivatives can act as "organic wires".
Benzene
The classic example of π bond delocalization is found in the cyclic molecule benzene (C 6 H 6 ) which consists of six carbon atoms bound together in a hexagonal ring. Each carbon has a single hydrogen atom attached to it. The lines in this figure represent the σ bonds in benzene. The basic ring structure is composed of σ bonds formed from overlap of sp 2 hybrid orbitals on adjacent carbon atoms. The unhybridized carbon p z orbitals project above and below the plane of the ring. They are shown here as they might appear if they did not interact with one another.
But what happens, of course, is that the lobes of these atomic orbitals meld together to form two circular rings of electron density, one above and one below the plane of the molecule. Together, these two rings constitute the "second half" of the carbon-carbon double bonds in benzene. This computer-generated plot of electron density in the benzene molecule is derived from a more rigorous theory that does not involve hybrid orbitals; the highest electron density (blue) appears around the periphery of the ring, while the lowest (red) is in the "doughnut hole" at the center.
Hybrids involving d orbitals
In atoms that are below those in the first complete row of the periodic table, the simple octet rule begins to break down. For example, we have seen that PCl 3 does conform to the octet rule but PCl 5 does not. We can describe the bonding in PCl 3 very much as we do NH 3 : four sp 3 -hybridized orbitals, three of which are shared with electrons from other atoms and the fourth containing a nonbonding pair.
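The electron bookkeeping behind that statement can be made explicit. A toy check, assuming the usual counting of two electrons per σ bond or lone pair (the function name is ours, not a standard API):

```python
def valence_electrons_around(n_bonds, n_lone_pairs):
    # Each sigma bond and each lone pair places two electrons
    # in the valence shell of the central atom.
    return 2 * n_bonds + 2 * n_lone_pairs

assert valence_electrons_around(3, 1) == 8    # PCl3: three bonds + one lone pair, octet obeyed
assert valence_electrons_around(5, 0) == 10   # PCl5: five bonds, octet exceeded
```

Ten electrons around phosphorus is exactly what forces the sp 3 d hybridization scheme discussed next.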
Trigonal bipyramid molecules: sp 3 d hybridization

In PCl 5 , however, the phosphorus atom must form five bonds, which requires five hybrid orbitals; mixing one 3 d orbital with the 3 s and the three 3 p orbitals yields five sp 3 d hybrid orbitals directed toward the corners of a trigonal bipyramid, as is predicted by VSEPR theory.
Octahedral coordination: sp 3 d 2 hybridization
The molecule sulfur hexafluoride SF 6 exemplifies one of the most common types of d -orbital hybridization. The six bonds in this octahedrally-coordinated molecule are derived from mixing six atomic orbitals into a hybrid set. The easiest way to understand how these come about is to imagine that the molecule is made by combining an imaginary S 6 + ion (which we refer to as the S(VI) valence state ) with six F – ions to form the neutral molecule. Removing sulfur's six valence electrons leaves its 3 s and 3 p orbitals empty; these then mix with two 3 d orbitals to form the six sp 3 d 2 hybrids.
Some of the most important and commonly encountered compounds which involve the d orbitals in bonding are the transition metal complexes . The term “complex” in this context means that the molecule is composed of two or more kinds of species, each of which can have an independent existence.
Square-planar molecules: dsp 2 hybridization
For example, the ions Pt 2 + and Cl – can form the ion [PtCl 4 ] 2– . To understand the hybridization scheme, it helps to start with the neutral Pt atom, then imagine it losing two electrons to become an ion, followed by grouping of the two unpaired 5 d electrons into a single d orbital, leaving one vacant. This vacant orbital, along with the 6 s and two of the 6 p orbitals, can then accept an electron pair from each of the four chloride ions.
All of the four-coordinated molecules we have discussed so far have tetrahedral geometry around the central atom. Methane, CH 4 , is the best-known example. It may come as something of a surprise, then, to discover that the tetrachloroplatinate(II) ion [PtCl 4 ] 2– has an essentially two-dimensional square-planar configuration. This type of bonding pattern is quite common when the parent central ion (Pt 2 + in this case) contains only eight electrons in its outermost d -subshell.
Octahedral coordination: sp 3 d 2 and d 2 sp 3
Many of the most commonly encountered transition metal ions accept electron pairs from donors such as CN – and NH 3 (or, lacking these, even from H 2 O) to form octahedral coordination complexes. The hexaamminezinc(II) cation depicted below is typical.
In sp 3 d 2 hybridization the bonding orbitals are derived by mixing atomic orbitals having the same principal quantum number ( n = 4 in the preceding example). A slightly different arrangement, known as d 2 sp 3 hybridization, involves d orbitals of lower principal quantum number. This is possible because of the rather small energy differences between the d orbitals in one "shell" and the s and p orbitals of the next higher one — hence the term "inner orbital" complex which is sometimes used to describe ions such as hexaamminecobalt(III), shown below. Both arrangements produce octahedral coordination geometries.
In some cases, the same central atom can form either inner or outer complexes depending on the particular ligand and the manner in which its electrostatic field affects the relative energies of the different orbitals. Thus the hexacyanoiron(II) ion utilizes the iron 3 d orbitals (an inner-orbital complex), whereas hexaaquoiron(II) achieves a lower energy by accepting the H 2 O electron pairs into its outer 4 d orbitals.
Final remarks about hybrid orbitals
As is the case with any scientific model, the hybridization model of bonding is useful only to the degree to which it can predict phenomena that are actually observed. Most models contain weaknesses that place limits on their general applicability. The need for caution in accepting this particular model is made more apparent when we examine the shapes of the molecules below the first full row of the periodic table. For example, we would expect the bonding in hydrogen sulfide to be similar to that in water, with tetrahedral geometry around the sulfur atom. Experiments, however, reveal that the H–S–H bond angle is only 92°. Hydrogen sulfide thus deviates much more from tetrahedral geometry than does water, and there is no apparent and clear reason why it should. It is certainly difficult to argue that electron-repulsion between the two nonbonding orbitals is pushing the H–S bonds closer together (as is supposed to happen to the H–O bonds in water); many would argue that this repulsion would be less in hydrogen sulfide than in water, since sulfur is a larger atom and is hence less electronegative.
Apparently the hybridization model based on sp 3 orbitals does not apply to H 2 S. It looks like the "simple" explanation, that bonding occurs through two half-occupied atomic p orbitals 90° apart, comes closer to the mark. Perhaps hybridization is not an all-or-nothing phenomenon; perhaps the two 3 p orbitals are substantially intact in hydrogen sulfide, or are hybridized only slightly. In general, the hybridization model does not work very well with nonmetallic elements farther down in the periodic table, and there is as yet no clear explanation why. We must simply admit that we have reached one of the many points in chemistry where our theory is not sufficiently developed to give a clear and unequivocal answer. This does not detract, however, from the wide usefulness of the hybridization model in elucidating the bond character and bond shapes in the millions of molecules based on first-row elements, particularly of carbon.

Are hybrid orbitals real?
The justification we gave for invoking hybridization in molecules such as BeH 2 , BF 3 and CH 4 was that the bonds in each are geometrically and chemically equivalent, whereas the atomic s - and p -orbitals on the central atoms are not. By combining these into new orbitals of sp , sp 2 and sp 3 types we obtain the required number of completely equivalent orbitals. This seemed easy enough to do on paper; we just drew little boxes and wrote “ sp 2 ” or whatever below them. But what is really going on here?
The full answer is beyond the scope of this course, so we can only offer the following very general explanation. First, recall what we mean by “orbital”: a mathematical function ψ having the character of a standing wave whose square ψ 2 is proportional to the probability of finding the electron at any particular location in space. The latter, the electron density distribution , can be observed (by X-ray scattering, for example), and in this sense is the only thing that is “real”.
A given standing wave (ψ-function) can be synthesized by combining all kinds of fundamental wave patterns (that is, atomic orbitals) in much the same way that a color we observe can be reproduced by combining different sets of primary colors in various proportions. In neither case does it follow that these original orbitals (or colors) are actually present in the final product. So one could well argue that hybrid orbitals are not “real”; they simply turn out to be convenient for understanding the bonding of simple molecules at the elementary level, and this is why we use them.
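This "mixing" can be made concrete. Below, each sp 2 hybrid is written as a vector of coefficients over the (s, p x , p y ) basis, using the conventional textbook coefficients (they are not quoted in this lesson); the hybrids come out normalized and mutually orthogonal, exactly as a legitimate change of basis requires:

```python
from math import sqrt, isclose

# Each hybrid is a coefficient vector over the basis (s, px, py).
# These are the standard sp2 mixing coefficients.
sp2_hybrids = [
    (1/sqrt(3),  sqrt(2)/sqrt(3),  0.0),
    (1/sqrt(3), -1/sqrt(6),  1/sqrt(2)),
    (1/sqrt(3), -1/sqrt(6), -1/sqrt(2)),
]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Each hybrid is normalized, and distinct hybrids are orthogonal:
for i, h in enumerate(sp2_hybrids):
    assert isclose(dot(h, h), 1.0)
    for g in sp2_hybrids[i + 1:]:
        assert isclose(dot(h, g), 0.0, abs_tol=1e-12)
```

Nothing here says the hybrids are physically "present"; the check simply confirms that they are an equally valid way of spanning the same set of atomic orbitals.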
An alternative to hybrids: the Bent-Bond model
It turns out, in fact, that the electron distribution and bonding in ethylene can be equally well described by assuming no hybridization at all. The "bent bond" model requires only that the directions of some of the atomic p orbitals be distorted sufficiently to provide the overlap needed for bonding; these are sometimes referred to as "banana bonds".
The smallest of the closed-ring hydrocarbons is cyclopropane, a planar molecule in which the C–C bond angles are only 60°, quite a departure from the tetrahedral angle of 109.5° associated with sp 3 hybridization! Theoretical studies suggest that the bent-bond model does quite well in predicting its properties.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/09%3A_Chemical_Bonding_and_Molecular_Structure/9.07%3A_The_Hybrid_Orbital_Model_II",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "9.7: The Hybrid Orbital Model II",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/09%3A_Chemical_Bonding_and_Molecular_Structure/9.08%3A_Molecular_Orbital_Theory | 9.8: Molecular Orbital Theory
Make sure you thoroughly understand the following essential ideas
- In what fundamental way does the molecular orbital model differ from the other models of chemical bonding that have been described in these lessons?
- Explain how bonding and antibonding orbitals arise from atomic orbitals, and how they differ physically.
- Describe the essential difference between a sigma and a pi molecular orbital.
- Define bond order , and state its significance.
- Construct a "molecular orbital diagram" of the kind shown in this lesson for a simple diatomic molecule, and indicate whether the molecule or its positive and negative ions should be stable.
The molecular orbital model is by far the most productive of the various models of chemical bonding, and serves as the basis for most quantitative calculations, including those that lead to many of the computer-generated images that you have seen elsewhere in these units. In its full development, molecular orbital theory involves a lot of complicated mathematics, but the fundamental ideas behind it are quite easily understood, and this is all we will try to accomplish in this lesson.
This is a big departure from the simple Lewis and VSEPR models that were based on the one-center orbitals of individual atoms. The more sophisticated hybridization model recognized that these orbitals will be modified by their interaction with other atoms. But all of these valence-bond models, as they are generally called, are very limited in their applicability and predictive power, because they fail to recognize that distribution of the pooled valence electrons is governed by the totality of positive centers.
Molecular Orbitals
Chemical bonding occurs when the net attractive forces between an electron and two nuclei exceeds the electrostatic repulsion between the two nuclei. For this to happen, the electron must be in a region of space which we call the binding region . Conversely, if the electron is off to one side, in an anti-binding region , it actually adds to the repulsion between the two nuclei and helps push them away.
The easiest way of visualizing a molecular orbital is to start by picturing two isolated atoms and the electron orbitals that each would have separately. These are just the orbitals of the separate atoms, by themselves, which we already understand. We will then try to predict the manner in which these atomic orbitals interact as we gradually move the two atoms closer together. Finally, we will reach some point where the internuclear distance corresponds to that of the molecule we are studying. The corresponding orbitals will then be the molecular orbitals of our new molecule.
To see how this works, we will consider the simplest possible molecule, \(\ce{H2^{+}}\). This is the hydrogen molecule ion, which consists of two nuclei of charge +1, and a single electron shared between them.
As two H nuclei move toward each other, the 1 s atomic orbitals of the isolated atoms gradually merge into a new molecular orbital in which the greatest electron density falls between the two nuclei. Since this is just the location in which electrons can exert the most attractive force on the two nuclei simultaneously, this arrangement constitutes a bonding molecular orbital . Regarding it as a three- dimensional region of space, we see that it is symmetrical about the line of centers between the nuclei; in accord with our usual nomenclature, we refer to this as a σ (sigma) orbital .
Bonding and Antibonding Molecular Orbitals
There is one minor difficulty: we started with two orbitals (the 1 s atomic orbitals), and ended up with only one orbital. Now according to the rules of quantum mechanics, orbitals cannot simply appear and disappear at our convenience. For one thing, this would raise the question of at just what internuclear distance do we suddenly change from having two orbitals, to having only one? It turns out that when orbitals interact, they are free to change their forms, but there must always be the same number. This is just another way of saying that there must always be the same number of possible allowed sets of electron quantum numbers.
How can we find the missing orbital? To answer this question, we must go back to the wave-like character of orbitals that we developed in our earlier treatment of the hydrogen atom. You are probably aware that wave phenomena such as sound waves, light waves, or even ocean waves can combine or interact with one another in two ways: they can either reinforce each other, resulting in a stronger wave, or they can interfere with and partially destroy each other. A roughly similar thing occurs when the “matter waves” corresponding to the two separate hydrogen 1 s orbitals interact; both in-phase and out-of-phase combinations are possible, and both occur. The in-phase, reinforcing interaction yields the bonding orbital that we just considered. The other, corresponding to out-of-phase combination of the two orbitals, gives rise to a molecular orbital that has its greatest electron probability in what is clearly the antibonding region of space. This second orbital is therefore called an antibonding orbital.
When the two 1 s wave functions combine out-of-phase, the regions of high electron probability do not merge. In fact, the orbitals act as if they actually repel each other. Notice particularly that there is a region of space exactly equidistant between the nuclei at which the probability of finding the electron is zero. This region is called a nodal surface , and is characteristic of antibonding orbitals. It should be clear that any electrons that find themselves in an antibonding orbital cannot possibly contribute to bond formation; in fact, they will actively oppose it.
We see, then, that whenever two orbitals, originally on separate atoms, begin to interact as we push the two nuclei toward each other, these two atomic orbitals will gradually merge into a pair of molecular orbitals, one of which will have bonding character, while the other will be antibonding. In a more advanced treatment, it would be fairly easy to show that this result follows quite naturally from the wave-like nature of the combining orbitals.
What is the difference between these two kinds of orbitals, as far as their potential energies are concerned? More precisely, which kind of orbital would enable an electron to be at a lower potential energy? Clearly, the potential energy decreases as the electron moves into a region that enables it to “see” the maximum amount of positive charge. In a simple diatomic molecule, this will be in the internuclear region— where the electron can be simultaneously close to two nuclei. The bonding orbital will therefore have the lower potential energy.
Molecular Orbital Diagrams
This scheme of bonding and antibonding orbitals is usually depicted by a molecular orbital diagram such as the one shown here for the dihydrogen ion H 2 + . Atomic valence electrons (shown in boxes on the left and right) fill the lower-energy molecular orbitals before the higher ones, just as is the case for atomic orbitals. Thus, the single electron in this simplest of all molecules goes into the bonding orbital, leaving the antibonding orbital empty.
Since any orbital can hold a maximum of two electrons, the bonding orbital in H 2 + is only half-full. This single electron is nevertheless enough to lower the potential energy of one mole of hydrogen nuclei pairs by 270 kJ— quite enough to make them stick together and behave like a distinct molecular species. Although H 2 + is stable in this energetic sense, it happens to be an extremely reactive molecule— so much so that it even reacts with itself, so these ions are not commonly encountered in everyday chemistry.
Dihydrogen
If one electron in the bonding orbital is conducive to bond formation, might two electrons be even better? We can arrange this by combining two hydrogen atoms-- two nuclei, and two electrons. Both electrons will enter the bonding orbital, as depicted in the Figure.
We recall that one electron lowered the potential energy of the two nuclei by 270 kJ/mole, so we might expect two electrons to produce twice this much stabilization, or 540 kJ/mole.
Experimentally, one finds that it takes only 452 kJ to break apart a mole of hydrogen molecules. The reason the potential energy was not lowered by the full amount is that the presence of two electrons in the same orbital gives rise to a repulsion that acts against the stabilization. This is exactly the same effect we saw in comparing the ionization energies of the hydrogen and helium atoms.
Dihelium
With two electrons we are still ahead, so let’s try for three. The dihelium positive ion is a three-electron molecule. We can think of it as containing two helium nuclei and three electrons. This molecule is stable, but not as stable as dihydrogen; the energy required to break He 2 + is 301 kJ/mole. The reason for this should be obvious; two electrons were accommodated in the bonding orbital, but the third electron must go into the next higher slot— which turns out to be the sigma antibonding orbital. The presence of an electron in this orbital, as we have seen, gives rise to a repulsive component which acts against, and partially cancels out, the attractive effect of the filled bonding orbital.
Taking our building-up process one step further, we can look at the possibilities of combining two helium atoms to form dihelium. You should now be able to predict that He 2 cannot be a stable molecule; the reason, of course, is that we now have four electrons— two in the bonding orbital, and two in the antibonding orbital. The one orbital almost exactly cancels out the effect of the other. Experimentally, the bond energy of dihelium is only 0.084 kJ/mol; this is not enough to hold the two atoms together in the presence of random thermal motion at ordinary temperatures, so dihelium dissociates as quickly as it is formed, and is therefore not a distinct chemical species.
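The pattern running through H 2 + , H 2 , He 2 + and He 2 is just the bond-order rule, (bonding electrons − antibonding electrons)/2, applied to the σ1s bonding/antibonding pair. A small sketch (the function name is ours):

```python
def bond_order_1s(n_electrons):
    """Bond order for a diatomic built from two 1s atomic orbitals:
    the first two electrons fill the sigma bonding MO, and any
    further electrons must go into the sigma* antibonding MO."""
    bonding = min(n_electrons, 2)
    antibonding = max(n_electrons - 2, 0)
    return (bonding - antibonding) / 2

assert bond_order_1s(1) == 0.5   # H2+ : stable, half a bond
assert bond_order_1s(2) == 1.0   # H2  : a full single bond
assert bond_order_1s(3) == 0.5   # He2+: stable, but weaker than H2
assert bond_order_1s(4) == 0.0   # He2 : no net bond
```

A bond order of zero is exactly the statement that dihelium's bonding and antibonding contributions cancel.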
Diatomic molecules containing second-row atoms
The four simplest molecules we have examined so far involve molecular orbitals that derived from two 1 s atomic orbitals. If we wish to extend our model to larger atoms, we will have to contend with higher atomic orbitals as well. One greatly simplifying principle here is that only the valence-shell orbitals need to be considered. Inner atomic orbitals such as 1 s are deep within the atom and well-shielded from the electric field of a neighboring nucleus, so that these orbitals largely retain their atomic character when bonds are formed.
Dilithium
For example, when lithium, whose configuration is 1 s 2 2 s 1 , bonds with itself to form Li 2 , we can forget about the 1 s atomic orbitals and consider only the σ bonding and antibonding orbitals. Since there are not enough electrons to populate the antibonding orbital, the attractive forces win out and we have a stable molecule.
The bond energy of dilithium is 110 kJ/mole; notice that this value is less than half of the 270 kJ bond energy in dihydrogen, which also has two electrons in a bonding orbital. The reason, of course, is that the 2 s orbital of Li is much farther from its nucleus than is the 1 s orbital of H, and this is equally true for the corresponding molecular orbitals. It is a general rule, then, that the larger the parent atom, the less stable will be the corresponding diatomic molecule.
Lithium hydride
All the molecules we have considered thus far are homonuclear ; they are made up of one kind of atom. As an example of a heteronuclear molecule, let’s take a look at a very simple example— lithium hydride. Lithium hydride is a stable, though highly reactive molecule. The diagram shows how the molecular orbitals in lithium hydride can be related to the atomic orbitals of the parent atoms. One thing that makes this diagram look different from the ones we have seen previously is that the parent atomic orbitals have widely differing energies; the greater nuclear charge of lithium reduces the energy of its 1 s orbital to a value well below that of the 1 s hydrogen orbital.
There are two occupied atomic orbitals on the lithium atom, and only one on the hydrogen. With which of the lithium orbitals does the hydrogen 1 s orbital interact? The lithium 1 s orbital is the lowest-energy orbital on the diagram. Because this orbital is so small and retains its electrons so tightly, it does not contribute to bonding; we need consider only the 2 s orbital of lithium which combines with the 1 s orbital of hydrogen to form the usual pair of sigma bonding and antibonding orbitals. Of the four electrons in lithium and hydrogen, two are retained in the lithium 1 s orbital, and the two remaining ones reside in the σ orbital that constitutes the Li–H covalent bond.
The resulting molecule is 243 kJ/mole more stable than the parent atoms. As we might expect, the bond energy of the heteronuclear molecule is very close to the average of the energies of the corresponding homonuclear molecules. Actually, it turns out that the correct way to make this comparison is to take the geometric mean, rather than the arithmetic mean, of the two bond energies. The geometric mean is simply the square root of the product of the two energies.
The geometric mean of the H 2 and Li 2 bond energies is 213 kJ/mole, so it appears that the lithium hydride molecule is 30 kJ/mole more stable than it “is supposed” to be. This is attributed to the fact that the electrons in the 2σ bonding orbital are not equally shared between the two nuclei; the orbital is skewed slightly so that the electrons are attracted somewhat more to the hydrogen atom. This bond polarity , which we considered in some detail near the beginning of our study of covalent bonding, arises from the greater electron-attracting power of hydrogen— a consequence of the very small size of this atom. The electrons can be at a lower potential energy if they are slightly closer to the hydrogen end of the lithium hydride molecule. It is worth pointing out, however, that the electrons are, on the average, also closer to the lithium nucleus, compared to where they would be in the 2 s orbital of the isolated lithium atom. So it appears that everyone gains and no one loses here!
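The geometric-mean estimate is a one-line computation. With the bond energies quoted in this lesson (452 kJ/mol for H 2 and 110 kJ/mol for Li 2 ) it comes out near 223 kJ/mol; the 213 kJ/mol figure above evidently reflects slightly different tabulated values, but either way LiH's 243 kJ/mol exceeds the estimate, which is the point being made:

```python
from math import sqrt

def geometric_mean(e1, e2):
    # Geometric mean of two homonuclear bond energies, used to
    # estimate the energy of the corresponding heteronuclear bond.
    return sqrt(e1 * e2)

h2_energy, li2_energy = 452.0, 110.0   # kJ/mol, as quoted in this lesson
estimate = geometric_mean(h2_energy, li2_energy)
print(round(estimate))  # roughly 223 kJ/mol; LiH's 243 kJ/mol exceeds it
```

The excess stability over the geometric-mean estimate is the contribution attributed to bond polarity.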
\(\sigma\) and \(\pi\) orbitals
The molecules we have considered thus far are composed of atoms that have no more than four electrons each; our molecular orbitals have therefore been derived from s -type atomic orbitals only. If we wish to apply our model to molecules involving larger atoms, we must take a close look at the way in which p -type orbitals interact as well. Although two atomic p orbitals will be expected to split into bonding and antibonding orbitals just as before, it turns out that the extent of this splitting, and thus the relative energies of the resulting molecular orbitals, depend very much on the nature of the particular p orbital that is involved.
You will recall that there are three possible p orbitals for any value of the principal quantum number. You should also recall that p orbitals are not spherical like s orbitals, but are elongated, and thus possess definite directional properties. The three p orbitals correspond to the three directions of Cartesian space, and are frequently designated p x , p y , and p z , to indicate the axis along which the orbital is aligned. Of course, in the free atom, where no coordinate system is defined, all directions are equivalent, and so are the p orbitals. But when the atom is near another atom, the electric field due to that other atom acts as a point of reference that defines a set of directions. The line of centers between the two nuclei is conventionally taken as the x axis. If this direction is represented horizontally on a sheet of paper, then the y axis is in the vertical direction and the z axis would be normal to the page.
These directional differences lead to the formation of two different classes of molecular orbitals. The above figure shows how two p x atomic orbitals interact. In many ways the resulting molecular orbitals are similar to what we got when s atomic orbitals combined; the bonding orbital has a large electron density in the region between the two nuclei, and thus corresponds to the lower potential energy. In the out-of-phase combination, most of the electron density is away from the internuclear region, and as before, there is a surface exactly halfway between the nuclei that corresponds to zero electron density. This is clearly an antibonding orbital— again, in general shape, very much like the kind we saw in hydrogen and similar molecules. Like the ones derived from s -atomic orbitals, these molecular orbitals are σ ( sigma ) orbitals.
Sigma orbitals are cylindrically symmetric with respect to the line of centers of the nuclei; this means that if you could look down this line of centers, the electron density would be the same in all directions.
When we combine two p y or two p z atomic orbitals, we get the bonding and antibonding pairs that we would expect, but the resulting molecular orbitals have a different symmetry: rather than being rotationally symmetric about the line of centers, these orbitals extend in both perpendicular directions from this line of centers. Orbitals having this more complicated symmetry are called π ( pi ) orbitals. There are two of them, π y and π z , differing only in orientation, but otherwise completely equivalent.
The different geometric properties of the π and σ orbitals cause the σ orbitals to split more than the π orbitals, so that the σ* antibonding orbital always has the highest energy. The σ bonding orbital can be either higher or lower than the π bonding orbitals, depending on the particular atom.
Second-Row Diatomics
If we combine the splitting schemes for the 2 s and 2 p orbitals, we can predict bond order in all of the diatomic molecules and ions composed of elements in the first complete row of the periodic table. Remember that only the valence orbitals of the atoms need be considered; as we saw in the cases of lithium hydride and dilithium, the inner orbitals remain tightly bound and retain their localized atomic character.
Dicarbon
Carbon has four outer-shell electrons, two 2 s and two 2 p . For two carbon atoms, we therefore have a total of eight electrons, which can be accommodated in the first four molecular orbitals. The lowest two are the 2 s -derived bonding and antibonding pair, so the “first” four electrons make no net contribution to bonding. The other four electrons go into the pair of pi bonding orbitals, and there are no more electrons for the antibonding orbitals— so we would expect the dicarbon molecule to be stable, and it is. (But being extremely reactive, it is known only in the gas phase.)
You will recall that one pair of electrons shared between two atoms constitutes a “single” chemical bond; this is Lewis’ original definition of the covalent bond. In C 2 there are two pairs of electrons in the π bonding orbitals, so we have what amounts to a double bond here; in other words, the bond order in dicarbon is two.
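This electron-counting procedure is easy to automate. The sketch below is an illustration added here (not part of the original text); it fills the valence MOs bottom-up using the level ordering that applies to B 2 , C 2 , and N 2 (the π 2p set below σ 2p ) and computes the bond order as half the difference between bonding and antibonding electrons.

```python
# Illustrative bond-order calculation for second-row homonuclear diatomics,
# using the level ordering that holds for B2, C2, and N2 (pi_2p below sigma_2p).
# Each level: (label, is_bonding, electron capacity).
MO_ORDER = [
    ("sigma_2s",  True,  2),
    ("sigma*_2s", False, 2),
    ("pi_2p",     True,  4),   # two degenerate pi orbitals
    ("sigma_2p",  True,  2),
    ("pi*_2p",    False, 4),
    ("sigma*_2p", False, 2),
]

def bond_order(valence_electrons):
    """Fill the MOs bottom-up; bond order = (bonding - antibonding) / 2."""
    bonding = antibonding = 0
    remaining = valence_electrons
    for _label, is_bonding, capacity in MO_ORDER:
        n = min(remaining, capacity)
        remaining -= n
        if is_bonding:
            bonding += n
        else:
            antibonding += n
    return (bonding - antibonding) / 2

print(bond_order(8))   # C2 (8 valence electrons): 2.0, a double bond
print(bond_order(10))  # N2 (10 valence electrons): 3.0, a triple bond
```

Eight valence electrons (dicarbon) give a bond order of two, and ten (dinitrogen) give three, matching the bonds expected from the Lewis picture.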
Dioxygen
The electron configuration of oxygen is 1 s 2 2 s 2 2 p 4 . In O 2 , therefore, we need to accommodate twelve valence electrons (six from each oxygen atom) in molecular orbitals. As you can see from the diagram, this places two electrons in antibonding orbitals. Each of these electrons occupies a separate π* orbital because this leads to less electron-electron repulsion (Hund's Rule).
The bond energy of molecular oxygen is 498 kJ/mole. This is smaller than the 945 kJ bond energy of N 2 — not surprising, considering that oxygen has two electrons in an antibonding orbital, compared to nitrogen’s one.
The two unpaired electrons of the dioxygen molecule give this substance an unusual and distinctive property: O 2 is paramagnetic . The paramagnetism of oxygen can readily be demonstrated by pouring liquid O 2 between the poles of a strong permanent magnet; the liquid stream is trapped by the field and fills up the space between the poles.
Since molecular oxygen contains two electrons in an antibonding orbital, it might be possible to make the molecule more stable by removing one of these electrons, thus increasing the ratio of bonding to antibonding electrons in the molecule. Just as we would expect, and in accord with our model, O 2 + has a bond energy higher than that of neutral dioxygen; removing the electron actually gives us a more stable molecule. This constitutes a very good test of our model of bonding and antibonding orbitals. In the same way, adding an electron to O 2 weakens the bond. The bond energy of the resulting O 2 – ion has not been measured, but its bond length is greater, and this is indicative of a lower bond energy. These two dioxygen ions, by the way, are highly reactive and can be observed only in the gas phase.
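The same bookkeeping can be sketched in code for dioxygen and its ions. The snippet below is an illustration, not part of the original text; it assumes the O 2 -type level ordering (σ 2p below the doubly degenerate π 2p set) and applies Hund's rule within each degenerate set to count unpaired electrons along with bond order.

```python
# Illustrative MO bookkeeping for dioxygen species, assuming the O2-type
# level ordering (sigma_2p below the doubly degenerate pi_2p set).
# Each level: (is_bonding, number_of_degenerate_orbitals).
LEVELS = [
    (True, 1),   # sigma_2s
    (False, 1),  # sigma*_2s
    (True, 1),   # sigma_2p
    (True, 2),   # pi_2p
    (False, 2),  # pi*_2p
    (False, 1),  # sigma*_2p
]

def fill(n_electrons):
    """Return (bond order, number of unpaired electrons)."""
    bonding = antibonding = unpaired = 0
    for is_bonding, n_orb in LEVELS:
        n = min(n_electrons, 2 * n_orb)
        n_electrons -= n
        if is_bonding:
            bonding += n
        else:
            antibonding += n
        # Hund's rule: degenerate orbitals are singly occupied before pairing
        unpaired += n if n <= n_orb else 2 * n_orb - n
    return (bonding - antibonding) / 2, unpaired

for name, ne in [("O2+", 11), ("O2", 12), ("O2-", 13)]:
    bo, up = fill(ne)
    print(f"{name}: bond order {bo}, unpaired electrons {up}")
# O2+: 2.5, 1 unpaired;  O2: 2.0, 2 unpaired;  O2-: 1.5, 1 unpaired
```

The computed bond orders (2.5 for O 2 + , 2 for O 2 , 1.5 for O 2 – ) track the bond strengthening on ionization and weakening on electron attachment described above, and the two unpaired electrons of neutral O 2 account for its paramagnetism.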
9.9: Bonding in Coordination Complexes
Make sure you thoroughly understand the following essential ideas:
- Define the terms coordination complex, ligand, polydentate , and chelate .
- Explain the origins of d -orbital splitting; that is, why the energies of certain atomic- d orbitals are more affected by some ligands than others in an octahedral complex.
- Why are many coordination complexes highly colored ?
- Explain the meaning of high-spin and low-spin complexes, and illustrate in a general way how a particular set of ligands can change one kind into another. Also, describe how these differences are observed experimentally.
- Describe the role of iron in heme and the general structural components of hemoglobin.
Complexes such as Cu(NH 3 ) 6 2+ have been known and studied since the mid-nineteenth century, and their structures had been mostly worked out by 1900. Although the hybrid orbital model was able to explain how neutral molecules such as water or ammonia could bond to a transition metal ion, it failed to explain many of the special properties of these complexes. Finally, between 1940 and 1960, a model known as ligand field theory was developed that is able to organize and explain most of the observed properties of these compounds. Since that time, coordination complexes have played major roles in cellular biochemistry and inorganic catalysis.
What is a Complex?
If you have taken a lab course in chemistry, you have very likely admired the deep blue color of copper sulfate crystals, CuSO 4 ·5H 2 O. The proper name of this substance is copper(II) sulfate pentahydrate, and it is typical of many salts that incorporate waters of hydration into their crystal structures. It is also a complex , a term used by chemists to describe a substance composed of two other substances (in this case, CuSO 4 and H 2 O) each of which is capable of an independent existence.
The binding between the components of a complex is usually weaker than a regular chemical bond; thus most solid hydrates can be decomposed by heating, driving off the water and yielding the anhydrous salt:
\[\underbrace{\ce{CuSO4 \cdot 5 H2O}}_{\text{blue}} \rightarrow \underbrace{\ce{CuSO4(s)}}_{\text{white}} + \ce{5 H2O}\]
Driving off the water in this way also destroys the color, turning it from a beautiful deep blue to a nondescript white. If the anhydrous salt is now dissolved in water, the blue color now pervades the entire solution. It is apparent that the presence of water is somehow necessary for the copper(II) ion to take on a blue color, but why should this be?
A very common lab experiment that most students carry out is to add some dilute ammonia to a copper sulfate solution. At first, the solution turns milky as the alkaline ammonia causes the precipitation of copper hydroxide:
\[\ce{Cu^{2+} + 2 OH^{–} \rightarrow Cu(OH)2 (s)}\]
However, if more ammonia is added, the cloudiness disappears and the solution assumes an intense deep blue color that makes the original solution seem pale by comparison. The equation for this reaction is usually given as
\[\ce{Cu^{2+} + 6 NH3 \rightarrow Cu(NH3)6^{2+} } \label{ammine}\]
The new product is commonly known as the copper-ammonia complex ion , or more officially, hexamminecopper(II) complex ion.
Equation \(\ref{ammine}\) is somewhat misleading, however, in that it implies the formation of a new complex where none existed before. In fact, since about 1895, it has been known that the ions of most transition metals dissolve in water to form complexes with water itself, so a better representation of the reaction of dissolved copper with ammonia would be
\[\ce{Cu(H2O)6^{2+} + 6 NH3 \rightarrow Cu(NH3)6^{2+} + 6 H2O}\]
In effect, the ammonia binds more tightly to the copper ion than does water, and it thus displaces the latter when it comes into contact with the hexaaquocopper(II) ion, as the dissolved form of Cu 2 + is properly known.
Most transition metals dissolve in water to form complexes with water itself.
The basics of Coordination Complexes
Although our primary focus in this unit is on bonding, the topic of coordination complexes is so important in chemistry and biochemistry that some of their basic features are worth knowing about, even if their detailed chemistry is beyond the scope of this course. These complexes play an especially crucial role in physiology and biochemistry. Thus heme, the oxygen-carrying component of red blood cells (and the source of the red color) is basically a complex of iron, and the part of chlorophyll that converts sunlight into chemical energy within green plants is a magnesium complex.
Some Definitions
We have already defined a complex as a substance composed of two or more components capable of an independent existence. A coordination complex is one in which a central atom or ion is joined to one or more ligands (Latin ligare , to tie) through what is called a coordinate covalent bond in which both of the bonding electrons are supplied by the ligand. In such a complex the central atom acts as an electron-pair acceptor ( Lewis acid — think of H + which has no electrons at all, but can accept a pair from something like Cl – ) and the ligand as an electron-pair donor ( Lewis base ). The central atom and the ligands coordinated to it constitute the coordination sphere . Thus the salt [Co(NH 3 ) 5 Cl]Cl 2 is composed of the complex ion [Co(NH 3 ) 5 Cl] 2+ and two Cl – ions; components within the square brackets are inside the coordination sphere, whereas the two chloride ions are situated outside the coordination sphere. These latter two ions could be replaced by other ions such as NO 3 – without otherwise materially changing the nature of the salt.
The central atoms of coordination complexes are most often cations (positive ions), but may in some cases be neutral atoms, as in nickel carbonyl Ni(CO) 4 .
Ligands such as the ions F – and CN – , or small molecules such as H 2 O, may possess more than one set of lone-pair electrons, but only one of these pairs can coordinate with a central ion. Such ligands are said to be monodentate (“one tooth”.) Larger ligands may contain more than one atom capable of coordinating with a single central ion, and are described as polydentate . Thus ethylenediamine (shown below) is a bidentate ligand. Polydentate ligands whose geometry enables them to occupy more than one coordinating position of a central ion act as chelating agents (Greek χελος, chelos , claw) and tend to form extremely stable complexes known as chelates .
Chelation is widely employed in medicine, water-treatment, analytical chemistry and industry for binding and removing metal ions of particular kinds. Some of the more common ligands (chelating agents) are shown here:
Structure and bonding in transition metal complexes
Complexes such as Cu(NH 3 ) 6 2+ have been known and studied since the mid-nineteenth century. Why they should form, or what their structures might be, were complete mysteries. At that time all inorganic compounds were thought to be held together by ionic charges, but ligands such as water or ammonia are of course electrically neutral. A variety of theories such as the existence of “secondary valences” were concocted, and various chain-like structures such as CuNH 3 -NH 3 -NH 3 -NH 3 -NH 3 -NH 3 were proposed. Finally, in the mid-1890s, after a series of painstaking experiments, the chemist Alfred Werner (Swiss, 1866-1919) presented the first workable theory of complex ion structures.
Werner claimed that his theory first came to him in a flash after a night of fitful sleep; by the end of the next day he had written his landmark paper that eventually won him the 1913 Nobel Prize in Chemistry.
Werner was able to show, in spite of considerable opposition, that transition metal complexes consist of a central ion surrounded by ligands in a square-planar, tetrahedral, or octahedral arrangement. This was an especially impressive accomplishment at a time long before X-ray diffraction and other methods had become available to observe structures directly. His basic method was to make inferences about the structures from a careful examination of the chemistry of these complexes, and particularly from the existence of structural isomers. For example, the existence of two different compounds having the same composition MA 2 B 2 shows that the structure must be square-planar (which permits distinct cis and trans arrangements) rather than tetrahedral (which permits only one).
What holds them together?
An understanding of the nature of the bond between the central ion and its ligands would have to await the development of Lewis’ shared-electron pair theory and Pauling’s valence-bond picture. We have already shown how hybridization of the d orbitals of the central ion creates vacancies able to accommodate one or more pairs of unshared electrons on the ligands. Although these models correctly predict the structures of many transition metal complexes, they are by themselves unable to account for several of their special properties:
- The metal-to-ligand bonds are generally much weaker than ordinary covalent bonds;
- Some complexes utilize “inner” d orbitals of the central ion, while others are “outer-orbital” complexes;
- Transition metal ions tend to be intensely colored.
Paramagnetism of coordination complexes
Unpaired electrons act as tiny magnets; if a substance that contains unpaired electrons is placed near an external magnet, it will undergo an attraction that tends to draw it into the field. Such substances are said to be paramagnetic , and the degree of paramagnetism is directly proportional to the number of unpaired electrons in the molecule. Magnetic studies have played an especially prominent role in determining how electrons are distributed among the various orbitals in transition metal complexes.
Studies of this kind are carried out by placing a sample consisting of a solution of the complex between the poles of an electromagnet. The sample is suspended from the arm of a sensitive balance, and the change in apparent weight is measured with the magnet turned on and off. An increase in the weight when the magnet is turned on indicates that the sample is attracted to the magnet ( paramagnetism ) and must therefore possess one or more unpaired electrons. The precise number can be determined by calibrating the system with a substance whose electron configuration is known.
Crystal field theory
The current model of bonding in coordination complexes developed gradually between 1930-1950. In its initial stages, the model was a purely electrostatic one known as crystal field theory which treats the ligand ions as simple point charges that interact with the five atomic d orbitals of the central ion. It is this theory which we describe below.
It is remarkable that this rather primitive model, quite innocent of quantum mechanics, has worked so well. However, an improved and more complete model that incorporates molecular orbital theory is known as ligand field theory . In an isolated transition metal atom the five outermost d orbitals all have the same energy which depends solely on the spherically symmetric electric field due to the nuclear charge and the other electrons of the atom. Suppose now that this atom is made into a cation and is placed in solution, where it forms a hydrated species in which six H 2 O molecules are coordinated to the central ion in an octahedral arrangement. An example of such an ion might be hexaaquotitanium(III), Ti(H 2 O) 6 3+ .
The ligands (H 2 O in this example) are bound to the central ion by electron pairs contributed by each ligand. Because the six ligands are located at the corners of an octahedron centered around the metal ion, these electron pairs are equivalent to clouds of negative charge that are directed from near the central ion out toward the corners of the octahedron. We will call this an octahedral electric field, or the ligand field .
d-orbital splitting
The differing shapes of the five kinds of d orbitals cause them to interact differently with the electric fields created by the coordinated ligands. This diagram (from a Purdue U. chemistry site) shows outlines of five kinds of d orbitals.
The green circles represent the coordinating electron-pairs of the ligands located at the six corners of the octahedron around the central atom. The two d orbitals at the bottom have regions of high electron density pointing directly toward the ligand orbitals; the resulting electron-electron repulsion raises the energy of these d orbitals.
Although the five d orbitals of the central atom all have the same energy in a spherically symmetric field, their energies will not all be the same in the octahedral field imposed by the presence of the ligands. The reason for this is apparent when we consider the different geometrical properties of the five d orbitals. Two of the d orbitals, designated d z 2 and d x 2 - y 2 , have their electron clouds pointing directly toward ligand atoms. We would expect that any electrons that occupy these orbitals would be subject to repulsion by the electron pairs that bind the ligands that are situated at corresponding corners of the octahedron. As a consequence, the energies of these two d orbitals will be raised in relation to the three other d orbitals whose lobes are not directed toward the octahedral positions.
The number of electrons in the d orbitals of the central atom is easily determined from the location of the element in the periodic table, taking into account, of course, the number of electrons removed in order to form the positive ion.
The effect of the octahedral ligand field due to the ligand electron pairs is to split the d orbitals into two sets whose energies differ by a quantity denoted by Δ ("delta") which is known as the d orbital splitting energy . Note that both sets of central-ion d orbitals are repelled by the ligands and are both raised in energy; the upper set is simply raised by a greater amount. Both the total energy shift and Δ are strongly dependent on the particular ligands.
Why are transition metal complexes often highly colored?
Returning to our example of Ti(H 2 O) 6 3+ , we note that Ti has an outer configuration of 4s 2 3d 2 , so that Ti 3 + will be a d 1 ion. This means that in its ground state, one electron will occupy the lower group of d orbitals, and the upper group will be empty. The d -orbital splitting in this case is 240 kJ per mole, which corresponds to light of blue-green color; absorption of this light promotes the electron to the upper set of d orbitals, which represents the excited state of the complex. If we illuminate a solution of Ti(H 2 O) 6 3+ with white light, the blue-green light is absorbed and the solution appears violet in color.
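The correspondence between Δ and the absorbed color is just the Planck relation applied per mole, λ = N A hc/Δ. A minimal sketch of the conversion:

```python
# Convert a d-orbital splitting energy (kJ/mol) to the wavelength of the
# light absorbed: lambda = N_A * h * c / Delta.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
NA = 6.02214076e23    # Avogadro constant, 1/mol

def wavelength_nm(delta_kj_per_mol):
    delta_per_photon = delta_kj_per_mol * 1000 / NA   # energy per photon, J
    return h * c / delta_per_photon * 1e9             # wavelength, nm

print(round(wavelength_nm(240)))  # ~498 nm, blue-green light
```

A splitting of 240 kJ/mol corresponds to roughly 498 nm, in the blue-green region of the spectrum, consistent with the violet color of the transmitted light.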
High- and low spin complexes
The magnitude of the d orbital splitting depends strongly on the nature of the ligand and in particular on how strong an electrostatic field is produced by its electron pair bond to the central ion.
If Δ is not too large then the electrons that occupy the d orbitals do so with their spins unpaired until a d 5 configuration is reached, just as occurs in the normal Aufbau sequence for atomic electron configurations. Thus a weak-field ligand such as H 2 O leads to a “high spin” complex with Fe(II).
In contrast to this, the cyanide ion acts as a strong-field ligand; the d orbital splitting is so great that it is energetically more favorable for the electrons to pair up in the lower group of d orbitals rather than to enter the upper group with unpaired spins. Thus hexacyanoiron(II) is a “low spin” complex— actually zero spin, in this particular case.
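The two limiting filling rules can be expressed compactly. The function below is an illustrative sketch for octahedral d n ions only; it ignores intermediate-field cases and simply applies the weak-field limit (Hund's rule across all five orbitals) and the strong-field limit (fill the t 2g set first).

```python
# Illustrative high-spin / low-spin electron count for an octahedral d^n ion.
# Weak field: Hund's rule operates across all five d orbitals before pairing.
# Strong field: the three t2g orbitals fill completely before eg is used.
def unpaired(n_d, strong_field):
    if strong_field:
        t2g = min(n_d, 6)          # up to 6 electrons in the three t2g orbitals
        eg = n_d - t2g             # remainder goes into the two eg orbitals
        up_t2g = t2g if t2g <= 3 else 6 - t2g
        up_eg = eg if eg <= 2 else 4 - eg
        return up_t2g + up_eg
    # weak field: singly occupy all five orbitals, then pair
    return n_d if n_d <= 5 else 10 - n_d

print(unpaired(6, strong_field=False))  # d6 Fe(II) with H2O: 4 (high spin)
print(unpaired(6, strong_field=True))   # d6 Fe(II) with CN-: 0 (low spin)
```

For d 6 iron(II) this reproduces the two cases described above: four unpaired electrons with the weak-field aquo ligands, and zero with cyanide.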
Different d orbital splitting patterns occur in square planar and tetrahedral coordination geometries, so a very large number of arrangements are possible. In most complexes the value of Δ corresponds to the absorption of visible light, accounting for the colored nature of many such compounds in solution and in solids such as \(\ce{CuSO4·5H2O}\) (Figure \(\PageIndex{1}\)).
Coordination Complexes in Biochemistry
Approximately one-third of the chemical elements are present in living organisms. Many of these are metallic ions whose function within the cell depends on the formation of d -orbital coordination complexes with small molecules such as porphyrins (see below). These complexes are themselves bound within proteins ( metalloproteins ) which provide a local environment that is essential for their function, which is either to transport or store diatomic molecules (oxygen or nitric oxide), to transfer electrons in oxidation-reduction processes, or to catalyze a chemical reaction. The most common of these utilize complexes of Fe and Mg, but other micronutrient metals including Cu, Mn, Mo, Ni, Se, and Zn are also important.
Hemoglobin
Hemoglobin is one of a group of heme proteins that includes myoglobin, cytochrome-c, and catalase. Hemoglobin performs the essential task of transporting dioxygen molecules from the lungs to the tissues in which it is used to oxidize glucose, this oxidation serving as the source of energy required for cellular metabolic processes.
Hemoglobin consists of four globin protein subunits (depicted by different colors in this diagram) joined together by weak intermolecular forces. Each of these subunits contains, buried within it, a molecule of heme , which serves as the active site of oxygen transport. The presence of hemoglobin increases the oxygen carrying capacity of 1 liter of blood from 5 to 250 ml. Hemoglobin is also involved in blood pH regulation and CO 2 transport.
Heme itself consists of an iron atom coordinated to a tetradentate porphyrin . When in the ferrous (Fe 2+ ) state, the iron binds to oxygen and is converted into Fe 3 + . Because a bare heme molecule would become oxidized by the oxygen without binding to it, the adduct must be stabilized by the surrounding globin protein. In this environment, the iron becomes octahedrally-coordinated through binding to a component of the protein in a fifth position, and in the sixth position either by an oxygen molecule or by a water molecule, depending on whether the hemoglobin is in its oxygenated state (in arteries) or deoxygenated state (in veins).
The heme molecule (purple) is enfolded within the polypeptide chain as shown here. The complete hemoglobin molecule contains four of these subunits, and all four must be present for it to function. The binding of O 2 to heme in hemoglobin is not a simple chemical equilibrium; the binding efficiency is regulated by the concentrations of H + , CO 2 , and organic phosphates. It is remarkable that the binding sites for these substances are on the outer parts of the globin units, far removed from the heme. The mechanism of this exquisite molecular-remote-control arises from the fact that the Fe 2+ ion is too large to fit inside the porphyrin, so it sits slightly out of the porphyrin plane. This Fe radius diminishes when it is oxygenated, allowing it to move into the plane. In doing so, it pulls the protein component to which it is bound with it, triggering a sequence of structural changes that extend throughout the protein.
Myoglobin is another important heme protein that is found in muscles. Unlike hemoglobin, which consists of four protein subunits, myoglobin is made up of only one unit. Its principal function is to act as an oxygen storage reservoir, enabling vigorous muscle activity at a rate that could not be sustained by delivery of oxygen through the bloodstream. Myoglobin is responsible for the red color of meat. Cooking of meat releases the O 2 and oxidizes the iron to the +3 state, changing the color to brown.
Carbon monoxide poisoning
Other ligands, notably cyanide ion and carbon monoxide , are able to bind to the iron of hemoglobin much more strongly than does oxygen, thereby displacing it and rendering hemoglobin unable to transport oxygen. Air containing as little as 1 percent CO will convert hemoglobin to carboxyhemoglobin in a few hours, leading to loss of consciousness and death. Even small amounts of carbon monoxide can lead to substantial reductions in the availability of oxygen. The 400-ppm concentration of CO in cigarette smoke will tie up about 6% of the hemoglobin in heavy smokers; the increased stress this places on the heart as it works harder to compensate for the oxygen deficit is believed to be one reason why smokers are at higher risk for heart attacks. CO binds to hemoglobin 200 times more tightly than does \(O_2\).
Chlorophyll
Chlorophyll is the light-harvesting pigment present in green plants. Its name comes from the Greek word χλορος ( chloros ), meaning “green”- the same root from which chlorine gets its name. Chlorophyll consists of a ring-shaped tetradentate ligand known as a porphin coordinated to a central magnesium ion. A histidine residue from one of several types of associated proteins forms a fifth coordinate bond to the Mg atom.
The light energy trapped by chlorophyll is utilized to drive a sequence of reactions whose net effect is to bring about the reduction of CO 2 to glucose (C 6 H 12 O 6 ) in a process known as photosynthesis which serves as the fuel for all life processes in both plants and animals.
9.10: Bonding in Metals
- Explain the fundamental difference between the bonding in metallic solids compared to that in other types of solids and within molecules. Name some physical properties of metals that reflect this difference.
- Sketch out a diagram illustrating how a simple molecular-orbital approach to bonding in metals of Groups 1 and 2 always leaves some upper MO's empty.
- Describe, at the simplest level, the origin of electron "bands" in metals.
- Describe how the electrical and thermal conductivity of metals can be explained according to band theory.
- Explain why the electrical conductivity of a metal decreases with temperature, whereas that of a semiconductor increases.
Most of the known chemical elements are metals, and many of these combine with each other to form a large number of intermetallic compounds . The special properties of metals— their bright, lustrous appearance, their high electrical and thermal conductivities, and their malleability— suggest that these substances are bound together in a very special way.
Properties of metals
The fact that the metallic elements are found on the left side of the periodic table offers an important clue to the nature of how they bond together to form solids.
- These elements all possess low electronegativities and readily form positive ions M n + . Because they show no tendency to form negative ions, the kind of bonding present in ionic solids can immediately be ruled out.
- The metallic elements have empty or nearly-empty outer p -orbitals, so there are never enough outer-shell electrons to place an octet around an atom.
These points lead us to the simplest picture of metals, which regards them as a lattice of positive ions immersed in a “sea of electrons” which can freely migrate throughout the solid. In effect the electropositive nature of the metallic atoms allows their valence electrons to exist as a mobile fluid which can be displaced by an applied electric field, hence giving rise to their high electrical conductivities . Because each ion is surrounded by the electron fluid in all directions, the bonding has no directional properties; this accounts for the high malleability and ductility of metals.
This view is an oversimplification that fails to explain metals in a quantitative way and cannot account for the differences in the properties of individual metals. A more detailed treatment, known as the bond theory of metals, applies the idea of resonance hybrids to metallic lattices. In the case of an alkali metal, for example, this would involve a large number of hybrid structures in which a given Na atom shares its electron with its various neighbors.
Molecular orbitals in metals
In an extended lattice of metal atoms, the molecular orbitals are constructed by combining the individual atomic orbitals, yielding MO’s that are so closely spaced in energy that they form what is known as a band of allowed energies. In metallic lithium only the lower half of this band is occupied.

Origin of Metallic Properties
Metallic solids possess special properties that set them apart from other classes of solids and make them easy to identify and familiar to everyone. All of these properties derive from the liberation of the valence electrons from the control of individual atoms, allowing them to behave as a highly mobile fluid that fills the entire crystal lattice. What were previously valence-shell orbitals of individual atoms become split into huge numbers of closely-spaced levels known as bands that extend throughout the crystal.
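The crowding of discrete levels into a band can be illustrated with a one-dimensional Hückel/tight-binding chain (a toy model chosen here for illustration; the site energy α and coupling β are arbitrary parameters, not fitted to any real metal). A chain of N identical orbitals with open ends has eigenvalues E k = α + 2β cos(kπ/(N+1)); as N grows, the levels fill a band of width approaching 4|β| while the spacing between adjacent levels shrinks toward zero.

```python
# Toy tight-binding chain: N identical orbitals, site energy alpha,
# nearest-neighbor coupling beta. Shows how discrete levels become a band.
import math

def chain_levels(n_atoms, alpha=0.0, beta=-1.0):
    """Energies E_k = alpha + 2*beta*cos(k*pi/(N+1)) for an open chain."""
    return sorted(alpha + 2 * beta * math.cos(k * math.pi / (n_atoms + 1))
                  for k in range(1, n_atoms + 1))

for n in (2, 10, 100):
    levels = chain_levels(n)
    width = levels[-1] - levels[0]
    min_gap = min(b - a for a, b in zip(levels, levels[1:]))
    print(f"N={n}: band width {width:.3f}, smallest level spacing {min_gap:.5f}")
# The width approaches 4|beta| while the spacing between adjacent
# levels shrinks toward zero: a quasi-continuous band.
```

With only two atoms this is the familiar bonding/antibonding pair; with a hundred, the levels are so dense that even thermal energies can promote electrons within the band, which is the basis of the conductivity arguments that follow.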
Melting Point and Strength
The strength of a metal derives from the electrostatic attraction between the lattice of positive ions and the fluid of valence electrons in which they are immersed. The larger the nuclear charge (atomic number) of the atomic kernel and the smaller its size, the greater this attraction. As with many other periodic properties, these work in opposite ways, as is seen by comparing the melting points of some of the Group 1-3 metals (right). Other factors, particularly the lattice geometry are also important, so exceptions such as is seen in Mg are not surprising.
In general, the transition metals with their valence-level d electrons are stronger and have higher melting points: Fe 1539°C, Re 3180°C, Os 2727°C, W 3380°C.
Malleability and ductility
These terms refer respectively to how readily a solid can be shaped by pressure (forging, hammering, rolling into a sheet) and by being drawn out into a wire. Metallic solids are known and valued for these qualities, which derive from the non-directional nature of the attractions between the kernel atoms and the electron fluid. The bonding within ionic or covalent solids may be stronger, but it is also directional, making these solids subject to fracture (brittle) when struck with a hammer, for example. A metal, by contrast, is more likely to be simply deformed or dented.
Electrical conductivity: why are metals good conductors?
In order for a substance to conduct electricity, it must contain charged particles ( charge carriers ) that are sufficiently mobile to move in response to an applied electric field. In the case of ionic solutions and melts, the ions themselves serve this function. (Ionic solids contain the same charge carriers, but because they are fixed in place, these solids are insulators.) In metals the charge carriers are the electrons, and because they move freely through the lattice, metals are highly conductive. The very low mass and inertia of the electrons allows them to conduct high-frequency alternating currents, something that electrolytic solutions are incapable of. In terms of the band structure, application of an external field simply raises some of the electrons to previously unoccupied levels which possess greater momentum.
The conductivity of an electrolytic solution decreases as the temperature falls due to the increase in "viscosity", which inhibits ionic mobility. The mobility of the electron fluid in metals is practically unaffected by temperature, but metals do suffer a slight conductivity decrease (opposite to ionic solutions) as the temperature rises; this happens because the more vigorous thermal motions of the kernel ions disrupt the uniform lattice structure that is required for free motion of the electrons within the crystal. Silver is the most conductive metal, followed by copper, gold, and aluminum.
Metals conduct electricity readily because of the essentially infinite supply of higher-energy empty MOs that electrons can populate as they acquire higher kinetic energies. This diagram illustrates the overlapping band structure (explained farther on) in beryllium. The MO levels are so closely spaced that even thermal energies can provide excitation and cause heat to rapidly spread through the solid.
Electrical conductivities of the metallic elements vary over a wide range. Notice that those of silver and copper (the highest of any metal) are in classes by themselves. Gold and aluminum follow close behind.
Thermal conductivity: why do metals conduct heat?
Everyone knows that touching a metallic surface at room temperature produces a colder sensation than touching a piece of wood or plastic at the same temperature. The very high thermal conductivity of metals allows them to draw heat out of our bodies very efficiently if they are below body temperature. In the same way, a metallic surface that is above body temperature will feel much warmer than one made of some other material. The high thermal conductivity of metals is attributed to vibrational excitations of the fluid-like electrons; this excitation spreads through the crystal far more rapidly than it does in non-metallic solids which depend on vibrational motions of atoms which are much heavier and possess greater inertia.
Appearance: Why are metals shiny?
We usually recognize a metal by its “metallic luster”, which refers to its ability to reflect light. When light falls on a metal, its rapidly changing electromagnetic field induces similar motions in the more loosely-bound electrons near the surface (this could not happen if the electrons were confined to the atomic valence shells.) A vibrating charge is itself an emitter of electromagnetic radiation, so the effect is to cause the metal to re-emit, or reflect , the incident light, producing the shiny appearance. What color is a metal? With the two exceptions of copper and gold, the closely-spaced levels in the bands allow metals to absorb all wavelengths equally well, so most metals are basically black, but this is ordinarily evident only when the metallic particles are so small that the band structure is not established.
The distinctive color of gold is a consequence of Einstein's theory of special relativity acting on the extremely high momentum of the inner-shell electrons, increasing their mass and causing the orbitals to contract. The outer (5d) electrons are less affected, and this gives rise to increased blue-light absorption, resulting in enhanced reflection of yellow and red light.
Thermionic Effect
The electrons within the electron fluid have a distribution of velocities very much like that of molecules in a gas. When a metal is heated sufficiently, a fraction of these electrons will acquire sufficient kinetic energy to escape the metal altogether; some of the electrons are essentially “boiled out” of the metal. This thermionic effect , which was first observed by Thomas Edison, was utilized in vacuum tubes which served as the basis of electronics from its beginning around 1910 until semiconductors became dominant in the 1960’s.
Band Structure of Metals
Most metals are made of atoms that have an outer configuration of s 2 , which we would expect to completely fill the band of MO’s we have described. With the band completely filled and no empty levels above, we would not expect elements such as beryllium to be metallic. What happens is that the empty p orbitals also split into a band. Although the energy of the 2 p orbital of an isolated Be atom is about 160 kJ greater than that of the 2 s orbital, the bottom part of the 2 p band overlaps the upper part of the 2 s band, yielding a continuous conduction band that has plenty of unoccupied orbitals. It is only when these bands become filled with 2 p electrons that the elements lose their metallic character.
This diagram illustrates the band structure in a 3 rd -row metal such as Na or Mg, and how it arises from MO splitting in very small units M 2 - M 6 . The conduction bands for the "infinite" molecule M N are shaded.
In most metals there will be bands derived from the outermost s -, p -, and d atomic levels, leading to a system of bands, some of which will overlap as described above. Where overlap does not occur, the almost continuous energy levels of the bands are separated by a forbidden zone, or band gap . Only the outermost atomic orbitals form bands; the inner orbitals remain localized on the individual atoms and are not involved in bonding.
In its mathematical development, the band model relies strongly on the way that the free electrons within the metal interact with the ordered regularity of the crystal lattice. The alternative view shown here emphasizes this aspect by showing the inner orbitals as localized to the atomic cores, while the valence electrons are delocalized and belong to the metal as a whole, which in a sense constitutes a huge molecule in its own right.
9.11: Bonding in Semiconductors
- With the aid of simple diagrams, show how different band energy ranges in solids can produce conductors, insulators, and semiconductors.
- Describe the nature and behavior of a simple PN junction.
The band theory of solids provides a clear set of criteria for distinguishing between conductors (metals), insulators and semiconductors. As we have seen, a conductor must possess an upper range of allowed levels that are only partially filled with valence electrons. These levels can be within a single band, or they can be the combination of two overlapping bands. A band structure of this type is known as a conduction band .
Band arrangements in conductors. Metallic conduction requires the presence of empty levels into which electrons can move as they acquire momentum. This can be achieved when a band is only partially occupied or overlaps an empty band (right), or when the gap between a filled band and an upper empty one is sufficiently small (left) to allow ordinary thermal energy to supply the promotion energy.
Insulators and semiconductors
An insulator is characterized by a large band gap between the highest filled band and an even higher empty band. The band gap is sufficiently great to prevent any significant population of the upper band by thermal excitation of electrons from the lower one. The presence of a very intense electric field may be able to supply the required energy, in which case the insulator undergoes dielectric breakdown . Most molecular crystals are insulators, as are covalent crystals such as diamond.
If the band gap is sufficiently small to allow electrons in the filled band below it to jump into the upper empty band by thermal excitation, the solid is known as a semiconductor . In contrast to metals, whose electrical conductivity decreases with temperature (the more intense lattice vibrations interfere with the transfer of momentum by the electron fluid), the conductivity of semiconductors increases with temperature. In many cases the excitation energy can be provided by absorption of light, so most semiconductors are also photoconductors . Examples of semiconducting elements are Se, Te, Bi, Ge, Si, and graphite.
The presence of an impurity in a semiconductor can introduce a new band into the system. If this new band is situated within the forbidden region, it creates a new and smaller band gap that will increase the conductivity. The huge semiconductor industry is based on the ability to tailor the band gap to fit the desired application by introducing an appropriate impurity atom ( dopant ) into the semiconductor lattice. The dopant elements are normally atoms whose valence shells contain one electron more or less than the atoms of the host crystal.
Semiconductor materials have traditionally been totally inorganic, composed mostly of the lighter P-block elements. More recently, organic semiconductors have become an important field of study and development.
Thermal properties of Semiconductors
At absolute zero, all of the charge carriers reside in the lower of the bands below the small band gap in a semiconductor (that is, in the valence band of the illustration on the left above, or in the impurity band of the one on the right.) At higher temperatures, thermal excitation of the electrons allows an increasing fraction to jump across this band gap and populate either the empty impurity band or the conduction band as shown at the right. The effect is the same in either case; the semiconductor becomes more conductive as the temperature is raised. Note that this is just the opposite of the way temperature affects the conductivity of metals.
N- and P-type materials
For example, a phosphorus atom introduced as an impurity into a silicon lattice possesses one more valence electron than Si. This electron is delocalized within the impurity band and serves as the charge carrier in what is known as an N-type semiconductor . In a semiconductor of the P-type, the dopant is an atom such as gallium, which has only three valence electrons. This creates what amounts to an electron deficiency or hole in the electron fabric of the crystal, although the solid remains electrically neutral overall. As this vacancy is filled by electrons from silicon atoms, the vacancy hops to another location, so the charge carrier is effectively a positively charged hole, hence the P-type designation.
Substitution of just one dopant atom into 10 7 atoms of Si can increase the conductivity by a factor of 100,000.
The PN junction
When P- and N-type materials are brought into contact, a PN junction is created. Holes in the P material and electrons in the N material drift toward and neutralize each other, creating a depletion region that is devoid of charge carriers. But the destruction of these carriers leaves immobile positive ions in the N material and negative ions in the P material, giving rise to an interfacial potential difference (" space charge ") as depicted here.
As this charge builds up, it acts to resist the further diffusion of electrons and holes, leaving a carrier-free depletion region , which acts as a barrier at the junction interface.
9.12: The Shared-Electron Covalent Bond
More than one non-equivalent structure
It sometimes happens that the octet rule can be satisfied by arranging the electrons in different ways. For example, there are three different ways of writing valid electron dot structures for the thiocyanate ion SCN – . Some of these structures are more realistic than others; to decide among them, you need to know something about the concepts of formal charge and electronegativity . These topics are discussed in the lesson that follows this one, where examples of working out such structures are given.
10: Fundamentals of Acids and Bases
Acids and bases touch upon virtually all areas of chemistry, biochemistry, and physiology. This set of lessons will get you started by presenting the underlying concepts in a systematic way; aside from the section on pH, which presumes an elementary knowledge of logarithms, no special mathematical background is assumed. The subject of acid-base equilibrium calculations is not covered in this lesson.
-
- 10.1: Introduction to Acids and Bases
- The concepts of an acid, a base, and a salt are very old ones that have undergone several major refinements as chemical science has evolved. Our treatment of the subject at this stage will be mainly conceptual and qualitative, emphasizing the definitions and fundamental ideas associated with acids and bases. We will not cover calculations involving acid-base equilibria in these lessons.
-
- 10.2: Aqueous Solutions- pH and Titrations
- As you will see in the lesson that follows this one, water plays an essential role in acid-base chemistry as we ordinarily know it. To even those who know very little about chemistry, the term pH is recognized as a measure of "acidity", so the major portion of this unit is devoted to the definition of pH and of the pH scale. But since these topics are intimately dependent on the properties of water and its ability to dissociate into hydrogen and hydroxyl ions, we begin with the properties of water itself.
-
- 10.3: Acid-base reactions à la Brønsted
- In this lesson we develop this concept and illustrate its applications to "strong" and "weak" acids and bases, emphasizing the common theme that acid-base chemistry is always a competition between two bases for the proton. In the final section, we show how the concept of "proton energy" can help us understand and predict the direction and extent of common types of acid-base reactions without the need for calculations.
-
- 10.4: Acid-Base Reactions
- Will this acid react with that base? And if so, to what extent? These questions can be answered quantitatively by carrying out the detailed equilibrium calculations you will learn about in another lesson. However, modern acid-base chemistry offers a few simple principles that can enable you to make a qualitative decision at a glance. More importantly, the ideas which we develop in this section are guaranteed to give you a far better conceptual understanding of proton-based acid-base reactions.
-
- 10.5: Lewis Acids and Bases
- The Brønsted-Lowry proton donor-acceptor concept has been one of the most successful theories of Chemistry. But as with any such theory, it is fair to ask if this is not just a special case of a more general theory that could encompass an even broader range of chemical science. In 1916, G.N. Lewis of the University of California proposed that the electron pair is the dominant actor in acid-base chemistry.
-
- 10.6: Types of Acids and Bases
- You will already have noticed that not every compound that contains hydrogen atoms is acidic; e.g., ammonia gives an alkaline aqueous solution. Similarly, some compounds containing the group -OH are basic, but others are acidic. An important part of understanding chemistry is being able to recognize what substances will exhibit acidic and basic properties in aqueous solution. Fortunately, most of the common acids and bases fall into a small number of fairly well-defined groups.
-
- 10.7: Acid-Base Gallery
- Acids and bases are of interest not only to the chemically inclined; they play a major role in our modern industrial society — so anyone who participates in it, or who is interested in its history and development, needs to know something about them. Five of the major acids and bases fall into the "Top 20" industrial chemicals manufactured in the world.
10.1: Introduction to Acids and Bases
Make sure you thoroughly understand the following essential ideas, which are presented in this lesson.
- Suggest simple tests you could carry out to determine if an unknown substance is an acid or a base.
- State the chemical definitions of an acid and a base in terms of their behavior in water.
- Write the formula of the salt formed when a given acid and base are combined.
The concepts of an acid, a base, and a salt are very old ones that have undergone several major refinements as chemical science has evolved. Our treatment of the subject at this stage will be mainly conceptual and qualitative, emphasizing the definitions and fundamental ideas associated with acids and bases. We will not cover calculations involving acid-base equilibria in these lessons.
1 Acids
The term acid was first used in the seventeenth century; it comes from the Latin root ac -, meaning “sharp”, as in acetum , vinegar. Some early writers suggested that acidic molecules might have sharp corners or spine-like projections that irritate the tongue or skin. Acids have long been recognized as a distinctive class of compounds whose aqueous solutions exhibit the following properties:
- A characteristic sour taste (think of lemon juice!);
- ability to change the color of litmus from blue to red;
- react with certain metals to produce gaseous H 2 ;
- react with bases to form a salt and water.
Note: Litmus
Litmus is a natural dye found in certain lichens. The name is of Scandinavian origin, e.g. lit (color) + mosi (moss) in Icelandic. "Litmus test" has acquired a meaning that transcends both Chemistry and science to denote any kind of test giving a yes/no answer.
How oxygen got mis-named
The first chemistry-based definition of an acid turned out to be wrong: in 1787, Antoine Lavoisier, as part of his masterful classification of substances, identified the known acids as a separate group of the “complex substances” (compounds). Their special nature, he postulated, derived from the presence of some common element that embodies the “acidity” principle, which he named oxygen , derived from the Greek for “acid former”.
Note
Lavoisier had recently assigned this name to the new gaseous element that Joseph Priestley had discovered a few years earlier as the essential substance that supports combustion. Many combustion products (oxides) do give acidic solutions, and oxygen is in fact present in most acids, so Lavoisier’s mistake is understandable. In 1811 Humphry Davy showed that muriatic (hydrochloric) acid (which Lavoisier had regarded as an element) does not contain oxygen, but this merely convinced some that chlorine was not an element but an oxygen-containing compound. Although a dozen oxygen-free acids had been discovered by 1830, it was not until about 1840 that the hydrogen theory of acids became generally accepted. By this time, the misnomer oxygen was too well established a name to be changed. The root oxy comes from the Greek word ὀξύς, which means "sour".
2 Acids and the hydrogen ion
The key to understanding acids (as well as bases and salts) had to await Michael Faraday's mid-nineteenth century discovery that solutions of salts (known as electrolytes) conduct electricity. This implies the existence of charged particles that can migrate under the influence of an electric field. Faraday named these particles ions (“wanderers”). Later studies on electrolytic solutions suggested that the properties we associate with acids are due to the presence of an excess of hydrogen ions in the solution. By 1890 the Swedish chemist Svante Arrhenius (1859-1927) was able to formulate the first useful theory of acids:
Arrhenius Definition
"an acidic substance is one whose molecular unit contains at least one hydrogen atom that can dissociate, or ionize , when dissolved in water, producing a hydrated hydrogen ion and an anion."
| hydrochloric acid | HCl → H + (aq) + Cl – (aq) |
| sulfuric acid | H 2 SO 4 → H + (aq) + HSO 4 – (aq) |
| hydrogen sulfite ion | HSO 3 – (aq) → H + (aq) + SO 3 2– (aq) |
| acetic acid | H 3 CCOOH → H + (aq) + H 3 CCOO – (aq) |
Strictly speaking, an “Arrhenius acid” must contain hydrogen. However, there are substances that do not themselves contain hydrogen, but still yield hydrogen ions when dissolved in water; the hydrogen ions come from the water itself, by reaction with the substance.
Definition of an acid
An acid is a substance that yields an excess of hydrogen ions when dissolved in water.

There are three important points to understand about hydrogen in acids:
- Although all Arrhenius acids contain hydrogen, not all hydrogen atoms in a substance are capable of dissociating; thus the –CH 3 hydrogens of acetic acid are “non-acidic”. An important part of knowing chemistry is being able to predict which hydrogen atoms in a substance will be able to dissociate into hydrogen ions.
- Those hydrogens that do dissociate can do so to different degrees. The strong acids such as HCl and HNO 3 are effectively 100% dissociated in solution. Most organic acids, such as acetic acid, are weak ; only a small fraction of the acid is dissociated in most solutions. HF and HCN are examples of weak inorganic acids.
- Acids that possess more than one dissociable hydrogen atom are known as polyprotic acids; H 2 SO 4 and H 3 PO 4 are well-known examples. Intermediate forms such as HPO 4 2– , being capable of both accepting and losing protons, are called ampholytes .
| H 2 SO 4 sulfuric acid | → | HSO 4 – hydrogen sulfate ("bisulfate") ion | → | SO 4 2– sulfate ion | | |
| H 2 S hydrosulfuric acid | → | HS – hydrosulfide ion | → | S 2– sulfide ion | | |
| H 3 PO 4 phosphoric acid | → | H 2 PO 4 – dihydrogen phosphate ion | → | HPO 4 2– hydrogen phosphate ion | → | PO 4 3– phosphate ion |
| HOOC-COOH oxalic acid | → | HOOC-COO – hydrogen oxalate ion | → | – OOC-COO – oxalate ion | | |
You will find out in a later section of this lesson that hydrogen ions cannot exist as such in water, but don't panic! It turns out that chemists still find it convenient to pretend as if they are present, and to write reactions that include them.
3 Bases
The name base has long been associated with a class of compounds whose aqueous solutions are characterized by:
- a bitter taste;
- a “soapy” feeling when applied to the skin
- ability to restore the original blue color of litmus that has been turned red by acids
- ability to react with acids to form salts.
- react with certain metals to produce gaseous H 2
Note
Just as an acid is a substance that liberates hydrogen ions into solution, a base yields hydroxide ions when dissolved in water:
NaOH (s) → Na + (aq) + OH – (aq)
Sodium hydroxide is an Arrhenius base because it contains hydroxide ions. However, other substances which do not contain hydroxide ions can nevertheless produce them by reaction with water, and are therefore also classified as bases. Two classes of such substances are the metal oxides and the hydrogen compounds of certain nonmetals:
Na 2 O (s) + H 2 O → [2 NaOH] → 2 Na + (aq) + 2 OH – (aq)
NH 3 + H 2 O → NH 4 + (aq) + OH – (aq)
Definition of a base

A base is a substance that yields an excess of hydroxide ions when dissolved in water.

4 Neutralization
Acids and bases react with one another to yield two products: water, and an ionic compound known as a salt . This kind of reaction is called a neutralization reaction .
For example, when hydrochloric acid reacts with sodium hydroxide, HCl + NaOH → NaCl + H 2 O. This "molecular" equation is convenient to write, but we need to re-cast it as a net ionic equation to reveal what is really going on here when the reaction takes place in water, as is almost always the case.
H + + Cl – + Na + + OH – → Na + + Cl – + H 2 O
If we cancel out the ions that appear on both sides (and therefore don't really participate in the reaction), we are left with the net equation
H + (aq) + OH – (aq) → H 2 O ( 1 )
which is the fundamental process that occurs in all neutralization reactions.
Note
Confirmation that this equation describes all neutralization reactions that take place in water is provided by experiments indicating that no matter what acid and base are combined, all liberate the same amount of heat (57.7 kJ) per mole of H + neutralized.
In the case of a weak acid, or a base that is not very soluble in water, more than one step might be required. For example, a similar reaction can occur between acetic acid and calcium hydroxide to produce calcium acetate :
2 CH 3 COOH + Ca(OH) 2 → (CH 3 COO) 2 Ca + 2 H 2 O
If this takes place in aqueous solution, the reaction is really between the very small quantities of H + and OH – resulting from the dissociation of the acid and the dissolution of the base, so the reaction is identical with Equation 1:
H + (aq) + OH – (aq) → H 2 O
If, on the other hand, we add solid calcium hydroxide to pure liquid acetic acid, the net reaction would include both reactants in their "molecular" forms:
2 CH 3 COOH (l) + Ca(OH) 2 (s) → 2 CH 3 COO – + Ca 2 + + 2 H 2 O
The “salt” that is produced in a neutralization reaction consists simply of the anion and cation that were already present. The salt can be recovered as a solid by evaporating the water. | libretexts | 2025-03-17T19:53:13.821774 | 2013-10-03T01:37:43 | {
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/10%3A_Fundamentals_of_Acids_and_Bases/10.01%3A_Introduction_to_Acids_and_Bases",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "10.1: Introduction to Acids and Bases",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/10%3A_Fundamentals_of_Acids_and_Bases/10.02%3A_Aqueous_Solutions-_pH_and_Titrations | 10.2: Aqueous Solutions- pH and Titrations
Make sure you thoroughly understand the following essential ideas, which are presented in this lesson. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.
- Define the ion product of water , and know its room-temperature value.
- State the criteria, in terms of H + and OH – concentrations, of an acidic , alkaline , and a neutral solution.
- Given the effective hydrogen ion concentration in a solution, calculate the pH . Conversely, find the hydrogen- or hydroxide ion concentration in a solution having a given pH.
- Find the pH or pOH of a solution when either one is known.
- Describe the process of acid-base titration , and explain the significance of the equivalence point.
- Sketch a typical titration curve for a monoprotic or polyprotic acid .
As you will see in the lesson that follows this one, water plays an essential role in acid-base chemistry as we ordinarily know it. To even those who know very little about chemistry, the term pH is recognized as a measure of "acidity", so the major portion of this unit is devoted to the definition of pH and of the pH scale. But since these topics are intimately dependent on the properties of water and its ability to dissociate into hydrogen and hydroxyl ions, we begin our discussion with this topic. We end this lesson with a brief discussion of acid-base titration— probably the most frequently carried-out chemistry laboratory operation in the world.
Dissociation of water
The ability of acids to react with bases depends on the tendency of hydrogen ions to combine with hydroxide ions to form water:
H + (aq) + OH – (aq) → H 2 O ( 1 )
This tendency happens to be very great, so the reaction is practically complete— but not "completely" complete; a few stray H + and OH – ions will always be present. What's more, this is true even if you start with the purest water attainable. This means that in pure water, the reverse reaction, the "dissociation" of water
H 2 O → H + (aq) + OH – (aq) (2)
will proceed to a very slight extent. Both reactions take place simultaneously, but (1) is so much faster than (2) that only a minute fraction of H 2 O molecules are dissociated.
Liquids that contain ions are able to conduct an electric current. Pure water is practically an insulator, but careful experiments show that even the most highly purified water exhibits a very slight conductivity that corresponds to a concentration of both H + and OH – ions of almost exactly 1.00 × 10 –7 mol L –1 at 25°C.
All chemical reactions that take place in a single phase (such as in a solution) are theoretically "incomplete" and are said to be reversible.
Example: What fraction of the water molecules in a liter of water are dissociated?

Solution:

1 L of water has a mass of 1000 g. The number of moles in 1000 g of H 2 O is (1000 g)/(18 g mol –1 ) = 55.5 mol, which corresponds to (55.5 mol)(6.02 × 10 23 mol –1 ) = 3.34 × 10 25 H 2 O molecules.

An average of 10 –7 mol, or (10 –7 )(6.02 × 10 23 ) = 6.0 × 10 16 , H 2 O molecules will be dissociated at any time. The fraction of dissociated water molecules is therefore (6.0 × 10 16 )/(3.34 × 10 25 ) = 1.8 × 10 –9 .

Thus we can say that only about two out of every billion (10 9 ) water molecules will be dissociated.
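The arithmetic in this example is easy to verify programmatically. Here is a short Python sketch of the same calculation; the numbers are taken from the example above, and the variable names are purely illustrative:

```python
# Fraction of water molecules dissociated in 1 L of pure water at 25 °C
AVOGADRO = 6.02e23            # molecules per mole

moles_water = 1000 / 18.0     # 1 L of water is ~1000 g; molar mass ~18 g/mol
n_water = moles_water * AVOGADRO         # total H2O molecules, ~3.34e25

n_dissociated = 1.0e-7 * AVOGADRO        # 1.0e-7 mol of H+ per litre, ~6.0e16

fraction = n_dissociated / n_water
print(f"fraction dissociated: {fraction:.1e}")   # about 1.8e-09
```

The result, roughly 1.8 × 10⁻⁹, confirms the "two out of every billion" figure.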
Ion Product of water
The degree of dissociation of water is so small that you might wonder why it is even mentioned here. The reason stems from an important relationship that governs the concentrations of H + and OH – ions in aqueous solutions:
[H + ][OH – ] = 1.00 × 10 –14 (3)
must know this!
in which the square brackets [ ] refer to the concentrations (in moles per litre) of the substances they enclose.
Note
The quantity 1.00 x 10 –14 is commonly denoted by K w . Its value varies slightly with temperature, pressure, and the presence of other ions in the solution.
This expression is known as the ion product of water , and it applies to all aqueous solutions, not just to pure water. The consequences of this are far-reaching, because it implies that if the concentration of H + is large, that of OH – will be small, and vice versa. This means that H + ions are present in all aqueous solutions, not just acidic ones.
This leads to the following important definitions , which you must know:
| acidic solution | [H + ] > [OH – ] |
| alkaline ("basic") solution | [H + ] < [OH – ] |
| neutral solution | [H + ] = [OH – ] = 1.00×10 –7 mol L –1 |
Take special note of the following definition:
A neutral solution is one in which the concentrations of H + and OH – ions are identical.The values of these concentrations are constrained by Eq. 3 . Thus, in a neutral solution , both the hydrogen- and hydroxide ion concentrations are 1.00 × 10 –7 mol L –1 :
[H + ][OH – ] = [1.00 × 10 –7 ][1.00 × 10 –7 ] =1.00 × 10 –14
Hydrochloric acid is a typical strong acid that is totally dissociated in solution:
HCl → H + (aq) + Cl – (aq)
A 1.0 M solution of HCl in water therefore does not really contain any significant concentration of HCl molecules at all; it is a solution of H⁺ and Cl⁻ in which the concentrations of both ions are 1.0 mol L⁻¹. The concentration of hydroxide ion in such a solution, according to Eq. 3, is
[OH⁻] = K w /[H⁺] = (1.00 × 10⁻¹⁴)/(1 mol L⁻¹) = 1.00 × 10⁻¹⁴ mol L⁻¹.
Similarly, the concentration of hydrogen ion in a solution made by dissolving 1.0 mol of sodium hydroxide in enough water to make one litre of solution will be 1.00 × 10⁻¹⁴ mol L⁻¹.
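The ion-product relation can be sketched in a few lines of Python (the helper names `other_ion` and `classify` are our own, purely illustrative):

```python
import math

# Sketch: the ion product Kw = [H+][OH-] = 1.00e-14 (Eq. 3) fixes one ion
# concentration once the other is known, and lets us classify the solution.
KW = 1.00e-14

def other_ion(conc):
    """Given [H+] return [OH-], or vice versa (mol/L)."""
    return KW / conc

def classify(h_conc):
    """Label a solution from its hydrogen-ion concentration."""
    oh_conc = other_ion(h_conc)
    if math.isclose(h_conc, oh_conc):   # [H+] = [OH-] within rounding
        return "neutral"
    return "acidic" if h_conc > oh_conc else "alkaline"

print(other_ion(1.0))                   # [OH-] in 1 M HCl: 1e-14
print(classify(1.0e-7))                 # neutral
```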
pH
When dealing with a range of values (such as the variety of hydrogen ion concentrations encountered in chemistry) that spans many powers of ten, it is convenient to represent them on a more compressed logarithmic scale. By convention, we use the pH scale to denote hydrogen ion concentrations:
pH = –log₁₀ [H⁺] (4) (must know this!)
or conversely, [H⁺] = 10^–pH.
This notation was devised by the Danish chemist Søren Sørensen (1868-1939) in 1909. There are several accounts of why he chose "pH"; a likely one is that the letters stand for the French term pouvoir hydrogène , meaning "power of hydrogen"— "power" in the sense of an exponent. It has since become common to represent other small quantities in "p-notation". Two that you need to know in this course are the following:
pOH = –log₁₀ [OH⁻]
pK w = –log K w (= 14 when K w = 1.00 × 10⁻¹⁴)
Note that pH and pOH are expressed as numbers without any units, since logarithms must be dimensionless.
Recall from Eq. 3 that [H⁺][OH⁻] = 1.00 × 10⁻¹⁴; if we write this in "p-notation" it becomes
pH + pOH = 14 (5) (must know this!)
In a neutral solution at 25°C, pH = pOH = 7.0. As pH increases, pOH diminishes; a higher pH corresponds to an alkaline solution, a lower pH to an acidic solution. In a solution with [H + ] = 1 M , the pH would be 0; in a 0.00010 M solution of H + , it would be 4.0. Similarly, a 0.00010 M solution of NaOH would have a pOH of 4.0, and thus a pH of 10.0. It is very important that you thoroughly understand the pH scale, and be able to convert between [H + ] or [OH – ] and pH in both directions.
Example: The pH of blood must be held very close to 7.40. Find the hydroxide ion concentration that corresponds to this pH.
Solution: The pOH will be (14.0 – 7.40) = 6.60, so [OH⁻] = 10^–pOH = 10^–6.60 = 2.51 × 10⁻⁷ M.
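The same conversion is a one-liner in Python; the function names here are illustrative, not standard:

```python
import math

# Sketch of the p-notation conversions: pH = -log10[H+] (Eq. 4),
# [H+] = 10**(-pH), and pH + pOH = 14 at 25 degrees C (Eq. 5).
def p(value):
    """The 'p' operator: -log10 of a concentration."""
    return -math.log10(value)

def conc_from_p(p_value):
    """Invert p-notation: concentration = 10**(-p)."""
    return 10 ** (-p_value)

# The blood example: pH 7.40, so pOH = 14.0 - 7.40 = 6.60
oh_conc = conc_from_p(14.0 - 7.40)
print(f"[OH-] = {oh_conc:.2e} M")       # [OH-] = 2.51e-07 M
```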
The pH scale
The range of possible pH values runs from about 0 to 14.
The word "about" in the above statement reflects the fact that at very high concentrations (10 M hydrochloric acid or sodium hydroxide, for example,) a significant fraction of the ions will be associated into neutral pairs such as H + ·Cl – , thus reducing the concentration of “available” ions to a smaller value which we will call the effective concentration . It is the effective concentration of H + and OH – that determines the pH and pOH. For solutions in which ion concentrations don't exceed 0.1 M, the formulas pH = –log [H + ] and pOH = –log[OH – ] are generally reliable, but don't expect a 10.0 M solution of a strong acid to have a pH of exactly –1.00!
The table shown here will help give you a general feeling for where common substances fall on the pH scale. Notice especially that
- most foods are slightly acidic;
- the principal "bodily fluids" are slightly alkaline, as is seawater— not surprising, since early animal life began in the oceans.
- the pH of freshly-distilled water will drift downward as it takes up carbon dioxide from the air; CO 2 reacts with water to produce carbonic acid, H 2 CO 3 .
- the pH of water that occurs in nature varies over a wide range. Groundwaters often pick up additional CO 2 respired by organisms in the soil, but can also become alkaline if they are in contact with carbonate-containing sediments. "Acid" rain is by definition more acidic than pure water in equilibrium with atmospheric CO 2 , owing mainly to sulfuric and nitric acids that originate from fossil-fuel emissions of nitrogen oxides and SO 2 .
pH indicators
The colors of many dye-like compounds depend on the pH, and can serve as useful indicators to determine whether the pH of a solution is above or below a certain value.
Natural indicator dyes
- The best known of these is of course litmus, which has served as a means of distinguishing between acidic and alkaline substances since the early 18th century.
- Many flower pigments are also pH-dependent. You may have noticed that the flowers of some hydrangea shrub species are blue when grown in acidic soils, and white or pink in alkaline soils.
- Red cabbage is a popular make-it-yourself indicator.
Universal indicators
Most indicator dyes show only one color change, and thus are only able to determine whether the pH of a solution is greater or less than the value that is characteristic of a particular indicator. By combining a variety of dyes whose color changes occur at different pHs, a "universal" indicator can be made. Commercially-prepared pH test papers of this kind are available for both wide and narrow pH ranges.
Titration
Since acids and bases readily react with each other, it is experimentally quite easy to find the amount of acid in a solution by determining how many moles of base are required to neutralize it. This operation is called titration , and you should already be familiar with it from your work in the Laboratory.
We can titrate an acid with a base, or a base with an acid. The substance whose concentration we are determining (the analyte ) is the substance being titrated; the substance we are adding in measured amounts is the titrant . The idea is to add titrant until the titrant has reacted with all of the analyte; at this point, the number of moles of titrant added tells us the concentration of base (or acid) in the solution being titrated.
36.00 ml of a solution of HCl was titrated with 0.44 M KOH. The volume of KOH solution required to neutralize the acid solution was 27.00 ml. What was the concentration of the HCl?
Solution: The number of moles of titrant added was (0.027 L)(0.44 mol L⁻¹) = 0.0119 mol. Because one mole of KOH reacts with one mole of HCl, this is also the number of moles of HCl; its concentration is therefore (0.0119 mol) ÷ (0.036 L) = 0.33 M.
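The arithmetic of this titration can be sketched in Python (the helper name `analyte_molarity` is our own):

```python
# Sketch of the titration arithmetic above: at the equivalence point the
# moles of titrant added equal the moles of analyte (1:1 for KOH + HCl).
def analyte_molarity(v_titrant, c_titrant, v_analyte):
    """Analyte concentration (mol/L) from volumes in litres and titrant molarity."""
    moles_titrant = v_titrant * c_titrant   # = moles of analyte neutralized
    return moles_titrant / v_analyte

conc_HCl = analyte_molarity(0.027, 0.44, 0.036)
print(f"{conc_HCl:.2f} M")              # 0.33 M
```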
Titration curves
The course of a titration can be followed by plotting the pH of the solution as a function of the quantity of titrant added. The figure shows two such curves, one for a strong acid (HCl) and the other for a weak acid, acetic acid, denoted by HAc. Looking first at the HCl curve, notice how the pH changes very slightly until the acid is almost neutralized. At that point, which corresponds to the vertical part of the plot, just one additional drop of NaOH solution will cause the pH to jump to a very high value— almost as high as that of the pure NaOH solution.
Compare the curve for HCl with that of HAc. For a weak acid, the pH jump near the neutralization point is less steep. Notice also that the pH of the solution at the neutralization point is greater than 7. These two characteristics of the titration curve for a weak acid are very important for you to know.
If the acid or base is polyprotic, there will be a jump in pH for each proton that is titrated. In the example shown here, a solution of carbonic acid H 2 CO 3 is titrated with sodium hydroxide. The first equivalence point (at which the H 2 CO 3 has been converted entirely into bicarbonate ion HCO 3 – ) occurs at pH 8.3. The solution is now identical to one prepared by dissolving an identical amount of sodium bicarbonate in water.
Addition of another mole equivalent of hydroxide ion converts the bicarbonate into carbonate ion and is complete at pH 10.3; an identical solution could be prepared by dissolving the appropriate amount of sodium carbonate in water.
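As a rough numerical sketch (our own simplification, not taken from the text), the strong acid-strong base curve can be generated by tracking the excess of acid or base; activity effects and water's autoprotolysis are ignored except at the equivalence point itself:

```python
import math

# Sketch: pH during titration of a strong monoprotic acid with a strong base.
def titration_pH(ca, va, cb, vb):
    """pH after adding vb litres of base (cb mol/L) to va litres of acid (ca mol/L)."""
    excess_acid = ca * va - cb * vb         # mol of un-neutralized H+
    vtotal = va + vb
    if abs(excess_acid) < 1e-12:            # equivalence point: neutral solution
        return 7.0
    if excess_acid > 0:                     # before equivalence: excess H+
        return -math.log10(excess_acid / vtotal)
    return 14 + math.log10(-excess_acid / vtotal)   # after equivalence: excess OH-

# 36.00 mL of 0.33 M HCl titrated with 0.44 M KOH, as in the example above:
for vb_mL in (0, 13, 26, 27, 28):
    print(vb_mL, "mL:", round(titration_pH(0.33, 0.036, 0.44, vb_mL / 1000), 2))
# The pH climbs slowly, then jumps sharply through 7 near 27 mL of titrant.
```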
Finding the equivalence point: indicators
When enough base has been added to react completely with the hydrogens of a monoprotic acid, the equivalence point has been reached. If a strong acid and strong base are titrated, the pH of the solution will be 7.0 at the equivalence point. However, if the acid is a weak one, the pH will be greater than 7; the “neutralized” solution will not be “neutral” in terms of pH. For a polyprotic acid, there will be an equivalence point for each titratable hydrogen in the acid. These typically occur at pH values that are 4-5 units apart, but they are occasionally closer, in which case they may not be readily apparent in the titration curve.
The key to a successful titration is knowing when the equivalence point has been reached. The easiest way of finding the equivalence point is to use an indicator dye; this is a substance whose color is sensitive to the pH. One such indicator that is commonly encountered in the laboratory is phenolphthalein ; it is colorless in acidic solution, but turns intensely red when the solution becomes alkaline. If an acid is to be titrated, you add a few drops of phenolphthalein to the solution before beginning the titration. As the titrant is added, a local red color appears, but quickly dissipates as the solution is shaken or stirred. Gradually, as the equivalence point is approached, the color dissipates more slowly; the trick is to stop the addition of base after a single drop results in a permanently pink solution.
Different indicators change color at different pH values. Since the pH of the equivalence point varies with the strength of the acid being titrated, one tries to fit the indicator to the particular acid. One can titrate polyprotic acids by using a suitable combination of several indicators, as is illustrated above for carbonic acid.
10.3: Acid-base reactions à la Brønsted
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.
- Explain the difference between the Arrhenius and Bronsted-Lowry concepts of acids and bases, and give examples of an acid and a base in which the Arrhenius concept is inapplicable.
- Explain why a hydrogen ion cannot exist in water.
- Given the formula of an acid or base, write the formula of its conjugate .
- State the fundamental difference between a strong acid and a weak acid .
- Describe the leveling effect , and explain its origin.
- State the factors that determine whether a solution of a salt will be acidic or alkaline.
- Write the equation for the autoprotolysis of a given ampholyte .
In this lesson we develop the Brønsted-Lowry proton donor-acceptor concept and illustrate its applications to "strong" and "weak" acids and bases, emphasizing the common theme that acid-base chemistry is always a competition between two bases for the proton. In the final section, we show how the concept of "proton energy" can help us understand and predict the direction and extent of common types of acid-base reactions without the need for calculations.
Proton donors and acceptors
The older Arrhenius theory of acids and bases viewed them as substances which produce hydrogen ions or hydroxide ions on dissociation. As useful a concept as this has been, it was unable to explain why NH 3 , which contains no OH – ions, is a base and not an acid, why a solution of FeCl 3 is acidic, or why a solution of Na 2 S is alkaline. A more general theory of acids and bases was developed by Franklin in 1905, who suggested that the solvent plays a central role. According to this view, an acid is a solute that gives rise to a cation (positive ion) characteristic of the solvent, and a base is a solute that yields an anion (negative ion) which is also characteristic of the solvent. The most important of these solvents is of course H 2 O, but Franklin's insight extended the realm of acid-base chemistry into non-aqueous systems, as we shall see in a later lesson.
Brønsted acids and bases
In 1923, the Danish chemist J.N. Brønsted, building on Franklin's theory, proposed that an acid is a proton donor; a base is a proton acceptor. In the same year the English chemist T.M. Lowry published a paper setting forth some similar ideas without producing a definition; in a later paper Lowry himself points out that Brønsted deserves the major credit, but the concept is still widely known as the Brønsted-Lowry theory.
Brønsted-Lowry Acids and Bases
An acid is a proton donor and a base is a proton acceptor.These definitions carry a very important implication: a substance cannot act as an acid without the presence of a base to accept the proton , and vice versa . As a very simple example, consider the equation that Arrhenius wrote to describe the behavior of hydrochloric acid:
\[HCl \rightarrow H^+ + Cl^-\]
This is fine as far as it goes, and chemists still write such an equation as a shortcut. But in order to represent this more realistically as a proton donor-acceptor reaction, we now depict the behavior of HCl in water by
\[HCl + H_2O \rightarrow Cl^- + H_3O^+\]
in which the acid HCl donates its proton to the acceptor (base) H 2 O.
"Nothing new here", you might say, noting that we are simply replacing a shorter equation by a longer one. But consider how we might explain the alkaline solution that is created when ammonia gas NH 3 dissolves in water. An alkaline solution contains an excess of hydroxide ions, so ammonia is clearly a base, but because there are no OH – ions in NH 3 , it is clearly not an Arrhenius base. It is, however, a Brønsted base:
\[NH_3 + H_2O \rightarrow NH_4^+ + OH^-\]
In this case, the water molecule acts as the acid, donating a proton to the base NH 3 to create the ammonium ion NH 4 + .
The foregoing examples illustrate several important aspects of the Brønsted-Lowry concept of acids and bases:
- A substance cannot act as an acid unless a proton acceptor (base) is present to receive the proton;
- A substance cannot act as a base unless a proton donor (acid) is present to supply the proton;
- Water plays a dual role in many acid-base reactions; H 2 O can act as a proton acceptor (base) for an acid, or it can serve as a proton donor (acid) for a base (as we saw for ammonia).
- The hydronium ion H 3 O + plays a central role in the acid-base chemistry of aqueous solutions.
Brønsted
Brønsted (1879-1947) was a Danish physical chemist. Although he is now known mainly for his proton donor-acceptor theory of acids and bases (see his original article ), he published numerous earlier papers on chemical affinity, and later on the catalytic effects of acids and bases on chemical reactions. In World War II he opposed the Nazis, and this led to his election to the Danish parliament in 1947, but he was unable to take his seat because of illness and he died later in that year.
Lowry
Lowry (1874-1936) was the first holder of the chair in physical chemistry at Cambridge University. His extensive studies of the effects of acids and bases on the optical behavior of camphor derivatives (specifically, how they rotate the plane of polarized light) led him to formulate a theory of acids and bases similar to and simultaneously with that of Brønsted.
The Hydronium ion
There is another serious problem with the Arrhenius view of an acid as a substance that dissociates in water to produce a hydrogen ion. The hydrogen ion is no more than a proton, a bare nucleus. Although it carries only a single unit of positive charge, this charge is concentrated into a volume of space that is only about a hundred-millionth as large as the volume occupied by the smallest atom. (Think of a pebble sitting in the middle of a sports stadium!) The resulting extraordinarily high charge density of the proton strongly attracts it to any part of a nearby atom or molecule in which there is an excess of negative charge. In the case of water, this will be the lone pair (unshared) electrons of the oxygen atom; the tiny proton will be buried within the lone pair and will form a shared-electron (coordinate) bond with it, creating a hydronium ion , H 3 O + . In a sense, H 2 O is acting as a base here, and the product H 3 O + is the conjugate acid of water:
Owing to the overwhelming excess of \(H_2O\) molecules in aqueous solutions, a bare hydrogen ion has no chance of surviving in water.
Although other kinds of dissolved ions have water molecules bound to them more or less tightly, the interaction between H + and H 2 O is so strong that writing “H + (aq) ” hardly does it justice, although it is formally correct. The formula H 3 O + more adequately conveys the sense that it is both a molecule in its own right, and is also the conjugate acid of water.
The equation "HA → H + + A – " is so much easier to write that chemists still use it to represent acid-base reactions in contexts in which the proton donor-acceptor mechanism does not need to be emphasized. Thus it is permissible to talk about “hydrogen ions” and use the formula H + in writing chemical equations as long as you remember that they are not to be taken literally in the context of aqueous solutions.
Interestingly, experiments indicate that the proton does not stick to a single H 2 O molecule, but changes partners many times per second. This molecular promiscuity, a consequence of the uniquely small size and mass of the proton, allows it to move through the solution by rapidly hopping from one H 2 O molecule to the next, creating a new H 3 O + ion as it goes. The overall effect is the same as if the H 3 O + ion itself were moving. Similarly, a hydroxide ion, which can be considered to be a "proton hole" in the water, serves as a landing point for a proton from another H 2 O molecule, so that the OH – ion hops about in the same way.
Because hydronium- and hydroxide ions can “move without actually moving” and thus without having to plow their way through the solution by shoving aside water molecules as do other ions, solutions which are acidic or alkaline have extraordinarily high electrical conductivities .
Acid-base reactions à la Brønsted
According to the Brønsted concept, the process that was previously written as a simple dissociation of a generic acid HA ("HA → H + + A – ") is now an acid-base reaction in its own right:
\[HA + H_2O \rightarrow A^- + H_3O^+\]
The idea, again, is that the proton, once it leaves the acid, must end up somewhere; it cannot simply float around as a free hydrogen ion.
Conjugate pairs
A reaction of an acid with a base is thus a proton exchange reaction ; if the acid is denoted by AH and the base by B, then we can write a generalized acid-base reaction as
\[AH + B \rightarrow A^- + BH^+\]
Notice that the reverse of this reaction,
\[BH^+ + A^- \rightarrow B + AH\]
is also an acid-base reaction. Because all simple reactions can take place in both directions to some extent, it follows that transfer of a proton from an acid to a base must necessarily create a new pair of species that can, at least in principle, constitute an acid-base pair of their own.
In this schematic reaction, base 1 is said to be conjugate to acid 1 , and acid 2 is conjugate to base 2 . The term conjugate means “connected with”, the implication being that any species and its conjugate species are related by the gain or loss of one proton. The table below shows the conjugate pairs of a number of typical acid-base systems.
| acid | | | base |
|---|---|---|---|
| hydrochloric acid | HCl | Cl – | chloride ion |
| acetic acid | CH 3 COOH | CH 3 COO – | acetate ion |
| nitric acid | HNO 3 | NO 3 – | nitrate ion |
| dihydrogen phosphate ion | H 2 PO 4 – | HPO 4 2– | monohydrogen phosphate ion |
| hydrogen sulfate ion | HSO 4 – | SO 4 2– | sulfate ion |
| hydrogen carbonate ("bicarbonate") ion | HCO 3 – | CO 3 2– | carbonate ion |
| ammonium ion | NH 4 + | NH 3 | ammonia |
| iron(III) ("ferric") ion | Fe(H 2 O) 6 3+ | Fe(H 2 O) 5 OH 2 + | |
| water | H 2 O | OH – | hydroxide ion |
| hydronium ion | H 3 O + | H 2 O | water |
Strong acids and weak acids
We can look upon the generalized acid-base reaction
as a competition of two bases for a proton:
If the base H 2 O overwhelmingly wins this tug-of-war, then the acid HA is said to be a strong acid . This is what happens with hydrochloric acid and the other common strong "mineral acids" H 2 SO 4 , HNO 3 , and HClO 4 :
| acid | reaction with water |
|---|---|
| hydrochloric acid | HCl + H 2 O → Cl – + H 3 O + |
| sulfuric acid | H 2 SO 4 + H 2 O → HSO 4 – + H 3 O + |
| nitric acid | HNO 3 + H 2 O → NO 3 – + H 3 O + |
| perchloric acid | HClO 4 + H 2 O → ClO 4 – + H 3 O + |
Solutions of these acids in water are really solutions of the ionic species on the right-hand sides of these equations. This being the case, it follows that what we call a 1 M solution of "hydrochloric acid" in water, for example, does not really contain a significant concentration of HCl at all; the only real acid present in such a solution is H 3 O + !
These considerations give rise to two important rules:
- H 3 O + is the strongest acid that can exist in water;
- All strong acids appear to be equally strong in water.
The Leveling Effect
The second of these statements is called the leveling effect . It means that although the inherent proton-donor strengths of the strong acids differ, they are all completely dissociated in water. Chemists say that their strengths are "leveled" by the solvent water.
A comparable effect would be seen if one attempted to judge the strengths of several adults by conducting a series of tug-of-war contests with a young child. One would expect the adults to win overwhelmingly on each trial; their strengths would have been "leveled" by that of the child.
Weak acids
Most acids, however, are able to hold on to their protons more tightly, so only a small fraction of the acid is dissociated. Thus hydrocyanic acid, HCN, is a weak acid in water because the proton is able to share the lone pair electrons of the cyanide ion CN – more effectively than it can with those of H 2 O, so the reaction
\[HCN + H_2O \rightarrow H_3O^+ + CN^–\]
proceeds to only a very small extent. Since a strong acid binds its proton only weakly, while a weak acid binds it tightly, we can say that
Strong acids are "weak" and weak acids are "strong." If you are able to explain this apparent paradox, you understand one of the most important ideas in acid-base chemistry!
| reaction | acid | base | conjugate acid | conjugate base |
|---|---|---|---|---|
| autoionization of water H 2 O | H 2 O | H 2 O | H 3 O + | OH – |
| ionization of hydrocyanic acid HCN | HCN | H 2 O | H 3 O + | CN – |
| ionization of ammonia NH 3 in water | NH 3 | H 2 O | NH 4 + | OH – |
| hydrolysis of ammonium chloride NH 4 Cl | NH 4 + | H 2 O | H 3 O + | NH 3 |
| hydrolysis of sodium acetate CH 3 COO – Na + | H 2 O | CH 3 COO – | CH 3 COOH | OH – |
| neutralization of HCl by NaOH | HCl | OH – | H 2 O | Cl – |
| neutralization of NH 3 by acetic acid | CH 3 COOH | NH 3 | NH 4 + | CH 3 COO – |
| dissolution of BiOCl (bismuth oxychloride) by HCl | 2 H 3 O + | BiOCl | Bi(H 2 O) 3+ | H 2 O, Cl – |
| decomposition of Ag(NH3) 2+ by HNO 3 | 2 H 3 O + | Ag(NH 3 ) 2 + | NH 4 + | H 2 O |
| displacement of HCN by CH 3 COOH | CH 3 COOH | CN – | HCN | CH 3 COO – |
Strong acids have weak conjugate bases
This is just a re-statement of what is implicit in what has been said above about the distinction between strong acids and weak acids. The fact that HCl is a strong acid implies that its conjugate base Cl – is too weak a base to hold onto the proton in competition with either H 2 O or H 3 O + . Similarly, the CN – ion binds strongly to a proton, making HCN a weak acid.
Salts of weak acids give alkaline solutions
The fact that HCN is a weak acid implies that the cyanide ion CN – reacts readily with protons, and thus is a relatively good base. As evidence of this, a salt such as KCN, when dissolved in water, yields a slightly alkaline solution:
CN – + H 2 O → HCN + OH –
This reaction is still sometimes referred to by its old name hydrolysis ("water splitting"), which is literally correct but tends to obscure its identity as just another acid-base reaction. Reactions of this type take place only to a small extent; a 0.1M solution of KCN is still, for all practical purposes, 0.1M in cyanide ion.
In general, the weaker the acid, the more alkaline will be a solution of its salt. However, it would be going too far to say that "ordinary weak acids have strong conjugate bases." The only really strong base is hydroxide ion, OH – , so the above statement would be true only for the very weak acid H 2 O.
Strong bases and weak bases
The only really strong bases you are likely to encounter in day-to-day chemistry are alkali-metal hydroxides such as NaOH and KOH, which are essentially solutions of the hydroxide ion. Most other compounds containing hydroxide ions such as Fe(OH) 3 and Ca(OH) 2 are not sufficiently soluble in water to give highly alkaline solutions, so they are not usually thought of as strong bases.
There are actually a number of bases that are stronger than the hydroxide ion — best known are the oxide ion O 2– and the amide ion NH 2 – , but these are so strong that they can rob water of a proton:
O 2– + H 2 O → 2 OH –
NH 2 – + H 2 O → NH 3 + OH –
This gives rise to the same kind of leveling effect we described for acids, with hydroxide ion as the strongest base in water.
Hydroxide ion is the strongest base that can exist in aqueous solution.
Salts of weak bases give acidic solutions
Just as the salt of a weak acid gives an alkaline solution, the salt of a weak base gives an acidic one. The most common example is ammonium chloride, NH 4 Cl, whose aqueous solutions are distinctly acidic:
NH 4 + + H 2 O → NH 3 + H 3 O +
Because this (and similar) reactions take place only to a small extent, a solution of ammonium chloride will only be slightly acidic.
Autoprotolysis
From some of the examples given above, we see that water can act as an acid
CN – + H 2 O → HCN + OH –
and as a base
NH 4 + + H 2 O → NH 3 + H 3 O +
If this is so, then there is no reason why "water-the-acid" cannot donate a proton to "water-the-base":
\[2 H_2O \rightarrow H_3O^+ + OH^-\]
This reaction is known as the autoprotolysis of water.
Chemists still often refer to this reaction as the "dissociation" of water and use the Arrhenius-style equation H 2 O → H + + OH – as a kind of shorthand. As discussed in the previous lesson, this process occurs to only a tiny extent. It does mean, however, that hydronium and hydroxide ions are present in any aqueous solution.
Ammonia and Sulfuric acid
Other liquids also exhibit autoprotolysis; the best-known example is liquid ammonia:
2 NH 3 → NH 4 + + NH 2 –
Even pure liquid sulfuric acid can play the game:
2 H 2 SO 4 → H 3 SO 4 + + HSO 4 –
Each of these solvents can be the basis of its own acid-base "system", parallel to the familiar "water system".
Ampholytes
Water, which can act as either an acid or a base, is said to be amphiprotic : it can "swing both ways". A substance such as water that is amphiprotic is called an ampholyte .
As indicated here, the hydroxide ion can also be an ampholyte, but not in aqueous solution in which the oxide ion cannot exist.
It is of course the amphiprotic nature of water that allows it to play its special role in ordinary aquatic acid-base chemistry. But many other amphiprotic substances can also exist in aqueous solutions. Any such substance will always have a conjugate acid and a conjugate base, so if you can recognize these two conjugates of a substance, you will know it is amphiprotic.
The carbonate system
For example, the triplet set {carbonic acid, bicarbonate ion, carbonate ion} constitutes an amphiprotric series in which the bicarbonate ion is the ampholyte, differing from either of its neighbors by the addition or removal of one proton:
If the bicarbonate ion is both an acid and a base, it should be able to exchange a proton with itself in an autoprotolysis reaction:
\[HCO_3^– + HCO_3^– \rightarrow H_2CO_3 + CO_3^{2-}\]
Carbonic Acid
Your very life depends on the above reaction! CO 2 , a metabolic by-product of every cell in your body, reacts with water to form carbonic acid H 2 CO 3 which, if it were allowed to accumulate, would make your blood fatally acidic. However, the blood also contains carbonate ion, which reacts according to the reverse of the above equation to produce bicarbonate which can be safely carried by the blood to the lungs. At this location the autoprotolysis reaction runs in the forward direction, producing H 2 CO 3 which loses water to form CO 2 which gets expelled in the breath. The carbonate ion is recycled back into the blood to eventually pick up another CO 2 molecule.
If you can write an autoprotolysis reaction for a substance, then that substance is amphiprotic.
10.4: Acid-Base Reactions
It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.
- Sketch out a proton-energy diagram for a strong acid , a weak acid or base , and for a strong base .
- Describe how the pH affects the relative concentrations of a conjugate acid-base pair .
Will this acid react with that base? And if so, to what extent? These questions can be answered quantitatively by carrying out the detailed equilibrium calculations you will learn about in another lesson. However, modern acid-base chemistry offers a few simple principles that can enable you to make a qualitative decision at a glance. More importantly, the ideas which we develop in this section are guaranteed to give you a far better conceptual understanding of proton-based acid-base reactions in general.
Which base gets the proton?
Will acid HA react with base B? We stated above that the outcome of any acid-base reaction depends on how well two different bases can compete in the tug-of-war for the proton
\[A^– \leftarrow H^+\rightarrow B^– \label{9.4.1}\]
The proton will always go to the stronger base . Some insight into this can be had by thinking of the proton as having different potential energies when it is bound to different acceptors. We can draw a useful analogy with the electrons in an atom, which, you will recall, will always fall into the lowest-potential energy orbitals available, filling them from the bottom up. In a similar way, protons will "fall" into the lowest-energy empty spots (bases) they can find.
Consider the scheme shown here, which depicts two hypothetical acid-base conjugate pairs. Take careful note of the labeling of this diagram: the acids HA and HB are proton sources and the conjugate bases A – and B – are proton sinks . This "source-sink" terminology is synonymous with the "donor-acceptor" language that Brønsted taught us, but it also carries an implication about the relative energies of the proton as it exists in the two molecules HA and HB. If, as is indicated here, the proton is at a higher "potential energy" when it is in the form of HA than in HB, the reaction HA + B – → HB + A – will be favored compared to the reverse process HB + A – → HA + B – , which would require elevating the proton up to the A – level. In this example, HA is the stronger acid because its proton can fall to a lower potential energy when it joins with B – to form HB.
We will refer to diagrams such as the one in Figure \(\PageIndex{1}\) as "proton-energy diagrams", which is not quite correct, but we do not want to get into thermodynamics at this point. (If you already know something about chemical thermodynamics, we are really referring to Gibbs energy. )
It follows, then, that if we can arrange all the common acid-base conjugate pairs on this kind of a scale, we can predict the direction of any simple acid-base reaction without resorting to numbers. This will be illustrated further on, but in order to keep things simple, let's look at a few proton-energy diagrams that illustrate some of the acid-base chemistry that we discussed in the preceding section.
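If the proton-energy levels are expressed as pKa values, this prediction becomes simple arithmetic: the proton falls from the conjugate pair with the lower pKa to the base of the pair with the higher pKa, and the equilibrium constant for the transfer is 10 raised to the pKa difference. Here is a minimal sketch (the pKa values are typical textbook figures, assumed for illustration):

```python
# Direction of the proton-transfer reaction  HA + B^- -> A^- + HB :
# the proton falls to the stronger base, i.e. toward the conjugate pair
# with the HIGHER pKa, with equilibrium constant K = 10^(pKa_HB - pKa_HA).

def proton_transfer_K(pKa_HA: float, pKa_HB: float) -> float:
    """Equilibrium constant for HA + B- -> A- + HB."""
    return 10.0 ** (pKa_HB - pKa_HA)

# Illustrative (assumed) pKa values: acetic acid 4.7, ammonium ion 9.2.
K = proton_transfer_K(4.7, 9.2)       # CH3COOH + NH3 -> CH3COO- + NH4+
print(f"K = {K:.1e}  -> favored" if K > 1 else f"K = {K:.1e}  -> disfavored")
```

A K much greater than unity means the proton transfer is essentially complete; a K much less than unity means it hardly occurs at all.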
Strong acids and weak acids
The hydronium ion is the dividing line; a strong acid, you will recall, is one whose conjugate base A – loses out to the "stronger" base H 2 O in the competition for the proton:
\[A^– \leftarrow H^+ \rightarrow H_2O \label{9.4.2}\]
An acid that is a stronger proton donor than hydronium ion is a "strong" acid; if it is a weaker proton donor than H 3 O + , it is by definition "weak". This is seen most clearly in the diagram here, which contrasts the strong acid HA with the weak acid HB. HB "dissociates" to only a tiny extent because it is energetically unfavorable to promote its proton up to the H 2 O-H 3 O + level (process 3 in the diagram).
Strong acids and leveling
A strong acid, you will recall, is one whose conjugate base A – loses out to the "stronger" base H 2 O in the competition for the proton:
\[A^– \leftarrow H^+ \rightarrow H_2O \label{9.4.3}\]
Because the reaction
\[HA + H_2O \rightarrow A^–+ H_3O^+ \label{9.4.4}\]
for any strong acid HA is virtually complete, all strong acids appear to be equally strong in water (the leveling effect.)
From the proton-energy standpoint, a strong acid is one in which the energy of the proton is substantially greater when attached to the anion A – than when it is attached to H 2 O. Adding a strong acid HA to water will put it in contact with a huge proton sink that drains off the protons from any such acid, leaving the conjugate base A – along with hydronium ion, the strongest acid that can exist in water .
Weak bases
Conjugate bases of weak acids tend to accept protons from water, leaving a small excess of OH – ions and thus an alkaline solution. As you can see in the diagram, the weak base ammonia accepts a proton from water:
\[NH_3 + H_2O \rightarrow NH_4^+ + OH^– \label{9.4.6}\]
The "weakness" of such a base is a consequence of the energetically unfavorable process ( 1 ) in which a proton must be raised up from the low-lying H 2 O-OH – level. From the standpoint of the "proton sources" column on the left, you can think of this as similar to the situation for weak acids that we discussed above; it can be considered a special case in which the weak acid is H 2 O.
The weakest acid and the strongest base
For a very long time, chemists regarded methane, CH 4 , as the weakest acid, making the methide ion CH 3 – (which is also the simplest carbanion ) the strongest base. Methane still holds its position as the weakest acid, but in 2008 the ion LiO – was found to be an even stronger base than CH 3 – . Because both of these bases are observable only in the gas phase, these facts have little direct bearing on aqueous-solution chemistry.
Autoprotolysis
Because water is amphiprotic, one H 2 O molecule can donate a proton to another, as explained above. In this case the proton has to acquire considerable energy to make the jump ( 1 ) from the H 2 O-OH – level to the H 3 O + -H 2 O level, so the reaction
\[2 H_2O \rightarrow H_3O^++ OH^– \label{9.4.7}\]
occurs only to a minute extent. Think of this as the special case of the "weakest" acid H 2 O reacting with the "weakest" base H 2 O.
Strong bases
Finally, what is a strong base? Just as a strong acid lies above the H 3 O + -H 2 O level, so does a strong base lie below the H 2 O-OH – level. And for the same reason that H 3 O + is the strongest acid that can exist in water, OH – is the strongest base that can exist in water. The example of the oxide ion O 2– is shown here. Sodium oxide Na 2 O is a white powder that dissolves in water to give oxide ions which immediately decompose into hydroxide ions
\[O^{2–} + H_2O \rightarrow 2 OH^– \label{9.4.8}\]
Putting it all together, and the meaning of pH
This table combines common examples covering the entire range of acid-base strengths, from the strong to the very weak. The energy scale at the left gives you some idea of the relative proton-energy levels for each conjugate pair; notice that the zero is arbitrarily set to that of the H 3 O + -H 2 O pair.
Of more importance is the pH scale on the right. The pH that corresponds to any conjugate pair is the pH at which equal concentrations of that pair are in their acid and base forms. For example, acetic acid CH 3 COOH is "half ionized" at a pH of 4.7. If another strong acid such as HCl is added so as to reduce the pH, the proportion of acetate ion decreases, while if sodium hydroxide is added to force the pH higher, a larger fraction of the acetic acid will be "dissociated".
This illustrates another aspect of pH: at its most fundamental level, pH is an inverse measure of the "proton intensity" in the solution. The lower the pH, the higher the proton intensity, and the greater will be the fraction of higher-energy proton levels populated— which translates to higher acid-to-conjugate base concentration ratios. It is easy to see why acids such as H 2 SO 4 and bases such as the amide ion NH 2 – cannot exist in aqueous solution; the pH would have to be at the impossible level of –6 for the former and +23 for the latter!
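The connection between pH and the acid/base ratio of a conjugate pair is the Henderson-Hasselbalch relation [base]/[acid] = 10^(pH − pKa), which follows directly from the definition of Ka. A small sketch, using the acetic acid pKa of 4.7 mentioned above:

```python
# Fraction of a conjugate pair in its protonated (acid) form at a given pH,
# from the Henderson-Hasselbalch relation  [base]/[acid] = 10^(pH - pKa).

def fraction_protonated(pH: float, pKa: float) -> float:
    return 1.0 / (1.0 + 10 ** (pH - pKa))

pKa_acetic = 4.7   # acetic acid, as given in the text

for pH in (2.7, 4.7, 6.7):
    f = fraction_protonated(pH, pKa_acetic)
    print(f"pH {pH}: fraction present as CH3COOH = {f:.3f}")
```

At pH 4.7 the pair is exactly half-ionized; two units below, almost all of it is CH3COOH, and two units above, almost all is acetate.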
Why acids are titrated with hydroxide ion
When you titrate an acid with a base, you want virtually every molecule of the acid to react with the base. In the case of a weak acid such as hypochlorous, the reaction would be
\[HOCl + OH^– \rightarrow OCl^– + H_2O \label{9.4.9}\]
Because the proton level in HOCl is considerably above that in H 2 O, titration with NaOH solution will ensure that every last proton is eaten up by the hydroxide ion. If, instead, you used ammonia NH 3 as a titrant, the closeness of the two proton levels would cause the reaction to be incomplete, yielding a less distinct equivalence point. And, of course, titration with a base that is weaker than hypochlorite ion (such as sodium bicarbonate) would be hopeless.
As a practical matter, you can usually estimate that when the pH differs by more than about two units from the pKa of a monoprotic acid's conjugate pair, the concentration of the non-favored species will be down by a factor of more than 100 (tenfold per pH unit).
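These qualitative statements can be checked numerically. For a titration HA + B → A– + BH+, the equilibrium constant is 10^(pKa(BH+) − pKa(HA)). The sketch below uses typical literature pKa values (HOCl ≈ 7.5, NH4+ ≈ 9.25, H2CO3 ≈ 6.35, and 14 for the H2O/OH– pair); these are assumptions, not values stated in the text:

```python
# How complete is the titration of hypochlorous acid with different bases?
# For HA + B -> A- + BH+,  K = 10^(pKa(BH+) - pKa(HA)).  All pKa values
# here are typical literature figures (assumed, not from the text).

pKa_HOCl = 7.5
titrants = {                      # base: pKa of its conjugate acid
    "OH-   (conj. acid H2O)":   14.0,
    "NH3   (conj. acid NH4+)":   9.25,
    "HCO3- (conj. acid H2CO3)":  6.35,
}

for base, pKa_BH in titrants.items():
    K = 10.0 ** (pKa_BH - pKa_HOCl)
    if K > 1e4:
        verdict = "essentially complete"
    elif K > 1:
        verdict = "favorable but incomplete"
    else:
        verdict = "unfavorable"
    print(f"{base:26s} K = {K:9.2g}  {verdict}")
```

The hydroxide titration has a huge K, ammonia gives only a modest one, and bicarbonate's K is below unity, which is why that titration is "hopeless".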
10.5: Lewis Acids and Bases
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.
- Write the equation for the proton transfer reaction involving a Brønsted-Lowry acid or base, and show how it can be interpreted as an electron-pair transfer reaction, clearly identifying the donor and acceptor.
- Give an example of a Lewis acid-base reaction that does not involve protons.
- Write equations illustrating the behavior of a given non-aqueous acid-base system.
The Brønsted-Lowry proton donor-acceptor concept has been one of the most successful theories of Chemistry. But as with any such theory, it is fair to ask if this is not just a special case of a more general theory that could encompass an even broader range of chemical science. In 1916, G.N. Lewis of the University of California proposed that the electron pair is the dominant actor in acid-base chemistry. The Lewis theory did not become very well known until about 1923 (the same year that Brønsted and Lowry published their work), but since then it has been recognized as a very powerful tool for describing chemical reactions of widely different kinds and is widely used in organic and inorganic chemistry. According to Lewis,
- An acid is a substance that accepts a pair of electrons, and in doing so, forms a covalent bond with the entity that supplies the electrons.
- A base is a substance that donates an unshared pair of electrons to a recipient species with which the electrons can be shared.
In modern chemistry, electron donors are often referred to as nucleophiles, while acceptors are electrophiles.
Proton-Transfer Reactions Involve Electron-Pair Transfer
Just as any Arrhenius acid is also a Brønsted acid, any Brønsted acid is also a Lewis acid, so the various acid-base concepts are all "upward compatible". Although we do not really need to think about electron-pair transfers when we deal with ordinary aqueous-solution acid-base reactions, it is important to understand that it is the opportunity for electron-pair sharing that enables proton transfer to take place.
This equation for a simple acid-base neutralization shows how the Brønsted and Lewis definitions are really just different views of the same process. Take special note of the following points:
- The arrow shows the movement of a proton from the hydronium ion to the hydroxide ion.
- Note carefully that the electron-pairs themselves do not move; they remain attached to their central atoms. The electron pair on the base is "donated" to the acceptor (the proton) only in the sense that it ends up being shared with the acceptor, rather than being the exclusive property of the oxygen atom in the hydroxide ion.
- Although the hydronium ion is the nominal Lewis acid here, it does not itself accept an electron pair, but acts merely as the source of the proton that coordinates with the Lewis base.
The point about the electron-pair remaining on the donor species is especially important to bear in mind. For one thing, it distinguishes a Lewis acid-base reaction from an oxidation-reduction reaction , in which a physical transfer of one or more electrons from donor to acceptor does occur. The product of a Lewis acid-base reaction is known formally as an "adduct" or "complex", although we do not ordinarily use these terms for simple proton-transfer reactions such as the one in the above example. Here, the proton combines with the hydroxide ion to form the "adduct" H 2 O. The following examples illustrate these points for some other proton-transfer reactions that you should already be familiar with.
Another example, showing the autoprotolysis of water. Note that the conjugate base is also the adduct.
Ammonia is both a Brønsted and a Lewis base, owing to the unshared electron pair on the nitrogen. The reverse of this reaction represents the hydrolysis of the ammonium ion.
Because \(\ce{HF}\) is a weak acid, fluoride salts behave as bases in aqueous solution. As a Lewis base, F – accepts a proton from water, which is transformed into a hydroxide ion.
The bisulfite ion is amphiprotic and can act as an electron donor or acceptor.
Acid-base Reactions without Transferring Protons
The major utility of the Lewis definition is that it extends the concept of acids and bases beyond the realm of proton transfer reactions. The classic example is the reaction of boron trifluoride with ammonia to form an adduct :
\[\ce{BF_3 + NH_3 \rightarrow F_3B-NH_3}\]
One of the most commonly-encountered kinds of Lewis acid-base reactions occurs when electron-donating ligands form coordination complexes with transition-metal ions.
Here are several more examples of Lewis acid-base reactions that cannot be accommodated within the Brønsted or Arrhenius models. Identify the Lewis acid and Lewis base in each reaction.
- \(\ce{Al(OH)_3 + OH^{–} \rightarrow Al(OH)_4^–}\)
- \(\ce{SnS_2 + S^{2–} \rightarrow SnS_3^{2–}}\)
- \(\ce{Cd(CN)_2 + 2 CN^– \rightarrow Cd(CN)_4^{2–}}\)
- \(\ce{AgCl + 2 NH_3 \rightarrow Ag(NH_3)_2^+ + Cl^–}\)
- \(\ce{Fe^{2+} + NO \rightarrow Fe(NO)^{2+}}\)
- \(\ce{Ni^{2+} + 6 NH_3 \rightarrow Ni(NH_3)_6^{2+}}\)
Applications to organic reaction mechanisms
Although organic chemistry is beyond the scope of these lessons, it is instructive to see how electron donors and acceptors play a role in chemical reactions. The following two diagrams show the mechanisms of two common types of reactions initiated by simple inorganic Lewis acids:
In each case, the species labeled "Complex" is an intermediate that decomposes into the products, which are conjugates of the original acid and base pairs. The electric charges indicated in the complexes are formal charges, but those in the products are "real".
In reaction 1, the incomplete octet of the aluminum atom in \(\ce{AlCl3}\) serves as a better electron acceptor to the chlorine atom than does the isobutyl part of the base. In reaction 2, the pair of non-bonding electrons on the dimethyl ether coordinates with the electron-deficient boron atom, leading to a complex that breaks down by releasing a bromide ion.
Non-aqueous Protonic Acid-Base Systems
We ordinarily think of Brønsted-Lowry acid-base reactions as taking place in aqueous solutions, but this need not always be the case. A more general view encompasses a variety of acid-base solvent systems , of which the water system is only one (Table \(\PageIndex{1}\)). Each of these has as its basis an amphiprotic solvent (one capable of undergoing autoprotolysis), in parallel with the familiar case of water.
The ammonia system is one of the most common non-aqueous systems in chemistry. Liquid ammonia boils at –33° C, and can conveniently be maintained as a liquid by cooling with dry ice (–77° C). It is a good solvent for substances that also dissolve in water, such as ionic salts and organic compounds, since it is capable of forming hydrogen bonds. However, many other familiar substances can also serve as the basis of protonic solvent systems, as Table \(\PageIndex{1}\) indicates:
|
solvent
|
autoprotolysis reaction
|
pK ap |
|---|---|---|
| water | 2 H 2 O → H 3 O + + OH – | 14 |
| ammonia | 2 NH 3 → NH 4 + + NH 2 – | 33 |
| acetic acid | 2 CH 3 COOH → CH 3 COOH 2 + + CH 3 COO – | 13 |
| ethanol | 2 C 2 H 5 OH → C 2 H 5 OH 2 + + C 2 H 5 O – | 19 |
| hydrogen peroxide | 2 HO-OH → HO-OH 2 + + HO-O – | 13 |
| hydrofluoric acid | 2 HF → H 2 F + + F – | 10 |
| sulfuric acid | 2 H 2 SO 4 → H 3 SO 4 + + HSO 4 – | 3.5 |
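One consequence of the table: in any amphiprotic solvent, a neutral solution is one in which the lyonium- and lyate-ion concentrations are equal, so the neutral point falls at pKap/2 (7 in water, but quite different values in the other solvents). A quick sketch:

```python
# In any amphiprotic solvent the neutral point is where the lyonium- and
# lyate-ion concentrations are equal, i.e. at "pH" = pKap / 2 (cf. pH 7
# in water, where pKw = 14).  Values below are from the table above.

pKap = {
    "water": 14, "ammonia": 33, "acetic acid": 13, "ethanol": 19,
    "hydrogen peroxide": 13, "hydrofluoric acid": 10, "sulfuric acid": 3.5,
}

for solvent, pk in pKap.items():
    print(f"{solvent:18s} neutral 'pH' = {pk / 2:.2f}")
```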
One use of nonaqueous acid-base systems is to examine the relative strengths of the strong acids and bases, whose strengths are " leveled " by the fact that they are all totally converted into H 3 O + or OH – ions in water. By studying them in appropriate non-aqueous solvents which are poorer acceptors or donors of protons, their relative strengths can be determined.
10.6: Types of Acids and Bases
- Given the formula of a binary compound of hydrogen with an element of Z<18, predict whether its aqueous solution will be acidic or basic, and write an appropriate equation.
- Do the same for an oxygen- or hydroxy compound of a similar element.
- Define an oxyacid , and explain why many of these are very strong acids .
- Write an equation describing the amphoteric nature of zinc or aluminum hydroxide.
- Define an acid anhydride , and write an equation describing its behavior.
- Explain how metal cations can give acidic solutions.
- Write equations showing why aqueous solutions of some salts are acidic, while others are alkaline.
- Write the formulas of an organic acid and an organic base , and write an equation showing why the latter gives an alkaline solution in water.
You will already have noticed that not every compound that contains hydrogen atoms is acidic; ammonia NH 3 , for example, gives an alkaline aqueous solution. Similarly, some compounds containing the group -OH are basic, but others are acidic. An important part of understanding chemistry is being able to recognize what substances will exhibit acidic and basic properties in aqueous solution. Fortunately, most of the common acids and bases fall into a small number of fairly well-defined groups, so this is not particularly difficult.
Binary Hydrides
Strictly speaking, the term hydride refers to ionic compounds of hydrogen with the electropositive metals of Groups 1-2 ; these contain the hydride ion , H – , and are often referred to as "true" hydrides. However, the term is often used in its more general sense to refer to any binary compound MH n in which M stands for any element. The hydride ion is such a strong base that it cannot exist in water, so salts such as sodium hydride react with water to yield hydrogen gas and an alkaline solution:
\[NaH + H_2O \rightarrow Na^+ + OH^– + H_2\]
The more electronegative elements form covalent hydrides which generally react as acids, a well-known example being hydrogen chloride, a gas which dissolves readily in water to give the solution we know as hydrochloric acid
\[HCl_{(g)} + H_2O_{(l)} \rightarrow H_3O^+ + Cl^–\]
Most of the covalent hydrogen compounds are weak acids— in some cases, such as methane, CH 4 , so weak that their acidic properties are rarely evident. Many, such as H 2 O and NH 3 , are amphiprotic. The latter compound, ammonia, is a weaker acid than H 2 O, so it exhibits basic properties in water
\[NH_3 + H_2O \rightarrow NH_4^+ + OH^–\]
but behaves as an acid in non-aqueous solvents such as liquid ammonia itself:
\[NH_3 + NH_3 \rightarrow NH_4^+ + NH_2^–\]
In general, the acidity of the non-metallic hydrides increases with the atomic number of the element to which it is connected. Thus as the element M moves from left to right across the periodic table or down within a group, the acids MH become stronger, as indicated by the acid dissociation constants shown at the right. Note that
- The formulas shown in red represent "strong" acids (that is, acids stronger than H 3 O + .)
- Hydrofluoric acid is the only weak member of the hydrohalogen acids.
- Acids weaker than water do not behave as acids in aqueous solution. Thus for most practical purposes, methane and ammonia are not commonly regarded as acids. H 2 O itself is treated as an acid only in the narrow context of aqueous solution chemistry.
Attempts to explain these trends in terms of a single parameter such as the electronegativity of M tend not to be very useful. The reason is that acid strengths depend on a number of factors such as the strength of the M-H bond and the energy released when the resultant ions become hydrated in solution. This last factor plays a major role in making HF something of an anomaly amongst the strong acids of Group 17.
Ammonia is such a weak acid that its conjugate base, amide ion NH 2 – , cannot exist in water. In aqueous solution, NH 3 acts as a weak base, accepting a proton from water and leaving a OH – ion. An aqueous solution of NH 3 is sometimes called “ammonium hydroxide”. This misnomer reflects the pre-Brønsted view that all bases contain –OH units that yield hydroxide ions on dissociation according to the Arrhenius scheme
\[NH_4OH \rightleftharpoons NH_4^+ + OH^–\]
A solution of ammonia in water is more correctly referred to as "aqueous ammonia" and represented by the formula NH 3 ( aq ). There is no physical evidence for the existence of NH 4 OH, but the name seems to remain forever etched on reagent bottles in chemical laboratories and in the vocabularies of chemists.
Hydroxy Compounds
Compounds containing the hydroxyl group –OH constitute the largest category of acids, especially if the organic acids (discussed separately farther on) are included. M–OH compounds also include many of the most common bases.
Whether a compound of the general type M–O–H will act as an acid or a base depends on the relative tendencies of the M–O and the O–H bonds to break apart in water. If the M–O bond cleaves more readily, then the –OH part will tend to retain its individuality and, with its negative charge, will become a hydroxide ion. If the O–H bond breaks, the MO– part of the molecule will remain intact as an oxyanion MO – , and release of the proton will cause the MOH compound to act as an acid.
This is not solely a matter of the relative strengths of the two bonds; the energy change that occurs when the resulting ions interact with water molecules is also an important factor.
In general, if M is a metallic element, the metal hydroxide compound \(\ce{MOH}\) will be basic. The case of the highly electropositive elements of Groups 1 and 2 is somewhat special in that their solid MOH compounds exist as interpenetrating lattices of metal cations and OH – ions, so those that can dissolve readily in water form strongly alkaline solutions; KOH and NaOH are well known examples of strong bases. From the Brønsted standpoint, these different “bases” are really just different sources for the single strong base OH – . As one moves into Group 2 of the periodic table the M-OH compounds become less soluble; thus a saturated solution of Ca(OH) 2 (commonly known as limewater ) is only weakly alkaline. Hydroxides of the metallic elements of the p-block and of the transition metals are so insoluble that their solutions are not alkaline at all. Nevertheless these solids dissolve readily in acidic solutions to yield a salt plus water, so they are formally bases.
The acidic character of hydroxy compounds of the nonmetals, known collectively as oxyacids , is attributed to the displacement of negative charge from the hydroxylic oxygen atom by the electronegative central atom. The net effect is to make the oxygen slightly more positive, thus easing the departure of the hydrogen as H + . The presence of other electron-attracting groups on the central atom has a marked effect on the strength of an oxyacid. Of special importance is the doubly-bonded oxygen atom. With the exception of the hydrohalic acids, all of the common strong acids contain one or more of these oxygens, as in sulfuric acid SO 2 (OH) 2 , nitric acid NO 2 (OH) and phosphoric acid PO(OH) 3 . In general the strengths of these acids depend more on the number of oxygens than on any other factor, so periodic trends are not so important.
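The statement that oxyacid strength depends mainly on the number of oxygens is often quantified by Pauling's empirical rule for an oxyacid of formula EOp(OH)q: pKa1 ≈ 8 − 5p, where p is the number of terminal (non-hydroxyl) oxygen atoms. This rule is not part of the text above; it is a rough rule of thumb, sketched here:

```python
# Pauling's empirical rule for an oxyacid E O_p (OH)_q:  pKa1 ~ 8 - 5p,
# where p counts terminal (non-hydroxyl) oxygen atoms.  A rough rule of
# thumb only; observed values quoted in the comments are approximate.

def pauling_pKa(p: int) -> int:
    return 8 - 5 * p

for acid, p in [("HOCl", 0), ("H3PO4", 1), ("HNO3", 2), ("H2SO4", 2)]:
    print(f"{acid:6s} p = {p}: estimated pKa1 ~ {pauling_pKa(p)}")
# HOCl: estimate 8 (observed ~7.5); H3PO4: estimate 3 (observed ~2.1);
# HNO3 and H2SO4: estimate -2, i.e. strong acids.
```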
Most of the halogen elements form more than one oxyacid. Fluorine is an exception; being more electronegative than oxygen, no oxyacids of this element are known. Chlorine is the only halogen for which all four oxyacids are known, and the K a values for this series show how powerfully the Cl–O oxygen atoms affect the acid strength.
Oxygen compounds
Binary oxides that contain no hydrogen atoms can exhibit acid-base behavior when they react with water. The division between acidic and basic oxides largely parallels that between the hydroxy compounds. The oxygen compounds of the highly electropositive metals of Groups 1-2 actually contain the oxide ion O 2– . This ion is another case of a proton acceptor that is stronger than OH – , and thus cannot exist in aqueous solution. Ionic oxides therefore tend to give strongly alkaline solutions:
\[\ce{O^{2-} + H2O -> 2OH^{-} (aq)}\]
In some cases, such as that of MgO, the solid is so insoluble that little change in pH is noticed when it is placed in water. CaO, however, which is known as quicklime , is sufficiently soluble to form a strongly alkaline solution with the evolution of considerable heat; the result is the slightly-soluble slaked lime , Ca(OH) 2 . Oxygen compounds of the transition metals are generally insoluble solids having rather complex extended structures. Although some will dissolve in acids, they display no acidic properties in water.
Amphoteric oxides and hydroxides
The oxides and hydroxides of the metals of Group 3 and higher tend to be only weakly basic, and most display an amphoteric nature. Most of these compounds are so slightly soluble in water that their acidic or basic character is only obvious in their reactions with strong acids or bases. In general, these compounds tend to be more basic than acidic; thus the oxides and hydroxides of aluminum, iron, and zinc all dissolve in mildly acidic solutions, whereas they require treatment with concentrated hydroxide ion solutions to react as acids.
| reaction as a base (with acid) | reaction as an acid (with base) |
|---|---|
| Al(OH) 3 + 3 H + → Al 3+ (aq) + 3 H 2 O | Al(OH) 3 (s) + OH – → Al(OH) 4 – (aq) |
| Zn(OH) 2 + 2 H + → Zn 2+ (aq) + 2 H 2 O | Zn(OH) 2 (s) + 2 OH – → Zn(OH) 4 2– (aq) |
| FeO(OH) + 3 H + → Fe 3+ (aq) + 2 H 2 O | Fe 2 O 3 (s) + 2 OH – → 2 FeO 2 – (aq) + H 2 O |
The product ions in the second column are known as aluminate, zincate, and ferrate. Other products, in which only some of the –OH groups of the parent hydroxides are deprotonated, are also formed, so there are actually whole series of these oxyanions for most metals.
Amphiprotic vs. amphoteric: what's the difference?
An amphoteric substance is one that can act as either an acid or a base. An amphiprotic substance can act as either a proton donor or a proton acceptor. So all amphiprotic compounds are also amphoteric. An example of an amphoteric compound that is not amphiprotic is ZnO, which can act as an acid even though it has no protons to donate:
\[\ce{ZnO(s) + 2 OH^{-}(aq) + H2O -> Zn(OH)4^{2-}(aq)}\]
As a base, it "accepts" protons but does not retain them:
\[\ce{ZnO(s) + 2H^{+} <=> Zn^{2+} + H_2O}\]
The same remarks can be made about the other compounds shown in the table above. For most practical purposes, the distinction between amphiprotic and amphoteric is not worth worrying about.
Acid anhydrides
The binary oxygen compounds of the non-metallic elements tend to produce acidic solutions when they are added to water. Such compounds are sometimes referred to as acid anhydrides (“acids without water”.)
\[CO_2 + H_2O \rightarrow H_2CO_3\]
\[SO_2 + H_2O \rightarrow [H_2SO_3]\]
\[SO_3 + H_2O \rightarrow H_2SO_4\]
\[P_4O_{10} + 6 H_2O \rightarrow 4 H_3PO_4\]
In some cases, the reaction involves more than simply incorporating the elements of water. Thus nitrogen dioxide, used in the commercial preparation of nitric acid, is not an anhydride in the strict sense:
\[3 NO_2 + H_2O \rightarrow 2 HNO_3 + NO\]
Metal cations as acids
When sodium chloride is dissolved in pure water, the pH remains unchanged because neither ion reacts with water. However, a solution of magnesium chloride will be faintly acidic, and a solution of iron(III) chloride FeCl 3 will be distinctly so. How can this be? Since none of these cations contains hydrogen, we can only conclude that the protons come from the water.
The water molecules in question are those that find themselves close to any cation in aqueous solution; the positive field of the metal ion interacts with the polar H 2 O molecule through ion-dipole attraction, and at the same time increases the acidity of these loosely-bound waters by facilitating the departure of an H + ion. In general, the smaller and more highly charged the cation, the more acidic it will be; the acidity of the alkali metal cations and of ions like Ag + (aq) is negligible, but for more highly-charged ions such as Mg 2+ , Pb 2+ and Al 3+ , the effect is quite noticeable.
Most of the transition-metal cations form organized coordination complexes in which four or six H 2 O molecules are chemically bound to the metal ion where they are well within the influence of the coulombic field of the cation, and thus subject to losing a proton. Thus an aqueous solution of "Fe 3+ " is really a solution of the ion hexaaquo iron III , whose first stage of "dissociation" can be represented as
\[Fe(H_2O)_6^{3+} + H_2O \rightarrow Fe(H_2O)_5(OH)^{2+} + H_3O^+\]
As a consequence of this reaction, a solution of FeCl 3 turns out to be a stronger acid than an equimolar solution of acetic acid. A solution of FeCl 2 , however, will be a much weaker acid; the +2 charge is considerably less effective in easing the loss of the proton.
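This comparison is easy to verify by solving the weak-acid equilibrium Ka = x²/(C − x) for x = [H3O+]. The Ka of the hexaaquo iron(III) ion used below (≈ 6.3 × 10⁻³) is a typical literature value, assumed for illustration; the acetic acid Ka of 1.8 × 10⁻⁵ appears elsewhere in this text:

```python
import math

# [H3O+] in a solution of a weak monoprotic acid: solve Ka = x^2/(C - x)
# exactly via the quadratic formula, x = (-Ka + sqrt(Ka^2 + 4*Ka*C)) / 2.

def hydronium(Ka: float, C: float) -> float:
    return (-Ka + math.sqrt(Ka * Ka + 4.0 * Ka * C)) / 2.0

C = 0.10          # mol/L, equimolar solutions of each acid
Ka_Fe = 6.3e-3    # Fe(H2O)6^3+; assumed literature value (pKa ~ 2.2)
Ka_HAc = 1.8e-5   # acetic acid (value given in this text)

for name, Ka in [("Fe(H2O)6^3+ (FeCl3)", Ka_Fe), ("CH3COOH", Ka_HAc)]:
    pH = -math.log10(hydronium(Ka, C))
    print(f"0.1 M {name:20s} pH ~ {pH:.2f}")
```

The FeCl3 solution comes out with a markedly lower pH than the equimolar acetic acid solution, as the text asserts.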
It should be possible for a hydrated cation to lose more than one proton. For example, an Al(H 2 O) 6 3+ ion should form, successively, the following species:
AlOH(H 2 O) 5 2+ → Al(OH) 2 (H 2 O) 4 + → Al(OH) 3 (H 2 O) 3 0 → Al(OH) 4 (H 2 O) 2 – → Al(OH) 5 (H 2 O) 2– → Al(OH) 6 3–
However, removal of protons becomes progressively more difficult as the charge decreases from a high positive value to a negative one; the last three species have not been detected in solution. In dilute solutions of aluminum chloride the principal species are actually Al(H 2 O) 6 3+ (commonly represented simply as Al 3+ ) and AlOH(H 2 O) 5 2+ ("AlOH 2+ ").
Salts
When salts dissolve in water, they yield solutions of anions and cations, so their effects on the pH of the solution will depend on the properties of the particular pair of ions. For a salt such as sodium chloride, the solution will remain neutral because sodium ions have no acidic properties, and chloride ions, being conjugate to the strong acid HCl, have negligible proton-accepting tendencies. Ions of this kind are often referred to as "strong" ions (that is, derived from a strong acid and a strong base— HCl and NaOH in the case of NaCl.) The possible outcomes for the other three possibilities are shown below.
| salt derived from | example | pH | reaction |
|---|---|---|---|
| weak acid + strong base | \(\ce{NaF}\) | >7 | F – + H 2 O → HF + OH – |
| strong acid + weak base | \(\ce{NH4Cl}\) | <7 | NH 4 + + H 2 O → NH 3 + H 3 O + |
| weak acid + weak base | \(\ce{NH4F}\) | ? | depends on competition between above two reactions; need to do calculation |
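For the weak acid + weak base case, the calculation alluded to in the table reduces (when the salt concentration is not too small) to the standard approximation pH ≈ ½[pKa(cation) + pKa(anion's conjugate acid)]. The pKa values below are typical literature figures, assumed for illustration:

```python
# Approximate pH of a salt of a weak base and a weak acid (e.g. NH4F),
# using the standard result pH ~ (pKa(BH+) + pKa(HA)) / 2, valid when the
# salt concentration is not too small.  pKa values are assumed literature
# figures, not taken from this text.

def salt_pH(pKa_cation: float, pKa_anion_conj_acid: float) -> float:
    return 0.5 * (pKa_cation + pKa_anion_conj_acid)

pKa_NH4 = 9.25   # NH4+ (conjugate acid of NH3)
pKa_HF  = 3.17   # HF (conjugate acid of F-)

pH = salt_pH(pKa_NH4, pKa_HF)
print(f"NH4F solution: pH ~ {pH:.2f} ({'acidic' if pH < 7 else 'alkaline'})")
```

Because HF is a somewhat stronger acid than NH4+, the two hydrolysis reactions do not quite cancel and the solution comes out slightly acidic.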
The reactions that cause salt solutions to have non-neutral pH values are sometimes still referred to by the older term hydrolysis (“water splitting”)— a reminder of times before the concept of proton transfer acid-base reactions had developed.
Organic Acids and Bases
The carboxyl group –CO(OH) is the characteristic functional group of the organic acids. The acidity of the carboxylic hydrogen atom is due almost entirely to electron-withdrawal by the non-hydroxylic oxygen atom; if it were not present, we would have an alcohol –COH whose acidity is smaller even than that of H 2 O. This partial electron withdrawal from one atom can affect not only a neighboring atom, but that atom’s neighbor as well. Thus the strength of a carboxylic acid will be affected by the bonding environment of the carbon atom to which it is connected. This propagation of partial electron withdrawal through several adjacent atoms is known as the inductive effect and is extremely important in organic chemistry.
A very good example of the inductive effect produced by chlorine (another highly electronegative atom) is seen by comparing the strengths of acetic acid and of the successively more highly substituted chloroacetic acids:
| acid | Ka |
|---|---|
| acetic acid, CH 3 –COOH | 1.8 × 10 –5 |
| monochloroacetic acid, ClCH 2 –COOH | 0.0014 |
| dichloroacetic acid, Cl 2 CH–COOH | |
| trichloroacetic acid, Cl 3 C–COOH | |
Phenols
The acidic character of the carboxyl group is really a consequence of the enhanced acidity of the –OH group as influenced by the second oxygen atom that makes up the –COOH group. The benzene ring has a similar although weaker electron-withdrawing effect, so hydroxyl groups attached to benzene rings also act as acids. The best-known example of such an acid is phenol, C 6 H 5 OH, also known as carbolic acid . Compared to carboxylic acids, phenolic acids are quite weak, as indicated by the acid dissociation constants listed below:
| acid | formula | \(K_a\) |
|---|---|---|
| acetic acid | CH₃–COOH | 1.8 × 10⁻⁵ |
| phenol | C₆H₅–OH | 1.1 × 10⁻¹⁰ |
| benzoic acid | C₆H₅–COOH | 6.3 × 10⁻⁵ |
Amines and organic bases
We have already discussed organic acids, so perhaps a word about organic bases would be in order. The –OH group, when bonded to carbon, is acidic rather than basic, so alcohols are not the analogs of the inorganic hydroxy compounds. The amines, consisting of the –NH 2 group bonded to a carbon atom, are the most common class of organic bases. Amines give weakly alkaline solutions in water:
\[CH_3NH_2 + H_2O \rightleftharpoons CH_3NH_3^+ + OH^-\]
Amines are end products of the bacterial degradation of nitrogenous organic substances such as proteins. They tend to have rather unpleasant “rotten fish” odors. This is no coincidence, since seafood contains especially large amounts of nitrogen-containing compounds which begin to break down very quickly. Methylamine CH 3 NH 2 , being a gas at room temperature, is especially apt to make itself known to us. Addition of lemon juice or some other acidic substance to fish will convert the methylamine to the methylaminium ion CH 3 NH 3 + . Because ions are not volatile they have no odor.
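To see just how weakly alkaline an amine solution is, here is a sketch that computes the pH of a methylamine solution; the commonly quoted \(K_b \approx 4.4 \times 10^{-4}\) is an assumed literature value, and 0.10 M is illustrative.

```python
import math

KB_METHYLAMINE = 4.4e-4   # commonly quoted Kb; treat as an assumed value

def ph_of_weak_base(kb, conc):
    """pH of a weak base B + H2O <-> BH+ + OH-,
    solving Kb = x^2 / (C - x) exactly for x = [OH-]."""
    oh = (-kb + math.sqrt(kb * kb + 4 * kb * conc)) / 2
    return 14.0 + math.log10(oh)   # pH = 14 - pOH at 25 °C

print(round(ph_of_weak_base(KB_METHYLAMINE, 0.10), 1))  # → 11.8
```

A pH near 12 is distinctly alkaline, but far below what the same concentration of a strong base such as NaOH would give (pH 13).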
10.7: Acid-Base Gallery
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.
- Describe some of the special properties of sulfuric acid that make it especially important both in the laboratory and in industry.
- Name the major acids and bases that are important to the fertilizer industry
- Name the natural sources of any three of the major organic acids.
- What are fatty acids ? In what major way do the physical properties of saturated and unsaturated fatty acids differ?
- Describe the general structure of an amino acid , and state why they are important.
Acids and bases are of interest not only to the chemically inclined; they play a major role in our modern industrial society — so anyone who participates in it, or who is interested in its history and development, needs to know something about them. Six of the major acids and bases fall into the "Top 20" industrial chemicals manufactured in the world. The following table shows year-2000 figures for the U.S:
| chemical and rank | Sulfuric acid - 1 | Lime (CaO) | Phosphoric acid - 4 | Ammonia | Sodium hydroxide - 9 | Nitric acid - 11 |
|---|---|---|---|---|---|---|
| production in 10⁹ kg | 40 | 20 | 16 | 15 | 11 | 8 |
| major use | chemicals | cement | fertilizers | fertilizers | chemicals | chemicals |
The mineral acids
This term refers to any inorganic acid, but its common use is usually limited to the major strong acids plus phosphoric acid. The major mineral acids— sulfuric, nitric, and hydrochloric— have been known since medieval times. Their discovery is usually credited to the Persian alchemist Abu Musa Jabir ibn Hayyan, known in the West by his Latinized name Geber. Jabir also invented aqua regia , the mixture of nitric and hydrochloric acids that has the unique ability to dissolve gold.
Sulfuric acid
More sulfuric acid is manufactured than any other industrial chemical, and it is the cheapest industrial acid worldwide. It has been continuously manufactured in the U.S. since 1793 and in Europe for much longer.
What you should know about it
- Pure anhydrous H 2 SO 4 is a dense, viscous liquid which melts at 10.4°C and boils at about 300°C, decomposing back into its constituents, H 2 O and SO 3 .
- The acid undergoes autoprotolysis :
\[2 H_2SO_4 \rightarrow H_3SO_4^+ + HSO_4^-\]
- Its high boiling point makes the acid ideal for making other acids, such as nitric and hydrochloric, which are more volatile; removal of the gaseous product drives the reaction to the right as predicted by the Le Chatelier Principle :
\[H_2SO_4 + 2 NaCl \rightarrow 2 Na^+ + SO_4^{2-} + 2 HCl_{(g)}\]
- Sulfuric acid has a voracious appetite for water, and thus is an excellent dehydrating agent . This is seen most spectacularly if some concentrated acid is poured onto a small pile of table sugar; after a short time, a vigorous reaction ensues, leaving a pile of porous, steaming carbon:
\[C_{12}H_{22}O_{11(s)} \rightarrow 12\, C_{(s)} + 11\, H_2O\]
- Sulfuric acid can even dehydrate itself:
\[H_2SO_4 + H_2SO_4 \rightarrow HS_2O_7^- + H_3O^+\]
It also forms a series of hydrates, some of which have been detected on the surface of Jupiter's moon Europa.
- Owing to the autoprotolysis and self-dehydration reactions described above, "pure" sulfuric acid contains at least six minority species in addition to H 2 SO 4 .
How it is made
Sulfur trioxide, the anhydride of sulfuric acid, is the immediate precursor. Gaseous SO 3 reacts vigorously with water, liberating much heat in the process:
\[SO_{3(g)} + H_2O_{(l)} → H_2SO_{4(l)}\]
Industrial manufacture of the acid starts with sulfur dioxide, prepared from burning elemental sulfur or obtained as a byproduct from roasting sulfide ores. The oxidation of SO 2 to SO 3 looks simple
\[SO_{2(g)} + ½ O_{2(g)} → SO_{3(g)}\]
but there are several complications:
- All chemical reactions take place more rapidly at higher temperatures, but because this reaction is highly exothermic, raising the temperature decreases the yield.
- Because of the decrease in volume (1.5 moles of gas to 1 mole), raising the pressure would also increase the yield; in practice, the conversion attainable at moderate pressures is already high, and the reaction is carried out at a temperature below 600°C.
- Specialized catalysts are used to speed up the reaction at these lower temperatures.
- Dissolving the SO 3 directly in water would release large amounts of heat, creating a mist of fine acid droplets that would escape into the atmosphere. The SO 3 is instead dissolved in sulfuric acid to form pyrosulfuric acid or oleum , sometimes known as fuming sulfuric acid :
\[H_2SO_{4(l)} + SO_{3(g)} → H_2S_2O_{7(l)}\]
- The oleum is then treated with water to form industrial grade (96-98%) sulfuric acid:
\[H_2S_2O_{7(l)} + H_2O_{(l)} \rightarrow 2 H_2SO_{4(l)}\]
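The yield-versus-temperature trade-off for the exothermic SO₂ oxidation can be sketched with the van 't Hoff equation. In the snippet below, ΔH° ≈ −99 kJ per mole of SO₃ formed is an assumed literature value, and the two temperatures are illustrative:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def k_ratio(delta_h, t1, t2):
    """van 't Hoff: ln(K2/K1) = -(ΔH°/R)(1/T2 - 1/T1)."""
    return math.exp(-(delta_h / R) * (1.0 / t2 - 1.0 / t1))

# ΔH° ≈ -99 kJ per mole of SO3 formed (assumed literature value)
ratio = k_ratio(-99e3, 298.0, 873.0)   # 25 °C vs 600 °C
print(f"K(600 °C)/K(25 °C) ≈ {ratio:.0e}")  # many orders of magnitude below 1
```

The equilibrium constant falls by many orders of magnitude on heating, which is why the process temperature is kept as low as the catalyst allows.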
What it is used for
Sulfuric acid has a broad spectrum of industrial uses, and the annual tonnage follows the economic cycle quite closely.
- Sixty percent of worldwide product goes into the manufacture of phosphoric acid H 3 PO 4 which is used to make phosphate fertilizers and phosphate-based household detergents.
- An important nitrogen fertilizer, ammonium sulfate (NH 4 ) 2 SO 4 , is made by reacting sulfuric acid with ammonia, the latter often obtained from the thermal decomposition of coal.
- Sulfuric acid is the major component of pickling acid that is used to remove surface oxide scale from steel before it is fabricated into steel product for the automobile and other industries.
- Aluminum sulfate (made from bauxite Al 2 O 3 with H 2 SO 4 ), is widely used in the papermaking industry to coagulate the cellulose fibers, producing a smooth, hard paper surface. Another major use is to make aluminum hydroxide which is used to filter out particulate matter in water treatment facilities.
- The familiar lead-acid storage battery employs sulfuric acid as an electrolyte. As the battery discharges, the concentration of sulfuric acid in the electrolyte decreases as sulfate ions are taken up as PbSO 4 . Owing to the high density of the acid, the state of charge of the battery can be measured by means of a hydrometer.
Sulfuric acid in the environment
Acid Rain - Combustion of fossil fuels which contain organic sulfur compounds releases SO 2 into the atmosphere. Photochemical oxidation of this compound to SO 3 , which rapidly takes up moisture, leads to the formation of H 2 SO 4 , a major component of acid rain.
Acid mine drainage results when sediments of the very common iron pyrite , FeS2, are exposed to air and are oxidized:
\[FeS_{2(s)} + \tfrac{7}{2} O_2 + H_2O \rightarrow Fe^{2+} + 2 SO_4^{2-} + 2 H^+\]
Further oxidation of the iron to Fe³⁺ results in additional reactions. The resulting drainage liquid is often orange-brown in color and can have a pH below zero.
Nitric acid
Anhydrous HNO 3 is a colorless liquid boiling at 82.6°C, but "pure" HNO 3 only exists as the solid which melts at –41.6°C. In its liquid and gaseous states, the acid is always partially decomposed into nitrogen dioxide:
\[2 HNO_3 \rightarrow 2 NO_2 + \tfrac{1}{2} O_2 + H_2O\]
This reaction, which is catalyzed by light, accounts for the brownish color of HNO 3 solutions.
What you should know about it
- HNO 3 undergoes autoprotolysis to a greater extent than any other liquid. Further reactions of the conjugate acid H 2 NO 3 + with HNO 3 lead to a complicated mixture of species in the liquid.
- Dilute nitric acid can be concentrated by distillation up to a maximum of 68%, at which point it forms a constant-boiling (azeotropic) mixture with water. Higher concentrations require dehydration with sulfuric acid; the result is fuming nitric acid.
- "Concentrated nitric acid" is sold as a 70% solution in water, corresponding to a concentration of about 16M.
- Nitric acid is a very strong oxidizing agent , which adds to its corrosive behavior with organic materials including, of course, skin, which it turns yellow owing to a reaction with the protein keratin . Reactions with many organic compounds are highly exothermic and often violent. The well-known reaction of nitric acid with metallic copper produces copious amounts of brown nitrogen dioxide gas.
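The "70% ≈ 16 M" figure above is easy to check from the mass fraction and density; the density of 1.42 g/mL for 70% nitric acid is an assumed typical value.

```python
def molarity(mass_fraction, density_g_per_ml, molar_mass):
    """mol/L = (1000 mL/L) * ρ(g/mL) * w / M(g/mol)."""
    return 1000.0 * density_g_per_ml * mass_fraction / molar_mass

# 70% HNO3 with an assumed typical density of 1.42 g/mL; M(HNO3) = 63.0 g/mol
print(round(molarity(0.70, 1.42, 63.0)))  # → 16
```

The same arithmetic works for any concentrated reagent whose label gives mass percent and density.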
How it is made
The simplest method, which was used industrially before 1900, was by treatment of sodium nitrate ("Chile saltpeter", NaNO 3 ) with sulfuric acid. The direct synthesis of the acid from atmospheric nitrogen and oxygen is thermodynamically favorable
\[\tfrac{1}{2} N_2 + \tfrac{5}{4} O_2 + \tfrac{1}{2} H_2O \rightarrow HNO_3\]
but is kinetically hindered by an extremely high activation energy, a fact for which we can be most thankful (see sidebar.) The first industrial nitrogen fixation process, developed in 1903, used this reaction to produce nitric acid, but it required the use of an electric arc to supply the activation energy and was therefore too energy-intensive to be economical.
Note
If it were not for the high activation energy required to sustain this reaction, all of the oxygen in the atmosphere would be consumed and the oceans would be a dilute solution of nitric acid.
The modern Ostwald process involves the catalytic oxidation of ammonia to nitric oxide NO, which is oxidized in a further step to NO 2 ; reaction of the latter with water yields HNO 3 . This route, first developed in 1901, did not become practical until the large-scale production of ammonia by the Haber-Bosch process in 1910.
What it is used for
The major industrial uses of nitric acid are for the production of ammonium nitrate fertilizer , and in the manufacture of explosives . On a much smaller scale, the acid is used in metal pickling, etching semiconductors and electronic circuit boards, and in the manufacture of acrylic fibers.
In the laboratory, the acid finds use in a wide variety of roles.
In the environment
High-temperature combustion processes (in internal combustion engines, power plants, and incinerators) can oxidize atmospheric nitrogen to nitric oxide (NO) and other oxides ("NO x "); the NO is then photooxidized to NO 2 , which reacts with water to form HNO 3 which is a major component of acid rain . NO 2 is the major precursor of photochemical smog .
Hydrochloric acid
Unlike the other major acids, there is no such substance as "pure" hydrochloric acid; what we call "hydrochloric acid" is just an aqueous solution of hydrogen chloride gas (bp –84°C). But in a sense it is more "pure" than the acids discussed above, since there is no autoprotolysis; hydronium and chloride ions are the only significant species in the solution. Hydrochloric acid is usually sold as a 32-38% (12M) solution of HCl in water; concentrations greater than this are known as fuming hydrochloric acid .
Note
Hydrochloric acid is still sometimes sold under its older name muriatic acid for cleaning bricks and other household purposes. The name comes from the same root as marine, reflecting its preparation from salt.
The acid has been known to chemists (and alchemists), and used for industrial purposes, since the middle ages. Its composition HCl was demonstrated by Humphry Davy in 1816.
What you should know about it
- Hydrochloric acid is the least hazardous of the strong mineral acids to work with because unlike the other ones, it is not an oxidizing agent. It is usually the acid of choice for titrations and other operations in which the main requirement is simply a strong source of hydronium ions.
- The concentrated acid boils at 48°C. As boiling continues, it loses HCl and the boiling point rises to 109°C, at which point a constant-boiling (azeotropic) solution remains, consisting of 20.2% HCl.
What it is used for
The uses of hydrochloric acid are far too many to enumerate individually, but the following stand out:
- A major industrial use is to remove surface scale from iron or steel ("pickling") before it is processed into sheets or other forms, or galvanized or coated.
- Production of chlorinated organic chemicals , particularly vinyl chloride, polyurethanes, and other construction polymers, consumes huge amounts of HCl.
- The acid is widely used for pH control of water, including neutralization of wastewater streams, and for regenerating ion-exchange water softeners .
How it is made
The ancient method of treating salt with sulfuric acid to release HCl has long since been supplanted by more efficient processes, including direct synthesis by "burning" hydrogen gas in chlorine:
\[H_{2(g)} + Cl_{2(g)} \rightarrow 2 HCl_{(g)}\]
Most hydrochloric acid production now comes from reclaiming byproduct hydrogen chloride gas from other processes, especially those associated with the production of industrial organic compounds.
The Alkali Metals
The term alkali usually means a basic salt of a Group 1 or 2 ("alkali" or "alkaline earth") metal. All alkalies are of course bases, but the latter term is much more general, whether defined according to the Arrhenius, Brønsted-Lowry, or Lewis concepts. The word alkali comes from the Arabic al-qali , which refers to the ashes from which sodium and potassium hydroxides ( potash , "ashes remaining in the pot", and the origin of the element name potassium ) were extracted as a step in the making of soap.
Sodium hydroxide
Pure sodium hydroxide is a white solid consisting of Na + and OH – ions in a crystal lattice. Although it is widely thought of as an ionic solid, van der Waals forces make a substantial contribution to its stability.
What you should know about it
- Owing to its deliquescence (ability to take up moisture) and its tendency to react with carbon dioxide, the solid must be stored in a closed container.
- In industry, sodium hydroxide is commonly known as caustic soda or simply as caustic ; NaOH sold for household purposes is usually known as lye .
- Sodium hydroxide slowly attacks glass to form sodium silicate. Glass vessels used to store concentrated solutions gradually develop a cloudy coating on the inside.
- Some metals , notably aluminum, zinc, and titanium, react with strongly alkaline solutions, but iron and copper are immune to this kind of attack.
- Highly alkaline solutions also soften and dissolve skin , accounting for the slippery feeling associated with strong bases. Sodium hydroxide was once used to dispose of animal carcasses, digesting them into an easily disposable liquid form.
How it is made
Sodium hydroxide is now manufactured by the electrolysis of brine solutions, and along with chlorine, is one of the two major products of the chloralkali industry .
Electrolysis of aqueous NaCl produces Cl 2 at the anode, but because H 2 O can be reduced more readily than Na + , the water is decomposed to H 2 and OH – at the cathode, leaving a solution of NaOH. An older mercury cell process reduces the Na + to Na within a mercury amalgam (alloy), and the metallic sodium is then combined with water to produce NaOH and hydrogen. The net reaction for the reduction step is the same for both methods:
\[2 Na^+ + 2 H_2O + 2 e^- \rightarrow H_{2(g)} + 2 NaOH\]
The resulting solution is usually evaporated to such a high concentration that it solidifies at ordinary temperatures. It is commonly shipped in rail cars or barges which can be heated with steam to liquefy the mixture for removal. (It is obviously uneconomical to ship large quantities of water across the country!)
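Because the cathode reaction yields one mole of NaOH per mole of electrons, Faraday's law relates the charge passed through a cell directly to its output. A minimal sketch (the 1 kA current is illustrative, and 100% current efficiency is assumed):

```python
F = 96485.0    # Faraday constant, C/mol
M_NAOH = 40.0  # molar mass of NaOH, g/mol

def naoh_mass(current_amperes, hours):
    """Grams of NaOH produced, assuming 100% current efficiency
    (the cathode reaction yields 1 mol NaOH per mole of electrons)."""
    charge = current_amperes * hours * 3600.0   # coulombs
    return charge / F * M_NAOH

# an illustrative 1 kA cell running for one hour
print(f"{naoh_mass(1000.0, 1.0) / 1000.0:.2f} kg NaOH")  # → 1.49 kg NaOH
```

The same charge simultaneously liberates half a mole of Cl₂ per mole of NaOH at the anode, which is why the two commodities are economically coupled (see below).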
What it is used for
- Sodium hydroxide is one of the most diverse industrial chemicals in terms of its applications. Most householders know it as the active ingredient of drain cleaning agents .
- Huge quantities are consumed by the pulp and paper industry, which is probably its single largest specific industrial application. It is used to remove the lignin component of wood pulp from the cellulose so that the latter can be processed into paper.
- About half of the NaOH output goes into the production of a wide variety of other industrial chemicals , and in degreasing steel drums and other industrial surfaces.
- "Lye" plays a role in the processing of many types of foods , including chocolate, olives, pretzels, and the "hominy" and "grits" corn products used in the Southern U.S. It also acts as a chemical peeling agent for fruits and vegetables.
The economic push and pull of caustic and chlorine
In contrast to the extremely diverse applications of sodium hydroxide, which make the demand for this commodity relatively immune to the ups and downs of the economic cycle, the consumption of chlorine is directly dependent on the economy as reflected in the demand for polyvinyl chloride products that are now widely used in the construction and home furnishings industries. Because chlorine, being a gas, is expensive to store, the output of the chloralkali industry is governed largely by the demand for this commodity. When times are good this presents no problem; caustic is then largely a by-product and can easily be stockpiled if supply exceeds demand. But during an economic downturn, the demand for chlorine declines, limiting its production along with that of caustic. But because the demand for caustic tends to decline much less, it becomes scarce and its price rises, thus tending to drive the industrial economy into even deeper trouble.
Sodium carbonate
This compound is known industrially as soda ash , and domestically as washing soda . The common form is the decahydrate , Na 2 CO 3 ·10 H 2 O. The white crystals of this substance spontaneously lose water ( effloresce ) when exposed to the air, forming the monohydrate.
What it is used for
- Although carbonates are much weaker bases than hydroxides, a solution of sodium carbonate can still have a pH of 11 or so, sufficiently high to allow it to substitute for sodium hydroxide in many applications— especially when the price of caustic is high.
- The single most important use of soda ash is in the manufacture of glass, where it serves to lower the melting point of the principal component, SiO 2 .
- Another emerging major use is to neutralize the SO 2 emissions of fossil fuel-burning power plants.
- The older use of sodium carbonate as a cleaning agent (hence the name washing soda ) was based partly on the ability of its alkaline solutions to emulsify grease, but mainly as a means of precipitating the insoluble carbonates of calcium and magnesium before these ions (commonly present in hard water) could form undesirable precipitates with soaps. The use of modern detergents has largely eliminated this once important market.
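The "pH of 11 or so" follows from the carbonate ion's base strength, \(K_b = K_w/K_{a2}\). The sketch below assumes \(K_{a2} \approx 4.7 \times 10^{-11}\) for carbonic acid (a standard literature value) and an illustrative 0.10 M solution, and treats only the first protonation step:

```python
import math

def ph_of_carbonate(conc, ka2=4.7e-11, kw=1.0e-14):
    """Approximate pH of a Na2CO3 solution from the first protonation
    step only: CO3^2- + H2O <-> HCO3- + OH-, with Kb = Kw/Ka2."""
    kb = kw / ka2
    oh = (-kb + math.sqrt(kb * kb + 4 * kb * conc)) / 2   # exact quadratic
    return 14.0 + math.log10(oh)

print(round(ph_of_carbonate(0.10), 1))  # → 11.7
```

This is alkaline enough for degreasing and water treatment, yet two pH units milder than the same concentration of caustic.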
How it is made
Most of the world's sodium carbonate is made by the ammonia-soda "Solvay" process developed in 1861 by the Belgian chemist Ernest Solvay (1838-1922) whose patents made him into a major industrialist and a rich philanthropist. This process involves a set of simple reactions that essentially converts limestone (CaCO 3 ), ammonia NH 3 and brine (NaCl) into sodium bicarbonate NaHCO 3 and eventually Na 2 CO 3 , recycling several of the intermediate products in an ingenious way.
A minor source of soda ash (but quite significant in some countries, such as the U.S.) is the mining of natural evaporites (the remains of ancient lakes), such as the trona found in Southern California.
Ammonia
Ammonia NH 3 is of course not a true alkali, but it is conveniently included in this section for discussion purposes. Most people are familiar with the pungent odor of this gas, which can be detected at concentrations as low as 20-50 ppm.
Tradition dies slowly: a non-existent chemical available in bottles!
What you should know about it
- More moles of ammonia are manufactured than of any other industrial chemical.
- Ammonia is extremely soluble in water. An aqueous solution of ammonia is still sometimes referred to in commerce as "ammonium hydroxide", but this term is no longer favored by chemists because no such compound as NH 4 OH has ever been shown to exist. At neutral pH, about 99% of the ammonia in water exists as NH 4 + ions.
- Ammonia is an end product of nitrogen metabolism in most organisms. One source that may be familiar to parents of infants is the bacterial decomposition of the contents of diapers.
- Liquid ammonia (bp –33°C) is often used as an ionizing laboratory solvent.
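The "about 99% as NH₄⁺ at neutral pH" claim above follows from the ratio \([NH_3]/[NH_4^+] = K_a/[H_3O^+]\), with \(K_a(NH_4^+) = K_w/K_b\). A sketch using the usual \(K_b\) = 1.8 × 10⁻⁵ for ammonia:

```python
def fraction_nh4(ph, kb_nh3=1.8e-5, kw=1.0e-14):
    """Fraction of total ammonia present as NH4+ at a given pH,
    using Ka(NH4+) = Kw/Kb and [NH3]/[NH4+] = Ka/[H3O+]."""
    ka = kw / kb_nh3
    h = 10.0 ** (-ph)
    return 1.0 / (1.0 + ka / h)

print(f"{100 * fraction_nh4(7.0):.1f}% NH4+ at pH 7")  # → 99.4% NH4+ at pH 7
```

Only above pH ≈ 9.3 (the pKa of NH₄⁺) does free NH₃ become the dominant species.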
What it is used for
- The major use of ammonia (about 80%) is as a fertilizer, most commonly as anhydrous ammonia (the gas is injected directly into the soil) or after conversion to (NH 4 ) 2 SO 4 , NH 4 NO 3 , or urea CO(NH 2 ) 2 .
- Ammonia is used in the production of numerous polymers, including nylons and polyurethanes.
- Explosives manufacture accounts for about 5% of ammonia production.
- Beyond these, there are hundreds of minor uses, including as a household cleaning agent (aqua ammonia), a refrigerant, and a laboratory reagent.
How it is made
Ammonia is made by direct synthesis from the elements:
\[N_{2(g)} + 3 H_{2(g)} \rightarrow 2 NH_{3(g)}\]
... a simple-looking reaction, but one that required some very creative work to implement; the Haber-Bosch process is considered to be the most important chemical synthesis of the 20th Century.
Some important organic acids
Most acids are organic— there are millions of them. The acidic function is usually a hydroxyl group connected to a carbon that is bonded to an electron-withdrawing oxygen atom; the combination is the well-known carboxyl group , –COOH. Here are a few that are part of everyone's life.
Acetic acid
This is next to formic acid in being the simplest of the organic acids, and in the form of vinegar (a 5-8% solution in water) its characteristic odor is known to everyone. The pure acid is a colorless liquid above 16.7°C; below this temperature it forms a crystalline solid, hence the term "glacial acetic acid" that is commonly applied to the pure substance. The name of the acid comes from acetum , the Latin word for vinegar.
What you should know about it
- The pure acid, although quite weak in the proton-donor sense, is quite corrosive and its vapors are very irritating.
- A 1.0M solution of the acid has a pH of about 2.4, corresponding to only four out of every thousand CH 3 COOH molecules being dissociated.
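Both numbers in this statement can be verified by solving the dissociation equilibrium exactly, using the \(K_a\) = 1.8 × 10⁻⁵ given earlier in this chapter:

```python
import math

KA_ACETIC = 1.8e-5

def dissociation(ka, conc):
    """Return ([H3O+], fraction dissociated), solving Ka = x^2/(C - x) exactly."""
    x = (-ka + math.sqrt(ka * ka + 4 * ka * conc)) / 2
    return x, x / conc

h, alpha = dissociation(KA_ACETIC, 1.0)
print(f"pH = {-math.log10(h):.1f}, {1000 * alpha:.0f} per 1000 dissociated")
# → pH = 2.4, 4 per 1000 dissociated
```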
What it is used for
Slightly less than half of the world production of acetic acid goes into the production of polymers . The end product visible to most people would be the flexible plastic bottles in which drinking water is sold. Other uses are related mostly to the production of other chemicals, mainly acetic anhydride , but also including aspirin .
How it is made
Bacterial fermentation of sugars has been the source of vinegar since ancient times, and still accounts for most food-grade acetic acid and vinegar, but it now amounts to only about 10% of total acetic acid production:
\[C_6H_{12}O_6 \rightarrow 3 CH_3COOH\]
There are several important synthetic routes to acetic acid production, but the major one is by treating methanol with carbon monoxide:
\[CH_3OH + CO \rightarrow CH_3COOH\]
11: Chemical Equilibrium
The laws of chemical equilibrium define the direction in which a chemical reaction will proceed, as well as the quantities of reactants and products that will remain after the reaction comes to an end. An understanding of chemical equilibrium and how it can be manipulated is essential for anyone involved in Chemistry and its applications.
-
- 11.1: Introduction to Chemical Equilibrium
- Chemical change is one of the two central concepts of chemical science, the other being structure. The very origins of Chemistry itself are rooted in the observations of transformations such as the combustion of wood, the freezing of water, and the winning of metals from their ores that have always been a part of human experience. It was the quest for some kind of constancy underlying change that led the Greek thinkers of around 200 BCE to the idea of elements and later to that of the atom.
-
- 11.2: Le Chatelier's Principle
- The previous Module emphasized the dynamic character of equilibrium as expressed by the Law of Mass Action. This law serves as a model explaining how the composition of the equilibrium state is affected by the "active masses" (concentrations) of reactants and products. In this lesson, we develop the consequences of this law to answer the very practical question of how an existing equilibrium composition is affected by the addition or withdrawal of one of the components.
-
- 11.3: Reaction Quotient
- Consider a simple reaction such as the gas-phase synthesis of hydrogen iodide from its elements: \(H_2 + I_2 \rightarrow 2 HI\) Suppose you combine arbitrary quantities of \(H_2\), \(I_2\) and \(HI\). Will the reaction create more HI, or will some of the HI be consumed as the system moves toward its equilibrium state? The concept of the reaction quotient, which is the focus of this short lesson, makes it easy to predict what will happen.
-
- 11.4: Equilibrium Expressions
- You know that an equilibrium constant expression looks something like K = [products] / [reactants]. But how do you translate this into a format that relates to the actual chemical system you are interested in? This lesson will show you how to write the equilibrium constant expressions that you will need to use when dealing with the equilibrium calculation problems in the chapter that follows this one.
-
- 11.5: Equilibrium Calculations
- This page presents examples that cover most of the kinds of equilibrium problems you are likely to encounter in a first-year university course. Reading this page will not teach you how to work equilibrium problems! The only one who can teach you how to interpret, understand, and solve problems is yourself.
-
- 11.6: Phase Distribution Equilibria
- If two immiscible liquid phases are in contact and one contains a solute, how will the solute tend to distribute itself between the two phases? One’s first thought might be that some of the solute will migrate from one phase into the other until it is distributed equally between the two phases. This, however, does not take into account the differing solubilities the solute might have in the two solvents; the solute will preferentially migrate into the phase in which it is more soluble.
11.1: Introduction to Chemical Equilibrium
- Define " the equilibrium state of a chemical reaction system ". What is its practical significance?
-
State the meaning and significance of the following terms:
- reversible reaction
- quantitative reaction
- kinetically inhibited reaction
- Explain the meaning of the statement " equilibrium is macroscopically static, but microscopically dynamic ". Very important!
- Explain how the relative magnitudes of the forward and reverse reaction rate constants in the Mass Action expression affect the equilibrium composition of a reaction system.
- Describe several things you might look for during an experiment that would help determine if a reaction system is in its equilibrium state.
Chemical change is one of the two central concepts of chemical science, the other being structure . The very origins of Chemistry itself are rooted in the observations of transformations such as the combustion of wood, the freezing of water, and the winning of metals from their ores that have always been a part of human experience. It was, after all, the quest for some kind of constancy underlying change that led the Greek thinkers of around 200 BCE to the idea of elements and later to that of the atom. It would take almost 2000 years for the scientific study of matter to pick up these concepts and incorporate them into what would emerge, in the latter part of the 19th century, as a modern view of chemical change.
Chemical change: how far, how fast?
Chemical change occurs when the atoms that make up one or more substances rearrange themselves in such a way that new substances are formed. These substances are the components of the chemical reaction system; those components which decrease in quantity are called reactants, while those that increase are products. A given chemical reaction system is defined by a balanced net chemical equation which is conventionally written as
\[\text{reactants} \rightarrow \text{products}\]
The first thing we need to know about a chemical reaction represented by a balanced equation is whether it can actually take place. If the reactants and products are all substances capable of an independent existence, then in principle, the answer is always "yes". This answer must be qualified, however, by the following considerations:
- How complete is the reaction?
- That is, what fraction of the reactants are converted into products? Some reactions convert essentially 100% of reactants to products, while for others the quantity of products may be undetectable. Many are somewhere in between, meaning that significant quantities of all components remain at the end. Later on, in another part of the course, you will learn that the tendency of a reaction to occur can be predicted entirely from the properties of the reactants and products through the laws of thermodynamics.
- How fast does the reaction occur?
- Some reactions are over in microseconds; others take years. The speed of any one reaction can vary over a huge range depending on the temperature, the state of matter (gas, liquid, solid) and the presence of a catalyst. Unlike the question of completeness, there is no simple way of predicting reaction speed.
- What is the mechanism of the reaction?
- What happens, at the atomic or molecular level, when reactants are transformed into products? What intermediate species (those that are produced but later consumed so that they do not appear in the net reaction equation) are involved? This is the microscopic , or kinetic view of chemical change, and cannot be predicted by theory as it is presently developed and must be inferred from the results of experiments.
A reaction that is thermodynamically possible but for which no reasonably rapid mechanism is available is said to be kinetically limited . Conversely, one that occurs rapidly but only to a small extent is thermodynamically limited . As you will see later, there are often ways of getting around both kinds of limitations, and their discovery and practical applications constitute an important area of industrial chemistry.
What is equilibrium?
Basically, the term refers to what we might call a "balance of forces". In the case of mechanical equilibrium, this is its literal definition. A book sitting on a table top remains at rest because the downward force exerted by the earth's gravity acting on the book's mass (this is what is meant by the "weight" of the book) is exactly balanced by the repulsive force between atoms that prevents two objects from simultaneously occupying the same space, acting in this case between the table surface and the book. If you pick up the book and raise it above the table top, the additional upward force exerted by your arm destroys the state of equilibrium as the book moves upward. If you wish to hold the book at rest above the table, you adjust the upward force to exactly balance the weight of the book, thus restoring equilibrium.
An object is in a state of mechanical equilibrium when it is either static (motionless) or in a state of unchanging motion. From the relation f = ma , it is apparent that if the net force on the object is zero, its acceleration must also be zero, so if we can see that an object is not undergoing a change in its motion, we know that it is in mechanical equilibrium.
Thermal equilibrium
Another kind of equilibrium we all experience is thermal equilibrium . When two objects are brought into contact, heat will flow from the warmer object to the cooler one until their temperatures become identical. Thermal equilibrium arises from the tendency of thermal energy to become as dispersed or "diluted" as possible.
A metallic object at room temperature will feel cool to your hand when you first pick it up because the thermal sensors in your skin detect a flow of heat from your hand into the metal, but as the metal approaches the temperature of your hand, this sensation diminishes. The time it takes to achieve thermal equilibrium depends on how readily heat is conducted within and between the objects; thus a wooden object will feel warmer than a metallic object even if both are at room temperature because wood is a relatively poor thermal conductor and will therefore remove heat from your hand more slowly.
Thermal equilibrium is something we often want to avoid, or at least postpone; this is why we insulate buildings, perspire in the summer and wear heavier clothing in the winter.
Chemical equilibrium
When a chemical reaction takes place in a container which prevents the entry or escape of any of the substances involved in the reaction, the quantities of these components change as some are consumed and others are formed. Eventually this change will come to an end, after which the composition will remain unchanged as long as the system remains undisturbed. The system is then said to be in its equilibrium state , or more simply, "at equilibrium".
Why reactions go toward equilibrium
What is the nature of the "balance of forces" that drives a reaction toward chemical equilibrium? It is essentially the balance struck between the tendency of energy to reside within the chemical bonds of stable molecules, and its tendency to become dispersed and diluted. Exothermic reactions are particularly effective in this, because the heat released gets dispersed in the infinitely wider world of the surroundings.
In a reaction such as this one, the balance point might occur when, say, about 60% of the reactants have been converted to products. Once this equilibrium state has been reached, no further net change will occur; the only spontaneous changes allowed are those that carry the system toward maximum dispersal of energy.
Equilibrium is death!
Chemical equilibrium is something you definitely want to avoid for yourself as long as possible. The myriad chemical reactions in living organisms are constantly moving toward equilibrium, but are prevented from getting there by input of reactants and removal of products. So rather than being in equilibrium, we try to maintain a "steady-state" condition which physiologists call homeostasis — maintenance of a constant internal environment. Equilibrium is death!
The direction in which a chemical reaction is written (and thus which components are considered reactants and which are products) is arbitrary. Consider the following two reactions:
\[\underset{\text{synthesis of hydrogen iodide}}{H_2 + I_2 \rightarrow 2 HI} \label{10.1}\]
\[\underset{\text{dissociation of hydrogen iodide}}{2 HI \rightarrow H_2 + I_2} \label{10.2}\]
Equations \(\ref{10.1}\) and \(\ref{10.2}\) represent the same chemical reaction system in which the roles of the components are reversed, and both yield the same mixture of components when the change is completed. This is central to the concept of chemical equilibrium. It makes no difference whether we start with two moles of HI or one mole each of H 2 and I 2 ; once the change has run its course, the quantities of all the components will be the same. In general, then, we can say that the composition of a chemical reaction system will tend to change in a direction that brings it closer to its equilibrium composition . Once this equilibrium composition has been attained, no further change in the quantities of the components will occur as long as the system remains undisturbed.
The composition of a chemical reaction system will tend to change in a direction that brings it closer to its equilibrium composition.
The two diagrams below show how the concentrations of the three components of this chemical reaction change with time. Examine the two sets of plots carefully, noting which substances have zero initial concentrations, and are thus "products" of the reaction equations shown. Satisfy yourself that these two sets represent the same chemical reaction system , but with the reactions occurring in opposite directions. Most importantly, note how the final (equilibrium) concentrations of the components are the same in the two cases.
Whether we start with an equimolar mixture of H 2 and I 2 (left) or a pure sample of hydrogen iodide (shown on the right, using twice the initial concentration of HI to keep the number of atoms the same), the composition after equilibrium is attained (shaded regions on the right) will be the same.
The equilibrium composition is independent of the direction from which it is approached (i.e., the initial conditions).
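This independence of initial conditions can be checked numerically. The sketch below integrates hypothetical mass-action rate laws for H 2 + I 2 ⇌ 2 HI with a crude Euler scheme; the rate constants (kf = 1.0, kr = 0.2) are illustrative values chosen for this example, not measured ones.

```python
def equilibrate(h2, i2, hi, kf=1.0, kr=0.2, dt=0.001, steps=200_000):
    """Euler integration of H2 + I2 <=> 2 HI with mass-action rates."""
    for _ in range(steps):
        net = kf * h2 * i2 - kr * hi * hi   # net forward rate
        h2 -= net * dt
        i2 -= net * dt
        hi += 2 * net * dt                  # 2 HI formed per H2 consumed
    return h2, i2, hi

# Start from 1 mol/L each of H2 and I2 ...
left = equilibrate(1.0, 1.0, 0.0)
# ... or from 2 mol/L of pure HI (same total number of atoms).
right = equilibrate(0.0, 0.0, 2.0)

print(left)   # roughly (0.472, 0.472, 1.056)
print(right)  # the same composition, approached from the other side
```

Both runs settle to the composition fixed by K = kf/kr = 5, regardless of which side of the equation we start from.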
A chemical equation of the form A → B represents the transformation of A into B, but it does not imply that all of the reactants will be converted into products, or that the reverse reaction B → A cannot also occur. In general, both processes (forward and reverse) can be expected to occur, resulting in an equilibrium mixture containing finite amounts of all of the components of the reaction system. (We use the word components when we do not wish to distinguish between reactants and products.)
If the equilibrium state is one in which significant quantities of both reactants and products are present (as in the hydrogen iodide example given above), then the reaction is said to be incomplete or reversible . The latter term is preferable because it avoids confusion with "complete" in its other sense of being completed or finished, implying that the reaction has run its course and is now at equilibrium.
Note that there is no fundamental difference between the meanings of A → B and A ⇌ B. Some older textbooks just use A = B.
- If it is desired to emphasize the reversibility of a reaction, the single arrow in the equation is replaced with a pair of hooked lines pointing in opposite directions, as in A ⇌ B.
- A reaction is said to be complete or quantitative when the equilibrium composition contains no significant amount of the reactants. However, a reaction that is complete when written in one direction is said "not to occur" when written in the reverse direction.
In principle, all chemical reactions are reversible, but this reversibility may not be observable if the fraction of products in the equilibrium mixture is very small, or if the reverse reaction is very slow (the chemist's term is " kinetically inhibited ").
How did Napoleon Bonaparte help discover reversible reactions?
We can thank Napoleon for bringing the concept of reaction reversibility to Chemistry. Napoleon recruited the eminent French chemist Claude Louis Berthollet (1748-1822) to accompany him as scientific advisor on the most far-flung of his campaigns, the expedition to Egypt in 1798. Once in Egypt, Berthollet noticed deposits of sodium carbonate around the edges of some of the salt lakes found there. He was already familiar with the reaction
\[Na_2CO_3 + CaCl_2 \rightarrow CaCO_3 + 2 NaCl \label{10.3}\]
which was known to proceed to completion in the laboratory. He immediately realized that the Na 2 CO 3 must have been formed by the reverse of this process brought about by the very high concentration of salt in the slowly-evaporating waters. This led Berthollet to question the belief of the time that a reaction could only proceed in a single direction. His famous textbook Essai de statique chimique (1803) presented his speculations on chemical affinity and his discovery that an excess of the product of a reaction could drive it in the reverse direction.
Unfortunately, Berthollet got a bit carried away by the idea that a reaction could be influenced by the amounts of substances present, and maintained that the same should be true for the compositions of individual compounds. This brought him into conflict with the recently accepted Law of Definite Proportions (that a compound is made up of fixed numbers of its constituent atoms), so his ideas (the good along with the bad) were promptly discredited and remained largely forgotten for 50 years. (Ironically, it is now known that certain classes of compounds do in fact exhibit variable composition of the kind that Berthollet envisioned.)
What is the law of mass action?
Berthollet's ideas about reversible reactions were finally vindicated by experiments carried out by others, most notably the Norwegian chemists (and brothers-in-law) Cato Guldberg and Peter Waage. During the period 1864-1879 they showed that an equilibrium can be approached from either direction (see the hydrogen iodide illustration above), implying that any reaction
\[aA + bB \rightleftharpoons cC + dD\]
is really a competition between a "forward" and a "reverse" reaction. When a reaction is at equilibrium, the rates of these two reactions are identical, so no net (macroscopic) change is observed, although individual components are actively being transformed at the microscopic level.
Chemical Equilibrium is dynamic !
Guldberg and Waage showed that for a reaction
\[aA + bB \rightleftharpoons cC + dD\]
the rate (speed) of the reaction in either direction is proportional to what they called the "active masses" of the various components:
- rate of forward reaction = k f [A] a [B] b
- rate of reverse reaction = k r [C] c [D] d
in which the proportionality constants k are called rate constants and the quantities in square brackets represent concentrations. If we combine the two reactants A and B , the forward reaction starts immediately; then, as the products C and D begin to build up, the reverse process gets underway. As the reaction proceeds, the rate of the forward reaction diminishes while that of the reverse reaction increases. Eventually the two processes are proceeding at the same rate, and the reaction is at equilibrium:
rate of forward reaction = rate of reverse reaction
\[k_f [A]^a [B]^b = k_r [C]^c [D]^d\]
It is very important that you understand the significance of this relation. The equilibrium state is one in which there is no net change in the quantities of reactants and products. But do not confuse this with a state of "no change"; at equilibrium, the forward and reverse reactions continue, but at identical rates , essentially cancelling each other out.
Equilibrium is macroscopically static, but is microscopically dynamic! To further illustrate the dynamic character of chemical equilibrium, suppose that we now change the composition of the system previously at equilibrium by adding some C or withdrawing some A (thus changing their "active masses"). The reverse rate will temporarily exceed the forward rate, and a change in composition ("a shift in the equilibrium") will occur until a new equilibrium composition is achieved. The composition of the equilibrium state thus depends on the ratio of the forward and reverse rate constants.
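The equality of the two rates at equilibrium is easy to watch numerically. This sketch, using arbitrary illustrative rate constants for the elementary reaction A ⇌ B, steps the concentrations forward in time and then compares the forward and reverse rates:

```python
kf, kr = 2.0, 1.0        # arbitrary illustrative rate constants
a, b = 1.0, 0.0          # start with pure A at 1 mol/L
dt = 0.001
for _ in range(20_000):  # integrate for 20 time units
    net = kf * a - kr * b
    a -= net * dt
    b += net * dt

forward, reverse = kf * a, kr * b
print(forward, reverse)  # the two rates are now equal, yet both nonzero:
                         # the reactions have not stopped, they just cancel.
```

At equilibrium [B]/[A] = kf/kr = 2, so about two-thirds of A has been converted, and both reactions continue at the same finite rate.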
Be sure you understand the difference between the rate of a reaction and a rate constant . The latter, usually designated by k , relates the reaction rate to the concentration of one or more of the reaction components — for example, rate = k [A]. At equilibrium the rates of the forward and reverse processes are identical, but the rate constants are generally different. To see how this works, consider the simplified reaction A → B in the following three scenarios.
- k f >> k r
- If the rate constants are greatly different (by many orders of magnitude), then this requires that the equilibrium concentrations of products exceed those of the reactants by the same ratio. Thus the equilibrium composition will lie strongly on the "right"; the reaction can be said to be "complete" or "quantitative".
- k f << k r
- The rates can only be identical (equilibrium achieved) if the concentrations of the products are very small. We describe the resulting equilibrium as strongly favoring the left; very little product is formed. In the most extreme cases, we might even say that "the reaction does not take place".
- k f ≈ k r
- If k f and k r have comparable values (within, say, several orders of magnitude), then significant concentrations of both products and reactants are present at equilibrium; we say that the reaction is "incomplete" and "reversible".
The images shown below offer yet another way of looking at these three cases. The plots show how the relative concentrations of the reactant and product change during the course of the reaction. The plots differ in the assumptions we make about the ratio of k f to k r . The equilibrium composition of the system is illustrated by the proportions of A and B in the horizontal parts of each plot where the composition remains unchanged. In each case, the two rate constants are sufficiently close in magnitude that each reaction can be considered "incomplete".
- In plot (i) the forward rate constant is twice as large as the reverse rate constant, so product (B) is favored, but there is sufficient reverse reaction to maintain a significant quantity of A.
- In (ii) , the forward and reverse rate constants have identical magnitudes. Not surprisingly, the equilibrium values of [A] and [B] are identical as well.
- In (iii) , the reverse rate constant exceeds the forward rate constant, so the equilibrium composition is definitely "on the left".
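For the simple reaction A ⇌ B, these cases follow directly from setting the rates equal: kf[A] = kr[B] gives [B]/[A] = kf/kr, so the fraction of A converted at equilibrium is kf/(kf + kr). The rate-constant values below are made up purely to illustrate the three regimes:

```python
def fraction_product(kf, kr):
    """Equilibrium fraction of A converted to B for A <=> B."""
    return kf / (kf + kr)

print(fraction_product(1e6, 1.0))  # kf >> kr: essentially complete
print(fraction_product(1.0, 1e6))  # kf << kr: essentially no product
print(fraction_product(2.0, 1.0))  # comparable: incomplete, reversible
```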
The Law of Mass Action is thus essentially the statement that the equilibrium composition of a reaction mixture can vary according to the quantities of components that are present. This of course is just what Berthollet observed in his Egyptian salt ponds, but we now understand it to be a consequence of the dynamic nature of chemical equilibrium.
When is a Reaction at Equilibrium?
Clearly, if we observe some change taking place— a change in color, the release of gas bubbles, the appearance of a precipitate, or the release of heat, we know the reaction is not yet at equilibrium. However, the absence of any apparent change does not by itself establish that the reaction is at equilibrium . The equilibrium state is one in which not only does the composition remain unchanged, but in which no energetic tendency for further change is present. Unfortunately, "tendency" is not a property that is directly observable! Consider, for example, the reaction representing the synthesis of water from its elements:
\[2 H_{2(g)} + O_{2(g)} \rightarrow 2 H_2O_{(g)} \label{10.5}\]
You can store the two gaseous reactants in the same container indefinitely without any observable change occurring. But if you create an electrical spark in the container or introduce a flame, bang ! After you pick yourself up off the floor and remove the shrapnel from what's left of your body, you will know very well that the system was not initially at equilibrium! It happens that this particular reaction has a tremendous tendency to take place, but for reasons that we will discuss in a later chapter, nothing can happen until we "set it off" in some way— in this case by exposing the mixture to a flame or spark, or (in a more gentle way) by introducing a platinum wire, which acts as a catalyst .
A reaction of this kind is said to be highly favored thermodynamically, but inhibited kinetically. The similar reaction of hydrogen and iodine
\[H_{2(g)} + I_{2(g)} \rightarrow 2 HI_{(g)} \label{10.6}\]
by contrast is only moderately favored thermodynamically (and is thus incomplete), but its kinetics are both unspectacular and reasonably facile.
Some Simple Tests for the Equilibrium State
- As we explained above in the context of the law of mass action, addition or removal of one component of the reaction will affect the amounts of all the others. For example, if we add more of a reactant, we would expect to see the concentration of a product change. If this does not happen, then it is likely that the reaction is kinetically inhibited and that the system is unable to attain equilibrium.
- It is almost always the case, however, that once a reaction actually starts, it will continue on its own until it reaches equilibrium, so if we can observe the change as it occurs and see it slow down and stop, we can be reasonably certain that the system is in equilibrium. This is by far the chemist's most common criterion.
- There is one other experimental test for equilibrium in a chemical reaction, although it is really only applicable to the kind of reactions we described above as being reversible. As we shall see later, the equilibrium state of a system is always sensitive to the temperature, and often to the pressure, so any change in these variables, however small, will temporarily disrupt the equilibrium, resulting in an observable change in the composition of the system as it moves toward its new equilibrium state.
Summary
Make sure you thoroughly understand the following essential ideas which have been presented above.
- Any reaction that can be represented by a balanced chemical equation can take place, at least in principle. However, there are two important qualifications:
  - The tendency for the change to occur may be so small that the quantity of products formed may be very low, and perhaps negligible. A reaction of this kind is said to be thermodynamically inhibited . The tendency for chemical change is governed solely by the properties of the reactants and products, and can be predicted by applying the laws of thermodynamics.
  - The rate at which the reaction proceeds may be very small, or even zero, in which case we say the reaction is kinetically inhibited . Reaction rates depend on the mechanism of the reaction— that is, on what actually happens to the atoms as reactants are transformed into products. Reaction mechanisms cannot generally be predicted, and must be worked out experimentally. Also, the same reaction may have different mechanisms under different conditions.
- As a chemical change proceeds, the quantities of the components on one side of the reaction equation will decrease, and those on the other side will increase. Eventually the reaction slows down and the composition of the system stops changing. At this point the reaction is in its equilibrium state , and no further change in composition will occur as long as the system is left undisturbed.
- For many reactions, the equilibrium state is one in which components on both sides of the equation (that is, both reactants and products) are present in significant amounts. Such a reaction is said to be incomplete or reversible .
- The equilibrium composition is independent of the direction from which it is approached ; the labeling of substances as "reactants" or "products" is entirely a matter of convenience. (See the hydrogen iodide reaction plots above.)
- The law of mass action states that any chemical change is a competition between a forward reaction (left-to-right in the chemical equation) and a reverse reaction. The rate of each of these processes is governed by the concentrations of the substances reacting; as the reaction proceeds, these rates approach each other and at equilibrium they become identical.
- From the above, it follows that equilibrium is a dynamic process in which microscopic change (the forward and reverse reactions) continues to occur, but macroscopic change (changes in the quantities of substances) is absent.
- When a chemical reaction is at equilibrium, any disturbance of the system, such as a change in temperature, or addition or removal of one of the reaction components, will "shift" the composition to a new equilibrium state. This is the only unambiguous way of verifying that a reaction is at equilibrium. The fact that the composition remains static does not in itself prove that a reaction is at equilibrium, because the change may be kinetically inhibited. | libretexts | 2025-03-17T19:53:14.719295 | 2013-10-03T01:37:54 | {
11.2: Le Chatelier's Principle
Make sure you thoroughly understand the following essential ideas:
- A system in its equilibrium state will remain in that state indefinitely as long as it is undisturbed. If the equilibrium is destroyed by subjecting the system to a change of pressure, temperature, or the number of moles of a substance, then a net reaction will tend to take place that moves the system to a new equilibrium state. Le Chatelier's principle says that this net reaction will occur in a direction that partially offsets the change.
- The Le Chatelier Principle has practical effect only for reactions in which significant quantities of both reactants and products are present at equilibrium— that is, for reactions that are thermodynamically reversible .
- Addition of more product substances to an equilibrium mixture will shift the equilibrium to the left; addition of more reactant substances will shift it to the right. These effects are easily explained in terms of competing forward- and reverse reactions— that is, by the law of mass action .
- If a reaction is exothermic (releases heat), an increase in the temperature will force the equilibrium to the left, causing the system to absorb heat and thus partially offsetting the rise in temperature. The opposite effect occurs for endothermic reactions, which are shifted to the right by rising temperature.
- The effect of pressure on an equilibrium is significant only for reactions which involve different numbers of moles of gases on the two sides of the equation. If the number of moles of gases increases, then an increase in the total pressure will tend to initiate a reverse reaction that consumes some of the products, partially reducing the effect of the pressure increase.
- The classic example of the practical use of the Le Chatelier principle is the Haber-Bosch process for the synthesis of ammonia, in which a balance between low temperature and high pressure must be found.
The previous Module emphasized the dynamic character of equilibrium as expressed by the Law of Mass Action. This law serves as a model explaining how the composition of the equilibrium state is affected by the "active masses" (concentrations) of reactants and products. In this lesson, we develop the consequences of this law to answer the very practical question of how an existing equilibrium composition is affected by the addition or withdrawal of one of the components.
Le Chatelier's Principle
If a reaction is at equilibrium and we alter the conditions so as to create a new equilibrium state , then the composition of the system will tend to change until that new equilibrium state is attained. (We say "tend to change" because if the reaction is kinetically inhibited , the change may be too slow to observe or it may never take place.) In 1884, the French chemical engineer and teacher Henri Le Chatelier (1850-1936) showed that in every such case, the new equilibrium state is one that partially reduces the effect of the change that brought it about.
This law is known to every Chemistry student as the Le Chatelier principle . His original formulation was somewhat complicated, but a reasonably useful paraphrase of it reads as follows:
Le Chatelier principle : If a system at equilibrium is subjected to a change of pressure, temperature, or the number of moles of a component, there will be a tendency for a net reaction in the direction that reduces the effect of this change.
To see how this works (and you must do so, as this is of such fundamental importance that you simply cannot do any meaningful chemistry without a thorough working understanding of this principle), look again at the hydrogen iodide dissociation reaction
\[2 HI \rightleftharpoons H_2 + I_2\]
Consider an arbitrary mixture of these three components at equilibrium, and assume that we inject more hydrogen gas into the container. Because the H 2 concentration now exceeds its new equilibrium value, the system is no longer in its equilibrium state, so a net reaction now ensues as the system moves to the new state.
The Le Chatelier principle states that the net reaction will be in a direction that tends to reduce the effect of the added H 2 . This can occur if some of the H 2 is consumed by reacting with I 2 to form more HI; in other words, a net reaction occurs in the reverse direction. Chemists usually simply say that "the equilibrium shifts to the left".
To get a better idea of how this works, carefully examine the diagram below which follows the concentrations of the three components of this reaction as they might change in time (the time scale here will typically be about an hour):
Disruption and restoration of equilibrium. At the left, the concentrations of the three components do not change with time because the system is at equilibrium. We then add more hydrogen to the system, disrupting the equilibrium. A net reaction then ensues that moves the system to a new equilibrium state (right) in which the quantity of hydrogen iodide has increased; in the process, some of the I 2 and H 2 are consumed. Notice that the new equilibrium state contains more hydrogen than did the initial state, but not as much as was added; as the Le Chatelier principle predicts, the change we made (addition of H 2 ) has been partially counteracted by the "shift to the right".

Table \(\PageIndex{1}\) contains several examples showing how changing the quantity of a reaction component can shift an established equilibrium.
| reaction | change | result |
|---|---|---|
| CO 2 + H 2 → H 2 O (g) + CO | a drying agent is added to absorb H 2 O | Shift to the right . Continuous removal of a product will force any reaction to the right |
| H 2 (g) + I 2 (g) → 2HI (g) | Some nitrogen gas is added | No change ; N 2 is not a component of this reaction system. |
| 2 NaCl (s) + H 2 SO 4 (l) → Na 2 SO 4 (s) + 2 HCl (g) | reaction is carried out in an open container | Because HCl is a gas that can escape from the system, the reaction is forced to the right . This is the basis for the commercial production of hydrochloric acid. |
| H 2 O (l) → H 2 O (g) | water evaporates from an open container | Continuous removal of water vapor forces the reaction to the right , so equilibrium is never achieved. |
| HCN (aq) → H + (aq) + CN – (aq) | the solution is diluted | Shift to right ; the product [H + ][CN – ] diminishes more rapidly than does [HCN]. |
| AgCl (s) → Ag + (aq) + Cl – (aq) | some NaCl is added to the solution | Shift to left due to increase in Cl – concentration. This is known as the common ion effect on solubility. |
| N 2 + 3 H 2 → 2 NH 3 | a catalyst is added to speed up this reaction | No change . Catalysts affect only the rate of a reaction; they have no effect at all on the composition of the equilibrium state. |
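The "disruption and restoration" sequence described above can be sketched numerically. Using hypothetical, purely illustrative rate constants for H 2 + I 2 ⇌ 2 HI, we equilibrate the system, inject extra hydrogen, and let it relax again; as Le Chatelier predicts, the added H 2 is partially, but not completely, consumed:

```python
def relax(h2, i2, hi, kf=1.0, kr=0.2, dt=0.001, steps=200_000):
    """Euler integration of H2 + I2 <=> 2 HI to (near) equilibrium."""
    for _ in range(steps):
        net = kf * h2 * i2 - kr * hi * hi
        h2, i2, hi = h2 - net * dt, i2 - net * dt, hi + 2 * net * dt
    return h2, i2, hi

h2, i2, hi = relax(1.0, 1.0, 0.0)   # initial equilibrium state
h2_after_add = h2 + 0.5             # disturbance: inject 0.5 mol/L of H2
h2b, i2b, hib = relax(h2_after_add, i2, hi)

print(hib > hi)                 # True: the equilibrium "shifted to the right"
print(h2 < h2b < h2_after_add)  # True: the added H2 was only partially consumed
```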
How do changes in temperature affect Equilibria?
Virtually all chemical reactions are accompanied by the liberation or uptake of heat. If we regard heat as a "reactant" or "product" in an endothermic or exothermic reaction respectively, we can use the Le Chatelier principle to predict the direction in which an increase or decrease in temperature will shift the equilibrium state. Thus for the oxidation of nitrogen, an endothermic process, we can write
\[\text{[heat]} + N_2 + O_2 \rightleftharpoons 2 NO\]
Suppose this reaction is at equilibrium at some temperature \(T_1\) and we raise the temperature to \(T_2\). The Le Chatelier principle tells us that a net reaction will occur in the direction that will partially counteract this change. Since the reaction is endothermic, a shift of the equilibrium to the right will take place.
Nitric oxide, the product of this reaction, is a major air pollutant which initiates a sequence of steps leading to the formation of atmospheric smog. Its formation is an unwanted side reaction which occurs when the air (which is introduced into the combustion chamber of an engine to supply oxygen) gets heated to a high temperature. Designers of internal combustion engines now try, by various means, to limit the temperature in the combustion region, or to restrict its highest-temperature part to a small volume within the combustion chamber.
How do changes in pressure affect equilibria?
You will recall that if the pressure of a gas is reduced, its volume will increase; pressure and volume are inversely proportional. With this in mind, suppose that the reaction
\[2 NO_{2(g)} \rightleftharpoons N_2O_{4(g)}\]
is in equilibrium at some arbitrary temperature and pressure, and that we double the pressure, perhaps by compressing the mixture to a smaller volume. From the Le Chatelier principle we know that the equilibrium state will change to one that tends to counteract the increase in pressure. This can occur if some of the NO 2 reacts to form more of the dinitrogen tetroxide, since two moles of gas are being removed from the system for every mole of N 2 O 4 formed, thereby decreasing the total volume of the system. Thus increasing the pressure will shift this equilibrium to the right.
For this reaction, the change in the moles of gas is \[Δn_g = n_{products} - n_{reactants} = 1 - 2 = -1\]
In the case of the nitrogen oxidation reaction
\[N_2 + O_2 \rightleftharpoons 2 NO\]
\(Δn_g = 0\), and changing the pressure will have no effect on the equilibrium.
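This bookkeeping reduces to counting the moles of gas on each side and inspecting the sign of \(Δn_g\). A minimal sketch (the function name is made up for illustration):

```python
def pressure_effect(n_gas_reactants, n_gas_products):
    """Predict how raising the total pressure shifts a gas-phase equilibrium."""
    dn = n_gas_products - n_gas_reactants
    if dn < 0:
        return "shifts right"   # fewer moles of gas on the product side
    if dn > 0:
        return "shifts left"
    return "no effect"

print(pressure_effect(2, 1))  # 2 NO2 <=> N2O4: shifts right
print(pressure_effect(2, 2))  # N2 + O2 <=> 2 NO: no effect
```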
The volumes of solids and liquids are hardly affected by the pressure at all, so for reactions that do not involve gaseous substances, the effects of pressure changes are ordinarily negligible. Exceptions arise under conditions of very high pressure such as exist in the interior of the Earth or near the bottom of the ocean. A good example is the dissolution of calcium carbonate
\[CaCO_{3(s)} \rightleftharpoons Ca^{2+} + CO_3^{2–}\]
There is a slight decrease in the volume when this reaction takes place, so an increase in the pressure will shift the equilibrium to the right, with the results that calcium carbonate becomes more soluble at higher pressures.
The skeletons of several varieties of microscopic organisms that inhabit the top of the ocean are made of CaCO 3 , so there is a continual rain of this substance toward the bottom of the ocean as these organisms die. As a consequence, the floor of the Atlantic ocean is covered with a blanket of calcium carbonate. This is not true for the Pacific ocean, which is deeper; once the skeletons fall below a certain depth, the higher pressure causes them to dissolve. Some of the seamounts (undersea mountains) in the Pacific extend above the solubility boundary so that their upper parts are covered with CaCO 3 sediments.
The effect of pressure on a reaction involving substances whose boiling points fall within the range of commonly encountered temperatures will be sensitive to the states of these substances at the temperature of interest. For reactions involving gases, only changes in the partial pressures of those gases directly involved in the reaction are important; the presence of other gases has no effect.
The commercial production of hydrogen starts from natural gas and steam, treated at high temperatures in the presence of a catalyst (“steam reforming of methane”). The following example considers a related reaction of methane with steam to yield methanol and hydrogen, whose components conveniently span a range of boiling points:

\[CH_4 + H_2O \rightleftharpoons CH_3OH + H_2 \nonumber\]
Given the following boiling points: CH 4 (methane) = –161°C, H 2 O = 100°C, CH 3 OH = 65°C, H 2 = –253°C, predict the effects of an increase in the total pressure on this equilibrium at 50°, 75° and 120°C.
Solution
Calculate the change in the moles of gas for each process:
| temp | equation | \(Δn_g\) | shift |
|---|---|---|---|
| 50° | CH 4 (g) + H 2 O (l) → CH 3 OH (l) + H 2 (g) | 0 | none |
| 75° | CH 4 (g) + H 2 O (l) → CH 3 OH (g) + H 2 (g) | +1 | to left |
| 120° | CH 4 (g) + H 2 O (g) → CH 3 OH (g) + H 2 (g) | 0 | none |
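The reasoning behind this table can be checked with a short script. This is an illustrative sketch, not from the text; the boiling points are the ones given in the problem, and the helper name is invented:

```python
# Sketch: count gaseous species on each side of
#   CH4 + H2O <=> CH3OH + H2
# at several temperatures, using the boiling points from the problem.
bp_c = {"CH4": -161, "H2O": 100, "CH3OH": 65, "H2": -253}  # degrees C

def n_gas(species, temp_c):
    """Number of the listed species that are gases at temp_c."""
    return sum(1 for s in species if temp_c > bp_c[s])

for T in (50, 75, 120):
    dn_g = n_gas(["CH3OH", "H2"], T) - n_gas(["CH4", "H2O"], T)
    shift = {1: "to left", 0: "none"}.get(dn_g, "to right")
    print(f"{T} C: dn_g = {dn_g:+d}, shift on pressure increase: {shift}")
```

A species is treated as a gas whenever the temperature exceeds its boiling point, which is exactly the judgment the worked solution makes by hand.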
The Haber Process and Why It Is Important
The Haber process for the synthesis of ammonia is based on the exothermic reaction
\[N_{2(g)} + 3 H_{2(g)} \rightarrow 2 NH_{3(g)} \;\;\; ΔH = –92\; kJ/mol \nonumber\]
The Le Chatelier principle tells us that in order to maximize the amount of product in the reaction mixture, it should be carried out at high pressure and low temperature. However, the lower the temperature, the slower the reaction (this is true of virtually all chemical reactions.) As long as the choice had to be made between a low yield of ammonia quickly or a high yield over a long period of time, this reaction was infeasible economically.
Nitrogen is available for free, being the major component of air, but the strong triple bond in N 2 makes it extremely difficult to incorporate this element into species such as NO 3 – and NH 4 + which serve as the starting points for the wide variety of nitrogen-containing compounds that are essential for modern industry. This conversion is known as nitrogen fixation , and because nitrogen is an essential plant nutrient, modern intensive agriculture is utterly dependent on huge amounts of fixed nitrogen in the form of fertilizer. Until around 1900, the major source of fixed nitrogen was the NaNO 3 found in extensive deposits in South America. Several chemical processes for obtaining nitrogen compounds were developed in the early 1900's, but they proved too inefficient to meet the increasing demand.
Although the direct synthesis of ammonia from its elements had been known for some time, the yield of product was found to be negligible. In 1905, Fritz Haber (1868-1934) began to study this reaction, employing the thinking initiated by Le Chatelier and others, and the newly-developing field of thermodynamics that served as the basis of these principles. From the Le Chatelier principle alone, it is apparent that this exothermic reaction is favored by low temperature and high pressure. However, it was not as simple as that: the rate of any reaction increases with the temperature, so working with temperature alone, one has the choice between a high product yield achieved only very slowly, or a very low yield quickly. Further, the equipment, and the high-strength alloy steels needed to build it, did not exist at the time. Haber solved the first problem by developing a catalyst that would greatly speed up the reaction at lower temperatures.
The second problem, and the development of an efficient way of producing hydrogen, would delay the practical implementation of the process until 1913, when the first plant based on the Haber-Bosch process (as it is more properly known, Carl Bosch being the person who solved the major engineering problems) came into operation. The timing could not have been better for Germany, since this country was about to enter the First World War, and the Allies had established a naval blockade of South America, cutting off the supply of nitrate for the German munitions industry.
Bosch's plant operated the ammonia reactor at 200 atm and 550°C. Later, when stronger alloy steels had been developed, pressures of 800-1000 atm became common. The source of hydrogen in modern plants is usually natural gas, which is mostly methane:
| CH 4 + H 2 O → CO + 3 H 2 | formation of synthesis gas from methane |
| CO + H 2 O → CO 2 + H 2 | shift reaction carried out in reformer |
The Haber-Bosch process is considered the most important chemical synthesis developed in the 20th century. Besides its scientific importance as the first large-scale application of the laws of chemical equilibrium, it has had tremendous economic and social impact; without an inexpensive source of fixed nitrogen, the intensive crop production required to feed the world's growing population would have been impossible. Haber was awarded the 1918 Nobel Prize in Chemistry in recognition of his work. Carl Bosch, who improved the process, won the Nobel Prize in 1931.
The Le Chatelier Principle in Physiology
Many of the chemical reactions that occur in living organisms are regulated through the Le Chatelier principle.
Oxygen transport by the blood
Few of these are more important to warm-blooded organisms than those that relate to aerobic respiration, in which oxygen is transported to the cells where it is combined with glucose and metabolized to carbon dioxide, which then moves back to the lungs from which it is expelled.
\[\text{hemoglobin} + O_2 \rightleftharpoons \text{oxyhemoglobin} \nonumber\]
The partial pressure of O 2 in the air is 0.2 atm, sufficient to allow these molecules to be taken up by hemoglobin (the red pigment of blood) in which it becomes loosely bound in a complex known as oxyhemoglobin. At the ends of the capillaries which deliver the blood to the tissues, the O 2 concentration is reduced by about 50% owing to its consumption by the cells. This shifts the equilibrium to the left, releasing the oxygen so it can diffuse into the cells.
Maintenance of blood pH
Carbon dioxide reacts with water to form a weak acid H 2 CO 3 which would cause the blood pH to fall to dangerous levels if it were not promptly removed as it is excreted by the cells. This is accomplished by combining it with carbonate ion through the reaction
\[H_2CO_3 + CO_3^{2–} \rightleftharpoons 2 HCO_3^– \nonumber\]
which is forced to the right by the high local CO 2 concentration within the tissues. Once the hydrogen carbonate (bicarbonate) ions reach the lung tissues where the CO 2 partial pressure is much smaller, the reaction reverses and the CO 2 is expelled.
Carbon monoxide poisoning
Carbon monoxide, a product of incomplete combustion that is present in automotive exhaust and cigarette smoke, binds to hemoglobin 200 times more tightly than does O 2 . This blocks the uptake and transport of oxygen by setting up a competing equilibrium
\[O_2\text{-hemoglobin} \rightleftharpoons \text{hemoglobin} \rightleftharpoons CO\text{-hemoglobin} \nonumber\]
Air that contains as little as 0.1 percent carbon monoxide can tie up about half of the hemoglobin binding sites, reducing the amount of O 2 reaching the tissues to fatal levels. Carbon monoxide poisoning is treated by administration of pure O 2 which promotes the shift of the above equilibrium to the left. This can be made even more effective by placing the victim in a hyperbaric chamber in which the pressure of O 2 can be made greater than 1 atm. | libretexts | 2025-03-17T19:53:14.821900 | 2013-10-03T01:37:55 | {
11.3: Reaction Quotient
Make sure you thoroughly understand the following essential ideas:
- When arbitrary quantities of the different components of a chemical reaction system are combined, the overall system composition will not likely correspond to the equilibrium composition. As a result, a net change in composition ("a shift to the right or left") will tend to take place until the equilibrium state is attained.
- The status of the reaction system in regard to its equilibrium state is characterized by the value of the equilibrium expression whose formulation is defined by the coefficients in the balanced reaction equation; it may be expressed in terms of concentrations, or in the case of gaseous components, as partial pressures.
- The various terms in the equilibrium expression can have any arbitrary value (including zero); the value of the equilibrium expression itself is called the reaction quotient Q.
- If the concentration or pressure terms in the equilibrium expression correspond to the equilibrium state of the system, then Q has the special value K , which we call the equilibrium constant .
- The ratio of Q/K (whether it is 1, >1 or <1) thus serves as an index of how far the system is from its equilibrium composition, and its value indicates the direction in which the net reaction must proceed in order to reach its equilibrium state.
- When Q = K , then the equilibrium state has been reached, and no further net change in composition will take place as long as the system remains undisturbed.
Consider a simple reaction such as the gas-phase synthesis of hydrogen iodide from its elements: \[H_2 + I_2 \rightarrow 2 HI\] Suppose you combine arbitrary quantities of \(H_2\), \(I_2\) and \(HI\). Will the reaction create more HI, or will some of the HI be consumed as the system moves toward its equilibrium state? The concept of the reaction quotient , which is the focus of this short lesson, makes it easy to predict what will happen.
What is the Equilibrium Quotient?
In the previous section we defined the equilibrium expression for the reaction

\[H_2 + I_2 \rightleftharpoons 2 HI \nonumber\]

as

\[Q = \dfrac{[HI]^2}{[H_2][I_2]} \nonumber\]

In the general case in which the concentrations can have any arbitrary values (including zero), this expression is called the reaction quotient (the term equilibrium quotient is also commonly used) and its value is denoted by \(Q\) (or \(Q_c\) or \(Q_p\) if we wish to emphasize that the terms represent molar concentrations or partial pressures). If the terms correspond to equilibrium concentrations, then the above expression is called the equilibrium constant and its value is denoted by \(K\) (or \(K_c\) or \(K_p\)).
\(K\) is thus the special value that \(Q\) has when the reaction is at equilibrium
The value of Q in relation to K serves as an index of how the composition of the reaction system compares to that of the equilibrium state, and thus it indicates the direction in which any net reaction must proceed. For example, if we combine the two reactants A and B of a hypothetical reaction A + B → C + D at concentrations of 1 mol L –1 each, the value of Q will be 0÷1 = 0. The only possible change is the conversion of some of these reactants into products. If instead our mixture consists only of the two products C and D , Q will be indeterminately large (1÷0) and the only possible change will be in the reverse direction.
It is easy to see (by simple application of the Le Chatelier principle) that the ratio of Q/K immediately tells us whether, and in which direction, a net reaction will occur as the system moves toward its equilibrium state. A schematic view of this relationship is shown below:
| Condition | Status of System |
|---|---|
| Q > K | Product concentration too high for equilibrium; net reaction proceeds to left . |
| Q = K | System is at equilibrium; no net change will occur. |
| Q < K | Product concentration too low for equilibrium; net reaction proceeds to right . |
It is very important that you be able to work out these relations for yourself, not by memorizing them, but from the definitions of \(Q\) and \(K\).
The equilibrium constant for the oxidation of sulfur dioxide is K p = 0.14 at 900 K.
\[\ce{2 SO_2(g) + O_2(g) \rightleftharpoons 2 SO_3(g)} \nonumber\]
If a reaction vessel is filled with SO 3 at a partial pressure of 0.10 atm and with O 2 and SO 2 each at a partial pressure of 0.20 atm, what can you conclude about whether, and in which direction, any net change in composition will take place?
Solution:
The value of the equilibrium quotient Q for the initial conditions is
\[ Q= \dfrac{p_{SO_3}^2}{p_{O_2}p_{SO_2}^2} = \dfrac{(0.10\; atm)^2}{(0.20 \;atm) (0.20 \; atm)^2} = 1.25\; atm^{-1} \nonumber\]
Since Q > K , the reaction is not at equilibrium, so a net change will occur in a direction that decreases Q . This can only occur if some of the SO 3 is converted back into SO 2 and O 2 . In other words, the reaction will "shift to the left".
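The same comparison can be scripted. This sketch (not part of the text; the function name is my own) reproduces the arithmetic of the example:

```python
# Sketch: compute Q for 2 SO2 + O2 <=> 2 SO3 and compare it with K
# to decide the direction of net change.
def direction(Q, K):
    if Q > K:
        return "net reaction to the left"
    if Q < K:
        return "net reaction to the right"
    return "at equilibrium"

K_p = 0.14                                  # at 900 K
p_SO3, p_SO2, p_O2 = 0.10, 0.20, 0.20       # atm
Q = p_SO3**2 / (p_O2 * p_SO2**2)

print(round(Q, 2))        # 1.25
print(direction(Q, K_p))  # net reaction to the left
```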
The formal definitions of Q and K are quite simple, but they are of limited usefulness unless you are able to relate them to real chemical situations. The following diagrams illustrate the relation between Q and K from various standpoints. Take some time to study each one carefully, making sure that you are able to relate the description to the illustration.
For the reaction
\[N_2O_{4(g)} \rightleftharpoons 2 NO_{2(g)} \nonumber\]
K c = 0.0059 at 298 K.
This equilibrium condition is represented by the red curve that passes through all points on the graph that satisfy the requirement that
\[Q = \dfrac{[NO_2]^2}{ [N_2O_4]} = 0.0059 \nonumber\]
There are of course an infinite number of possible Q 's of this system within the concentration boundaries shown on the plot. Only those points that fall on the red line correspond to equilibrium states of this system (those for which \(Q = K_c\)). The line itself is a plot of [NO 2 ] that we obtain by rearranging the equilibrium expression
\[[NO_2] = \sqrt{[N_2O_4]K_c} \nonumber\]
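A few points on that equilibrium line can be generated directly from the rearranged expression (an illustrative sketch, not part of the text; the concentrations chosen are arbitrary):

```python
# Sketch: points on the equilibrium line for N2O4 <=> 2 NO2 at 298 K.
# Each ([N2O4], [NO2]) pair generated here satisfies Q = Kc.
import math

K_c = 0.0059
for n2o4 in (0.01, 0.05, 0.10):              # mol/L, arbitrary choices
    no2 = math.sqrt(K_c * n2o4)              # [NO2] = sqrt(Kc [N2O4])
    assert math.isclose(no2**2 / n2o4, K_c)  # point lies on the line
    print(f"[N2O4] = {n2o4:.2f} M  ->  [NO2] = {no2:.4f} M")
```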
If the system is initially in a non-equilibrium state, its composition will tend to change in a direction that moves it to one that is on the line. Two such non-equilibrium states are shown. The first has \(Q > K\), so we would expect a net reaction that reduces Q by converting some of the NO 2 into N 2 O 4 ; in other words, the equilibrium "shifts to the left". In the second state, \(Q < K\), indicating that the forward reaction will occur.
The blue arrows in the above diagram indicate the successive values that Q assumes as the reaction moves closer to equilibrium. The slope of the line reflects the stoichiometry of the equation. In this case, one mole of reactant yields two moles of products, so the slopes have an absolute value of 2:1.
One of the simplest equilibria we can write is that between a solid and its vapor. In this case, the equilibrium constant is just the vapor pressure of the solid. Thus for the process
\[I_{2(s)} \rightleftharpoons I_{2(g)} \nonumber\]
all possible equilibrium states of the system lie on the horizontal red line; the equilibrium vapor pressure is independent of the quantity of solid present (as long as there is at least enough to supply the relatively tiny quantity of vapor).
So adding various amounts of the solid to an empty closed vessel (the two states shown) causes a gradual buildup of iodine vapor. Because the equilibrium pressure of the vapor is so small, the amount of solid consumed in the process is negligible, so the arrows go straight up and all lead to the same equilibrium vapor pressure.
The decomposition of ammonium chloride is a common example of a heterogeneous (two-phase) equilibrium. Solid ammonium chloride has a substantial vapor pressure even at room temperature:
\[NH_4Cl_{(s)} \rightleftharpoons NH_{3(g)} + HCl_{(g)}\]
One arrow traces the states the system passes through when solid NH 4 Cl is placed in a closed container. A second arrow represents the addition of ammonia to the equilibrium mixture; the system responds by following a path back to a new equilibrium state which, as the Le Chatelier principle predicts, contains a smaller quantity of ammonia than was added. The unit slopes of these paths reflect the 1:1 stoichiometry of the gaseous products of the reaction.
11.4: Equilibrium Expressions
Make sure you thoroughly understand the following essential ideas:
- The equilibrium quotient Q is the value of the equilibrium expression of a reaction for any arbitrary set of concentrations or partial pressures of the reaction components.
- The equilibrium constant K is the value of Q when the reaction is at equilibrium. K has a unique value for a given reaction at a fixed temperature and pressure.
- Q and K can be expressed in terms of concentrations, partial pressures, or, when appropriate, in some combination of these.
- For a reaction in which all the components are gases, Q c and K c will have different values except in the special case in which the total number of moles of gas does not change.
- Concentration terms for substances whose concentrations do not change in the reaction do not appear in equilibrium expressions. The most common examples are [H 2 O] when the reaction takes place in aqueous solution (so that [H 2 O] is effectively constant at 55.6 M ), and in heterogeneous reactions involving solids, in which the concentration of the solid is determined by the density of the solid itself.
- A reaction whose equilibrium constant is in the range of about 0.01 to 100 is said to be incomplete or [thermodynamically] reversible .
- Q and K are conventionally treated as dimensionless quantities, and need not ordinarily have units associated with them.
- Heterogeneous reactions are those in which two or more phases are involved; homogeneous reactions take place in a single phase. A common type of heterogeneous reaction is the loss of water of crystallization by a solid hydrate such as CuSO 4 ·5H 2 O.
- The equilibrium expression can be manipulated and combined in the following ways:
- If the reaction is written in reverse, Q becomes Q –1 ;
- If the coefficients of an equation are multiplied by n , Q becomes Q n ;
- Q for the sum of two reactions (that is, for two reactions that take place in sequence) is the product ( Q 1 )( Q 2 ).
You know that an equilibrium constant expression looks something like K = [products] / [reactants]. But how do you translate this into a format that relates to the actual chemical system you are interested in? This lesson will show you how to write the equilibrium constant expressions that you will need to use when dealing with the equilibrium calculation problems in the chapter that follows this one.
Pressures can Express Concentrations
Although we commonly write equilibrium quotients and equilibrium constants in terms of molar concentrations, any concentration-like term can be used, including mole fraction and molality. Sometimes the symbols \(K_c\) , \(K_x\) , and \(K_m\) are used to denote these forms of the equilibrium constant. Bear in mind that the numerical values of K ’s and Q ’s expressed in these different ways will not generally be the same.
Most of the equilibria we deal with in this course occur in liquid solutions and gaseous mixtures. We can express \(K_c\) values in terms of moles per liter for both, but when dealing with gases it is often more convenient to use partial pressures. These two measures of concentration are of course directly proportional:
\[c=\dfrac{n}{V}=\dfrac{\dfrac{PV}{RT}}{V}=\dfrac{P}{RT} \label{Eq1}\]
so for a reaction \(A_{(g)} \rightarrow B_{(g)}\) we can write the equilibrium constant as
\[K_p =\dfrac{P_B}{P_A} \label{Eq2}\]
Owing to interactions between molecules, especially when ions are involved, all of these forms of the equilibrium constant are only approximately correct, working best at low concentrations or pressures. The only equilibrium constant that is truly “constant” (except that it still varies with the temperature!) is expressed in terms of activities , which you can think of as “effective concentrations” that allow for interactions between molecules. In practice, this distinction only becomes important for equilibria involving gases at very high pressures (such as are often encountered in chemical engineering) and in ionic solutions more concentrated than about 0.001 M . We will not deal much with activities in this course.
For a reaction such as
\[CO_{2\, (g)} + OH^–_{(aq)} \rightleftharpoons HCO_{3\, (aq)}^- \label{Eq3}\]
that involves both gaseous and dissolved components, a “hybrid” equilibrium constant is commonly used:
\[ K =\dfrac{[HCO_3^-]}{P_{CO_2}[OH^-]} \label{Eq4}\]
Clearly, it is essential to be sure of the units when you see an equilibrium constant represented simply by "\(K\)".
In this lesson (and in most of the others in this set,) we express concentrations in mol L –1 and pressures in atmospheres. Although this reflects common usage among chemists (older ones, especially!), these units are not part of the SI system which has been the international standard since the latter part of the 20th Century.
Molar concentrations are now more properly expressed in mol dm –3 and the "standard atmosphere " corresponds to a pressure of 101.325 kPa. Until 1990, 1 atm was the "standard pressure" employed in calculations involving the gas laws, and also in thermodynamics. Since that date, "standard pressure" has been 100.000 kPa, also expressed as 1 bar.
For most practical purposes, the differences between these values are so small that they can be neglected.
Converting between \(K_p\) and \(K_c\)
It is sometimes necessary to convert between equilibrium constants expressed in different units. The most common case involves pressure- and concentration-based equilibrium constants. The ideal gas law relates the partial pressure of a gas to the number of moles and its volume (note that when V is expressed in liters and P in atmospheres, R must have the value 0.08206 L·atm mol –1 K –1 ):
\[PV = nRT \]
Concentrations are expressed in moles/unit volume n/V , so by rearranging the above equation we obtain the explicit relation of pressure to concentration:
\[P = \left(\dfrac{n}{V} \right)RT \label{Eq5}\]
Conversely,
\[c = \dfrac{n}{V} = \dfrac{P}{RT}\]
so a concentration of a gas [A] can be expressed as \(\dfrac{P_A}{RT}\).
For a reaction of the form \(A + 3 B\rightleftharpoons 2C\) in which all of the components are gases, we can write

\[K_p = \dfrac{P_C^2}{P_A P_B^3} = \dfrac{[C]^2(RT)^2}{[A](RT)\,[B]^3(RT)^3} = K_c (RT)^{–2} \nonumber\]

In the general case, if the difference

\[(\text{moles of gas in products}) – (\text{moles of gas in reactants}) = \Delta{n_g} \nonumber\]

then the two equilibrium constants are related by

\[ \color{red} {K_p = K_c (RT)^{\Delta{n_g}} \label{Eq6}}\]
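This conversion translates directly into code. A sketch (not from the text), with R in L·atm/(mol·K) so that pressures come out in atmospheres:

```python
# Sketch: convert between Kc and Kp using Kp = Kc * (RT)**dn_g.
R = 0.08206  # L*atm/(mol*K)

def kp_from_kc(K_c, T_kelvin, dn_g):
    return K_c * (R * T_kelvin) ** dn_g

# Ammonia synthesis N2 + 3 H2 <=> 2 NH3 has dn_g = 2 - 4 = -2,
# so Kp is smaller than Kc whenever RT > 1 (all ordinary temperatures):
print(kp_from_kc(1.0, 298, -2))   # approximately 1.67e-3
```

When Δn<sub>g</sub> = 0 the factor (RT)<sup>0</sup> is unity and K<sub>p</sub> = K<sub>c</sub>, matching the special case noted in the summary at the top of this section.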
Do not show unchanging concentrations!
Substances whose concentrations undergo no significant change in a chemical reaction do not appear in equilibrium constant expressions. How can the concentration of a reactant or product not change when a reaction involving that substance takes place? There are two general cases to consider.
The substance is also the Solvent
This happens all the time in acid-base chemistry. Thus for the hydrolysis of the cyanide ion
\[\ce{CN^{-} + H2O <=> HCN + OH^{–}} \nonumber\]
we write
\[K_c= \dfrac{[\ce{HCN}][\ce{OH^-}]}{[\ce{CN^-}]} \nonumber\]
in which no \([H_2O]\) term appears. The justification for this omission is that water is both the solvent and reactant, but only the tiny portion that acts as a reactant would ordinarily go in the equilibrium expression. The amount of water consumed in the reaction is so minute (because \(K\) is very small) that any change in the concentration of \(H_2O\) from that of pure water (55.6 mol L –1 ) will be negligible.
Similarly, for the "autodissociation" of water
\[H_2O \rightleftharpoons H^+ + OH^– \nonumber\]
the equilibrium constant is expressed as the "ion product"
\[K_w = [H^+][OH^–] \nonumber\]
Be careful about throwing away H 2 O whenever you see it. In the esterification reaction
\[\ce{CH3COOH + C2H5OH <=> CH3COOC2H5 + H2O} \nonumber\]
that we discussed in a previous section, a [H 2 O] term must be present in the equilibrium expression if the reaction is assumed to be between the two liquids acetic acid and ethanol. If, on the other hand, the reaction takes place between a dilute aqueous solution of the acid and the alcohol, then the [H 2 O] term would not be included.
The substance is a solid or a pure liquid phase
This is most frequently seen in solubility equilibria, but there are many other reactions in which solids are directly involved:
\[CaF_{2\, (s)} \rightarrow Ca^{2+}_{(aq)} + 2F^-_{(aq)} \label{Eq11}\]
\[Fe_3O_4(s) + 4 H_{2\, (g)} \rightarrow 4 H_2O_{(g)} + 3Fe_{(s)} \label{Eq12}\]
These are heterogeneous reactions (meaning reactions in which some components are in different phases), and the argument here is that concentration is only meaningful when applied to a substance within a single phase.
Thus the term \([CaF_2]\) would refer to the “concentration of calcium fluoride within the solid \(CaF_2\)", which is a constant depending on the molar mass of \(CaF_2\) and the density of that solid. The concentrations of the two ions will be independent of the quantity of solid \(CaF_2\) in contact with the water; in other words, the system can be in equilibrium as long as any \(CaF_2\) at all is present. Throwing out the constant-concentration terms can lead to some rather sparse-looking equilibrium expressions. For example, the equilibrium expression for each of the processes shown in the following table consists solely of a single term involving the partial pressure of a gas:
| Process | Equilibrium Expression | Remarks |
|---|---|---|
| \(CaCO_{3(s)} \rightleftharpoons CaO_{(s)} + CO_{2(g)}\) | \(K_p = P_{CO_2}\) | Thermal decomposition of limestone, a first step in the manufacture of cement. |
| \(Na_2SO_4 \cdot 10 H_2O_{(s)} \rightleftharpoons Na_2SO_{4(s)} + 10 H_2O_{(g)}\) | \(K_p = P_{H_2O}^{10}\) | Sodium sulfate decahydrate is a solid in which H2O molecules (“waters of hydration”) are incorporated into the crystal structure. |
| \(I_{2(s)} \rightleftharpoons I_{2(g)}\) | \(K_p = P_{I_2}\) | Sublimation of solid iodine; this is the source of the purple vapor you can see above solid iodine in a closed container. |
| \(H_2O_{(l)} \rightleftharpoons H_2O_{(g)}\) | \(K_p = P_{H_2O}\) | Vaporization of water. When the partial pressure of water vapor in the air is equal to K, the relative humidity is 100%. |
The last two processes represent changes of state ( phase changes ) which can be treated exactly the same as chemical reactions. In each of the heterogeneous processes shown in Table \(\PageIndex{1}\), the reactants and products can be in equilibrium (that is, permanently coexist) only when the partial pressure of the gaseous product has the value consistent with the indicated \(K_p\). Bear in mind also that these \(K_p\)'s all increase with the temperature.
What are the values of \(K_p\) for the equilibrium between liquid water and its vapor at 25°C, 100°C, and 120°C? The vapor pressure of water at these three temperatures is 23.8 torr, 760 torr (1 atm), and 1489 torr, respectively.
Comment: These vapor pressures are the partial pressures of water vapor in equilibrium with the liquid, so they are identical with the \(K_p\)'s when expressed in units of atmospheres.

Solution

| temperature | vapor pressure | \(K_p\) (atm) |
|---|---|---|
| 25°C | 23.8 torr | 23.8/760 = 0.031 |
| 100°C | 760 torr | 760/760 = 1.00 |
| 120°C | 1489 torr | 1489/760 = 1.96 |
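The torr-to-atmosphere conversion is one line of code. A sketch (not from the text), using 1 atm = 760 torr:

```python
# Sketch: Kp for H2O(l) <=> H2O(g) is the equilibrium vapor pressure
# expressed in atmospheres.
vapor_torr = {25: 23.8, 100: 760.0, 120: 1489.0}   # deg C -> torr

for T, p in vapor_torr.items():
    print(f"{T} C: Kp = {p / 760:.3f} atm")
```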
Values of Equilibrium Constants
A reaction whose equilibrium constant lies within the range of roughly 0.01 to 100 proceeds to a significant but partial extent; such a reaction is said to be “ incomplete ” or “ reversible ”.
As an equilibrium constant approaches the limits of zero or infinity, the reaction can be increasingly characterized as a one-way process; we say it is “ complete ” or “ irreversible ”. The latter term must of course not be taken literally; the Le Chatelier principle still applies (especially insofar as temperature is concerned), but addition or removal of reactants or products will have less effect.
Kinetically Hindered Reactions
Although it is by no means a general rule, it frequently happens that reactions having very large equilibrium constants are kinetically hindered , often to the extent that the reaction essentially does not take place.
The examples in the following table are intended to show that numbers (values of K ), no matter how dull they may look, do have practical consequences!
| Reaction | K | Remarks |
|---|---|---|
| \(N_{2(g)} + O_{2(g)} \rightleftharpoons 2 NO_{(g)}\) | \(5 \times 10^{–31}\) at 25°C, 0.0013 at 2100°C | These two very different values of K illustrate very nicely why reducing combustion-chamber temperatures in automobile engines is environmentally friendly. |
| \(3 H_{2(g)} + N_{2(g)} \rightleftharpoons 2 NH_{3(g)}\) | \(7 \times 10^5\) at 25°C, 56 at 1300°C | See the discussion of this reaction in the section on the Haber process. |
| \(H_{2(g)} \rightleftharpoons 2 H_{(g)}\) | \(10^{–36}\) at 25°C, \(6 \times 10^{–5}\) at 5000° | Dissociation of any stable molecule into its atoms is endothermic. This means that all molecules will decompose at sufficiently high temperatures. |
| \(H_2O_{(g)} \rightleftharpoons H_{2(g)} + ½ O_{2(g)}\) | \(8 \times 10^{–41}\) at 25°C | You won’t find water a very good source of oxygen gas at ordinary temperatures! |
| \(CH_3COOH_{(l)} \rightleftharpoons 2 H_2O_{(l)} + 2 C_{(s)}\) | \(K_c = 10^{13}\) at 25°C | This tells us that acetic acid has a great tendency to decompose to carbon, but nobody has ever found graphite (or diamonds!) forming in a bottle of vinegar. A good example of a super kinetically-hindered reaction! |
The equilibrium expression for the synthesis of ammonia
\[3 H_{2(g)} + N_{2(g)} \rightarrow 2 NH_{3(g)} \label{Eq13}\]
can be expressed as
\[ K_p =\dfrac{P^2_{NH_3}}{P_{N_2}P^3_{H_2}} \label{Eq14}\]
or
\[ K_c = \dfrac{[NH_3]^2}{[N_2] [H_2]^3} \label{Eq15}\]
so \(K_p\) for this process would appear to have units of atm –1 , and \(K_c\) would be expressed in mol –2 L 2 . And yet these quantities are often represented as being dimensionless. Which is correct? The answer is that both forms are acceptable. There are some situations (which you will encounter later) in which K ’s must be considered dimensionless, but in simply quoting the value of an equilibrium constant it is permissible to include the units, and this may even be useful in order to remove any doubt about the units of the individual terms in equilibrium expressions containing both pressure and concentration terms. In carrying out your own calculations, however, there is rarely any real need to show the units.
Strictly speaking, equilibrium expressions do not have units because the concentration or pressure terms that go into them are really ratios having the forms \((n\; mol\; L^{–1})/(1\; mol\; L^{–1})\) or \((n\; atm)/(1\; atm)\), in which the unit quantity in the denominator refers to the standard state of the substance; thus the units always cancel out. (But first-year students are not expected to know this!)
For substances that are liquids or solids, the standard state is just the concentration of the substance within the liquid or solid, so for something like \(CaF_{2(s)}\), the term going into the equilibrium expression is \([CaF_2]/[CaF_2]\), which cancels to unity; this is the reason we don’t need to include terms for solid or liquid phases in equilibrium expressions. The subject of standard states would take us beyond where we need to be at this point in the course, so we will simply say that the concept is made necessary by the fact that energy, which ultimately governs chemical change, is always relative to some arbitrarily defined zero value which, for chemical substances, is the standard state.
How the Reaction Equation affects K
It is important to remember that an equilibrium quotient or constant is always tied to a specific chemical equation, and if we write the equation in reverse or multiply its coefficients by a common factor, the value of \(Q\) or \(K\) will change. The rules are very simple:
- Writing the equation in reverse will invert the equilibrium expression;
- Multiplying the coefficients by a common factor will raise Q or K to the corresponding power.
Here are some of the possibilities for the reaction involving the equilibrium between gaseous water and its elements:
Example 1: \(\ce{2 H2 + O2 <=> 2 H2O} \) with equilibrium expression \[K_p = \dfrac{P_{H_2O}^2}{P_{H_2}^2P_{O_2}} \nonumber\]
Example 2: \(\ce{10 H2 + 5 O2 <=> 10 H2O}\) with equilibrium expression \[\begin{align*} K_p &= \dfrac{P_{H_2O}^{10}}{P_{H_2}^{10}P_{O_2}^5} \\[4pt] &= \left(\dfrac{P_{H_2O}^2}{P_{H_2}^2P_{O_2}}\right)^{5}\end{align*}\]
Example 3: \(\ce{H2 + 1/2 O2 <=> H2O} \) with equilibrium expression \[\begin{align*} K_p &= \dfrac{P_{H_2O}}{P_{H_2}P_{O_2}^{1/2}} \\[4pt] &= \left(\dfrac{P_{H_2O}^2}{P_{H_2}^2P_{O_2}}\right)^{1/2} \end{align*}\]
Example 4: \(\ce{H2O <=> H2 + 1/2 O2 } \) with equilibrium expression \[\begin{align*} K_p &= \dfrac{P_{H_2}P_{O_2}^{1/2}}{P_{H_2O}} \\[4pt] &= \left(\dfrac{P_{H_2O}^2}{P_{H_2}^2P_{O_2}}\right)^{-1/2} \end{align*}\]
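These two rules are easy to check numerically. The sketch below encodes them as helper functions; the starting value `K_ref` is a hypothetical number chosen only for illustration.

```python
# The two rules for manipulating equilibrium constants, as functions.

def reverse_K(K):
    """Writing the equation in reverse inverts the equilibrium constant."""
    return 1.0 / K

def scale_K(K, factor):
    """Multiplying the coefficients by `factor` raises K to that power."""
    return K ** factor

K_ref = 1.0e10                      # hypothetical K for 2 H2 + O2 <=> 2 H2O

K_example2 = scale_K(K_ref, 5)      # 10 H2 + 5 O2 <=> 10 H2O  ->  K^5
K_example3 = scale_K(K_ref, 0.5)    # H2 + 1/2 O2 <=> H2O      ->  K^(1/2)
K_example4 = reverse_K(K_example3)  # H2O <=> H2 + 1/2 O2      ->  K^(-1/2)
```

Note that Example 4's constant comes out the same whether you reverse the equation first and then scale it, or scale first and then reverse.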
Many chemical changes can be regarded as the sum or difference of two or more other reactions. If we know the equilibrium constants of the individual processes, we can easily calculate that for the overall reaction according to the following rule.
The equilibrium constant for the sum of two or more reactions is the product of the equilibrium constants for each of the steps.
Calculate the value of \(K\) for the reaction
\[\ce{CaCO3(s) + H^{+}(aq) <=> Ca^{2+}(aq) + HCO3^{-}(aq)} \nonumber\]
given the following equilibrium constants:
| Reaction | Equilibrium constant |
|---|---|
| \(CaCO_{3(s)} \rightleftharpoons Ca^{2+}_{(aq)} + CO^{2–}_{3(aq)}\) | \(K_1 = 10^{–6.3}\) |
| \(HCO^–_{3(aq)} \rightleftharpoons H^+_{(aq)} + CO^{2–}_{3(aq)}\) | \(K_2 = 10^{–10.3}\) |
Solution
The net reaction is the sum of reaction 1 and the reverse of reaction 2:
| Reaction | Equilibrium constant |
|---|---|
| \(CaCO_{3(s)} \rightleftharpoons Ca^{2+}_{(aq)} + CO^{2–}_{3(aq)}\) | \(K_1 = 10^{–6.3}\) |
| \(H^+_{(aq)} + CO^{2–}_{3(aq)} \rightleftharpoons HCO^–_{3(aq)}\) | \(K_{–2} = 10^{–(–10.3)}\) |
| \(CaCO_{3(s)} + H^+_{(aq)} \rightleftharpoons Ca^{2+}_{(aq)} + HCO^–_{3(aq)}\) | \(K = K_1 K_{–2} = 10^{(-6.3\,+\,10.3)} = 10^{+4.0}\) |
Comment:

This net reaction describes the dissolution of limestone by acid; it is responsible for the eroding effect of acid rain on buildings and statues. This is an example of a reaction that has practically no tendency to take place by itself (small \(K_1\)) being "driven" by a second reaction having a large equilibrium constant (\(K_{–2}\)). From the standpoint of the Le Chatelier principle, the first reaction is "pulled to the right" by the removal of carbonate by hydrogen ion. Coupled reactions of this type are widely encountered in all areas of chemistry, and especially in biochemistry, in which a dozen or so reactions may be linked.
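The rule is easy to verify with the numbers given above; the following sketch recomputes the net \(K\) from \(K_1\) and \(K_2\).

```python
import math

# Adding reactions multiplies their K's; reversing a reaction inverts its K.
K1 = 10 ** -6.3    # CaCO3(s) <=> Ca^2+(aq) + CO3^2-(aq)
K2 = 10 ** -10.3   # HCO3^-(aq) <=> H+(aq) + CO3^2-(aq)

K_net = K1 * (1.0 / K2)  # CaCO3(s) + H+(aq) <=> Ca^2+(aq) + HCO3^-(aq)
print(round(math.log10(K_net), 1))   # 4.0
```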
The synthesis of \(\ce{HBr}\) from hydrogen and liquid bromine has an equilibrium constant \(K_p = 4.5 \times 10^{15}\) at 25°C. Given that the vapor pressure of liquid bromine is 0.28 atm, find \(K_p\) for the homogeneous gas-phase reaction at the same temperature.
Solution
The net reaction we seek is the sum of the heterogeneous synthesis of \(\ce{HBr}\) and the reverse of the vaporization of liquid bromine:
| Reaction | Equilibrium constant |
|---|---|
| \(H_{2(g)} + Br_{2(l)} \rightleftharpoons 2 HBr_{(g)}\) | \(K_p = 4.5\times 10^{15}\) |
| \(Br_{2(g)} \rightleftharpoons Br_{2(l)}\) | \(K_p = (0.28)^{–1}\) |
| \(H_{2(g)} + Br_{2(g)} \rightleftharpoons 2 HBr_{(g)}\) | \(K_p = 1.6 \times 10^{16}\) |
More on heterogeneous reactions
Heterogeneous reactions are those involving more than one phase. Some examples:
| Reaction | Description |
|---|---|
| \(4 Fe_{(s)} + 3 O_{2(g)} \rightleftharpoons 2 Fe_2O_{3(s)}\) | air-oxidation of metallic iron (formation of rust) |
| \(CaF_{2(s)} \rightleftharpoons Ca^{2+}_{(aq)} + 2 F^–_{(aq)}\) | dissolution of calcium fluoride in water |
| \(H_2O_{(s)} \rightleftharpoons H_2O_{(g)}\) | sublimation of ice (a phase change) |
| \(NaHCO_{3(s)} + H^+_{(aq)} \rightleftharpoons Na^+_{(aq)} + H_2O_{(l)} + CO_{2(g)}\) | formation of carbon dioxide gas from sodium bicarbonate when water is added to baking powder (the hydrogen ions come from tartaric acid, the other component of baking powder) |
The vapor pressure of solid hydrates
A particularly interesting type of heterogeneous reaction is one in which a solid is in equilibrium with a gas. The sublimation of ice illustrated in the above table is a very common example. The equilibrium constant for this process is simply the partial pressure of water vapor in equilibrium with the solid— the vapor pressure of the ice.
Many common inorganic salts form solids which incorporate water molecules into their crystal structures. These water molecules are usually held rather loosely and can escape as water vapor. Copper(II) sulfate, for example, forms a pentahydrate in which four of the water molecules are coordinated to the \(Cu^{2+}\) ion while the fifth is hydrogen-bonded to \(SO_4^{2–}\). This latter water is more tightly bound, so that the pentahydrate loses water in two stages on heating:
\[\ce{CuSO4 \cdot 5H2O ->[140^oC] CuSO4 \cdot H2O ->[400^oC] CuSO4} \nonumber\]
These dehydration steps are carried out at the temperatures indicated above, but at any temperature, some moisture can escape from a hydrate. For the complete dehydration of the pentahydrate we can define an equilibrium constant:
\[\ce{CuSO4 \cdot 5H2O(s) <=> CuSO4(s) + 5 H2O(g)} \quad K_p = 1.14 \times 10^{-10} \nonumber\]
The vapor pressure of the hydrate (for this reaction) is the partial pressure of water vapor at which the two solids can coexist indefinitely; its value is \(K_p^{1/5}\) atm. If a hydrate is exposed to air in which the partial pressure of water vapor is less than its vapor pressure, the reaction will proceed to the right and the hydrate will lose moisture. Vapor pressures always increase with temperature, so any of these compounds can be dehydrated by heating.
Loss of water usually causes a breakdown in the structure of the crystal; this is commonly seen with sodium sulfate decahydrate, whose vapor pressure is sufficiently large that it can exceed the partial pressure of water vapor in the air when the relative humidity is low. What one sees is that the well-formed crystals of the decahydrate deteriorate into a powdery form, a phenomenon known as efflorescence.
| name | formula | vapor pressure at 25°C (torr) | vapor pressure at 30°C (torr) |
|---|---|---|---|
| sodium sulfate decahydrate | \(Na_2SO_4·10H_2O\) | 19.2 | 25.3 |
| copper(II) sulfate pentahydrate | \(CuSO_4·5H_2O\) | 7.8 | 12.5 |
| calcium chloride monohydrate | \(CaCl_2·H_2O\) | 3.1 | 5.1 |
| (water) | \(H_2O\) | 23.5 | 31.6 |
At what relative humidity will copper sulfate pentahydrate lose its waters of hydration when the air temperature is 30°C? What is \(K_p\) for this process at this temperature?
Solution

From Table \(\PageIndex{3}\), we see that the vapor pressure of the hydrate is 12.5 torr, which corresponds to a relative humidity of 12.5/31.6 = 0.40, or 40%. This is the humidity that will be maintained if the hydrate is placed in a closed container of dry air.
For this hydrate, \(K_p = p_{H_2O}^5\). The partial pressure of water vapor that will be in equilibrium with the hydrate and the dehydrated solid (remember that both solids must be present to have equilibrium!) is 12.5 torr, so, with pressures expressed in atmospheres,

\[K_p = \left(\dfrac{12.5}{760}\right)^5 = 1.20 \times 10^{-9}. \nonumber\]
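The arithmetic of this example is compact enough to check directly; the sketch below uses the 30°C values from the table.

```python
# Relative humidity and Kp for CuSO4·5H2O(s) <=> CuSO4(s) + 5 H2O(g) at 30 °C.
p_hydrate = 12.5   # torr, vapor pressure of the pentahydrate (table value)
p_water = 31.6     # torr, vapor pressure of pure water (table value)

rel_humidity = p_hydrate / p_water   # hydrate effloresces below this RH
Kp = (p_hydrate / 760.0) ** 5        # pressures converted to atmospheres
print(round(rel_humidity, 2))        # 0.4
```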
One of the first hydrates investigated in detail was calcium sulfate hemihydrate (\(CaSO_4·½H_2O\)), which Le Chatelier (he of the “principle”) showed to be the hardened form of \(CaSO_4\) known as plaster of Paris. Anhydrous \(CaSO_4\) forms compact, powdery crystals, whereas the elongated crystals of the hemihydrate bind themselves into a cement-like mass that makes this material useful for making art objects, casts for immobilizing damaged limbs, and as a construction material (fireproofing, drywall).
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/11%3A_Chemical_Equilibrium/11.04%3A_Equilibrium_Expressions",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "11.4: Equilibrium Expressions",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/11%3A_Chemical_Equilibrium/11.05%3A_Equilibrium_Calculations | 11.5: Equilibrium Calculations
The five Examples presented above were carefully selected to span the range of problem types that students enrolled in first-year college chemistry courses are expected to be able to deal with. If you are able to reproduce these solutions on your own, you should be well prepared on this topic.
- The first step in the solution of all but the simplest equilibrium problems is to sketch out a table showing for each component the initial concentration or pressure, the change in this quantity (for example, +2x), and the equilibrium values (for example, 0.0036 + 2x). In doing so, the sequence of calculations required to get to the answer usually becomes apparent.
- Equilibrium calculations often involve quadratic- or higher-order equations. Because concentrations, pressures, and equilibrium constants are seldom known to a precision of more than a few significant figures, there is no need to seek exact solutions. Iterative approximations (as in Example \(\PageIndex{3}\)) or use of a graphical calculator (Example \(\PageIndex{4}\)) are adequate and convenient.
- Phase distribution equilibria play an important role in chemical separation processes on both laboratory and industrial scales. They are also involved in the movement of chemicals between different parts of the environment, and in the bioconcentration of pollutants in the food chain.
This page presents examples that cover most of the kinds of equilibrium problems you are likely to encounter in a first-year university course. Reading this page will not teach you how to work equilibrium problems! The only one who can teach you how to interpret, understand, and solve problems is yourself . So don't just "read" this and think you are finished. You need to find and solve similar problems on your own. Look over the problems in your homework assignment or at the end of the appropriate chapter in a textbook, and see how they fit into the general types described below. When you can solve them without looking at the examples below, you will be well on your way!
Calculating Equilibrium Constants
Clearly, if the concentrations or pressures of all the components of a reaction are known, then the value of \(K\) can be found by simple substitution. Observing individual concentrations or partial pressures directly may not always be practical, however. If one of the components is colored, the extent to which it absorbs light of an appropriate wavelength may serve as an index of its concentration. Pressure measurements are ordinarily able to measure only the total pressure of a gaseous mixture, so if two or more gaseous products are present in the equilibrium mixture, the partial pressure of one may need to be inferred from that of the other, taking into account the stoichiometry of the reaction.
In an experiment carried out by Taylor and Krist (J. Am. Chem. Soc. 1941: 1377), hydrogen iodide was found to be 22.3% dissociated at 730.8 K. Calculate \(K_c\) for

\[\ce{2 HI(g) <=> H2(g) + I2(g)} \nonumber \]
Solution
No explicit molar concentrations are given, but we do know that for every \(n\) moles of \(\ce{HI}\) initially present, \(0.223n\) moles dissociate. The 2:1 stoichiometry of the reaction means that this forms \(0.223n/2 = 0.112n\) moles of each product, leaving \((1–0.223)n = 0.777n\) moles of \(\ce{HI}\). For simplicity, we assume that \(n=1\) and that the reaction is carried out in a 1.00-L vessel, so that we can substitute the required concentration terms directly into the equilibrium expression for \(K_c\).

\[\begin{align*} K_c &= \dfrac{[\ce{H2}][\ce{I2}]}{[\ce{HI}]^2} \\[4pt] &= \dfrac{(0.112)(0.112)}{(0.777)^2} \\[4pt] &= 0.021 \end{align*}\]
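The result can be cross-checked numerically; the key point is that the 2:1 stoichiometry of \(\ce{2 HI <=> H2 + I2}\) puts each product at half the number of moles of HI that dissociate.

```python
# Kc for 2 HI <=> H2 + I2 from the measured dissociation fraction.
alpha = 0.223            # fraction of HI dissociated at 730.8 K
HI = 1.0 - alpha         # mol/L, taking 1 mol HI in a 1.00-L vessel
H2 = I2 = alpha / 2.0    # mol/L of each product (2:1 stoichiometry)

Kc = (H2 * I2) / HI ** 2
print(round(Kc, 3))      # 0.021
```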
Ordinary white phosphorus, \(\ce{P4}\), forms a vapor which dissociates into diatomic molecules at high temperatures:
\[\ce{P4(g) <=> 2 P2(g)} \nonumber\]
A sample of white phosphorus, when heated to 1000°C, formed a vapor having a total pressure of 0.20 atm and a density of 0.152 g L –1 . Use this information to evaluate the equilibrium constant \(K_p\) for this reaction.
Solution
Before worrying about what the density of the gas mixture has to do with \(K_p\), start out in the usual way by laying out the information required to express \(K_p\) in terms of an unknown \(x\)
| ICE Table | \(P_{4(g)}\) | \(P_{2(g)}\) | comment |
|---|---|---|---|
| Initial (moles) | 1 | 0 | Since \(K\) is independent of the number of moles, assume the simplest initial case. |
| Change (moles) | \(-x\) | \(+2x\) | \(x\) is the fraction of \(P_4\) that dissociates. |
| Equilibrium (moles) | \(1-x\) | \(2x\) | The total number of moles at equilibrium is \((1-x) + 2x = 1+x\). |
| Mole fraction | \(\chi_{P_4}=\dfrac{1-x}{1+x}\) | \(\chi_{P_2}=\dfrac{2x}{1+x}\) | The denominator of each mole fraction is the total number of moles. |
| Equilibrium (pressures) | \(p_{P_4} = \chi_{P_4} (0.2) = \left( \dfrac{1-x}{1+x}\right) 0.2 \) | \(p_{P_2} = \chi_{P_2} (0.2) = \left( \dfrac{2x}{1+x}\right) 0.2 \) | Partial pressure is the mole fraction times the total pressure. |
The partial pressures in the bottom row were found by multiplying the mole fraction of each gas by the total pressure:
\[P_i = \chi_i P_{tot} \nonumber\]
where the term in the denominator of each mole fraction is the total number of moles of gas present at equilibrium:
\[P_{tot} = 1-x + 2x = 1+x \nonumber\]
Expressing the equilibrium constant in terms of \(x\) gives
\[ \begin{align*} K_p &= \dfrac{p^2_{P_2}}{p_{P_4}} \\[4pt] &= \dfrac{\left(\dfrac{2x}{1+x}\right)^2 0.2^2} {\left(\dfrac{1-x}{1+x}\right) 0.2} \\[4pt] &= \left(\dfrac{4x^2}{(1-x)(1+x)}\right) 0.2 \\[4pt] &= \left(\dfrac{4x^2}{1-x^2}\right) 0.2 \end{align*}\]
Now we need to find the dissociation fraction \(x\) of \(P_4\), and at this point we hope you remember those gas laws that you were told you would be needing later in the course! At a given temperature and pressure, the density of a gas is directly proportional to its average molar mass, so you need to calculate the densities of pure \(P_4\) and pure \(P_2\) vapors under the conditions of the experiment. One of these densities will be greater than 0.152 g L\(^{–1}\) and the other will be smaller; all you need to do is find where the measured density falls between the two limits.

The atomic weight of phosphorus is 30.97, giving a molar mass of 123.9 g for \(\ce{P4}\). This mass must be divided by the volume to find the density; assuming ideal gas behavior, the volume of one mole of gas is given by \(RT/P\), which works out to 522 L (remember to use the absolute temperature here). The density of pure \(\ce{P4}\) vapor under the conditions of the experiment is then

\[\rho = \dfrac{m}{V} = \dfrac{123.9\; g\; mol^{–1}}{522\; L\; mol^{–1}} = 0.237\; g\; L^{–1} \nonumber\]

The density of pure \(P_2\) would be half this, or 0.119 g L\(^{–1}\). Because the density of the mixture is linear in the mole fractions of the two gases, the measured density fixes the composition: \(\chi_{P_2} = (0.237 – 0.152)/(0.237 – 0.119) = 0.72\), so \(\chi_{P_4} = 0.28\). Setting \((1–x)/(1+x) = 0.28\) gives the fractional dissociation \(x = 0.56\). Substituting into the equilibrium expression above gives \(K_p \approx 0.37\).
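The interpolation step can also be bypassed: the ideal-gas law converts the measured density directly into the average molar mass of the mixture, and hence into \(x\). This sketch assumes ideal-gas behavior and the standard atomic weight of phosphorus, 30.97 g/mol.

```python
# Dissociation fraction of P4 from the measured density of the vapor.
R = 0.08206            # L·atm/(mol·K)
T = 1273.15            # K (1000 °C)
P_tot = 0.20           # atm, total pressure
M_P4 = 4 * 30.97       # g/mol
rho = 0.152            # g/L, measured density of the equilibrium mixture

# Ideal gas law: rho = M_avg * P / (R T)
M_avg = rho * R * T / P_tot      # average molar mass of the mixture

# 1 mol of P4 dissociating by a fraction x gives (1 + x) mol carrying the
# same total mass, so M_avg = M_P4 / (1 + x):
x = M_P4 / M_avg - 1

Kp = (4 * x ** 2 / (1 - x ** 2)) * P_tot
```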
Solve Example \(\PageIndex{2}\) using a different set of initial conditions to demonstrate that the initial conditions indeed have no effect on the equilibrium state or on \(K_p\).
Calculating Equilibrium Concentrations
This is by far the most common kind of equilibrium problem you will encounter: starting with an arbitrary number of moles of each component, how many moles of each will be present when the system comes to equilibrium? The principal source of confusion and error for beginners relates to the need to determine the values of several unknowns (a concentration or pressure for each component) from a single equation, the equilibrium expression. The key to this is to make use of the stoichiometric relationships between the various components, which usually allow us to express the equilibrium composition in terms of a single variable. The easiest and most error-free way of doing this is adopt a systematic approach in which you create and fill in a small table as shown in the following problem example. You then substitute the equilibrium values into the equilibrium constant expression, and solve it for the unknown.
This very often involves solving a quadratic or higher-order equation. Quadratics can of course be solved by using the familiar quadratic formula, but it is often easier to use an algebraic or graphical approximation, and for higher-order equations this is the only practical approach. There is almost never any need to get an exact answer, since the equilibrium constants you start with are rarely known all that precisely anyway.
Phosgene (\(\ce{COCl2}\)) is a poisonous gas that dissociates at high temperature into two other poisonous gases, carbon monoxide and chlorine. The equilibrium constant K p = 0.0041 at 600°K. Find the equilibrium composition of the system after 0.124 atm of \(\ce{COCl2}\) is allowed to reach equilibrium at this temperature.
Solution
First we need a balanced chemical reaction
\[\ce{COCl_2 <=> CO(g) + Cl2(g)} \nonumber\]
Start by drawing up a table showing the relationships between the components:
| ICE Table | \(COCl_2\) | \(CO_{(g)}\) | \(Cl_{2(g)}\) |
|---|---|---|---|
| Initial (pressure) | 0.124 atm | 0 | 0 |
| Change (pressure) | \(-x\) | \(+x\) | \(+x\) |
| Equilibrium (pressure) | \(0.124 - x\) | \(x\) | \(x\) |
Substitution of the equilibrium pressures into the equilibrium expression gives
\[ \dfrac{x^2}{0.124 - x} = 0.0041 \nonumber\]
This expression can be rearranged into standard polynomial form
\[x^2 + 0.0041 x – 0.000508 = 0 \nonumber\]
and solved by the quadratic formula, but we will simply obtain an approximate solution by iteration. Because the equilibrium constant is small, we know that x will be rather small compared to 0.124, so the above relation can be approximated by
\[ \dfrac{x^2}{0.124 - x} \approx \dfrac{x^2}{0.124}= 0.0041 \nonumber\]
which gives x = 0.0225. To see how good this is, substitute this value of x into the denominator of the original equation and solve again:
\[ \dfrac{x^2}{0.124 - 0.0225} = \dfrac{x^2}{0.102}= 0.0041 \nonumber\]
This time, solving for \(x\) gives 0.0204. Iterating once more, we get
\[ \dfrac{x^2}{0.124 - 0.0204} = \dfrac{x^2}{0.104}= 0.0041 \nonumber\]
and x = 0.0206 which is sufficiently close to the previous to be considered the final result. The final partial pressures are then 0.104 atm for COCl 2 , and 0.0206 atm each for CO and Cl 2 .
Comment : Using the quadratic formula to find the exact solution yields the two roots –0.0247 (which we ignore) and 0.0206, which show that our approximation is quite good.
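The iteration used above is mechanical enough to automate: rewrite the equilibrium condition as \(x = \sqrt{K_p(0.124 - x)}\) and apply it repeatedly, starting from \(x = 0\).

```python
import math

# Successive approximation for x^2 / (0.124 - x) = Kp.
Kp = 0.0041
P0 = 0.124     # atm, initial pressure of COCl2

x = 0.0        # first guess: neglect x in the denominator
for _ in range(10):
    x = math.sqrt(Kp * (P0 - x))

print(round(x, 4))   # 0.0206
```

Each pass substitutes the latest estimate into the denominator, exactly as was done by hand above; convergence here takes only a few iterations.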
The gas-phase dissociation of phosphorus pentachloride to the trichloride has \(K_p = 3.60\) at 540°C:
\[\ce{PCl5(g) <=> PCl3(g) + Cl2(g)} \nonumber\]
What will be the partial pressures of all three components if 0.200 mole of \(\ce{PCl5}\) and 3.00 moles of \(\ce{PCl3}\) are combined and brought to equilibrium at this temperature and at a total pressure of 1.00 atm?
Solution
As always, set up a table showing what you know (first two rows) and then expressing the equilibrium quantities:
| ICE Table | \(\ce{PCl5(g)}\) | \(\ce{PCl3(g)}\) | \(\ce{Cl2(g)}\) |
|---|---|---|---|
| Initial (moles) | 0.200 | 3.00 | 0 |
| Change (moles) | –x | +x | +x |
| Equilibrium (moles) | 0.200 – x | 3.00 + x | x |
| Equilibrium (partial pressures) | \(\dfrac{0.200 - x }{ 3.20 + x}\) | \(\dfrac{3.00 + x }{ 3.20 + x}\) | \(\dfrac{ x }{ 3.20 + x}\) |
The partial pressures in the bottom row were found by multiplying the mole fraction of each gas by the total pressure:
\[P_i = \chi_i P_{tot} \nonumber\]
where the term in the denominator of each mole fraction is the total number of moles of gas present at equilibrium:
\[P_{tot} = (0.200 – x) + (3.00 + x) + x = 3.20 + x \nonumber\]
Substituting the equilibrium partial pressures into the equilibrium expression, we have
\[ \dfrac{ (3.00 +x)(x)}{(0.200 -x)(3.20 +x)} = 3.60 \nonumber\]
whose polynomial form is
\[4.60x^2 + 13.80x – 2.304 = 0. \nonumber\]
You can use the quadratic formula to solve this, or you can do it graphically (more useful for higher-order equations). Plotting this on a graphical calculator yields \(x = 0.159\) as the positive root.
Substitution of this root into the expressions for the equilibrium partial pressures in the table yields the following values:
- \(P_{\ce{PCl5}}\) = 0.012 atm,
- \(P_{\ce{PCl3}}\) = 0.94 atm,
- \(P_{\ce{Cl2}}\) = 0.047 atm.
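A graphing calculator was used above; the quadratic formula gives the same root, and the partial pressures then follow directly, as this sketch shows.

```python
import math

# Positive root of 4.60 x^2 + 13.80 x - 2.304 = 0, then the partial pressures.
a, b, c = 4.60, 13.80, -2.304
x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

P_total = 3.20 + x
p_PCl5 = (0.200 - x) / P_total
p_PCl3 = (3.00 + x) / P_total
p_Cl2 = x / P_total

print(round(x, 3))   # 0.159
```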
Effects of Dilution on Equilibrium
In the section that introduced the Le Chatelier principle, it was mentioned that diluting a weak acid such as acetic acid \(\ce{CH3COOH}\) (“\(\ce{HAc}\)”) will shift the dissociation equilibrium to the right:
\[\ce{HAc + H_2O \rightleftharpoons H_3O^{+} + Ac^{–}} \nonumber\]
Thus a \(0.10\,M\) solution of acetic acid is 1.3% ionized, while in a 0.01 M solution, 4.3% of the \(\ce{HAc}\) molecules will be dissociated. This is because as the solution becomes more dilute, the product \([H_3O^+][Ac^–]\) decreases more rapidly than does the \(\ce{[HAc]}\) term. At the same time the concentration of \(\ce{H2O}\) becomes greater, but because it is so large to start with (about 55.5 M), any effect this might have is negligible, which is why no \(\ce{[H2O]}\) term appears in the equilibrium expression.
For a reaction such as
\[\ce{CH_3COOH (l) + C_2H_5OH (l) \rightleftharpoons CH_3COOC_2H_5 (l) + H_2O (l) }\]
(in which the water concentration does change), dilution will have no effect on the equilibrium; the situation is analogous to the way the pressure dependence of a gas-phase reaction depends on the number of moles of gaseous components on either side of the equation (i.e., \(\Delta n_g\)).
The biochemical formation of a disaccharide (double) sugar from two monosaccharides is exemplified by the reaction
\[\text{fructose} (aq) + \text{glucose-6-phosphate} (aq) → \text{sucrose-6-phosphate} (aq) + \ce{H2O} (l)\]
(Sucrose is ordinary table sugar.) To what volume should a solution containing 0.050 mol of each monosaccharide be diluted in order to bring about 5% conversion to sucrose phosphate? The equilibrium constant for this reaction is \(K_{c} = 7.1 \times 10^{-6}\) at room temperature.
Solution
The initial and final numbers of moles in this equation are as follows:
| ICE Table | fructose (\(\text{fruc}\)) | glucose-6-P (\(\text{gluc6P}\)) | sucrose-6-P (\(\text{suc6P}\)) | water \( \ce{H2O}\)) |
|---|---|---|---|---|
| Initial (moles) | 0.05 | 0.05 | 0 | - |
| Change (moles) | \(-x\) | \(-x\) | \(+x\) | - |
| Equilibrium (moles) | \(0.05-x\) | \(0.05-x\) | \(x\) | - |
What is the value of \(x\)? It corresponds to 5% conversion, i.e., the point at which 5% of the fructose (or glucose-6-phosphate) has been consumed:
\[\dfrac{x}{0.05} = 0.05 \nonumber\]
so \(x = 0.0025\). The equilibrium concentrations are then
- \([\text{suc6P}]_{equil} = \dfrac{0.0025}{V}\)
- \([\text{fruc}]_{equil} = \dfrac{0.0475}{V}\)
- \([\text{glu6P}]_{equil} = \dfrac{0.0475}{V}\)
Substituting these values into the expression for \(K_c\) (in which the solution volume \(V\) is the unknown), we have
\[\begin{align*} K_{c} &= \dfrac{[\text{suc6P}]_{equil} }{[\text{fruc}]_{equil} [\text{gluc6P}]_{equil} } \\[4pt] &= \dfrac{\left(\dfrac{0.0025}{V}\right)}{\left(\dfrac{0.0475}{V}\right)^2} = 7.1 \times 10^{-6} \end{align*}\]
\[V = (7.1 \times 10^{-6}) \dfrac{(0.0475)^2}{0.0025} \nonumber\]
Solving for \(V\) gives a final solution volume of \(6.4 \times 10^{-6}\,L\), or about \(6.4\,\mu L\). Why so small? The reaction is not favored, and to push it forward, large concentrations of reactants are needed (the Le Chatelier principle in action).
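The volume expression above reduces to a one-line evaluation, using the mole numbers from the ICE table.

```python
# Solve Kc = (0.0025/V) / (0.0475/V)^2 for the solution volume V (litres).
Kc = 7.1e-6
n_product = 0.0025     # mol sucrose-6-phosphate at 5% conversion
n_reactant = 0.0475    # mol of each remaining monosaccharide

V = Kc * n_reactant ** 2 / n_product
print(V)   # ≈ 6.4e-6 L, i.e. a few microlitres
```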
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/11%3A_Chemical_Equilibrium/11.05%3A_Equilibrium_Calculations",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "11.5: Equilibrium Calculations",
"author": "Stephen Lower"
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/11%3A_Chemical_Equilibrium/11.06%3A_Phase_Distribution_Equilibria | 11.6: Phase Distribution Equilibria
It often happens that two immiscible liquid phases are in contact, one of which contains a solute. How will the solute tend to distribute itself between the two phases? One’s first thought might be that some of the solute will migrate from one phase into the other until it is distributed equally between the two phases, since this would correspond to the maximum dispersion (randomness) of the solute. This, however, does not take into account the differing solubilities the solute might have in the two liquids; if such a difference does exist, the solute will preferentially migrate into the phase in which it is more soluble.
For a solute \(S\) distributed between two phases \(a\) and \(b\), the process \(S_b \rightleftharpoons S_a\) is described by the distribution law
\[K_{a,b} = \dfrac{[S]_a}{[S]_b}\]
in which
- \(K_{a,b}\) is the distribution ratio (also called the distribution coefficient) and
- \([S]_i\) is the concentration of the solute in phase \(i\).
biomagnification
The transport of substances between different phases is of immense importance in such diverse fields as pharmacology and environmental science. For example, if a drug is to pass from the aqueous phase within the stomach into the bloodstream, it must pass through the lipid (oil-like) phase of the epithelial cells that line the digestive tract. Similarly, a pollutant such as a pesticide residue that is more soluble in oil than in water will be preferentially taken up and retained by marine organisms, especially fish, whose bodies contain more oil-like substances; this is basically the mechanism whereby such residues as DDT can undergo biomagnification as they become more concentrated at higher levels within the food chain. For this reason, environmental regulations now require that oil-water distribution ratios be established for any new chemical likely to find its way into natural waters. The standard “oil” phase that is almost universally used is octanol, \(C_8H_{17}OH\).
In preparative chemistry it is frequently necessary to recover a desired product present in a reaction mixture by extracting it into another liquid in which it is more soluble than the unwanted substances. On the laboratory scale this operation is carried out in a separatory funnel as shown below. The two immiscible liquids are poured into the funnel through the opening at the top. The funnel is then shaken to bring the two phases into intimate contact, and then set aside to allow the two liquids to separate into layers, which are then separated by allowing the more dense liquid to exit through the stopcock at the bottom.
If the distribution ratio is too low to achieve efficient separation in a single step, it can be repeated; there are automated devices that can carry out hundreds of successive extractions, each yielding a product of higher purity. In these applications our goal is to exploit the Le Chatelier principle by repeatedly upsetting the phase distribution equilibrium that would result if two phases were to remain in permanent contact.
Video \(\PageIndex{1}\) : How to perform a liquid-liquid extraction using a separating funnel.
The distribution ratio for iodine between water and carbon disulfide is 650. Calculate the concentration of \(I_2\) remaining in the aqueous phase after 50.0 mL of 0.10 M \(I_2\) in water is shaken with 10.0 mL of \(CS_2\).
Solution
The equilibrium constant is
\[K_d = \dfrac{C_{CS_2}}{C_{H_2O}} = 650 \nonumber\]
The 50.0 mL of aqueous phase initially contains (50.0 mL)(0.10 mmol/mL) = 5.00 mmol of \(I_2\). If \(m_1\) mmol remain in the water at equilibrium, the other \((5.00 – m_1)\) mmol are in the \(CS_2\) layer:

\[\dfrac{(5.00 – m_1)\; \text{mmol} / 10\; \text{mL}}{m_1\; \text{mmol} / 50\; \text{mL}} = 650 \nonumber\]

Simplifying and solving for \(m_1\) yields

\[ \dfrac{0.1(5.00 – m_1)}{0.02\, m_1} = 650 \nonumber\]

with \(m_1 = 0.0382\) mmol.
The concentration of solute in the water layer is (0.0382 mmol)/(50 mL) = 0.000763 M, showing that almost all of the iodine has moved into the \(CS_2\) layer.
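The mass balance of this example generalizes neatly: with \(K_d = C_{org}/C_{aq}\), the amount left in the water after one extraction is \(m_1 = m_0 V_{aq}/(K_d V_{org} + V_{aq})\). The sketch below reruns the iodine numbers and shows why a second extraction with fresh solvent, as mentioned earlier, is so effective.

```python
# Iodine extracted from 50 mL of water into 10 mL of CS2, with Kd = 650.
Kd = 650.0
V_aq, V_org = 50.0, 10.0   # mL
m0 = 5.00                  # mmol I2 initially in the aqueous phase

# From Kd = ((m0 - m1)/V_org) / (m1/V_aq), solved for m1:
m1 = m0 * V_aq / (Kd * V_org + V_aq)
C_aq = m1 / V_aq           # mmol/mL, numerically equal to mol/L

# A second extraction with a fresh 10-mL portion of CS2:
m2 = m1 * V_aq / (Kd * V_org + V_aq)

print(round(m1, 4))        # 0.0382
print(round(C_aq, 6))      # 0.000763
```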
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/11%3A_Chemical_Equilibrium/11.06%3A_Phase_Distribution_Equilibria",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "11.6: Phase Distribution Equilibria",
"author": null
} |
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/12%3A_Solubility_Equilibria | 12: Solubility Equilibria
Make sure you thoroughly understand the following essential ideas:
- Discuss the roles of lattice- and hydration energy in determining the solubility of a salt in water.
- Explain what a qualitative analysis separation scheme is, and how it works.
- Write the solubility product expression for a salt, given its formula.
- Explain the distinction between an ion product and a solubility product.
- Given the formula of a salt and its K s value, calculate the molar solubility .
- Explain how the Le Chatelier principle leads to the common ion effect .
- Explain why a strong acid such as HCl will dissolve a sparingly soluble salt of a weak acid, but not a salt of a strong acid.
- Describe what happens (and why) when aqueous ammonia is slowly added to a solution of silver nitrate.
Dissolution of a salt in water is a chemical process that is governed by the same laws of chemical equilibrium that apply to any other reaction. There are, however, a number of special aspects of these equilibria that set them somewhat apart from the more general ones that are covered in the lesson set devoted specifically to chemical equilibrium. These include such topics as the common ion effect, the influence of pH on solubility, supersaturation, and some special characteristics of particularly important solubility systems.
Solubility: the dissolution of salts in water
Drop some ordinary table salt into a glass of water, and watch it "disappear". We refer to this as dissolution , and we explain it as a process in which the sodium and chlorine units break away from the crystal surface, get surrounded by H 2 O molecules, and become hydrated ions .
\[NaCl_{(s)} \rightarrow Na^+_{(aq)}+ Cl^–_{(aq)} \]
The designation (aq) means "aqueous" and comes from aqua , the Latin word for water. It is used whenever we want to emphasize that the ions are hydrated — that H 2 O molecules are attached to them.
If you keep adding salt, there will come a point at which it no longer seems to dissolve. If this condition persists, we say that the salt has reached its solubility limit , and the solution is saturated in NaCl. Remember that solubility equilibrium, and the calculations that relate to it, are only meaningful when both sides (solid and dissolved ions) are simultaneously present. The situation is now described by
\[NaCl_{(s)} \rightleftharpoons Na^+_{(aq)}+ Cl^–_{(aq)}\]
in which the solid and its ions are in equilibrium .
Salt solutions that have reached or exceeded their solubility limits (usually 36-39 g per 100 mL of water) are responsible for prominent features of the earth's geochemistry. They typically form when NaCl leaches from soils into waters that flow into salt lakes in arid regions that have no natural outlets; subsequent evaporation of these brines forces the above equilibrium to the left, forming natural salt deposits. These are often admixed with other salts, but in some cases are almost pure NaCl. Many parts of the world contain buried deposits of NaCl (known as halite) that formed from the evaporation of ancient seas, and which are now mined.
Solubilities are most fundamentally expressed in molar (mol L –1 of solution) or molal (mol kg –1 of water) units. But for practical use in preparing stock solutions, chemistry handbooks usually express solubilities in terms of grams-per-100 ml of water at a given temperature, frequently noting the latter in a superscript. Thus 6.9 20 means 6.9 g of solute will dissolve in 100 mL of water at 20° C. When quantitative data are lacking, the designations "soluble", "insoluble", "slightly soluble", and "highly soluble" are used. There is no agreed-on standard for these classifications, but a useful guideline might be that shown below.
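The gram-to-molar conversion described above is easy to script; the sketch below is an illustrative helper (the function name and example values are ours, not from the text).

```python
# Convert a handbook solubility quoted in grams per 100 mL of water
# into molar units (mol per litre of solution). Illustrative helper.

def molar_solubility(grams_per_100mL: float, molar_mass: float) -> float:
    """Return solubility in mol/L from g per 100 mL and molar mass in g/mol."""
    return grams_per_100mL * 10.0 / molar_mass  # 100 mL -> 1 L, then g -> mol

# NaCl: about 36 g per 100 mL at room temperature, molar mass 58.44 g/mol
print(round(molar_solubility(36.0, 58.44), 2))  # -> 6.16 (mol/L)
```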
The solubilities of salts in water span a remarkably large range of values, from almost completely insoluble to highly soluble. Moreover, there is no simple way of predicting these values, or even of explaining the trends that are observed for the solubilities of different anions within a given group of the periodic table.
Ultimately, the driving force for dissolution (and for all chemical processes) is determined by the Gibbs free energy change. But because many courses cover solubility before introducing free energy, we will not pursue this here. Dissolution of a salt is conceptually understood as a sequence of the two processes depicted above:
- breakup of the ionic lattice of the solid,
- followed by attachment of water molecules to the released ions.
The first step consumes a large quantity of energy, something that by itself would strongly discourage solubility. But the second step releases a large amount of energy and thus has the opposite effect. Thus the net energy change depends on the sum of two large energy terms (often approaching 1000 kJ/mol) having opposite signs. Each of these terms will to some extent be influenced by the size, charge, and polarizability of the particular ions involved, and on the lattice structure of the solid. This large number of variables makes it impossible to predict the solubility of a given salt. Nevertheless, there are some clear trends for how the solubilities of a series of salts of a given anion (such as hydroxides, sulfates, etc.) change with a periodic table group. And of course, there are a number of general solubility rules — for example, that all nitrates are soluble, while most sulfides are insoluble.
Solubility and temperature
Solubility usually increases with temperature - but not always. This is very apparent from the solubility-vs.-temperature plots shown here. (Some of the plots are colored differently in order to make it easier to distinguish them where they crowd together.) The temperature dependence of any process depends on its entropy change — that is, on the degree to which thermal kinetic energy can spread throughout the system. When a solid dissolves, its component molecules or ions diffuse into the much greater volume of the solution, carrying their thermal energy along with them. So we would normally expect the entropy to increase — something that makes any process take place to a greater extent at a higher temperature.
So why does the solubility of cerium sulfate (green plot) diminish with temperature? Dispersal of the Ce 3 + and SO 4 2– ions themselves is still associated with an entropy increase, but in this case the entropy of the water decreases even more owing to the ordering of the H 2 O molecules that attach to the Ce 3 + ions as they become hydrated. It's difficult to predict these effects, or explain why they occur in individual cases — but they do happen.
The Importance of Sparingly Soluble Solids
All solids that dissociate into ions exhibit some limit to their solubilities, but those whose saturated solutions exceed about 0.01 mol L –1 cannot be treated by simple equilibrium constants owing to ion-pair formation that greatly complicates their behavior. For this reason, most of what follows in this lesson is limited to salts that fall into the "sparingly soluble" category. The importance of sparingly soluble solids arises from the fact that formation of such a product can effectively remove the corresponding ions from the solution, thus driving the reaction to the right. Consider, for example, what happens when we mix solutions of strontium nitrate and potassium chloride in a 1:2 mole ratio. Although we might represent this process by
\[Sr(NO_3)_{2(aq)}+ 2 KCl_{(aq)}→ SrCl_{2(aq)}+ 2 KNO_{3(aq)} \label{1}\]
the net ionic equation
\[Sr^{2+} + 2 NO_3^– + 2 K^+ + 2 Cl^– → Sr^{2+} + 2 NO_3^– + 2 K^+ + 2 Cl^–\]
indicates that no net change at all has taken place! Of course if the solution were then evaporated to dryness, we would end up with a mixture of the four salts shown in Equation \(\ref{1}\), so in this case we might say that the reaction is half-complete. Contrast this with what happens if we combine equimolar solutions of barium chloride and sodium sulfate:
\[BaCl_{2(aq)}+ Na_2SO_{4(aq)}→ 2 NaCl_{(aq)}+ BaSO_{4(s)} \label{2}\]
whose net ionic equation is
\[Ba^{2+} + \cancel{ 2 Cl^–} + \cancel{2 Na^+} + SO_4^{2–} → \cancel{2 Na^+} + \cancel{2 Cl^–} + BaSO_{4(s)}\]
which after canceling out like terms on both sides, becomes simply
\[Ba^{2+} + SO_4^{2– }→ BaSO_{4(s)} \label{3}\]
Because the formation of sparingly soluble solids is "complete" (that is, equilibria such as the one shown above for barium sulfate lie so far to the right), virtually all of one or both of the contributing ions are essentially removed from the solution. Such reactions are said to be quantitative , and they are especially important in analytical chemistry:
- Qualitative analysis: This most commonly refers to a procedural scheme, widely encountered in first-year laboratory courses, in which a mixture of cations (usually in the form of their dissolved nitrate salts) is systematically separated and identified on the basis of the solubilities of their various anion salts such as chlorides, carbonates, sulfates, and sulfides. Although this form of qualitative analysis is no longer employed by modern-day chemists (instrumental techniques such as atomic absorption spectroscopy are much faster and more comprehensive), it is still valued as an educational tool for familiarizing students with some of the major classes of inorganic salts, and for developing basic skills relating to observing, organizing, and interpreting results in the laboratory.
- Quantitative gravimetric analysis: In this classical form of chemical analysis, an insoluble salt of a cation is prepared by precipitating it by addition of a suitable anion. The precipitate is then collected, dried, and weighed ("gravimetry") in order to determine the concentration of the cation in the sample. For example, a gravimetric procedure for determining the quantity of barium in a sample might involve precipitating the metal as the sulfate according to Equation \(\ref{3}\) above, using an excess of sulfate ion to ensure complete removal of the barium. This method of quantitative analysis became extremely important in the latter half of the nineteenth century, by which time reasonably accurate atomic weights had become available, and sensitive analytical balances had been developed. It was not until the 1960s that it became largely supplanted by instrumental techniques, which were much quicker and more accurate. Gravimetric analysis is still usually included as a part of more advanced laboratory instruction, largely as a means of developing careful laboratory technique.
Solubility Products and Equilibria
Some salts and similar compounds (such as some metal hydroxides) dissociate completely when they dissolve, but the extent to which they dissolve is so limited that the resulting solutions exhibit only very weak conductivities. In these salts, which otherwise act as strong electrolytes, we can treat the dissolution-dissociation process as a true equilibrium. Although this seems almost trivial now, this discovery, made in 1900 by Walther Nernst who applied the Law of Mass Action to the dissociation scheme of Arrhenius, is considered one of the major steps in the development of our understanding of ionic solutions.
Using silver chromate as an example, we express its dissolution in water as
\[Ag_2CrO_{4(s)} \rightarrow 2 Ag^+_{(aq)}+ CrO^{2–}_{4(aq)} \label{4a}\]
When this process reaches equilibrium ( which requires that some solid be present ), we can write (leaving out the " (aq) s" for simplicity)
\[Ag_2CrO_{4(s)} \rightleftharpoons 2 Ag^+ + CrO^{2–}_{4} \label{4b}\]
The equilibrium constant is formally
\[K = \dfrac{[Ag^+]^2[CrO_4^{2–}]}{[Ag_2CrO_{4(s)}]} = [Ag^+]^2[CrO_4^{2–}] \label{5a}\]
But because solid substances do not normally appear in equilibrium expressions, the equilibrium constant for this process is
\[[Ag^+]^2 [CrO_4^{2–}] = K_s = 2.76 \times 10^{–12} \label{5b}\]
Because equilibrium constants of this kind are written as products, the resulting K 's are commonly known as solubility products , denoted by \(K_s\) or \(K_{sp}\).
Strictly speaking, concentration units do not appear in equilibrium constant expressions. However, many instructors prefer that students show them anyway, especially when using solubility products to calculate concentrations. If this is done, \(K_s\) in Equation \(\ref{5b}\) would have units of mol 3 L –3 .
Equilibrium and non-equilibrium in solubility systems
An expression such as [Ag + ] 2 [CrO 4 2– ] is known generally as an ion product — this one being the ion product for silver chromate. An ion product can in principle have any positive value, depending on the concentrations of the ions involved. Only in the special case when its value is identical with K s does it become the solubility product. A solution in which this is the case is said to be saturated . Thus when
\[[Ag^+]^2 [CrO_4^{2–}] = 2.76 \times 10^{-12}\]
at the temperature and pressure at which this value of \(K_s\) applies, we say that the "solution is saturated in silver chromate".
A solution must be saturated to be in equilibrium with the solid. This is a necessary condition for solubility equilibrium, but it is not by itself sufficient . True chemical equilibrium can only occur when all components are simultaneously present. A solubility system can be in equilibrium only when some of the solid is in contact with a saturated solution of its ions. Failure to appreciate this is a very common cause of errors in solving solubility problems.
Undersaturated and supersaturated solutions
If the ion product is smaller than the solubility product, the system is not in equilibrium and no solid can be present. Such a solution is said to be undersaturated . A supersaturated solution is one in which the ion product exceeds the solubility product. A supersaturated solution is not at equilibrium, and no solid can ordinarily be present in such a solution. If some of the solid is added, the excess ions precipitate out until solubility equilibrium is achieved.
How to know the saturation status of a solution
This is just a simple matter of comparing the ion product \(Q_s\) with the solubility product \(K_s\). So for the system
\[Ag_2CrO_{4(s)} \rightleftharpoons 2 Ag^+ + CrO_4^{2–} \label{4ba}\]
a solution in which \(Q_s < K_s\) (i.e., \(K_s /Q_s > 1\)) is undersaturated (blue shading) and no solid will be present. The combinations of [Ag + ] and [CrO 4 2– ] that correspond to a saturated solution (and thus to equilibrium) are limited to those described by the curved line. The pink area to the right of this curve represents a supersaturated solution.
A sample of groundwater that has percolated through a layer of gypsum (CaSO 4 , K s = 4.9E–5 = 10 –4.3 ) is found to be 8.4E–5 M in Ca 2 + and 7.2E–5 M in SO 4 2– . What is the equilibrium state of this solution with respect to gypsum?
Solution
The ion product
\[Q_s = (8.4 \times 10^{–5})(7.2 \times 10^{–5}) = 6.0 \times 10^{–9}\]

is far smaller than \(K_s\), so the ratio K s / Q s > 1 and the solution is undersaturated in CaSO 4 .
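Comparing \(Q_s\) with \(K_s\) is the same test every time, so it can be sketched as a small function (the names and the "saturated" tolerance are illustrative); the numbers below are the gypsum data from this example.

```python
# Classify a solution by comparing its ion product Q with the solubility
# product Ks. Illustrative sketch for a 1:1 salt such as CaSO4.

def saturation_state(q: float, ks: float, rel_tol: float = 1e-6) -> str:
    if abs(q - ks) <= rel_tol * ks:
        return "saturated"
    return "undersaturated" if q < ks else "supersaturated"

# Gypsum data from the example: Ks = 4.9e-5, Q = (8.4e-5)(7.2e-5) ≈ 6.0e-9
q = 8.4e-5 * 7.2e-5
print(saturation_state(q, 4.9e-5))  # -> undersaturated
```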
How are solubilities determined?
There are two principal methods, neither of which is all that reliable for sparingly soluble salts:
- Evaporate a saturated solution of the solid to dryness, and weigh what's left.
- Measure the electrical conductivity of the saturated solution, which will be proportional to the concentrations of the ions.
How Solubilities relate to solubility products
The solubility (by which we usually mean the molar solubility ) of a solid is expressed as the concentration of the "dissolved solid" in a saturated solution. In the case of a simple 1:1 solid such as AgCl, this would just be the concentration of Ag + or Cl – in the saturated solution. However, for a more complicated stoichiometry such as silver chromate, the solubility would be only one-half of the Ag + concentration.
For example, let us denote the solubility of Ag 2 CrO 4 as \(S\) mol L –1 . Then for a saturated solution, we have
- \([Ag^+] = 2S\)
- \( [CrO_4^{2–}] = S\)
Substituting this into Equation \(\ref{5b}\) above,
\[(2S)^2 (S) = 4S^3 = 2.76 \times 10^{–12}\]
\[S= \left( \dfrac{K_s}{4} \right)^{1/3} = (6.9 \times 10^{-13})^{1/3} = 0.88 \times 10^{-4} \label{6a}\]
thus the solubility is \(8.8 \times 10^{–5}\; M\).
Note that the relation between the solubility and the solubility product constant depends on the stoichiometry of the dissolution reaction. For this reason it is meaningless to compare the solubilities of two salts having the formulas \(A_2B\) and \(AB_2\), on the basis of their \(K_s\) values.
It is meaningless to compare the solubilities of two salts having different formulas on the basis of their \(K_s\) values.
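The stoichiometry dependence can be folded into a single formula: for a salt \(A_mB_n\), a saturated solution has cation concentration \(mS\) and anion concentration \(nS\), so \(K_s = m^m n^n S^{m+n}\). A sketch of this (helper name is ours, and the usual caveats about activities apply):

```python
# Molar solubility S of a salt AmBn from its solubility product:
# Ks = (mS)^m (nS)^n = m^m n^n S^(m+n). Ignores activity effects,
# ion pairing, and hydrolysis, as cautioned elsewhere in this lesson.

def molar_solubility_from_ks(ks: float, m: int, n: int) -> float:
    return (ks / (m**m * n**n)) ** (1.0 / (m + n))

print(molar_solubility_from_ks(2.76e-12, 2, 1))  # Ag2CrO4 -> ~8.8e-5 M
print(molar_solubility_from_ks(6.2e-12, 1, 3))   # La(IO3)3 -> ~6.9e-4 M
```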
The solubility of fluorite, CaF 2 (molar mass 78.1 g/mol), is reported to be 0.0016 g per 100 mL of water. Calculate the value of K s under these conditions.

Solution

Moles of solute in 100 mL: 0.0016 g ÷ 78.1 g/mol = 2.05E–5 mol

S = 2.05E–5 mol / 0.100 L = 2.05E–4 M

K s = [Ca 2 + ][F – ] 2 = ( S )(2 S ) 2 = 4 S 3 = 4 × (2.05E–4) 3 = 3.44E–11
Estimate the solubility of La(IO 3 ) 3 and calculate the concentration of iodate in equilibrium with solid lanthanum iodate, for which K s = 6.2 × 10 –12 .
Solution
The equation for the dissolution is
\[La(IO_3)_3 \rightleftharpoons La^{3+ }+ 3 IO_3^–\]
If the solubility is S , then the equilibrium concentrations of the ions will be
[La 3 + ] = S and [IO 3 – ] = 3 S . Then K s = [La 3+ ][IO 3 – ] 3 = S (3 S ) 3 = 27 S 4
27 S 4 = 6.2 × 10 –12 , S = ( ( 6.2 ÷ 27) × 10 –12 ) ¼ = 6.92 × 10 –4 M
[IO 3 – ] = 3 S = 2.08 × 10 –3 ( M )
Cadmium is a highly toxic environmental pollutant that enters wastewaters associated with zinc smelting (Cd and Zn commonly occur together in ZnS ores) and in some electroplating processes. One way of controlling cadmium in effluent streams is to add sodium hydroxide, which precipitates insoluble Cd(OH) 2 ( K s = 2.5E–14). If 1000 L of a certain wastewater contains Cd 2+ at a concentration of 1.6E–5 M , what concentration of Cd 2+ would remain after addition of 10 L of 4 M NaOH solution?
Solution
As with most real-world problems, this is best approached as a series of smaller problems, making simplifying approximations as appropriate.
Volume of treated water: 1000 L + 10 L = 1010 L
Concentration of OH – after dilution to 1010 L:

(4 M ) × (10 L)/(1010 L) = 0.040 M

Initial concentration of Cd 2 + in 1010 L of water:

(1.6E–5 M ) × (1000 L)/(1010 L) ≈ 1.6E–5 M
The easiest way to tackle this is to start by assuming that a stoichiometric quantity of Cd(OH) 2 is formed — that is, all of the Cd 2 + gets precipitated.
| Concentrations | [Cd 2+ ], M | [OH – ], M |
|---|---|---|
| initial | 1.6E–5 | 0.04 |
| change | –1.6E–5 | –3.2E–5 |
| final: | 0 | 0.04 – 3.2E–5 ≈ 0.04 |
Now "turn on the equilibrium" — find the concentration of Cd 2 + that can exist in a 0.04 M OH – solution:
| Concentrations | [Cd 2+ ], M | [OH – ], M |
|---|---|---|
| initial | 0 | 0.04 |
| change | + x | +2 x |
| at equilibrium | x | 0.04 + 2 x ≈ 0.04 |
Substitute these values into the solubility product expression:
\[K_s = [Cd^{2+}] [OH^–]^2 = 2.5 \times 10^{–14}\]

\[[Cd^{2+}] = \dfrac{2.5 \times 10^{–14}}{(0.04)^2} = \dfrac{2.5 \times 10^{–14}}{1.6 \times 10^{–3}} = 1.6 \times 10^{–11}\; M\]
Note that the effluent will now be very alkaline:
\[pH = 14 + \log 0.04 = 12.6\]
so in order to meet environmental standards an equivalent quantity of strong acid must be added to neutralize the water before it is released.
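The two-step treatment above (dilute, assume complete precipitation, then "turn on the equilibrium" in the leftover hydroxide) can be checked numerically; the sketch below simply repeats the arithmetic of the example.

```python
# Residual Cd2+ after treating 1000 L of wastewater with 10 L of 4 M NaOH.
# Step 1: dilution; step 2: assume all Cd2+ precipitates as Cd(OH)2;
# step 3: re-equilibrate the solid with the excess OH-.

ks = 2.5e-14                                   # Ks of Cd(OH)2
v_water, v_naoh = 1000.0, 10.0                 # litres
cd0 = 1.6e-5 * v_water / (v_water + v_naoh)    # diluted Cd2+, ~1.6e-5 M
oh0 = 4.0 * v_naoh / (v_water + v_naoh)        # diluted OH-,  ~0.040 M

oh_left = oh0 - 2 * cd0        # OH- consumed by precipitation is negligible
cd_residual = ks / oh_left**2  # [Cd2+] = Ks / [OH-]^2
print(cd_residual)             # ~1.6e-11 M
```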
All Just a Simplification of Reality
The simple relations between K s and molar solubility outlined above, and the calculation examples given here, cannot be relied upon to give correct answers. Some of the reasons for this are explained in Part 2 of this lesson, and have mainly to do with incomplete dissociation of many salts and with complex formation in the presence of anions such as Cl – and OH – . The situation is nicely described in the article What Should We Teach Beginners about Solubility and Solubility Products? by Stephen Hawkes ( J Chem Educ. 1998 75(9) 1179-81). See also the earlier article by Meites, Pode and Thomas Are Solubilities and Solubility Products Related? ( J Chem Educ. 1966 43(12) 667-72).
It turns out that solubility equilibria more often than not involve many competing processes and their rigorous treatment can be quite complicated. Nevertheless, it is important that students master these over-simplified examples. However, it is also important that they are not taken too seriously!
The Common Ion Effect
It has long been known that the solubility of a sparingly soluble ionic substance is markedly decreased in a solution of another ionic compound when the two substances have an ion in common. This is just what would be expected on the basis of the Le Chatelier Principle; whenever the process
\[CaF_{2(s)} \rightleftharpoons Ca^{2+} + 2 F^– \label{7}\]
is in equilibrium, addition of more fluoride ion (in the form of highly soluble NaF) will shift the composition to the left, reducing the concentration of Ca 2 + , and thus effectively reducing the solubility of the solid. We can express this quantitatively by noting that the solubility product expression
\[[Ca^{2+}][F^–]^2 = 1.7 \times 10^{–10} \label{8}\]
must always hold, even if some of the ionic species involved come from sources other than CaF 2 (s) . For example, if some quantity x of fluoride ion is added to a solution initially in equilibrium with solid CaF 2 , we have
- \([Ca^{2+}] = S\)
- \([F^–] = 2S + x\)
so that
\[K_s = [Ca^{2+}][F^–]^2 = S (2S + x)^2 \label{9a}\]

When the added fluoride concentration \(x\) is much greater than \(2S\), this simplifies to

\[K_s ≈ S x^2 \]

or

\[S ≈ \dfrac{K_s}{x^2} \label{9b}\]
University-level students should be able to derive these relations for ion-derived solids of any stoichiometry.
The plots shown below illustrate the common ion effect for silver chromate as the chromate ion concentration is increased by addition of a soluble chromate such as \(Na_2CrO_4\).
What's different about the plot on the right? If you look carefully at the scales, you will see that this one is plotted logarithmically (that is, in powers of 10). Notice how much wider a range of values can be displayed on a logarithmic plot. The point of showing this pair of plots is to illustrate the great utility of log-concentration plots in equilibrium calculations, in which simple approximations (such as that made in Equation \(\ref{9b}\)) can yield straight lines within the range of values for which the approximation is valid.
Calculate the solubility of strontium sulfate ( K s = 2.8 × 10 –7 ) in
- pure water and
- in a 0.10 mol L –1 solution of \(Na_2SO_4\).
Solution

(a) In pure water, K s = [Sr 2 + ][SO 4 2– ] = S 2 , so

\[S = \sqrt{K_s} = \sqrt{ 2.8 \times 10^{–7} } = 5.3 \times 10^{–4}\]

(b) In 0.10 mol L –1 Na 2 SO 4 , we have

\[K_s = [Sr^{2+}][SO_4^{2–}] = S \times (0.10 + S) = 2.8 \times 10^{–7}\]

Because S is negligible compared to 0.10 M , we make the approximation

\[K_s ≈ S \times (0.10\, M)\]

so

\[S ≈ \dfrac{2.8 \times 10^{–7}}{0.10} = 2.8 \times 10^{–6}\; M\]

This is roughly 200 times smaller than the result from (a) .
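Parts (a) and (b) differ only in where the sulfate comes from, which makes the common ion effect easy to see numerically; a sketch using the example's numbers:

```python
# Solubility of SrSO4 (Ks = 2.8e-7) in pure water and in 0.10 M Na2SO4.
import math

ks = 2.8e-7
s_pure = math.sqrt(ks)    # S^2 = Ks for a 1:1 salt
x = 0.10                  # common sulfate ion supplied by Na2SO4
s_common = ks / x         # S(x + S) ≈ Sx when x >> S

print(s_pure)             # ~5.3e-4 M
print(s_common)           # ~2.8e-6 M: about two orders of magnitude smaller
```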
Selective Precipitation and Separations
Differences in solubility are widely used to selectively remove one species from a solution containing several kinds of ions.
The solubility products of AgCl and Ag 2 CrO 4 are 1.8E–10 and 2.0E–12, respectively. Suppose that a dilute solution of AgNO 3 is added dropwise to a solution containing 0.001 M Cl – and 0.01 M CrO 4 2– .
- Which solid, AgCl or Ag 2 CrO 4 , will precipitate first?
- What fraction of the first anion will have been removed when the second just begins to precipitate? Neglect any volume changes.
Solution
The silver ion concentrations required to precipitate the two salts are found by substituting into the appropriate solubility product expressions:
- to precipitate AgCl: [Ag + ] = 1.8E-10 / .001 = 1.8E-7 M
- to precipitate Ag 2 CrO 4 : [Ag + ] = (2.0E-12 / .01) ½ = 1.4E–5 M
The first solid to form as the concentration of Ag + increases will be AgCl. Eventually the Ag + concentration reaches 1.4E-5 M and Ag 2 CrO 4 begins to precipitate. At this point the concentration of chloride ion in the solution will be 1.3E-5 M , which is about 1.3% of the amount originally present.
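The same threshold logic can be scripted directly from the two solubility products (dilution neglected, as in the example):

```python
# Which silver salt precipitates first, and how much Cl- is left when
# Ag2CrO4 just begins to form. Values from the example above.
import math

ks_agcl, ks_ag2cro4 = 1.8e-10, 2.0e-12
cl0, cro4 = 1.0e-3, 1.0e-2

ag_for_agcl = ks_agcl / cl0                 # Ag+ to start AgCl: 1.8e-7 M
ag_for_cro4 = math.sqrt(ks_ag2cro4 / cro4)  # Ag+ to start Ag2CrO4: 1.4e-5 M

cl_left = ks_agcl / ag_for_cro4             # Cl- when chromate starts to form
print(cl_left / cl0)                        # ~0.013, i.e. about 1.3% remains
```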
The preceding example is the basis of the Mohr titration of chloride by Ag + , commonly done to determine the salinity of water samples. The equivalence point of this precipitation titration occurs when no more AgCl is formed, but there is no way of observing this directly in the presence of the white AgCl which is suspended in the container. Before the titration is begun, a small amount of K 2 CrO 4 is added to the solution. Ag 2 CrO 4 is red-orange in color, so its formation, which signals the approximate end of AgCl precipitation, can be detected visually.
Competing Equilibria involving solids
Simple dissolution equilibria of the kind described by a solubility product expression are probably the exception rather than the rule. Such equilibria are often in competition with other reactions involving species such as H + or OH – , complexing agents, oxidation-reduction, formation of other sparingly soluble species or, in the case of carbonates and sulfites, of gaseous products. The exact treatment of these systems can be extremely complicated, involving the solution of large sets of simultaneous equations. For most practical purposes it is sufficient to recognize the general trends, and to carry out approximate calculations.
Salts of weak acids are soluble in strong acids, but strong acids will not dissolve salts of strong acids
The solubility of a sparingly soluble salt of a weak acid or base will depend on the pH of the solution. To understand the reason for this, consider a hypothetical salt MA which dissolves to form a cation M + and an anion A – which is also the conjugate base of a weak acid HA. The fact that the acid is weak means that hydrogen ions (always present in aqueous solutions) and M + cations will both be competing for the A – :

\[MA_{(s)} \rightleftharpoons M^+ + A^–\]

\[H^+ + A^– \rightarrow HA\]

The weaker the acid HA, the more readily the second reaction takes place, thus gobbling up A – ions. If an excess of H + is made available by addition of a strong acid, even more A – ions will be consumed, eventually reversing the first reaction and causing the solid to dissolve.

In the case of calcium sulfate, for example, sulfate ions react with calcium ions to form insoluble CaSO 4 . Addition of a strong acid such as HCl (which is totally dissociated ) has no effect because CaCl 2 is soluble. Although H + can protonate some SO 4 2– ions to form hydrogen sulfate ("bisulfate") HSO 4 – , this ampholyte acid is too weak to reverse the precipitation by drawing a significant fraction of sulfate ions out of CaSO 4(s) .
Calculate the concentration of aluminum ion in a solution that is in equilibrium with aluminum hydroxide when the pH is held at 6.0.
The equilibria are
\[Al(OH)_3 \rightleftharpoons Al^{3+} + 3 OH^–\]
with
\[K_s = 1.4 \times 10^{–34}\]
and
\[H_2O \rightleftharpoons H^+ + OH^–\]
with
\[K_w = 1 \times 10^{–14}\]
Substituting the equilibrium expression for the second of these into that for the first, we obtain
\[[OH^–]^3 = \left( \dfrac{K_w}{ [H^+]}\right)^3 = \dfrac{K_s}{[Al^{3+}]}\]
\[\left( \dfrac{1.0 \times 10^{–14}}{1.0 \times 10^{–6}} \right)^3 = \dfrac{1.4 \times 10^{–34}}{[Al^{3+}]}\]
from which we find
\[[Al^{3+}] = 1.4 \times 10^{–10}\; M\]
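Because \([Al^{3+}] = K_s/[OH^–]^3\) and \([OH^–]\) is fixed by the pH, the whole example reduces to a single function of pH; a sketch using the constants above:

```python
# [Al3+] in equilibrium with solid Al(OH)3 as a function of pH.
ks, kw = 1.4e-34, 1.0e-14   # Ks of Al(OH)3 and the water ion product

def al3_concentration(pH: float) -> float:
    oh = kw / 10**(-pH)     # [OH-] fixed by the pH
    return ks / oh**3       # Ks = [Al3+][OH-]^3

print(al3_concentration(6.0))  # ~1.4e-10 M, as in the example
print(al3_concentration(4.0))  # about a million times more soluble at pH 4
```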
Formation of a Competing Precipitate
If two different anions compete with a single cation to form two possible precipitates, the outcome depends not only on the solubilities of the two solids, but also on the concentrations of the relevant ions.
These kinds of competitions are especially important in groundwaters, which acquire solutes from various sources as they pass through sediment layers having different compositions. As the following example shows, competing equilibria of these kinds are very important for understanding geochemical processes involving the formation and transformation of mineral deposits.
Suppose that groundwater containing 0.001 M F – and 0.0018 M CO 3 2– percolates through a sediment containing calcite, CaCO 3 . Will the calcite be replaced by fluorite, CaF 2 ?
The two solubility equilibria are
\[\ce{CaCO3 <=> Ca^{2+} + CO3^{2-}} \quad K_s = 10^{–8.1}\]

\[\ce{CaF2 <=> Ca^{2+} + 2F^{-}} \quad K_s = 10^{–10.4}\]
Solution:
The equilibrium between the two solids and the two anions is
\[CaCO_3 + 2 F^–\rightleftharpoons CaF_2 + CO_3^{2–}\]
This is just the sum of the dissolution reaction for CaCO 3 and the reverse of that for CaF 2 , so the equilibrium constant is
\[K = \dfrac{[CO_3^{2–}]}{ [F^–]^2} = \dfrac{10^{–8.1}}{ 10^{–10.4}} = 200\]
That is, the two solids can coexist only if the reaction quotient Q ≤ 200. Substituting the given ion concentrations we find that
\[Q = \dfrac{0.0018}{(0.001)^2} = 1800\]
Since Q > K , we can conclude that the calcite will not change into fluorite.
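The comparison is quick to verify numerically (values from the example):

```python
# CaCO3 + 2 F- <=> CaF2 + CO3^2-: compare Q with K = Ks(CaCO3)/Ks(CaF2).
k = 10**(-8.1) / 10**(-10.4)   # equilibrium constant, ~200
q = 0.0018 / 0.001**2          # [CO3^2-]/[F-]^2 = 1800

print(q > k)  # True: Q > K, so the reaction runs in reverse and
              # the calcite is not converted to fluorite
```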
Complex Ion Formation
Most transition metal ions possess empty d orbitals that are sufficiently low in energy to be able to accept electron pairs from electron donors, resulting in the formation of a covalently-bound complex ion . Even neutral species that have a nonbonding electron pair can bind to ions in this way. Water is an active electron donor of this kind, so aqueous solutions of ions such as Fe 3 + (aq) and Cu 2 + (aq) exist as the octahedral complexes Fe(H 2 O) 6 3+ and Cu(H 2 O) 6 2+ , respectively.
Many of the remarks made above about the relation between K s and solubility also apply to calculations involving complex formation. See Stephen Hawkes' article Complexation Calculations are Worse Than Useless ("... to the point of absurdity...and should not be taught" in introductory courses.) ( J Chem Educ. 1999 76(8) 1099-1100 ). However, it is very important that you understand the principles outlined in this section.
H 2 O is only one possible electron donor; NH 3 , CN – and many other species (known collectively as ligands ) possess lone pairs that can occupy vacant d orbitals on a metallic ion. Many of these bind much more tightly to the metal than does H 2 O, which will undergo displacement and substitution by one or more of these ligands if they are present in sufficiently high concentration.
If a sparingly soluble solid is placed in contact with a solution containing a ligand that can bind to the metal ion much more strongly than H 2 O, then formation of a complex ion will be favored and the solubility of the solid will be greater. Perhaps the most commonly seen example of this occurs when ammonia is added to a solution of copper(II) nitrate, in which the Cu 2 + (aq) ion is itself the hexaaquo complex ion Cu(H 2 O) 6 2+ .
Because ammonia is a weak base, the first thing we observe is formation of a cloudy precipitate of Cu(OH) 2 in the blue solution. As more ammonia is added , this precipitate dissolves, and the solution turns an intense deep blue, which is the color of hexamminecopper(II) and the various other related species such as Cu(H 2 O) 5 (NH 3 ) 2+ , Cu(H 2 O) 4 (NH 3 ) 2 2+ , etc.
In many cases, the complexing agent and the anion of the sparingly soluble salt are identical. This is particularly apt to happen with insoluble chlorides, and it means that addition of chloride to precipitate a metallic ion such as Ag + will produce a precipitate at first, but after excess Cl – has been added the precipitate will redissolve as complex ions are formed.
Some important solubility systems
In this section, we discuss solubility equilibria that relate to some very commonly-encountered anions of metallic salts. These are especially pertinent to the kinds of separations that most college students are required to carry out (and understand!) in their first-year laboratory courses.
Solubility of oxides and hydroxides
Metallic oxides and hydroxides both form solutions containing OH – ions. For example, the solubilities of the [sparingly soluble] oxide and hydroxide of magnesium are represented by
\[Mg(OH)_{2(s)} → Mg^{2+} + 2 OH^– \label{10}\]
\[MgO_{(S)} + H_2O → Mg^{2+} + 2 OH^– \label{11}\]
If you write out the solubility product expressions for these two reactions, you will see that they are identical in form and value.
Recall that pH = –log 10 [H + ], so that [H + ] = 10 –pH .
One might naïvely expect that the dissolution of an oxide such as MgO would yield as one of its products the oxide ion O 2– . But the oxide ion is such a strong base that it grabs a proton from water, forming two hydroxide ions instead:

\[O^{2–} + H_2O → 2 OH^–\]
This is an example of the rule that the hydroxide ion is the strongest base that can exist in aqueous solution.

Carbonates

What is commonly referred to as "dissolved CO 2 " is an equilibrium mixture of hydrated CO 2 molecules and carbonic acid. To keep things as simple as possible, we will not distinguish between them in what follows, and just use the formula H 2 CO 3 to represent the two species collectively.
The other Group 2 metals, especially Mg, along with iron and several other transition elements are also found in carbonate sediments. When rain falls through the air, it absorbs atmospheric carbon dioxide, a small portion of which reacts with the water to form carbonic acid. Thus all pure water in contact with the air becomes acidic, eventually reaching a pH of 5.6.
As noted above, the equilibrium between bicarbonate and carbonate ions depends on the pH. Since the pH scale is logarithmic, it makes sense (and greatly simplifies the construction of the plot) to employ a log scale for the concentrations. The plot shown below corresponds to a total carbonate-system concentration of 10 –3 M , which is representative of many ground waters. For river and lake waters, 10 –5 M would be more typical; this would simply shift the curves downward without affecting their shapes.
Points 1 and 2 where adjacent curves overlap correspond to the two pK's. Recall that when the pH is the same as the pK, the concentrations of the two conjugate species are identical and half of the total system concentration. This places the crossover points at log 0.5 = –0.3 below the system concentration level.
A 10 –3 M solution of sodium bicarbonate would have a pH denoted by point 3, with [H 2 CO 3 ] and [CO 3 2– ] constituting only 1% (10 –5 M ) of the system. This corresponds to the equilibrium
\[2 HCO_3^– \rightleftharpoons H_2CO_3 + CO_3^{2–}\]
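The distribution just described can be checked numerically. The sketch below computes the three carbonate species for a 10 –3 M system; the acidity constants used (pK1 ~ 6.3, pK2 ~ 10.3 at 25 °C) are assumed typical values, not given in the text above.

```python
# Assumed first and second acidity constants of the carbonate system
# (pK1 ~ 6.3, pK2 ~ 10.3 at 25 C; not stated in the text).
pK1, pK2 = 6.3, 10.3
K1, K2 = 10 ** -pK1, 10 ** -pK2

def carbonate_species(pH, Ct=1e-3):
    """Return ([H2CO3], [HCO3-], [CO3 2-]) for total carbonate Ct."""
    h = 10 ** -pH
    # distribution (alpha) fractions for a diprotic acid system
    denom = h * h + h * K1 + K1 * K2
    a0 = h * h / denom       # H2CO3
    a1 = h * K1 / denom      # HCO3-
    a2 = K1 * K2 / denom     # CO3 2-
    return a0 * Ct, a1 * Ct, a2 * Ct

# Near pH 8.3 (point 3 above), bicarbonate dominates while H2CO3 and
# CO3 2- are each only about 1% of the 1e-3 M system.
h2co3, hco3, co3 = carbonate_species(8.3)
print(hco3 / 1e-3)   # fraction present as bicarbonate, ~0.98
```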
Carbonates act as bases and, as such, react with acids. Thus, the portion of the global water cycle that transports carbon from the air into natural waters constitutes a gigantic acid-base reaction that yields hydrogen carbonate ions, commonly referred to as bicarbonate. The natural waters that result have pH values between 6 and 10 and are essentially solutions of bicarbonates.
Limestone caves and sinkholes
When rainwater permeates into the soil, it can become even more acidic owing to the additional CO 2 produced by soil organisms. Also, the deeper the water penetrates, the greater its hydrostatic pressure and the more CO 2 it can hold, further increasing its acidity. If this water then works its way down through the fissures and cracks within a limestone layer, it will dissolve some of limestone, leaving void spaces which may eventually grow into limestone caves or form sinkholes that can swallow up cars or houses.
A well-known feature of limestone caves is the precipitated carbonate formations that decorate the ceilings and floors. These are known as stalactites and stalagmites , respectively. When water emerges from the ceiling of a cave that is open to the atmosphere, some of the excess CO 2 it contains is released as it equilibrates with the air. This raises its pH and thus reduces the solubility of the carbonates, which precipitate as stalactites. Some of the water remains supersaturated and does not precipitate until it drips to the cave floor, where it builds up the stalagmite formations.
Hard Water
This term refers to waters that, through contact with rocks and sediments in lakes, streams, and especially in soils (groundwaters), have acquired metallic cations such as Ca 2 + , Mg 2 + , Fe 2 + , Fe 3 + , Zn 2 + , Mn 2 + , etc. Owing to the ubiquity of carbonate sediments, the compensating negative charge is frequently supplied by the bicarbonate ion HCO 3 – , but other anions such as SO 4 2– , F – , Cl – , PO 4 3– and SiO 4 4– may also be significant.
Solid bicarbonates are formed only by Group 1 cations and all are readily soluble in water. But because HCO 3 – is amphiprotic, it can react with itself to yield carbonate:
\[2 HCO_3^– → H_2O + CO_3^{2–} + CO_{2(g)}\]
If bicarbonate-containing water is boiled, the CO 2 is driven off, and the equilibrium shifts to the right, causing any Ca 2 + or similar ions to form a cloudy precipitate. If this succeeds in removing the "hardness cations", the water has been "softened". Such water is said to possess carbonate hardness , sometimes known as " temporary hardness ". Waters in which anions other than HCO 3 – predominate cannot be softened by boiling, and thus possess non-carbonate hardness or " permanent hardness ".
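How far this disproportionation lies to the left can be estimated by combining the two acidity constants of the carbonate system. The pK values below are assumed typical 25 °C figures, not given in the text:

```python
# Net constant for the solution-phase part of the reaction above,
#   2 HCO3-  =  H2CO3 + CO3^2-
# (with H2CO3 in place of H2O + CO2), obtained by adding
# HCO3- = H+ + CO3^2- (K2) to H+ + HCO3- = H2CO3 (1/K1).
K1 = 10 ** -6.3    # H2CO3 = H+ + HCO3-   (assumed pK1 ~ 6.3)
K2 = 10 ** -10.3   # HCO3- = H+ + CO3^2-  (assumed pK2 ~ 10.3)
K_disprop = K2 / K1
print(K_disprop)   # 1e-4: unfavorable until boiling drives off the CO2
```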
Hard waters present several kinds of problems, both in domestic and industrial settings:
- Waters containing dissolved salts leave solid deposits when they evaporate. Residents of areas having hard water (about 85 percent of the U.S.) notice evaporative deposits on shower walls, in teakettles, and on newly-washed windows, glassware, and vehicles.
- Much more seriously from an economic standpoint, evaporation of water in boilers used for the production of industrial steam leaves coatings on the heat exchanger surfaces that impede the transfer of heat from the combustion chamber, reducing the thermal transfer efficiency. The resultant overheating of these surfaces can lead to their rupture, and in the case of high-pressure boilers, to disastrous explosions. In the case of calcium and magnesium carbonates, the process is exacerbated by the reduced solubility of these salts at high temperatures. Removal of boiler scales is difficult and expensive.
- Municipal water supplies in hard-water areas tend to be supersaturated in hardness ions. As this water flows through distribution pipes and the plumbing of buildings, these ions often tend to precipitate out on their interior surfaces. Eventually, this scale layer can become thick enough to restrict or even block the flow of water through the pipes. When scale deposits within appliances such as dishwashers and washing machines, it can severely degrade their performance.
- Cations of Group 2 and above react with soaps, which are sodium salts of fatty acids such as stearic acid, C 17 H 35 COOH. The sodium salts of such acids are soluble in water, which allows them to dissociate and act as surfactants:
\[C_{17}H_{35}COONa → C_{17}H_{35}COO^– + Na^+\]
but the presence of polyvalent ions causes them to form precipitates
\[2 C_{17}H_{35}COO^– + Ca^{2+} → (C_{17}H_{35}COO)_2Ca_{(s)}\]
Calcium stearate is less dense than water, so it forms a scum that floats on top of the water surface; anyone who lives in a hard-water area is likely familiar with the unsightly "bathtub ring" it leaves around the high-water mark, and with the stains it leaves on shower walls.
Solubility Complications
All heterogeneous equilibria, on close examination, are beset with complications. But solubility equilibria are somewhat special in that there are more of them. Back in the days when the principal reason for teaching about solubility equilibria was to prepare chemists to separate ions in quantitative analysis procedures, these problems could be mostly ignored. But now that the chemistry of the environment has grown in importance — especially that relating to the ocean and natural waters — there is more reason for chemical scientists to at least know about the limitations of simple solubility products. This section will offer a quick survey of the most important of these complications, while leaving their detailed treatment to more advanced courses.
Tabulated K s values are notoriously unreliable
Many of the \(K_s\) values found in tables were determined prior to 1940 (some go back to the 1880s!) at a time before highly accurate methods became available. Especially suspect are many of those for highly insoluble salts, which are more difficult to measure. A table showing the variations in \(K_{sp}\) values for the same salts among ten textbooks was published by Clark and Bonikamp in J Chem Educ. 1998 75(9) 1183-85. A good example that used a variety of modern techniques to measure the solubility of silver chromate was published by A.L. Jones et al in the Australian J. of Chemistry, 1971 24 2005-12.
Generations of chemistry students have amused themselves by comparing the disparate K s values to be found in various textbooks and tables. In some cases, they differ by orders of magnitude. There are several reasons for this in addition to the ones described in detail further on.
- The most direct methods of measuring solubilities tend to not be very accurate for sparingly soluble salts. Two-significant-figure precision is about the best one can hope for in a single measurement.
- Many insoluble salts can exist in more than one crystalline form ( polymorphs ), and in some cases also as amorphous solids. Precipitation under different conditions (in the presence of different ions, at different temperatures, etc.) can yield different or mixed polymorphs.
- Other ions present in the solution can often get incorporated into the crystalline solid, usually replacing an ion of similar size ( substitutional solid solutions ). When this happens, it is no longer valid to write the equilibrium condition as a simple "product". This is very common in mineral deposits, and an important consideration in geochemistry.
Most salts are not Completely Dissociated in Water
The dissolution of cadmium iodide in water is commonly represented as
\[CdI_{2(s)} → Cd^{2+} + 2 I^–\]
This simple picture is misleading, however: in solution, the ions remain largely associated. Firstly, they combine to form neutral, largely-covalent molecular species:
\[Cd^{2+}_{(aq)} + 2 I^–_{(aq)} → CdI_{2(aq)}\]
This non-ionic form accounts for 78% of the Cd present in the solution! In addition, they form a molecular ion \(CdI^+_{(aq)}\) according to the following scheme:
| \(CdI_{2(s)} \rightleftharpoons Cd^{2+} + 2 I^–\) | \(K_1 = 10^{–3.9}\) |
| \(Cd^{2+} + I^– \rightleftharpoons CdI^+\) | \(K_2 = 10^{+2.3}\) |
| \(CdI_{2(s)} \rightleftharpoons CdI^+ + I^–\) | \(K = K_1K_2 = 10^{–1.6} = 0.025\) |
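Because equilibrium constants multiply when their reactions are added, the constant in the last row follows directly from the first two. A quick check in Python, using only the constants tabulated above:

```python
K1 = 10 ** -3.9    # CdI2(s) = Cd2+ + 2 I-
K2 = 10 ** +2.3    # Cd2+ + I- = CdI+
K_net = K1 * K2    # adding the reactions multiplies the K's
print(K_net)       # 10**-1.6, about 0.025
```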
The data shown in Tables \(\PageIndex{1}\) and \(\PageIndex{2}\) are taken from the article Salts are Mostly NOT Ionized by Stephen Hawkes: 1996 J Chem Educ. 73(5) 421-423. This fact was stated by Arrhenius in 1887, but has been largely ignored and is rarely mentioned in standard textbooks.
As a consequence, the concentration of "free" Cd 2 + (aq) in an aqueous cadmium iodide solution is only about 2% of the value you would calculate by taking K 1 as the solubility product. The principal component of such a solution is actually [covalently-bound] CdI 2 (aq) . It turns out that many salts, especially those of metals beyond Group 2 , are similarly only partially ionized in aqueous solution:
| salt | molarity | % cation | other species |
|---|---|---|---|
| KCl | 0.52 | 95 | KCl (aq) 5% |
| MgSO 4 | 0.04 | 58 | MgSO 4 (aq) 42% |
| CaCl 2 | 0.44 | 70 | CaCl + (aq) 30% |
| CuSO 4 | 0.045 | 56 | CuSO 4 (aq) 44% |
| CdI 2 | 0.50 | 2 | CdI 2 (aq) 76%, CdI + (aq) 22% |
| FeCl 3 | 0.1 | 10 | FeCl 2 + (aq) 42%, FeCl 2 (aq) 40%, FeOH 2 + (aq) 6%, Fe(OH) 2 + (aq) 2% |
If you are enrolled in an introductory course and do not plan on taking more advanced courses in chemistry or biochemistry, you can probably be safe in ignoring this, since your instructor and textbook likely do so. However, if you expect to do more advanced work or teach, you really should take note of these points, since few textbooks mention them.
Formation of Hydrous Complexes
Transition metal ions form a large variety of complexes with H 2 O and OH – , both of which have electron-pairs available to coordinate with the central ion. This gives rise to a large variety of soluble species that are in competition with an insoluble solid. Because of this, a single equilibrium constant (solubility product) cannot describe the behavior of a solid such as Fe(OH) 3 , which we summarize here as an example.
Aquo complexes : The electrostatic field of the positively-charged metal ion enhances the acidic nature of the coordinated H 2 O molecules, encouraging them to shed a proton and leaving OH – groups in their place.
\[Fe(H_2O)_6^{3+} → Fe(H_2O)_5(OH)^{2+}+H^+\]
This is just the first of a series of similar reactions, each one having a successively smaller equilibrium constant:
\[Fe(H_2O)_5(OH)^{2+}→ Fe(H_2O)_4(OH)_2^+→ Fe(H_2O)_3(OH)_3 → Fe(H_2O)_2(OH)_4^-\]
Hydroxo complexes: But there's more: when the hydroxide ion acts as a ligand, it gives rise to a series of hydroxo complexes, of which the insoluble Fe(OH) 3 can be considered a member:
| \(Fe^{3+} + 3 OH^– → Fe(OH)_{3(s)}\) | \(1/K_s = 10^{38}\) |
| \(Fe^{3+} + H_2O → FeOH^{2+} + H^+\) | \(K = 10^{–2.2}\) |
| \(Fe^{3+} + 2 H_2O → Fe(OH)_2^+ + 2 H^+\) | \(K = 10^{–6.7}\) |
| \(Fe^{3+} + 4 H_2O → Fe(OH)_4^– + 4 H^+\) | \(K = 10^{–23}\) |
| \(2 Fe^{3+} + 2 H_2O → Fe_2(OH)_2^{4+} + 2 H^+\) | \(K = 10^{–2.8}\) |
The equilibria listed above all involve H + and OH – ions, and are therefore pH dependent, as illustrated by the straight lines in the plot, whose slopes reflect the pH dependence of the corresponding ionic species. At any given pH, the equilibrium with solid Fe(OH) 3 is controlled by the ionic species having the highest concentration. The corresponding lines in the plot therefore delineate the region (indicated by the orange shading) in which the solid can exist.
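Those straight lines can be reproduced from the tabulated constants. The sketch below (assuming Kw = 1e-14 at 25 °C, an assumption not stated in the text) computes log concentrations for the free cation and the tetrahydroxo anion; their slopes of –3 and +1 versus pH are what bound the stability region of the solid.

```python
import math

Ksp = 10 ** -38.0   # Fe(OH)3(s) = Fe3+ + 3 OH-   (from the 1/Ks = 1e38 row)
K4  = 10 ** -23.0   # Fe3+ + 4 H2O = Fe(OH)4- + 4 H+
Kw  = 1.0e-14       # assumed ion product of water at 25 C

def log_fe3(pH):
    """log[Fe3+] in equilibrium with solid Fe(OH)3; slope -3 vs pH."""
    oh = Kw / 10 ** -pH
    return math.log10(Ksp / oh ** 3)

def log_feoh4(pH):
    """log[Fe(OH)4-] in equilibrium with the solid; slope +1 vs pH."""
    h = 10 ** -pH
    return math.log10(K4) + log_fe3(pH) - 4 * math.log10(h)

print(log_fe3(4.0))     # ~ -8: the free cation dominates in acid solution
print(log_feoh4(12.0))  # ~ -7: the anion takes over in strongly basic solution
```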
Ionic interactions: The "non-common ion effect"
A sparingly-soluble salt will be more soluble in a solution that contains non-participating ions. This is just the opposite of the common ion effect, and it might at first seem rather counter-intuitive: why would adding more ions of any kind make a salt more likely to dissolve?
Figure \(\PageIndex{2}\): Solubility of thallium iodate in solutions containing dissolved salts
A clue to the answer can be found in another fact: the higher the charge of the foreign ion, the more pronounced is the effect. This tells us that inter-ionic (and thus electrostatic) interactions must play a role. The details are rather complicated, but the general idea is that all ions in solution, besides possessing tightly-held waters of hydration, tend to attract oppositely-charged ions (" counter-ions ") around them. This "atmosphere" of counterions is always rather diffuse, but much less so (and more tightly bound) when one or both kinds of ions have greater charges. From a distance, these ion-counterion bodies appear to be almost electrically neutral, which keeps them from interacting with each other (for example, to form a precipitate).
The overall effect is to reduce the concentrations of the less-shielded ions that are available to combine to form a precipitate. We say that the thermodynamically-effective concentrations of these ions are less than their "analytical" concentrations. Chemists refer to these effective concentrations as ionic activities , and they denote them by curly brackets {Ag + } as opposed to square brackets [Ag + ] which refer to the nominal or analytical concentrations.
Although the concentrations of ions in equilibrium with a sparingly soluble solid are so low that they are essentially the same as the activities, the presence of other ions at concentrations of about 0.001M or greater can materially reduce the activities of the dissolution products, permitting the solubilities to be greater than what simple equilibrium calculations would predict.
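The text leaves the quantitative treatment to more advanced courses, but a common single-ion estimate is the Davies modification of the Debye-Hückel equation. This particular formula is an illustration chosen here, not something the text specifies:

```python
import math

# Davies approximation for the activity coefficient of a single ion of
# charge z at ionic strength I (an illustrative formula, usable up to
# roughly I ~ 0.5 M at 25 C).
def davies_log_gamma(z, I):
    sI = math.sqrt(I)
    return -0.51 * z ** 2 * (sI / (1.0 + sI) - 0.3 * I)

# The shielding grows with ionic strength and, especially, with charge:
for I in (0.001, 0.01, 0.1):
    g1 = 10 ** davies_log_gamma(1, I)   # activity coefficient, z = 1
    g2 = 10 ** davies_log_gamma(2, I)   # activity coefficient, z = 2
    print(I, round(g1, 3), round(g2, 3))
```

Note how the z² factor makes activities of doubly-charged ions fall off much faster, consistent with the observation above that higher-charged foreign ions have the larger effect.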
Measured solubilities are averages and depend on size
The heterogeneous nature of dissolution reactions leads to a number of peculiar effects relating to the nature of equilibria involving surfaces. These arise from the fact that the tendency of a crystalline solid to dissolve will depend on the particular face or location from which dissolution occurs. Since all crystals present a variety of faces to the solution, a measured K s is really an average of values for these various faces.
And because many salts can exhibit different external shapes depending on the conditions under which they are formed, solubility products are similarly dependent on these conditions.
Very small crystals are more soluble than big ones
Molecules or ions that are situated on edges or corners are less strongly bound to the remainder of the solid than those on plane surfaces, and will consequently tend to dissolve more readily. Thus the leftmost face in the schematic lattice below will have more edge-bound molecular units than the other two, and this face (11) will be more soluble.
This means, among other things, that smaller crystals, in which the ratio of edges and corners is greater, will tend to have greater K s values than larger ones. As a consequence, smaller crystals will tend to disappear in favor of larger ones. Practical use is sometimes made of this when the precipitate initially formed in a chemical analysis or separation is too fine to be removed by filtration. The suspension is held at a high temperature for several hours, during which time the crystallites grow in size. This procedure is sometimes referred to as digestion.
Formation of supersaturated solutions
Contrary to what you may have been taught, precipitates do not form when the ion concentration product reaches the solubility product of a salt in a solution that is pure and initially unsaturated; to form a precipitate from a homogeneous solution, a certain degree of supersaturation is required. The extent of supersaturation required to initiate precipitation can be surprisingly great. Thus formation of barium sulfate BaSO 4 by combining the two kinds of ions does not occur until Q s exceeds K s by a factor of 160 or more. In part, this reflects the fact that precipitation proceeds by a series of reactions beginning with formation of an ion-pair which eventually becomes an ion cluster:
\[Ba^{2+} + SO_4^{2–} → (BaSO_4)^0 → (BaSO_4)_2^0 → (BaSO_4)_3^0 → \cdots\]
Owing to their overall neutrality, these aggregates are not stabilized by hydration, so they are more likely to break up than not. But a few may eventually survive until they are large enough (but still submicroscopic in size) to serve as precipitation nuclei.
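The factor-of-160 threshold mentioned above is easy to express as a supersaturation ratio. The Ksp value used below (~1.1e-10 at 25 °C) is an assumed handbook figure, not stated in the text:

```python
Ksp_BaSO4 = 1.1e-10   # assumed solubility product of BaSO4 at 25 C

def supersaturation(ba, so4):
    """Ion product Qs divided by Ksp; the text notes that precipitation
    from a clean, homogeneous solution needs this to exceed ~160."""
    return ba * so4 / Ksp_BaSO4

print(supersaturation(1e-4, 1e-4))   # ~91: below the threshold
print(supersaturation(2e-4, 2e-4))   # ~364: nucleation becomes likely
```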
Many substances other than salts form supersaturated solutions, and some salts form them more readily than others. Supersaturated solutions are easily made by dissolving the solid to near its solubility limit in a warmed solvent, and then letting it cool.
Such solutions are supersaturated (Q s > K s ) and are inherently unstable; dropping a "seed" crystal of the solid into such a solution will usually initiate rapid precipitation. But as is explained below, even a tiny dust particle may be enough. An old chemist's trick is to use the tip of a glass stirring rod to scrape the inner surface of a container holding a supersaturated solution; the minute particles of glass that are released presumably serve as precipitation nuclei.
The nucleation problem: precipitation is [theoretically] impossible!
Any process in which a new phase forms within an existing homogeneous phase is beset by the nucleation problem : the smallest of these new phases — raindrops forming in air, tiny bubbles forming in a liquid at its boiling point — are inherently less stable than larger ones, and therefore tend to disappear. The same is true of precipitate formation: if smaller crystals are more soluble, then how can the tiniest first crystal form at all?
In any ionic solution, small clumps of oppositely-charged ions are continually forming by ordinary collisional processes. The smallest of these aggregates possess a higher free energy than the isolated solvated ions, and they rapidly dissociate. Occasionally, however, one of these proto-crystallites reaches a critical size whose stability allows it to remain intact long enough to serve as a surface (a "nucleus") onto which the deposition of additional ions can lead to still greater stability. At this point, the process passes from the nucleation to the growth stage.
Theoretical calculations predict that nucleation from a perfectly homogeneous solution is a rather unlikely process; tenfold supersaturation should produce only one nucleus per cm 3 per year. Most nucleation is therefore believed to occur heterogeneously on the surface of some other particle, possibly a dust particle. The efficiency of this process is critically dependent on the nature and condition of the surface that gives rise to the nucleus.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/12%3A_Solubility_Equilibria",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "12: Solubility Equilibria",
"author": "Stephen Lower"
} |
13: Acid-Base Equilibria
Acid-base chemistry can be extremely confusing, particularly when dealing with weak acids and bases. This set of lessons presents an updated view of the Brønsted-Lowry theory that makes it easy to understand answers to common questions: What's the fundamental difference between a strong acid and a weak acid? Can acid A neutralize base B? Why are some salts acidic and others alkaline? How do buffers work? What governs the shapes of titration curves?
- 13.1: Introduction to Acid/Base Equilibria
- Acid-base reactions, in which protons are exchanged between donor molecules (acids) and acceptors (bases), form the basis of the most common kinds of equilibrium problems which you will encounter in almost any application of chemistry.
- 13.2: Strong Monoprotic Acids and Bases
- To a good approximation, strong acids, in the forms we encounter in the laboratory and in much of the industrial world, have no real existence; they are all really solutions of H3O+. So if you think about it, the labels on those reagent bottles you see in the lab are not strictly true! However, if the strong acid is highly diluted, the amount of H3O+ it contributes to the solution becomes comparable to that which derives from the autoprotolysis of water.
- 13.3: Finding the pH of Weak Acids, Bases, and Salts
- Most acids are weak; there are hundreds of thousands of them, whereas there are fewer than a dozen strong acids. We can treat weak acid solutions in much the same general way as we did for strong acids. The only difference is that we must now take into account the incomplete "dissociation" of the acid. We will start with the simple case of the pure acid in water, and then go from there to the more general one in which salts of the acid are present; these are known as buffer solutions.
- 13.4: Conjugate Pairs and Buffers
- We often tend to regard the pH as a quantity that is dependent on other variables such as the concentration and strength of an acid, base, or salt. But in much of chemistry (and especially in biochemistry), we find it more useful to treat pH as the "master" variable that controls the relative concentrations of the acid and base forms of one or more sets of conjugate acid-base systems. In this module, we explore this approach in some detail, showing its application to buffer solutions.
- 13.5: Acid/Base Titration
- The objective of an acid-base titration is to determine Ca, the nominal concentration of acid in the solution. In its simplest form, titration is carried out by measuring the volume of the solution of strong base required to complete the neutralization reaction. The point at which this reaction is just complete is known as the equivalence point, which is distinguished from the end point, the value we observe experimentally.
- 13.7: Exact Calculations and Approximations
- The methods for dealing with acid-base equilibria that we developed in the earlier units of this series are widely used in ordinary practice. Although many of these involve approximations of various kinds, the results are usually good enough for most purposes. Sometimes, however (for example, in problems involving very dilute solutions), the approximations break down, often because they ignore the small quantities of H+ and OH– ions always present in pure water.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"url": "https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/13%3A_Acid-Base_Equilibria",
"book_url": "https://commons.libretexts.org/book/chem-3482",
"title": "13: Acid-Base Equilibria",
"author": "Stephen Lower"
} |