Displacement And Position From Velocity
To get our first two equations, we start with the definition of average velocity:

$$\bar{v} = \frac{\Delta x}{\Delta t}$$

Substituting the simplified notation for Δx and Δt gives

$$\bar{v} = \frac{x - x_0}{t}$$

Solving for x gives us

$$x = x_0 + \bar{v}t,$$

where the average velocity is

$$\bar{v} = \frac{v_0 + v}{2}.$$

This expression for the average velocity reflects the fact that when acceleration is constant, $\bar{v}$ is just the simple average of the initial and final velocities. Figure 3.18 illustrates this concept graphically. In part (a) of the figure, acceleration is constant, with velocity increasing at a constant rate. The average velocity during the 1-h interval from 40 km/h to 80 km/h is 60 km/h:

$$\bar{v} = \frac{v_0 + v}{2} = \frac{40\ \text{km/h} + 80\ \text{km/h}}{2} = 60\ \text{km/h}.$$

In part (b), acceleration is not constant. During the 1-h interval, velocity is closer to 80 km/h than 40 km/h. Thus, the average velocity is greater than in part (a).
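As a quick numerical check of these relationships, here is a minimal sketch using the 40 km/h → 80 km/h change from part (a); the one-hour duration and the zero starting position are illustrative assumptions, not values from the figure.

```python
# Displacement from the average velocity under constant acceleration.
v0 = 40.0 / 3.6          # initial velocity, m/s
v1 = 80.0 / 3.6          # final velocity, m/s
t = 3600.0               # elapsed time, s (1 hour)

v_avg = (v0 + v1) / 2    # simple average of initial and final velocities
a = (v1 - v0) / t        # the constant acceleration implied by the change

x_from_average = v_avg * t                    # x = x0 + v_avg * t, with x0 = 0
x_from_kinematics = v0 * t + 0.5 * a * t**2   # x = x0 + v0*t + (1/2)*a*t^2

print(v_avg * 3.6)                        # 60.0 km/h, matching the average above
print(x_from_average, x_from_kinematics)  # the two displacements agree
```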
Acceleration As A Vector
Acceleration is a vector in the same direction as the change in velocity, Δv. Since velocity is a vector, it can change either in magnitude or in direction. Acceleration is therefore a change in either speed or direction, or both.

Keep in mind that although acceleration is in the direction of the change in velocity, it is not always in the direction of motion. When an object slows down, its acceleration is opposite to the direction of its motion. This is known as deceleration.
Figure 2. A subway train in São Paulo, Brazil, decelerates as it comes into a station. It is accelerating in a direction opposite to its direction of motion.
Solved Examples On Acceleration Formula
Question- A woman is traveling in her sports car at a constant velocity v = 5.00 m/s. When she steps on the gas, the car accelerates forward. After 10.0 seconds, she stops accelerating and continues at a constant velocity v = 25.0 m/s. Calculate the acceleration of the car.
Answer- In the forward direction, the initial velocity is v_i = 5.00 m/s. Further, the final velocity in the forward direction is v_f = 25.0 m/s. The time in which the change took place is 10.0 s. Therefore, the acceleration is in the forward direction, with a value:

a = (v_f − v_i) / t

a = (25.0 m/s − 5.00 m/s) / 10.0 s

a = (20.0 m/s) / 10.0 s

a = 2.00 m/s²

Therefore, we see that the acceleration of the car is 2.00 m/s² forward.
Question- A man takes a rock and drops it off a cliff. It falls for 15.0 s before it hits the ground. The acceleration due to gravity is g = 9.80 m/s². Calculate the velocity of the rock the moment before it hits the ground.

Answer- The man released the rock from rest, therefore we get the initial velocity as v_i = 0.00 m/s. The time for the change to take place is 15.0 s. The acceleration is 9.80 m/s². Therefore, to find the velocity we rearrange the equation a = (v − v_i) / t like so:

v = v_i + at

v = 0.00 m/s + (9.80 m/s² × 15.0 s)

v = 147 m/s
Therefore, as the rock is falling, the direction of the velocity is down.
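For readers who like to verify such results, here is a minimal code sketch reproducing both worked examples; the values come from the problems above and the script itself is purely illustrative.

```python
# Example 1: acceleration of the car, a = (v_f - v_i) / t
v_i, v_f, t = 5.00, 25.0, 10.0           # m/s, m/s, s
a_car = (v_f - v_i) / t
print(a_car)                              # 2.0 m/s^2, directed forward

# Example 2: speed of the rock just before impact, v = v_i + g*t
v_i_rock, g, t_fall = 0.00, 9.80, 15.0    # m/s, m/s^2, s
v_impact = v_i_rock + g * t_fall
print(v_impact)                           # 147.0 m/s, directed downward
```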
Example 1 Calculating Acceleration: A Racehorse Leaves The Gate
A racehorse coming out of the gate accelerates from rest to a velocity of 15.0 m/s due west in 1.80 s. What is its average acceleration?
First we draw a sketch and assign a coordinate system to the problem. This is a simple problem, but it always helps to visualize it. Notice that we assign east as positive and west as negative. Thus, in this case, we have negative velocity.
We can solve this problem by identifying $\Delta v$ and $\Delta t$ from the given information and then calculating the average acceleration directly from the equation $\bar{a}=\frac{\Delta v}{\Delta t}=\frac{v_f - v_0}{t_f - t_0}$.

1. Identify the knowns. $v_0 = 0$, $v_f = -15.0$ m/s (the negative sign indicates direction toward the west), $\Delta t = 1.80$ s.

2. Find the change in velocity. Since the horse is going from zero to −15.0 m/s, its change in velocity equals its final velocity: $\Delta v = v_f = -15.0$ m/s.

3. Plug in the known values and solve for the unknown $\bar{a}$:

$$\bar{a}=\frac{\Delta v}{\Delta t}=\frac{-15.0\ \text{m/s}}{1.80\ \text{s}} = -8.33\ \text{m/s}^2$$

The negative sign for acceleration indicates that acceleration is toward the west. An acceleration of 8.33 m/s² due west means that the horse increases its velocity by 8.33 m/s due west each second, that is, 8.33 meters per second per second, which we write as 8.33 m/s². This is truly an average acceleration, because the ride is not smooth. We shall see later that an acceleration of this magnitude would require the rider to hang on with a force nearly equal to his weight.
Calculating Average Acceleration From Two Velocities
Charged Particle In An Electromagnetic Field
One of the fundamental forces of nature, the electromagnetic force, consists of the electric and the magnetic field. On one hand, the electric field always acts on the charged particle, and on the other hand, the magnetic field acts only when this particle is moving.
The magnitude of the force F exerted on a charged particle in an electric field E can be described with Coulomb's law. It states that F = q * E, where q is the charge of the particle. You can see that particles with a higher charge always experience a stronger force.
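As a rough sketch of that idea, and of the magnetic contribution mentioned above, the snippet below evaluates the Lorentz force F = q(E + v × B); the charge, fields, and velocity are made-up illustrative values, not numbers from the text.

```python
import numpy as np

q = 1.6e-19                       # charge, C (proton-like, illustrative)
E = np.array([0.0, 0.0, 1.0e3])   # electric field, V/m
B = np.array([0.0, 0.5, 0.0])     # magnetic flux density, T
v = np.array([2.0e5, 0.0, 0.0])   # particle velocity, m/s

F_electric = q * E                 # acts on the charge whether or not it moves
F_magnetic = q * np.cross(v, B)    # acts only on a moving charge
F_total = F_electric + F_magnetic  # Lorentz force F = q(E + v x B)

print(F_electric, F_magnetic, F_total)
```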
Solving For Final Position With Constant Acceleration
We can combine the previous equations to find a third equation that allows us to calculate the final position of an object experiencing constant acceleration. We start with

$$v = v_0 + at.$$

Adding $v_0$ to each side of this equation and dividing by 2 gives

$$\frac{v_0 + v}{2} = v_0 + \frac{1}{2}at.$$

Since $\frac{v_0 + v}{2} = \bar{v}$ for constant acceleration, we have

$$\bar{v} = v_0 + \frac{1}{2}at.$$

Now we substitute this expression for $\bar{v}$ into the equation for displacement, $x = x_0 + \bar{v}t$, yielding

$$x = x_0 + v_0 t + \frac{1}{2}at^2.$$
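Here is a small numerical check of that result, under assumed values for the initial position, initial velocity, and acceleration: stepping the motion forward in tiny time increments reproduces the closed-form position.

```python
# Step-by-step integration of a constant acceleration, compared with the formula.
x0, v0, a = 0.0, 5.0, 2.0     # m, m/s, m/s^2 (illustrative values)
t_end, dt = 10.0, 1e-4        # total time and step size, s

x, v, t = x0, v0, 0.0
while t < t_end:
    v += a * dt               # the velocity grows at the constant rate a
    x += v * dt               # the position advances with the current velocity
    t += dt

x_formula = x0 + v0 * t_end + 0.5 * a * t_end**2
print(x, x_formula)           # the numerical and analytic results agree closely
```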
Example 3 Comparing Distance Traveled With Displacement: A Subway Train
What are the distances traveled for the motions shown in parts (a) and (b) of the subway train in Figure 7?
To answer this question, think about the definitions of distance and distance traveled, and how they are related to displacement. Distance between two positions is defined to be the magnitude of displacement, which was found in Example 1. Distance traveled is the total length of the path traveled between the two positions. In the case of the subway train shown in Figure 7, the distance traveled is the same as the distance between the initial and final positions of the train.
1. The displacement for part (a) was +2.00 km. Therefore, the distance between the initial and final positions was 2.00 km, and the distance traveled was 2.00 km.

2. The displacement for part (b) was −1.50 km. Therefore, the distance between the initial and final positions was 1.50 km, and the distance traveled was 1.50 km.
An airplane lands on a runway traveling east. Describe its acceleration.
If we take east to be positive, then the airplane has negative acceleration, as it is accelerating toward the west. It is also decelerating: its acceleration is opposite in direction to its velocity.
Speed Velocity And Acceleration
Average speed is distance divided by time. Velocity is speed in a given direction. Acceleration is change in velocity divided by time. Movement can be shown in distance-time and velocity-time graphs.
You can calculate the acceleration of an object from its change in velocity and the time taken.
Velocity is not exactly the same as speed. Velocity has a direction as well as a speed. For example, 15 m/s is a speed, but 15 m/s North is a velocity.

Commonly, velocities are written as + (forwards) or – (backwards).
For example, -15 m/s means moving backwards at 15 metres every second.
Examples Of Acceleration: Gravitational

Gravitational acceleration arises from the attractive force between two masses. As you now know from Newton's Third Law of Motion, the Earth exerts the same amount of force on you as you exert on it, but because the planet is so much more massive than you, you don't notice its response. How this fits into an acceleration calculator:

Standard gravity = 32.17405 ft/s²
Average human weight = 137 pounds
Gravitational force = 137 lb × 32.17405 ft/s² ≈ 4407.8 pdl
How To Find The Acceleration From The Velocity Difference
First things first – both acceleration and velocity are vectors. From the previous section, we know that the acceleration is the difference between the final and the initial velocity, divided by the time interval.
Imagine a sphere moving through a Cartesian coordinate system. Suppose that over a time interval t = 5 s its velocity changes by Δv = v₁ − v₀ = (6, 2) m/s. We can ask two questions: What is the acceleration? and How do we calculate the magnitude of the acceleration? Let's find out:

Evaluate the velocities' difference. For vectors, subtract each of the coordinates separately: v₁ − v₀ = (6, 2) m/s;

Divide both components by the time difference: (6 / 5, 2 / 5) = (1.2, 0.4);

The result is our acceleration: a = (1.2, 0.4) m/s².
So how to find the magnitude of the acceleration? Let’s use the formula with acceleration coordinates:
Square each of the components: (1.2)² = 1.44, (0.4)² = 0.16;
Add these numbers: 1.44 + 0.16 = 1.6;
Estimate the square root of this value: √1.6 ≈ 1.265. We will stick with four significant figures; and
That’s all! The magnitude of the acceleration is 1.265 m/s².
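The same computation as a short code sketch; the velocity change used here is the one implied by the arithmetic above.

```python
import math

dv = (6.0, 2.0)   # change in velocity, m/s
t = 5.0           # time interval, s

a = tuple(component / t for component in dv)    # divide each component by the time
magnitude = math.sqrt(sum(c ** 2 for c in a))   # |a| = sqrt(ax^2 + ay^2)

print(a)                      # (1.2, 0.4) m/s^2
print(round(magnitude, 3))    # 1.265 m/s^2
```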
How To Calculate Acceleration: Step-By-Step

Now we'll break down the acceleration formula step-by-step using a real example.
A race car accelerates from 15 m/s to 35 m/s in 3 seconds. What is its average acceleration?
First, write the acceleration equation.
$$a = \frac{v_f - v_i}{t_f - t_i}$$

Next, define your variables.

$a$ = what we are solving for

$$v_f = 35\ \text{m/s},\quad v_i = 15\ \text{m/s},\quad t_f - t_i = 3\ \text{s}$$

Now, plug your variables into the equation and solve:

$$a = \frac{35\ \text{m/s} - 15\ \text{m/s}}{3\ \text{s}}$$

$$a = \frac{20\ \text{m/s}}{3\ \text{s}}$$

$$a \approx 6.67\ \text{m/s}^2$$

The race car's average acceleration is about 6.67 m/s².
Formulas And Equations For Acceleration
Most people know that Sir Isaac Newton was one of the most influential scientists of the 17th century, and he still is today. He discovered that when two massive objects attract each other, the force depends on the distance between the objects and on their masses: gravitational force increases the larger they are. Many people also know Isaac Newton for his laws of motion, which appear prominently in the world of physics.

1. a = (v_f − v_i) / t
2. a = 2 × (d − v_i × t) / t²
3. a = F / m

where a = acceleration, v_i = initial velocity, v_f = final velocity, t = acceleration time, d = distance traveled during acceleration, F = net force, and m = mass.
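A brief sketch of the three formulas above written as functions, with purely illustrative inputs (each happens to give 2.0 m/s²).

```python
def accel_from_velocities(v_i, v_f, t):
    """Formula 1: a = (v_f - v_i) / t."""
    return (v_f - v_i) / t

def accel_from_distance(v_i, d, t):
    """Formula 2: a = 2 * (d - v_i * t) / t**2, from d = v_i*t + a*t^2/2."""
    return 2 * (d - v_i * t) / t ** 2

def accel_from_force(F, m):
    """Formula 3: a = F / m (Newton's second law)."""
    return F / m

print(accel_from_velocities(5.0, 25.0, 10.0))   # 2.0
print(accel_from_distance(0.0, 100.0, 10.0))    # 2.0
print(accel_from_force(10.0, 5.0))              # 2.0
```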
What Is The Acceleration Formula
You can use the acceleration equation to calculate acceleration. Here is the most common acceleration formula:
$$a = \frac{\Delta v}{\Delta t}$$

where $\Delta v$ is the change in velocity and $\Delta t$ is the change in time.
You can also write the acceleration equation like this:
$$a = \frac{v_f - v_i}{t_f - t_i}$$

In this acceleration equation, $v_f$ is the final velocity and $v_i$ is the initial velocity. $t_f$ is the final time and $t_i$ is the initial time.
Some other things to keep in mind when using the acceleration equation:
- You need to subtract the initial velocity from the final velocity. If you reverse them, you will get the direction of your acceleration wrong.
- If you don't have a starting time, you can use 0.
- If the final velocity is less than the initial velocity, the acceleration will be negative, meaning that the object slowed down.
Now let's break down the acceleration equation step-by-step in a real example.
Newton’s Second Law In Action
Rockets traveling through space encompass all three of Newton’s laws of motion.
If the rocket needs to slow down, speed up, or change direction, a force is used to give it a push, typically coming from the engine. The amount of the force and the location where it is providing the push can change either or both the speed and direction.
Now that we know how a massive body in an inertial reference frame behaves when it is subjected to an outside force, such as how the engines creating the push maneuver the rocket, what happens to the body that is exerting that force? That situation is described by Newton's Third Law of Motion.
Additional reporting by Rachel Ross, Live Science contributor.
Tangential And Centripetal Acceleration
The velocity of a particle moving on a curved path as a function of time can be written as:
$$\mathbf{a} = \frac{d\mathbf{v}}{dt} = \frac{dv}{dt}\,\mathbf{u}_t + v\,\frac{d\mathbf{u}_t}{dt} = \frac{dv}{dt}\,\mathbf{u}_t + \frac{v^2}{r}\,\mathbf{u}_n$$
where $\mathbf{u}_n$ is the unit normal vector to the particle's trajectory and $r$ is its instantaneous radius of curvature based upon the osculating circle at time $t$. These components are called the tangential acceleration and the normal or radial acceleration.

Geometrical analysis of three-dimensional space curves, which explains tangent, normal and binormal, is described by the Frenet–Serret formulas.
In the case of uniform (constant) acceleration, an object's velocity changes at a constant rate $a$.

In particular, the motion can be resolved into two orthogonal parts, one of constant velocity and the other according to the above equations. As Galileo showed, the net result is parabolic motion, which describes, e.g., the trajectory of a projectile in a vacuum near the surface of Earth.
- For a given speed $v$, the magnitude of this geometrically caused (centripetal) acceleration is inversely proportional to the radius $r$ of the circle and increases as the square of the speed: $$a_c = \frac{v^2}{r}.$$ In terms of the angular velocity $\omega$ it can be written as $$\mathbf{a}_c = -\omega^2 \mathbf{r}.$$ The tangential component is given by the angular acceleration $\alpha$: $$a_t = r\alpha.$$
The sign of the tangential component of the acceleration is determined by the sign of the angular acceleration , and the tangent is always directed at right angles to the radius vector.
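For circular motion, here is a minimal sketch of the tangential and centripetal components using a_c = v²/r = ω²r and a_t = rα; the radius, speed, and angular acceleration are illustrative assumptions.

```python
import math

r = 2.0        # radius of the circular path, m
v = 3.0        # instantaneous speed, m/s
alpha = 0.5    # angular acceleration, rad/s^2

omega = v / r                  # angular velocity, rad/s
a_centripetal = v ** 2 / r     # equals omega**2 * r; points toward the centre
a_tangential = r * alpha       # points along the direction of motion

# The two components are perpendicular, so their magnitudes combine in quadrature.
a_total = math.hypot(a_centripetal, a_tangential)
print(a_centripetal, a_tangential, a_total)
```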
Finding Velocity And Displacement From Acceleration
- Derive the kinematic equations for constant acceleration using integral calculus.
- Use the integral formulation of the kinematic equations in analyzing motion.
- Find the functional form of velocity versus time given the acceleration function.
- Find the functional form of position versus time given the velocity function.
This section assumes you have enough background in calculus to be familiar with integration. In Instantaneous Velocity and Speed and Average and Instantaneous Acceleration we introduced the kinematic functions of velocity and acceleration using the derivative. By taking the derivative of the position function we found the velocity function, and likewise by taking the derivative of the velocity function we found the acceleration function. Using integral calculus, we can work backward and calculate the velocity function from the acceleration function, and the position function from the velocity function.
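As a hedged sketch of that idea, the snippet below numerically integrates a constant acceleration (an assumption made only for illustration) and compares the result with the closed-form kinematic equations.

```python
import numpy as np

a_const = 2.0                      # assumed constant acceleration, m/s^2
v0, x0 = 1.0, 0.0                  # initial velocity and position
t = np.linspace(0.0, 5.0, 1001)    # time grid, s
dt = t[1] - t[0]

a = np.full_like(t, a_const)       # acceleration as a function of time
v = v0 + np.cumsum(a) * dt         # v(t) ~ v0 + integral of a dt (crude Riemann sum)
x = x0 + np.cumsum(v) * dt         # x(t) ~ x0 + integral of v dt

print(v[-1], v0 + a_const * t[-1])                          # compare with v = v0 + a*t
print(x[-1], x0 + v0 * t[-1] + 0.5 * a_const * t[-1] ** 2)  # compare with x = x0 + v0*t + a*t^2/2
```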
How Do You Calculate Acceleration For An Object Moving In A Straight Line

How do you calculate acceleration for an object moving in a straight line?

The average acceleration over a given time interval is calculated by dividing the change in the object's velocity by the time that it takes the change to occur. So average acceleration = (change in velocity) / time.
Virtual Teaching Assistant: John B.
Question Level: Basic
Calculating Acceleration From A Force
Summary Of Kinematic Equations
Before we get into the examples, let's look at some of the equations more closely to see the behavior of acceleration at extreme values. Rearranging Equation 3.12, we have

$$a = \frac{v - v_0}{t}.$$

From this we see that, for a finite time, if the difference between the initial and final velocities is small, the acceleration is small, approaching zero in the limit that the initial and final velocities are equal. On the contrary, in the limit $t \to 0$ for a finite difference between the initial and final velocities, acceleration becomes infinite.

Similarly, rearranging Equation 3.14, we can express acceleration in terms of velocities and displacement:

$$a = \frac{v^2 - v_0^2}{2(x - x_0)}.$$

Thus, for a finite difference between the initial and final velocities, acceleration becomes infinite in the limit that the displacement approaches zero. Acceleration approaches zero in the limit that the difference between the initial and final velocities approaches zero for a finite displacement.
Examples Of Acceleration: Centripetal And Tangential

Acceleration is something you can deconstruct so you can understand it better. Two types of acceleration are known as centripetal and tangential. Centripetal acceleration is responsible for changing the velocity's direction; it doesn't alter the velocity's value. Tangential acceleration points along the motion trajectory and changes the value of the velocity but not its direction. Therefore, the two have different jobs to do. Imagine a circle with an object moving around the outside of it. The acceleration directed toward the centre of that circle is the centripetal acceleration, and under it alone the object's speed remains constant. When you bring tangential acceleration into the picture, it changes the velocity's value but not its direction.
The central limit theorem in statistics states that, given a sufficiently large sample size, the sampling distribution of the mean for a variable will approximate a normal distribution regardless of that variable’s distribution in the population.
Unpacking the meaning from that complex definition can be difficult. That’s the topic for this post! I’ll walk you through the various aspects of the central limit theorem (CLT) definition, and show you why it is vital in statistics.
Distribution of the Variable in the Population
Part of the definition for the central limit theorem states, “regardless of the variable's distribution in the population.” This part is easy! In a population, the values of a variable can follow different probability distributions. These distributions can be normal, left-skewed, right-skewed, or uniform, among others.
This part of the definition refers to the distribution of the variable’s values in the population from which you draw a random sample.
The central limit theorem applies to almost all types of probability distributions, but there are exceptions. For example, the population must have a finite variance. That restriction rules out the Cauchy distribution because it has infinite variance.
Additionally, the central limit theorem applies to independent, identically distributed variables. In other words, the value of one observation does not depend on the value of another observation. And, the distribution of that variable must remain constant across all measurements.
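A quick simulation sketch of these requirements in action: sample means drawn from a Cauchy population (infinite variance) never settle down, while means drawn from a well-behaved uniform population do. The sample sizes, sample counts, and seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n = 100_000, 50      # number of repeated samples and the size of each sample

cauchy_means = rng.standard_cauchy((n_samples, n)).mean(axis=1)
uniform_means = rng.uniform(0, 1, (n_samples, n)).mean(axis=1)

# Uniform sample means cluster tightly around 0.5, as the CLT predicts;
# Cauchy sample means still produce extreme outliers no matter how large n gets.
print(np.percentile(uniform_means, [1, 99]))
print(np.percentile(cauchy_means, [1, 99]))
```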
Sampling Distribution of the Mean
The definition for the central limit theorem also refers to “the sampling distribution of the mean.” What’s that?
Typically, you perform a study once, and you might calculate the mean of that one sample. Now, imagine that you repeat the study many times and collect the same sample size for each one. Then, you calculate the mean for each of these samples and graph them on a histogram. The histogram displays the distribution of sample means, which statisticians refer to as the sampling distribution of the mean.
Fortunately, we don’t have to repeat studies many times to estimate the sampling distribution of the mean. Statistical procedures can estimate that from a single random sample.
The shape of the sampling distribution depends on the sample size. If you perform the study using the same procedure and change only the sample size, the shape of the sampling distribution will differ for each sample size. And, that brings us to the next part of the CLT definition!
Central Limit Theorem and a Sufficiently Large Sample Size
As the previous section states, the shape of the sampling distribution changes with the sample size. And, the definition of the central limit theorem states that when you have a sufficiently large sample size, the sampling distribution starts to approximate a normal distribution. How large does the sample size have to be for that approximation to occur?
It depends on the shape of the variable’s distribution in the underlying population. The more the population distribution differs from being normal, the larger the sample size must be. Typically, statisticians say that a sample size of 30 is sufficient for most distributions. However, strongly skewed distributions can require larger sample sizes. We’ll see the sample size aspect in action during the empirical demonstration below.
Central Limit Theorem and Approximating the Normal Distribution
To recap, the central limit theorem links the following two distributions:
- The distribution of the variable in the population.
- The sampling distribution of the mean.
Specifically, the CLT states that regardless of the variable’s distribution in the population, the sampling distribution of the mean will tend to approximate the normal distribution.
In other words, the population distribution can look like the following:
But, the sampling distribution can appear like below:
It’s not surprising that a normally distributed variable produces a sampling distribution that also follows the normal distribution. But, surprisingly, nonnormal population distributions can also create normal sampling distributions.
Related Post: Normal Distribution in Statistics
Properties of the Central Limit Theorem
Let’s get more specific about the normality features of the central limit theorem. Normal distributions have two parameters, the mean and standard deviation. What values do these parameters converge on?
As the sample size increases, the sampling distribution converges on a normal distribution where the mean equals the population mean, and the standard deviation equals σ/√n. Where:
- σ = the population standard deviation
- n = the sample size
As the sample size (n) increases, the standard deviation of the sampling distribution becomes smaller because the square root of the sample size is in the denominator. In other words, the sampling distribution clusters more tightly around the mean as sample size increases.
Let’s put all of this together. As sample size increases, the sampling distribution more closely approximates the normal distribution, and the spread of that distribution tightens. These properties have essential implications in statistics that I’ll discuss later in this post.
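The following sketch illustrates that σ/√n behaviour with an exponential population, chosen only because it is conveniently non-normal; the sample sizes and sample counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0    # an exponential(1) population has standard deviation 1

for n in (5, 20, 40, 160):
    means = rng.exponential(scale=1.0, size=(50_000, n)).mean(axis=1)
    # The empirical spread of the sample means tracks sigma / sqrt(n).
    print(n, round(means.std(ddof=1), 4), round(sigma / np.sqrt(n), 4))
```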
Empirical Demonstration of the Central Limit Theorem
Now the fun part! There is a mathematical proof for the central limit theorem, but that goes beyond the scope of this blog post. However, I will show how it works empirically by using statistical simulation software. I'll define population distributions and have the software draw many thousands of random samples from them. The software will calculate the mean of each sample and then graph these sample means on a histogram to display the sampling distribution of the mean.
For the following examples, I’ll vary the sample size to show how that affects the sampling distribution. To produce the sampling distribution, I’ll draw 500,000 random samples because that creates a fairly smooth distribution in the histogram.
Keep this critical difference in mind. While I’ll collect a consistent 500,000 samples per condition, the size of those samples will vary, and that affects the shape of the sampling distribution.
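The same demonstration can be sketched in a few lines of Python, shown below purely as an illustration; the lognormal parameters are placeholders, not the exact ones behind the graphs in this post.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
n_repeats = 500_000                       # random samples drawn per condition

plt.figure()
for n in (5, 20, 40):                     # the sample sizes being compared
    # Draw n_repeats samples of size n from a right-skewed lognormal population.
    samples = rng.lognormal(mean=0.0, sigma=0.5, size=(n_repeats, n))
    sample_means = samples.mean(axis=1)   # one mean per sample: the sampling distribution
    plt.hist(sample_means, bins=200, density=True, histtype="step", label=f"n = {n}")

plt.xlabel("sample mean")
plt.legend()
plt.title("Sampling distributions of the mean (lognormal population)")
plt.show()
```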
Testing the Central Limit Theorem with Three Probability Distributions
I’ll show you how the central limit theorem works with three different distributions: moderately skewed, severely skewed, and a uniform distribution. The first two distributions skew to the right and follow the lognormal distribution. The probability distribution plot below displays the population’s distribution of values. Notice how the red dashed distribution is much more severely skewed. It actually extends quite a way off the graph! We’ll see how this makes a difference in the sampling distributions.
Let’s see how the central limit theorem handles these two distributions and the uniform distribution.
Moderately Skewed Distribution and the Central Limit Theorem
The graph below shows the moderately skewed lognormal distribution. This distribution fits the body fat percentage dataset that I use in my post about identifying the distribution of your data. These data correspond to the blue line in the probability distribution plot above. I use the simulation software to draw random samples from this population 500,000 times for each sample size (5, 20, 40).
In the graph above, the gray color shows the skewed distribution of the values in the population. The other colors represent the sampling distributions of the means for different sample sizes. The red color shows the distribution of means when your sample size is 5. Blue denotes a sample size of 20. Green is 40. The red curve (n=5) is still skewed a bit, but the blue and green (20 and 40) are not visibly skewed.
As the sample size increases, the sampling distributions more closely approximate the normal distribution and become more tightly clustered around the population mean—just as the central limit theorem states!
Very Skewed Distribution and the Central Limit Theorem
Now, let’s try this with the very skewed lognormal distribution. These data follow the red dashed line in the probability distribution plot above. I follow the same process but use larger sample sizes of 40 (grey), 60 (red), and 80 (blue). I do not include the population distribution in this one because it is so skewed that it messes up the X-axis scale!
The population distribution is extremely skewed. It's probably more skewed than real data tend to be. As you can see, even with the largest sample size (blue, n=80), the sampling distribution of the mean is still skewed right. However, it is less skewed than the sampling distributions for the smaller sample sizes. Also, notice how the peaks of the sampling distributions shift to the right as the sample size increases. Eventually, with a large enough sample size, the sampling distributions will become symmetric, and the peak will stop shifting and center on the actual population mean.
If your population distribution is extremely skewed, be aware that you might need a substantial sample size for the central limit theorem to kick in and produce sampling distributions that approximate a normal distribution!
Uniform Distribution and the Central Limit Theorem
Now, let’s change gears and look at an entirely different type of distribution. Imagine that we roll a die and take the average value of the rolls. The probabilities for rolling the numbers on a die follow a uniform distribution because all numbers have the same chance of occurring. Can the central limit theorem work with discrete numbers and uniform probabilities? Let’s see!
In the graph below, I follow the same procedure as above. In this example, the sample size refers to the number of times we roll the die. The process calculates the mean for each sample.
In the graph above, I use sample sizes of 5, 20, and 40. We'd expect the average to be ((1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5). The sampling distributions of the means center on this value. Just as the central limit theorem predicts, as we increase the sample size, the sampling distributions more closely approximate a normal distribution and have a tighter spread of values.
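A corresponding sketch of the die-rolling simulation, with the same sample sizes and an arbitrary seed:

```python
import numpy as np

rng = np.random.default_rng(7)
n_repeats = 500_000

for n in (5, 20, 40):                                  # number of die rolls per sample
    rolls = rng.integers(1, 7, size=(n_repeats, n))    # fair six-sided die, outcomes 1..6
    means = rolls.mean(axis=1)
    # The means centre near 3.5 and their spread shrinks as n grows.
    print(n, round(means.mean(), 3), round(means.std(ddof=1), 3))
```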
You could perform a similar experiment using the binomial distribution with coin flips and obtain the same types of results when it comes to, say, the probability of getting heads. All thanks to the central limit theorem!
Why is the Central Limit Theorem Important?
The central limit theorem is vital in statistics for two main reasons—the normality assumption and the precision of the estimates.
Central limit theorem and the normality assumption
The fact that sampling distributions can approximate a normal distribution has critical implications. In statistics, the normality assumption is vital for parametric hypothesis tests of the mean, such as the t-test. Consequently, you might think that these tests are not valid when the data are nonnormally distributed. However, if your sample size is large enough, the central limit theorem kicks in and produces sampling distributions that approximate a normal distribution. This fact allows you to use these hypothesis tests even when your data are nonnormally distributed—as long as your sample size is large enough.
You might have heard that parametric tests of the mean are robust to departures from the normality assumption when your sample size is sufficiently large. That’s thanks to the central limit theorem!
For more information about this aspect, read my post that compares parametric and nonparametric tests.
Precision of estimates
In all of the graphs, notice how the sampling distributions of the mean cluster more tightly around the population mean as the sample sizes increase. This property of the central limit theorem becomes relevant when using a sample to estimate the mean of an entire population. With a larger sample size, your sample mean is more likely to be close to the real population mean. In other words, your estimate is more precise.
Conversely, the sampling distributions of the mean for smaller sample sizes are much broader. For small sample sizes, it’s not unusual for sample means to be further away from the actual population mean. You obtain less precise estimates.
In closing, understanding the central limit theorem is crucial when it comes to trusting the validity of your results and assessing the precision of your estimates. Use large sample sizes to satisfy the normality assumption even when your data are nonnormally distributed and to obtain more precise estimates!
Ancient Greek phonology
Ancient Greek phonology is the description of the reconstructed phonology or pronunciation of Ancient Greek. This article mostly deals with the pronunciation of the standard Attic dialect of the fifth century BC, used by Plato and other Classical Greek writers, and touches on other dialects spoken at the same time or earlier. The pronunciation of Ancient Greek is not known from direct observation, but determined from other types of evidence. Some details regarding the pronunciation of Attic Greek and other Ancient Greek dialects are unknown, but it is generally agreed that Attic Greek had certain features not present in English or Modern Greek, such as a three-way distinction between voiced, voiceless, and aspirated stops (such as /b p pʰ/, as in English "bot, spot, pot"); a distinction between single and double consonants and short and long vowels in most positions in a word; and a word accent that involved pitch.
Koine Greek, the variety of Greek used after the conquests of Alexander the Great in the fourth century BC, is sometimes included in Ancient Greek, but its pronunciation is described in Koine Greek phonology. For disagreements with the reconstruction given here, see below.
- 1 Dialects
- 2 Consonants
- 3 Vowels
- 4 Spelling
- 5 Phonotactics
- 6 Accent
- 7 Sound changes
- 8 Terminology
- 9 Reconstruction
- 9.1 Internal evidence
- 9.2 External evidence
- 10 History of the reconstruction of ancient pronunciation
- 11 Footnotes
- 12 Bibliography
- 13 External links
Ancient Greek was a pluricentric language, consisting of many dialects. All Greek dialects derive from Proto-Greek and they share certain characteristics, but there were also distinct differences in pronunciation. For instance, the form of Doric in Crete had a digraph ⟨θθ⟩, which likely stood for a sound not present in Attic. The early form of Ionic in which the Iliad and Odyssey were composed, and the Aeolic dialect of Sappho, likely had the phoneme /w/ at the beginnings of words, sometimes represented by the letter digamma ⟨ϝ⟩, but it had been lost in the standard Attic dialect.
The pluricentric nature of Ancient Greek differs from that of Latin, which was composed of basically one variety from the earliest Old Latin texts until Classical Latin. Latin only formed dialects once it was spread over Europe by the Roman Empire; these Vulgar Latin dialects became the Romance languages.
The main dialect groups of Ancient Greek are Arcadocypriot, Aeolic, Doric, Ionic, and Attic. These form two main groups: East Greek, which includes Arcadocypriot, Aeolic, Ionic, and Attic, and West Greek, which consists of Doric along with Northwest Greek and Achaean.
Of the main dialects, all but Arcadocypriot have literature in them. The Ancient Greek literary dialects do not necessarily represent the native speech of the authors that use them. A primarily Ionic-Aeolic dialect, for instance, is used in epic poetry, while pure Aeolic is used in lyric poetry. Both Attic and Ionic are used in prose, and Attic is used in most parts of the Athenian tragedies, with Doric forms in the choral sections.
Early East Greek
Most of the East Greek dialects palatalized or assibilated /t/ to [s] before /i/. West Greek, including Doric, did not undergo this sound change in certain cases, and through the influence of Doric neither did the Thessalian and Boeotian dialects of Aeolic.
- Attic τίθησι, Doric τίθητι ('he places')
- Attic εἰσί, Doric ἐντί ('they are')
- Attic εἴκοσι, Doric ϝῑκατι ('twenty')
Arcadocypriot was one of the first Greek dialects in Greece. Mycenaean Greek, the form of Greek spoken before the Greek Dark Ages, seems to be an early form of Arcadocypriot. Clay tablets with Mycenaean Greek in Linear B have been found over a wide area, from Thebes in Central Greece, to Mycenae and Pylos on the Peloponnese, to Knossos on Crete. However, during the Ancient Greek period, Arcadocypriot was only spoken in Arcadia, in the interior of the Peloponnese, and on Cyprus. The dialects of these two areas remained remarkably similar despite the great geographical distance.
Aeolic is closely related to Arcadocypriot. It was originally spoken in eastern Greece north of the Peloponnese: in Thessaly, in Locris, Phocis, and southern Aetolia, and in Boeotia, a region close to Athens. Aeolic was carried to Aeolis, on the coast of Asia Minor, and the nearby island of Lesbos. By the time of Ancient Greek, the only Aeolic dialects that remained in Greece were Thessalian and Boeotian. The Aeolic dialects of Greece adopted some characteristics of Doric, since they were located near Doric-speaking areas, while the Aeolian and Lesbian dialects remained pure.
Boeotian underwent vowel shifts similar to those that occurred later in Koine Greek, converting /ai̯/ to [ɛː], /eː/ to [iː], and /oi̯/ to [yː]. These are reflected in spelling (see Boeotian Greek phonology). Aeolic also retained /w/.
Homeric or Epic Greek, the literary form of Archaic Greek used in the epic poems, Iliad and the Odyssey, is based on early Ionic and Aeolic, with Arcadocypriot forms. In its original form, it likely had the semivowel /w/ and a voiceless version /ʍ/, as indicated by the meter in some cases. This sound is sometimes written as ⟨Ϝ⟩ in inscriptions, but not in the Attic-influenced text of Homer.
The Doric dialect, the most important member of West Greek, originated from western Greece. Through the Dorian invasion, Doric displaced the native Arcadocypriot and Aeolic dialects in some areas of central Greece, on the Peloponnese, and on Crete, and strongly influenced the Thessalian and Boeotian dialects of Aeolic.
Doric dialects are classified by which vowel they have as the result of compensatory lengthening and contraction: those that have η ω are called Severer or Old, and those that have ει ου, as Attic does, are called Milder or New. Laconian and Cretan, spoken in Laconia, the region of Sparta, and on Crete, are two Old Doric dialects.
Attic and Ionic
Attic and Ionic share a vowel shift not present in any other East or West Greek dialects. They both raised Proto-Greek long /aː/ to [ɛː] (see below). Later on, Attic lowered [ɛː] found immediately after /e i r/ back to [aː], differentiating itself from Ionic. All other East and West Greek dialects retain original /aː/.
Attic is the standard dialect taught in modern introductory courses in Ancient Greek, and the one that has the most literature written in it. It was spoken in Athens and Attica, the surrounding region. Old Attic, which was used by the historian Thucydides and the tragedians, replaced the native Attic /tt rr/ with the /ss rs/ of other dialects. Later writers, such as Plato, use the native Attic forms.
Koine, the form of Greek spoken during the Hellenistic period, was primarily based on Attic Greek, with some influences from other dialects. It underwent many sound changes, including development of aspirated and voiced stops into fricatives and the shifting of many vowels and diphthongs to [i] (iotacism). In the Byzantine period it developed into Medieval Greek, which later became standard Modern Greek or Demotic.
Tsakonian, a modern form of Greek mutually unintelligible with Standard Modern Greek, derived from the Laconian variety of Doric, and is therefore the only surviving descendant of a non-Attic dialect.
Attic Greek had about 15 consonant phonemes: nine stop consonants, two fricatives, and four or six sonorants. Modern Greek has about the same number of consonants. The main difference between the two is that Modern Greek has voiced and voiceless fricatives that developed from Ancient Greek voiced and aspirated stops.
In the table below, the phonemes of standard Attic are unmarked, allophones are enclosed in parentheses. The sounds marked by asterisks appear in dialects or in earlier forms of Greek, but may not be phonemes in standard Attic.
Ancient Greek had nine stops. The grammarians classified them in three groups, distinguished by voice-onset time: voiceless aspirated, voiceless unaspirated (tenuis), and voiced. The aspirated stops are written /pʰ tʰ kʰ/. The tenuis stops are written /p˭ t˭ k˭/, with ⟨˭⟩ representing lack of aspiration and voicing, or /p t k/. The voiced stops are written /b d ɡ/. For the Ancient Greek terms for these three groups, see below; see also the section on spirantization.
English distinguishes two types of stops: voiceless and voiced. Voiceless stops have three main pronunciations (allophones): moderately aspirated at the beginning of a word before a vowel, unaspirated after /s/, and unaspirated, unreleased, glottalized, or debuccalized at the end of a word. English voiced stops are often only partially voiced. Thus, some pronunciations of the English stops are similar to the pronunciations of Ancient Greek stops.
- voiceless aspirated t in tie [tʰaɪ]
- tenuis t in sty [st˭aɪ]
- tenuis, unreleased, glottalized, or debuccalized t in light [laɪt˭, laɪt̚, laɪˀt, laɪʔ]
- partially voiced d in die [daɪ] or [d̥aɪ]
/h/ is often called the aspirate (see below). Attic generally kept it, but some non-Attic dialects during the Classical period lost it (see below). It mostly occurred at the beginning of words, because it was usually lost between vowels, except in two rare words. Also, when a stem beginning with /h/ was the second part of a compound word, the /h/ sometimes remained, probably depending on whether the speaker recognized that the word was a compound. This can be seen in Old Attic inscriptions, where /h/ was written using the letterform of eta (see below), which was the source of H in the Latin alphabet:
- Old Attic inscriptional forms
- ΕΥΗΟΡΚΟΝ /eú.hor.kon/, standard εὔορκον /eú.or.kon/ ('faithful to an oath')
- ΠΑΡΗΕΔΡΟΙ /pár.he.droi/, standard πάρεδροι /pá.re.droi/ ('sitting beside, assessor')
- ΠΡΟΣΗΕΚΕΤΟ /pros.hɛː.ké.tɔː/, standard προσηκέτω /pro.sɛː.ké.tɔː/ ('let him be present')
- εὐαἵ /eu.haí/ ('yay!')
- ταὧς /ta.hɔ́ɔs/ ('peacock')
/s/ was a voiceless coronal sibilant. It was transcribed using the symbol for /s/ in Coptic and an Indo-Aryan language, as in Dianisiyasa for Διονυσίου ('of Dionysius') on an Indian coin. This indicates that the Greek sound was a hissing sound rather than a hushing sound: like English s in see rather than sh in she. It was pronounced as a voiced [z] before voiced consonants.
According to W.S. Allen, zeta ⟨ζ⟩ in Attic Greek likely represented the consonant cluster /sd/, phonetically [zd]. For metrical purposes it was treated as a double consonant, thus forming a heavy syllable. In Archaic Greek, when the letter was adopted from Phoenician zayin, the sound was probably an affricate [dz]. In Koine Greek, ⟨ζ⟩ represented /z/. It is more likely that this developed from [dz] rather than from Attic /sd/.
- Ζεύς ('Zeus') — Archaic /d͡zeús/, Attic /sdeús/ [zdeǔs], late Koine /zefs/
/p k/ in the clusters /ps ks/ were somewhat aspirated, as [pʰs] and [kʰs], but in this case the aspiration of the first element was not phonologically contrastive: no words distinguish /ps *pʰs *bs/, for example (see below for explanation).
Ancient Greek has two nasals: the bilabial nasal /m/, written μ and the alveolar nasal /n/, written ν. Depending on the phonetic environment, the phoneme /n/ was pronounced as [m n ŋ]; see below. On occasion, the /n/ phoneme participates in true gemination without any assimilation in place of articulation, as for example in the word ἐννέα. Artificial gemination for metrical purposes is also found occasionally, as in the form ἔννεπε, occurring in the first verse of Homer's Odyssey.
Ancient Greek has the liquids /l/ and /r/, written λ and ρ respectively.
The letter rho ρ was pronounced as an alveolar trill [r], as in Italian or Modern Greek rather than as in standard varieties of English or French. At the beginning of a word, it was pronounced as a voiceless alveolar trill [r̥]. In some cases, initial ⟨ρ⟩ in poetry was pronounced as a long trill (phonemically /rr/), shown by the fact that the previous syllable is counted as heavy: for instance τίνι ῥυθμῷ must be pronounced as τίνι ρρυθμῷ in Euripides, Electra 772, τὰ ῥήματα as τὰ ρρήματα in Aristophanes, The Frogs 1059, and βέλεα ῥέον as βέλεα ρρέον in Iliad 12.159.
The semivowels /j w/ were not present in standard Attic Greek at the beginnings of words. However, diphthongs ending in /i u/ were usually pronounced with a double semivowel [jj ww] or [jː wː] before a vowel. Allen suggests that these were simply semivocalic allophones of the vowels, although in some cases they developed from earlier semivowels.
The labio-velar approximant /w/ at the beginning of a syllable survived in some non-Attic dialects, such as Arcadian and Aeolic; a voiceless labio-velar approximant /ʍ/ probably also occurred in Pamphylian and Boeotian. /w/ is sometimes written with the letter digamma ⟨Ϝ⟩, and later with ⟨Β⟩ and ⟨ΟΥ⟩, and /ʍ/ was written with digamma and heta ⟨ϜΗ⟩:
- Pamphylian ϜΗΕ /ʍe/, written as ἕ in Homer (the reflexive pronoun)
- Boeotian ϜΗΕΚΑΔΑΜΟΕ /ʍe.ka.daː.moe/ for Attic Ἑκαδήμῳ Akademos
Evidence from the poetic meter of Homer suggests that /w ʍ/ also occurred in the Archaic Greek of the Iliad and Odyssey, although they would not have been pronounced by Attic speakers and are not written in the Attic-influenced form of the text. The presence of these consonants would explain some cases of absence of elision, some cases in which the meter demands a heavy syllable but the text has a light syllable (positional quantity), and some cases in which a long vowel before a short vowel is not shortened (absence of epic correption).
In the table below the scansion of the examples is shown with the breve ⟨˘⟩ for light syllables, the macron ⟨¯⟩ for heavy ones, and the pipe ⟨|⟩ for the divisions between metrical feet. The sound /w/ is written using digamma, and /ʍ/ with digamma and rough breathing, although the letter never appears in the actual text.
| location | Iliad 1.30 | Iliad 1.108 | Iliad 7.281 | Iliad 5.343 |
|---|---|---|---|---|
| standard text | ἐνὶ οἴκῳ | εἶπας ἔπος | καὶ ἴδμεν ἅπαντες | ἀπὸ ἕο |
| original form | ἐνὶ ϝοίκῳ | εἶπας ϝέπος | καὶ ϝίδμεν ἅπαντες | ἀπὸ ῾ϝϝέο |
Single and double (geminated) consonants were distinguished from each other in Ancient Greek: for instance, /p kʰ s r/ contrasted with /pː kʰː sː rː/ (also written /pp kkʰ ss rr/). In Ancient Greek poetry, a vowel followed by a double consonant counts as a heavy syllable in meter. Doubled consonants usually only occur between vowels, not at the beginning or the end of a word, except in the case of /r/, for which see above.
Gemination was lost in Standard Modern Greek, so that all consonants that used to be geminated are pronounced as singletons. Cypriot Greek, the Modern Greek dialect of Cyprus, however, preserves geminate consonants.
Archaic and Classical Greek vowels and diphthongs varied by dialect. The tables below show the vowels of Classical Attic in the IPA, paired with the vowel letters that represent them in the standard Ionic alphabet. The earlier Old Attic alphabet had certain differences. Attic Greek of the 5th century BC likely had 5 short and 7 long vowels: /a e i y o/ and /aː eː ɛː iː yː uː ɔː/. Vowel length was phonemic: some words are distinguished from each other by vowel length. In addition, Classical Attic had many diphthongs, all ending in /i/ or /u/; these are discussed below.
In standard Ancient Greek spelling, the long vowels /eː ɛː uː ɔː/ (spelled ει η ου ω) are distinguished from the short vowels /e o/ (spelled ε ο), but the long–short pairs /a aː/, /i iː/, and /y yː/ are each written with a single letter, α, ι, υ. This is the reason for the terms for vowel letters described below. In grammars, textbooks, or dictionaries, α, ι, υ are sometimes marked with macrons (ᾱ, ῑ, ῡ) to indicate that they are long, or breves (ᾰ, ῐ, ῠ) to indicate that they are short.
For the purposes of accent, vowel length is measured in morae: long vowels and most diphthongs count as two morae; short vowels, and the diphthongs /ai oi/ in certain endings, count as one mora. A one-mora vowel could be accented with high pitch, but two-mora vowels could be accented with falling or rising pitch.
Close and open vowels
Proto-Greek close back rounded /u uː/ shifted to front /y yː/ early in Attic and Ionic, around the 6th or 7th century BC (see below). /u/ remained only in diphthongs; it did not shift in Boeotian, so when Boeotians adopted the Attic alphabet, they wrote their unshifted /u uː/ using ⟨ΟΥ⟩.
The situation with the mid vowels was more complex. In the early Classical period, there were two short mid vowels /e o/, but four long mid vowels: close-mid /eː oː/ and open-mid /ɛː ɔː/. Since the short mid vowels changed to long close-mid /eː oː/ rather than long open-mid /ɛː ɔː/ by compensatory lengthening in Attic, E.H. Sturtevant suggests that the short mid vowels were close-mid, but Allen says this is not necessarily true.
By the mid-4th century BC, the close-mid back /oː/ shifted to /uː/, partly because /u uː/ had shifted to /y yː/. Similarly, the close-mid front /eː/ changed to /iː/. These changes triggered a shift of the open-mid vowels /ɛː ɔː/ to become mid or close-mid /eː oː/, and this is the pronunciation they had in early Koine Greek.
In Latin, on the other hand, all short vowels except for /a/ were much more open than the corresponding long vowels. This made long /eː oː/ similar in quality to short /i u/, and for this reason the letters ⟨I E⟩ and ⟨V O⟩ were frequently confused with each other in Roman inscriptions. This also explains the vocalism of New Testament Greek words such as λεγεών ('legion'; < Lat. legio) or λέντιον ('towel'; < Lat. linteum), where Latin ⟨i⟩ was perceived to be similar to Greek ⟨ε⟩.
In Attic, the open-mid /ɛː ɔː/ and close-mid /eː oː/ each have three main origins. Some cases of the open-mid vowels /ɛː ɔː/ developed from Proto-Greek *ē ō. In other cases they developed from contraction. Finally, some cases of /ɛː/, only in Attic and Ionic, developed from earlier /aː/ by the Attic–Ionic vowel shift.
In a few cases, the long close-mid vowels /eː oː/ developed from monophthongization of the pre-Classical falling diphthongs /ei ou/. In most cases, they arose through compensatory lengthening of the short vowels /e o/ or through contraction.
In both Aeolic and Doric, Proto-Greek /aː/ did not shift to /ɛː/. In some dialects of Doric, such as Laconian and Cretan, contraction and compensatory lengthening resulted in open-mid vowels /ɛː ɔː/, and in others they resulted in the close-mid /eː oː/. Sometimes the Doric dialects using the open-mid vowels are called Severer, and the ones using the close-mid vowels are called Milder.
Attic had many diphthongs, all falling diphthongs with /i u/ as the second semivocalic element, and either with a short or long first element. Diphthongs with a short first element are sometimes called "proper diphthongs", while diphthongs with a long first element are sometimes called "improper diphthongs." Whether they have a long or a short first element, all diphthongs count as two morae when applying the accent rules, like long vowels, except for /ai oi/ in certain cases. Overall Attic and Koine show a pattern of monophthongization: they tend to change diphthongs to single vowels.
The most common diphthongs were /ai au eu oi/ and /ɛːi̯ aːi̯ ɔːi̯/. The long diphthongs /ɛːu̯ aːu̯ ɔːu̯/ occurred rarely. The diphthongs /ei ou yi/ changed to /eː uː yː/ in the early Classical period in most cases, but /ei yi/ remained before vowels.
In the tables below, the diphthongs that were monophthongized in most cases are preceded by an asterisk, and the rarer diphthongs are in parentheses.
- Ἀθηναῖοι /a.tʰɛɛ.nái.oi/ ('Athenians'): [a.tʰɛː.naĵ.joi]
- ποιῶ /poi.ɔ́ɔ/ ('I do'): either [poj.jɔ̂ː] or [po.jɔ̂ː]
- Doric στοιᾱ́ /stoi.aá/: [sto.jǎː]
- Attic στοᾱ́ /sto.aá/: [sto.ǎː]
- κελεύω /ke.leú.ɔː/ ('I command'): [ke.lew̌.wɔː]
- σημεῖον /sɛɛ.méi.on/ ('sign'): [sɛː.meĵ.jon]
The diphthong /oi/ merged with the long close front rounded vowel /yː/ in Koine. It likely first became [øi]. Change to [øi] would be assimilation: the back vowel [o] becoming front [ø] because of the following front vowel [i]. This may have been the pronunciation in Classical Attic. Later it must have become [øː], parallel to the monophthongization of /ei ou/, and then [yː], but when words with ⟨οι⟩ were borrowed into Latin, the Greek digraph was represented with the Latin digraph ⟨oe⟩, representing the diphthong /oe/.
- λοιμός /loi.mós/ ('plague'): possibly [løi.mós]
- λῑμός /lii.mós/ ('famine'): [liː.mós]
In the diphthongs /au̯ eu̯ ɛːu̯/, the offglide /u/ became a consonant in Koine Greek, and they became Modern Greek /av ev iv/. The long diphthongs /aːi̯ ɛːi̯ ɔːi̯/ lost their offglide and merged with the long vowels /aː ɛː ɔː/ by the time of Koine Greek.
Many different forms of the Greek alphabet were used for the regional dialects of the Greek language during the Archaic and early Classical periods. The Attic dialect, however, used two forms. The first was the Old Attic alphabet, and the second is the Ionic alphabet, introduced to Athens around the end of the 5th century BC during the archonship of Eucleides. The last is the standard alphabet in modern editions of Ancient Greek texts, and the one used for Classical Attic, standard Koine, and Medieval Greek, finally developing into the alphabet used for Modern Greek.
Most double consonants are written using double letters: ⟨ππ σσ ρρ⟩ represent /pː sː rː/ or /pp ss rr/. The geminate versions of the aspirated stops /pʰː tʰː kʰː/ are written with the digraphs ⟨πφ τθ κχ⟩, and geminate /ɡː/ is written as ⟨κγ⟩, since ⟨γγ⟩ represents [ŋɡ] in the standard orthography of Ancient Greek.
- ἔκγονος (ἐκ-γονος) /éɡ.ɡo.nos/ ('offspring'), occasionally εγγονοσ in inscriptions
- ἐγγενής /eŋ.ɡe.nɛɛ́s/ ('inborn') (εν-γενής)
Voiceless /r/ is usually written with the spiritus asper as ῥ- and transcribed as rh in Latin. The same orthography is sometimes encountered when /r/ is geminated, as in ⟨συρρέω⟩, sometimes written ⟨συῤῥέω⟩, giving rise to the transliteration rrh. This example also illustrates that /n/ (συν-ῥέω) assimilates to following /r/, creating gemination.
In Classical Attic, the spellings ει and ου represented respectively the vowels /eː/ and /uː/ (the latter being an evolution of /oː/), from original diphthongs, compensatory lengthening, or contraction.
The above information about the usage of the vowel letters applies to the classical orthography of Attic, after Athens took over the orthographic conventions of the Ionic alphabet in 403 BC. In the earlier, traditional Attic orthography there was only a smaller repertoire of vowel symbols: α, ε, ι, ο, and υ. The letters η and ω were still missing. All five vowel symbols could at that stage denote either a long or a short vowel. Moreover, the letters ε and ο could respectively denote the long open-mid /ɛː, ɔː/, the long close-mid /eː, oː/ and the short mid phonemes /e, o/. The Ionic alphabet brought the new letters η and ω for the one set of long vowels, and the convention of using the digraph spellings ει and ου for the other, leaving simple ε and ο to be used only for the short vowels. However, the remaining vowel letters α, ι and υ continued to be ambiguous between long and short phonemes.
Spelling of /h/
In the Old Attic alphabet, /h/ was written with the letterform of eta ⟨Η⟩. In the Ionic dialect of Asia Minor, /h/ was lost early on, and the letter ⟨Η⟩ in the Ionic alphabet represented /ɛː/. In 403 BC, when the Ionic alphabet was adopted in Athens, the sound /h/ ceased to be represented in writing.
In some inscriptions /h/ was represented by a symbol formed from the left-hand half of the original letter: ⟨Ͱ⟩. Later grammarians, during the time of the Hellenistic Koine, developed that symbol further into a diacritic, the rough breathing (δασὺ πνεῦμα; Latin: spiritus asper; δασεῖα for short), which was written on the top of the initial vowel. Correspondingly, they introduced the mirror image diacritic called smooth breathing (ψιλὸν πνεῦμα; Latin: spiritus lenis; ψιλή for short), which indicated the absence of /h/. These marks were not used consistently until the time of the Byzantine Empire.
Ancient Greek words were divided into syllables. A word has one syllable for every short vowel, long vowel, or diphthong. In addition, syllables began with a consonant if possible, and sometimes ended with a consonant. Consonants at the beginning of the syllable are the syllable onset, the vowel in the middle is a nucleus, and the consonant at the end is a coda.
In dividing words into syllables, each vowel or diphthong belongs to one syllable. A consonant between vowels goes with the following vowel. In the following transcriptions, a period ⟨.⟩ separates syllables.
- λέγω ('I say'): /lé.ɡɔɔ/ (two syllables)
- τοιαῦται ('this kind') (fem pl): /toi.áu.tai/ (three syllables)
- βουλεύσειε ('if only he would want'): /buː.leú.sei.e/ (four syllables)
- ἠελίοιο ('sun's') (Homeric Greek): /ɛɛ.e.lí.oi.o/ (five syllables)
Any remaining consonants are added at the end of a syllable. And when a double consonant occurs between vowels, it is divided between syllables. One half of the double consonant goes to the previous syllable, forming a coda, and one goes to the next, forming an onset. Clusters of two or three consonants are also usually divided between syllables, with at least one consonant joining the previous vowel and forming the syllable coda of its syllable, but see below.
- ἄλλος ('another'): /ál.los/
- ἔστιν ('there is'): /és.tin/
- δόξα ('opinion'): /dók.sa/
- ἐχθρός ('enemy'): /ekʰ.tʰrós/
Syllables in Ancient Greek were either light or heavy. This distinction is important in Ancient Greek poetry, which was made up of patterns of heavy and light syllables. Syllable weight is based on both consonants and vowels. Ancient Greek accent, by contrast, is only based on vowels.
A syllable ending in a short vowel, or the diphthongs αι and οι in certain noun and verb endings, was light. All other syllables were heavy: that is, syllables ending in a long vowel or diphthong, a short vowel and consonant, or a long vowel or diphthong and consonant.
- λέγω /lé.ɡɔɔ/: light – heavy;
- τοιαῦται /toi.áu.tai/: heavy – heavy – light;
- βουλεύσειε /buː.leú.sei.e/: heavy – heavy – heavy – light;
- ἠελίοιο /ɛɛ.e.lí.oi.o/: heavy – light – light – heavy – light.
Greek grammarians called heavy syllables μακραί ('long', singular μακρά), and placed them in two categories. They called a syllable with a long vowel or diphthong φύσει μακρά ('long by nature'), and a syllable ending in a consonant θέσει μακρά ('long by position'). These terms were translated into Latin as naturā longa and positiōne longa. However, Indian grammarians distinguished vowel length and syllable weight by using the terms heavy and light for syllable quantity and the terms long and short only for vowel length. This article adopts their terminology, since not all metrically heavy syllables have long vowels; e.g.:
- ἥ (fem rel pron) /hɛɛ́/ is a heavy syllable having a long vowel, "long by nature";
- οἷ (masc dat sg pron) /hói/ is a heavy syllable having a diphthong, "long by nature";
- ὅς (masc rel pron) /hós/ is a heavy syllable ending in a consonant, "long by position".
Poetic meter shows which syllables in a word counted as heavy, and knowing syllable weight allows us to determine how consonant clusters were divided between syllables. Syllables before double consonants, and most syllables before consonant clusters, count as heavy. Here the letters ⟨ζ, ξ and ψ⟩ count as consonant clusters. This indicates that double consonants and most consonant clusters were divided between syllables, with at least the first consonant belonging to the preceding syllable.
- ἄλλος /ál.los/ ('different'): heavy – heavy
- ὥστε /hɔɔ́s.te/ ('so that'): heavy – light
- ἄξιος /ák.si.os/ ('worthy'): heavy – light – heavy
- προσβλέψαιμι /pros.blép.sai.mi/ ('may I see!'): heavy – heavy – heavy – light
- χαριζομένη /kʰa.ris.do.mé.nɛɛ/ ('rejoicing' fem sg): light – heavy – light – light – heavy
In Attic poetry, syllables before a cluster of a stop and a liquid or nasal are commonly light rather than heavy. This was called correptio Attica ('Attic shortening'), since here an ordinarily "long" syllable became "short".
- πατρός ('of a father'): Homeric /pat.rós/ (heavy-heavy), Attic /pa.trós/ (light-heavy)
Six stop clusters occur. All of them agree in voice-onset time, and begin with a labial or velar and end with a dental. Thus, the clusters /pʰtʰ kʰtʰ pt kt bd ɡd/ are allowed. Certain stop clusters do not occur as onsets: clusters beginning with a dental and ending with a labial or velar, and clusters of stops that disagree in voice onset time.
In Ancient Greek, any vowel may end a word, but the only consonants that may normally end a word are /n r s/. If a stop ended a word in Proto-Indo-European, it was dropped in Ancient Greek, as in ποίημα (stem ποιηματ-; compare the genitive singular ποιήματος). Other consonants may end a word, however, when a final vowel is elided before a word beginning in a vowel, as in ἐφ᾿ ἵππῳ (from ἐπὶ ἵππῳ).
Ancient Greek had a pitch accent, unlike the stress accent of Modern Greek and English. One mora of a word was accented with high pitch. A mora is a unit of vowel length; in Ancient Greek, short vowels have one mora and long vowels and diphthongs have two morae. Thus, a one-mora vowel could have accent on its one mora, and a two-mora vowel could have accent on either of its two morae. The position of accent was free, with certain limitations. In a given word, it could appear in several different positions, depending on the lengths of the vowels in the word.
In the examples below, long vowels and diphthongs are represented with two vowel symbols, one for each mora. This does not mean that the long vowel has two separate vowels in different syllables. Syllables are separated by periods ⟨.⟩; anything between two periods is pronounced in one syllable.
- η (long vowel with two morae): phonemic transcription /ɛɛ/, phonetic transcription [ɛː] (one syllable)
- εε (two short vowels with one mora each): phonemic transcription /e.e/, phonetic transcription [e̞.e̞] (two syllables)
The accented mora is marked with acute accent ⟨´⟩. A vowel with rising pitch contour is marked with a caron ⟨ˇ⟩, and a vowel with a falling pitch contour is marked with a circumflex ⟨ˆ⟩.
The position of the accent in Ancient Greek was phonemic and distinctive: certain words are distinguished by which mora in them is accented. The position of the accent was also distinctive on long vowels and diphthongs: either the first or the second mora could be accented. Phonetically, a two-mora vowel had a rising or falling pitch contour, depending on which of its two morae was accented:
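The mora-based description can be illustrated with a short sketch. The function name is invented for this example, and it assumes the standard association of first-mora accent with the falling (circumflex) contour and second-mora accent with the rising (acute) contour, which the paragraph above leaves implicit.

```python
def pitch_contour(morae, accented_mora):
    """Surface contour of an accented vowel.

    `morae` is 1 for a short vowel, 2 for a long vowel or diphthong;
    `accented_mora` is the 1-based index of the high-pitched mora.
    """
    if morae == 1:
        return "high (acute)"
    return "falling (circumflex)" if accented_mora == 1 else "rising (acute)"

# the same long vowel /ɛɛ/ accented on either of its two morae
print(pitch_contour(2, 1))   # falling (circumflex)
print(pitch_contour(2, 2))   # rising (acute)
print(pitch_contour(1, 1))   # high (acute)
```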
Examples of pitch accent: minimal pairs glossed 'a slice', 'sharp', 'I go', 'either', 'I am', 'you were', 'or', 'houses', 'at home' (the Greek forms and transcriptions of the original table are not reproduced here).
Accent marks were not used until around 200 BC. They were first used in Alexandria, and Aristophanes of Byzantium is said to have invented them. There are three: the acute, circumflex, and grave ⟨´ ῀ `⟩. The shape of the circumflex is a merging of the acute and grave.
The acute represented high or rising pitch, the circumflex represented falling pitch, but what the grave represented is uncertain. Early on, the grave was used on every syllable without an acute or circumflex. Here the grave marked all unaccented syllables, which had lower pitch than the accented syllable.
- Θὲόδὼρὸς /tʰe.ó.dɔː.ros/
Later on, a grave was only used to replace a final acute before another full word; the acute was kept before an enclitic or at the end of a phrase. This usage was standardized in the Byzantine era, and is used in modern editions of Ancient Greek texts. Here it might mark a lowered version of a high-pitched syllable.
- ἔστι τι καλόν. /és.ti.ti.ka.lón/ ('there is something beautiful') (καλόν is at the end of the sentence)
- καλόν ἐστι. /ka.ló.nes.ti/ ('it is beautiful') (ἐστι here is an enclitic)
- καλὸν καὶ ἀγαθόν /ka.lón.kai.a.ɡa.tʰón/ ('good and beautiful')
Greek underwent many sound changes. Some occurred between Proto-Indo-European (PIE) and Proto-Greek (PGr), some between the Mycenaean Greek and Ancient Greek periods, which are separated by about 300 years (the Greek Dark Ages), and some during the Koine Greek period. Some sound changes occurred only in particular Ancient Greek dialects, not in others, and certain dialects, such as Boeotian and Laconian, underwent sound changes similar to the ones that occurred later in Koine. This section primarily describes sound changes that occurred between the Mycenaean and Ancient Greek periods and during the Ancient Greek period.
At the beginning of a word, *s before a vowel became /h/ (the rough breathing), as the following examples show.
- PIE *so, seh₂ > ὁ, ἡ /ho hɛː/ ('the') (m f) — compare Sanskrit sá sā́
- PIE *septḿ̥ > ἑπτά /hep.tá/ ('seven') — compare Latin septem
Clusters of *s and a sonorant (liquid or nasal) at the beginning of a word became a voiceless resonant in some forms of Archaic Greek. Voiceless [r̥] remained in Attic at the beginning of words, and became the regular allophone of /r/ in this position; voiceless /ʍ/ merged with /h/; and the rest of the voiceless resonants merged with the voiced resonants.
- PIE *srew- > ῥέϝω > Attic ῥέω /r̥é.ɔː/ ('flow') — compare Sanskrit srávanti (3rd pl)
- PIE *sroweh₂ > Corfu ΡΗΟϜΑΙΣΙ /r̥owaisi/ (dat pl), Attic ῥοή [r̥o.ɛ̌ː] ('stream')
- PIE *swe > Pamphylian ϜΗΕ /ʍe/, Attic ἕ /hé/ (refl pron)
- PIE *slagʷ- > Corfu ΛΗΑΒΩΝ /l̥aboːn/, Attic λαβών /la.bɔ̌ːn/ ('taking') (aor ppl)
PIE *s remained in clusters with stops and at the end of a word:
- PIE *h₁esti > ἐστί /es.tí/ ('is') — compare Sanskrit ásti, Latin est
- PIE *seǵʰ-s- > ἕξω /hék.sɔː/ ('I will have')
- PIE *ǵenH₁os > γένος /ɡénos/ ('kind') — compare Sanskrit jánas, Latin genus
The PIE semivowel *y, IPA /j/, was sometimes debuccalized and sometimes strengthened initially. How this development was conditioned is unclear; the involvement of the laryngeals has been suggested. In certain other positions, it was kept, and frequently underwent other sound changes:
- PIE *yos, yeH₂ > ὅς, ἥ [hós hɛ̌ː] ('who') (rel pron) — compare Sanskrit yás, yā́
- PIE *yugóm > early /dzu.ɡón/ > Attic ζυγόν /sdy.ɡón/ ('yoke') — compare Sanskrit yugá, Latin jugum
- *mor-ya > μοῖρα /mói.ra/ ('part') (by metathesis) — compare μόρος
Between vowels, *s became /h/. Intervocalic /h/ probably occurred in Mycenaean. In most cases it was lost by the time of Ancient Greek. In a few cases, it was transposed to the beginning of the word. Later, initial /h/ was lost by psilosis.
- PIE *ǵénh₁es-os > PGr *genehos > Ionic γένεος /ɡé.ne.os/ > Attic γένους ('of a race') /ɡé.nuːs/ (contraction; gen. of γένος)
- Mycenaean pa-we-a₂, possibly /pʰar.we.ha/, later φάρεα /pʰǎː.re.a/ ('pieces of cloth')
- PIE *(H₁)ewsoH₂ > Proto-Greek *ewhō > εὕω /heǔ.ɔː/ ('singe')
- λύω, λύσω, ἔλυσα /lyý.ɔː lyý.sɔː é.lyy.sa/ ('I release, I will release, I released')
Through Grassmann's law, an aspirated consonant loses its aspiration when followed by another aspirated consonant in the next syllable; this law also affects /h/ resulting from debuccalization of *s; for example:
- PIE *dʰéh₁- > ἔθην /étʰɛːn/ ('I placed') (aor)
- *dʰí-dʰeh₁- > τίθημι /tí.tʰɛː.mi/ ('I place') (pres)
- *dʰé-dʰeh₁- > τέθηκα /té.tʰɛː.ka/ ('I have placed') (perf)
- *tʰrikʰ-s > θρίξ /tʰríks/ ('hair') (nom sg)
- *tʰrikʰ-es > τρίχες /trí.kʰes/ ('hairs') (nom. pl)
- PIE *seǵʰ-s- > ἕξω /hé.ksɔː/ ('I will have') (fut)
- *seǵʰ- > ἔχω /é.kʰɔː/ ('I have') (pres)
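A minimal sketch of Grassmann's law as described above, operating on a list of syllable onsets. The names are illustrative; the real conditioning (word-internal domain, analogical restorations, the exact behaviour of /h/) is richer than this.

```python
# Each aspirate (and /h/, from debuccalized *s) mapped to its deaspirated counterpart.
DEASPIRATE = {"pʰ": "p", "tʰ": "t", "kʰ": "k", "h": ""}

def grassmann(onsets):
    """Deaspirate an onset when a later syllable in the word also has an
    aspirated onset (simplified version of Grassmann's law)."""
    result = list(onsets)
    for i, onset in enumerate(result):
        later_aspirate = any(o in DEASPIRATE for o in result[i + 1:])
        if onset in DEASPIRATE and later_aspirate:
            result[i] = DEASPIRATE[onset]
    return result

# reduplicated present *tʰi-tʰɛː-mi (cf. τίθημι above): the first aspirate loses its aspiration
print(grassmann(["tʰ", "tʰ", "m"]))   # ['t', 'tʰ', 'm']
# ἕξω keeps /h/: no aspirate follows in the next syllable
print(grassmann(["h", "ks"]))         # ['h', 'ks']
# ἔχω loses /h/ before the aspirate of the following syllable
print(grassmann(["h", "kʰ"]))         # ['', 'kʰ']
```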
In some cases, the sound ⟨ττ⟩ /tː/ in Attic corresponds to the sound ⟨σσ⟩ /sː/ in other dialects. These sounds developed from palatalization of κ, χ, and sometimes τ, θ, and γ before the pre-Greek semivowel /j/. This sound was likely pronounced as an affricate [ts] or [tʃ] earlier in the history of Greek, but inscriptions do not show the spelling ⟨τσ⟩, which suggests that an affricate pronunciation did not occur in the Classical period.
- *ēk-yōn > *ētsōn > ἥσσων, Attic ἥττων ('weaker') — compare ἦκα ('softly')
- PIE *teh₂g-yō > *tag-yō > *tatsō > τάσσω, Attic τάττω ('I arrange') — compare ταγή ('battle line') and Latin tangō
- PIE *glōgʰ-yeh₂ > *glokh-ya > *glōtsa > γλῶσσα, Attic γλῶττα ('tongue') — compare γλωχίν ('point')
Loss of labiovelars
Mycenaean Greek had three labialized velar stops /kʷʰ kʷ ɡʷ/: aspirated, tenuis, and voiced. These derived from PIE labiovelars and from sequences of a velar and /w/, and were similar to the three regular velars of Ancient Greek /kʰ k ɡ/, except with added lip-rounding. They were all written using the same symbols in Linear B, and are transcribed as q.
In Ancient Greek, all labialized velars merged with other stops: labials /pʰ p b/, dentals /tʰ t d/, and velars /kʰ k ɡ/. Which one they became depended on dialect and phonological environment. Because of this, certain words that originally had labialized velars have different stops depending on dialect, and certain words from the same root have different stops even in the same Ancient Greek dialect.
- PIE, PGr *kʷis, kʷid > Attic τίς, τί, Thessalian Doric κίς, κί ('who?, what?') — compare Latin quis, quid
- PIE, PGr *kʷo-yos > Attic ποῖος, Ionic κοῖος ('what kind?')
- PIE *gʷʰen-yō > PGr *kʷʰenyō > Attic θείνω ('I strike')
- *gʷʰón-os > PGr *kʷʰónos > Attic φόνος ('slaughtering')
- PIE *kʷey(H₁)- ('notice') > Mycenaean qe-te-o ('paid'), Ancient Greek τίνω ('pay')
- τιμή ('honor')
- ποινή ('penalty') > Latin poena
Near /u uː/ or /w/, the labialized velars had already lost their labialization in the Mycenaean period.
- PGr *gʷow-kʷolos > Mycenaean qo-u-ko-ro, Ancient Greek βουκόλος ('cowherd')
- Mycenaean a-pi-qo-ro, Ancient Greek ἀμφίπολος ('attendant')
Through psilosis ('stripping'), from the term for lack of /h/ (see below), the /h/ was lost even at the beginnings of words. This sound change did not occur in Attic until the Koine period, but occurred in East Ionic and Lesbian Aeolic, and therefore can be seen in certain Homeric forms. These dialects are called psilotic.
- Homeric ἠέλιος /ɛɛ.é.li.os/, Attic ἥλιος /hɛɛ́.li.os/ ('sun')
- Homeric ἠώς /ɛɛ.ɔɔ́s/, Attic ἑώς /he(.)ɔɔ́s/ ('dawn')
- Homeric οὖρος [óo.ros], Attic ὅρος /hó.ros/ ('border')
Even later, during the Koine Greek period, /h/ disappeared totally from Greek and never reappeared, resulting in Modern Greek not possessing this phoneme at all.
Spirantization of /tʰ/ occurred earlier in Laconian Greek. Some examples are transcribed by Aristophanes and Thucydides, such as ναὶ τὼ σιώ for ναὶ τὼ θεώ ('Yes, by the two gods!') and παρσένε σιά for παρθένε θεά ('virgin goddess!') (Lysistrata 142 and 1263), σύματος for θύματος ('sacrificial victim') (Histories book 5, chapter 77). These spellings indicate that /tʰ/ was pronounced as a dental fricative [θ] or a sibilant [s], the same change that occurred later in Koine. Greek spelling, however, does not have a letter for a labial or velar fricative, so it is impossible to tell whether /pʰ kʰ/ also changed to /f x/.
In Attic, Ionic, and Doric, vowels were usually lengthened when a following consonant was lost. The syllable before the consonant was originally heavy, but loss of the consonant would cause it to be light. Therefore, the vowel before the consonant was lengthened, so that the syllable would continue to be heavy. This sound change is called compensatory lengthening, because the vowel length compensates for the loss of the consonant. The result of lengthening depended on dialect and time period.
In Attic, some cases of long vowels arose through contraction of adjacent short vowels where a consonant had been lost between them. ⟨ει⟩ /eː/ came from contraction of ⟨εε⟩, and ⟨ου⟩ /oː/ from contraction of ⟨εο⟩, ⟨οε⟩, or ⟨οο⟩. ⟨ω⟩ /ɔː/ arose from ⟨αο⟩ and ⟨οα⟩, ⟨η⟩ /ɛː/ from ⟨εα⟩, and ⟨ᾱ⟩ /aː/ from ⟨αε⟩ and ⟨αα⟩. Contractions involving diphthongs ending in /i̯/ resulted in the long diphthongs /ɛːi̯ aːi̯ ɔːi̯/.
Uncontracted forms are found in other dialects, such as in Ionic.
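The short-vowel contraction pairs just listed can be collected into a small lookup. This is only a sketch of the pairings named above, with an invented name; the full Attic contraction system, including contractions with diphthongs, is larger.

```python
# Attic contractions of two adjacent short vowels, as listed in the paragraph above.
ATTIC_CONTRACTION = {
    ("ε", "ε"): "ει",   # /eː/
    ("ε", "ο"): "ου",   # /oː/
    ("ο", "ε"): "ου",
    ("ο", "ο"): "ου",
    ("α", "ο"): "ω",    # /ɔː/
    ("ο", "α"): "ω",
    ("ε", "α"): "η",    # /ɛː/
    ("α", "ε"): "ᾱ",    # /aː/
    ("α", "α"): "ᾱ",
}

def contract(v1, v2):
    """Return the Attic contraction of two adjacent short vowels, or the
    uncontracted sequence if no rule in this sketch applies."""
    return ATTIC_CONTRACTION.get((v1, v2), v1 + v2)

print(contract("ε", "α"))   # η
print(contract("α", "ε"))   # ᾱ
print(contract("ο", "ο"))   # ου
```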
The diphthongs /ei ou/ became the long monophthongs /eː/ and /oː/ before the Classical period.
Vowel raising and fronting
In Archaic Greek, upsilon ⟨Υ⟩ represented the back vowel /u uː/. In Attic and Ionic, this vowel was fronted around the 6th or 7th century BC. It likely first became central [ʉ ʉː], and then the front [y yː].
During the Classical period, /oː/ was raised to [uː], and thus took up the empty space of the earlier /uː/ phoneme. The fact that ⟨υ⟩ was never confused with ⟨ου⟩ indicates that ⟨υ⟩ was fronted before ⟨ου⟩ was raised.
In early Koine Greek, /eː/ was raised and merged with original /iː/.
Attic–Ionic vowel shift
In Attic and Ionic, the Proto-Greek long /aː/ shifted to /ɛː/. This shift did not happen in the other dialects. Thus, some cases of Attic and Ionic η correspond to Doric and Aeolic ᾱ, and other cases correspond to Doric and Aeolic η.
- Doric and Aeolic μᾱ́τηρ, Attic and Ionic μήτηρ [mǎː.tɛːr mɛ̌ː.tɛːr] ('mother') — compare Latin māter
The vowel first shifted to /æː/, at which point it was distinct from Proto-Greek long /eː/, and then later /æː/ and /eː/ merged as /ɛː/. This is indicated by inscriptions in the Cyclades, which write Proto-Greek /eː/ as ⟨Ε⟩, but the shifted /æː/ as ⟨Η⟩ and new /aː/ from compensatory lengthening as ⟨Α⟩.
In Attic, both /æː/ and Proto-Greek /eː/ were written as ⟨Η⟩, but they merged to /ɛː/ at the end of the 5th century BC. At this point, nouns in the masculine first declension were confused with third-declension nouns with stems in /es/. The first-declension nouns had /ɛː/ resulting from original /aː/, while the third-declension nouns had /ɛː/ resulting from contraction of /ea/.
- Αἰσχίνης Aeschines (1st decl)
  - Αἰσχίνου (gen sg)
  - incorrect 3rd decl gen sg Αἰσχίνους
  - Αἰσχίνην (acc sg)
- Ἱπποκράτης Hippocrates (3rd decl)
  - Ἱπποκράτους (gen sg)
  - Ἱπποκράτη (acc sg)
  - incorrect 1st decl acc sg Ἱπποκράτην
In addition, words that had original η in both Attic and Doric were given false Doric forms with ᾱ in the choral passages of Athenian plays, indicating that Athenians could no longer distinguish the η that had shifted from earlier ᾱ from the η inherited from Proto-Greek.
- Attic and Doric πηδός ('blade of an oar')
- incorrect Doric form πᾱδός
- Doric ᾱ̔μέρᾱ, Attic ἡμέρᾱ, Ionic ἡμέρη /haː.mé.raː hɛː.mé.raː hɛː.mé.rɛː/ ('day')
- Attic οἵᾱ, Ionic οἵη [hoǰ.jaː hoǰ.jɛː] ('such as') (fem nom sg)
- Attic νέᾱ, Ionic νέη /né.aː né.ɛː/ ('new') (fem nom sg) < νέϝος
- But Attic κόρη, Ionic κούρη, Doric κόρᾱ and κώρᾱ ('young girl') < κόρϝᾱ (as also in Arcadocypriot)
The fact that /aː/ is found instead of /ɛː/ may indicate that earlier, the vowel shifted to /ɛː/ in all cases, but then shifted back to /aː/ after /e i r/ (reversion), or that the vowel never shifted at all in these cases. Sihler says that Attic /aː/ is from reversion.
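On the reversion account mentioned above, the Attic outcome of Proto-Greek *ā can be sketched as a single conditional. This is an illustrative simplification with an invented name; relative chronology and interactions with lost consonants (such as ϝ in κόρϝᾱ and νέϝᾱ) are not modeled.

```python
def attic_outcome_of_long_a(preceding_sound):
    """Attic reflex of Proto-Greek long *ā on the reversion account:
    /aː/ (ᾱ) after /e/, /i/ or /r/, otherwise shifted to /ɛː/ (η)."""
    return "aː (ᾱ)" if preceding_sound in {"e", "i", "r"} else "ɛː (η)"

# *mātēr 'mother' -> μήτηρ: the shift applies after /m/
print(attic_outcome_of_long_a("m"))   # ɛː (η)
# ἡμέρᾱ 'day': the final vowel follows /r/ and stays /aː/
print(attic_outcome_of_long_a("r"))   # aː (ᾱ)
```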
This shift did not affect cases of long /aː/ that developed from the contraction of certain sequences of vowels that contain α. Thus, the vowels /aː/ and /aːi̯/ are common in verbs with a-contracted present and imperfect forms, such as ὁράω "see". The examples below are shown with the hypothetical original forms from which they were contracted.
- infinitive: ὁρᾶν /ho.râːn/ "to see" < *ὁράεεν /ho.rá.e.en/
- third person singular present indicative active: ὁρᾷ /ho.râːi̯/ "he sees" < *ὁράει */ho.rá.ei/
- third person singular imperfect indicative active: ὥρᾱ /hɔ̌ː.raː/ "he saw" < *ὥραε */hɔ̌ː.ra.e/
Also unaffected was long /aː/ that arose by compensatory lengthening of short /a/. Thus, Attic and Ionic had a contrast between the feminine genitive singular ταύτης /taú.tɛːs/ and feminine accusative plural ταύτᾱς /taú.taːs/, forms of the adjective and pronoun οὗτος "this, that". The first derived from an original *tautās with shifting of ā to ē, the other from *tautans with compensatory lengthening of ans to ās.
When one consonant comes next to another in verb or noun conjugation or word derivation, various sandhi rules apply. When these rules affect the forms of nouns and adjectives or of compound words, they are reflected in spelling. Between words, the same rules also applied, but they are not reflected in standard spelling, only in inscriptions.
- Most basic rule: When two sounds appear next to each other, the first assimilates in voicing and aspiration to the second.
- This applies fully to stops. Fricatives assimilate only in voicing, sonorants do not assimilate.
- Before an /s/ (future, aorist stem), velars become [k], labials become [p], and dentals disappear.
- Before a /tʰ/ (aorist passive stem), velars become [kʰ], labials become [pʰ], and dentals become [s].
- Before an /m/ (perfect middle first-singular, first-plural, participle), velars become [ɡ], nasal+velar becomes [ɡ], labials become [m], dentals become [s], other sonorants remain the same.
| first sound | second sound | resulting cluster | examples | notes |
|---|---|---|---|---|
| /p, b, pʰ/ | /s/ | /ps/ | πέμπω, πέμψω, ἔπεμψα | future and first aorist stems; dative plural of third-declension nominals |
| /k, ɡ, kʰ/ | /s/ | /ks/ | ἄγω, ἄξω | |
| /t, d, tʰ/ | /s/ | /s/ | ἐλπίς, ἐλπίδος; πείθω, πείσω, ἔπεισα | |
| /p, b, pʰ/ | /tʰ/ | /pʰtʰ/ | ἐπέμφθην | first aorist passive stem |
| /k, ɡ, kʰ/ | /tʰ/ | /kʰtʰ/ | ἤχθην | |
| /t, d, tʰ/ | /tʰ/ | /stʰ/ | ἐπείσθην | |
| /p, b, pʰ/ | /m/ | /mm/ | πέπεμμαι | 1st singular and plural of the perfect mediopassive |
| /k, ɡ, kʰ/ | /m/ | /ɡm/ [ŋm] | ἦγμαι | |
| /t, d, tʰ/ | /m/ | /sm/ [zm] | πέπεισμαι | |
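The stop-assimilation rules collected in the list and table above can be sketched as a single lookup function. The names are illustrative; the sketch covers only stem-final stops before the three suffix-initial sounds named above and ignores sonorant-final stems.

```python
LABIALS = {"p", "b", "pʰ"}
VELARS = {"k", "ɡ", "kʰ"}
DENTALS = {"t", "d", "tʰ"}

def assimilate(stem_final, suffix_initial):
    """Cluster produced when a stem-final stop meets /s/, /tʰ/ or /m/,
    following the rules in the table above."""
    if suffix_initial == "s":
        if stem_final in LABIALS: return "ps"
        if stem_final in VELARS:  return "ks"
        if stem_final in DENTALS: return "s"     # dentals disappear before /s/
    if suffix_initial == "tʰ":
        if stem_final in LABIALS: return "pʰtʰ"
        if stem_final in VELARS:  return "kʰtʰ"
        if stem_final in DENTALS: return "stʰ"
    if suffix_initial == "m":
        if stem_final in LABIALS: return "mm"
        if stem_final in VELARS:  return "ɡm"    # pronounced [ŋm]
        if stem_final in DENTALS: return "sm"    # pronounced [zm]
    return stem_final + suffix_initial           # no rule in this sketch applies

# πέμπ- + -σω -> πέμψω; ἄγ- + -σω -> ἄξω; πειθ- + -σω -> πείσω
print(assimilate("p", "s"), assimilate("ɡ", "s"), assimilate("tʰ", "s"))   # ps ks s
```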
The alveolar nasal /n/ assimilates in place of articulation, changing to a labial or velar nasal before labials or velars:
- μ [m] before the labials /b/, /p/, /pʰ/, /m/ (and the cluster /ps/):
- ἐν- + βαίνω > ἐμβαίνω; ἐν- + πάθεια > ἐμπάθεια; ἐν- + φαίνω > ἐμφαίνω; ἐν- + μένω > ἐμμένω; ἐν- + ψυχή + -ος > ἔμψυχος;
- γ [ŋ] before the velars /ɡ/, /k/, /kʰ/ (and the cluster /ks/):
- ἐν- + γίγνομαι > ἐγγίγνομαι; ἐν- + καλέω > ἐγκαλέω; ἐν- + χέω > ἐγχέω; συν- + ξηραίνω > συγξηραίνω
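A corresponding sketch, with an invented name, for the nasal assimilation just illustrated, mapping ν to its written reflex before the following letter:

```python
LABIAL_LETTERS = {"β", "π", "φ", "μ", "ψ"}
VELAR_LETTERS = {"γ", "κ", "χ", "ξ"}

def assimilate_nu(following_letter):
    """Spelling of ν before the following letter: μ before labials (and ψ),
    γ (= [ŋ]) before velars (and ξ), otherwise ν."""
    if following_letter in LABIAL_LETTERS:
        return "μ"
    if following_letter in VELAR_LETTERS:
        return "γ"
    return "ν"

# ἐν- + βαίνω -> ἐμβαίνω; ἐν- + καλέω -> ἐγκαλέω
print(assimilate_nu("β"), assimilate_nu("κ"))   # μ γ
```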
The sound of zeta ⟨ζ⟩ develops from original *sd in some cases, and in other cases from *y, *dy, *gy. In the second case, it was likely first pronounced [dʒ] or [dz], and this cluster underwent metathesis early in the Ancient Greek period. Metathesis is likely in this case: clusters of a voiced stop and /s/, like /bs ɡs/, do not occur in Ancient Greek, since they change to /ps ks/ by assimilation (see above), while clusters with the opposite order, like /sb sɡ/, pronounced [zb zɡ], do occur.
- Ἀθήναζε ('to Athens') < Ἀθήνᾱσ-δε
- ἵζω ('set') < Proto-Indo-European *si-sdō (Latin sīdō: reduplicated present), from zero-grade of the root of ἕδος < *sedos "seat"
- πεζός ('on foot') < PGr *ped-yos, from the root of πούς, ποδός "foot"
- ἅζομαι ('revere') < PGr *hag-yomai, from the root of ἅγ-ιος ('holy')
Ancient grammarians, such as Aristotle in his Poetics and Dionysius Thrax in his Art of Grammar, categorized letters (γράμματα) according to what speech sounds (στοιχεῖα 'elements') they represented. They called the letters for vowels φωνήεντα ('pronounceable', singular φωνῆεν); the letters for the nasals, liquids, and /s/, and the letters for the consonant clusters /ps ks sd/, ἡμίφωνα ('half-sounding', singular ἡμίφωνον); and the letters for the stops ἄφωνα ('not-sounding', singular ἄφωνον). Dionysius also called consonants in general σύμφωνα ('pronounced with [a vowel]', singular σύμφωνον).
All the Greek terms for letters or sounds are nominalized adjectives in the neuter gender, to agree with the neuter nouns στοιχεῖον and γράμμα, since they were used to modify the nouns, as in φωνῆεν στοιχεῖον ('pronounceable element') or ἄφωνα γράμματα ('unpronounceable letters'). Many also use the root of the deverbal noun φωνή ('voice, sound').
The words φωνῆεν, σύμφωνον, ἡμίφωνον, ἄφωνον were loan-translated into Latin as vōcālis, cōnsōnāns, semivocālis, mūta. The Latin words are feminine because the Latin noun littera ('letter') is feminine. They were later borrowed into English as vowel, consonant, semivowel, mute.
The categories of vowel letters were δίχρονα, βραχέα, μακρά ('two-time, short, long'). These adjectives describe whether the vowel letters represented both long and short vowels, only short vowels or only long vowels. Additionally, vowels that ordinarily functioned as the first and second elements of diphthongs were called προτακτικά ('prefixable') and ὑποτακτικά ('suffixable'). The category of δίφθογγοι included both diphthongs and the spurious diphthongs ει ου, which were pronounced as long vowels in the Classical period.
The categories ἡμίφωνα and ἄφωνα roughly correspond to the modern terms continuant and stop. Greek grammarians placed the letters ⟨β δ γ φ θ χ⟩ in the category of stops, not of continuants, indicating that they represented stops in Ancient Greek, rather than fricatives, as in Modern Greek.
Stops were divided into three categories using the adjectives δασέα ('thick'), ψιλά ('thin'), and μέσα ('middle'), as shown in the table below. The first two terms indicate a binary opposition typical of Greek thought: they referred to stops with and without aspiration. The voiced stops did not fit in either category and so they were called "middle". The concepts of voice and voicelessness (presence or absence of vibration of the vocal folds) were unknown to the Greeks and were not developed in the Western grammatical tradition until the 19th century, when the Sanskrit grammatical tradition began to be studied by Westerners.
The glottal fricative /h/ was originally called πνεῦμα ('breath'), and it was classified as a προσῳδία, the category to which the acute, grave, and circumflex accents also belong. Later, a diacritic for the sound was created, and it was called pleonastically πνεῦμα δασύ ('rough breathing'). Finally, a diacritic representing the absence of /h/ was created, and it was called πνεῦμα ψιλόν ('smooth breathing'). The diacritics were also called προσῳδία δασεῖα and προσῳδία ψιλή ('thick accent' and 'thin accent'), from which come the Modern Greek nouns δασεία and ψιλή.
| Greek terms | | | Greek letters | IPA | phonetic description |
|---|---|---|---|---|---|
| φωνήεντα | προτακτικά | βραχέα | ⟨ε ο⟩ | /e o/ | short vowels |
| | | μακρά | ⟨η ω⟩ | /ɛː ɔː/ | long vowels |
| | | δίχρονα | ⟨α⟩ | /a(ː)/ | short and long |
| | ὑποτακτικά | | ⟨ι υ –υ⟩ | /i(ː) y(ː) u̯/ | |
| | δίφθογγοι | | ⟨αι αυ ει ευ οι ου⟩ | /ai̯ au̯ eː eu̯ oi̯ oː/ | diphthongs and long vowel digraphs |
| σύμφωνα | ἡμίφωνα | διπλᾶ | ⟨ζ ξ ψ⟩ | /ds ks ps/ | consonant clusters |
| | | | ⟨λ ρ⟩ | /l r/ | liquids |
| | | | ⟨μ ν σ⟩ | /m n s/ | nasals, fricative |
| | ἄφωνα | ψῑλά | ⟨κ π τ⟩ | /k p t/ | tenuis stops |
| | | μέσα | ⟨β γ δ⟩ | /b ɡ d/ | voiced stops |
| | | δασέα | ⟨θ φ χ⟩ | /tʰ pʰ kʰ/ | aspirated stops |
| προσῳδίαι | τόνοι | | ⟨ά ᾱ́ ὰ ᾶ⟩ | /á ǎː a âː/ | pitch accent |
| | πνεύματα | | ⟨ἁ ἀ⟩ | /ha a/ | glottal fricative |
The above information is based on a large body of evidence that was discussed extensively by linguists and philologists of the 19th and 20th centuries. The following section provides a short summary of the kinds of evidence and arguments that have been used in this debate, and gives some hints as to the sources of the uncertainty that still prevails with respect to some details.
Evidence from spelling
Whenever a new set of written symbols, such as an alphabet, is created for a language, the written symbols typically correspond to the spoken sounds, and the spelling or orthography is therefore phonemic or transparent: it is easy to pronounce a word by seeing how it is spelled, and conversely to spell a word by knowing how it is pronounced. As long as the pronunciation of the language does not change, spelling and pronunciation match each other, and spelling mistakes are rare.
When the pronunciation changes, there are two options. The first is spelling reform: the spelling of words is changed to reflect the new pronunciation. In this case, the date of a spelling reform generally indicates the approximate time when the pronunciation changed.
The second option is that the spelling remains the same, while the pronunciation changes. In this case, the spelling system is called conservative or historical, since it reflects the pronunciation at an earlier period in the language. It is also called opaque, because there is not a simple correspondence between written symbols and spoken sounds: it is more difficult to pronounce a word by seeing its spelling, and conversely to spell a word by knowing how to pronounce it.
In a language with a historical spelling system, spelling mistakes indicate change in pronunciation. Writers with incomplete knowledge of the spelling system will misspell words, and in general their misspellings reflect the way they pronounced the words.
- If scribes very often confuse two letters, this implies that the sounds denoted by the two letters were the same: that the sounds have merged. This happened early with ⟨ι ει⟩. A little later, it happened with ⟨υ οι⟩, ⟨ο ω⟩, and ⟨ε αι⟩. Later still, ⟨η⟩ was confused with the already merged ⟨ι ει⟩.
- If scribes omit a letter where it would usually be written, or insert it where it does not belong (hypercorrection), this implies that the sound that the letter represented had been lost in speech. This happened early with word-initial rough breathing (/h/) in most forms of Greek. Another example is the occasional omission of the iota subscript of long diphthongs (see above).
Spelling mistakes provide limited evidence: they only indicate the pronunciation of the scribe who made the spelling mistake, not the pronunciation of all speakers of the language at the time. Ancient Greek was a language with many regional variants and social registers. Many of the pronunciation changes of Koine Greek probably occurred earlier in some regional pronunciations and sociolects of Attic even in the Classical Age, but the older pronunciations were preserved in more learned speech.
Greek literature sometimes contains representations of animal cries in Greek letters. The most frequently quoted example is βῆ βῆ, which renders the cry of sheep and is used as evidence that beta had a voiced bilabial plosive pronunciation and that eta was a long open-mid front vowel. Onomatopoeic verbs such as μυκάομαι for the lowing of cattle (cf. Latin mugire), βρυχάομαι for the roaring of lions (cf. Latin rugire) and κόκκυξ as the name of the cuckoo (cf. Latin cuculus) suggest an archaic [uː] pronunciation of long upsilon, before this vowel was fronted to [yː].
Sounds undergo regular changes, such as assimilation or dissimilation, in certain environments within words, which are sometimes indicated in writing. These can be used to reconstruct the nature of the sounds involved.
- ⟨π τ κ⟩ at the end of some words are regularly changed to ⟨φ θ χ⟩ when preceding a rough breathing in the next word, e.g. ἐφ' ἁλός for ἐπὶ ἁλός or καθ' ἡμᾶς for κατὰ ἡμᾶς.
- ⟨π τ κ⟩ at the end of the first member of a compound word are regularly changed to ⟨φ θ χ⟩ when the second member begins with a rough breathing, e.g. ἔφιππος, καθάπτω.
- The Attic dialect in particular is marked by contractions: two vowels without an intervening consonant were merged in a single syllable; for instance uncontracted (disyllabic) ⟨εα⟩ ([e.a]) occurs regularly in dialects but contracts to ⟨η⟩ in Attic, supporting the view that η was pronounced [ɛː] (intermediate between [e] and [a]) rather than [i] as in Modern Greek. Similarly, uncontracted ⟨εε⟩, ⟨οο⟩ ([e.e], [o.o]) occur regularly in Ionic but contract to ⟨ει⟩ and ⟨ου⟩ in Attic, suggesting [eː], [oː] values for the spurious diphthongs ⟨ει⟩ and ⟨ου⟩ in Attic as opposed to the [i] and [u] sounds they later acquired.
Morphophonological alternations like the above are often treated differently in non-standard spellings than in standardised literary spelling. This may lead to doubts about the representativeness of the literary dialect and may in some cases force slightly different reconstructions than if one were only to take the literary texts of the high standard language into account. Thus, e.g.:
- non-standard epigraphical spelling sometimes indicates assimilation of final ⟨κ⟩ to ⟨γ⟩ before voiced consonants in a following word, or of final ⟨κ⟩ to ⟨χ⟩ before aspirated sounds, in words like ἐκ.
The metres used in Classical Greek poetry are based on the patterns of light and heavy syllables, and can thus sometimes provide evidence as to the length of vowels where this is not evident from the orthography. By the 4th century AD poetry was normally written using stress-based metres, suggesting that the distinctions between long and short vowels had been lost by then, and the pitch accent had been replaced by a stress accent.
Some ancient grammarians attempt to give systematic descriptions of the sounds of the language. In other authors one can sometimes find occasional remarks about correct pronunciation of certain sounds. However, both types of evidence are often difficult to interpret, because the phonetic terminology of the time was often vague, and it is often not clear in what relation the described forms of the language stand to those which were actually spoken by different groups of the population.
Important ancient authors include:
Sometimes the comparison of standard Attic Greek with the written forms of other Greek dialects, or the humorous renderings of dialectal speech in Attic theatrical works, can provide hints as to the phonetic value of certain spellings. An example of this treatment with Spartan Greek is given above.
The spelling of Greek loanwords in other languages and vice versa can provide important hints about pronunciation. However, the evidence is often difficult to interpret or indecisive. The sounds of loanwords are often not taken over identically into the receiving language. Where the receiving language lacks a sound that corresponds exactly to that of the source language, sounds are usually mapped to some other, similar sound.
In this regard, Latin is of great value for the reconstruction of Ancient Greek phonology: its close proximity to the Greek world led to numerous Greek words being borrowed by the Romans. At first, Greek loanwords denoting technical terms or proper names which contained the letter Φ were imported into Latin with the spelling P or PH, indicating an effort to imitate, albeit imperfectly, a sound that Latin lacked. Later on, in the first centuries AD, spellings with F start to appear in such loanwords, signaling the onset of the fricative pronunciation of Φ. Thus, in the 2nd century AD, Filippus replaces P(h)ilippus. At about the same time, the letter F also begins to be used as a substitute for the letter Θ, for lack of a better choice, indicating that the sound of Greek theta had become a fricative as well.
For the purpose of borrowing certain other Greek words, the Romans added the letters Y and Z to the Latin alphabet, taken directly from the Greek one. These additions are important as they show that the Romans had no symbols to represent the sounds of the letters Υ and Ζ in Greek, which means that in these cases no known sound of Latin can be used to reconstruct the Greek sounds.
Latin often wrote ⟨i u⟩ for Greek ⟨ε ο⟩. This can be explained by the fact that Latin /i u/ were pronounced as near-close [ɪ ʊ], and therefore were as similar to the Ancient Greek mid vowels /e o/ as to the Ancient Greek close vowels /i u/.
- Φιλουμένη > Philumina
- ἐμπόριον > empurium
Sanskrit, Persian, and Armenian also provide evidence.
The quality of short /a/ is shown by some transcriptions between Ancient Greek and Sanskrit. Greek short /a/ was transcribed with Sanskrit long ā, not with Sanskrit short a, which had a closer pronunciation: [ə]. Conversely, Sanskrit short a was transcribed with Greek ε.
- Gr ἀπόκλιμα [apóklima] > Skt āpoklima- [aːpoːklimə] (an astrological term)
- Skt brāhmaṇa- [bɽaːɦməɳə] > Gr ΒΡΑΜΕΝΑΙ
Comparison with older alphabets
The Greek alphabet developed from the older Phoenician alphabet. It may be assumed that the Greeks tended to assign to each Phoenician letter that Greek sound which most closely resembled the Phoenician sound. But, as with loanwords, the interpretation is not straightforward.
Comparison with younger/derived alphabets
The Greek alphabet was in turn the basis of other alphabets, notably the Etruscan and Coptic and later the Armenian, Gothic, and Cyrillic. Similar arguments can be derived in these cases as in the Phoenician-Greek case.
For example, in Cyrillic, the letter В (ve) stands for [v], confirming that beta was pronounced as a fricative by the 9th century AD, while the new letter Б (be) was invented to note the sound [b]. Conversely, in Gothic, the letter derived from beta stands for [b], so in the 4th century AD beta may still have been a plosive in Greek, although according to evidence from the Greek papyri of Egypt, beta as a stop had generally been replaced by a voiced bilabial fricative [β] by the first century AD.
Comparison with Modern Greek
Any reconstruction of Ancient Greek needs to take into account how the sounds later developed towards Modern Greek, and how these changes could have occurred. In general, the changes between the reconstructed Ancient Greek and Modern Greek are assumed to be unproblematic in this respect by historical linguists, because all the relevant changes (spirantization, chain-shifts of long vowels towards [i], loss of initial [h], restructuring of vowel-length and accentuation systems, etc.) are of types that are cross-linguistically frequently attested and relatively easy to explain.
Comparative reconstruction of Indo-European
Systematic relationships between sounds in Greek and sounds in other Indo-European languages are taken as strong evidence for reconstruction by historical linguists, because such relationships indicate that these sounds may go back to an inherited sound in the proto-language.
History of the reconstruction of ancient pronunciation
Until the 15th century (during the time of the Byzantine Greek Empire) ancient Greek texts were pronounced exactly like contemporary Greek when they were read aloud. From about 1486, various scholars (notably Antonio of Lebrixa, Girolamo Aleandro, and Aldus Manutius) judged that this pronunciation appeared to be inconsistent with the descriptions handed down by ancient grammarians, and suggested alternative pronunciations.
Johann Reuchlin, the leading Greek scholar in the West around 1500, had taken his Greek learning from Byzantine émigré scholars, and continued to use the modern pronunciation. This pronunciation system was called into question by Desiderius Erasmus (1466–1536), who in 1528 published De recta Latini Graecique sermonis pronuntiatione dialogus, a philological treatise clothed in the form of a philosophical dialogue, in which he developed the idea of a historical reconstruction of ancient Latin and Greek pronunciation. The two models of pronunciation soon became known, after their principal proponents, as the "Reuchlinian" and the "Erasmian" system, or, after the characteristic vowel pronunciations, as the "iotacist" (or "itacist") and the "etacist" system, respectively.
Erasmus' reconstruction was based on a wide range of arguments, derived from the philological knowledge available at his time. In the main, he strove for a more regular correspondence of letters to sounds, assuming that different letters must have stood for different sounds, and same letters for same sounds. That led him, for instance, to posit that the various letters which in the iotacist system all denoted [i] must have had different values, and that ει, αι, οι, ευ, αυ, ου were all diphthongs with a closing offglide. He also insisted on taking the accounts of ancient grammarians literally, for instance where they described vowels as being distinctively long and short, or the acute and circumflex accents as being clearly distinguished by pitch contours. In addition, he drew on evidence from word correspondences between Greek and Latin as well as some other European languages. Some of his arguments in this direction are, in hindsight, mistaken, because he naturally lacked much of the knowledge developed through later linguistic work. Thus, he could not distinguish between Latin-Greek word relations based on loans (e.g. Φοῖβος — Phoebus) on the one hand, and those based on common descent from Indo-European (e.g. φῶρ — fūr) on the other. He also fell victim to a few spurious relations due to mere accidental similarity (e.g. Greek θύειν 'to sacrifice' — French tuer, 'to kill'). In other areas, his arguments are of quite the same kind as those used by modern linguistics, e.g. where he argues on the basis of cross-dialectal correspondences within Greek that η must have been a rather open e-sound, close to [a].
Erasmus also took great pains to assign to the members in his reconstructed system plausible phonetic values. This was no easy task, as contemporary grammatical theory lacked the rich and precise terminology to describe such values. In order to overcome that problem, Erasmus drew upon his knowledge of the sound repertoires of contemporary living languages, for instance likening his reconstructed η to Scots a ([æ]), his reconstructed ου to Dutch ou ([oʊ]), and his reconstructed οι to French oi (at that time pronounced [oɪ]).
Erasmus assigned to the Greek consonant letters β, γ, δ the sounds of voiced plosives /b/, /ɡ/, /d/, while for the consonant letters φ, θ, and χ he advocated the use of fricatives /f/, /θ/, /x/ as in Modern Greek (arguing, however, that this type of /f/ must have been different from that denoted by Latin ⟨f⟩).
The reception of Erasmus' idea among his contemporaries was mixed. Most prominent among those scholars who resisted his move was Philipp Melanchthon, a student of Reuchlin's. Debate in humanist circles continued up into the 17th century, but the situation remained undecided for several centuries. (See Pronunciation of Ancient Greek in teaching.)
The 19th century
A renewed interest in the issues of reconstructed pronunciation arose during the 19th century. On the one hand, the new science of historical linguistics, based on the method of comparative reconstruction, took a vivid interest in Greek. It soon established beyond any doubt that Greek was descended in parallel with many other languages from the common source of the Indo-European proto-language. This had important consequences for how its phonological system must be reconstructed. At the same time, continued work in philology and archeology was bringing to light an ever-growing corpus of non-standard, non-literary and non-classical Greek writings, e.g. inscriptions and later also papyri. These added considerably to what could be known about the development of the language. On the other hand, there was a revival of academic life in Greece after the establishment of the Greek state in 1830, and scholars in Greece were at first reluctant to accept the seemingly foreign idea that Greek should have been pronounced so differently from what they knew.
Comparative linguistics led to a picture of ancient Greek that more or less corroborated Erasmus' view, though with some modifications. It soon became clear, for instance, that the pattern of long and short vowels observed in Greek was mirrored in similar oppositions in other languages and thus had to be a common inheritance (see Ablaut); that Greek ⟨υ⟩ had to have been [u] at some stage because it regularly corresponded to [u] in all other Indo-European languages (cf. Gr. μῦς : Lat. mūs); that many instances of ⟨η⟩ had earlier been [aː] (cf. Gr. μήτηρ : Lat. māter); that Greek ⟨ου⟩ sometimes stood in words that had been lengthened from ⟨ο⟩ and therefore must have been pronounced [oː] at some stage (the same holds analogically for ⟨ε⟩ and ⟨ει⟩, which must have been [eː]), and so on. For the consonants, historical linguistics established the originally plosive nature of both the aspirates ⟨φ,θ,χ⟩ [pʰ, tʰ, kʰ] and the mediae ⟨β, δ, γ⟩ [b, d, ɡ], which were recognised to be a direct continuation of similar sounds in Indo-European (reconstructed *bʰ, *dʰ, *gʰ and *b, *d, *g). It was also recognised that the word-initial spiritus asper was most often a reflex of earlier *s (cf. Gr. ἑπτά : Lat. septem), which was believed to have been weakened to [h] in pronunciation. Work was also done reconstructing the linguistic background to the rules of ancient Greek versification, especially in Homer, which shed important light on the phonology regarding syllable structure and accent. Scholars also described and explained the regularities in the development of consonants and vowels under processes of assimilation, reduplication, compensatory lengthening etc.
While comparative linguistics could in this way firmly establish that a certain source state, roughly along the Erasmian model, had once obtained, and that significant changes had to have occurred later, during the development towards Modern Greek, the comparative method had less to say about the question when these changes took place. Erasmus had been eager to find a pronunciation system that corresponded most closely to the written letters, and it was now natural to assume that the reconstructed sound system was that which obtained at the time when Greek orthography was in its formative period. For a time, it was taken for granted that this would also have been the pronunciation valid for all the period of classical literature. However, it was perfectly possible that the pronunciation of the living language had begun to move on from that reconstructed system towards that of Modern Greek, possibly already quite early during antiquity.
In this context, the freshly emerging evidence from the non-standard inscriptions became of decisive importance. Critics of the Erasmian reconstruction drew attention to the systematic patterns of spelling mistakes made by scribes. These mistakes showed that scribes had trouble distinguishing between the orthographically correct spellings for certain words, for instance involving ⟨ι⟩, ⟨η⟩, and ⟨ει⟩. This provided evidence that these vowels had already begun to merge in the living speech of the period. While some scholars in Greece were quick to emphasise these findings in order to cast doubt on the Erasmian system as a whole, some western European scholars tended to downplay them, explaining early instances of such orthographical aberrations as either isolated exceptions or influences from non-Attic, non-standard dialects. The resulting debate, as it was conducted during the 19th century, finds its expression in, for instance, the works of Jannaris (1897) and Papadimitrakopoulos (1889) on the anti-Erasmian side, and of Friedrich Blass (1870) on the pro-Erasmian side.
It was not until the early 20th century and the work of G. Chatzidakis, a linguist often credited with having first introduced the methods of modern historical linguistics into the Greek academic establishment, that the validity of the comparative method and its reconstructions for Greek began to be widely accepted among Greek scholars too. The international consensus view that had been reached by the early and mid-20th century is represented in the works of Sturtevant (1940) and Allen (1987).
More recent developments
Since the 1970s and 1980s, several scholars have attempted a systematic re-evaluation of the inscriptional and papyrological evidence (Smith 1972, Teodorsson 1974, 1977, 1978; Gignac 1976; Threatte 1980, summary in Horrocks 1999). According to their results, many of the relevant phonological changes can be dated fairly early, reaching well into the classical period, and the period of the Koiné can be characterised as one of very rapid phonological change. Many of the changes in vowel quality are now dated to some time between the 5th and the 1st centuries BC, while those in the consonants are assumed to have been completed by the 4th century AD. However, there is still considerable debate over precise dating, and it is still not clear to what degree, and for how long, different pronunciation systems would have persisted side by side within the Greek speech community. The resulting majority view today is that a phonological system roughly along Erasmian lines can still be assumed to have been valid for the period of classical Attic literature, but biblical and other post-classical Koine Greek is likely to have been spoken with a pronunciation that already approached that of Modern Greek in many crucial respects.
Recently, there has been one attempt at a more radically revisionist, anti-Erasmian reconstruction, proposed by the theologian and philologist Chrys Caragounis, Professor Emeritus at Lund University (1995, 2004). On the basis of the inscriptional record, Caragounis dates virtually all relevant vowel changes to the early classical period or before. He relies heavily upon Threatte and Gignac for data from the papyri, but he provides little if any actual interaction with their own markedly different analyses of the very same historical data. He also argues for a very early fricative status of the aspirates and mediae, and casts doubt on the validity of the vowel-length and accent distinctions in the spoken language in general. These views are currently isolated within the field.
- Roger D. Woodard (2008), "Greek dialects", in: The Ancient Languages of Europe, ed. R. D. Woodard, Cambridge: Cambridge University Press, p. 51.
- Allen 1987, pp. xii-xvi, introduction: dialectal nature of Greek
- Allen 1987, pp. 48–51
- Sihler 1995, pp. 7–12, §12-15: history of Greek, dialects and their use
- Smyth 1920, §C-E: Greek dialects, their characteristics, the regions they occurred in, and their use in literature
- Sihler 1995, pp. 149, 150, §148: assibilation in Greek
- Allen 1987, pp. 73, 74, long e from long a
- Allen 1987, pp. 66, 67, long y from oi in Boeotian
- Allen 1987, pp. 80, 81, the diphthong oi
- Allen 1987, pp. 50, 51, Aeolic digamma
- Stanford 1959, I: The Homeric dialect
- Stanford 1959, §2: digamma in Homer
- Sihler 1995, pp. 50–52, §54-56: Attic-Ionic η from *ā; Attic reversion; origin of *ā
- Allen 1987, pp. 18–29, aspirated plosives
- Allen 1987, pp. 14–18, voiceless plosives
- Allen 1987, pp. 29–32, voiced plosives
- Allen 1987, pp. 52–55, h
- Allen 1987, pp. 45, 46, the fricative s
- Allen 1987, pp. 56–59, zeta
- Allen 1987, pp. 59, 60, x, ps
- Allen 1987, pp. 41–45, on r
- Allen 1987, pp. 47–51, the semivowel w
- Allen 1987, pp. 51, 52, the semivowel y
- Allen 1987, pp. 81–84, diphthongs before other vowels
- Allen 1987, p. 62, simple vowels
- Kiparsky 1973, p. 796, Greek accentual mobility and contour accents
- Found only as the second element of diphthongs.
- Allen 1987, pp. 62, 63, the vowel a
- Allen 1987, p. 65, the vowel i
- Allen 1987, pp. 65–69, upsilon
- Allen 1987, pp. 75–79, ou ō
- Allen 1987, pp. 69–75, ē and ei
- Sturtevant 1940, p. 34
- Allen 1987, pp. 63, 64, short mid vowels
- Allen 1978, pp. 47–49, long and short vowel quality
- Smyth 1920, §37: compensatory lengthening
- Smyth 1920, §48-59: contraction
- Smyth 1920, §6: ei ou, spurious and genuine diphthongs
- Friedrich Blass, Pronunciation of Ancient Greek, Cambridge University Press, 1890, p. 22; Anne H. Groton, From Alpha to Omega: A Beginning Course in Classical Greek, Hackett Publishing, 2013, p. 4.
- Allen 1987, p. 79, short diphthongs
- Allen 1987, pp. 84–88, long diphthongs
- Allen 1987, p. 21, doubling of aspirates
- Allen 1987, pp. 35–39
- Smyth 1920, §138, 140: syllables, vowels, and intervocalic consonants
- Allen 1987, pp. 104, 105, terms for syllable quantity
- Allen 1973, pp. 53–55, heavy or long versus light or short
- Allen 1987, pp. 105, 106, syllable division
- Allen 1987, pp. 106–110, correptio Attica
- Allen 1973, pp. 210–216, syllable weight before consonant sequences inside words
- Goldstein 2014
- Allen 1987, pp. 116–124, accent
- Smyth 1920, §161
- Smyth 1920, §156: the circumflex and its pronunciation
- Robins 1993, p. 50
- Allen 1987, pp. 124–126, accent marks and their meanings
- Sihler 1995, pp. 168–170, §170: debuccalized initial s in Greek
- Sihler 1995, pp. 170, 171, §171: s in initial clusters with a sonorant
- Sihler 1995, pp. 169, 170, §169: unchanged s in Greek
- Sihler 1995, pp. 187, 188, §191: y in initial position
- Sihler 1995, pp. 171, 172, §172: intervocalic s
- Smyth 1920, §112
- Smyth 1920, §114
- Allen 1987, pp. 60, 61, ττ/σσ
- Sihler 1995, §154: reflexes of palatals, plain velars, and labiovelars in Greek, Italic, and Germanic
- Sihler 1995, pp. 160–164, §161-164 A: examples of reflexes of labiovelar stops in Greek; remarks on them
- Smyth 1920, §9 D: footnote on loss of rough breathing
- παρσένος, σιά, σιώ, σῦμα. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project
- Allen 1987, pp. 23–26, development of aspirated stops to fricatives
- Smyth 1920, §30, 30 D: Attic η ᾱ; footnote on Doric, Aeolic, and Ionic
- Aristotle, 1456b
- Dionysius, §6
- Allen 1987, p. 19, Ancient Greek terminology for consonants
- Allen, William Sidney (1973). Accent and Rhythm: Prosodic features of Latin and Greek (3rd ed.). Cambridge University Press. ISBN 0-521-20098-9.
- Allen, William Sidney (1987). Vox Graeca: the pronunciation of Classical Greek (3rd ed.). Cambridge University Press. ISBN 0-521-33555-8.
- Allen, William Sidney (1978). Vox Latina—a Guide to the Pronunciation of Classical Latin (2nd ed.). Cambridge University Press. ISBN 0-521-37936-9.
- C. C. Caragounis (1995): "The error of Erasmus and un-Greek pronunciations of Greek". Filologia Neotestamentaria 8 (16).
- C. C. Caragounis (2004): Development of Greek and the New Testament, Mohr Siebeck (ISBN 3-16-148290-5).
- A.-F. Christidis ed. (2007), A History of Ancient Greek, Cambridge University Press (ISBN 0-521-83307-8): A. Malikouti-Drachmann, "The phonology of Classical Greek", 524-544; E. B. Petrounias, "The pronunciation of Ancient Greek: Evidence and hypotheses", 556-570; idem, "The pronunciation of Classical Greek", 556-570.
- Bakker, Egbert J., ed. (2010). A Companion to the Ancient Greek Language. Wiley-Blackwell. ISBN 978-1-4051-5326-3.
- Beekes, Robert (2010). Etymological Dictionary of Greek. With the assistance of Lucien van Beek. In two volumes. Leiden, Boston. ISBN 9789004174184.
- Devine, Andrew M.; Stephens, Laurence D. (1994). The Prosody of Greek Speech. Oxford University Press. ISBN 0-19-508546-9.
- G. Horrocks (1997): Greek: A History of the Language and Its Speakers. London: Addison Wesley (ISBN 0-582-30709-0).
- F.T. Gignac (1976): A Grammar of the Greek Papyri of the Roman and Byzantine Periods. Volume 1: Phonology. Milan: Istituto Editoriale Cisalpino-La Goliardica.
- Goldstein, David (2014). "Phonotactics". Encyclopedia of Ancient Greek Language and Linguistics. 3. Brill. pp. 96, 97. Retrieved 19 January 2015 – via academia.edu.
- C. Karvounis (2008): Aussprache und Phonologie im Altgriechischen ("Pronunciation and Phonology in Ancient Greek"). Darmstadt: Wissenschaftliche Buchgesellschaft (ISBN 978-3-534-20834-0).
- M. Lejeune (1972): Phonétique historique du mycénien et du grec ancien ("Historical phonetics of Mycenean and Ancient Greek"), Paris: Librairie Klincksieck (reprint 2005, ISBN 2-252-03496-3).
- H. Rix (1992): Historische Grammatik des Griechischen. Laut- und Formenlehre ("Historical Grammar of Greek. Phonology and Morphology"), Darmstadt: Wissenschaftliche Buchgesellschaft (2nd edition, ISBN 3-534-03840-1).
- Robins, Robert Henry (1993). The Byzantine Grammarians: Their Place in History. Berlin: Mouton de Gruyter. ISBN 9783110135749. Retrieved 23 January 2015 – via Google Books.
- Sihler, Andrew Littleton (1995). New Comparative Grammar of Greek and Latin. New York, Oxford: Oxford University Press. ISBN 0-19-508345-8.
- R. B. Smith (1972): Empirical evidences and theoretical interpretations of Greek phonology: Prolegomena to a theory of sound patterns in the Hellenistic Koine, Ph.D. diss. Indiana University.
- S.-T. Teodorsson (1974): The phonemic system of the Attic dialect 400-340 BC. Göteborg: Acta Universitatis Gothoburgensis (ASIN B0006CL51U).
- S.-T. Teodorsson (1977): The phonology of Ptolemaic Koine (Studia Graeca et Latina Gothoburgensia), Göteborg (ISBN 91-7346-035-4).
- S.-T. Teodorsson (1978): The phonology of Attic in the Hellenistic period (Studia Graeca et Latina Gothoburgensia), Göteborg: Acta Universitatis Gothoburgensis (ISBN 91-7346-059-1).
- L. Threatte (1980): The Grammar of Attic Inscriptions, vol. 1: Phonology, Berlin: de Gruyter (ISBN 3-11-007344-7).
- G. Babiniotis: Ιστορική Γραμματεία της Αρχαίας Ελληνικής Γλώσσας, 1. Φωνολογία ("Historical Grammar of the Ancient Greek Language: 1. Phonology")
- F. Blass (1870): Über die Aussprache des Griechischen, Berlin: Weidmannsche Buchhandlung.
- I. Bywater, The Erasmian Pronunciation of Greek and its Precursors, Oxford: 1908. Defends Erasmus from the claim that he hastily wrote his Dialogus based on a hoax. Mentions Erasmus's predecessors Jerome Aleander, Aldus Manutius, and Antonio of Lebrixa. Short review in The Journal of Hellenic Studies 29 (1909), p. 133. JSTOR 624654.
- E. A. S. Dawes (1894): The Pronunciation of Greek aspirates, D. Nutt.
- E. M. Geldart (1870): The Modern Greek Language In Its Relation To Ancient Greek (reprint 2004, Lightning Source Inc. ISBN 1-4179-4849-3).
- G. N. Hatzidakis (1902): Ἀκαδημαϊκὰ ἀναγνώσματα: ἡ προφορὰ τῆς ἀρχαίας Ἑλληνικῆς ("Academic Studies: The pronunciation of Ancient Greek").
- Jannaris, A. (1897). An Historical Greek Grammar Chiefly of the Attic Dialect As Written and Spoken From Classical Antiquity Down to the Present Time. London: MacMillan.
- Kiparsky, Paul (1973). "The Inflectional Accent in Indo-European". Language. Linguistic Society of America. 49 (4): 794–849. doi:10.2307/412064. JSTOR 412064.
- A. Meillet (1975) Aperçu d'une histoire de la langue grecque, Paris: Librairie Klincksieck (8th edition).
- A. Meillet & J. Vendryes (1968): Traité de grammaire comparée des langues classiques, Paris: Librairie Ancienne Honoré Champion (4th edition).
- Papadimitrakopoulos, Th. (1889). Βάσανος τῶν περὶ τῆς ἑλληνικῆς προφορᾶς Ἐρασμικῶν ἀποδείξεων [Critique of the Erasmian evidence regarding Greek pronunciation]. Athens.
- E. Schwyzer (1939): Griechische Grammatik, vol. 1, Allgemeiner Teil. Lautlehre. Wortbildung. Flexion, München: C.H. Beck (repr. 1990 ISBN 3-406-01339-2).
- Smyth, Herbert Weir (1920). A Greek Grammar for Colleges. American Book Company – via Perseus Project.
- Stanford, William Bedell (1959). "Introduction, Grammatical Introduction". Homer: Odyssey I-XII. 1 (2nd ed.). Macmillan Education Ltd. pp. IX–LXXXVI. ISBN 1-85399-502-9.
- W. B. Stanford (1967): The Sound of Greek.
- Sturtevant, E. H. (1940). The Pronunciation of Greek and Latin (2nd ed.). Philadelphia.
Ancient Greek sources
All speech consists of these categories: element [letter], syllable, conjunction, noun, verb, inflection, phrase.
A letter is an indivisible sound — not any sound, but a sound from which a compound sound [syllable] can naturally be made, since the sounds of animals are also indivisible, and I call none of them a letter. The categories of sound are sounding [vowels], half-sounding [semivowels: fricatives and sonorants], and unsounded [silent or mute: stop].
These categories are the vowel, which has audible sound but no contact [between lips or between tongue and the inside of the mouth]; the semivowel, which has audible sound and contact (for example s and r); and the mute, which has contact and no sound by itself, becoming audible only with [letters] that have a sound (for example g and d).
[Letters] differ in the shape of the mouth and place [in the mouth], in thickness and thinness [aspiration and unaspiration], in length and shortness — and still more in sharpness and depth and middle [high and low pitch, and pitch between the two]: but theorizing about these things in detail is the job of those who study [poetic] meter.
Τῆς δὲ λέξεως ἁπάσης τάδ᾽ ἐστὶ τὰ μέρη, στοιχεῖον συλλαβὴ σύνδεσμος ὄνομα ῥῆμα ἄρθρον πτῶσις λόγος.
Στοιχεῖον μὲν οὖν ἐστιν φωνὴ ἀδιαίρετος, οὐ πᾶσα δὲ ἀλλ᾽ ἐξ ἧς πέφυκε συνθετὴ γίγνεσθαι φωνή· καὶ γὰρ τῶν θηρίων εἰσὶν ἀδιαίρετοι φωναί, ὧν οὐδεμίαν λέγω στοιχεῖον. Ταύτης δὲ μέρη τό τε φωνῆεν καὶ τὸ ἡμίφωνον καὶ ἄφωνον.
Ἔστιν δὲ ταῦτα φωνῆεν μὲν <τὸ> ἄνευ προσβολῆς ἔχον φωνὴν ἀκουστήν, ἡμίφωνον δὲ τὸ μετὰ προσβολῆς ἔχον φωνὴν ἀκουστήν, οἷον τὸ Σ καὶ τὸ Ρ, ἄφωνον δὲ τὸ μετὰ προσβολῆς καθ᾽ αὑτὸ μὲν οὐδεμίαν ἔχον φωνήν, μετὰ δὲ τῶν ἐχόντων τινὰ φωνὴν γινόμενον ἀκουστόν, οἷον τὸ Γ καὶ τὸ Δ.
ταῦτα δὲ διαφέρει σχήμασίν τε τοῦ στόματος καὶ τόποις καὶ δασύτητι καὶ ψιλότητι καὶ μήκει καὶ βραχύτητι ἔτι δὲ ὀξύτητι καὶ βαρύτητι καὶ τῷ μέσῳ: περὶ ὧν καθ᾽ ἕκαστον ἐν τοῖς μετρικοῖς προσήκει θεωρεῖν.
- Dionysius Thrax. "ς´ περὶ στοιχείου" [6. On the Sound]. Ars Grammatica (Τέχνη Γραμματική) [Art of Grammar] (in Koine Greek). Retrieved 20 May 2016 – via The Internet Archive.
There are 24 letters, from a to ō.... Letters are also called elements [of speech] because they have an order and classification.
Of these, seven are vowels: a, e, ē, i, o, y, ō. They are called vowels because they form a complete sound by themselves.
Two of the vowels are long (ē and ō), two are short (e and o), and three are two-timed (a i y). They are called two-timed since they can be lengthened and shortened.
Five are prefixable vowels: a, e, ē, o, ō. They are called prefixable because they form a complete syllable when prefixed before i and y: for instance, ai au. Two are suffixable: i and y. And y is sometimes prefixable before i, as in myia and harpyia.
Six are diphthongs: ai au ei eu oi ou.
The remaining seventeen letters are consonants [pronounced-with]: b, g, d, z, th, k, l, m, n, x, p, r, s, t, ph, kh, ps. They are called consonants because they do not have a sound on their own, but they form a complete sound when arranged with vowels.
Of these, eight are semivowels: z, x, ps, l, m, n, r, s. They are called semivowels, because, though a little weaker than the vowels, they still sound pleasant in hummings and hissings.
Nine are mutes: b, g, d, k, p, t, th, ph, kh. They are called mute, because, more than the others, they sound bad, just as we call a performer of tragedy who sounds bad voiceless. Three of these are thin (k, p, t), three are thick (th, ph, kh), and three of them are middle [intermediate] (b, g, d). They are called middle, because they are thicker than the thin [mutes], but thinner than the thick [mutes]. And b is [the mute] between p and ph, g between k and kh, and d between th and t.
The thick [mutes] alternate with the thin ones, ph with p, as in [an example from the Odyssey]; kh with k: [another example from the Odyssey]; th with t: [an example from the Iliad].
γράμματά ἐστιν εἰκοσιτέσσαρα ἀπο τοῦ α μέχρι τοῦ ω.... τὰ [γράμματα] δὲ αὐτὰ καὶ στοιχεῖα καλεῖται διὰ τὸ ἔχειν στοῖχόν τινα καὶ τάξιν.
τούτων φωνήεντα μέν ἐστιν ἑπτά· α ε η ι ο υ ω. φωνήεντα δὲ λέγεται, ὅτι φωνὴν ἀφ᾽ ἑαυτῶν ἀποτελεῖ....
τῶν δὲ φωνηέντων μακρὰ μέν ἐστι δύο, η καὶ ω, βραχέα δύο, ε καὶ ο, δίχρονα τρία, α ι υ. δίχρονα δὲ λέγεται, ἐπεὶ ἐκτείνεται καί συστέλλεται.
προτακτικὰ φωνήεντα πέντε· α ε η ο ω. προτακτικὰ δὲ λέγεται, ὅτι προτασσόμενα τοῦ ι καὶ υ συλλαβὴν ἀποτελεῖ, οἷον αι αυ. ὑποτακτικὰ δύο· ι καὶ υ. καὶ τὸ υ δὲ ἐνιότε προτακτικόν ἐστι τοῦ ι, ὡς ἐν τῶι μυῖα καὶ ἅρπυια.
δίφθογγοι δέ εἰσιν ἕξ· αι αυ ει ευ οι ου.
σύμφωνα δὲ τὰ λοιπὰ ἑπτακαίδεκα· β γ δ ζ θ κ λ μ ν ξ π ρ σ τ φ χ ψ. σύμφωνα δὲ λέγονται, ὅτι αὐτὰ μὲν καθ᾽ ἑαυτὰ φωνὴν οὐκ ἔχει, συντασσόμενα δὲ μετὰ τῶν φωνηέντων φωνὴν ἀποτελεῖ.
τούτων ἡμίφωνα μέν ἐστιν ὀκτώ· ζ ξ ψ λ μ ν ρ σ. ἡμίφωνα δὲ λέγεται, ὅτι παρ᾽ ὅσον ἧττον τῶν φωνηέντων εὔφωνα καθέστηκεν ἔν τε τοῖς μυγμοῖς καὶ σιγμοῖς.
ἄφωνα δέ ἐστιν ἐννέα· β γ δ κ π τ θ φ χ. ἄφωνα δὲ λέγεται, ὅτι μᾶλλον τῶν ἄλλων ἐστὶν κακόφωνα, ὥσπερ ἄφωνον λέγομεν τὸν τραγωιδὸν τὸν κακόφωνον. τούτων ψιλὰ μέν ἐστι τρία, κ π τ, δασέα τρία, θ φ χ, μέσα δὲ τούτων τρία, β γ δ. μέσα δὲ εἴρηται, ὅτι τῶν μὲν ψιλῶν ἐστι δασύτερα, τῶν δὲ δασέων ψιλότερα. καὶ ἔστι τὸ μὲν β μέσον τοῦ π καὶ φ, τὸ δὲ γ μέσον τοῦ κ καὶ χ, τὸ δὲ δ μέσον τοῦ θ καὶ τ.
ἀντιστοιχεῖ δὲ τὰ δασέα τοῖς ψιλοῖς, τῶι μὲν π τὸ φ, οὕτως·
τῶι δὲ κ τὸ χ·
τὸ δὲ θ τῶι τ·
In addition, three consonants are double: z, x, ps. They are called double because each one of them is made up of two consonants: z from s and d, x from k and s, and ps from p and s.
There are four unchangeable [consonants]: l, m, n, r. They are called unchangeable because they do not change in the future [tense]s of verbs and in the declensions of nouns. They are also called liquids.
ἔτι δὲ τῶν συμφώνων διπλᾶ μέν ἐστι τρία· ζ ξ ψ. διπλᾶ δὲ εἴρηται, ὅτι ἓν ἕκαστον αὐτῶν ἐκ δύο συμφώνων σύγκειται, τὸ μὲν ζ ἐκ τοῦ σ καὶ δ, τὸ δὲ ξ ἐκ τοῦ κ καὶ σ, τὸ δὲ ψ ἐκ τοῦ π καὶ σ.
ἀμετάβολα τέσσαρα· λ μ ν ρ. ἀμετὰβολα δὲ λέγεται, ὅτι οὐ μεταβάλλει ἐν τοῖς μέλλουσι τῶν ῥημάτων οὐδὲ ἐν ταῖς κλίσεσι τῶν ὀνομάτων. τὰ δὲ αὐτὰ καὶ ὑγρὰ καλεῖται.
- University of California, Berkeley: Practice of Ancient Greek pronunciation
- Society for the oral reading of Greek and Latin Literature: Recitation of classics books
- Desiderius Erasmus, De recta Latini Graecique sermonis pronuntiatione dialogus (alternative link) (in Latin)
- Brian Joseph, Ancient Greek, Modern Greek
- Harry Foundalis, Greek Alphabet and pronunciation
- Carl W. Conrad, A Compendium of Ancient Greek Phonology: about phonology strictly speaking, and not phonetics
- Randall Buth, Ἡ κοινὴ προφορά: Notes on the Pronunciation System of Phonemic Koine Greek
- Chrys C. Caragounis, The error of Erasmus and un-Greek pronunciations of Greek
- Sidney Allen, Vox Graeca (only a preview available, but still useful).
The surface area of a shape is the sum of the areas of all of its faces. To find the surface area of a cylinder, you need to find the area of its two bases and add it to its lateral area, the area of its curved side. You can find the surface area of a cylinder by using the simple formula, A = 2πr2 + 2πrh.
1. Find the radius of one of the bases. The bases of a cylinder have the same size and area, so it doesn't matter which base you use. In this example, the radius of the base is 3 centimeters (1.2 in). Write it down. If you're only given the diameter, divide it by 2. If you're only given the circumference, divide it by 2π.
2. Find the area of the base. To find the area of the base, just plug the radius, 3 centimeters (1.2 in), into the equation for finding the area of a circle: A = πr2. Here's how you do it:
- A = πr2
- A = π x 32
- A = π x 9 ≈ 28.26 cm2 (using π ≈ 3.14)
3. Double the result to get the area of the top and bottom circles. Simply multiply the previous result, 28.26 cm2, by 2 to get the area of both bases. 28.26 x 2 = 56.52 cm2. This is the area of both bases.
4. Find the circumference of one of the circles. You'll need to find the circumference to find the lateral area, or the area of the sides of the cylinder. To get the circumference, simply multiply the radius by 2π. So, the circumference can be found by multiplying 3 centimeters (1.2 in) by 2π. 3 centimeters (1.2 in) x 2π = 18.84 centimeters (7.4 in).
5. Multiply the circumference of the circle by the height of the cylinder. This will give you the lateral surface area. Multiply the circumference, 18.84 centimeters (7.4 in), by the height, 5 centimeters (2.0 in). 18.84 centimeters (7.4 in) x 5 centimeters (2.0 in) = 94.2 cm2.
6. Add the lateral area and the base area. Once you add up the area of the two bases and the lateral area, you will have found the surface area of the cylinder. All you have to do is add 56.52 cm2, the area of both bases, and the lateral area, 94.2 cm2. 56.52 cm2 + 94.2 cm2 = 150.72 cm2. The surface area of a cylinder with a height of 5 centimeters (2.0 in) and a circular base with a radius of 3 centimeters (1.2 in) is 150.72 cm2.
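The calculation above can be wrapped in a short function. The sketch below is illustrative (not part of the original steps) and uses the exact value of π, so it returns roughly 150.8 cm2 for the worked example instead of the 150.72 cm2 obtained with π ≈ 3.14.

```python
import math

def cylinder_surface_area(radius, height):
    """Total surface area = two circular bases + lateral (curved side) area."""
    base_area = math.pi * radius ** 2             # area of one base
    lateral_area = 2 * math.pi * radius * height  # circumference x height
    return 2 * base_area + lateral_area

# Worked example: r = 3 cm, h = 5 cm
print(round(cylinder_surface_area(3, 5), 2))  # 150.8
```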
No object in space is more mysterious—and more psychologically menacing—than a black hole. Once known as a frozen star, a black hole is formed when a massive star burns out and collapses upon itself, ultimately producing gravitational energy so powerful that not even light can escape from it. Although physicists can infer the existence of black holes in space, they cannot directly observe them. Yet making mini black holes may be possible when the world’s largest particle accelerator—the Large Hadron Collider (LHC)—goes online outside Geneva, Switzerland. At the heart of the new machine is a phenomenal 17-mile circular tunnel where particles will smash together at nearly the speed of light, producing temperatures 100,000 times hotter than the core of the sun. Physicists will observe the collisions not only for clues to fundamental constituents of matter, hidden dimensions, and the elusive Higgs boson—the hypothetical particle that gives matter its heft—but also for tiny black holes winking in and out of existence.
But a couple of Jeremiahs would halt the fireworks before they begin. A lawsuit filed in U.S. district court in Honolulu seeks to halt the opening of the accelerator, which is funded in part by the Department of Energy and the National Science Foundation. A similar suit was filed in 2000 against the Brookhaven National Laboratory to prevent the operation of the Relativistic Heavy Ion Collider, an accelerator that started up that year. The charge, then as now, is that microscopic black holes produced at the collider might coalesce and engulf the earth, ending all life as we know it. LHC scientists have publicly dismissed the lawsuit as bunkum while quietly double-checking their math just to be sure. DISCOVER asked Brown University physicist Greg Landsberg, who is involved in experiments at the LHC, if we should lose any sleep over the matter.
First off, how might microscopic black holes be produced at the LHC?
When too much matter is put into too small a space, it collapses under its own gravity and forms a black hole. That’s what is happening when astronomical black holes are formed. Now, the LHC doesn’t really create much matter, but it does put a lot of energy in a very small volume, and Einstein showed that for a moving particle, the energy, not the mass, governs gravitational attraction. You might create black holes at the LHC when two particles pass very close to each other, if the gravitational interaction between them is strong enough. But this is possible only in certain models that predict the existence of extra dimensions.
What is the connection between extra dimensions and black holes?
Black hole production requires a strong gravitational attraction. But gravity is much weaker than other forces, such as electromagnetism. One way of remedying this problem is to assume the existence of extra dimensions in space accessible to the carrier of gravitational force—called the graviton—but not accessible to other particles, such as quarks, electrons, and photons. If this is the case, gravity may be fundamentally strong but still appear weak to us, as the gravitons spend most of their time in the extra space and rarely cross into our world.
Imagine a very long and thin straw. If you are observing it from far away, you don’t really resolve the fact that the straw has the second curled-up dimension, its circumference. The straw appears to you as a line—that is, one-dimensional. However, if you approach the straw at a distance comparable to its radius, you would start resolving its second dimension and see that it is truly two-dimensional. Pretty much the same way, when two particles are close to one another, they start feeling gravity from extra dimensions and thus feel the true, undiluted gravitational pull. That’s basically the framework in which it turns out that black hole production at the LHC is a possibility. But one should understand that this is just one model. Whether it’s true or not is anybody’s guess.
How would microscopic black holes be observed?
They would emit light that is much, much hotter than, say, light coming from the stars or sun, because their temperature is many orders of magnitude greater. They would emit high-energy gamma rays, and they could emit all sorts of species of particles, such as electrons and muons, that we could detect.
Can we be sure that a black hole created at the LHC wouldn't expand and swallow the earth?
I think the honest answer to this question is yes. The black holes that would be produced at the LHC must also be produced by the hundreds every day due to energetic cosmic rays bombarding our earth. When cosmic rays smash into particles of earth material, it’s the same type of collision that happens in the LHC. So the very fact that we exist here on earth to talk about these things tells us that even if black holes are produced, pretty much everything is very safe. Either black holes are not produced at all, or they decay very, very quickly due to Hawking radiation or an equivalent mechanism.
What exactly is Hawking radiation?
Stephen Hawking showed in the early 1970s that black holes are not completely black. They have a slight tint of gray, if you will. That means black holes do not just suck everything in—or accrete, as they call it scientifically—but in fact they must radiate some energy out. This process is known as Hawking radiation.
The intensity of Hawking radiation is determined by the temperature of the black hole. The higher it is, the more intense the radiation is, just like a hot bar of metal emits much more heat than a cold one. Now it turns out that the temperature of the black hole is inversely proportional to its mass. The more massive a black hole, the cooler it is. Thus small black holes are very hot and radiate a lot, while large, astronomical black holes are extremely cold and barely radiate at all. The black holes found in the universe are so cold that it takes forever for them to evaporate, many orders of magnitude longer than the age of the universe.
By contrast, black holes at the LHC would live only a fraction of a second before they radiate their mass away. This is not long enough for them to accrete anything before they disappear in a blast of radiation. These black holes would evaporate almost instantaneously, without moving by more than the size of the atomic nucleus.
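To make the scaling concrete, here is a small Python sketch of the standard Hawking-temperature formula (temperature inversely proportional to mass) and the rough evaporation-time estimate (proportional to the cube of the mass). The masses used below are illustrative assumptions, not figures from the interview.

```python
import math

# Physical constants in SI units
G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23

def hawking_temperature(mass_kg):
    """Hawking temperature in kelvin; lighter black holes are hotter (T scales as 1/M)."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

def evaporation_time(mass_kg):
    """Rough evaporation time in seconds (scales as the cube of the mass)."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

print(f"{hawking_temperature(2e30):.1e} K")  # ~6e-08 K for a solar-mass hole: colder than deep space
print(f"{evaporation_time(1e-15):.1e} s")    # a hypothetical microscopic hole vanishes almost instantly
```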
Is it possible to quantify the chance of something catastrophic occurring at the LHC?
The probability is never equal to zero in quantum mechanics, but you don’t worry about it if the probability is very small. There is some probability that all the air molecules in your room will suddenly cross over and end up on one half of the room and you won’t be able to breathe. But we are talking about risk management here, and I think people should be worried about probabilities that are large.
If black holes are detected at the LHC, what would it mean for physics?
Above all, it would probably help us build a quantum theory of gravity, the one force that hasn’t really been explained by quantum mechanics. We have very little understanding of what the quantum theory of gravity looks like, and producing these black holes at the LHC would probably be as close as you could get to approaching the answer to this question.
Comparing Fractions Anchor Chart
Fraction comparison is an essential skill that students need to develop in their early math education. Understanding how fractions relate to one another in terms of size and value is crucial for performing various mathematical operations. One effective tool for teaching and reinforcing fraction comparison is the anchor chart. In this article, we will explore the concept of comparing fractions and discuss the benefits of using an anchor chart as a visual aid in the learning process.
What are Fractions?
Before delving into the intricacies of comparing fractions, it is important to establish a clear understanding of what fractions are. Fractions represent a part of a whole and are expressed as a ratio of two numbers: the numerator and the denominator. The numerator indicates the number of parts being considered, while the denominator represents the total number of equal parts that make up the whole.
Why is Comparing Fractions Important?
Comparing fractions allows us to determine which fraction is larger or smaller. This skill is essential in various real-life scenarios, such as cooking, shopping, and understanding financial concepts. Additionally, fraction comparison serves as a foundation for more advanced mathematical concepts, including addition, subtraction, multiplication, and division of fractions.
The Challenges of Comparing Fractions
Comparing fractions can be challenging for students due to several reasons:
- Different denominators: Fractions with different denominators cannot be directly compared. Students need to find a common denominator before making a comparison.
- Equivalent fractions: Different fractions can represent the same value. It is important for students to recognize equivalent fractions to accurately compare their sizes.
- Complex numerators: Fractions with larger numerators can be misleading, as they may not always indicate a larger value. Students need to understand that the size of a fraction depends on both the numerator and the denominator.
What is an Anchor Chart?
An anchor chart is a visual representation of information that is displayed in the classroom as a reference tool for students. It is typically created by the teacher or with the participation of the students. Anchor charts serve as a visual aid to support learning and provide a quick reference for students to reinforce concepts.
The Benefits of Using an Anchor Chart for Comparing Fractions
Using an anchor chart specifically designed for comparing fractions offers several advantages:
- Visual representation: An anchor chart provides a clear visual representation of fraction comparison, making it easier for students to grasp the concept.
- Reference tool: The anchor chart serves as a reference tool that students can refer to when working on fraction comparison problems.
- Consolidation of learning: By actively participating in the creation of the anchor chart, students consolidate their understanding of fraction comparison.
- Engagement: Anchor charts are visually appealing and can capture students' attention, increasing their engagement in the learning process.
- Collaborative learning: Creating the anchor chart as a class promotes collaboration and discussion among students, allowing them to learn from one another.
Components of a Comparing Fractions Anchor Chart
A well-designed comparing fractions anchor chart should include the following components:
- Definitions: Clear definitions of key terms, such as numerator, denominator, and equivalent fractions.
- Visual representations: Examples of fraction models or number lines to visually demonstrate fraction comparison.
- Comparison symbols: The symbols used to represent greater than (>), less than (<), and equal to (=) for fraction comparison.
- Steps for comparison: A step-by-step process for comparing fractions, including finding a common denominator and simplifying fractions (see the short sketch after this list).
- Practice problems: Sample problems for students to practice comparing fractions and applying the concepts learned.
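As an illustration of the "steps for comparison" component, the sketch below rewrites two fractions over a common denominator and then compares them. It is a teaching aid only, and it assumes Python 3.9+ for math.lcm.

```python
from fractions import Fraction
from math import lcm  # Python 3.9+

def compare_fractions(a, b, c, d):
    """Compare a/b and c/d by rewriting both over a common denominator."""
    common = lcm(b, d)
    left, right = a * (common // b), c * (common // d)
    symbol = ">" if left > right else "<" if left < right else "="
    return f"{a}/{b} {symbol} {c}/{d}  ({left}/{common} vs {right}/{common})"

print(compare_fractions(2, 3, 3, 4))    # 2/3 < 3/4  (8/12 vs 9/12)
print(Fraction(2, 3) < Fraction(3, 4))  # True: the standard library agrees
```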
How to Create a Comparing Fractions Anchor Chart
Creating a comparing fractions anchor chart can be a collaborative effort between the teacher and students. Here is a step-by-step guide:
- Introduce the concept: Begin by introducing the concept of comparing fractions to the class, ensuring that students have a solid understanding of fractions.
- Discuss key terms: Review and discuss key terms related to comparing fractions, such as numerator, denominator, and equivalent fractions.
- Model comparison: Use manipulatives, fraction models, or number lines to model and demonstrate the process of comparing fractions.
- Create the anchor chart: As a class, create the anchor chart by adding the necessary components discussed earlier.
- Encourage student participation: Allow students to actively participate in creating the anchor chart by contributing ideas, examples, and explanations.
- Display the anchor chart: Once completed, display the anchor chart in a prominent place in the classroom where students can easily refer to it.
Using the Comparing Fractions Anchor Chart in the Classroom
Now that you have a well-designed comparing fractions anchor chart, here are some ways to utilize it effectively in the classroom:
- Whole-class instruction: Use the anchor chart during whole-class instruction to guide students through the process of comparing fractions.
- Small group practice: Provide small groups of students with practice problems and have them use the anchor chart as a reference tool.
- Independent work: Encourage students to refer to the anchor chart when working independently on fraction comparison tasks or assignments.
- Formative assessment: Assess students' understanding of fraction comparison by asking them to explain their thinking using the anchor chart as a reference.
- Reinforcement activities: Incorporate games, puzzles, or interactive activities that involve comparing fractions and require students to use the anchor chart.
The comparing fractions anchor chart is a valuable tool for teaching and reinforcing fraction comparison skills. By providing a visual representation, acting as a reference tool, and promoting collaborative learning, the anchor chart enhances students' understanding and engagement in the learning process. Utilize the steps outlined in this article to create an effective anchor chart that will support your students' mastery of comparing fractions.
This unit explores the fact that the properties that hold for multiplication and division of whole numbers also apply to fractions (commutative, associative, distributive, and inverse). The operations of multiplication and division are developed using a length model (Cuisenaire rods). While the size of the referent one (whole) is fixed at first, students need to be able to change the referent during the physical process.
Name the fraction for a given Cuisenaire rod with reference to one (whole) and with reference to another rod.
Fluently multiply two or more fractions.
Divide a fraction by another fraction, including when the divisor is greater than the dividend.
‘Fractions as operators’ is one of Kieren’s (1994) sub-constructs of rational number, and applies to situations in which a fraction acts on another amount. That amount might be a whole number, e.g. three quarters of 48, a decimal or percentage, e.g. one half of 10% is 5%, or another fraction, e.g. two thirds of three quarters. Students often confuse when fractions should be treated as numbers and when they should be treated as operators, particularly when creating numbers lines, e.g. place one half where 2 1/2 belongs on a number line showing zero to five.
Fractions can operate on other fractions, and the rule for fraction multiplication can be generalised, a/b x c/d = ac/bd. The commutative property holds since c/d x a/b = ac/bd as do the distributive and associative properties, though the former is mostly applied in algebra rather than number calculation. In order that the inverse property holds, i.e. division by a given number undoes multiplication and vice versa, the following must be true:
a/b x c/d x d/c = a/b because c/d x d/c = 1 so division by c/d must be equivalent to multiplication by d/c.
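These properties can be checked quickly with Python's fractions module. The sketch below is an illustration for the teacher; the unit itself develops the ideas with Cuisenaire rods, not code.

```python
from fractions import Fraction

a_b, c_d = Fraction(2, 3), Fraction(3, 4)

# Commutative: the order of the factors does not change the product
assert a_b * c_d == c_d * a_b

# Inverse: multiplying by c/d and then by d/c returns the original fraction,
# so dividing by c/d is equivalent to multiplying by d/c
assert a_b * c_d * Fraction(4, 3) == a_b
assert a_b / c_d == a_b * Fraction(4, 3)
print("commutative and inverse properties hold for 2/3 and 3/4")
```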
Specific Teaching Points
Understanding that fractions are always named with reference to a one (whole) requires flexibility of thinking. Lamon (2007) described re-unitising and norming as two essential capabilities if students are to master rational numbers and other associated forms of proportional reasoning. By re-unitising she meant that students could flexibly define a given quantity in multiple ways by changing the units they attended to. By norming she meant that the student could then act with the new unit. In this unit Cuisenaire rods are used to develop students’ skills in changing units and thinking with those units.
Multiplication of fractions involves adaptation of multiplication with whole numbers. Connecting a x b as ‘a sets of b’ (or vice versa) with a/b x c/d as ‘a b-ths of c/d’ requires students to firstly create a referent whole. That whole might be continuous, like a region or volume, or discrete like a set. Expressing both fractions in a multiplication and the answer require thinking in different units. Consider two thirds of one half (2/3 x 1/2) as modelled with Cuisenaire rods. Let the dark green rod be one, then the light green rod is one half.
So which rod is two thirds of one half? A white rod is one third of light green so the red rod must be two thirds. Notice how we are describing the red rod with reference to the light green rod.
But what do we call the red rod? To name it we need to return to the original one, the dark green rod. The white rod is one sixth so the red rod is two sixths or one third of the original one. So the answer to the multiplication is 2/3 x 1/2 = 2/6 or 1/3.
While the algorithm of ‘invert and multiply’ is easy to learn and enact understanding why the rule holds requires complex thinking. Yet the ability to re-unitise and norm are central to fluency and flexibility with rational number, so necessary for algebraic manipulation. Consider a simple case of division of a fraction by a fraction, 2/3 ÷ 1/2 = 4/3.
If the dark green rod is defined as one then the crimson rod represents two thirds and the light green rod represents one half.
If we adopt a measurement view of division rather than a sharing view, the problem 2/3 ÷ 1/2 = ? translates into “How many light green rods fit into the crimson rod?” So the light green rod becomes the new referent. The result is that one and one third light green rods make a crimson rod. Note that 1 1/3 = 4/3.
As a check that the inverse property holds with our answer we might calculate 4/3 x 1/2 = 4/6 = 2/3 in the same way that we could check 24 ÷ 3 = 8 by calculating 8 x 3 = 24.
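The worked example and its inverse check can be verified directly; this is a small answer-checking sketch, not a replacement for the rod model.

```python
from fractions import Fraction

half, two_thirds = Fraction(1, 2), Fraction(2, 3)

print(two_thirds * half)      # 1/3: two thirds of one half
print(two_thirds / half)      # 4/3: how many halves fit into two thirds
print(Fraction(4, 3) * half)  # 2/3: the inverse check above
```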
Students are unlikely to have previous experience with using Cuisenaire rods since the use of these materials to teach early number has been abandoned. Their lack of familiarity with the rods is a significant advantage as they will need to imagine splitting the referent one to solve problems. However, other units in the Cuisenaire rod fractions collection are available at level 3 and level 4.
Use Cuisenaire rods or the online tool to introduce multiplication of fractions in the following way.
Set up this diagram:
“If the blue rod is defined as one then what fraction is the dark green rod?” (two thirds)
“What coloured rod is half of two thirds?” (light green)
“Relating back to the blue rod what fraction is one half of two thirds? In other words, what fraction is the light green rod?” (one third)
“How can we write one half of two thirds as an equation?” (1/2 x 2/3 = 1/3)
Note that your students might not relate multiplication of whole numbers, e.g. 3 x 8 = 24, with ‘of’ as a function so help them to realise that.
Create this diagram:
“Let’s try a more difficult example. If the orange rod is one, what fractions are the dark green and crimson rods?” (dark green is six tenths or three fifths, crimson is four tenths or two fifths)
“So what fraction of the dark green rod is the crimson rod?” (four sixths or two thirds)
“How can we record two thirds of three fifths as an equation?” (2/3 x 3/5 = ?)
“What is the answer to the equation?” (2/3 x 3/5 = 2/5)
Record the two equations that you have results for:
1/2 x 2/3 = 1/3 and 2/3 x 3/5 = 2/5
“Can you see any pattern here? Two examples are not enough really.”
Students might notice that the same digits are on the / diagonal or that the denominator of the answer matches the denominator of the second fraction. However, there is no clear relationship in the equations since equivalence is involved.
“Our goal today is to find a pattern that allows us to solve any problem like this, where a fraction is multiplied by a fraction.”
Ask the students to work in small co-operative groups using Copymaster One. Each group will need access to a set of Cuisenaire rods or to the Cuisenaire applet on their digital devices.
Look for the following as you roam:
- Can they identify and name the fractions for each rod when told the referent one rod? To do so they will need to partition the rods into smaller equal units using the other rods that are available.
- Can they accept the third rod as a unit of measure? To do so they need to consider how many of that unit fit into the second rod. This idea is particularly challenging where the third rod is larger than the second rod.
- Can they map back to the original one to name the result of the fraction multiplication?
After a suitable period of investigation bring the class back together to discuss how they solved the problems. Record the first three solutions:
“Is there a pattern in these three examples?” Students should notice that the numerators and denominators of the factors are multiplied to give the numerator and denominator of the answer.
“Does this pattern apply to our first two examples?”
Invite the students to multiply numerators and denominators to see if the pattern holds. They may notice that the answers are equivalent to the products, i.e. 2/6 = 1/3 and 6/15 = 2/5. If your students have difficulty with equivalence it may be worthwhile taking some sessions from Cuisenaire Rod Fractions: Level 4.
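The "multiply the numerators and denominators, then simplify" pattern can be tested on any of the examples with a short sketch like the following (illustrative only).

```python
from fractions import Fraction

def multiply_rule(a, b, c, d):
    """Return the unsimplified product ac/bd alongside its simplified form."""
    return (a * c, b * d), Fraction(a * c, b * d)  # Fraction reduces to lowest terms

print(multiply_rule(1, 2, 2, 3))  # ((2, 6), Fraction(1, 3))
print(multiply_rule(2, 3, 3, 5))  # ((6, 15), Fraction(2, 5))
```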
Focus on examples 6a and 6b from the Copymaster. In both cases the multiplier factor is an improper fraction. This idea is challenging but important if students are to generalise multiplication of fractions. In 6a the diagram models 9/4 x 1/2 = 9/8. If the brown rod is one then the crimson rod is one half and the blue rod is nine eighths since each unit square is one eighth of the brown rod. The difficulty is in recognising that the blue rod is nine quarters of the crimson rod. This relationship requires the students to use the crimson rod as the unit that measures the blue rod.
“Does the rule for multiplying fractions hold if one of the fractions is improper?” (Yes, e.g. 5/4 x 2/3 = 10/12 = 5/6, for 6b)
The purpose of this session is to consolidate multiplication of fractions and to see if the properties of whole numbers under multiplication hold for fractions as well.
Create this diagram for the students:
“What fraction multiplication is represented in this diagram?” (2/3 x 3/4 = 6/12 = 1/2)
Give the students 5-10 minutes in their small groups to come up with their own multiplication of fractions examples. Ensure they record their examples using squared paper and an equation.
When you bring the class back together, focus on the properties of multiplication.
“Which of these equations are true? Explain how you know.”
5 x 3 = 3 x 5
4 x 2 x 3 = 3 x 4 x 2
3 x 7 = 3 x 2 + 3 x 5
All three equations are correct as they are examples of the properties of whole numbers under multiplication. Look for students to explain that the first equation is the commutative property, the order of the factors does not affect the product. The second equation is less obvious as it applies the associative property which is about the pairing of three of more factors not affecting the product. Since multiplication is a binary operation only two factors can be operated on at once. Students may recognise that in the third equation seven is distributed into five plus two. Ask a student to draw a diagram of how this example works. For example:
Record the properties under each equation.
“Do these properties also hold for fractions when they are multiplied? Let’s work on the example we started with.”
“If the commutative property works for fractions what should be true about; 2/3 x 3/4 = 6/12 = 1/2?”
Students should connect to the whole number example and say that 3/4 x 2/3 = 1/2 as well. Begin with the same stripey rod for one (12 unit squares).
“Which rod is two thirds?” (brown)
“Which rod is three quarters of two thirds?” (dark green)
“What fraction of the one rod is the dark green rod?” (one half)
“So from this one example the commutative property seems to work. See if it works with the fraction multiplication you made up.”
Let the students work out if the commutative property works for their example. When the class re-gathers choose a couple of further examples to share. Expect that some students will argue that the property obviously holds for this reason.
So if the commutative property holds for whole numbers under multiplication it must also hold for fractions.
Return to the original example of, 2/3 x 3/4 = 6/12 = 1/2.
“What fraction multiplication is this diagram representing?” 1/2 x 2/3 x 3/4 = ?
“What is the answer?” 1/2 x 2/3 x 3/4 = 6/24 = 1/4
“If the associative property holds what also will be true?” Students should offer several possible examples, e.g. 2/3 x 3/4 x 1/2 = 1/4. Ask them to check if the other examples of pairing factors first give one quarter as the answer.
“From what we know about the commutative property, why must this be true?” Look for students to argue that a similar generalisation is possible because:
The distributive property is harder to illustrate with fractions than with whole numbers. Start with the original example; 2/3 x 3/4 = 6/12 = 1/2. “Three quarters is made up of one quarter plus one half. If the distributive property works, what must also be true?” 2/3 x 3/4 = 2/3 x 1/4 + 2/3 x 1/2
Challenge the students to work together in small groups to see if this example is true. Look for them to work between the symbols and the Cuisenaire rod model.
Replacing the blue rod with the light green (one quarter) and dark green (one half) gives this:
Finding two thirds of each rod (light green and dark green) gives:
“Do two thirds of one quarter and two thirds of one half combine to make our original answer, one half?” (Yes, since red and crimson rods combined make a dark green rod).
“While one example is weak evidence it appears that the distributive property might work for fractions under multiplication.” If time permits ask your students to try out the distributive property for other fractions. Note that the distributive property is most commonly applied when working with mixed numbers, e.g. 3/4 x 1 2/3 = 3/4 x 1 + 3/4 x 2/3.
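Both the classroom example and the mixed-number case can be checked in a few lines; the sketch below is offered for the teacher's reference.

```python
from fractions import Fraction

# 2/3 x 3/4 split as 2/3 x 1/4 + 2/3 x 1/2
lhs = Fraction(2, 3) * Fraction(3, 4)
rhs = Fraction(2, 3) * Fraction(1, 4) + Fraction(2, 3) * Fraction(1, 2)
assert lhs == rhs == Fraction(1, 2)

# Distributing over a mixed number: 3/4 x 1 2/3 = 3/4 x 1 + 3/4 x 2/3
assert Fraction(3, 4) * Fraction(5, 3) == Fraction(3, 4) * 1 + Fraction(3, 4) * Fraction(2, 3)
print("distributive property holds for these examples")
```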
In this session the purpose is to find the quotient of two fractions by division. A measurement view of division is used rather than a sharing view.
Pose this simple problem: “A group of four students make $72 on a job. If they share the money equally how much should each friend get?”
Ask your students to solve the problem and share with their neighbour how they solved it. Students should recognise it as a division problem, 72 ÷ 4 = 18. There are many ways to solve the problem but the main focus is interpretation of seventy-two divided by four. Most students will ask themselves, “How many fours are in seventy-two?” which is measuring 72 in sets of four. Expect strategies like:
“If they got $80, that would mean $20 each. So they got $2 less each, that’s eighteen dollars.”
“If there were eight students they would get $9 each. There are half as many students so they should get twice as much.”
Record 72 ÷ 4 = 18 on the board and write 72 ÷ 1/2 = ?, as well.
“Ninety-five percent of people get this problem wrong. Try to solve it yourself then talk to your partner to justify your answer.”
In the discussion that follows try to connect their interpretation of 72 ÷ 4 as “How many fours are in seventy-two” with the meaning of 72 ÷ 1/2 as “How many one halves are in seventy-two?”
“Why is the answer, 144, larger than the dividend, 72?”
You may need to use a simple example to convince some students. Make a Cuisenaire rod model like this:
“If the crimson rod is one how many ones do I have?” (five)
“What fraction are the red rods?” (one half)
“How many halves do you get in five ones?” (ten)
Record this as 5 ÷ 1/2 = 10. “So the important idea is that we treat division as a ‘How many goes into’ operation.”
Create this simple example of fraction division using Cuisenaire rods. Build up the model as students answer.
“If the crimson rod is one half, then what colour is the one rod?” (brown)
“So what fraction is the light green rod?” (three eighths)
The model should now look like this:
Now answer the question, “How many three eighths fit into one half?” Record 1/2 ÷ 3/8 = ?
Ask the students to attempt an answer themselves then talk to a partner to justify their answer. Some students will provide answers like “A bit more than one” but encourage them to be accurate.
An important point is that the crimson rod (one half) is being measured with the light green rod (three eighths) so the light green rod is the measurement unit the answer must be expressed in. Many students will notice that a white rod fits into the ‘gap’.
“How much of a light green rod is the white rod?” (one third)
“So we get an answer of one and one third. What is that number as a fraction, not a mixed number?” (four-thirds)
Record; 1/2 ÷ 3/8 = 4/3. Mention that equivalence fooled us last time with the multiplication rule.
“Write down some equivalent fractions for four thirds.” (8/6, 12/9, 16/12...)
“If I write 1/2 ÷ 3/8 = 8/6 do you notice a pattern from one example?”
Get the students to work on the problems in Copymaster Three which scaffolds their learning towards division of fractions. Ask them to work in small co-operative groups with a set of Cuisenaire rods, or the online tool, and squared paper to record their ideas.
Look for the following:
Are the students able to accept the second rod as the unit of measure and express the answer in terms of that rod?
Do the students generalise that 1 ÷ a/b = b/a? For example, 1 ÷ 7/10 = 10/7.
Can they explain why 1 ÷ a/b = b/a?
Do they recognise that if the fractions have a common denominator then the division operates like whole number division? For example, 5/8 ÷ 3/8 has the same answer as 5 ÷ 3 because the units (eighths) are the same.
Do they generalise that if 1 ÷ a/b = b/a then using any fraction as the dividend just ‘scales’ the answer? For example, if 1 ÷ 4/5 = 5/4 then, 1/2 ÷ 4/5 = 1/2 x 5/4.
After a suitable period of investigation bring the class back together to discuss the points above. Use selected examples, as above, to ask students to reflect on a general rule for division of fractions. The discussion should lead to two useful algorithms. Use this example:
Express both fractions as equivalent fractions:
5/10 = 1/2 (Yellow rod) and 8/10 = 4/5 (Brown rod) so 1/2 ÷ 4/5 = 5/10 ÷ 8/10.
Since the units are all tenths the problem is just like “How many lots of eight somethings fit into five somethings.” Be aware that students might be unaware of the quotient rule for rational numbers, a ÷ b = a/b, for example 5 ÷ 8 = 5/8.
Invert and multiply:
The most common algorithm, and the one needed for algebra, derives from 1 ÷ a/b = b/a. So the answer is the reciprocal. Any change to the dividend, one, is just a scalar, so in general c/d ÷ a/b = c/d x b/a.
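Both algorithms can be written as short functions; the sketch below is an answer-checking aid using the 1/2 ÷ 4/5 example from above.

```python
from fractions import Fraction

def divide_common_denominator(a, b, c, d):
    """(a/b) ÷ (c/d): rewrite both over a common denominator, then divide the numerators."""
    common = b * d
    return Fraction(a * (common // b), c * (common // d))

def divide_invert_multiply(a, b, c, d):
    """(a/b) ÷ (c/d) = a/b x d/c."""
    return Fraction(a, b) * Fraction(d, c)

print(divide_common_denominator(1, 2, 4, 5))  # 5/8
print(divide_invert_multiply(1, 2, 4, 5))     # 5/8, so both algorithms agree
```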
The aim of this session is to consolidate students’ connection between multiplication and division of fractions. To do so we keep the representation the same, Cuisenaire rods, but we expect a considerable amount of reunitising and norming with the units that are created.
Begin with this simple arrangement of three rods:
Ask a series of “Can you see…?” questions. Expect students to discuss the questions in ‘think-pair-share’ format. Write the equations as well as saying the question.
“Can you see five eighths of four fifths? (5/8 x 4/5 = ?)” The yellow rod is five eighths of the brown rod which is four fifths of the orange rod.
“Can you see how many one halves fit into four fifths? (4/5 ÷ 1/2 = ?)” The yellow rod is one half, the brown rod is four fifths.
“Can you see how many four fifths fit into one half? (1/2 ÷ 4/5 = ?)”
“Can you see eight fifths of one half? (8/5 x 1/2 = ?)”
All of the above questions treat the orange rod as one. Ask: “What ‘Can you see?’ questions could we ask if we treated the brown rod as one?”
For example; “Can you see two times five eighths? (2 x 5/8 = ?)” or “Can you see how many five quarters fit into five eighths? (5/8 ÷ 5/4 = ?)”.
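A short sketch can generate the answers to “Can you see…?” questions for any triple of rods. The rod lengths below (orange 10, yellow 5, brown 8, in white-rod units) match the arrangement above; the function name is chosen here for illustration.

```python
from fractions import Fraction

def describe_rods(referent, rod_a, rod_b):
    """Name two rods as fractions of a referent 'one' rod (lengths in white-rod units)."""
    return (Fraction(rod_a, referent),  # rod_a as a fraction of the referent
            Fraction(rod_b, referent),  # rod_b as a fraction of the referent
            Fraction(rod_a, rod_b))     # rod_a measured with rod_b as the unit

# Orange (10) as one, with the yellow (5) and brown (8) rods
print(describe_rods(10, 5, 8))  # 1/2, 4/5, 5/8: five eighths of four fifths is one half
```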
In small groups of two or three challenge your students to do the following:
- Choose a set of three different Cuisenaire rods.
- Create as many “Can you see…?” problems as they can with that set.
- Produce a published set of their problems with answers hidden somewhere for other students to solve. Problems should involve pictures, words and equations
Students might publish their problems as a word document or PowerPoint so that the sets can be shared. Swapping problems among groups is an excellent way for students to check if their reasoning is correct as well as providing practice at reunitising.
After sufficient time ask some groups to share their favourite “Can you see…?” problem. Look for other students to:
- Identify which rod is being used as the referent one rod
- Name the other two rods in terms of that one rod
- Connect word stories to appropriate multiplication or division equations
- Solve the equations and explain their answers with reference to the Cuisenaire rod model
In the real world, uncertainty (sometimes called error or bias) is a part of everyday life, but in statistics we try to quantify just how much uncertainty is in our experiment, survey or test results.
The two main types are epistemic (things we don’t know because of a lack of data or experience) and aleatoric (things that are simply unknown, like what number a die will show on the next roll).
It is measured in a variety of ways.
Measures of Uncertainty
The confidence interval (CI) shows what the uncertainty is with a certain statistic (e.g. a mean). The margin of error is a range of values above and below a confidence interval’s sample statistic. For example, a survey might report a 95% confidence interval of 4.88 to 5.26. That means that if the survey were repeated using the same methods, the reported interval would contain the true population parameter 95% of the time.
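A minimal sketch of a 95% confidence interval for a sample mean, using the normal approximation; the data values below are made up for illustration.

```python
import math
import statistics

def margin_of_error_95(sample):
    """Approximate 95% margin of error for a sample mean (normal approximation, z = 1.96)."""
    std_err = statistics.stdev(sample) / math.sqrt(len(sample))
    return 1.96 * std_err

sample = [4.9, 5.1, 5.3, 4.8, 5.2, 5.0, 5.1, 4.9]
mean = statistics.mean(sample)
moe = margin_of_error_95(sample)
print(f"95% CI: {mean - moe:.2f} to {mean + moe:.2f}")  # roughly 4.92 to 5.15
```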
The mean error refers to the mean (average) of all errors. “Error” in this context is the difference between a measured and true value.
For what happens to measurement errors when you use uncertain measurements to calculate something else (For example, using length to calculate area), see: Propagation of Uncertainty. In general terms, relative precision shows uncertainty as a fraction of a quantity. It is the ratio of a measurement’s precision and the measurement itself.
For the entropy of a distribution where a row variable X explains a column variable Y, see: Uncertainty Coefficient.
Sources of Uncertainty
- Interpolation errors happen because of a lack of data, and may be compounded by your choice of interpolation method.
- Model bias happens because any model is an approximation, or a best guess at what a true distribution might look like.
- Numerical errors are human errors that creep in when translating mathematical models into a computer.
- Observational error is due to the variability of measurements in an experiment.
- Parameter uncertainty happens because we don’t know the exact, or “best” values in a population—we can only take a good guess with sampling.
For parents, the sound of their newborn baby crying is a wonderful sign that the baby is healthy and breathing. However, millions of babies each year are born sick and depend on doctors and nurses to help them start breathing. Helping a baby breathe is called neonatal resuscitation. It is important that doctors and nurses practice performing the correct steps of neonatal resuscitation so that they are ready when a newborn baby needs their help. We created the boardgame RETAIN for doctors and nurses to play, to train their neonatal resuscitation knowledge and skills. We discovered that doctors and nurses significantly improved their knowledge of neonatal resuscitation after playing RETAIN. Building on this discovery, we may be able to use RETAIN to help doctors and nurses all around the world be better prepared to save babies at birth.
When babies are born, they announce themselves with a cry. This crying shows that the babies are taking their first breaths. A baby crying at birth is the most wonderful sound in the world to the parents! The crying of a baby at birth is heard around 130 million times a year all around the world. Sadly, 13–26 million babies are sick when they are born, and do not cry. They need urgent help from doctors and nurses. Helping a baby to breathe at birth is called neonatal resuscitation.
What is Neonatal Resuscitation?
Neonatal resuscitation is a special type of first aid for newborn babies. During neonatal resuscitation, doctors, and nurses must complete a series of steps to help the baby breathe. These steps include:
- Warmth, to keep the baby’s body temperature safe and comfortable
- Stimulation (rubbing the body) to wake the baby up and remind the baby to breathe
- Extra oxygen to help the baby to breathe
- Breaths through a mask placed over the baby’s mouth and nose
- Chest compressions (pumping the baby’s chest) to help keep the heart beating
- Medicine to help the baby’s heart start beating again.
Why is Training for Neonatal Resuscitation So Important?
Doctors and nurses work together to help newborn babies as they enter the world. It is important that doctors and nurses are always prepared to perform neonatal resuscitation. Being part of a neonatal resuscitation team is like being on a sports team. In sports, each teammate must practice individually by exercising, learning drills, watching games, and listening to the coach. But it is also important that the team practices playing together during scrimmages, so that everyone learns how to work together to win a real game. Neonatal resuscitation is not so different. Team members must practice individually, to better their individual knowledge and skills, as well as practice together as a team. This combination of training individually and together best prepares doctors and nurses to save a newborn baby’s life.
How Do Doctors and Nurses Train for Neonatal Resuscitation?
To practice their knowledge and skills, doctors and nurses attend classes, read textbooks, or participate in simulated neonatal resuscitations. During a simulation, doctors and nurses use equipment and supplies to perform neonatal resuscitation on a doll that looks like a real baby, in a classroom that looks like a real delivery room. An instructor, like a coach, leads the simulation and lets everyone know what they are doing well, and how they can improve. Training together during simulation is the best way to prepare for real-life neonatal resuscitation.
It is important that doctors and nurses train often, so that they do not make mistakes. However, doctors and nurses are very busy taking care of many patients, so they often do not have time to train as frequently as they should. When training is infrequent, they can forget how to perform neonatal resuscitation, which can be harmful for the baby. While simulation is a great way to train, it can also be expensive to use simulation to train doctors and nurses as often as they should be trained. As with all expensive things, not everyone can afford simulations and therefore not all doctors and nurses around the world will be able to use this method to practice neonatal resuscitation. Therefore, we need a new approach to train doctors and nurses in neonatal resuscitation.
Could We Use Games to Train Instead?
To make training more available and less expensive, we developed a simulation-based board game to teach neonatal resuscitation. Many people enjoy playing games, because games are fun and exciting, captivate our emotions, and stimulate our determination to win, which motivates us to play. Games can also help us learn. Games that teach us something while we play are called serious games. Serious games can be designed to teach us information or skills that may be useful for our schoolwork or jobs. Because games are fun to play, we are motivated to learn and practice even more (this is called game-based learning).
We thought that a serious game might help doctors and nurses to train for neonatal resuscitation. We worked with a team of doctors, educators, designers, and scientists to create the board game RETAIN. RETAIN combines simulation-based training with game-based learning, to help healthcare workers practice neonatal resuscitation knowledge and skills.
What is the Retain Game?
RETAIN is a fun way to practice neonatal resuscitation as a team, and busy doctors and nurses can play RETAIN anytime and anyplace. During the game, players review the medical history of an imaginary baby, which will help them to prepare for the situation. Once the baby is born, doctors and nurses have to work as a team using action cards (which represent different tasks, like measuring the baby’s heart rate) and 3D-equipment pieces (like a face mask to give oxygen). There is also a facilitator, a player who will provide the team with information about the baby’s health (like the baby’s heart rate). This will help the team know how the baby is doing throughout the scenario. Players win the game once the baby is breathing (Figure 1).
How Can We Know If Retain Actually Works?
Before doctors and nurses can use RETAIN to train for neonatal resuscitations, it is important to study whether playing RETAIN actually improves neonatal resuscitation knowledge and skills. To do this, we went to a hospital where around 6,500 babies are born every year. We asked the doctors and nurses there to participate in our study. At the start of the study, doctors and nurses took a test on what they knew about neonatal resuscitation (this was called the pre-test). After the pre-test, the doctors and nurses played the RETAIN board game. After playing the game for 30 min, they took another test to see how much their neonatal resuscitation knowledge improved (this was called the post-test).
What Did We Discover?
Doctors and nurses scored much higher on the post-test than on the pre-test, meaning that they improved their knowledge of neonatal resuscitation after playing RETAIN. After playing the RETAIN board game only once, these experienced doctors and nurses improved their knowledge of the correct steps of neonatal resuscitation by 12% (Figure 2). Our participants were already very knowledgeable about neonatal resuscitation (with an average of 7.5 years of experience taking care of newborn babies) and only played RETAIN for a short time (about 30 min). So, it was really exciting to see an improvement in their neonatal resuscitation knowledge during our study. Doctors and nurses also enjoyed RETAIN, with 80% of them agreeing that they enjoyed playing the board game, and that the game was a good way to train neonatal resuscitation (Figure 3). Since doctors and nurses enjoyed playing RETAIN, we think that they might be more motivated to practice their neonatal resuscitation knowledge and skills more often.
What Are Our Next Steps?
We showed that the RETAIN board game improves knowledge of neonatal resuscitation in doctors and nurses who have experience helping babies at birth. What we do not know is if RETAIN would also improve knowledge of neonatal resuscitation in less experienced doctors and nurses. We are currently conducting studies with less experienced doctors, nurses, paramedics, and midwives to understand if RETAIN also improves their neonatal resuscitation knowledge. We are also developing a video game and virtual reality version of RETAIN, which will allow doctors and nurses to practice their knowledge and skills of neonatal resuscitation in a different game format.
We know that simulation training is the best way for doctors and nurses to learn and practice their neonatal resuscitation knowledge. But simulation training takes a lot of time and money to organize, so many doctors and nurses do not get the training they need to provide the best care to their newborn patients. By making simulation training affordable and enjoyable for doctors and nurses, RETAIN may be the game-changing solution! If doctors and nurses can practice neonatal resuscitation more often, it will improve the lives of the newborn babies they care for. By continuing our research, we hope to make effective neonatal resuscitation training affordable and accessible to everyone helping babies at birth around the world. This is important, because all newborn babies deserve to receive the best quality care as they enter the world and the loving arms of their families, triumphantly crying.
Want to know more? Take a look at some of the resources below, and get involved in clinical science research! Help us answer more questions about how to improve training for healthcare professionals, so that together we can give newborn babies the best start possible.
GS and SG: conception, literature search, drafting of the manuscript, critical revision of the manuscript, and final approval of the manuscript.
Neonatal Resuscitation: Helping babies breathe at birth.
Simulation: Creating a safe environment by replicating real-world situations, to teach learners what they will be expected to know and do during their jobs.
Serious Game: A game that is designed to teach the players knowledge, skills, or strategies that may be useful for their jobs.
Game-based Learning: Using characteristics of games, like fun and competition, to motivate players to learn something educational.
Conflict of Interest
GS has registered the RETAIN board game [Tech ID 2017083] under Canadian copyright [Tech ID 2017086] and is one of the owners of RETAIN Labs Medical Inc., Edmonton, Canada (https://www.playretain.com), which is distributing the game.
The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
We would like to thank the public for donating money to our funding agencies: SG was a recipient of the Maternal and Child Health Scholarship and the Department of Pediatrics Recruitment Scholarship (supported by the University of Alberta, Stollery Children’s Hospital Foundation, Women and Children’s Health Research Institute, and the Lois Hole Hospital for Women). GS was a recipient of the Heart and Stroke Foundation/University of Alberta Professorship of Neonatal Resuscitation, a National New Investigator of the Heart and Stroke Foundation Canada and an Alberta New Investigator of the Heart and Stroke Foundation Alberta.
Original Source Article
Cutumisu, M., Patel, S. D., Brown, M. R. G., Fray, C., von Hauff, P., Jeffery, T., et al. 2019. RETAIN: a board game that improves neonatal resuscitation knowledge retention. Front. Pediatr. 7:13. doi: 10.3389/fped.2019.00013
Practice in performing translations, rotations, and dilations on a coordinate plane. CC: Math: 8.G.A.1-4
This math mini office contains information on determining perimeter and area in metric and standard measurement for a variety of geometric shapes.
Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3
Match the pictures to the words for sphere, cube, cone and cylinder. Common Core Math: K.G.2
Match the pictures of the sphere, cone, cube and cylinder. Common Core Math: K.G.2
Students identify the polygon and number of sides.
Eight colorful math posters that help teach the concepts of area, perimeter and dimensional figures. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3
Match the pictures to the words for circle, triangle, square, rectangle, and hexagon. Common Core Math: K.G.2
Updated: Count the number of colorful shapes in each box.
A triangular prism has two triangular faces (its bases) and three rectangular lateral faces. The lateral faces are parallelograms because their opposite sides are parallel and congruent.
The height of the triangular prism is the distance between the parallel bases, and can vary in length depending on the size and proportions of the prism.
The triangular prism is just one of the many shapes of prisms that exist, and its specific geometry gives it unique properties and characteristics. It can be used in various applications, such as architecture, geometry or physics, depending on the specific needs and contexts.
Volume of a Triangular Prism: Formula and Calculation
The calculation of the volume of a triangular prism is made by multiplying the area of the base by the height of the prism. Here are the three steps to calculate it:
Calculate the area of the triangular base: For a triangle, the area is calculated using the triangle area formula, which is (base x height) / 2.
Determine the height of the prism: The height of the prism is the perpendicular distance between the two parallel bases.
Multiply the area of the base by the height.
Specific formula for the triangular prism:
V = (b_t x h_t / 2) x h_p
V is the volume of the prism.
b_t is the base length of the triangle that forms one of its bases.
h_t is the height of the triangle that forms one of its bases.
h_p is the height of the prism.
Area of a Triangular Prism: Formula and Calculation
To calculate the area of a triangular prism, it is necessary to know the areas of the base and of the rectangles that form the faces of the polyhedron. That is to say:
The area of the triangle's base: (base x height) / 2
The perimeter of the triangle's base, which is the sum of its three sides.
The height of the prism.
Putting these together, the total surface area of the prism is given by the following formula (a short Python check follows the variable definitions below):
A = 2 × A_t + p_t × h
A is the total surface area of the triangular prism.
A_t is the area of the triangular base.
p_t is the perimeter of the triangular base.
h is the height of the prism.
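The same kind of check works for the total surface area. The sketch below assumes the three side lengths of the triangular base are known; names and values are illustrative.

def triangular_prism_surface_area(b_t, h_t, sides, h_p):
    # Total area = 2 * (triangle area) + (triangle perimeter) * (prism height)
    triangle_area = (b_t * h_t) / 2
    perimeter = sum(sides)
    return 2 * triangle_area + perimeter * h_p

# Example: right triangle with legs 3 and 4 (hypotenuse 5), prism 10 units high
print(triangular_prism_surface_area(3, 4, (3, 4, 5), 10))  # 132.0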
The main properties of a triangular prism are the following:
Bases: The triangular prism has two bases that are congruent triangles. These bases are parallel to each other and are located at opposite ends of the prism.
Lateral Faces: The triangular prism has three lateral faces that connect the edges of the two bases; in a right prism they are rectangles, and in an oblique prism they are parallelograms.
Edges: The triangular prism has nine edges in total: six edges that form the two triangular bases (three on each base) and three lateral edges that connect corresponding corners of the bases.
Angles: Each triangular base has three angles, and in a right prism each rectangular side face has four right angles (90 degrees). The lateral faces meet the bases at right angles, while adjacent lateral faces meet each other at the angles of the triangular base.
Volume: The volume of a triangular prism is calculated by multiplying the area of the base by the height of the prism. The area of the base is obtained by the formula for the area of a triangle (base x height / 2).
Area: The total area of a triangular prism is obtained by adding the area of the two triangular bases and the three rectangular lateral faces.
Symmetry: A triangular prism has symmetry about a plane through the center of the prism and parallel to the triangular bases.
Here are some examples of objects or structures that could be represented as triangular prisms:
Tents: Some tents have a structure in the shape of a triangular prism. The bases of the tent are usually triangles and the side faces are rectangular.
Signal towers: Some signal towers, such as those used in telecommunications or signal transmission systems, may have a triangular prism shape. The bases would be the support triangles and the lateral faces would be the rectangular panels that house the equipment or antennas.
Architectural buildings: Some modern buildings use architectural designs that incorporate triangular prisms into their structure. These prisms can be ornamental elements on the façade or even defined geometric shapes in the main structure of the building.
Pools: Some unusually shaped outdoor or indoor pools may have a triangular prism shape. In this case, the bases would be the triangular shapes of the pool and the side faces would be the rectangular walls that surround the perimeter.
Access ramps: Triangular prisms are sometimes used to create an access ramp that allows people in wheelchairs to overcome the rise of a curb.
The Wright brothers, Orville and Wilbur, were two American aviation pioneers credited with inventing and flying the world's first successful airplane. They made the first controlled, sustained flight of a powered, heavier-than-air aircraft with the Wright Flyer on December 17, 1903, four miles south of Kitty Hawk, North Carolina. In 1904–05, the brothers developed their flying machine into the first practical fixed-wing aircraft, the Wright Flyer III. Although not the first to build experimental aircraft, the Wright brothers were the first to invent aircraft controls that made fixed-wing powered flight possible; the brothers' breakthrough was their creation of a three-axis control system, which enabled the pilot to steer the aircraft and to maintain its equilibrium. This method remains standard on fixed-wing aircraft of all kinds. From the beginning of their aeronautical work, the Wright brothers focused on developing a reliable method of pilot control as the key to solving "the flying problem"; this approach differed from other experimenters of the time, who put more emphasis on developing powerful engines.
Using a small homebuilt wind tunnel, the Wrights collected more accurate data than any before, enabling them to design more efficient wings and propellers. Their first U.S. patent did not claim invention of a flying machine, but a system of aerodynamic control that manipulated a flying machine's surfaces. The brothers gained the mechanical skills essential to their success by working for years in their Dayton, Ohio-based shop with printing presses, bicycles and other machinery; their work with bicycles in particular influenced their belief that an unstable vehicle such as a flying machine could be controlled and balanced with practice. From 1900 until their first powered flights in late 1903, they conducted extensive glider tests that developed their skills as pilots. Their shop employee Charlie Taylor became an important part of the team, building their first airplane engine in close collaboration with the brothers. The Wright brothers' status as inventors of the airplane has been subject to counter-claims by various parties.
Much controversy persists over the many competing claims of early aviators. Edward Roach, historian for the Dayton Aviation Heritage National Historical Park, argues that they were excellent self-taught engineers who could run a small company, but they did not have the business skills or temperament to dominate the growing aviation industry. The Wright brothers were two of seven children born to Milton Wright, of English and Dutch ancestry, and Susan Catherine Koerner, of German and Swiss ancestry. Milton Wright's mother, Catherine Reeder, was descended from the progenitor of the Vanderbilt family and the Huguenot Gano family of New Rochelle, New York. Wilbur was born near Millville, Indiana, in 1867; the brothers never married. The other Wright siblings were Reuchlin, Lorin, Katharine, and the twins Otis and Ida; the direct paternal ancestry goes back to a Samuel Wright who sailed to America and settled in Massachusetts in 1636. None of the Wright children had middle names. Instead, their father tried hard to give them distinctive first names.
Wilbur was named for Wilbur Fisk and Orville for Orville Dewey, both clergymen whom Milton Wright admired. They were "Will" and "Orv" to their friends, and in Dayton their neighbors knew them as "the Bishop's kids" or "the Bishop's boys"; because of their father's position as a bishop in the Church of the United Brethren in Christ, he traveled often and the Wrights moved twelve times before returning permanently to Dayton in 1884. In elementary school, Orville was once expelled. In 1878, when the family lived in Cedar Rapids, their father brought home a toy helicopter for his two younger sons; the device was based on an invention of French aeronautical pioneer Alphonse Pénaud. Made of paper and cork with a rubber band to twirl its rotor, it was about a foot long. Wilbur and Orville played with it until it broke, and then built their own. In later years, they pointed to their experience with the toy as the spark of their interest in flying. Neither brother received a high school diploma; the family's abrupt move in 1884 from Richmond, Indiana, to Dayton, where the family had lived during the 1870s, prevented Wilbur from receiving his diploma after finishing four years of high school.
The diploma was awarded posthumously to Wilbur on April 16, 1994, which would have been his 127th birthday. In late 1885 or early 1886 Wilbur was struck in the face by a hockey stick while playing an ice-skating game with friends, resulting in the loss of his front teeth. He had been vigorous and athletic until then, and although his injuries did not appear severe, he became withdrawn. He had planned to attend Yale. Instead, he spent the next few years largely housebound. During this time he cared for his mother, who was terminally ill with tuberculosis, read extensively in his father's library and ably assisted his father during times of controversy within the Brethren Church, but expressed unease over his own lack of ambition. Orville dropped out of high school after his junior year to start a printing business in 1889, having designed and built his own printing press with Wilbur's help. Wilbur joined the print shop, and in March the brothers launched a weekly newspaper, the West Side News. Subsequent issues listed Orville as publisher and Wilbur as editor on the masthead.
In April 1890 they converted the paper to a daily, The Evening Item, but it lasted only a few months.
Aircraft Owners and Pilots Association
The Aircraft Owners and Pilots Association is a Frederick, Maryland-based American non-profit political organization that advocates for general aviation. The organization started at Wings Field in Pennsylvania. On 24 April 1932, the Philadelphia Aviation Country Club was founded at Wings Field; the country club was the location of meetings of the members that founded AOPA. AOPA incorporated on May 15, 1939, with C. Townsend Ludington serving as the first president. AOPA's membership consists of general aviation pilots in the United States. AOPA exists to serve the interests of its members as aircraft owners and pilots, and to promote the economy, safety and popularity of flight in general aviation aircraft. In 1971 the organization purchased Airport World Magazine, moving its operations to Bethesda, Maryland. With 384,915 members in 2012, AOPA is the largest aviation association in the world, although since 2010 it has decreased in membership from 414,224, a loss of 7% in two years. AOPA is affiliated with similar organizations in other countries through membership in the International Council of Aircraft Owner and Pilot Associations.
In 2015, AOPA was inducted into the International Air & Space Hall of Fame at the San Diego Air & Space Museum. AOPA has several programs. The AOPA Foundation is AOPA's 501(c)(3) charitable organization; the foundation's four goals are to improve general aviation safety, grow the pilot population, improve community airports, and provide a positive image of general aviation. The AOPA Political Action Committee is just for AOPA members. Through lobbying, it represents the interests of general aviation to Congress, the Executive Branch, and state and local governments; the AOPA PAC campaigns in favor of federal and local candidates that support its policies and opposes those who do not, through advertising and grassroots membership campaigns. GA Serves America was created to promote general aviation to the public. The Legal Services Plan/Pilot Protection Services program provides AOPA members with legal defense against alleged FAA enforcement charges as well as assistance obtaining an FAA flight medical. Enrollment in Pilot Protection Services is only open to AOPA members and requires an additional payment above dues.
The Legal Services Plan was combined with the former medical program in May 2012 under the name Pilot Protection Services. The Legal Services Plan was created in June 1983. The Air Safety Institute is a separate nonprofit, tax-exempt organization promoting safety and pilot proficiency in general aviation through quality training, research and the dissemination of information. AOPA sponsors its own open house in Frederick, Maryland; the yearly event started in 1991 with 125 aircraft. By 2001, the attendance grew to 760 aircraft; the event was cancelled for five years after the September 11 attacks and the airspace changes that followed, but resumed in 2006. Related organizations include the Canadian Owners and Pilots Association, a similar organization established in Canada in 1952, and the Experimental Aircraft Association, a similar organization focused on homebuilt aircraft.
Nav Canada is a privately run, not-for-profit corporation that owns and operates Canada's civil air navigation system. It was established in accordance with the Civil Air Navigation Services Commercialization Act; the company employs 1,900 air traffic controllers, 650 flight service specialists and 700 technologists. It has been responsible for the safe and expeditious flow of air traffic in Canadian airspace since November 1, 1996, when the government transferred the ANS from Transport Canada to Nav Canada; as part of the transfer, or privatization, Nav Canada paid the government CA$1.5 billion. Nav Canada manages 12 million aircraft movements a year for 40,000 customers in over 18 million square kilometres, making it the world's second-largest air navigation service provider by traffic volume. Nav Canada, which operates independently of any government funding, is headquartered in Ottawa, Ontario; it is only allowed to be funded by service charges to aircraft operators. Nav Canada's operations consist of various sites across the country.
These include:
- About 1,400 ground-based navigation aids
- 55 flight service stations
- 8 flight information centres, one each in: Kamloops (most of British Columbia), Edmonton (all of Alberta and northeastern BC), Winnipeg (northwestern Ontario, all of Manitoba and Saskatchewan), London (most of Ontario), North Bay (all of Nunavut and Northwest Territories, most of the Arctic waters), Quebec City (all of Quebec, southwestern Labrador, the tip of eastern Ontario, northern New Brunswick), Halifax (most of New Brunswick, Nova Scotia, Prince Edward Island, most of Newfoundland and Labrador), and Whitehorse (northwestern British Columbia and all of Yukon)
- 41 control towers
- 46 radar sites and 15 automatic dependent surveillance-broadcast ground sites
- 7 Area Control Centres, one each in: Vancouver (Surrey, BC), Edmonton (Edmonton International Airport), Winnipeg (Winnipeg-James Armstrong Richardson International Airport), Toronto Centre (Toronto-Pearson International Airport), Montreal Centre (Montreal-Trudeau International Airport), Moncton (Riverview, New Brunswick), and Gander (Gander International Airport)
- The North Atlantic Oceanic control centre: Gander Control

Nav Canada has three other facilities: the National Operations Centre in Ottawa, the Technical Systems Centre in Ottawa, and the Nav Centre at 1950 Montreal Road in Cornwall, Ontario. As a non-share capital corporation, Nav Canada has no shareholders.
The company is governed by a 15-member board of directors representing the four stakeholder groups that founded Nav Canada. The four stakeholder groups elect 10 members. These 10 directors elect four independent directors, with no ties to the stakeholder groups; those 14 directors appoint the president and chief executive officer, who becomes the 15th board member. This structure ensures that the interests of individual stakeholders do not predominate and that no member group can exert undue influence over the remainder of the board. To further ensure that the interests of Nav Canada are served, these board members cannot be active employees or members of airlines, unions, or government. The company was formed on November 1, 1996, when the government transferred the country's air navigation services from Transport Canada to the new not-for-profit private entity for CAD$1.5 billion. The company was formed in response to a number of issues with Transport Canada's operation of air traffic control and air navigation facilities.
While TC's safety record and operational staff were highly rated, its infrastructure was old and in need of serious updating at a time of government restraint. This resulted in system delays for airlines and costs that were exceeding the airline ticket tax, a directed tax that was supposed to fund the system. The climate of government wage freezes resulted in staff shortages of air traffic controllers that were hard to address within a government department. Having TC as the service provider, the regulator and the inspector was a conflict of interest. Pressure on the government from the airlines mounted for a solution to the problem, which was hurting the air industry's bottom line. A number of solutions were considered, including forming a crown corporation, but were rejected in favour of outright privatization, with the new company formed as a non-share-capital not-for-profit, run by a board of directors who were initially appointed and are now elected. The company's revenue is predominately from service fees charged to aircraft operators, which amount to about CAD$1.2B annually.
Nav Canada also raises revenues from developing and selling technology and related services to other air navigation service providers around the world. It has some smaller sources of income, such as conducting maintenance work for other ANS providers and rentals from the Nav Centre in Cornwall, Ontario. To address the old infrastructure it purchased from the Canadian government, the company has carried out projects such as implementing a wide area multilateration system, replacing 95 Instrument Landing System installations with new equipment, building new control towers in Toronto and Calgary, modernizing the Vancouver Area Control Centre and building a new logistics centre. Nav Canada felt the impact of the late-2000s recession in two ways: losses in its investments in third-party sponsored asset-backed commercial paper and falling revenues due to reduced air traffic levels. In the summer of 2007 the company held $368 million in ABCP. On 12 January 2009 final Ontario Superior Court of Justice approval was granted to restructure the third-party ABCP notes.
The company expects that the non-credit related fai
Ottawa is the capital city of Canada. It stands on the south bank of the Ottawa River in the eastern portion of southern Ontario. Ottawa borders Gatineau, Quebec; as of 2016, Ottawa had a city population of 964,743 and a metropolitan population of 1,323,783, making it the fourth-largest city and the fifth-largest CMA in Canada. Founded in 1826 as Bytown, and incorporated as Ottawa in 1855, the city has evolved into the political centre of Canada; its original boundaries were expanded through numerous annexations and were replaced by a new city incorporation and amalgamation in 2001 which increased its land area. The city name "Ottawa" was chosen in reference to the Ottawa River, the name of which is derived from the Algonquin Odawa, meaning "to trade". Ottawa has the most educated population among Canadian cities and is home to a number of post-secondary and cultural institutions, including the National Arts Centre, the National Gallery, and numerous national museums. Ottawa has a high standard of living and low unemployment.
With the draining of the Champlain Sea around ten thousand years ago, the Ottawa Valley became habitable. Local populations used the area for wild edible harvesting, fishing, trade and camps for over 6,500 years; the Ottawa river valley has archaeological sites with arrow heads and stone tools. Three major rivers meet within Ottawa, making it an important trade and travel area for thousands of years; the Algonquins called the Ottawa River Kichi Sibi or Kichissippi, meaning "Great River" or "Grand River". Étienne Brûlé, regarded as the first European to travel up the Ottawa River, passed by Ottawa in 1610 on his way to the Great Lakes. Three years later, Samuel de Champlain wrote about the waterfalls in the area and about his encounters with the Algonquins, who had been using the Ottawa River for centuries. Many missionaries would follow the early traders; the first maps of the area used the word Ottawa, derived from the Algonquin word adawe, to name the river.
He, with five other families and twenty-five labourers, set about to create an agricultural community called Wrightsville. Wright pioneered the Ottawa Valley timber trade by transporting timber by river from the Ottawa Valley to Quebec City. Bytown, Ottawa's original name, was founded as a community in 1826 when hundreds of land speculators were attracted to the south side of the river when news spread that British authorities were constructing the northerly end of the Rideau Canal military project at that location; the following year, the town was named after British military engineer Colonel John By, responsible for the entire Rideau Waterway construction project. The canal's military purpose was to provide a secure route between Montreal and Kingston on Lake Ontario, bypassing a vulnerable stretch of the St. Lawrence River bordering the state of New York that had left re-supply ships bound for southwestern Ontario exposed to enemy fire during the War of 1812. Colonel By set up military barracks on the site of today's Parliament Hill.
He laid out the streets of the town and created two distinct neighbourhoods named "Upper Town" west of the canal and "Lower Town" east of the canal. Similar to its Upper Canada and Lower Canada namesakes, "Upper Town" was predominantly English speaking and Protestant, whereas "Lower Town" was predominantly French and Catholic. Bytown's population grew to 1,000 as the Rideau Canal was being completed in 1832. Bytown encountered some impassioned and violent times in its early pioneer period, including Irish labour unrest that contributed to the Shiners' War from 1835 to 1845 and the political dissension evident in the 1849 Stony Monday Riot. In 1855 Bytown was incorporated as a city. William Pittman Lett was installed as the first city clerk, guiding it through 36 years of development. On New Year's Eve 1857, Queen Victoria, as a symbolic and political gesture, was presented with the responsibility of selecting a location for the permanent capital of the Province of Canada. In reality, Prime Minister John A. Macdonald had assigned this selection process to the Executive Branch of the Government, as previous attempts to arrive at a consensus had ended in deadlock.
The "Queen's choice" turned out to be the small frontier town of Ottawa for two main reasons: Firstly, Ottawa's isolated location in a back country surrounded by dense forest far from the Canada–US border and situated on a cliff face would make it more defensible from attack. Secondly, Ottawa was midway between Toronto and Kingston and Montreal and Quebec City. Additionally, despite Ottawa's regional isolation it had seasonal water transportation access to Montreal over the Ottawa River and to Kingston via the Rideau Waterway. By 1854 it had a modern all season Bytown and Prescott Railway that carried passengers and supplies the 82-kilometres to Prescott on the Saint Lawrence River and beyond. Ottawa's small size, it was thought, would make it less prone to rampaging politically motivated mobs, as had happened in the previous Canadian capitals; the government owned the land that would become Parliament Hill which they thought would be an ideal location for the Parliament Buildings. Ottawa was th
Marsh & McLennan Companies
Marsh & McLennan Companies, Inc. is a global professional services firm, headquartered in New York City with businesses in insurance brokerage, risk management, reinsurance services, talent management, investment advisory, management consulting. Its four main operating companies are Marsh, Oliver Wyman Group, Guy Carpenter. Marsh & McLennan Companies ranked #212 on the 2018 Fortune 500 ranking, the company's 24th year on the annual Fortune list, #458 on the 2017 Forbes Global 2000 List. Marsh & McLennan's 2016 revenue of $13.2 billion ranked it #1 on Business Insurance's ranking of the world's largest insurance brokers. Burroughs, Marsh & McLennan was formed by Henry W. Marsh and Donald R. McLennan in Chicago in 1905, it was renamed as Marsh & McLennan in 1906. The reinsurance firm Guy Carpenter & Company was acquired in 1923, a year after it was founded by Guy Carpenter. In 1959, it acquired the human resources consulting firm Mercer; the 1960s were notable for the company's development, including an initial public offering in 1962 and a 1969 reorganization that introduced a holding company configuration, with the company offering clients its services under the banners of separately managed companies.
In 1970, the company purchased Putnam Investments. In 1997, the company boosted its insurance brokerage business with a $1.8 billion acquisition of Johnson & Higgins, which, at the time, was one of MMC's biggest competitors in its brokerage business. The purchase occurred during a time of consolidation in the industry, and pushed Marsh & McLennan back above Aon as the world's largest insurance broker. Throughout the 2000s, the company further transformed and focused its operating strategy through various acquisitions and divestments at its subsidiaries, including: In 2000, Marsh & McLennan's HR consulting unit, Mercer, acquired Delta Consulting Group for its organizational development and change management expertise. In 2003, the company acquired Oliver Wyman, a management consultancy with a large financial services industry clientele; the Oliver Wyman acquisition served to transform MMC from a company known for its insurance brokerage services into one with a full-fledged management consulting practice that competes with McKinsey, Boston Consulting Group, and others.
In 2007, Marsh & McLennan sold its Putnam Investments mutual fund business to Power Financial Corp. for $3.9 billion in a divestment meant to focus the parent company on its risk and human capital businesses. In 2007, the company announced that its insurance brokerage unit, had received the first license for a wholly owned foreign company to operate an insurance brokerage business in China. In 2010, the company sold Kroll, its corporate intelligence and investigative unit, to Altegrity Inc. for $1.13 billion. Prior to this final deal and divestiture, Marsh & McLennan had been selling off smaller divisions within Kroll to further focus on its core risk and consulting businesses. In July 2017, Marsh & McLennan Cos. Inc. was ranked first in Business Insurance's world's largest brokers list. In 2004, the company's insurance brokerage unit, was embroiled in a bid rigging scandal that plagued much of the insurance industry, including brokerage rivals Aon and Willis Group, insurer AIG. In a lawsuit, Eliot Spitzer New York State’s attorney general, accused Marsh of not serving as an unbiased broker, leading to increased costs for clients and higher revenues for Marsh.
In early 2005, Marsh agreed to pay $850 million to settle the lawsuit and compensate clients whose commercial insurance it arranged from 2001 to 2004. Much of Marsh & McLennan's corporate strategy since 2005 stemmed from an effort to recover from this tumultuous period, leading to the firm's current organization and simplified focus on insurance services and consulting. At the time of the September 11 attacks in 2001, the corporation held offices on eight floors, 93 to 100, of the North Tower of the World Trade Center; when American Airlines Flight 11 crashed into the building, its offices spanned the entire impact zone, floors 93 to 99. Everyone present in the company's offices on the day of the attack died, as all stairwells in the impact zone were destroyed or blocked by the crash. Marsh & McLennan Companies is composed of two primary business segments: Risk and Insurance Services, and Consulting. Its operating companies include Marsh, which provides insurance broking and risk management consulting and whose president and CEO since 2017 has been John Doyle;
Guy Carpenter, a risk and reinsurance intermediary; Mercer, offering health, retirement and investment consulting services; and Oliver Wyman Group, a collection of management consulting firms.
Advocacy is an activity by an individual or group that aims to influence decisions within political and social systems and institutions. Advocacy can include many activities that a person or organization undertakes, including media campaigns, public speaking, publishing research, conducting exit polls, or filing an amicus brief. Lobbying is a form of advocacy where a direct approach is made to legislators on an issue, and it plays a significant role in modern politics. Research has started to address how advocacy groups in the United States and Canada are using social media to facilitate civic engagement and collective action. An advocate is someone who speaks or acts in support of a cause, a policy, or another person. There are several forms of advocacy, each representing a different approach to initiating change in society. One of the most popular forms is social justice advocacy; the initial definition does not encompass the notions of power relations, people's participation and a vision of a just society as promoted by social justice advocates.
For them, advocacy represents the series of actions taken and issues highlighted to change the "what is" into a "what should be", considering that this "what should be" is a more decent and a more just society. Those actions, which vary with the political and social environment in which they are conducted, have several points in common. They:
- Question the way policy is administered
- Participate in agenda-setting as they raise significant issues
- Target political systems "because those systems are not responding to people's needs"
- Are inclusive and engaging
- Propose policy solutions
- Open up space for public argumentation

Other forms of advocacy include: Budget advocacy: another aspect of advocacy that ensures proactive engagement of Civil Society Organizations with the government budget to make the government more accountable to the people and promote transparency. Budget advocacy enables citizens and social action groups to compel the government to be more alert to the needs and aspirations of people in general and the deprived sections of the community.
Bureaucratic advocacy: people considered "experts" have a better chance of succeeding at presenting their issues to decision-makers. They use bureaucratic advocacy to influence the agenda. Express versus issue advocacy: these two types of advocacy, when grouped together, refer to a debate in the United States over whether a group is expressly making its desire known that voters should cast ballots in a particular way, or whether a group has a long-term issue that is not specific to a campaign and election season. Health advocacy: supports and promotes patients' health care rights as well as enhancing community health and policy initiatives that focus on the availability and quality of care. Ideological advocacy: in this approach, groups fight, sometimes during protests, to advance their ideas in decision-making circles. Interest-group advocacy: lobbying is the main tool used by interest groups doing mass advocacy; it is a form of action that does not always succeed at influencing political decision-makers, as it requires resources and organization to be effective.
Legislative advocacy: the "reliance on the state or federal legislative process" as part of a strategy to create change. Mass advocacy: any type of action taken by large groups. Media advocacy: "the strategic use of the mass media as a resource to advance a social or public policy initiative". In Canada, for example, the Manitoba Public Insurance campaigns illustrate how media advocacy was used to fight alcohol and tobacco-related health issues. We can also consider the role of health advocacy and the media in "the enactment of municipal smoking bylaws in Canada between 1970 and 1995." Special education advocacy: advocacy with a "specific focus on the educational rights of students with disabilities."

Different contexts in which advocacy is used: In a legal/law context: an "advocate" is the title of a specific person, authorized/appointed in some way to speak on behalf of a person in a legal process. In a political context: an "advocacy group" is an organized collection of people who seek to influence political decisions and policy, without seeking election to public office.
In a social care context: both terms are used in the UK in the context of a network of interconnected organisations and projects which seek to benefit people who are in difficulty. In the context of inclusion: Citizen Advocacy organisations seek to cause benefit by reconnecting people who have become isolated; their practice was defined in two key documents: CAPE, and Learning from Citizen Advocacy Programs.

Advocacy in all its forms seeks to ensure that people, particularly those who are most vulnerable in society, are able to:
- Have their voice heard on issues that are important to them
- Defend and safeguard their rights
- Have their views and wishes genuinely considered when decisions are being made about their lives

Advocacy is a process of supporting and enabling people to:
- Express their views and concerns
- Access information and services
- Defend and promote their rights and responsibilities
- Explore choices and options

Groups involved in advocacy work have been using the Internet to accomplish organizational goals.
It has been argued that the Internet helps to increase the speed and effectiveness of advocacy-related communication as well as mobilization efforts, suggesting that social media are beneficial to the advocacy community. People advocate for a large variety of topics; some of these are clear-cut social issues that are univers
Chris Austin Hadfield is a Canadian retired astronaut and former Royal Canadian Air Force fighter pilot. The first Canadian to walk in space, Hadfield has flown two Space Shuttle missions and served as commander of the International Space Station. Hadfield, raised on a farm in southern Ontario, was inspired as a child when he watched the Apollo 11 Moon landing on TV. He attended high school in Oakville and Milton and earned his glider pilot licence as a member of the Royal Canadian Air Cadets. He earned an engineering degree at Royal Military College. While in the military he learned to fly various types of aircraft, became a test pilot, and flew several experimental planes; as part of an exchange program with the United States Navy and United States Air Force, he obtained a master's degree in aviation systems at the University of Tennessee Space Institute. In 1992, he was accepted into the Canadian astronaut program by the Canadian Space Agency. He first flew in space aboard STS-74 in November 1995 as a mission specialist.
During the mission he visited the Russian space station Mir. In April 2001 he flew again on STS-100 and visited the International Space Station, where he walked in space and helped to install the Canadarm2. In December 2012 he flew for a third time aboard Soyuz TMA-07M and joined Expedition 34 on the ISS. He was a member of this expedition until March 2013, when he became the commander of the ISS as part of Expedition 35. He was responsible for a crew of five astronauts and helped to run dozens of scientific experiments dealing with the impact of low gravity on human biology. During the mission, he gained popularity by chronicling life aboard the space station and taking pictures of the Earth, posting them on various social media platforms to a large following of people around the world. He was a guest on television news and talk shows and gained further popularity by playing the International Space Station's guitar in space. His mission ended in May 2013. Shortly after returning, he announced his retirement, capping a 35-year career as a military pilot and an astronaut.
Hadfield was born in Ontario. His parents are Roger and Eleanor Hadfield, who live in Milton, Ontario. Hadfield was raised on a corn farm in southern Ontario and became interested in flying at a young age, and in being an astronaut at age nine when he saw the Apollo 11 Moon landing on television. He is married to his high-school girlfriend Helene, and they have three adult children: Kyle, Evan and Kristin Hadfield. Hadfield used to be a ski instructor at Glen Eden Ski Area before becoming a test pilot. Hadfield is of southern Scottish descent. He is a devoted fan of the Toronto Maple Leafs and wore a Leafs jersey under his spacesuit during his Soyuz TMA-07M reentry in May 2013. After the 2012 NHL lockout ended, Hadfield tweeted a photo of himself holding a Maple Leafs logo and stated he was "ready to cheer on from orbit". He sang the Canadian national anthem during the Toronto Maple Leafs and Montreal Canadiens game on January 18, 2014 at the Air Canada Centre in Toronto.
As a member of the Royal Canadian Air Cadets, he earned a glider pilot scholarship at age 15 and a powered pilot scholarship at age 16. After graduating from high school in 1978, he joined the Canadian Armed Forces and spent two years at Royal Roads Military College followed by two years at the Royal Military College, where he received a bachelor's degree in mechanical engineering in 1982. Before graduating, he underwent basic flight training at CFB Portage la Prairie. In 1983, he took honours as the top graduate from Basic Jet Training at CFB Moose Jaw and went on to train as a tactical fighter pilot with 410 Tactical Fighter Operational Training Squadron at CFB Cold Lake, flying the Canadair CF-116 Freedom Fighter and the McDonnell Douglas CF-18 Hornet. After completing his fighter training, Hadfield flew CF-18 Hornets with 425 Tactical Fighter Squadron, flying intercept missions for NORAD. He was the first CF-18 pilot to intercept a Soviet Tupolev Tu-95 long-range bomber in the Canadian Arctic.
In the late 1980s, Hadfield attended the US Air Force Test Pilot School at Edwards Air Force Base and served as an exchange officer with the US Navy at Strike Test Directorate at the Patuxent River Naval Air Station. His accomplishments from 1989 to 1992 included testing the McDonnell Douglas F/A-18 Hornet and LTV A-7 Corsair II aircraft. In May 1992, Hadfield graduated with a master's degree in aviation systems from the University of Tennessee Space Institute, where his thesis concerned high-angle-of-attack aerodynamics of the F/A-18 Hornet fighter jet. In total, Hadfield has flown over 70 different types of aircraft. Hadfield was selected to become one of four new Canadian astronauts from a field of 5,330 applicants in June 1992. Three of those four have flown in space. He was assigned by the Canadian Space Agency to the NASA Johnson Space Center in Houston, Texas in August, where he addressed technical and safety issues for Shuttle Operations Development, contributed to the development of the glass shuttle cockpit, and supported shuttle launches.
An inclined plane, also known as a ramp, is a flat supporting surface tilted at an angle, with one end higher than the other, used as an aid for raising or lowering a load. The inclined plane is one of the six classical simple machines defined by Renaissance scientists. Inclined planes are widely used to move heavy loads over vertical obstacles; examples vary from a ramp used to load goods into a truck, to a person walking up a pedestrian ramp, to an automobile or railroad train climbing a grade.
Moving an object up an inclined plane requires less force than lifting it straight up, at a cost of an increase in the distance moved. The mechanical advantage of an inclined plane, the factor by which the force is reduced, is equal to the ratio of the length of the sloped surface to the height it spans. Due to conservation of energy, the same amount of mechanical energy (work) is required to lift a given object by a given vertical distance, disregarding losses from friction, but the inclined plane allows the same work to be done with a smaller force exerted over a greater distance.
The angle of friction, also sometimes called the angle of repose, is the maximum angle at which a load can rest motionless on an inclined plane due to friction, without sliding down. This angle is equal to the arctangent of the coefficient of static friction μs between the surfaces.
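As a quick illustration, the angle of friction can be computed directly from the coefficient of static friction; a minimal Python sketch, where the value of μs is only an assumed example:

import math

mu_s = 0.5  # assumed coefficient of static friction (illustrative value)
angle_of_repose = math.degrees(math.atan(mu_s))
print(round(angle_of_repose, 1))  # 26.6 degrees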
Two other simple machines are often considered to be derived from the inclined plane. The wedge can be considered a moving inclined plane or two inclined planes connected at the base. The screw consists of a narrow inclined plane wrapped around a cylinder.
The term may also refer to a specific implementation: a straight ramp cut into a steep hillside for transporting goods up and down the hill. It may include cars on rails or pulled up by a cable system, as in a funicular or cable railway, such as the Johnstown Inclined Plane.
Inclined planes are widely used in the form of loading ramps to load and unload goods on trucks, ships and planes. Wheelchair ramps are used to allow people in wheelchairs to get over vertical obstacles without exceeding their strength. Escalators and slanted conveyor belts are also forms of inclined plane. In a funicular or cable railway a railroad car is pulled up a steep inclined plane using cables. Inclined planes also allow heavy fragile objects, including humans, to be safely lowered down a vertical distance by using the normal force of the plane to reduce the gravitational force. Aircraft evacuation slides allow people to rapidly and safely reach the ground from the height of a passenger airliner.
Other inclined planes are built into permanent structures. Roads for vehicles and railroads have inclined planes in the form of gradual slopes, ramps, and causeways to allow vehicles to surmount vertical obstacles such as hills without losing traction on the road surface. Similarly, pedestrian paths and sidewalks have gentle ramps to limit their slope, to ensure that pedestrians can keep traction. Inclined planes are also used as entertainment for people to slide down in a controlled way, in playground slides, water slides, ski slopes and skateboard parks.
Simon Stevin (Stevinus) derived the mechanical advantage of the inclined plane by an argument that used a string of beads. He imagined two inclined planes of equal height but different slopes, placed back-to-back as in a prism. A loop of string with beads at equal intervals is draped over the inclined planes, with part hanging down below. The beads resting on the planes act as loads on the planes, held up by the tension force in the string at point T. In outline, Stevin's argument is that the chain cannot start moving on its own (that would be perpetual motion), and the hanging part below is symmetrical and so contributes no net pull; therefore the beads on the two planes must balance each other, which means the weight each plane supports is proportional to the length of plane it rests on.
As pointed out by Dijksterhuis, Stevin's argument is not completely tight. The forces exerted by the hanging part of the chain need not be symmetrical because the hanging part need not retain its shape when let go. Even if the chain is released with a zero angular momentum, motion including oscillations is possible unless the chain is initially in its equilibrium configuration, a supposition which would make the argument circular.
Inclined planes have been used by people since prehistoric times to move heavy objects. The sloping roads and causeways built by ancient civilizations such as the Romans are examples of early inclined planes that have survived, and show that they understood the value of this device for moving things uphill. The heavy stones used in ancient stone structures such as Stonehenge are believed to have been moved and set in place using inclined planes made of earth, although it is hard to find evidence of such temporary building ramps. The Egyptian pyramids were constructed using inclined planes, and siege ramps enabled ancient armies to surmount fortress walls. The ancient Greeks constructed a paved ramp 6 km (3.7 miles) long, the Diolkos, to drag ships overland across the Isthmus of Corinth.
However, the inclined plane was the last of the six classic simple machines to be recognised as a machine. This is probably because it is a passive, motionless device (the load is the moving part), and also because it is found in nature in the form of slopes and hills. Although they understood its use in lifting heavy objects, the ancient Greek philosophers who defined the other five simple machines did not include the inclined plane as a machine. This view persisted among a few later scientists; as late as 1826 Karl von Langsdorf wrote that an inclined plane "...is no more a machine than is the slope of a mountain." The problem of calculating the force required to push a weight up an inclined plane (its mechanical advantage) was attempted by the Greek philosophers Heron of Alexandria (c. 10 - 60 CE) and Pappus of Alexandria (c. 290 - 350 CE), but they got it wrong.
It wasn't until the Renaissance that the inclined plane was solved mathematically and classed with the other simple machines. The first correct analysis of the inclined plane appeared in the work of the enigmatic 13th-century author Jordanus de Nemore; however, his solution was apparently not communicated to other philosophers of the time. Girolamo Cardano (1570) proposed the incorrect solution that the input force is proportional to the angle of the plane. Then at the end of the 16th century, three correct solutions were published within ten years, by Michael Varro (1584), Simon Stevin (1586), and Galileo Galilei (1592). Although it was not the first, the derivation of Flemish engineer Simon Stevin is the most well known, because of its originality and use of a string of beads (see box). In 1600, Italian scientist Galileo Galilei included the inclined plane in his analysis of simple machines in Le Meccaniche ("On Mechanics"), showing its underlying similarity to the other machines as a force amplifier.
The first elementary rules of sliding friction on an inclined plane were discovered by Leonardo da Vinci (1452-1519), but remained unpublished in his notebooks. They were rediscovered by Guillaume Amontons (1699) and were further developed by Charles-Augustin de Coulomb (1785). Leonhard Euler (1750) showed that the tangent of the angle of repose on an inclined plane is equal to the coefficient of friction.
The mechanical advantage of an inclined plane depends on its slope, meaning its gradient or steepness. The smaller the slope, the larger the mechanical advantage, and the smaller the force needed to raise a given weight. A plane's slope s is equal to the difference in height between its two ends, or "rise", divided by its horizontal length, or "run": s = rise / run. It can also be expressed by the angle θ the plane makes with the horizontal, where tan θ = rise / run.
The mechanical advantage MA of a simple machine is defined as the ratio of the output force exerted on the load to the input force applied. For the inclined plane the output load force is just the gravitational force of the load object on the plane, its weight Fw. The input force is the force Fi exerted on the object, parallel to the plane, to move it up the plane. The mechanical advantage is therefore MA = Fw / Fi.
The MA of an ideal inclined plane without friction is sometimes called ideal mechanical advantage (IMA) while the MA when friction is included is called the actual mechanical advantage (AMA).
Frictionless inclined plane
If there is no friction between the object being moved and the plane, the device is called an ideal inclined plane. This condition might be approached if the object is rolling, like a barrel, or supported on wheels or casters. Due to conservation of energy, for a frictionless inclined plane the work done on the load lifting it, Wout, is equal to the work done by the input force, Win: Wout = Win.
Work is defined as the force multiplied by the displacement an object moves. The work done on the load is just equal to its weight multiplied by the vertical displacement it rises, which is the "rise" of the inclined plane: Wout = Fw × Rise.
The input work is equal to the force Fi on the object times the diagonal length of the inclined plane: Win = Fi × L, where L is the length of the sloped surface.
Substituting these values into the conservation of energy equation above and rearranging gives MA = Fw / Fi = L / Rise.
To express the mechanical advantage by the angle θ of the plane, it can be seen from the diagram (above) that sin θ = Rise / L, so MA = Fw / Fi = 1 / sin θ.
So the mechanical advantage of a frictionless inclined plane is equal to the reciprocal of the sine of the slope angle. The input force Fi from this equation is the force needed to hold the load motionless on the inclined plane, or push it up at a constant velocity. If the input force is greater than this, the load will accelerate up the plane; if the force is less, it will accelerate down the plane.
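A short Python sketch of the frictionless relations above; the angle and weight are chosen only for illustration, and the function names are not from the original text.

import math

def ideal_mechanical_advantage(theta_deg):
    # MA of a frictionless inclined plane: 1 / sin(theta)
    return 1 / math.sin(math.radians(theta_deg))

def holding_force(weight, theta_deg):
    # Force parallel to the plane needed to hold the load: Fw / MA = Fw * sin(theta)
    return weight / ideal_mechanical_advantage(theta_deg)

print(round(ideal_mechanical_advantage(30), 3))  # 2.0
print(round(holding_force(100, 30), 3))          # 50.0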
Inclined plane with friction
Where there is friction between the plane and the load, as for example with a heavy box being slid up a ramp, some of the work applied by the input force is dissipated as heat by friction, Wfric, so less work is done on the load. Due to conservation of energy, the sum of the output work and the frictional energy losses is equal to the input work: Win = Wout + Wfric.
Therefore, more input force is required, and the mechanical advantage is lower, than if friction were not present. With friction, the load will only move if the net force parallel to the surface is greater than the frictional force Ff opposing it. The maximum friction force is given by Ffmax = μ Fn,
where Fn is the normal force between the load and the plane, directed normal to the surface, and μ is the coefficient of static friction between the two surfaces, which varies with the material. When no input force is applied, if the inclination angle θ of the plane is less than some maximum value φ, the component of gravitational force parallel to the plane will be too small to overcome friction, and the load will remain motionless. This angle is called the angle of repose and depends on the composition of the surfaces, but is independent of the load weight. It is shown below that the tangent of the angle of repose φ is equal to μ: tan φ = μ.
With friction, there is always some range of input force Fi for which the load is stationary, neither sliding up nor down the plane, whereas with a frictionless inclined plane there is only one particular value of input force for which the load is stationary.
- The applied force, Fi exerted on the load to move it, which acts parallel to the inclined plane.
- The weight of the load, Fw, which acts vertically downwards
- The force of the plane on the load. This can be resolved into two components:
- The normal force Fn of the inclined plane on the load, supporting it. This is directed perpendicular (normal) to the surface.
- The frictional force, Ff of the plane on the load acts parallel to the surface, and is always in a direction opposite to the motion of the object. It is equal to the normal force multiplied by the coefficient of static friction μ between the two surfaces.
Using Newton's second law of motion the load will be stationary or in steady motion if the sum of the forces on it is zero. Since the direction of the frictional force is opposite for the case of uphill and downhill motion, these two cases must be considered separately:
- Uphill motion: The total force on the load is toward the uphill side, so the frictional force is directed down the plane, opposing the input force.
- The mechanical advantage for uphill motion is MA = Fw / Fi = cos φ / sin(θ + φ),
- where φ = arctan μ is the angle of friction (the angle of repose). This is the condition for impending motion up the inclined plane. If the applied force Fi is greater than given by this equation, the load will move up the plane.
- Downhill motion: The total force on the load is toward the downhill side, so the frictional force is directed up the plane.
- The mechanical advantage for downhill motion is MA = Fw / Fi = cos φ / sin(θ - φ).
- This is the condition for impending motion down the plane; if the applied force Fi is less than given in this equation, the load will slide down the plane. There are three cases:
- θ < φ: The mechanical advantage is negative. In the absence of applied force the load will remain motionless, and requires some negative (downhill) applied force to slide down.
- θ = φ: The 'angle of repose'. The mechanical advantage is infinite. With no applied force, the load will not slide, but the slightest negative (downhill) force will cause it to slide.
- θ > φ: The mechanical advantage is positive. In the absence of applied force the load will slide down the plane, and requires some positive (uphill) force to hold it motionless (see the sketch below).
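A minimal Python sketch of these relations, using the angle of friction φ = arctan μ exactly as in the expressions above; the numerical values and the function name are only illustrations.

import math

def ma_with_friction(theta_deg, mu, direction="up"):
    # Mechanical advantage Fw/Fi at impending motion, with phi = arctan(mu):
    #   up the plane:   MA = cos(phi) / sin(theta + phi)
    #   down the plane: MA = cos(phi) / sin(theta - phi)
    theta = math.radians(theta_deg)
    phi = math.atan(mu)
    sign = 1 if direction == "up" else -1
    return math.cos(phi) / math.sin(theta + sign * phi)

print(round(ma_with_friction(30, 0.2, "up"), 2))    # about 1.49, less than the ideal MA of 2.0
print(round(ma_with_friction(10, 0.2, "down"), 2))  # negative: theta < phi, so the load stays put on its own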
Mechanical advantage using power
The mechanical advantage of an inclined plane is the ratio of the weight of the load on the ramp to the force required to pull it up the ramp. If energy is not dissipated or stored in the movement of the load, then this mechanical advantage can be computed from the dimensions of the ramp.
In order to show this, let the position r of a rail car on the ramp, which makes an angle θ with the horizontal, be given by r = R (cos θ, sin θ),
where R is the distance along the ramp. The velocity of the car up the ramp is now v = (dR/dt) (cos θ, sin θ).
Because there are no losses, the power used by force F to move the load up the ramp equals the power out, which is the vertical lift of the weight W of the load.
The input power pulling the car up the ramp is given by Pin = F (dR/dt),
and the power out is Pout = W (dR/dt) sin θ, the rate at which the weight W gains height.
Equate the power in to the power out to obtain the mechanical advantage as MA = W / F = 1 / sin θ.
The mechanical advantage of an inclined plane can also be calculated from the ratio of the length of the ramp L to its height H, because the sine of the angle of the ramp is sin θ = H / L, so MA = W / F = L / H.
Example: If the height of a ramp is H = 1 meter and its length is L = 5 meters, then the mechanical advantage is MA = L / H = 5,
which means that a 20 lb force will lift a 100 lb load.
- The Liverpool Minard inclined plane has the dimensions 1804 meters by 37.50 meters, which provides a mechanical advantage of MA = 1804 / 37.50 ≈ 48.1,
so a 100 lb tension force on the cable will lift a 4810 lb load. The grade of this incline is 2%, which means the angle θ is small enough that sin θ ≈ tan θ.
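Both worked examples can be checked with a couple of lines of Python; this is just arithmetic on the figures quoted above.

# 5 m ramp with a 1 m rise
ma_ramp = 5 / 1
print(ma_ramp)  # 5.0, so a 20 lb force balances a 100 lb load (100 / 5 = 20)

# Liverpool Minard incline: 1804 m long, 37.50 m high
ma_minard = 1804 / 37.50
print(round(ma_minard, 1))  # about 48.1, so a 100 lb cable tension lifts roughly a 4810 lb load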
- Cole, Matthew (2008). Explore science, 2nd Ed. Pearson Education. p. 178. ISBN 978-981-06-2002-8.
- Merriam-Webster's collegiate dictionary, 11th Ed. Merriam-Webster. 2003. pp. 629. ISBN 978-0-87779-809-5.
- "The Inclined Plane". Math and science activity center. Edinformatics. 1999. Retrieved March 11, 2012.
- Silverman, Buffy (2009). Simple Machines: Forces in Action, 4th Ed. USA: Heinemann-Raintree Classroom. p. 7. ISBN 978-1-4329-2317-4.
- Ortleb, Edward P.; Richard Cadice (1993). Machines and Work. Lorenz Educational Press. pp. iv. ISBN 978-1-55863-060-4.
- Reilly, Travis (November 24, 2011). "Lesson 04:Slide Right on By Using an Inclined Plane". Teach Engineering. College of Engineering, Univ. of Colorado at Boulder. Archived from the original on May 8, 2012. Retrieved September 8, 2012.
- Scott, John S. (1993). Dictionary of Civil Engineering. Chapman & Hill. p. 14. ISBN 978-0-412-98421-1.
angle of friction [mech.] in the study of bodies sliding on plane surfaces, the angle between the perpendicular to the surface and the resultant force (between the body and the surface) when the body begins to slide. angle of repose [s.m.] for any given granular material the steepest angle to the horizontal at which a heaped surface will stand in stated conditions.
- Ambekar, A. G. (2007). Mechanism and Machine Theory. PHI Learning. p. 446. ISBN 978-81-203-3134-1.
Angle of repose is the limiting angle of inclination of a plane when a body, placed on the inclined plane, just starts sliding down the plane.
- Rosen, Joe; Lisa Quinn Gothard (2009). Encyclopedia of Physical Science, Volume 1. Infobase Publishing. p. 375. ISBN 978-0-8160-7011-4.
- Koetsier, Teun (2010). "Simon Stevin and the rise of Archimedean mechanics in the Renaissance". The Genius of Archimedes – 23 Centuries of Influence on Mathematics, Science and Engineering: Proceedings of an International Conference Held at Syracuse, Italy, June 8–10, 2010. Springer. pp. 94–99. ISBN 978-90-481-9090-4.
- Devreese, Jozef T.; Guido Vanden Berghe (2008). 'Magic is no magic': The wonderful world of Simon Stevin. WIT Press. pp. 136–139. ISBN 978-1-84564-391-1.
- Feynman, Richard P.; Robert B. Leighton; Matthew Sands (1963). The Feynman Lectures on Physics, Vol. I. USA: California Inst. of Technology. pp. 4.4–4.5. ISBN 978-0-465-02493-3.
- Dijksterhuis, E. J. (1943). Simon Stevin.
- Therese McGuire, Light on Sacred Stones, in Conn, Marie A.; Therese Benedict McGuire (2007). Not etched in stone: essays on ritual memory, soul, and society. University Press of America. p. 23. ISBN 978-0-7618-3702-2.
- Dutch, Steven (1999). "Pre-Greek Accomplishments". Legacy of the Ancient World. Prof. Steve Dutch's page, Univ. of Wisconsin at Green Bay. Retrieved March 13, 2012.
- Moffett, Marian; Michael W. Fazio; Lawrence Wodehouse (2003). A world history of architecture. Laurence King Publishing. p. 9. ISBN 978-1-85669-371-4.
- Peet, T. Eric (2006). Rough Stone Monuments and Their Builders. Echo Library. pp. 11–12. ISBN 978-1-4068-2203-8.
- Thomas, Burke (2005). "Transport and the Inclined Plane". Construction of the Giza Pyramids. world-mysteries.com. Retrieved March 10, 2012.
- Isler, Martin (2001). Sticks, stones, and shadows: building the Egyptian pyramids. USA: University of Oklahoma Press. pp. 211–216. ISBN 978-0-8061-3342-3.
- Sprague de Camp, L. (1990). The Ancient Engineers. USA: Barnes & Noble. p. 43. ISBN 978-0-88029-456-0.
- Karl von Langsdorf (1826) Machinenkunde, quoted in Reuleaux, Franz (1876). The kinematics of machinery: Outlines of a theory of machines. MacMillan. pp. 604.
- for example, the lists of simple machines left by Roman architect Vitruvius (c. 80 – 15 BCE) and Greek philosopher Heron of Alexandria (c. 10 – 70 CE) consist of the five classical simple machines, excluding the inclined plane. – Smith, William (1848). Dictionary of Greek and Roman antiquities. London: Walton and Maberly; John Murray. p. 722., Usher, Abbott Payson (1988). A History of Mechanical Inventions. USA: Courier Dover Publications. pp. 98, 120. ISBN 978-0-486-25593-4.
- Heath, Thomas Little (1921). A History of Greek Mathematics, Vol. 2. UK: The Clarendon Press. pp. 349, 433–434.
- Egidio Festa and Sophie Roux, The enigma of the inclined plane in Laird, Walter Roy; Sophie Roux (2008). Mechanics and natural philosophy before the scientific revolution. USA: Springer. pp. 195–221. ISBN 978-1-4020-5966-7.
- Meli, Domenico Bertoloni (2006). Thinking With Objects: The Transformation of Mechanics in the Seventeenth Century. JHU Press. pp. 35–39. ISBN 978-0-8018-8426-9.
- Boyer, Carl B.; Uta C. Merzbach (2010). A History of Mathematics, 3rd Ed. John Wiley and Sons. ISBN 978-0-470-63056-3.
- Usher, Abbott Payson (1988). A History of Mechanical Inventions. Courier Dover Publications. p. 106. ISBN 978-0-486-25593-4.
- Machamer, Peter K. (1998). The Cambridge Companion to Galileo. London: Cambridge University Press. pp. 47–48. ISBN 978-0-521-58841-6.
- Armstrong-Hélouvry, Brian (1991). Control of machines with friction. USA: Springer. p. 10. ISBN 978-0-7923-9133-3.
- Meyer, Ernst (2002). Nanoscience: friction and rheology on the nanometer scale. World Scientific. p. 7. ISBN 978-981-238-062-3.
- Handley, Brett; David M. Marshall; Craig Coon (2011). Principles of Engineering. Cengage Learning. pp. 71–73. ISBN 978-1-4354-2836-2.
- Dennis, Johnnie T. (2003). The Complete Idiot's Guide to Physics. Penguin. pp. 116–117. ISBN 978-1-59257-081-2.
- Nave, Carl R. (2010). "The Incline". Hyperphysics. Dept. of Physics and Astronomy, Georgia State Univ. Retrieved September 8, 2012.
- Martin, Lori (2010). "Lab Mech14:The Inclined Plane - A Simple Machine" (PDF). Science in Motion. Westminster College. Retrieved September 8, 2012.
- Pearson (2009). Physics class 10 - The IIT Foundation Series. New Delhi: Pearson Education India. p. 69. ISBN 978-81-317-2843-7.
- Bansal, R.K (2005). Engineering Mechanics and Strength of Materials. Laxmi Publications. pp. 165–167. ISBN 978-81-7008-094-7.
- This derives slightly more general equations which cover force applied at any angle: Gujral, I.S. (2008). Engineering Mechanics. Firewall Media. pp. 275–277. ISBN 978-81-318-0295-3.
|Wikimedia Commons has media related to Inclined planes.| |
Python provides the List, Tuple, Dictionary, and Set data types for storing sequences. Python also provides the
filter() method, which tests each element of a sequence against a given condition and keeps only the elements for which the test returns True.
filter() Method Syntax
The filter() method has the following syntax.
filter( FUNCTION , SEQUENCE )
- FUNCTION is the function which will test each item or element of the given SEQUENCE and return True or False.
- SEQUENCE is a sequence type, such as a List, Dictionary, or Tuple, whose items will be tested with the given FUNCTION.
The list is the most popular sequential data type in Python, and the filter() method is most often used to filter the items of a list. In the following example, we pick out the odd numbers from a list of mixed numbers by using a function named odd(). Keep in mind that filter() returns an iterable object which yields the matching items when iterated with a for loop.

mylist = [2, 5, 4, 6, 8, 1, 0]

def odd(number):
    if number in [1, 3, 5, 7, 9]:
        return True
    else:
        return False

odd_numbers = filter(odd, mylist)

for number in odd_numbers:
    print(number)
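For comparison, the same selection can also be written as a list comprehension. This is an illustrative alternative rather than part of the original example; it assumes the mylist and odd() defined above.

# A list comprehension that keeps the same items filter(odd, mylist) keeps.
odd_numbers_list = [number for number in mylist if odd(number)]

print(odd_numbers_list)  # prints [5, 1] for the mylist above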
filter() Method with Lambda Function
Alternatively, the filter() method can be used with a lambda function. The lambda is written directly inside the filter() call, without defining a separate function with def.

mylist = [2, 5, 4, 6, 8, 1, 0]

odd_numbers = filter(lambda number: number in [1, 3, 5, 7, 9], mylist)

for number in odd_numbers:
    print(number)
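Because filter() returns a lazy iterator that can be consumed only once, it is often convenient to materialize the result with list(). The following is a minimal sketch of that pattern; the variable names are illustrative only.

mylist = [2, 5, 4, 6, 8, 1, 0]

# Materialize the filtered items into a list in one step.
even_numbers = list(filter(lambda number: number % 2 == 0, mylist))

print(even_numbers)  # prints [2, 4, 6, 8, 0]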
1. Literary Periods. The Indus Valley civilization flourished in northern India between
2500 and 1500 B.C. The Aryans, a group of nomadic warriors and herders, were the
earliest known migrants into India. They brought with them a well-developed
language and literature and a set of religious beliefs.
a) Vedic Period (1500 B.C. –500 B.C.). This period is named for the Vedas, a set
of hymns that formed the cornerstone of Aryan culture. Hindus consider the
Vedas, which were transmitted orally by priests, to be the most sacred of all
literature, for they believe these to have been revealed to humans directly by the gods.
• The Rigveda, which has come to mean “hymns of supreme sacred
knowledge,” is the foremost collection or Samhita made up of 1,028 hymns.
The oldest of the Vedas, it contains strong, energetic, non-speculative hymns,
often comparable to the psalms in the Old Testament. The Hindus regard
these hymns as divinely inspired or ‘heard’ directly from the gods.
Focus: Afro-Asian Literature
1. be familiar with the literary history, philosophy, religious
beliefs, and culture of the Afro-Asian nations
2. point out the universal themes, issues, and subject matter that
dominate Afro-Asian literature
3. interpret the significance and meaning of selected literary texts
4. identify outstanding writers and their major works
2. The Song of Creation
Then was not non-existent nor existent: there was no realm of air, no sky
What covered it and where? And what gave shelter? Was water there,
unfathomed depth of water?
Death was not then nor was there aught immortal: no sign was there, the
day’s and night’s divider.
That one thing, breathless, breathed by its own nature: apart from it was
Darkness there was: at first concealed in darkness, this All was
All that existed then was void and formless: by the great power of warmth was
born that unit.
b) Epic and Buddhist Age (500 B.C. – A.D.). The period of composition of the
two great epics, Mahabharata and the Ramayana. This time was also the growth
of later Vedic literature, new Sanskrit literature, and Buddhist literature in Pali.
The Dhammapada was also probably composed during this period. The Maurya
Empire (322-230 B.C.) ruled by Ashoka promoted Buddhism and preached
goodness, nonviolence, and ‘righteousness’ although this period was known for
warfare and iron-fisted rule. The Gupta Dynasty (A.D. 320-467) was the next
great political power. During this time, Hinduism reached a full flowering and
was evident in culture and the arts.
• The Mahabharata, traditionally ascribed to the sage Vyasa, consists of a
mass of legendary and didactic material that tells of the struggle for
supremacy between two groups of cousins, the Kauravas and the Pandavas, set
sometime around 3102 B.C. The poem is made up of almost 100,000 couplets divided
into 18 parvans or sections. It is an exposition on dharma (codes of conduct),
including the proper conduct of a king, of a warrior, of a man living in times
of calamity, and of a person seeking to attain emancipation from rebirth.
• The Bhagavad Gita (The Blessed Lord’s Song) is one of the greatest and
most beautiful of the Hindu scriptures. It is regarded by the Hindus in
somewhat the same way as the Gospels are by Christians. It forms part of
Book VI of the Mahabharata and is written in the form of a dialogue between the warrior Prince
Arjuna and his friend and charioteer, Krishna, who is also an earthly
incarnation of the god Vishnu.
From the Bhagavad-Gita
Arjuna: Krishna, what defines a man/ deep in contemplation whose insight/
and thought are sure? How would he speak?/ How would he sit? How
would he move?
Lord Krishna: When he gives up desires in his mind,/ is content with the self
within himself,/ then he is said to be a man/ whose insight is sure, Arjuna.
When suffering does not disturb his mind,/ when his craving for pleasures has
vanished,/ when attraction, fear, and anger are gone,/ he is called a sage
whose thought is sure.
• The Ramayana was composed in Sanskrit, probably not before 300 BC, by
the poet Valmiki and consists of some 24,000 couplets divided into seven
books. It reflects the Hindu values and forms of social organization, the
theory of karma, the ideals of wifehood, and feelings about caste, honor and
The poem describes the royal birth of Rama, his tutelage under the sage
Visvamitra, and his success in bending Siva’s mighty bow, thus winning Sita,
the daughter of King Janaka, for his wife. After Rama is banished from his
position as heir by an intrigue, he retreats to the forest with his wife and his
half brother, Laksmana. There Ravana, the demon-king of Lanka, carries off
Sita, who resolutely rejects his attentions. After numerous adventures Rama
slays Ravana and rescues Sita. When they return to his kingdom, however,
Rama learns that the people question the queen’s chastity, and he banishes her
to the forest where she gives birth to Rama’s two sons. The family is reunited
when the sons come of age, but Sita, after again protesting her innocence, asks
to be received by the earth, which swallows her up.
From the Ramayana: “Brother’s Faithfulness”
If my elder and his lady to the pathless forests wend,
Armed with bow and ample quiver Lakshman will on them attend,
Where the wild deer range the forest and the lordly tuskers roam,
And the bird of gorgeous plumage nestles in its jungle home,
Dearer far to me those woodlands where perennial bliss prevails!
Grant me then thy sweet permission, - faithful to thy glorious star,
Lakshman, shall not wait and tarry when his Rama wanders far,
Grant me then thy loving mandate, - Lakshman hath no wish to stay,
None shall bar the faithful younger when the elder leads the way!
c) Classical Period (A.D. – 1000 A.D.). The main literary language of northern
India during this period was Sanskrit, in contrast with the Dravidian languages of
southern India. Sanskrit, which means ‘perfect speech’ is considered a sacred
language, the language spoken by the gods and goddesses. As such, Sanskrit was
seen as the only appropriate language for the noblest literary works. Poetry and
drama peaked during this period. Beast fables such as the Panchatantra were
popular and often used by religious teachers to illustrate moral points.
• The Panchatantra is a collection of Indian beast fables originally written in
Sanskrit. In Europe, the work was known under the title The Fables of
Bidpai after the narrator, an Indian sage named Bidpai (called Vidyapati in
Sanskrit). It is intended as a textbook of artha (worldly wisdom); the
aphorisms tend to glorify shrewdness and cleverness more than helping of
others. The original text is a mixture of Sanskrit prose and stanzas of verse,
with the stories contained within one of five frame stories. The introduction,
which acts as an enclosing frame for the entire work, attributes the stories to a
learned Brahman named Vishnusarman, who used the form of animal fables to
instruct the three dull-witted sons of a king.
From the Panchatantra: “Right-Mind and Wrong-Mind”
The good and bad of given schemes/ wise thought must first reveal: the stupid
heron saw his chicks/ provide a mongoose meal.
• Sakuntala, a Sanskrit drama by Kalidasa, tells of the love between
Sakuntala and King Dushyanta. What begins as a physical attraction for both
of them becomes spiritual in the end as their love endures and surpasses all
difficulties. King Dushyanta is a noble and pious king who upholds his duties
above personal desire. Sakuntala, on the other hand, is a young girl who
matures beautifully because of her kindness, courage, and strength of will.
After a period of suffering, the two are eventually reunited. Emotion or rasa
dominates every scene in Sanskrit drama. These emotions vary from love to
anger, heroism to cowardice, joy to terror and allows the audience to take part
in the play and be one with the characters.
Excerpt from Sakuntala:
King. You are too modest. I feel honoured by the mere sight of you.
Shakuntala. Anusuya, my foot is cut on a sharp blade of grass, and my dress
is caught on an amaranth twig. Wait for me while I loosen it. (She casts a
lingering glance at the king, and goes out with her two friends.)
King. (sighing). They are gone. And I must go. The sight of Shakuntala has
made me dread the return to the city. I will make my men camp at a distance
from the pious grove. But I cannot turn my own thoughts from Shakuntala.
It is my body leaves my love, not I;/ My body moves away, but not my mind;
For back to her struggling fancies fly/ Like silken banners borne against the wind.
• The Little Clay Cart (Mrcchakatika) is attributed to Shudraka, a king. The
characters in this play include a Brahman merchant who has lost his money
through liberality, a rich courtesan in love with a poor young man, much
description of resplendent palaces, and both comic and tragic or near-tragic
PROLOGUE (Benediction upon the audience)
May His, may Shiva's meditation be
Your strong defense; on the Great Self thinks he,
Knowing full well the world's vacuity.
May Shiva's neck shield you from every harm,
That seems a threatening thunder-cloud, whereon,
Bright as the lightning-flash, lies Gauri's arm.
d) Medieval and Modern Age (A.D. 1000 – present). Persian influence on
literature was considerable during this period. Persian was the court language of
the Moslem rulers. In the 19th
century India came directly under the British Crown
and remained so until its Independence in 1947. British influence was strong and
modern-day Indians are primarily educated in English. Many have been brought
into the world of Western learning at the expense of learning about their own culture.
• Gitanjali: Song Offerings was originally published in India in 1910 and its
translation followed in 1912. In these prose translations, Rabindranath
Tagore uses imagery from nature to express the themes of love and the
internal conflict between spiritual longings and earthly desires.
I ask for a moment's indulgence to sit by thy side.
The works that I have in hand I will finish afterwards.
Away from the sight of thy face my heart knows no rest nor respite,
and my work becomes an endless toil in a shoreless sea of toil.
• The Taj Mahal, a poem by Sahir Ludhianvi, is about the mausoleum in
North India built by the Mogul emperor Shah Jahan for his wife Mumtaz-i-
Mahal. The façade of this grandiose structure is made of white marble and is
surrounded by water gardens, gateways, and walks. The tomb at the center of
the dome stands on a square block with towers at each corner. The
construction of the building took twenty years to complete, involving some 20,000 workers.
Excerpt from the Taj Mahal
Do dead king’s tombs delight you?
6. If so, look into your own dark home.
In this world, countless people have loved.
Who says their passions weren’t true?
They just couldn’t afford a public display like this.
• On Learning to be an Indian an essay by Santha Rama Rau illustrates the
telling effects of colonization on the lives of the people particularly the
younger generation. The writer humorously narrates the conflicts that arise
between her grandmother's traditional Indian values and the author’s own
Because Mother had to fight against the old standards, and because she was
brought up to believe in them, she has an emotional understanding of them
which my sister and I will never have. Brought up in Europe and educated in
preparatory and public schools in England, we felt that the conventions were
not only retrogressive and socially crippling to the country but also a little
2. Religions. Indian creativity is evident in religion as the country is the birthplace of
two important faiths: Hinduism, the dominant religion, and Buddhism, which
ironically became extinct in India but spread throughout Asia.
a) Hinduism, literally “the belief of the people of India,” is the predominant faith of
India and of no other nation. The Hindus are deeply absorbed with God and the
creation of the universe.
The Purusarthas are the three ends of man: dharma – virtue, duty,
righteousness, moral law; artha – wealth; and kama – love or pleasure. A fourth
end is moksha – the renunciation of duty, wealth and love in order to seek
spiritual perfection. It is achieved after the release from samsara, the cycle of
births and deaths. The Hindus believe that all reality is one and spiritual, and that
each individual soul is identical with this reality and shares its characteristics:
pure being, intelligence, and bliss. Everything that seems to divide the soul from
this reality is maya or illusion.
Life is viewed as an upward development through four stages of effort
called the four asramas: a) the student stage – applies to the rite of initiation into
the study of the Vedas; b) the householder stage – marries and fulfills the duties
as head of the family where he begets sons and earns a living; c) the stage of the
forest dweller – departs from home and renounces the social world; and d)
ascetic – stops performing any of the rituals or social duties of life in the world
and devotes time for reflection and meditation.
Kama refers to one of the proper pursuits of man in his role as
householder, that of pleasure and love. The Kama-sutra is a classic textbook on
erotics and other forms of pleasure and love, which is attributed to the sage Vatsyayana.
The Hindus regard Purusha, the Universal Spirit, as the soul and original
source of the universe. As the universal soul, Purusha is the life-giving principle
in all animated beings. As a personified human being, Purusha's body is the
source of all creation. The four Varnas serve as the theoretical basis for the
organization of the Hindu society. These were thought to have been created from Purusha's body:
- The Brahman (priest) was Purusha’s mouth. Their duty is to perform
sacrifices, to study and to teach the Vedas, and to guard the rules of
dharma. Because of their sacred work, they are supreme in purity and
- The Ksatriyas (warriors) are the arms. From this class arose the kings
who are the protectors of society.
- The Vaisyas (peasants) are the thighs. They live by trading, herding, and farming.
- The Sudras (serfs) are the feet. They engage in handicrafts and manual
occupation and they are to serve meekly the three classes above them.
They are strictly forbidden to mate with persons of a higher varna.
• The Upanishads form a highly sophisticated commentary on the religious
thought suggested by the poetic hymns of the Rigveda. The name implies,
according to some traditions, ‘sitting at the feet of the teacher.’ The most
important philosophical doctrine is the concept of a single supreme being, the
Brahman, and knowledge is directed toward reunion with it by the human
soul, the Atman or Self. The nature of eternal life is discussed, as are such
themes as the transmigration of souls and causality in creation.
b) Buddhism originated in India in the 6th
century B.C. This religion is based on
the teachings of Siddhartha Gautama, called Buddha, or the ‘Enlightened
One.’ Much of Buddha’s teaching is focused on self-awareness and self-
development in order to attain nirvana or enlightenment.
According to Buddhist beliefs, human beings are bound to the wheel of
life which is a continual cycle of birth, death, and suffering. This cycle is an
effect of karma in which a person’s present life and experiences are the result of
past thoughts and actions, and these present thoughts and actions likewise create
those of the future. The Buddhist scriptures uphold the Four Noble Truths and the
Noble Eightfold Path. The Four Noble Truths are: 1) life is suffering; 2) the
cause of suffering is desire; 3) the removal of desire is the removal of suffering;
and 4) the Noble Eightfold Path leads to the end of suffering. The Noble
Eightfold Path consists of: 1) right understanding; 2) right thought; 3) right
speech; 4) right action; 5) right means of livelihood; 6) right effort; 7) right
concentration; and 8) right meditation. The Buddhist truth states that bad actions
and bad feelings such as selfishness, greed, hostility, hate are evil not because
they harm others but because of their negative influence on the mental state of the
doer. It is in this sense that evil returns to punish the doer.
• The Dhammapada (Way of Truth) is an anthology of basic Buddhist
teaching in a simple aphoristic style. One of the best known books of the Pali
Buddhist canon, it contains 423 stanzas arranged in 26 chapters. These verses
are compared with the Letters of St. Paul in the Bible or with Christ’s
Sermon on the Mount.
As a fletcher makes straight his arrow, a wise man makes straight his
trembling and unsteady thought which is difficult to guard, difficult to hold back.
As a fish taken from his watery home and thrown on the dry ground, our
thought trembles all over in order to escape the dominion of Mara, the tempter.
It is good to tame the mind, which is difficult to hold in and flighty, rushing
wherever it listeth; a tamed mind brings happiness.
Let the wise man guard his thoughts, for they are difficult to perceive, very
artful, and they rush wherever they list: thoughts well guarded bring happiness.
Those who bridle their mind which travels far, moves about alone, is without a
body, and hides in the chamber of the heart, will be free from the bonds of
Mara, the tempter
3. Major Writers.
a) Kalidasa, a Sanskrit poet and dramatist, is probably the greatest Indian writer of
all time. As with most classical Indian authors, little is known about Kalidasa’s
person or his historical relationships. His poems suggest that he was a Brahman
(priest). Many works are traditionally ascribed to the poet, but scholars have
identified only six as genuine.
b) Rabindranath Tagore (1861-1941). The son of a Great Sage, Tagore is a
Bengali poet and mystic who won the Nobel Prize for Literature in 1913. Tagore
managed his father's estates and lived in close contact with the villagers. His
sympathy for their poverty and backwardness was later reflected in his works.
The death of his wife and two children brought him years of sadness, but this also
inspired some of his best poetry. Tagore was also a gifted composer and painter.
c) Prem Chand pseudonym of Dhanpat Rai Srivastava (1880-1936). Indian
author of numerous novels and short stories in Hindi and Urdu who pioneered in
adapting Indian themes to Western literary styles. He worked as a teacher before
joining Mahatma Gandhi’s anticolonial Noncooperation Movement.
• Sevasadana (House of Service). His first major novel deals with the
problems of prostitution and moral corruption among the Indian middle class.
• Manasarovar (The Holy Lake). A collection of 250 or so short stories which
contains most of Prem Chand’s best works.
• Godan (The Gift of a Cow). This last novel was Prem Chand’s masterpiece
and it deals with his favorite theme – the hard and unrewarding life of the Indian peasant.
d) Kamala Markandaya (1924). Her works concern the struggles of contemporary
Indians with conflicting Eastern and Western values. A Brahman, she studied at
Madras University then settled in England and married an Englishman. In her
fiction, Western values typically are viewed as modern and materialistic, and
Indian values as spiritual and traditional.
• Nectar in a Sieve. Her first novel and most popular work is about an Indian
peasant’s narrative of her difficult life.
e) R. K. Narayan (1906). One of the finest Indian authors of his generation writing
in English. He briefly worked as a teacher before deciding to devote himself
full-time to writing. All of Narayan’s works are set in the fictitious South Indian
town of Malgudi. They typically portray the peculiarities of human relationships
and the ironies of Indian daily life, in which modern urban existence clashes with
ancient tradition. His style is graceful, marked by genial humor, elegance, and simplicity.
• Swami and Friends. His first novel is an episodic narrative recounting the
adventures of a group of schoolboys.
• Novels: The English Teacher (1945), Waiting for the Mahatma (1955), The
Guide (1958), The Man-Eater of Malgudi (1961), The Vendor of Sweets
(1967), A Tiger for Malgudi (1983), and The World of Nagaraj (1990).
• Collection of Short Stories: Lawley Road (1956), A Horse and Two Goats
and Other Stories (1970), Under the Banyan Tree and Other Stories (1985),
and Grandmother’s Tale (1992).
f) Anita Desai (1937). An English-language Indian novelist and author of
children’s books, she is considered India’s premier imagist writer. She excelled
in evoking character and mood through visual images. Most of her works reflect
Desai’s tragic view of life.
• Cry, the Peacock. Her first novel addresses the theme of the suppression and
oppression of Indian women.
• Clear Light of Day. Considered the author’s most successful work, this is a
highly evocative portrait of two sisters caught in the lassitude of Indian life.
This was shortlisted for the 1980 Booker Prize.
• Fire on the Mountain. This work was criticized as relying too heavily on
imagery at the expense of plot and characterization, but it was praised for its
poetic symbolism and use of sounds. This won for her the Royal Society of
Literature’s Winifred Holtby Memorial Prize.
g) Vir Singh (1872-1957). A Sikh writer and theologian, he wrote at a time when
Sikh religion and politics and the Punjabi language were under heavy attack by
the English and Hindus. He extolled Sikh courage, philosophy, and ideals,
earning respect for the Punjabi language as a literary vehicle.
• Kalghi Dhar Chamatkar. This novel is about the life of the 17th-century Guru Gobind Singh.
• Other novels on Sikh philosophy and martial excellence include Sundri
(1898) and Bijai Singh (1899).
h) Arundhati Roy. A young female writer whose first book The God of Small
Things won for her a Booker Prize.
1. Historical Background. Chinese literature reflects the political and social history
of China and the impact of powerful religions that came from within and outside the
country. Its tradition goes back thousands of years and has often been inspired by
philosophical questions about the meaning of life, how to live ethically in society,
and how to live in spiritual harmony with the natural order of the universe.
a) Shang Dynasty (1600 B.C.). During this time, the people practiced a
religion based on the belief that nature was inhabited by many powerful gods and
spirits. Among the significant advances of this period were bronze working,
decimal system, a twelve-month calendar and a system of writing consisting of
b) Chou Dynasty (1100 B.C. – 221 B.C.). This was the longest of all the
dynasties and throughout most of this period China suffered from severe political
disunity and upheaval. This era was also known as the Hundred Schools period
because of the many competing philosophers and teachers who emerged, the most
influential among them being Lao Tzu, the proponent of Taoism, and Confucius,
the founder of Confucianism. Lao Tzu stressed freedom, simplicity, and the
mystical contemplation of nature whereas Confucius emphasized a code of social
conduct and stressed the importance of discipline, morality, and knowledge.
The Book of Songs (Shih Ching), first compiled in the 6th
century B.C., is the
oldest collection of Chinese poetry and is considered a model of poetic
expression and moral insight. The poems include court songs that entertained
the aristocracy, story songs that recounted Chou dynasty legends, hymns that
were sung in the temples accompanied by dance and brief folk songs and ballads.
Although these poems were originally meant to be sung, their melodies have
long been lost.
O Oriole, Yellow Bird
O oriole, yellow bird,/ Do not settle on the corn,
Do not peck at my millet./ The people of this land
Are not minded to nurture me./ I must go back, go home
To my own land and kin.
The Parables of the Ancient Philosophers illustrate the Taoist belief and the
humanism of the Chinese thought. In them can be seen the relativity of all things
as they pass through man’s judgment, the virtues of flexibility, and the
drawbacks of material progress.
The Missing Axe by Lieh Tzu
A man whose axe was missing suspected his neighbor’s son. The boy walked
like a thief, looked like a thief, and spoke like a thief. But the man found his axe
while he was digging in the valley, and the next time he saw his neighbor’s son,
the boy walked, looked, and spoke like any other child.
c) Ch’in Dynasty (221 B.C. – 207 B.C.). This period saw the unification of
China and the strengthening of central government. Roads connecting all parts
of the empire were built and the existing walls on the northern borders were
connected to form the Great Wall of China.
d) Han Dynasty (207 B.C. – A.D. 220). This period was one of the most
glorious eras of Chinese history and was marked by the introduction of
Buddhism from India.
e) T’ang Dynasty (A.D. 618-960). Fine arts and literature flourished during
this era which is viewed as the Golden Age of Chinese civilization. Among the
technological advances of this time were the invention of gunpowder and the art of block printing.
The T’ang Poets. Chinese lyrical poetry reached its height during the T’ang
Dynasty. Inspired by scenes of natural beauty, T’ang poets wrote about the
fragile blossoms in spring, the falling of leaves in autumn, or the changing shape
of the moon.
Conversation in the Mountains by Li Po
If you were to ask me why I dwell among green mountains,
I should laugh silently; my soul is serene.
The peach blossom follows the moving water;
There is another heaven and earth beyond the world of men.
A Meeting by Tu Fu
We were often separated
Like the Dipper and the morning star.
What night is tonight?
We are together in the candlelight.
How long does youth last?
Now we are all gray-haired.
Half of our friends are dead,
And both of us were surprised when
f) Sung Dynasty (A.D. 960 – 1279). This period was characterized by
delicacy and refinement; it was inferior in terms of literary arts but great in
learning. Professional poets were replaced by amateur writers. The practice of
g) Later Dynasties (A.D. 1260-1912). During the late 12th
and early 13th
centuries, northern China was overrun by Mongol invaders led by Genghis Khan
whose grandson Kublai Khan completed the Mongol conquest of China and
established the Yuan dynasty, the first foreign dynasty in China’s history. It was
during this time that Marco Polo visited China. Chinese rule was reestablished
after the Mongols were driven out of China and the Ming dynasty was
established. There was a growth of drama in colloquial language and a decline
of the language of learning. A second foreign dynasty, the Ch’ing was
established and China prospered as its population rapidly increased causing
major problems for its government.
h) Traditional Chinese Government. The imperial rule lasted in China for
over 2,000 years leading to a pyramid-shaped hierarchy in the government. The
emperor, known as the Son of Heaven, was a hereditary ruler and beneath him
were bureaucratic officials. An official government career was considered
prestigious and the selection was by means of government examinations. The
civil service examinations tested on the major Chinese works of philosophy and
poetry, requiring the composition of verse. Most government officials were
well-versed in literature and philosophy and many famous Chinese poets also
served in the government.
2. Philosophy and Religion. Chinese literature and all of Chinese culture have been
profoundly influenced by three great schools of thought: Confucianism, Taoism, and
Buddhism. Unlike Western religions, Chinese religions are based on the perception
of life as a process of continual change in which opposing forces, such as heaven and
earth or light and dark, balance one another. These opposites are symbolized by the
Yin and Yang. Yin, the passive and feminine force, counterbalances yang, the active
and masculine force, each contains a ‘seed’ of the other, as represented in the
traditional yin-yang symbol.
a) Confucianism provides the Chinese with both a moral order and an order
for the universe. It is not a religion but it makes individuals aware of their place
in the world and the behavior appropriate to it. It also provides a political and social order.
Confucius was China’s most famous teacher, philosopher, and political theorist,
whose ideas have influenced all civilizations of East Asia. According to tradition,
Confucius came from an impoverished family of the lower nobility. He became a
minor government bureaucrat but was never given a position of high office. He
criticized government policies and spent the greater part of his life educating a
group of disciples. Confucius was not a religious leader in the ordinary sense, for
his teaching was essentially a social ethic. Confucian politics is hierarchical but
not absolute and the political system is described by analogy with the family.
There are five key Confucian relationships: emperor and subject, father and son,
husband and wife, older brother and younger brother, friend and friend.
Confucian ethics is humanist. The following are Confucian tenets: a) jen or
human-heartedness refers to the qualities or forms of behavior that set men above the rest
of life on earth. It is the unique goodness of man which animals cannot aspire
to. Also known as ren, it is the measure of individual character and, as such, is the
goal of self-cultivation. The ideal individual results from acting according to li.
b) li refers to ritual, custom, propriety, and manner. Li is thought to be the means
by which life should be regulated. A person of li is a good person and a state
ordered by li is a harmonious and peaceful state. Li or de as a virtue is best
understood as a sacred power inherent in the very presence of the sage. The sage
was the inspiration for proper conduct and the model of behavior.
The Analects (Lun Yu) is one of the four Confucian texts. The sayings range
from brief statements to more extended dialogues between Confucius and his
students. Confucius believes that people should cultivate the inherent goodness
within themselves –unselfishness, courage, and honor – as an ideal of universal
moral and social harmony. The Analects instructs on moderation in all things
through moral education, the building of a harmonious family life, and the
development of virtues such as loyalty, obedience, and a sense of justice. It also
emphasizes filial piety and concern with social and religious rituals. To
Confucius, a person’s inner virtues can be fully realized only through concrete
acts of ‘ritual propriety’ or proper behavior toward other human beings.
From The Analects (II.1)
The Master said, “He who exercises government by means of his virtue may be
compared to the north polar star, which keeps its place and all the stars turn towards it.”
The Book of Changes (I Ching) is one of the Five Classics of Confucian
philosophy and has been primarily used for divination. This book is based on the
concept of change – the one constant of the universe. Although change is never-
ending, it too proceeds according to certain universal and observable patterns.
b) Taoism was expounded by Lao Tzu during the Chou Dynasty. Taoist
beliefs and influences are an important part of classical Chinese culture. “The
Tao” or “The Way” means the natural course that the world follows. To follow
the tao, or to “go with the flow,” is both wisdom and happiness. For the Taoist,
unhappiness comes from parting from the tao or from trying to flout it.
The Taoist political ideas are very passive: the good king does nothing, and by
this everything is done naturally. This idea presents an interesting foil to
Confucian theories of state, although the Taoists never represented any political
threat to the Confucianists. Whereas Confucianism stressed conformity and
reason in solving human problems, Taoism stressed the individual and the need
for human beings to conform to nature rather than to society.
Lao-tzu. Known as the “old philosopher”, Lao-zi is credited as the founder of
Taoism and an elder contemporary of Confucius who once consulted with him.
He was more pessimistic than Confucius was about what can be accomplished in
the world by human action. He counseled a far more passive approach to the
14. world and one’s fellows: one must be cautious and let things speak for
themselves. He favored a more direct relationship between the individual self and
The Tao-Te Ching (Classic of the Way of Power) is believed to have been
written between the 8th
centuries B.C. The basic concept of the dao is wu-
wei or “non-action” which means no unnatural action, rather than complete
passivity. It implies spontaneity, non-interference, letting things take their natural
course i.e., “Do nothing and everything else is done.” Chaos ceases, quarrels end,
and self-righteous feuding disappears because the dao is allowed to flow freely.
Realize the Simple Self
Banish wisdom, discard knowledge,
And the people shall profit a hundredfold;
Banish love, discard justice,
And the people shall recover the love of their kin;
Banish cunning, discard utility,
And the thieves and brigands shall disappear.
As these three touch the externals and are inadequate,
The people have need of what
they can depend upon:
Reveal thy Simple Self,
Embrace the Original Nature,
Check thy selfishness,
Curtail thy desires.
c) Buddhism was imported from India during the Han dynasty. Buddhist
thought stresses the importance of ridding oneself of earthly desires and of
seeking ultimate peace and enlightenment through detachment. With its stress on
living ethically and its de-emphasis on material concerns, Buddhism appealed to
both Confucians and Taoists.
3. Genres in Chinese Poetry. Poetry has always been highly valued in Chinese culture and was
considered superior to prose. Chief among its characteristics are lucidity, brevity,
subtlety, suggestiveness or understatement, and its three-fold appeal to intellect,
emotion, and calligraphy. There are five principal genres in Chinese poetry:
b) shih was the dominant Chinese poetic form from the 2nd
through the 12th
century characterized by: i) an even number of lines; ii) the same number of words
in each line, in most cases five or seven; and iii) the occurrence of rhymes at the
ends of the even-numbered lines. Shih poems often involve the use of parallelism,
or couplets that are similar in structure or meaning.
c) sao was inspired by li sao or ‘encountering sorrow’, a poem of lamentation and
protest authored by China’s first known great poet, Chu Yuan (332-295 B.C.). It
was an unusually long poem consisting of two parts: i) an autobiographical account
that is Confucian in overtones; and ii) a narration of an imaginary journey
undertaken by the persona. The sao enables the poets to display their creativity of
describing China’s flora and fauna, both real and imaginary. It is also filled with
melancholia for unrewarded virtue.
d) fu was a poem partially expository and partly descriptive involving a single thought
or sentiment usually expressed in a reflective manner. Language ranges from the
simple to the rhetorical.
e) lu-shih or ‘regulation poetry’ was developed during the Tang dynasty but has
remained popular even in the present times. It is an octave consisting of five or
seven syllabic verses with a definite rhyming scheme with all even lines rhyming
together and the presence of the caesura in every line. The first four lines of this
poem are the ching (scene) while the remaining four lines describe the ch’ing
(emotion). Thus, emotion evolves from the setting or atmosphere and the two
become fused, resulting in a highly focused reflection of the persona’s loneliness
but with determination to struggle.
f) chueh-chu or truncated poetry is a shorter version of the lu-shih and was also
popular during the Tang dynasty. It contains only four lines but within its twenty or
twenty-eight syllables or characters were vivid pictures of natural beauty.
g) tzu was identified with the Sung dynasty. It is not governed by a fixed number of
verses nor a fixed number of characters per verse. The tzu lyrics were sung to the
tunes of popular melodies.
4. Conventions of Chinese Theater. Chinese drama may be traced to the song and
dances of the chi (wizards) and the wu (witches) whom the people consulted to
exorcise evil spirits, to bring rain, to ensure a bountiful harvest, etc. – an origin in worship
or in some sacred ritual.
a) There are four principal roles: sheng, tau, ching, and chao.
• The sheng is the prerogative of the leading actor, usually a male character, a
scholar, a statesman, a warrior patriot and the like.
• The tau plays all the women’s roles. At least six principal characters are
played by the female impersonator who has taken over the role after women
were banned from the Chinese stage as they were looked down upon as
• The ching are usually assigned the roles of brave warriors, bandits, crafty
and evil ministers, upright judges, loyal statesmen, and at times god-like and
supernatural beings. Conventionally, the ching must have a broad face and
forehead suitable for the make-up patterns suggestive of his behavior.
• The chau is the clown or jester who is not necessarily a fool and may also play a
serious or evil character. He is easily recognized by the white patch around his
eyes and nose, his use of colloquial language and adeptness in combining
mimicry and acrobatics.
b) Unlike Greek plays, classical Chinese plays do not follow the unities of time,
place, and action. The plot may be set in two or more places, the time element
sometimes taking years to develop or end, and action containing many other sub-plots.
c) Chinese drama conveys an ethical lesson in the guise of art in order to impress a
moral truth or a Confucian tenet. Dramas uphold virtue, condemn vice, praise
fidelity, and filial piety. Vice is represented on the stage not for its own sake but
as contrast to virtue.
d) There are two types of speeches – the dialogue, usually in prose, and the
monologue. While the dialogue carries forward the action of the play, the
monologue is the means for each character to introduce him/herself at the
beginning of the first scene of every act as well as to outline the plot.
e) Chinese plays are long – six or seven hours if performed completely. The
average length is about four acts with a prologue and an epilogue. The Chinese
play is a total theater. There is singing, recitation of verses, acrobatics, dancing,
and playing of traditional musical instruments.
f) Music is an integral part of the classical drama. It has recitatives, arias, and
musical accompaniment. Chinese music is based on movement and rhythm that
harmonized perfectly with the sentiments being conveyed by a character.
g) The poetic dialogue, hsieh tzu (wedge), is placed at the beginning or in between
acts and is an integral part of the play.
The stage is bare of props except for a table and a pair of chairs, which may be converted into
a battlefield, a court scene, a bedroom, or even a prison through vivid acting.
Property conventions are rich in symbolism: a table with a chair at the side, both placed at
the side of the stage, represents a hill or a high wall.
h) Dramatic conventions serve to identify the nature and function of each character.
Make-up identifies the characters and personalities. Costumes help reveal types;
different colors signify ranks and status.
i) Action reflects highly stylized movements. Hand movements may indicate
embarrassment or helplessness or anguish or anger.
5. Major Chinese Writers.
• Chuang Tzu (4th
century B.C.) was the most important early interpreter of the
philosophy of Taoism. Very little is known about his life except that he served as a
minor court official. In his stories, he appears as a quirky character who cares little
for either public approval or material possessions.
• Lieh Tzu (4th
century B.C.) was a Taoist teacher who had many philosophical
differences with his forebears Lao-Tzu and Chuang Tzu. He argued that a sequence
of causes predetermines everything that happens, including one’s choice of action.
• Liu An (172 – 122 B.C.) was not only a Taoist scholar but also the grandson of the
founder of the Han dynasty. His royal title was the Prince of Huai-
nan. Together with philosophers under his patronage, he produced a collection
of essays on metaphysics, cosmology, politics, and conduct.
• Ssu-ma Ch’ien (145 – 90 B.C.) was the greatest of China’s ‘Grand Historians’ who
dedicated himself to completing the first history of China, the Records of the
Historian. His work covers almost three thousand years of Chinese history in more
than half a million written characters etched onto bamboo tablets.
The T’ang Poets:
• Li Po (701 – 762) was Wang Wei’s contemporary and he spent a short time in
courts, but seems to have been too much of a romantic and too given to drink to
carry out responsibilities. He was a Taoist, drawing sustenance from nature, and
his poetry was often other-worldly and ecstatic. He had no great regard for his
poems himself. He is said to have made thousands of them into paper boats which
he sailed along streams.
• Tu Fu (712 –770) is the Confucian moralist, realist, and humanitarian. He was
public-spirited, and his poetry helped chronicle the history of the age: the
• Wang Wei (701? – 761?) was an 8th
century government official who spent the
later years of his life in the country, reading and discussing Buddhism with
scholars and monks. He is known for the pictorial quality of his poetry and for its
economy. His word-pictures parallel Chinese brush artistry in which a few
strokes are all suggestive of authority, the disasters of war, and official
• Po Chu-I (772 – 846) was born two years after Tu Fu died, at a time when China
was still in turmoil from foreign invasion and internal strife. He wrote many
poems speaking bitterly against the social and economic problems that were
• Li Ch’ing-chao (A.D. 1084 – 1151) is regarded as China’s greatest woman poet
and was also one of the most liberated women of her day. She was brought up in
court society and was trained in the arts and classical literature, quite an unusual
upbringing for a woman of the Sung dynasty. Many of her poems composed in
the tz’u form celebrate her happy marriage or express her loneliness when her
husband was away.
• Chou Shu-jen (1881 – 1936) has been called the ‘father of the modern Chinese
short story’ because of his introduction of Western techniques. He is also known
as Lu Hsun whose stories deal with themes of social concern, the problems of the
poor, women, and intellectuals.
1. Historical Background. Early Japan borrowed much from Chinese culture but
evolved its own character over time. Early Japan’s political structure was based on
clan, or family. Each clan developed a hierarchy of classes with aristocrats, warriors,
and priests at the top and peasants and workers at the bottom. During the 4th century
A.D. the Yamato grew to be the most powerful and imposed the Chinese imperial system
on Japan creating an emperor, an imperial bureaucracy, and a grand capital city.
a) The Heian Age was the period of peace and prosperity, of aesthetic
refinement and artificial manners. The emperor began to diminish in power but
continued to be a respected figure. Since the Japanese court had few official
responsibilities, they were able to turn their attention to art, music, and literature.
The Pillow Book by Sei Shōnagon, represents a unique form of the diary genre. It
contains vivid sketches of people and place, shy anecdotes and witticisms, snatches
of poetry, and 164 lists on court life during the Heian period. Primarily intended to
be a private journal, it was discovered and eventually printed. Shōnagon served as
a lady-in-waiting to the Empress Sadako in the late 10th century.
From Hateful Things
One is in a hurry to leave, but one’s visitor keeps chattering away. If it is someone
of no importance, one can get rid of him by saying, “You must tell me all about it
next time”; but, should it be the sort of visitor whose presence commands one’s
best behavior, the situation is hateful indeed.
b) The Feudal Era was dominated by the samurai class which included the
militaristic lords, the daimyo, and the band of warriors, the samurai, who adhered to
a strict code of conduct that emphasized bravery, loyalty, and honor. In 1192
Yoritomo became the shogun, or chief general, one of a series of shoguns who ruled
Japan for over 500 years.
c) The Tokugawa Shogunate in the late 1500s crushed the warring feudal lords and
controlled all of Japan from a new capital at Edo, now Tokyo. By 1630 and for two
centuries, Japan was a closed society: all foreigners were expelled, Japanese
Christians were persecuted, and foreign travel was forbidden under penalty of
death. The shogunate was ended in 1868 when Japan began to trade with the
Western powers. Under a more powerful emperor, Japan rapidly acquired the latest
technological knowledge, introduced universal education, and created an impressive
2. Religious Traditions. Two major faiths were essential elements in the cultural
foundations of Japanese society.
a) Shintoism, or ‘the way of the gods,’ is the ancient religion that reveres indwelling
divine spirits called kami, found in natural places and objects. For this reason
natural scenes, such as a waterfall, a gnarled tree, or a full moon, inspired reverence
in the Japanese people.
The Shinto legends have been accepted as historical fact although in postwar times
they were once again regarded as myths. These legends from the Records of
Ancient Matters, or Kojiki, A.D. 712, and the Chronicles of Japan, or Nihongi, A.D.
720 form the earliest writings of ancient Japan. Both collections have been
considerably influenced by Chinese thought.
b) Zen Buddhism emphasized the importance of meditation, concentration, and self-
discipline as the way to enlightenment. Zen rejects the notion that salvation is
attained outside of this life and this world. Instead, Zen disciples believe that one
can attain personal tranquility and insights into the true meaning of life through
rigorous physical and mental discipline.
3. Socio-political concepts. Japan has integrated Confucian ethics and Buddhist
morality which India implanted in China. The concepts of giri and on explain why the
average Japanese is patriotic, sometimes ultra-nationalistic, and law-abiding. Even
seppuku or ritual disembowelment exemplifies to what extent these two socio-political
concepts could be morally followed.
a) Giri connotes duty, justice, honor, face, decency, respectability, courtesy, charity,
humanity, love, gratitude, claim. Its sanctions are found in mores, customs,
folkways. For example, in feudal Japan ‘loss of face’ is saved by suicide or
vendetta, if not renouncing the world in the monastery.
b) On suggests a sense of obligation or indebtedness which propels a Japanese to act,
as it binds the person perpetually to other individuals to the group, to parents,
teachers, superiors, and the emperor.
4. Poetry is one of the oldest and most popular means of expression and communication
in the Japanese culture. It was an integral part of daily life in ancient Japanese society,
serving as a means through which anyone could chronicle experiences and express emotions.
a) The Manyoshu or ‘Book of Ten Thousand Leaves’ is an anthology by poets from a
wide range of social classes, including the peasantry, the clergy, and the ruling class.
b) There are different poems according to set forms or structures:
• choka are poems that consist of alternate lines of five and seven syllables with an
additional seven-syllable line at the end. There is no limit to the number of lines
which end with envoys, or pithy summations. These envoys consist of 5-7-5-7-7
syllables that elaborate on or summarize the theme or central idea of the main poem.
• tanka is the most prevalent verse form in traditional Japanese literature. It
consists of five lines of 5-7-5-7-7 syllables including at least one caesura, or
pause. Used as a means of communication in ancient Japanese society, the tanka
often tell a brief story or express a single thought or insight and the common
subjects are love and nature.
Every Single Thing
(by Priest Saigyo)
Every single thing
Changes and is changing
Always in this world.
Yet with the same light
The moon goes on shining.
How Helpless My Heart!
(by Ono Komachi)
How helpless my heart!
Were the stream to tempt,
My body, like a reed
Severed at the roots,
Would drift along, I think.
• renga is a chain of interlocking tanka. Each tanka within a renga was divided
into verses of 17 and 14 syllables composed by different poets as it was
fashionable for groups of poets to work together during the age of Japanese
• hokku was the opening verse of a renga which developed into a distinct literary
form known as the haiku. The haiku consists of 3 lines of 5-7-5 syllables,
characterized by precision, simplicity, and suggestiveness. Almost all haiku
include a kigo or seasonal words such as snow or cherry blossoms that indicates
the time of year being described.
Blossoms on the pear;
and a woman in the moonlight
reads a letter there…
If to the moon
one puts a handle – what
a splendid fan!
Even stones in streams
of mountain water compose
songs to wild cherries.
5. Prose appeared in the early part of the 8th
century focusing on Japanese history.
During the Heian Age, the members of the Imperial court, having few administrative
or political duties, kept lengthy diaries and experimented with writing fiction.
• The Tale of Genji by Lady Murasaki Shikibu, a work of tremendous length
and complexity, is considered to be the world’s first true novel. It traces the
life of a gifted and charming prince. Lady Murasaki was an extraordinary
woman far more educated than most upper-class men of her generation. She
was appointed to serve in the royal court of the emperor.
• The Tale of the Heike, written by an anonymous author during the 13th century,
was the most famous early Japanese novel. It presents a striking portrait of
war-torn Japan during the early stages of the age of feudalism.
• Essays in Idleness by Yoshida Kenko was written during the age of
feudalism. It is a loosely organized collection of insights, reflections, and
observations, written during the 14th
century. Kenko was born into a high-
ranking Shinto family and became a Buddhist priest.
Excerpt from Essays in Idleness:
In all things it is Beginning and End that are interesting. The love of men and
women – is it only when they meet face to face? To feel sorrow at an
unaccomplished meeting, to grieve over empty vows, to spend the long night
sleepless and alone, to yearn for distant skies, in a neglected house to think
fondly of the past – this is what love is.
• In the Grove by Ryunosuke Akutagawa is the author’s most famous story
made into the film Rashomon. The story asks these questions: What is the
truth? Who tells the truth? How is the truth falsified? Six narrators tell their
own testimonies about the death of a husband and the violation of his wife in
the woods. The narrators include a woodcutter, a monk, an old woman, the
mother-in-law of the slain man, the wife, and finally, the dead man whose story
is spoken through the mouth of a shamaness. Akutagawa’s ability to blend a
feudal setting with deep psychological insights gives this story an ageless quality.
An Excerpt: “The Story of the Murdered Man as Told Through a Medium”
After violating my wife, the robber, sitting there, began to speak comforting
words to her. Of course I couldn’t speak. My whole body was tied fast to the
root of a cedar. But meanwhile I winked at her many times, as much as to say,
“Don’t believe the robber.” I wanted to convey some such meaning to her.
But my wife, sitting dejectedly on the bamboo leaves, was looking hard at her
lap. To all appearances, she was listening to his words. I was agonized by jealousy.
6. Drama.
a) Nō plays emerged during the 14th
century as the earliest form of Japanese drama.
The plays are performed on an almost bare stage by a small but elaborately
costumed cast of actors wearing masks. The actors are accompanied by a chorus
and the plays are written either in verse or in highly poetic prose. The dramas
reflect many Shinto and Buddhist beliefs, along with a number of dominant
Japanese artistic preferences. The Nō performers’ subtle expressions of inner
strength, along with the beauty of the costumes, the eloquence of the dancing, the
mesmerizing quality of the singing, and the mystical, almost supernatural,
atmosphere of the performances, have enabled the Nō theater to retain its popularity.
Atsumori by Seami Motokiyo is drawn from an episode of The Tale of the
Heike, a medieval Japanese epic based on historical fact that tells the story of the
rise and fall of the Taira family, otherwise known as the Heike. The play takes
place by the sea of Ichi no tani. A priest named Rensei, who was once a warrior
with the Genji clan, has decided to return to the scene of the battle to pray for a
sixteen-year-old named Atsumori, whom he killed on the beach during the battle.
Rensei had taken pity on Atsumori and had almost refrained from killing him. He
realized though that if he did not kill the boy, his fellow warriors would. He
explained to Atsumori that he must kill him, and promised to pray for his soul.
On his return, he meets two peasants who are returning home from their fields and
Rensei makes an astonishing discovery about one of them.
An Excerpt from Atsumori
Chorus: [ATSUMORI rises from the ground and advances toward the Priest with
“There is my enemy,” he cries, and would strike,
But the other is grown gentle
And calling on Buddha’s name
Has obtained salvation for his foe;
So that they shall be reborn together
On one lotus seat.
“No, Rensei is not my enemy.
Pray for me again, oh pray for me again.”
b) Kabuki involves lively, melodramatic acting and is staged using elaborate and
colorful costumes and sets. It is performed with the accompaniment of an
orchestra and generally focuses on the lives of common people rather than the aristocracy.
c) Joruri (now called Bunraku) is staged using puppets and was a great influence
on the development of the Kabuki.
d) Kyogen is a farce traditionally performed between the Nō tragedies.
7. Novels and Short Stories.
• Snow Country by Kawabata tells of love denied by a Tokyo dilettante,
Shimamura, to Komako, a geisha who feels ‘used’ even as she wants to think and
feel that she is drawn sincerely, purely to a man of the world. Komako has befriended
Yoko, to whom Shimamura is equally and passionately drawn because of her
virginity and naivete, just as he is to Komako, who lost hers after her earlier affair with
him. In the end, Yoko dies in a fire in the cocoon-warehouse notwithstanding
Komako’s attempt to rescue her. Komako embraces the virgin Yoko in her arms
while Shimamura senses the Milky Way ‘flowing down inside him with a roar.’
Kawabata makes use of contrasting thematic symbols in the title: death and
purification amidst physical decay and corruption.
• The House of Sleeping Beauties by Kawabata tells of the escapades of a dirty
old man, Eguchi, to a resort near the sea where young women are given drugs
before they are made to sleep sky-clad. Decorum rules that these sleeping
beauties should not be touched, lest the customers be driven away by the
management. The book lays bare to the reader the deeper recesses of the
septuagenarian’s mind. Ironically, this old man who senses beauty and youth is
incapable of expressing it, much less possessing it. Thus, the themes of old age and
loneliness and coping become inseparable.
• The Makioka Sisters by Tanizaki is the story of four sisters whose chief concern
is finding a suitable husband for the third sister, Yukiko, a woman of traditional
beliefs who has rejected several suitors. Until Yukiko marries, Taeko, the
youngest, most independent, and most Westernized of the sisters, must remain
unmarried. More important than the plot, the novel tells of middle-class daily life
in prewar Osaka. It also delves into such topics as the intrusion of modernity and
its effect on the psyche of the contemporary Japanese, the place of kinship in the
daily life of the people, and the passage of the old order and the coming of the new.
• The Sea of Fertility by Mishima is the four-part epic including Spring Snow,
Runaway Horses, The Temple of Dawn, and The Decay of the Angel. The novels
are set in Japan from about 1912 to the 1960s. Each of them depicts a different
reincarnation of the same being: as a young aristocrat in 1912, as a political fanatic
in the 1930s, as a Thai princess before the end of WWII, and as an evil young
orphan in the 1960s. Taken together the novels are a clear indication of Mishima’s
increasing obsession with blood, death, and suicide, his interest in self-destructive
personalities, and his rejection of the sterility of modern life.
• The Setting Sun by Dazai Osamu is a tragic, vividly painted story of life in postwar
Japan. The narrator is Kazuko, a young woman born to gentility but now
impoverished. Though she wears Western clothes, her outlook is Japanese; her life
is static, and she recognizes that she is spiritually empty. In the course of the
novel, she survives the deaths of her aristocratic mother and her sensitive, drug-
addicted brother Naoji, an intellectual ravaged by his own and society’s spiritual
failures. She also spends a sad, sordid night with the writer Uehara, and she
conceives a child in the hope that it will be the first step in a moral revolution.
• In the Grove by Akutagawa is the author’s most famous story made into the film
Rashomon. The story asks these questions: What is the truth? Who tells the truth?
How is the truth falsified? Six narrators tell their own testimonies about the death
of a husband and the violation of his wife in the woods. The narrators include a
woodcutter, a monk, an old woman, the mother-in-law of the slain man, the wife,
and finally, the dead man whose story is spoken through the mouth of a
shamaness. Akutagawa’s ability to blend a feudal setting with deep psychological
insights gives this story an ageless quality.
• The Wild Geese by Ogai is a melodramatic novel set in Tokyo at the threshold of the 20th
century. The novel explores the blighted life of Otama, daughter of a cake
vendor. Because of extreme poverty, she becomes the mistress of a policeman,
and later on of a money-lender, Shazo. In her desire to rise from the pitfall of
shame and deprivation, she tries to befriend Okada, a medical student whom she
greets every day by the window as he passes by on his way to the campus. She is
disillusioned however, as Okada, in the end, prepares for further medical studies in
Germany. Ogai’s novel follows the tradition of the watakushi-shosetsu or the
confessional I-novel, where the storyteller is the main character.
• The Buddha Tree by Fumio alludes to the awakening of Buddha under the bo
tree when he gets enlightened after fasting 40 days and nights. Similarly, the hero
of the novel, Soshu, attains self-illumination after freeing himself from the way of
all flesh. The author was inspired by personal tragedies that befell his family, and
in this novel he transcends his personal agony through artistic achievement.
8. Major Writers.
• Seami Motokiyo had acting in his blood for his father Kanami, a priest, was one of
the finest performers of his day. At age 20 not long after his father’s death, he took
over his father’s acting school and began to write plays. Some say he became a Zen
priest late in life; others say he had two sons, both of them actors. According to
legend, he died alone at the age of 81 in a Buddhist temple near Kyoto.
• The Haiku Poets
- Matsuo Bashō (1644 – 1694) is regarded as the greatest haiku poet. He was
born into a samurai family and began writing poetry at an early age. After
becoming a Zen Buddhist, he moved into an isolated hut on the outskirts of Edo
(Tokyo) where he lived the life of a hermit, supporting himself by teaching and
judging poetry. Bashō means ‘banana plant,’ after a plant given to him as a gift, to which he
became deeply attached. Over time his hut became known as the Bashō Hut
until he assumed the name.
- Yosa Buson (1716 – 1783) is regarded as the second-greatest haiku poet. He
lived in Kyoto throughout most of his life and was one of the finest painters of
his time. Buson presents a romantic view of the Japanese landscape, vividly
capturing the wonder and mystery of nature.
- Kobayashi Issa (1763 –1827) is ranked with Bashō and Buson although his
talent was not widely recognized until after his death. Issa’s poems capture the
essence of daily life in Japan and convey his compassion for the less fortunate.
• Yasunari Kawabata (1899 – 1972) won the Nobel Prize for Literature in 1968.
The sense of loneliness and preoccupation with death that permeates much of his
mature writing possibly derives from the loneliness of his childhood having been
orphaned early. Three of his best novels are: Snow Country, Thousand Cranes,
and The Sound of the Mountain. He committed suicide shortly after the suicide of
his friend Mishima.
• Junichiro Tanizaki (1886 –1965) is a major novelist whose writing is
characterized by eroticism and ironic wit. His earliest stories were like those of
Edgar Allan Poe, but he later turned toward the exploration of more traditional
Japanese ideals of beauty. Among his works are Some Prefer Nettles, The
Makioka Sisters, Diary of a Mad Old Man.
• Yukio Mishima (1925 – 1970) is the pen name of Kimitake Hiraoka, a prolific
writer who is regarded by many as the most important Japanese novelist of the 20th
century. His highly acclaimed first novel, Confessions of a Mask, is a
partly autobiographical work that describes with stylistic brilliance a homosexual
who must mask his sexual orientation. Many of his novels have main characters
who, for physical or psychological reasons, are unable to find happiness. Deeply
attracted to the austere patriotism and martial spirit of Japan’s past, Mishima was
contemptuous of the materialistic Westernized society of Japan in the postwar era.
Mishima committed seppuku (ritual disembowelment).
• Dazai Osamu (1909 – 1948), like Mishima and Kawabata, committed
suicide, an act not unusual, indeed almost traditional, among Japanese intellectuals. It is
believed that Osamu suffered psychological conflicts arising from his inability to
reconcile his Japaneseness with his embrace of the Catholic
faith, if not from the demands of creativity. The Setting Sun is one of his works.
• Ryunosuke Akutagawa (1892 – 1927) is a prolific writer of stories, plays, and
poetry, noted for his stylistic virtuosity. He is one of the most widely translated
of all Japanese writers, and a number of his stories have been made into films.
Many of his short stories are Japanese tales retold in the light of modern
psychology in a highly individual style of feverish intensity that is well-suited to
their macabre themes. Among his works are Rashomon and Kappa.
• Oe Kenzaburo (1935 -) is a novelist whose rough prose style, at times nearly
violating the natural rhythms of the Japanese language, epitomizes the rebellion of
the post-WWII generation of which he writes. He was awarded the Nobel Prize for
Literature in 1994. Among his works are: Lavish are the Dead, The Catch, Our
Generation, A Personal Matter, The Silent Cry, and Awake, New Man!.
C. AFRICA
1. The Rise of Africa’s Great Civilization. Between 751 and 664 B.C. the kingdom
of Kush at the southern end of the Nile River gained strength and prominence
succeeding the New Kingdom of Egyptian civilization. Smaller civilizations around
the edges of the Sahara also existed, among them the Fasa of the northern Sudan,
whose deeds are recalled by the Soninke oral epic, The Dausi.
• Aksum (3rd
century A.D.), a rich kingdom in eastern Africa arose in what is now
Ethiopia. It served as the center of a trade route and developed its own writing
system. The Kingdom of Old Ghana (A.D. 300) was the first of the great civilizations in
western Africa, succeeded by the empires of Old Mali and Songhai. The
legendary city of Timbuktu was a center of trade and culture in both the Mali and
Songhai empires. New cultures sprang up throughout the South: Luba and
Malawi empires in central Africa, the two Congo kingdoms, the Swahili culture of
eastern Africa, the kingdom of Old Zimbabwe, and the Zulu nation near the
southern tip of the continent.
• Africa’s Golden Age (between A.D. 300 and A.D. 1600) marked the time when
sculpture, music, metalwork, textiles, and oral literature flourished.
• Foreign influences came in the 4th
century. The Roman Empire had proclaimed
Christianity as its state religion and taken control of the entire northern coast of
Africa including Egypt. Around 700 A.D. Islam, the religion of Mohammed,
was introduced into Africa as well as the Arabic writing system. Old Mali,
Somali and other eastern African nations were largely Muslim. Christianity and
colonialism came to sub-Saharan Africa towards the close of Africa’s Golden
Age. European powers carved the continent into colonies in the late 1800s. Social and
political chaos reigned as traditional African nations were either split apart by
European colonizers or joined with incompatible neighbors.
• Mid-1900s marked the independence and rebirth of traditional cultures written in
2. Literary Forms.
a) Orature is the tradition of African oral literature which includes praise poems,
love poems, tales, ritual dramas, and moral instructions in the form of proverbs
and fables. It also includes epics and poems and narratives.
b) Griots, the keepers of oral literature in West Africa, may be professional
storytellers, singers, or entertainers, and were skilled at creating and transmitting the
many forms of African oral literature. Bards, storytellers, town criers, and oral
historians also preserved and continued the oral tradition.
c) Features of African oral literature:
• repetition and parallel structure – served foremost as memory aids for
griots and other storytellers. Repetition also creates rhythm, builds suspense,
and adds emphasis to parts of the poem or narrative. Repeated lines or refrains
often mark places where an audience can join in the oral performance.
• repeat-and-vary technique – in which lines or phrases are repeated with
slight variations, sometimes by changing a single word.
• tonal assonance – the tones in which syllables are spoken determine the
meanings of words, as in many Asian languages.
• call-and-response format - includes spirited audience participation in
which the leader calls out a line or phrase and the audience responds with an
answering line or phrase becoming performers themselves.
d) Lyric Poems do not tell a story but instead, like songs, create a vivid, expressive
testament to a speaker’s thoughts or emotional state. Love lyrics were an
influence of the New Kingdom and were written to be sung with the
accompaniment of a harp or a set of reed pipes.
The Sorrow of Kodio by Baule Tribe
We were three women
And myself, Kodio Ango.
We were on our way to work in the city.
And I lost my wife Nanama on the way.
I alone have lost my wife
To me alone, such misery has happened,
To me alone, Kodio, the most handsome of the three men,
Such misery has happened.
In vain I call for my wife,
She died on the way like a chicken running.
How shall I tell her mother?
How shall I tell it to her, I Kodio,
When it is so hard to hold back my own pain.
e) Hymns of Praise Songs were offered to the sun god Aten. The Great Hymn to
Aten is the longest of several New Kingdom hymns. This hymn was found on the
wall of a tomb built for a royal scribe named Ay and his wife. It was intended to
assure their safety in the afterlife.
f) African Proverbs are much more than quaint old sayings. Instead, they represent
a poetic form that uses few words but achieves great depth of meaning and they
function as the essence of people’s values and knowledge.
• They are used to settle legal disputes, resolve ethical problems, and teach
children the philosophy of their people.
• Often contain puns, rhymes, and clever allusions.
• Mark the power and eloquence of speakers in the community who know and use
them. Their ability to apply the proverbs to appropriate situations
demonstrates an understanding of social and political realities.
Kenya. Gutire muthenya ukiaga ta ungi. (No day dawns like another.)
South Africa. Akundlovu yasindwa umboko wayo.
(No elephant ever found its trunk too heavy.)
Kikuyu. Mbaara ti ucuru. (War is not porridge.)
g) Dilemma or Enigma Tale is an important kind of African moral tale intended for
listeners to discuss and debate. It is an open-ended story that concludes with a
question that asks the audience to choose from among several alternatives. By
encouraging animated discussion, a dilemma tale invites its audience to think
about right and wrong behavior and how to best live within society.
h) Ashanti Tale comes from Ashanti, whose traditional homeland is the dense and
hilly forest beyond the city of Kumasi in south-central Ghana which was
colonized by the British in the mid-19th
century. But the Ashanti, protected in
their geographical stronghold, were able to maintain their ancient culture. The
tale exemplifies common occupations of the Ashanti such as farming, fishing, and
weaving. It combines such realistic elements with fantasy elements like talking
objects and animals.
i) Folk Tales have been handed down in the oral tradition from ancient times. The
stories represent a wide and colorful variety that embodies the African people’s
most cherished religious and social beliefs. The tales are used to entertain, to
teach, and to explain. Nature and the close bond that Africans share with the
natural world are emphasized. The mystical importance of the forest, sometimes
called the bush, is often featured.
j) Origin stories include creation stories and stories explaining the origin of death.
k) Trickster Tale is an enormously popular type. The best known African trickster
figure is Anansi the Spider, both hero and villain, whose tales spread from their West African origin
to the Caribbean and other parts of the Western Hemisphere as a result of the slave trade.
l) Moral Stories attempt to teach a lesson.
m) Humorous Stories are primarily intended to amuse.
The chief listened to them patiently, but he couldn’t refrain from scowling.
“Now, this is really a wild story,” he said at last. “You’d better all go
back to your work before I punish you for disturbing the peace.”
So the men went away, and the chief shook his head and mumbled to
himself, “Nonsense like that upsets the community.”
“Fantastic, isn’t it?” his stool said, “Imagine, a talking yam!”
n) Epics of vanished heroes – partly human, partly superhuman, who embody the
highest values of a society – carry with them a culture’s history, values, and
traditions. The African literary tradition boasts several oral epics.
• The Dausi from the Soninke
• Monzon and the King of Kore from the Bambara of western Africa
• The epic of Askia the Great, medieval ruler of the Songhai empire in western Africa
• The epic of the Zulu Empire of southern Africa
• Sundiata from the Mandingo peoples of West Africa is the best-preserved and
the best-known African epic which is a blend of fact and legend. Sundiata
Keita, the story’s hero really existed as a powerful leader who in 1235 defeated
the Sosso nation of western Africa and reestablished the Mandingo Empire of
Old Mali. Supernatural powers are attributed to Sundiata and he is involved in
a mighty conflict between good and evil. It was first recorded in Guinea in the
1950s and was told by the griot Djeli Mamoudou Kouyate.
3. Negritude, which means literally ‘blackness,’ is the literary movement of the
1930s – 1950s that began among French-speaking African and Caribbean writers
living in Paris as a protest against French colonial rule and the policy of assimilation.
Its leading figure was Leopold Sedar Senghor (1st
president of the Republic of Senegal
in 1960), who along with Aime Cesaire from Martinique and Leon Damas from French
Guiana, began to examine Western values critically and to reassess African culture.
The movement largely faded in the early 1960s when its political and cultural
objectives had been achieved in most African countries. The basic ideas behind Negritude are the following:
• Africans must look to their own cultural heritage to determine the values and
traditions that are most useful in the modern world.
• Committed writers should use African subject matter and poetic traditions and
should excite a desire for political freedom.
• Negritude itself encompasses the whole of African cultural, economic, social, and political values.
• The value and dignity of African traditions and peoples must be asserted.
4. African Poetry is more eloquent in its expression of Negritude since it is the
poets who first articulated their thoughts and feelings about the inhumanity suffered by
their own people.
• Paris in the Snow swings between the assimilation of French and European culture and
negritude, intensified by the poet’s Catholic piety.
• Totem by Leopold Senghor shows the eternal linkage of the living with the dead.
• Letters to Martha by Dennis Brutus is the poet’s most famous collection that
speaks of the humiliation, the despondency, the indignity of prison life.
• Train Journey by Dennis Brutus reflects the poet’s social commitment, as he
reacts to the poverty around him amidst material progress especially and acutely felt
by the innocent victims, the children.
• Telephone Conversation by Wole Soyinka is the poet’s most anthologized poem
that reflects Negritude. It is a satirical exchange between a Black man seeking the
landlady’s permission to take lodging in her house and the landlady herself. The poetic
dialogue reveals the landlady’s deep-rooted prejudice against colored people as
the caller plays upon it.
Excerpt from Telephone Conversation
The price seemed reasonable, location
indifferent. The landlady swore she lived
off premises. Nothing remained
but self-confession. “Madam,” I warned,
“I hate a wasted journey – I am African.”
Silence. Silenced transmission of
pressurized good-breeding. Voice, when it came,
lipstick coated, long gold-rolled
cigarette-holder pipped. Caught I was foully.
“HOW DARK?” … I had not misheard … “ARE YOU LIGHT
OR VERY DARK?” Button B. Button A. Stench
of rancid breath of public hide-and-speak.
• Africa by David Diop is a poem that achieves its impact by a series of climactic
sentences and rhetorical questions.
Africa, my Africa
Africa of proud warriors on ancestral savannahs
Africa that my grandmother sings
On the bank of her distant river
I have never known you
But my face is full of your blood
Your beautiful black blood which waters
the wide fields
The blood of your sweat
The sweat of your work
The work of your slavery
The slavery of your children
Africa tell me Africa
Is this really you this back which is bent
And breaks under the load of insult
This back trembling with red weals
Which says yes to the whip on the hot
roads of noon
Then gravely a voice replies to me
Impetuous son that tree robust and
That tree over there
Splendidly alone amidst white and faded
That is Africa your Africa which grows
Grows patiently obstinately
And whose fruit little by little learn
The bitter taste of liberty.
• Song of Lawino by Okot P’Bitek is a sequence of poems about the clash between
African and Western values and is regarded as the first important poem in English
to emerge from Eastern Africa. Lawino’s song is a plea for the Ugandans to look
back to traditional village life and recapture African values.
• The Houseboy by Ferdinand Oyono points out the disillusionment of Toundi, a
boy who leaves his parents’ maltreatment to enlist his services as an acolyte to a
foreign missionary. After the priest’s death, he becomes a helper of a white
plantation owner, discovers the liaison of his master’s wife, and gets murdered later
in the woods as they catch up with him. Toundi symbolizes the disenchantment, the
coming of age, and utter despondency of the Cameroonians over the corruption
and immorality of the whites. The novel is developed in the form of a récit, the
French style of a diary-like confessional work.
• Things Fall Apart by Chinua Achebe paints a vivid picture of Africa before the
colonization by the British. The title is an epigraph from Yeats’ The Second
Coming: ‘things fall apart/ the center cannot hold/ mere anarchy is loosed upon the
world.’ The novel laments over the disintegration of Nigerian society, represented
in the story by Okonkwo, once a respected chieftain who loses his leadership and
falls from grace after the coming of the whites. Cultural values are woven around
the plot to mark its authenticity: polygamy since the character is Muslim; tribal law
is held supreme by the egwugwu, respected elders in the community; a man’s social
status is determined by the people’s esteem and by possession of fields of yams and
physical prowess; community life is shown in drinking sprees, funeral wakes, and
• No Longer at Ease by Chinua Achebe is a sequel to Things Fall Apart and the
title of which alludes to Eliot’s The Journey of the Magi: ‘We returned to our
places, these kingdoms,/ But no longer at ease here, in the old dispensation.’ The
returning hero fails to cope with disgrace and social pressure. Okonkwo’s son has
to live up to the expectations of the Umuofians, after winning a scholarship in
London, where he reads literature, not law as is expected of him, he has to dress up,
he must have a car, he has to maintain his social standing, and he should not marry
an osu, an outcast. In the end, the tragic hero succumbs to temptation; he, too
receives bribes, and therefore is ‘no longer at ease.’
• The Poor Christ of Bomba by Mongo Beti begins in medias res and exposes the
inhumanity of colonialism. The novel tells of Fr. Drumont’s disillusionment after
the discovery of the degradation of the native women, betrothed, but forced to work
like slaves in the sixa. The government steps into the picture as syphilis spreads out
in the priest’s compound. It turns out that the native whose weakness is wine,
women, and song has been made overseer of the sixa when the Belgian priest goes
out to attend to his other mission work. Developed through récit or diary entries,
the novel is a satire on the failure of religion to integrate into the national psychology
without first understanding the natives’ culture.
• The River Between by James Ngugi shows the clash of traditional values and
contemporary ethics and mores. The Honia River is symbolically taken as a
metaphor of tribal and Christian unity – the Makuyu tribe conducts Christian rites
while the Kamenos hold circumcision rituals. Muthoni, the heroine, although a
new-born Christian, desires the pagan ritual. She dies in the end but Waiyaki, the
teacher, does not teach vengeance against Joshua, the leader of the Kamenos, but
unity with them. Ngugi poses co-existence of religion with people’s lifestyle at the
same time stressing the influence of education to enlighten people about their socio-
• Heirs to the Past by Driss Chraibi is an allegorical, parable-like novel. After 16
years of absence, the anti-hero Driss Ferdi returns to Morocco for his father’s
funeral. The Seigneur leaves his legacy via a tape recorder in which he tells the
family members his last will and testament. Each chapter in the novel reveals his
relationship with them, and at the same time lays bare the psychology of these
people. His older brother Jaad was ‘born once and had died several times’
because of his childishness and irresponsibility. His idiotic brother, Nagib, has
become a total burden to the family. His mother feels betrayed, after doing her roles
as wife and mother for 30 years, as she yearns for her freedom. Driss flies back to
Europe completely alienated from his people, religion, and civilization.
• A Few Days and Few Nights by Mbella Sonne Dipoko deals with racial prejudice.
In the novel originally written in French, a Cameroonian scholar studying in France
is torn between the love of a Swedish girl and a Parisienne whose father owns a
business establishment in Africa. The father rules out the possibility of marriage.
Therese, his daughter, commits suicide, and Doumbe, the Cameroonian, thinks only
of the future of Bibi, the Swedish girl who is expecting his child. Doumbe’s remark
that the African is like a turtle which carries its home wherever it goes implies the
racial pride and love for the native grounds.
• The Interpreters by Wole Soyinka is about a group of young intellectuals who
function as artists in their talks with one another as they try to place themselves in
the context of the world about them.
6. Major Writers.
• Leopold Sedar Senghor (1906) is a poet and statesman who was cofounder of the
Negritude movement in African art and literature. He went to Paris on a scholarship
and later taught in the French school system. During these years Senghor
discovered the unmistakable imprint of African art on modern painting, sculpture,
and music, which confirmed his belief in Africa’s contribution to modern culture.
Drafted during WWII, he was captured and spent two years in a Nazi concentration
camp, where he wrote some of his finest poems. He became president of Senegal in
1960. His works include: Songs of Shadow, Black Offerings, Major Elegies,
Poetical Work. He became Negritude’s foremost spokesman and edited an
anthology of French-language poetry by black Africans that became a seminal text
of the Negritude movement.
• Okot P’Bitek (1930 – 1982) was born in Uganda during the period of British domination, and
his life embodied a contrast of cultures. He attended English-speaking schools but
never lost touch with traditional African values and used his wide array of talents to
pursue his interests in both African and Western cultures. Among his works are:
Song of Lawino, Song of Ocol, African Religions and Western Scholarship,
Religion of the Central Luo, Horn of My Love.
• Wole Soyinka (1934) is a Nigerian playwright, poet, novelist, and critic who was
the first black African to be awarded the Nobel Prize for Literature in 1986. He
wrote of modern West Africa in a satirical style and with a tragic sense of the
obstacles to human progress. He taught literature and drama and headed theater
groups at various Nigerian universities. Among his works are: plays – A Dance of
the Forests, The Lion and the Jewel, The Trials of Brother Jero; novels – The
Interpreters, Season of Anomy; poems – Idanre and Other Poems, Poems from
Prison, A Shuttle in the Crypt, Mandela’s Earth and Other Poems.
• Chinua Achebe (1930) is a prominent Igbo novelist acclaimed for his
unsentimental depictions of the social and psychological disorientation
accompanying the imposition of Western customs and values upon traditional
African society. His particular concern was with emergent Africa at its moments of
crisis. His works include, Things Fall Apart, Arrow of God, No Longer at Ease,
A Man of the People, and Anthills of the Savannah.
• Nadine Gordimer (1923) is a South African novelist and short story writer whose
major theme was exile and alienation. She received the Nobel Prize for Literature
in 1991. Gordimer was writing by age 9 and published her first story in a magazine
at 15. Her works exhibit a clear, controlled, and unsentimental technique that
became her hallmark. She examines how public events affect individual lives, how
the dreams of one’s youth are corrupted, and how innocence is lost. Among her
works are: The Soft Voice of the Serpent, Burger’s Daughter, July’s People, A
Sport of Nature, My Son’s Story.
• Bessie Head (1937 –1986) described the contradictions and shortcomings of pre-
and postcolonial African society in morally didactic novels and stories. She
suffered rejection and alienation from an early age being born of an illegal union
between her white mother and black father. Among her works are: When Rain
Clouds Gather, A Question of Power, The Collector of Treasures, Serowe.
• Barbara Kimenye (1940) wrote twelve children’s books known as the
Moses series, which are now standard reading fare for African school children.
She also worked for many years for His Highness the Kabaka of Uganda, in the
Ministry of Education and later served as the Kabaka’s librarian. She was a journalist
for The Uganda Nation and later a columnist for a Nairobi newspaper. Among her
works are: Kalasanda Revisited, The Smugglers, The Money Game.
• Ousmane Sembene (1923) is a writer and filmmaker from Senegal. His works
reveal an intense commitment to political and social change. In the words of one of
his characters: “You will never be a good writer so long as you don’t defend a
cause.” Sembene tells his stories from out of Africa’s past and relates their
relevance and meaning for contemporary society. His works include O My
Country, My Beautiful People, God’s Bits of Wood, and The Storm.
Comets are among the oldest objects in the solar system. Composed of rock and ice, they generally go around the Sun in narrow elliptical orbits which may stretch well beyond the orbit of dwarf-planet Pluto.
Most of the time comets are too small to be seen, but that changes as they approach the Sun when they can become quite spectacular. Some comets appear on predictable schedules (like Halley’s comet) while others show up for the first time. When comets appear, the Observatory provides viewing guidance and shares spectacular imagery. Observatory public telescopes always highlight comets when they are in view.
Comet PANSTARRS C/2011 L4, imaged from the San Gabriel Mountains (near Mt. Wilson) between 4:30 a.m. and 5:00 a.m. on April 19, 2013. The picture shows the comet’s broad, fanlike dust tail spreading for several million miles away from the sun.
Multiple guided exposures were combined, resulting in an exposure of 4 minutes, 30 seconds. A Canon 20Da was used at ISO 1200 through a 5.5-inch diameter f/3.5 Celestron Comet-Catcher Schmidt-Newtonian telescope. (Image by Anthony Cook, Griffith Observatory).
- Comet PanSTARRS C/2013 X1 was visible during evening hours in July, 2016.
- Comet 252P/LINEAR was visible during early morning hours in April, 2016.
- Comet Lovejoy C/2014 Q2 was visible during evening hours in January, 2015.
- Comet Lovejoy C/2013 R1 was visible before dawn in December, 2013.
- Comet ISON skimmed the sun’s surface in late November, 2013.
- Comet Lemmon C/2012 F6 was visible in May, 2013.
- Comet PANSTARRS was visible in March, 2013.
For information on comets that may presently be visible to amateur astronomers, visit Weekly Information on Bright Comets.
Past Comet Observations
Tracking Comet NEOWISE, July-August 2020
In July 2020, the Observatory provided observing guidance for the arrival of Comet Neowise, which definitely lived up to its billing.
Tracking Comet Atlas, April 2020
In April 2020, Griffith Observatory provided observing guidance for Comet ATLAS.
Comet Lovejoy C/2014 Q2, January 2015
From January 6 - 24, 2015, Griffith Observatory offered visitors a telescopic view of Comet Lovejoy, C/2014 Q2. The comet orbits the sun about every 14,000 years. It was closest to Earth – 44 million miles away – on January 7, 2015.
Comet ISON, Autumn 2013
Comet ISON (C/2012 S1) was discovered on 21 September 2012 by Vitali Nevski and Artyom Novichonok while collecting data for the International Scientific Optical Network (ISON).
Comet PANSTARRS, March 2013
Comet PANSTARRS was observed from Griffith Observatory through telescopes and binoculars through most of March 2013.
Catch That Comet! Chasing Comet Hartley!
Join speakers Malcolm Hartley and Tim Larsen for an evening of discussion about the excitement of comet hunting, the mysteries of the solar system's oldest denizens, and the upcoming flyby of Comet Hartley.
Today, astronomers are able to study objects in our Universe that are over thirteen billion light-years from Earth. In fact, the farthest object studied is a galaxy known as GN-z11, which exists at a distance of 13.39 billion light-years from our Solar System.
But since we live in the relativistic universe, where time and space are similar expressions of the same reality, looking deep into space means also looking deep into the past. Ergo, looking at an object that is over 13 billion light-years away means seeing it as it appeared over 13 billion years ago.
This allows astronomers to see back to some of the earliest times in the Universe, which is estimated to be 13.8 billion years old. And in the future, next-generation instruments will allow them to see even farther, to when the first stars and galaxies formed - a time that is commonly referred to as "Cosmic Dawn."
Much of the credit for this progress goes to space telescopes, which have been studying the deep Universe from orbit for decades. The most well-known of these is Hubble, which has set the precedent for space-based observatories.
Since it was launched in 1990, the vital data Hubble has collected has led to many scientific breakthroughs. Today, it is still in service and will mark its 30th anniversary on May 20th, 2020. However, it's important to note that Hubble was by no means the first space telescope.
Decades prior to its historic launch, NASA, Roscosmos, and other space agencies were sending observatories to space to conduct vital research. And in the near future, a number of cutting-edge telescopes will be sent to space to build on the foundation established by Hubble and others.
The case for space telescopes
The idea of placing an observatory in space can be traced back to the 19th century and the German astronomers Wilhelm Beer and Johann Heinrich Mädler. In 1837, they discussed the advantages of building an observatory on the Moon, where Earth's atmosphere would not be a source of interference.
However, it was not until the 20th century that a detailed proposal was first made. This happened in 1946 when American theoretical physicist Lyman Spitzer proposed sending a large telescope to space. Here too, Spitzer emphasized how a space telescope would not be hindered by Earth's atmosphere.
Essentially, ground-based observatories are limited by the filtering and distortion that our atmosphere imposes on electromagnetic radiation. This is what causes stars to "twinkle" and celestial objects like the Moon and the planets of the Solar System to glow and appear larger than they are.
Another major impediment is "light pollution", where light from urban sources can make it harder to detect light coming from space. Ordinarily, ground-based telescopes overcome this by being built in high-altitude, remote regions where light pollution is minimal and the atmosphere is thinner.
Adaptive optics is another commonly used method, in which deformable mirrors correct for atmospheric distortion. Space telescopes get around all of this by being positioned outside of Earth's atmosphere, where neither light pollution nor distortion is an issue.
Space-based observatories are even more important when it comes to frequency ranges beyond the visible wavelengths. Infrared and ultraviolet radiation are largely blocked by Earth's atmosphere, whereas X-ray and Gamma-ray astronomy are virtually impossible on Earth.
Throughout the 1960s and 1970s, Spitzer lobbied US Congress for such a system to be built. While his vision would not come to full fruition until the 1990s (with the Hubble Space Telescope), many space observatories would be sent to space in the meantime.
During the late 1950s, the race to conquer space between the Soviet Union and the United States began. These efforts began in earnest with the deployment of the first satellites and then became largely focused on sending the first astronauts into space.
However, efforts were also made to send observatories into space for the first time. Here, "space telescopes" would be able to conduct astronomical observations that were free of atmospheric interference, which was especially important where high-energy physics was concerned.
As always, these efforts were tied to military advancements during the Cold War. Whereas the development of Intercontinental Ballistic Missiles (ICBMs) led to the creation of space launch vehicles, the development of spy satellites led to advances in space telescopes.
In all cases, the Soviets took an early lead. After sending the first artificial object (Sputnik 1) and the first man (Yuri Gagarin and the Vostok 1 mission) into orbit in 1957 and 1961, they also sent the first space telescopes to space between 1965 and 1968.
These were launched as part of the Soviet Proton program, which sent four gamma-ray telescopes to space (Proton-1 through -4). While each satellite was short-lived compared to modern space telescopes, they did conduct vital research of the high-energy spectrum and cosmic rays.
NASA followed suit with the launch of the four Orbiting Astronomical Observatory (OAO) satellites between 1968 and 1972. These provided the first high-quality observations of celestial objects in ultraviolet light.
In 1972, the Apollo 16 astronauts also left behind the Far Ultraviolet Camera/Spectrograph (UVC) experiment on the Moon. This telescope and camera took several images and obtained spectra of astronomical objects in the far-UV spectrum.
The post-Apollo era
The 1970s and 1980s proved to be a productive time for space-based observatories. With the Apollo Era finished, the focus on human spaceflight began to shift to other avenues - such as space research. More nations began to join in as well, including India, China, and various European space agencies.
Between 1970 and 1975, NASA also launched three telescopes as part of their Small Astronomy Satellite (SAS) program, which conducted X-ray, gamma-ray, UV, and other high-energy observations. The Soviets also sent three Orion space telescopes to space to conduct ultraviolet observations of stars.
The ESA and European space agencies also launched their first space telescopes by the 1970s. The first was the joint British-NASA telescope named Ariel 5, which launched in 1974 to observe the sky in the X-ray band. The same year, the Astronomical Netherlands Satellite (ANS) was launched to conduct UV and X-ray astronomy.
In 1975, India sent its first satellite to space - Aryabhata - to study the Universe in the X-ray spectrum. In that same year, the ESA sent the COS-B mission to space to study gamma-ray sources. Japan also sent its first observatory to space in 1979, known as the Hakucho X-ray satellite.
Between 1977 and 1979, NASA also deployed a series of X-ray, gamma-ray, and cosmic-ray telescopes as part of the High Energy Astronomy Observatory Program (HEAO). In 1978, NASA, the UK Science Research Council (SERC) and the ESA collaborated to launch the International Ultraviolet Explorer (IUE).
Before the 1980s were out, the ESA, Japan, and the Soviets would contribute several more missions, like the European X-ray Observatory Satellite (EXOSAT), the Hinotori and Tenma X-ray satellites, and the Astron ultraviolet telescope.
NASA also deployed the Infrared Astronomical Satellite (IRAS) in 1983, which became the first space telescope to perform a survey of the entire night sky at infrared wavelengths.
Rounding out the decade, the ESA and NASA sent the Hipparcos and Cosmic Background Explorer (COBE) missions to space in 1989. Hipparcos was the first space experiment dedicated to measuring the proper motions, velocities, and positions of stars, a process known as astrometry.
Meanwhile, COBE provided the first accurate measurements of the Cosmic Microwave Background (CMB) - the diffuse background radiation permeating the observable Universe. These measurements provided some of the most compelling evidence for the Big Bang theory.
In 1989, a collaboration between the Soviets, France, Denmark, and Bulgaria led to the deployment of the International Astrophysical Observatory (aka. GRANAT). The mission spent the next nine years observing the Universe from the X-ray to the gamma-ray parts of the spectrum.
Hubble (HST) goes to space
After many decades, Spitzer finally saw his dream of a dedicated space observatory come true with the Hubble Space Telescope (HST). This observatory was developed by NASA and the ESA and launched on April 24th, 1990, aboard the Space Shuttle Discovery (STS-31), commencing operations by May 20th.
This telescope takes its name from the famed American astronomer Edwin Hubble (1889 - 1953), who is considered by many to be one of the most important astronomers in history.
In addition to discovering that there are galaxies beyond the Milky Way, he also offered definitive proof that the Universe is in a state of expansion. In his honor, the relation between a galaxy's distance and the speed at which it recedes is known as the Hubble-Lemaître Law, and the rate at which the Universe is expanding is known as the Hubble Constant.
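As a simple illustration of that relation, recession velocity is just the Hubble Constant times distance. The sketch below is a minimal example only, assuming a Hubble Constant of about 70 km/s per megaparsec; published measurements range roughly from 67 to 74, and the exact figure is still debated.

```python
# Minimal sketch of the Hubble-Lemaitre law: v = H0 * d.
# H0 is an assumed value (~70 km/s per megaparsec) used for illustration only.

H0 = 70.0  # km/s/Mpc (assumed)

def recession_velocity(distance_mpc: float, h0: float = H0) -> float:
    """Recession velocity in km/s for a galaxy at the given distance in megaparsecs."""
    return h0 * distance_mpc

if __name__ == "__main__":
    for d in (10, 100, 1000):  # distances in Mpc
        print(f"{d:>5} Mpc -> {recession_velocity(d):>8,.0f} km/s")
```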
Hubble is equipped with a primary mirror that measures 2.4-meters (7.8-feet) in diameter and a secondary mirror of 30.5 cm (12 inches). Both mirrors are made from a special type of glass that is coated with aluminum and a compound that reflects ultraviolet light.
With its suite of five scientific instruments, Hubble is able to observe the Universe in the ultraviolet, visible, and near-infrared wavelengths. These instruments include the following:
Wide Field Planetary Camera: a high-resolution imaging device primarily intended for optical observations. Its most recent iteration - the Wide Field Camera 3 (WFC3) - is capable of making observations in the ultraviolet, visible and infrared wavelengths. This camera has captured images of everything from bodies in the Solar System and nearby star systems to galaxies in the very distant universe.
Cosmic Origins Spectrograph (COS): an instrument that breaks ultraviolet radiation into components that can be studied in detail. It has been used to study the evolution of galaxies, active galactic nuclei (aka. quasars), the formation of planets, and the distribution of elements associated with life.
Advanced Camera for Surveys (ACS): a visible-light camera that combines a wide field of view with sharp image quality and high sensitivity. It has been responsible for many of Hubble’s most impressive images of deep space, has located massive extrasolar planets, helped map the distribution of dark matter, and detected the most distant objects in the Universe.
Space Telescope Imaging Spectrograph (STIS): a camera combined with a spectrograph that is sensitive to a wide range of wavelengths (from optical and UV to the near-infrared). The STIS is used to study black holes, monster stars, the intergalactic medium, and the atmospheres of worlds around other stars.
Near-Infrared Camera and Multi-Object Spectrometer (NICMOS): a spectrometer that is sensitive to infrared light, which revealed details about distant galaxies, stars, and planetary systems that are otherwise obscured by visible light by interstellar dust. This instrument ceased operations in 2008.
The "Great Observatories" and more!
Between 1990 and 2003, NASA sent three more telescopes to space that (together with Hubble) became known as the Great Observatories. These included the Compton Gamma Ray Observatory (1991), the Chandra X-ray Observatory (1999), and the Spitzer Space Telescope (2003).
In 1999, the ESA sent the X-ray Multi-Mirror (XMM-Newton) observatory to space, named in honor of Sir Isaac Newton. In 2001, NASA sent the Wilkinson Microwave Anisotropy Probe (WMAP) to space, which succeeded COBE by making more accurate measurements of the CMB.
In 2004, NASA launched the Swift Gamma Ray Burst Explorer (aka. the Neil Gehrels Swift Observatory). This was followed in 2006 by the CNES-led Convection, Rotation and planetary Transits (COROT) mission to study exoplanets.
2009 was a bumper year for space telescopes. In this one year, the Herschel Space Observatory, the Wide-field Infrared Survey Explorer (WISE), the Planck observatory, and the Kepler Space Telescope were all launched. Whereas Herschel and WISE were dedicated to infrared astronomy, Planck picked up where WMAP left off by studying the CMB.
The purpose of Kepler was to advance the study of extrasolar planets (i.e. planets that orbit stars beyond the Solar System). Through a method known as transit photometry, Kepler spotted planets as they passed in front of their stars (aka. transited), resulting in an observable dip in brightness.
The extent of these dips and the period with which they occur allows astronomers to determine a planet's size and orbital period. Thanks to Kepler, the number of known exoplanets has grown exponentially.
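As a rough illustration (not Kepler's actual data pipeline), the fractional dip in brightness during a transit is approximately the square of the planet-to-star radius ratio, so a measured depth plus a known stellar radius gives an estimate of the planet's size. The depth and stellar radius in the sketch below are assumed values for an Earth-like case.

```python
import math

# Toy transit-photometry estimate: depth ~ (R_planet / R_star)^2,
# so R_planet ~ R_star * sqrt(depth). Values below are illustrative only.

SUN_RADIUS_KM = 696_000  # approximate solar radius

def planet_radius_from_depth(depth: float, star_radius_km: float = SUN_RADIUS_KM) -> float:
    """Estimate the planet radius (km) from the transit depth and the stellar radius (km)."""
    return star_radius_km * math.sqrt(depth)

# An Earth-sized planet crossing a Sun-like star dims it by roughly 0.0084%.
depth = 0.000084
print(f"Estimated planet radius: {planet_radius_from_depth(depth):,.0f} km")  # ~6,400 km
```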
Today, there have been over 4000 confirmed discoveries (and 4900 awaiting confirmation), of which Kepler is responsible for discovering almost 2800 (with another 2420 awaiting confirmation).
In 2013, the ESA launched the Gaia mission, an astrometry observatory and the successor to the Hipparcos mission. This mission has been gathering data on over 1 billion objects (stars, planets, comets, asteroids, and galaxies) to create the largest and most precise 3D space catalog ever made.
In 2015, the ESA also launched the Laser Interferometer Space Antenna Pathfinder (LISA Pathfinder), a technology demonstrator for the future space-based measurement of gravitational waves. And in 2018, NASA sent the Transiting Exoplanet Survey Satellite (TESS) - Kepler's successor - to space to search for more exoplanets.
Future space telescopes
In the coming decades, the space agencies of the world plan to launch even more sophisticated space telescopes with even higher-resolution. These instruments will allow astronomers to gaze back to the earliest periods of the Universe, study extrasolar planets in detail, and observe the role Dark Matter and Dark Energy played in the evolution of our Universe.
Foremost among these is the James Webb Space Telescope (JWST), an infrared telescope built by NASA with support provided by the ESA and the Canadian Space Agency (CSA). This observatory, the spiritual successor to Hubble and Spitzer, will be the largest and most complex space telescope to date.
Unlike its predecessors, the JWST will observe the Universe from visible light to mid-infrared wavelengths, giving it the ability to observe objects that are too old and too distant for earlier instruments to observe.
This will allow astronomers to see far enough through space (and back in time) to observe the first light after the Big Bang and the formation of the first stars, galaxies, and solar systems.
There's also the ESA's Euclid mission, which is scheduled for launch in 2022. This space telescope will be optimized for cosmology and exploring the "dark Universe." To this end, it will map the distribution of up to two billion galaxies and associated Dark Matter across 10 billion light-years.
This data will be used to create a 3D map of the local Universe that will provide astronomers with vital information about the nature of Dark Matter and Dark Energy. It will also provide accurate measurements of both the accelerated expansion of the Universe and strength of gravity on cosmological scales.
By 2025, NASA will be launching the Wide-Field Infrared Survey Telescope (WFIRST), a next-generation infrared telescope dedicated to exoplanet detection and Dark Energy research. Its advanced optics and suite of instruments will reportedly give it several hundred times the efficiency of Hubble (in the near-IR wavelength).
Once deployed, WFIRST will observe the earliest periods of cosmic history, study Dark Energy, and measure the rate at which cosmic expansion is accelerating. It will also build on the foundation built by Kepler by conducting direct-imaging studies and characterization of exoplanets.
The launch of the ESA's PLAnetary Transits and Oscillations of stars (PLATO) mission will follow in 2026. Using a series of small, optically fast, wide-field telescopes, PLATO will search for exoplanets and characterize their atmospheres to determine if they could be habitable.
Looking even farther ahead, a number of interesting things are predicted for space-based astronomy. Already, there are proposals in place for next-next-generation telescopes that will offer even greater observational power and capabilities.
During the recent 2020 Decadal Survey for Astrophysics hosted by NASA's Science Mission Directorate (SMD), four flagship mission concepts were considered to build on the legacy established by Hubble, Kepler, Spitzer, and Chandra.
These four concepts include the Large Ultraviolet/Optical/Infrared Surveyor (LUVOIR), the Origins Space Telescope (OST), the Habitable Exoplanet Imager (HabEx) and the Lynx X-ray Surveyor.
NASA and other space agencies are also working towards the realization of in-space assembly (ISA) with space telescopes, where individual components will be sent to orbit and assembled there. This process will remove the need for especially heavy launch vehicles capable of sending massive observatories to space - a process that is very expensive and risky.
There's also the concept of observatories made up of swarms of smaller telescope mirrors ("swarm telescopes"). Much like large-scale arrays here on Earth - like Very Long Baseline Interferometry (VLBI) and the Event Horizon Telescope (EHT) - this concept comes down to combining the imaging power of multiple observatories.
Then there's the idea of sending up space telescopes that are capable of assembling themselves. This idea, as proposed by Prof. Dmitri Savransky of Cornell University, would involve a ~30 meter (100 ft) telescope made up of modules that would assemble themselves autonomously.
This latter concept was also proposed during the 2020 Decadal Survey and was selected for Phase I development as part of the 2018 NASA Innovative Advanced Concepts (NIAC) program.
Space-based astronomy is a relatively new phenomenon whose history is inextricably linked to the history of space exploration. The first space telescopes followed the development of the first rockets and satellites.
As NASA and Roscosmos achieved expertise in space, space-based observatories increased in number and diversity. And as more and more nations joined the Space Age, more space agencies began conducting astronomical observations from space.
Today, the field has benefitted from the rise of interferometry, miniaturization, autonomous robotic systems, analytic software, predictive algorithms, high-speed data transfer, and improved optics.
At this rate, it is only a matter of time before astronomers see the Universe in the earliest stages of formation, unlock the mysteries of Dark Matter and Dark Energy, locate habitable worlds, and discover life beyond Earth and the Solar System. And it wouldn't be surprising if it all happens simultaneously!
A radionuclide (radioactive nuclide, radioisotope or radioactive isotope) is an atom that has excess nuclear energy, making it unstable. This excess energy can be used in one of three ways: emitted from the nucleus as gamma radiation; transferred to one of its electrons to release it as a conversion electron; or used to create and emit a new particle (alpha particle or beta particle) from the nucleus. During those processes, the radionuclide is said to undergo radioactive decay. These emissions are considered ionizing radiation because they are powerful enough to liberate an electron from another atom. The radioactive decay can produce a stable nuclide or will sometimes produce a new unstable radionuclide which may undergo further decay. Radioactive decay is a random process at the level of single atoms: it is impossible to predict when one particular atom will decay. However, for a collection of atoms of a single element the decay rate, and thus the half-life (t1/2) for that collection, can be calculated from their measured decay constants. The range of the half-lives of radioactive atoms has no known limits and spans a time range of over 55 orders of magnitude.
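As an illustration of that relationship, the sketch below applies the standard exponential decay law, N(t) = N0·exp(−λt), with half-life t1/2 = ln 2 / λ. The decay constant used is an arbitrary assumed value rather than that of any particular nuclide.

```python
import math

# Minimal sketch of first-order radioactive decay:
#   N(t) = N0 * exp(-lambda * t),  t_half = ln(2) / lambda
# The decay constant below is an assumed example value, not a real nuclide's.

def half_life(decay_constant: float) -> float:
    """Half-life, in the same time unit as 1/decay_constant."""
    return math.log(2) / decay_constant

def remaining_fraction(t: float, decay_constant: float) -> float:
    """Fraction of the original atoms still undecayed after time t."""
    return math.exp(-decay_constant * t)

lam = 0.0231  # per hour (assumed), giving a half-life of about 30 hours
print(f"half-life ~ {half_life(lam):.1f} h")
print(f"fraction remaining after 60 h ~ {remaining_fraction(60, lam):.2f}")  # ~0.25 after two half-lives
```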
Radionuclides occur naturally or are artificially produced in nuclear reactors, cyclotrons, particle accelerators or radionuclide generators. There are about 730 radionuclides with half-lives longer than 60 minutes (see list of nuclides). Thirty-two of those are primordial radionuclides that were created before the earth was formed. At least another 60 radionuclides are detectable in nature, either as daughters of primordial radionuclides or as radionuclides produced through natural production on Earth by cosmic radiation. More than 2400 radionuclides have half-lives less than 60 minutes. Most of those are only produced artificially, and have very short half-lives. For comparison, there are about 252 stable nuclides. (In theory, only 146 of them are stable, and the other 106 are believed to decay (alpha decay or beta decay or double beta decay or electron capture or double electron capture).)
All chemical elements can exist as radionuclides. Even the lightest element, hydrogen, has a well-known radionuclide, tritium. Elements heavier than lead, and the elements technetium and promethium, exist only as radionuclides. (In theory, elements heavier than dysprosium exist only as radionuclides, but some such elements, like gold and platinum, are observationally stable and their half-lives have not been determined).
Unplanned exposure to radionuclides generally has a harmful effect on living organisms including humans, although low levels of exposure occur naturally without harm. The degree of harm will depend on the nature and extent of the radiation produced, the amount and nature of exposure (close contact, inhalation or ingestion), and the biochemical properties of the element; with increased risk of cancer the most usual consequence. However, radionuclides with suitable properties are used in nuclear medicine for both diagnosis and treatment. An imaging tracer made with radionuclides is called a radioactive tracer. A pharmaceutical drug made with radionuclides is called a radiopharmaceutical.
On Earth, naturally occurring radionuclides fall into three categories: primordial radionuclides, secondary radionuclides, and cosmogenic radionuclides.
- Radionuclides are produced in stellar nucleosynthesis and supernova explosions along with stable nuclides. Most decay quickly but can still be observed astronomically and can play a part in understanding astronomical processes. Primordial radionuclides, such as uranium and thorium, still exist today because their half-lives are so long (>100 million years) that they have not yet completely decayed. Some radionuclides have half-lives so long (many times the age of the universe) that decay has only recently been detected, and for most practical purposes they can be considered stable, most notably bismuth-209: detection of this decay meant that bismuth was no longer considered stable. It is possible that decay will be observed in other nuclides, adding to this list of primordial radionuclides.
- Secondary radionuclides are radiogenic isotopes derived from the decay of primordial radionuclides. They have shorter half-lives than primordial radionuclides. They arise in the decay chain of the primordial isotopes thorium-232, uranium-238, and uranium-235. Examples include the natural isotopes of polonium and radium.
- Cosmogenic isotopes, such as carbon-14, are present because they are continually being formed in the atmosphere due to cosmic rays.
Many of these radionuclides exist only in trace amounts in nature, including all cosmogenic nuclides. Secondary radionuclides will occur in proportion to their half-lives, so short-lived ones will be very rare. For example, polonium can be found in uranium ores at about 0.1 mg per metric ton (1 part in 10^10). Further radionuclides may occur in nature in virtually undetectable amounts as a result of rare events such as spontaneous fission or uncommon cosmic ray interactions.
Radionuclides are produced as an unavoidable result of nuclear fission and thermonuclear explosions. The process of nuclear fission creates a wide range of fission products, most of which are radionuclides. Further radionuclides can be created from irradiation of the nuclear fuel (creating a range of actinides) and of the surrounding structures, yielding activation products. This complex mixture of radionuclides with different chemistries and radioactivity makes handling nuclear waste and dealing with nuclear fallout particularly problematic.
Synthetic radionuclides are deliberately synthesised using nuclear reactors, particle accelerators or radionuclide generators:
- As well as being extracted from nuclear waste, radioisotopes can be produced deliberately with nuclear reactors, exploiting the high flux of neutrons present. These neutrons activate elements placed within the reactor. A typical product from a nuclear reactor is iridium-192. The elements that have a large propensity to take up the neutrons in the reactor are said to have a high neutron cross-section.
- Particle accelerators such as cyclotrons accelerate particles to bombard a target to produce radionuclides. Cyclotrons accelerate protons at a target to produce positron-emitting radionuclides, e.g. fluorine-18.
- Radionuclide generators contain a parent radionuclide that decays to produce a radioactive daughter. The parent is usually produced in a nuclear reactor. A typical example is the technetium-99m generator used in nuclear medicine. The parent produced in the reactor is molybdenum-99.
Radionuclides are used in two major ways: either for their radiation alone (irradiation, nuclear batteries) or for the combination of chemical properties and their radiation (tracers, biopharmaceuticals).
- In biology, radionuclides of carbon can serve as radioactive tracers because they are chemically very similar to the nonradioactive nuclides, so most chemical, biological, and ecological processes treat them in a nearly identical way. One can then examine the result with a radiation detector, such as a Geiger counter, to determine where the provided atoms were incorporated. For example, one might culture plants in an environment in which the carbon dioxide contained radioactive carbon; then the parts of the plant that incorporate atmospheric carbon would be radioactive. Radionuclides can be used to monitor processes such as DNA replication or amino acid transport.
- In nuclear medicine, radioisotopes are used for diagnosis, treatment, and research. Radioactive chemical tracers emitting gamma rays or positrons can provide diagnostic information about internal anatomy and the functioning of specific organs, including the human brain. This is used in some forms of tomography: single-photon emission computed tomography and positron emission tomography (PET) scanning and Cherenkov luminescence imaging. Radioisotopes are also a method of treatment in hemopoietic forms of tumors; the success for treatment of solid tumors has been limited. More powerful gamma sources sterilise syringes and other medical equipment.
- In food preservation, radiation is used to stop the sprouting of root crops after harvesting, to kill parasites and pests, and to control the ripening of stored fruit and vegetables.
- In industry, and in mining, radionuclides are used to examine welds, to detect leaks, to study the rate of wear, erosion and corrosion of metals, and for on-stream analysis of a wide range of minerals and fuels.
- In spacecraft and elsewhere, radionuclides are used to provide power and heat, notably through radioisotope thermoelectric generators (RTGs).
- In astronomy and cosmology, radionuclides play a role in understanding stellar and planetary processes.
- In particle physics, radionuclides help discover new physics (physics beyond the Standard Model) by measuring the energy and momentum of their beta decay products.
- In ecology, radionuclides are used to trace and analyze pollutants, to study the movement of surface water, and to measure water runoffs from rain and snow, as well as the flow rates of streams and rivers.
- In geology, archaeology, and paleontology, natural radionuclides are used to measure the ages of rocks, minerals, and fossil materials (a short dating sketch follows this list).
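As an illustration of the dating application mentioned in the last point, here is a hedged Python sketch of the standard age equation t = (t½ / ln 2) · ln(1 + D/P). It is an idealized model: it assumes a closed system, no initial daughter product, and ignores branching ratios (which matter for real potassium-argon dating); the ratio used is purely illustrative.

```python
import math

def radiometric_age(daughter_to_parent_ratio: float, half_life_years: float) -> float:
    """Age in years from a measured daughter/parent atom ratio, assuming a closed
    system with no initial daughter: t = (t_half / ln 2) * ln(1 + D/P)."""
    return half_life_years / math.log(2) * math.log(1.0 + daughter_to_parent_ratio)

# Illustrative ratio of 0.1 with the potassium-40 half-life of 1.24e9 years
print(radiometric_age(0.1, 1.24e9))  # roughly 1.7e8 years
```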
The following table lists properties of selected radionuclides illustrating the range of properties and uses.
|Nuclide||Z||N||Half-life||Decay mode||Decay energy (keV)||Mode of formation||Comments|
|Tritium (3H)||1||2||12.3 y||β−||19||Cosmogenic||lightest radionuclide, used in artificial nuclear fusion, also used for radioluminescence and as oceanic transient tracer. Synthesized from neutron bombardment of lithium-6 or deuterium|
|Beryllium-10||4||6||1,387,000 y||β−||556||Cosmogenic||used to examine soil erosion, soil formation from regolith, and the age of ice cores|
|Carbon-14||6||8||5,700 y||β−||156||Cosmogenic||used for radiocarbon dating|
|Fluorine-18||9||9||110 min||β+, EC||633/1655||Cosmogenic||positron source, synthesised for use as a medical radiotracer in PET scans.|
|Aluminium-26||13||13||717,000 y||β+, EC||4004||Cosmogenic||exposure dating of rocks, sediment|
|Chlorine-36||17||19||301,000 y||β−, EC||709||Cosmogenic||exposure dating of rocks, groundwater tracer|
|Potassium-40||19||21||1.24×10^9 y||β−, EC||1330/1505||Primordial||used for potassium-argon dating, source of atmospheric argon, source of radiogenic heat, largest source of natural radioactivity|
|Calcium-41||20||21||102,000 y||EC||Cosmogenic||exposure dating of carbonate rocks|
|Cobalt-60||27||33||5.3 y||β−||2824||Synthetic||produces high energy gamma rays, used for radiotherapy, equipment sterilisation, food irradiation|
|Strontium-90||38||52||28.8 y||β−||546||Fission product||medium-lived fission product; probably most dangerous component of nuclear fallout|
|Technetium-99||43||56||210,000 y||β−||294||Fission product||commonest isotope of the lightest unstable element, most significant of long-lived fission products|
|Technetium-99m||43||56||6 hr||γ,IC||141||Synthetic||most commonly used medical radioisotope, used as a radioactive tracer|
|Iodine-129||53||76||15,700,000 y||β−||194||Cosmogenic||longest lived fission product; groundwater tracer|
|Iodine-131||53||78||8 d||β−||971||Fission product||most significant short term health hazard from nuclear fission, used in nuclear medicine, industrial tracer|
|Xenon-135||54||81||9.1 h||β−||1160||Fission product||strongest known "nuclear poison" (neutron-absorber), with a major effect on nuclear reactor operation.|
|Caesium-137||55||82||30.2 y||β−||1176||Fission product||other major medium-lived fission product of concern|
|Gadolinium-153||64||89||240 d||EC||Synthetic||Calibrating nuclear equipment, bone density screening|
|Bismuth-209||83||126||1.9×10^19 y||α||3137||Primordial||long considered stable, decay only detected in 2003|
|Polonium-210||84||126||138 d||α||5307||Decay product||Highly toxic, used in poisoning of Alexander Litvinenko|
|Radon-222||86||136||3.8d||α||5590||Decay product||gas, responsible for the majority of public exposure to ionizing radiation, second most frequent cause of lung cancer|
|Thorium-232||90||142||1.4×10^10 y||α||4083||Primordial||basis of thorium fuel cycle|
|Uranium-235||92||143||7×10^8 y||α||4679||Primordial||fissile, main nuclear fuel|
|Uranium-238||92||146||4.5×10^9 y||α||4267||Primordial||main uranium isotope|
|Plutonium-238||94||144||87.7 y||α||5593||Synthetic||used in radioisotope thermoelectric generators (RTGs) and radioisotope heater units as an energy source for spacecraft|
|Plutonium-239||94||145||24110 y||α||5245||Synthetic||used for most modern nuclear weapons|
|Americium-241||95||146||432 y||α||5486||Synthetic||used in household smoke detectors as an ionising agent|
|Californium-252||98||154||2.64 y||α/SF||6217||Synthetic||undergoes spontaneous fission (3% of decays), making it a powerful neutron source, used as a reactor initiator and for detection devices|
Household smoke detectors
Radionuclides are present in many homes as they are used inside the most common household smoke detectors. The radionuclide used is americium-241, which is created by bombarding plutonium with neutrons in a nuclear reactor. It decays by emitting alpha particles and gamma radiation to become neptunium-237. Smoke detectors use a very small quantity of 241Am (about 0.29 micrograms per smoke detector) in the form of americium dioxide. 241Am is used as it emits alpha particles which ionize the air in the detector's ionization chamber. A small electric voltage is applied to the ionized air which gives rise to a small electric current. In the presence of smoke, some of the ions are neutralized, thereby decreasing the current, which activates the detector's alarm.
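The americium figure above can be turned into an activity estimate with the usual relation A = λN. The short Python sketch below is illustrative only; the molar mass is rounded and the helper name is ours.

```python
import math

AVOGADRO = 6.022e23        # atoms per mole
YEAR_S = 365.25 * 24 * 3600

def activity_bq(mass_g: float, molar_mass_g: float, half_life_years: float) -> float:
    """Activity A = lambda * N of a pure radionuclide sample, in becquerels."""
    n_atoms = mass_g / molar_mass_g * AVOGADRO
    lam = math.log(2) / (half_life_years * YEAR_S)
    return lam * n_atoms

# 0.29 micrograms of americium-241 (half-life 432 y), as quoted above
a = activity_bq(0.29e-6, 241.0, 432.0)
print(f"{a:.0f} Bq (~{a / 3.7e10 * 1e6:.1f} microcuries)")  # ~37,000 Bq, about 1 uCi
```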
Impacts on organisms
Radionuclides that find their way into the environment may cause harmful effects as radioactive contamination. They can also cause damage if they are excessively used during treatment or in other ways exposed to living beings, by radiation poisoning. Potential health damage from exposure to radionuclides depends on a number of factors, and "can damage the functions of healthy tissue/organs. Radiation exposure can produce effects ranging from skin redness and hair loss, to radiation burns and acute radiation syndrome. Prolonged exposure can lead to cells being damaged and in turn lead to cancer. Signs of cancerous cells might not show up until years, or even decades, after exposure."
Summary table for classes of nuclides, "stable" and radioactive
Following is a summary table for the total list of nuclides with half-lives greater than one hour. Ninety of these 989 nuclides are theoretically stable, except to proton-decay (which has never been observed). About 252 nuclides have never been observed to decay, and are classically considered stable.
The remaining tabulated radionuclides have half-lives longer than 1 hour, and are well-characterized (see list of nuclides for a complete tabulation). They include 30 nuclides with measured half-lives longer than the estimated age of the universe (13.8 billion years), and another 4 nuclides with half-lives long enough (> 100 million years) that they are radioactive primordial nuclides, and may be detected on Earth, having survived from their presence in interstellar dust since before the formation of the solar system, about 4.6 billion years ago. Another 60+ short-lived nuclides can be detected naturally as daughters of longer-lived nuclides or cosmic-ray products. The remaining known nuclides are known solely from artificial nuclear transmutation.
Numbers are not exact, and may change slightly in the future, as "stable nuclides" are observed to be radioactive with very long half-lives.
|Stability class||Number of nuclides||Running total||Notes on running total|
|Theoretically stable to all but proton decay||90||90||Includes first 40 elements. Proton decay yet to be observed.|
|Theoretically stable to alpha decay, beta decay, isomeric transition, and double beta decay but not spontaneous fission, which is possible for "stable" nuclides ≥ niobium-93||56||146||All nuclides that are possibly completely stable (spontaneous fission has never been observed for nuclides with mass number < 232).|
|Energetically unstable to one or more known decay modes, but no decay yet seen. All considered "stable" until decay detected.||106||252||Total of classically stable nuclides.|
|Radioactive primordial nuclides.||34||286||Total primordial elements include uranium, thorium, bismuth, rubidium-87, potassium-40, tellurium-128 plus all stable nuclides.|
|Radioactive nonprimordial, but naturally occurring on Earth.||61||347||Carbon-14 (and other isotopes generated by cosmic rays) and daughters of radioactive primordial elements, such as radium, polonium, etc. 41 of these have a half life of greater than one hour.|
|Radioactive synthetic (half-life ≥ 1.0 hour). Includes most useful radiotracers.||662||989||These 989 nuclides are listed in the article List of nuclides.|
|Radioactive synthetic (half-life < 1.0 hour).||>2400||>3300||Includes all well-characterized synthetic nuclides.|
List of commercially available radionuclides
This list covers common isotopes, most of which are available in very small quantities to the general public in most countries. Others that are not publicly accessible are traded commercially in industrial, medical, and scientific fields and are subject to government regulation.
Gamma emission only
|Isotope||Activity||Half-life||Energies (keV)|
|Barium-133||9694 TBq/kg (262 Ci/g)||10.7 years||81.0, 356.0|
|Cadmium-109||96200 TBq/kg (2600 Ci/g)||453 days||88.0|
|Cobalt-57||312280 TBq/kg (8440 Ci/g)||270 days||122.1|
|Cobalt-60||40700 TBq/kg (1100 Ci/g)||5.27 years||1173.2, 1332.5|
|Europium-152||6660 TBq/kg (180 Ci/g)||13.5 years||121.8, 344.3, 1408.0|
|Manganese-54||287120 TBq/kg (7760 Ci/g)||312 days||834.8|
|Sodium-22||237540 TBq/kg (6240 Ci/g)||2.6 years||511.0, 1274.5|
|Zinc-65||304510 TBq/kg (8230 Ci/g)||244 days||511.0, 1115.5|
|Technetium-99m||1.95×10^7 TBq/kg (5.27×10^5 Ci/g)||6 hours||140|
Beta emission only
|Isotope||Activity||Half-life||Energies (keV)|
|Strontium-90||5180 TBq/kg (140 Ci/g)||28.5 years||546.0|
|Thallium-204||17057 TBq/kg (461 Ci/g)||3.78 years||763.4|
|Carbon-14||166.5 TBq/kg (4.5 Ci/g)||5730 years||49.5 (average)|
|Tritium (Hydrogen-3)||357050 TBq/kg (9650 Ci/g)||12.32 years||5.7 (average)|
Alpha emission only
|Isotope||Activity||Half-life||Energies (keV)|
|Polonium-210||166500 TBq/kg (4500 Ci/g)||138.376 days||5304.5|
|Uranium-238||12580 kBq/kg (0.00000034 Ci/g)||4.468 billion years||4267|
Multiple radiation emitters
|Isotope||Activity||Half-life||Radiation types||Energies (keV)|
|Caesium-137||3256 TBq/kg (88 Ci/g)||30.1 years||Gamma & beta||G: 32, 661.6 B: 511.6, 1173.2|
|Americium-241||129.5 TBq/kg (3.5 Ci/g)||432.2 years||Gamma & alpha||G: 59.5, 26.3, 13.9 A: 5485, 5443|
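The activities quoted in these tables are specific activities of the pure nuclide and can be checked, to within a few percent, from the half-life and molar mass alone. A minimal Python sketch (function name ours, constants rounded) using cobalt-60 as an example:

```python
import math

AVOGADRO = 6.022e23
YEAR_S = 365.25 * 24 * 3600

def specific_activity_tbq_per_kg(molar_mass_g: float, half_life_years: float) -> float:
    """Specific activity of a pure radionuclide in TBq/kg: lambda * (atoms per kilogram)."""
    lam = math.log(2) / (half_life_years * YEAR_S)
    atoms_per_kg = 1000.0 / molar_mass_g * AVOGADRO
    return lam * atoms_per_kg / 1e12

# Cobalt-60: molar mass ~60 g/mol, half-life 5.27 y
print(specific_activity_tbq_per_kg(60.0, 5.27))  # ~4.2e4 TBq/kg, close to the tabulated 40,700
```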
- List of nuclides shows all radionuclides with half-life > 1 hour
- Hyperaccumulators table – 3
- Radioactivity in biology
- Radiometric dating
- Radionuclide cisternogram
- Uses of radioactivity in oil and gas wells
- EPA – Radionuclides – EPA's Radiation Protection Program: Information.
- FDA – Radionuclides – FDA's Radiation Protection Program: Information.
- Interactive Chart of Nuclides – A chart of all nuclides
- National Isotope Development Center – U.S. Government source of radionuclides – production, research, development, distribution, and information
- The Live Chart of Nuclides – IAEA
A collection of data can be distributed or spread out in many different ways. For example, data from rolling a die can take random integer values from 1 through 6. Data from a manufacturing process may be centered on a target value or may include data values that are very far from the center value.
You can assess a data distribution through graphs, descriptive statistics, or comparison to a theoretical distribution:
- Graphs like histograms can give instant insight into the distribution of a data set. Histograms can help you to observe:
- Whether the data cluster around a single value or whether the data have multiple peaks or modes.
- Whether the data are spread thinly over a large range or whether the data are within a small range.
- Whether the data are skewed or symmetrical.
- Descriptive statistics
- Descriptive statistics that describe the central tendency (mean, median) and spread (variance, standard deviation) of data with numeric values add a layer of detail and can be used to make comparisons with other data sets.
- Theoretical distributions
- Finally, some common distributions can be identified and are referred to by name, like the normal, Weibull, and exponential distributions. The normal distribution, for example, is always bell-shaped and symmetric about a mean value.
- Your real data will likely only approximate these theoretical distributions. If there is a close fit, you can say that the data are well-modeled by a given distribution, as the short sketch below illustrates.
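As a concrete illustration of the points above, the following Python sketch simulates the dice example, then summarizes the sample with descriptive statistics and a crude text histogram. It uses only the standard library; the sample size and layout are arbitrary choices.

```python
import random
import statistics

# Simulate 1,000 rolls of a fair six-sided die
rolls = [random.randint(1, 6) for _ in range(1000)]

print("mean:  ", statistics.mean(rolls))    # central tendency, ~3.5 for a fair die
print("median:", statistics.median(rolls))
print("stdev: ", statistics.stdev(rolls))   # spread, ~1.71 for a fair die

# Crude text histogram: count how often each face appears
for face in range(1, 7):
    count = rolls.count(face)
    print(face, "#" * (count // 10))
```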
The genome of living beings is permanently affected by anomalies, due to accidental errors or agents inside and outside the body. Fortunately, enzymes correct most of these anomalies and thus reinforce genetic stability. But they can also cause mutations that contribute to variability. Other systems can produce new genetic combinations. These mutations and recombinations are essential for the adaptation of populations to changes in the environment and therefore for evolution.
The genome is the genetic material of a living organism; it contains the genetic information encoding its proteins. In most organisms it takes the form of DNA (deoxyribonucleic acid). In many viruses, including retroviruses such as HIV, DNA is replaced by a very similar molecule, RNA (ribonucleic acid).
DNA consists of a double sequence of monomers (nucleotides), small molecules whose successive linking gives rise to a polymer. There are four types of nucleotide, which differ only by the nitrogenous (nucleic) base they carry: adenine (A), guanine (G), thymine (T) and cytosine (C). It is the precise sequence of these four bases that constitutes the “text” of the genetic message. Its structure in two complementary strands, where A always faces T and G always faces C – the famous double helix (Figure 1) – has two major advantages:
- It allows replication, in a single step, into two identical daughter molecules (Figure 1), thus transmitting the genetic information through cell divisions and sexual reproduction.
- It allows the repair of lesions that appeared on one of the strands as well as the creation of recombinant molecules, i.e. new genetic combinations. This double-stranded structure therefore promotes both stability and variability of genomes.
Genetic variations in DNA fall into two main categories: mutations and recombinations.
- Mutations are sudden changes in the genome of a living cell or virus. In multicellular organisms, if the mutations affect the germ cells intended to give gametes, they will be transmitted to the offspring and will therefore become heritable.
- Recombinations produce a new genetic combination from exchanges between existing genetic material. There are several types, with very different biological mechanisms and roles. Classically, this definition concerned only DNA belonging to the same species; we now know that spontaneous genetic exchanges also occur between different species (transgenesis), and we will include them in this category.
These two main types of variations are themselves divided into several modalities. We will endeavour to extract the elements most likely to shed light on their respective roles in population dynamics and evolution.
2. The mutations
2.1. General information
The stability of genetic messages encoded by DNA can be affected by errors of the enzyme that performs the replication, the replicase. This enzyme can thus put a nitrogenous base in place of another on the new strand. But apart from any replication, DNA is also permanently damaged. It is important to distinguish between lesion and mutation (see Genetic polymorphism & variation). Lesions are abnormalities in the physical structure of DNA that, in most cases, will prevent its replication. They cannot therefore be transmitted to the descendants. On the other hand, a mutated molecule has a normal physical structure. Only the sequence of the bases – and therefore the information they contain – is modified. It can therefore replicate without problem and transmit the mutation to daughter cells.
Lesions are very diverse, and there are many repair enzymes, each highly specialized for a particular type of lesion. This is referred to as the cell's "toolbox" for DNA repair (work awarded the Nobel Prize in Chemistry in 2015). Paradoxically, it is these repair systems that, by mistake, create the actual mutation from the primary lesion. Lesions can be caused either by agents internal to the body (endogenous) or by agents from the external environment (exogenous). Here are some examples to illustrate.
- Endogenous agents: in warm-blooded animals, it is estimated that the DNA molecule can undergo 20,000 to 40,000 single-strand cuts (breaks in the bond between two adjacent nucleotides on a strand) per cell per day, as a result of molecular agitation. There can also be base losses: it is estimated that 10,000 T and C and 500 A and G are lost, again per cell and per day. In these cases, we can even speak of spontaneous lesions. Among the most frequent endogenous agents are reactive oxygen species (ROS, free radicals derived from oxygen, very reactive and very toxic), which are normal by-products of respiratory metabolism; they play an important role by oxidizing certain bases, which must then be replaced. To these should be added the transposable elements (transposons), mobile DNA sequences capable of moving autonomously within a genome by a mechanism called transposition; they are considered powerful drivers of evolution and biodiversity, and we will discuss them further below.
- Exogenous agents: they can be physical (radiation) or chemical. The most common are ultraviolet (UV) rays, whose effect is generally limited to the skin because they are not very penetrating (see Solar UV Cellular Impact). For example, it is estimated that sunbathing can cause, per cell and per hour, 60,000 to 80,000 abnormal chemical bonds between contiguous thymines of the same DNA strand, each of which is sufficient to block replication. If they are not all repaired, the cell dies: this is sunburn.
It is therefore clear that DNA stability is a dynamic process, the result of a permanent balance between the production of lesions and their repair. These repair mechanisms do not operate at a constant level, they are subject to regulation.
A first type of regulation depends on the number of lesions in the cell. This phenomenon was first highlighted in the E. coli, a bacterium that is one of the preferred subjects of study for geneticists. As early as 1974, it was assumed that there was a response, called SOS, which regulates the intervention of several repair systems according to the number of injuries. When there is a small number of lesions, this response increases the effectiveness of faithful repair mechanisms. But beyond a certain threshold of damage, these mechanisms are overwhelmed. The SOS response then induces the synthesis of a replicase capable of crossing specific lesions (see below), but with a certain error rate. This is called SOS mutagenesis. At the end of the 1990s, it was shown to facilitate the adaptation of the bacterial population to a hostile environment, at the cost of significant losses due to harmful mutations. It is a kind of faint hope system, hence the name SOS.
A stimulation of faithful repair mechanisms following irradiation has been shown in other organisms highly prized by biologists (Figure 2): in the 1980s in baker's yeast, in 2000 in the vinegar fly, in 2006 in a plant with a pretty name, the “ladies’ arabette” (thale cress, Arabidopsis), and in 2008 in mice exposed to radiation at Chernobyl. These data call into question the still widespread claim that the effects of radiation are directly proportional to dose. For moderate doses, this is clearly not true. More detailed information on this work can be found at: https://www.lespiedsdansleplat.me/comment-les-organismes-vivants-protegent-leur-adn/.
The second type of regulation concerns the fidelity of replication according to the type of organism. In all living cells, bacteria and others, replication involves, in addition to the replicase, a corrective (proofreading) enzyme system. Using the old strand as a reference, this system corrects the errors made on the new strand by the replicase, which results in a very low error rate, in the order of 1 in 10 billion (10^-10). Such accuracy is essential for organisms with large DNA molecules: 4 million base pairs in the bacterium E. coli and 3 billion in humans.
But in multicellular organisms, the number of cells can be very large: in the human species, it is estimated at one hundred thousand billion (10^14). In these organisms, the majority of mutations are neutral, because the genome contains a large number of non-coding sequences (see Genetic polymorphism and variation). Even taking this into account, an error rate of 1 in 10 billion (10^-10) is still too high, all the more so since many mutations also occur outside replication. If there were only one copy of each gene per cell, many cells would carry deleterious mutations and the organism would not be viable. Hence the interest, and the necessity, for these organisms of having two chromosome sets (diploidy: the property of a cell whose chromosomes are present in pairs, 2n chromosomes, as opposed to haploidy, where chromosomes are present in a single copy, n chromosomes). Most mutations that inactivate one gene are recessive, so it is sufficient for the other copy to be functional for the cell to function normally. In a nutshell, it can be said that diploidy acts as a spare wheel.
If we now look at the case of viruses, most of which have very small genomes, the situation is very different. For DNA viruses, there is generally no corrective activity associated with replicase, so the mutation rate is much higher, in the order of 1 in 1 million (10^-6). This is even more true for viruses whose genome is composed of RNA, because RNA-replicases are much less accurate than DNA-replicases.
This is particularly the case for influenza viruses or the AIDS virus. For the latter, which has a genome of 10,000 nucleotides, the rate of point mutations (mutations in which a single base pair is modified) has been estimated at about 1 in 10 per genome and per replication cycle. Since the number of viral particles (virions) produced is in the order of 10,000 per day per infected cell, we can see that the viral population of an infected host shows considerable variability. This high error rate should not be considered simply as a defect of organisms that are too "rudimentary", but rather as an opportunity for the virus to bypass its host's defence mechanisms and perpetuate itself indefinitely. The counterpart is a large number of inactive virions but, given the quantity produced, this is not a serious handicap.
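The orders of magnitude discussed above can be combined into a simple back-of-the-envelope estimate of new mutations per genome copy. The Python sketch below uses the genome sizes and error rates quoted in the text where available; the per-base rates for the viral cases are assumptions chosen to reproduce the stated figures and are not authoritative.

```python
# Expected new mutations per genome copy = per-base error rate x genome size (illustrative figures).
cases = {
    "E. coli, proofreading replication": (4e6, 1e-10),     # genome size (bp), error rate per base
    "Human cell, proofreading replication": (3e9, 1e-10),
    "Small DNA virus, no proofreading": (1e4, 1e-6),        # assumed genome size
    "RNA virus such as HIV": (1e4, 1e-5),                   # rate assumed to match ~1 mutation per 10 genomes
}

for name, (genome_size, error_rate) in cases.items():
    expected = genome_size * error_rate
    print(f"{name}: about {expected:.2g} new mutations per replication")
```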
This leads us to make two remarks on the role of mutations in the evolutionary process:
- They play a key role in the adaptation of species to their environment because, by increasing genetic diversity, they provide the material on which natural selection can act. What we have just seen with viruses is one example. The development of antibiotic-resistant bacteria, insecticide-resistant insects and herbicide-resistant plants are other examples.
- A genetic mutation is not inherently advantageous or disadvantageous; it depends on the environment. For example, in the Kerguelen Islands, in the subantarctic, there is a wingless fly species, Calycopteryx moseley (Figure 3). This character, strongly disadvantageous in our regions, is on the contrary beneficial there because it prevents these flies from being carried into the ocean by the very strong winds that constantly sweep these islands.
What are the different categories of mutations? They are classically distinguished by the size of the DNA segment concerned. In increasing order, they range from point mutations, where a single pair of nitrogenous bases is modified, to mutations affecting DNA segments of up to tens of thousands of nucleotides, not to mention changes in the number of chromosomes. The last two categories mainly concern eukaryotic organisms, whose chromosomes are located in a particular compartment of the cell: the nucleus. These chromosomes each consist of a very long DNA molecule in a complex "package" composed of several families of proteins.
2.2. Point mutations
They can be due either to errors during replication or, more frequently, to poorly repaired lesions; they create variants of the same gene (alleles). In E. coli, it is the SOS system that causes the vast majority of mutations after UV irradiation. Most of the examples cited above are in fact point mutations.
2.3. Chromosome mutations
These are rearrangements produced by agents that cause DNA breaks, including radiation (other than UV). Several double-strand breaks can lead to more or less significant rearrangements depending on their number and the size of the segment concerned.
In the case of two very distant double-strand breaks on the same chromosome, the entire segment between the breaks can be either (a) lost, causing the individual to die, (b) reinserted at the same site in the reverse orientation (inversion), or (c) transferred to another chromosome (translocation) if other breaks have occurred on it. Inversions and translocations are quite frequent in natural populations. They interfere with the proper pairing of chromosomes during meiosis (the double cell division that takes place in the diploid cells of the germline to form haploid gametes, or sex cells, in eukaryotic organisms) and thus cause some sterility. As a result, they can be involved in the processes of speciation, the evolutionary process that leads to the emergence of new species from populations belonging to an original species. Often, neighbouring species differ in chromosomal rearrangements.
2.4. Mutations of the chromosome set
This category has different mechanisms from the previous ones. These are no longer anomalies resulting from primary lesions but mechanical errors in the cell division processes, mitosis (the non-sexual duplication of the mother cell's chromosomes and their equal distribution between the two daughter cells) or meiosis. Some can lead to situations where individuals have more than two chromosome sets (2N), but always an integer number of sets (3N, 4N…); this is polyploidy (see Focus Polyploidy). They are viable because the "gene balance" is respected: all genes have the same number of copies. On the other hand, 3N individuals are sterile because the genetic balance of their gametes is necessarily abnormal: meiosis obviously cannot evenly distribute an odd number of chromosome sets among the gametes. The 4N, 6N or 8N polyploids are fertile, but they immediately constitute a new species, because any crossing with diploid parents would give sterile descendants, their meioses being very unbalanced.
Other "failures" of meiosis can lead to a chromosomal imbalance in some gametes, which will therefore be found in the offspring. For example, they will be 2N-1 (monosomy) or 2N+1 (trisomy); we then speak of aneuploidy, a harmful situation because the genetic balance is no longer respected.
3. The recombinations
There are many recombination modalities, with very different biological roles.
3.1. The homologous recombination
It is the most common and also the longest-known type. As its name suggests, it occurs between identical DNA molecules (Figure 4). In diploid organisms, it occurs regularly during meiosis, between homologous chromosomes, by crossing over. It is a "soft" form of variability: the rejoining occurs at exactly the same point, so there are no mutations or changes in the arrangement of genes on the chromosomes. The result is new, viable genetic combinations that may, in a given environment, have different adaptive capacities than the parental combinations. In bacteria, homologous recombination can also occur during conjugation, which allows genetic exchanges between cells.
This recombination mechanism is also involved in DNA repair. In diploid organisms, double-strand breaks caused on a chromosome by irradiation can be repaired by the presence of an intact homologous molecule in the segment concerned.
3.2. Other types of recombinations
To simplify, we will group together types of events with very different biological roles. They could be grouped under the term additive recombination because they result, in most cases, in the addition of DNA segments in genomes.
Among the most common mechanisms is the transposition of mobile genetic elements (called transposable elements), which contain only a few hundred or thousands of nucleotides and possess, at a minimum, functions necessary for their “jumps” into the host genome. They occur in both bacteria and the most complex eukaryotes, sometimes in large numbers. In the former, they can be exchanged between different species during conjugation. As they generally carry antibiotic resistance genes, they are one of the essential factors in the very rapid spread of these resistances in pathogenic bacteria, with the serious medical problems that result.
In the human species, while our genes themselves, which code for all our proteins, constitute only about 2 to 3% of the DNA of the nucleus, the various families of transposable elements constitute nearly 50%. Most of them have been established in our lineage since very ancient geological times, long before the appearance of our species. Retroviruses – a family of RNA viruses with high genetic variability, to which HIV belongs, whose reverse transcriptase copies the viral RNA into DNA capable of integrating into the host cell's DNA, making them both viruses and transposable elements – alone represent about 8% of our DNA. Fortunately for genome stability, only a small number of these elements are still mobile. As with "classical" mutations, the insertion of a transposable element can be harmful if it occurs in a gene, but it can also bring interesting genetic innovations, either by modifying gene regulation or through its own functions. Multiple data show that they have contributed to the evolution of genomes and therefore of species.
Finally, it is necessary to mention the phenomena of transgenesis, i.e. the genetic exchanges between different species. Since the late 1990s, we have found that they are much more frequent than we thought. The species barrier, which had been thought to be impassable under natural conditions, is actually quite porous, at least on an evolutionary scale. Again, we have a source of innovation that contributes to the plasticity of genomes and evolution.
We have presented a brief overview of the different types of genome variation. We could not detail the enzyme systems involved in repair and recombination; they alone would have required a complete article (if not two!). Readers interested in these mechanisms can refer to the sites listed below. To conclude, two important points should be highlighted:
- Genetic variations occur randomly. They are not directed by the environment for adaptive purposes, as postulated in the Lamarckian vision of evolution. This has been abundantly demonstrated experimentally (read Lamarck and Darwin: two divergent visions of the living world). However, contrary to what was believed until the 1980s, their frequencies can be modulated by the environment. We have seen this illustrated in bacteria, with the increase in the mutation rate by the SOS response, and in the fruit fly and the arabette, with the stimulation of homologous recombination following irradiation. Irradiation is not the only factor capable of modulating the frequency of variations; many physiological stresses can lead to the same result. For example, the application of antibiotics to bacterial cultures also triggers SOS mutagenesis and thus increases the frequency of mutations [11,12]. This subject is also covered in the article Adaptation: Responding to environmental challenges.
- This article is at the heart of what can be called the stability/variability dialectic of living things: two properties that are a priori antagonistic but in fact quite complementary. Both are essential for the survival of populations and the evolution of species.
Stability to allow a population to adapt sustainably to its environment when it is relatively stable and variability to facilitate genetic changes when the environment changes, allowing natural selection to work. The SOS response in bacteria is a good illustration of this since it is capable of performing either of these two functions, depending on environmental conditions.
References and notes
Cover photo: © vitstudio; Image 134698571 via Shutterstock
[2] R. Devoret (1993) Mechanism of SOS mutagenesis. Med/Sci vol. 3, n°9, I-VII.
[3] J. Ducau et al. (2000) Mutation Research 460, 69-80.
[4] J. Molinier et al. (2006) Nature 442, 1046-1049.
[5] B.E. Rodgers & K.M. Holmes (2008) Dose Response 6, 209-221.
Video of Miroslav Radman: https://www.reseau-canope.fr/corpus/video/la-recombinaison-genetique-129.html
[8] D. Anxolabéhère, D. Nouaud & W.J. Miller (2000) Transposable elements and genetic novelties in eukaryotes. Med/Sci, n°11, vol. 16, I-IX.
[11] S. Da Re & M.-C. Ploy (2012) Antibiotics and bacterial SOS response. Med Sci (Paris) 28:179-184.
[12] J. Blázquez, J. Rodríguez-Beltrán & I. Matic (2018) Antibiotic-Induced Genetic Variation: How It Arises and How It Can Be Prevented. Annu. Rev. Microbiol. 72:209-230.
L’Encyclopédie de l’environnement is published by the Université Grenoble Alpes - www.univ-grenoble-alpes.fr
To cite this article: BREGLIANO Jean-Claude (2019), The genome between stability and variability, Encyclopédie de l'Environnement, [online ISSN 2555-0950] url: https://www.encyclopedie-environnement.org/en/non-classe-en/the-genome-between-stability-and-variability/.
Quiz & Worksheet: Addition Rule & Mendelian Inheritance
Probability rules and the counting principle: when an event or occurrence has more than one dimension, it is useful to know how many different outcomes are possible, and counting methods let you solve probability word problems involving combinations. Typical 7th-grade word problems: what is the probability that a stick drawn at random will be purple? If you think of a number from the first thirty negative integers, what is the probability that the integer chosen is divisible by 5? When a six-sided die is rolled, what is the probability that the number rolled is a 4?
Section 5.4 Conditional Probability and the General Multiplication Rule
Probability addition rules: worksheets in this category cover the addition and multiplication rules, the introduction to probability and the addition rule, and the addition and multiplication laws of probability. Addition Rule I asks you to determine whether given events are mutually exclusive.
Probability multiplication rule: the multiplication rule for independent events relates the probabilities of two events to the probability that they both occur. In order to use the rule, we need the probability of each of the independent events; the rule then states that the probability that both events occur is found by multiplying the probabilities of each event. Conditional probability worksheet answers: (1) if all outcomes in a sample space have an equal probability of occurring, then the probability model for that sample space is uniform (true, by definition of a uniform probability model); (2) if outcomes have different probabilities, the general multiplication rule for events A and B is P(A and B) = P(A)·P(B|A).
Probability Addition Rules Worksheets
Part 3, Module 5: independent events and the multiplication rule. Determine whether the following events are independent: question 10, drunk driving and getting a DWI; question 11, getting an A in statistics and getting an A in sculpture. The sign rules for multiplication and division are the same: positive times (or divided by) positive is positive, e.g. 10 ÷ 2 = 5; negative times (or divided by) negative is positive, e.g. (−4) × (−3) = 12.
Section 3.2 Conditional Probability and the Multiplication Rule
Worksheet #5, Compound Probability and the General Multiplication Rule: complete all the problems, e.g. Holly is going to draw two cards from a standard deck; three events, the first child is a girl, the second is a girl, and the third is a girl; a standard poker deck contains four suits (diamonds, hearts, spades, and clubs), the diamonds and hearts being red and the spades and clubs black.
Printable worksheets and lessons: a bag-of-candy step-by-step lesson (what is the chance of getting a green candy out of a mixed bag?) and a guided lesson (marbles, pulling cards from a …). A short Python sketch of these rules follows below.
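A short sketch of these rules in Python, using only elementary arithmetic; the coin, die and card examples are generic illustrations, not taken from any particular worksheet.

```python
# Multiplication rule for independent events: P(A and B) = P(A) * P(B)
p_heads = 0.5          # flip a fair coin
p_roll_four = 1 / 6    # roll a 4 on a fair six-sided die
print("P(heads and a 4):", p_heads * p_roll_four)        # 1/12

# Addition rule for mutually exclusive events: P(A or B) = P(A) + P(B)
p_roll_1 = 1 / 6
p_roll_2 = 1 / 6
print("P(1 or 2):", p_roll_1 + p_roll_2)                  # 1/3

# General multiplication rule with conditional probability: P(A and B) = P(A) * P(B | A)
# Drawing two cards from a standard 52-card deck without replacement, both hearts:
p_first_heart = 13 / 52
p_second_heart_given_first = 12 / 51
print("P(two hearts):", p_first_heart * p_second_heart_given_first)   # ~0.0588
```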
|Electrical resistivity|
|SI unit||ohm metre (Ω·m)|
|In SI base units||kg·m^3·s^-3·A^-2|
|Electrical conductivity|
|Common symbols||σ, κ, γ|
|SI unit||siemens per metre (S/m)|
|In SI base units||kg^-1·m^-3·s^3·A^2|
Electrical resistivity (also called specific electrical resistance or volume resistivity) is a fundamental property of a material that measures how strongly it resists electric current. Its inverse, called electrical conductivity, quantifies how well a material conducts electricity. A low resistivity indicates a material that readily allows electric current. Resistivity is commonly represented by the Greek letter ρ (rho). The SI unit of electrical resistivity is the ohm-meter (Ω·m). For example, if a 1 m solid cube of material has sheet contacts on two opposite faces, and the resistance between these contacts is 1 Ω, then the resistivity of the material is 1 Ω·m.
Electrical conductivity or specific conductance is the reciprocal of electrical resistivity. It represents a material's ability to conduct electric current. It is commonly signified by the Greek letter σ (sigma), but κ (kappa) (especially in electrical engineering) and γ (gamma) are sometimes used. The SI unit of electrical conductivity is siemens per metre (S/m).
In an ideal case, cross-section and physical composition of the examined material are uniform across the sample, and the electric field and current density are both parallel and constant everywhere. Many resistors and conductors do in fact have a uniform cross section with a uniform flow of electric current, and are made of a single material, so that this is a good model. (See the adjacent diagram.) When this is the case, the electrical resistivity ρ (Greek: rho) can be calculated by:

ρ = R · A / ℓ

where R is the electrical resistance of a uniform specimen of the material, ℓ is its length, and A is its cross-sectional area.
Both resistance and resistivity describe how difficult it is to make electrical current flow through a material, but unlike resistance, resistivity is an intrinsic property. This means that all pure copper wires (which have not been subjected to distortion of their crystalline structure etc.), irrespective of their shape and size, have the same resistivity, but a long, thin copper wire has a much larger resistance than a thick, short copper wire. Every material has its own characteristic resistivity. For example, rubber has a far larger resistivity than copper.
In a hydraulic analogy, passing current through a high-resistivity material is like pushing water through a pipe full of sand -- while passing current through a low-resistivity material is like pushing water through an empty pipe. If the pipes are the same size and shape, the pipe full of sand has higher resistance to flow. Resistance, however, is not solely determined by the presence or absence of sand. It also depends on the length and width of the pipe: short or wide pipes have lower resistance than narrow or long pipes.
The above equation can be transposed to get Pouillet's law (named after Claude Pouillet):

R = ρ · ℓ / A
The resistance of a given material is proportional to the length, but inversely proportional to the cross-sectional area. Thus resistivity can be expressed using the SI unit "ohm metre" (Ω·m) -- i.e. ohms divided by metres (for the length) and then multiplied by square metres (for the cross-sectional area).
For example, if A = 1 m² and ℓ = 1 m (forming a cube with perfectly conductive contacts on opposite faces), then the resistance of this element in ohms is numerically equal to the resistivity of the material it is made of in Ω·m.
Conductivity, σ, is the inverse of resistivity:

σ = 1 / ρ
Conductivity has SI units of siemens per metre (S/m).
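Pouillet's law and the reciprocal relation between resistivity and conductivity are easy to check numerically. The following is a minimal Python sketch (function names ours); the copper resistivity of about 1.68×10^-8 Ω·m and the wire dimensions are illustrative values.

```python
def resistance_ohms(resistivity_ohm_m: float, length_m: float, area_m2: float) -> float:
    """Pouillet's law: R = rho * length / cross-sectional area."""
    return resistivity_ohm_m * length_m / area_m2

def conductivity_s_per_m(resistivity_ohm_m: float) -> float:
    """Conductivity is the reciprocal of resistivity: sigma = 1 / rho."""
    return 1.0 / resistivity_ohm_m

rho_copper = 1.68e-8                              # ohm-metre, typical literature value
print(resistance_ohms(rho_copper, 10.0, 1e-6))    # 10 m of 1 mm^2 wire -> ~0.168 ohm
print(conductivity_s_per_m(rho_copper))           # ~6.0e7 S/m
```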
For less ideal cases, such as more complicated geometry, or when the current and electric field vary in different parts of the material, it is necessary to use a more general expression in which the resistivity at a particular point is defined as the ratio of the electric field to the density of the current it creates at that point:

ρ = E / J

in which E and J are, respectively, the magnitudes of the electric field and of the current density inside the conductor.
Conductivity is the inverse (reciprocal) of resistivity. Here, it is given by:

σ = J / E = 1 / ρ
For example, rubber is a material with large ρ and small σ -- because even a very large electric field in rubber makes almost no current flow through it. On the other hand, copper is a material with small ρ and large σ -- because even a small electric field pulls a lot of current through it.
As shown below, this expression simplifies to a single number when the electric field and current density are constant in the material.
When the resistivity of a material has a directional component, the most general definition of resistivity must be used. It starts from the tensor-vector form of Ohm's law, which relates the electric field inside a material to the electric current flow. This equation is completely general, meaning it is valid in all cases, including those mentioned above. However, this definition is the most complicated, so it is only directly used in anisotropic cases, where the more simple definitions cannot be applied. If the material is not anisotropic, it is safe to ignore the tensor-vector definition, and use a simpler expression instead.
Here, anisotropic means that the material has different properties in different directions. For example, a crystal of graphite consists microscopically of a stack of sheets, and current flows very easily through each sheet, but much less easily from one sheet to the adjacent one. In such cases, the current does not flow in exactly the same direction as the electric field. Thus, the appropriate equations are generalized to the three-dimensional tensor form:

E = ρ J    and    J = σ E
where the conductivity σ and resistivity ρ are rank-2 tensors, and electric field E and current density J are vectors. These tensors can be represented by 3×3 matrices, the vectors with 3×1 matrices, with matrix multiplication used on the right side of these equations. In matrix form, the resistivity relation is given by:

[Ex]   [ρxx ρxy ρxz] [Jx]
[Ey] = [ρyx ρyy ρyz] [Jy]
[Ez]   [ρzx ρzy ρzz] [Jz]
Equivalently, resistivity can be given in the more compact Einstein notation:

Ei = ρij Jj
In either case, the resulting expression for each electric field component is:

Ex = ρxx Jx + ρxy Jy + ρxz Jz
Ey = ρyx Jx + ρyy Jy + ρyz Jz
Ez = ρzx Jx + ρzy Jy + ρzz Jz
Since the choice of the coordinate system is free, the usual convention is to simplify the expression by choosing an x-axis parallel to the current direction, so Jy = Jz = 0. This leaves:

ρxx = Ex / Jx,   ρyx = Ey / Jx,   ρzx = Ez / Jx
Conductivity is defined similarly:

Ji = σij Ej
Both resulting in:

Jx = σxx Ex + σxy Ey + σxz Ez
Jy = σyx Ex + σyy Ey + σyz Ez
Jz = σzx Ex + σzy Ey + σzz Ez
Looking at the two expressions, ρ and σ are the matrix inverse of each other. However, in the most general case, the individual matrix elements are not necessarily reciprocals of one another; for example, σxx may not be equal to 1/ρxx. This can be seen in the Hall effect, where ρxy is nonzero. In the Hall effect, due to rotational invariance about the z-axis, ρyy = ρxx and ρyx = −ρxy, so the relation between resistivity and conductivity simplifies to:

σxx = ρxx / (ρxx^2 + ρxy^2),   σxy = −ρxy / (ρxx^2 + ρxy^2)
If the electric field is parallel to the applied current, ρxy and ρxz are zero. When they are zero, one number, ρxx, is enough to describe the electrical resistivity. It is then written as simply ρ, and this reduces to the simpler expression.
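The tensor relations above can be illustrated numerically: the conductivity tensor is the matrix inverse of the resistivity tensor, yet the individual elements are not simple reciprocals when off-diagonal (Hall-like) terms are present. The Python sketch below uses NumPy and purely illustrative numbers.

```python
import numpy as np

# Resistivity tensor with a Hall-like off-diagonal term (illustrative values, ohm-metre)
rho = np.array([
    [1.0e-8, -2.0e-9, 0.0],
    [2.0e-9,  1.0e-8, 0.0],
    [0.0,     0.0,    1.0e-8],
])

sigma = np.linalg.inv(rho)          # conductivity tensor is the matrix inverse of rho

J = np.array([1.0e6, 0.0, 0.0])     # current density along x (A/m^2)
E = rho @ J                         # E_i = rho_ij * J_j
print(E)                            # a transverse E_y appears because rho_yx is non-zero

# The individual elements are not simple reciprocals: sigma_xx != 1 / rho_xx here
print(sigma[0, 0], 1.0 / rho[0, 0])
```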
Electric current is the ordered movement of electric charges. These charges are called current carriers. In metals and semiconductors, electrons are the current carriers; in electrolytes and ionized gases, positive and negative ions. In the general case, the current density of one carrier is determined by the formula:

j = n q ⟨v⟩

where n is the density of charge carriers (the number of carriers in a unit volume), q is the charge of one carrier, and ⟨v⟩ is the average speed of their movement. In the case where the current consists of many carriers,

j = Σi ji

where ji is the current density of the i-th carrier.
According to elementary quantum mechanics, an electron in an atom or crystal can only have certain precise energy levels; energies between these levels are impossible. When a large number of such allowed levels have close-spaced energy values - i.e. have energies that differ only minutely - those close energy levels in combination are called an "energy band". There can be many such energy bands in a material, depending on the atomic number of the constituent atoms and their distribution within the crystal.
The material's electrons seek to minimize the total energy in the material by settling into low energy states; however, the Pauli exclusion principle means that only one can exist in each such state. So the electrons "fill up" the band structure starting from the bottom. The characteristic energy level up to which the electrons have filled is called the Fermi level. The position of the Fermi level with respect to the band structure is very important for electrical conduction: Only electrons in energy levels near or above the Fermi level are free to move within the broader material structure, since the electrons can easily jump among the partially occupied states in that region. In contrast, the low energy states are completely filled with a fixed limit on the number of electrons at all times, and the high energy states are empty of electrons at all times.
Electric current consists of a flow of electrons. In metals there are many electron energy levels near the Fermi level, so there are many electrons available to move. This is what causes the high electronic conductivity of metals.
An important part of band theory is that there may be forbidden bands of energy: energy intervals that contain no energy levels. In insulators and semiconductors, the number of electrons is just the right amount to fill a certain integer number of low energy bands, exactly to the boundary. In this case, the Fermi level falls within a band gap. Since there are no available states near the Fermi level, and the electrons are not freely movable, the electronic conductivity is very low.
A metal consists of a lattice of atoms, each with an outer shell of electrons that freely dissociate from their parent atoms and travel through the lattice. This is also known as a positive ionic lattice. This 'sea' of dissociable electrons allows the metal to conduct electric current. When an electrical potential difference (a voltage) is applied across the metal, the resulting electric field causes electrons to drift towards the positive terminal. The actual drift velocity of electrons is typically small, on the order of magnitude of meters per hour. However, due to the sheer number of moving electrons, even a slow drift velocity results in a large current density. The mechanism is similar to transfer of momentum of balls in a Newton's cradle but the rapid propagation of an electric energy along a wire is not due to the mechanical forces, but the propagation of an energy-carrying electromagnetic field guided by the wire.
Most metals have electrical resistance. In simpler models (non quantum mechanical models) this can be explained by replacing electrons and the crystal lattice by a wave-like structure. When the electron wave travels through the lattice, the waves interfere, which causes resistance. The more regular the lattice is, the less disturbance happens and thus the less resistance. The amount of resistance is thus mainly caused by two factors. First, it is caused by the temperature and thus amount of vibration of the crystal lattice. Higher temperatures cause bigger vibrations, which act as irregularities in the lattice. Second, the purity of the metal is relevant as a mixture of different ions is also an irregularity. The small decrease in conductivity on melting of pure metals is due to the loss of long range crystalline order. The short range order remains and strong correlation between positions of ions results in coherence between waves diffracted by adjacent ions.
In metals, the Fermi level lies in the conduction band (see Band Theory, above) giving rise to free conduction electrons. However, in semiconductors the position of the Fermi level is within the band gap, about halfway between the conduction band minimum (the bottom of the first band of unfilled electron energy levels) and the valence band maximum (the top of the band below the conduction band, of filled electron energy levels). That applies for intrinsic (undoped) semiconductors. This means that at absolute zero temperature, there would be no free conduction electrons, and the resistance is infinite. However, the resistance decreases as the charge carrier density (i.e., without introducing further complications, the density of electrons) in the conduction band increases. In extrinsic (doped) semiconductors, dopant atoms increase the majority charge carrier concentration by donating electrons to the conduction band or producing holes in the valence band. (A "hole" is a position where an electron is missing; such holes can behave in a similar way to electrons.) For both types of donor or acceptor atoms, increasing dopant density reduces resistance. Hence, highly doped semiconductors behave metallically. At very high temperatures, the contribution of thermally generated carriers dominates over the contribution from dopant atoms, and the resistance decreases exponentially with temperature.
In electrolytes, electrical conduction happens not by band electrons or holes, but by full atomic species (ions) traveling, each carrying an electrical charge. The resistivity of ionic solutions (electrolytes) varies tremendously with concentration - while distilled water is almost an insulator, salt water is a reasonable electrical conductor. Conduction in ionic liquids is also controlled by the movement of ions, but here we are talking about molten salts rather than solvated ions. In biological membranes, currents are carried by ionic salts. Small holes in cell membranes, called ion channels, are selective to specific ions and determine the membrane resistance.
The concentration of ions in a liquid (e.g., in an aqueous solution) depends on the degree of dissociation of the dissolved substance, characterized by a dissociation coefficient α, which is the ratio of the concentration of ions n to the concentration of molecules of the dissolved substance N:

\[ \alpha = \frac{n}{N} \]

The specific electrical conductivity (σ) of a solution is equal to:

\[ \sigma = q\,(b_+ + b_-)\,\alpha N \]

where q is the modulus of the ion charge, b₊ and b₋ are the mobilities of positively and negatively charged ions, N is the concentration of molecules of the dissolved substance, and α is the coefficient of dissociation.
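As a minimal numerical sketch of the conductivity expression above, the values below (ion mobilities, concentration, full dissociation) are illustrative assumptions for a generic 1:1 strong electrolyte, not data from the text.

```python
# Minimal sketch of sigma = q * (b_plus + b_minus) * alpha * N for a 1:1 electrolyte.
# All numbers are illustrative assumptions.
q = 1.602e-19          # modulus of the ion charge, C (singly charged ions)
b_plus = 5.2e-8        # mobility of the positive ion, m^2/(V*s)
b_minus = 7.9e-8       # mobility of the negative ion, m^2/(V*s)
N = 6.0e26             # concentration of dissolved molecules, m^-3 (about 1 mol/L)
alpha = 1.0            # dissociation coefficient (fully dissociated)

sigma = q * (b_plus + b_minus) * alpha * N
print(f"specific conductivity ≈ {sigma:.1f} S/m")
```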
The electrical resistivity of a metallic conductor decreases gradually as temperature is lowered. In ordinary conductors, such as copper or silver, this decrease is limited by impurities and other defects. Even near absolute zero, a real sample of a normal conductor shows some resistance. In a superconductor, the resistance drops abruptly to zero when the material is cooled below its critical temperature. An electric current flowing in a loop of superconducting wire can persist indefinitely with no power source.
In 1986, researchers discovered that some cuprate-perovskite ceramic materials have much higher critical temperatures, and in 1987 one was produced with a critical temperature above 90 K (-183 °C). Such a high transition temperature is theoretically impossible for a conventional superconductor, so the researchers named these conductors high-temperature superconductors. Liquid nitrogen boils at 77 K, cold enough to activate high-temperature superconductors, but not nearly cold enough for conventional superconductors. In conventional superconductors, electrons are held together in pairs by an attraction mediated by lattice phonons.[clarification needed] The best available model of high-temperature superconductivity is still somewhat crude. There is a hypothesis that electron pairing in high-temperature superconductors is mediated by short-range spin waves known as paramagnons.[dubious ]
Plasmas are very good conductors and electric potentials play an important role.
The potential as it exists on average in the space between charged particles, independent of the question of how it can be measured, is called the plasma potential, or space potential. If an electrode is inserted into a plasma, its potential generally lies considerably below the plasma potential, due to what is termed a Debye sheath. The good electrical conductivity of plasmas makes their electric fields very small. This results in the important concept of quasineutrality, which says the density of negative charges is approximately equal to the density of positive charges over large volumes of the plasma (n_e ≈ ⟨Z⟩n_i), but on the scale of the Debye length there can be charge imbalance. In the special case that double layers are formed, the charge separation can extend some tens of Debye lengths.
The magnitude of the potentials and electric fields must be determined by means other than simply finding the net charge density. A common example is to assume that the electrons satisfy the Boltzmann relation:

\[ n_e \propto e^{\,e\Phi / k_B T_e} \]

Differentiating this relation provides a means to calculate the electric field from the density:

\[ \mathbf{E} = -\nabla\Phi = -\frac{k_B T_e}{e}\,\frac{\nabla n_e}{n_e} \]
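A minimal sketch of that procedure is below: given an assumed electron-density profile and temperature (both illustrative, not from the text), the electric field follows from the logarithmic density gradient.

```python
# Minimal sketch: electric field from the Boltzmann relation,
# E = -(k_B * T_e / e) * grad(n_e) / n_e.  Profile and temperature are
# illustrative assumptions.
import numpy as np

k_B = 1.381e-23       # Boltzmann constant, J/K
e = 1.602e-19         # elementary charge, C
T_e = 1.0e4           # electron temperature, K

x = np.linspace(0.0, 1.0, 201)                                    # position, m
n_e = 1.0e16 * (1.0 + 0.5 * np.exp(-((x - 0.5) ** 2) / 0.01))     # density, m^-3

E = -(k_B * T_e / e) * np.gradient(np.log(n_e), x)                # V/m
print(f"peak |E| ≈ {np.abs(E).max():.3f} V/m")
```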
It is possible to produce a plasma that is not quasineutral. An electron beam, for example, has only negative charges. The density of a non-neutral plasma must generally be very low, or the plasma must be very small in extent. Otherwise, the repulsive electrostatic force dissipates it.
In astrophysical plasmas, Debye screening prevents electric fields from directly affecting the plasma over large distances, i.e., greater than the Debye length. However, the existence of charged particles causes the plasma to generate, and be affected by, magnetic fields. This can and does cause extremely complex behavior, such as the generation of plasma double layers, an object that separates charge over a few tens of Debye lengths. The dynamics of plasmas interacting with external and self-generated magnetic fields are studied in the academic discipline of magnetohydrodynamics.
Plasma is often called the fourth state of matter after solids, liquids and gases. It is distinct from these and other lower-energy states of matter. Although it is closely related to the gas phase in that it also has no definite form or volume, it differs in a number of ways, including the following:
|Property||Gas||Plasma|
|Electrical conductivity||Very low: air is an excellent insulator until it breaks down into plasma at electric field strengths above 30 kilovolts per centimeter.||Usually very high: for many purposes, the conductivity of a plasma may be treated as infinite.|
|Independently acting species||One: all gas particles behave in a similar way, influenced by gravity and by collisions with one another.||Two or three: electrons, ions, protons and neutrons can be distinguished by the sign and value of their charge so that they behave independently in many circumstances, with different bulk velocities and temperatures, allowing phenomena such as new types of waves and instabilities.|
|Velocity distribution||Maxwellian: collisions usually lead to a Maxwellian velocity distribution of all gas particles, with very few relatively fast particles.||Often non-Maxwellian: collisional interactions are often weak in hot plasmas and external forcing can drive the plasma far from local equilibrium and lead to a significant population of unusually fast particles.|
|Interactions||Binary: two-particle collisions are the rule, three-body collisions extremely rare.||Collective: waves, or organized motion of plasma, are very important because the particles can interact at long ranges through the electric and magnetic forces.|
The degree of semiconductor doping makes a large difference in conductivity. Up to a point, more doping leads to higher conductivity. The conductivity of a solution of water is highly dependent on its concentration of dissolved salts and other chemical species that ionize in the solution. Electrical conductivity of water samples is used as an indicator of how salt-free, ion-free, or impurity-free the sample is; the purer the water, the lower the conductivity (the higher the resistivity). Conductivity measurements in water are often reported as specific conductance, relative to the conductivity of pure water at a stated reference temperature. An EC meter is normally used to measure conductivity in a solution. A rough summary is as follows:
[Table: resistivity ρ (Ω·m) of selected materials (carbon steel (1010), grain-oriented electrical steel, swimming pool water, wood (oven dry)); numeric values omitted]
The effective temperature coefficient varies with the temperature and purity level of the material. The 20 °C value is only an approximation when used at other temperatures. For example, the coefficient becomes lower at higher temperatures for copper, and the commonly specified value 0.00427 refers to a different reference temperature.
The extremely low resistivity (high conductivity) of silver is characteristic of metals. George Gamow tidily summed up the nature of the metals' dealings with electrons in his popular science book One, Two, Three...Infinity (1947):
The metallic substances differ from all other materials by the fact that the outer shells of their atoms are bound rather loosely, and often let one of their electrons go free. Thus the interior of a metal is filled up with a large number of unattached electrons that travel aimlessly around like a crowd of displaced persons. When a metal wire is subjected to electric force applied on its opposite ends, these free electrons rush in the direction of the force, thus forming what we call an electric current.
More technically, the free electron model gives a basic description of electron flow in metals.
Wood is widely regarded as an extremely good insulator, but its resistivity is sensitively dependent on moisture content, damp wood being a far worse insulator than oven-dry wood. In any case, a sufficiently high voltage - such as that in lightning strikes or some high-tension power lines - can lead to insulation breakdown and electrocution risk even with apparently dry wood.
The electrical resistivity of most materials changes with temperature. If the temperature T does not vary too much, a linear approximation is typically used:

\[ \rho(T) = \rho_0\,[1 + \alpha\,(T - T_0)] \]

where α is called the temperature coefficient of resistivity, T₀ is a fixed reference temperature (usually room temperature), and ρ₀ is the resistivity at temperature T₀. The parameter α is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, α is different for different reference temperatures. For this reason it is usual to specify the reference temperature with a suffix on α, and the relationship only holds in a range of temperatures around the reference. When the temperature varies over a large temperature range, the linear approximation is inadequate and a more detailed analysis and understanding should be used.
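A minimal sketch of this linear approximation follows; the reference resistivity and temperature coefficient are commonly quoted handbook values for copper at a 20 °C reference, included here only as an illustration.

```python
# Minimal sketch of rho(T) = rho_0 * [1 + alpha * (T - T_0)].
# rho_0 and alpha below are typical handbook values for copper (20 °C reference),
# used here for illustration.
def resistivity_linear(T_celsius, rho_0=1.68e-8, alpha=0.00393, T_0=20.0):
    """Approximate resistivity in ohm*m near the reference temperature."""
    return rho_0 * (1.0 + alpha * (T_celsius - T_0))

for T in (0.0, 20.0, 100.0):
    print(f"{T:6.1f} °C -> {resistivity_linear(T):.3e} Ω·m")
```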
In general, electrical resistivity of metals increases with temperature. Electron-phonon interactions can play a key role. At high temperatures, the resistance of a metal increases linearly with temperature. As the temperature of a metal is reduced, the temperature dependence of resistivity follows a power-law function of temperature. Mathematically the temperature dependence of the resistivity ρ of a metal is given by the Bloch-Grüneisen formula:

\[ \rho(T) = \rho(0) + A\left(\frac{T}{\Theta_R}\right)^{n} \int_0^{\Theta_R/T} \frac{x^{n}}{(e^{x}-1)(1-e^{-x})}\,dx \]

where ρ(0) is the residual resistivity due to defect scattering, A is a constant that depends on the velocity of electrons at the Fermi surface, the Debye radius and the number density of electrons in the metal, and Θ_R is the Debye temperature as obtained from resistivity measurements, which matches very closely the values of Debye temperature obtained from specific heat measurements. n is an integer that depends upon the nature of the dominant scattering interaction.
If more than one source of scattering is simultaneously present, Matthiessen's Rule (first formulated by Augustus Matthiessen in the 1860s) states that the total resistance can be approximated by adding up several different terms, each with the appropriate value of n.
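The sketch below combines a residual term with a Bloch-Grüneisen phonon term in the spirit of Matthiessen's rule and evaluates the integral numerically. The residual resistivity, prefactor A, and Debye temperature are illustrative assumptions, not fitted values for any particular metal.

```python
# Minimal sketch: rho(T) = rho_residual + Bloch-Gruneisen phonon term (n = 5),
# following Matthiessen's rule.  Parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

def rho_total(T, rho_residual=2.0e-10, A=1.0e-8, theta_R=320.0, n=5):
    integrand = lambda x: x**n / ((np.exp(x) - 1.0) * (1.0 - np.exp(-x)))
    integral, _ = quad(integrand, 0.0, theta_R / T)
    return rho_residual + A * (T / theta_R) ** n * integral

for T in (10.0, 77.0, 300.0):
    print(f"{T:6.1f} K -> {rho_total(T):.3e} Ω·m")
```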
As the temperature of the metal is sufficiently reduced (so as to 'freeze' all the phonons), the resistivity usually reaches a constant value, known as the residual resistivity. This value depends not only on the type of metal, but on its purity and thermal history. The value of the residual resistivity of a metal is decided by its impurity concentration. Some materials lose all electrical resistivity at sufficiently low temperatures, due to an effect known as superconductivity.
An investigation of the low-temperature resistivity of metals was the motivation for Heike Kamerlingh Onnes's experiments that led in 1911 to the discovery of superconductivity. For details see History of superconductivity.
At high metal temperatures, the Wiedemann-Franz law holds, relating the thermal conductivity κ to the electrical conductivity σ:

\[ \frac{\kappa}{\sigma} = \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^{2} T \]
In general, intrinsic semiconductor resistivity decreases with increasing temperature. The electrons are bumped to the conduction energy band by thermal energy, where they flow freely, and in doing so leave behind holes in the valence band, which also flow freely. The electric resistance of a typical intrinsic (non-doped) semiconductor decreases roughly exponentially with temperature, following an activation law of the form

\[ \rho \approx \rho_0\, e^{E_g / (2 k_B T)} \]

where E_g is the band-gap energy.
An even better approximation of the temperature dependence of the resistivity of a semiconductor is given by the Steinhart-Hart equation:

\[ \frac{1}{T} = A + B \ln \rho + C\,(\ln \rho)^{3} \]
where A, B and C are the so-called Steinhart-Hart coefficients.
This equation is used to calibrate thermistors.
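As a minimal sketch of that calibration step, the function below inverts the Steinhart-Hart form for a measured thermistor resistance. The coefficients are typical published values for a 10 kΩ NTC thermistor and are assumptions for illustration; in practice the relation is applied to the device's resistance R rather than to resistivity.

```python
# Minimal sketch: temperature from thermistor resistance via
# 1/T = A + B*ln(R) + C*ln(R)**3.  Coefficients are typical values for a
# 10 kOhm NTC thermistor, used here only as an example.
from math import log

def steinhart_hart_kelvin(R_ohms, A=1.129148e-3, B=2.34125e-4, C=8.76741e-8):
    lnR = log(R_ohms)
    return 1.0 / (A + B * lnR + C * lnR ** 3)

R = 10_000.0                                  # measured resistance, ohms
T = steinhart_hart_kelvin(R)
print(f"R = {R:.0f} Ω -> T ≈ {T:.2f} K ({T - 273.15:.2f} °C)")
```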
Extrinsic (doped) semiconductors have a far more complicated temperature profile. As temperature increases starting from absolute zero they first decrease steeply in resistance as the carriers leave the donors or acceptors. After most of the donors or acceptors have lost their carriers, the resistance starts to increase again slightly due to the reducing mobility of carriers (much as in a metal). At higher temperatures, they behave like intrinsic semiconductors as the carriers from the donors/acceptors become insignificant compared to the thermally generated carriers.
In amorphous (non-crystalline) semiconductors, conduction can also occur by charges quantum tunnelling between localized sites, a mechanism known as variable-range hopping, with a characteristic temperature dependence of the form

\[ \rho = A\, e^{\,T^{-1/n}} \]

where n = 2, 3, 4, depending on the dimensionality of the system.
When analyzing the response of materials to alternating electric fields (dielectric spectroscopy), in applications such as electrical impedance tomography, it is convenient to replace resistivity with a complex quantity called impedivity (in analogy to electrical impedance). Impedivity is the sum of a real component, the resistivity, and an imaginary component, the reactivity (in analogy to reactance). The magnitude of impedivity is the square root of sum of squares of magnitudes of resistivity and reactivity.
Conversely, in such cases the conductivity must be expressed as a complex number (or even as a matrix of complex numbers, in the case of anisotropic materials) called the admittivity. Admittivity is the sum of a real component called the conductivity and an imaginary component called the susceptivity.
An alternative description of the response to alternating currents uses a real (but frequency-dependent) conductivity, along with a real permittivity. The larger the conductivity is, the more quickly the alternating-current signal is absorbed by the material (i.e., the more opaque the material is). For details, see Mathematical descriptions of opacity.
Even if the material's resistivity is known, calculating the resistance of something made from it may, in some cases, be much more complicated than the formula above. One example is spreading resistance profiling, where the material is inhomogeneous (different resistivity in different places), and the exact paths of current flow are not obvious.
In cases like this, the formulas

\[ J = \sigma E, \qquad E = \rho J \]

must be replaced with

\[ \mathbf{J}(\mathbf{r}) = \sigma(\mathbf{r})\,\mathbf{E}(\mathbf{r}), \qquad \mathbf{E}(\mathbf{r}) = \rho(\mathbf{r})\,\mathbf{J}(\mathbf{r}) \]
where E and J are now vector fields. This equation, along with the continuity equation for J and the Poisson's equation for E, form a set of partial differential equations. In special cases, an exact or approximate solution to these equations can be worked out by hand, but for very accurate answers in complex cases, computer methods like finite element analysis may be required.
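As a very crude illustration of what such a numerical solution looks like, the sketch below relaxes the potential on a small 2-D grid with a spatially varying conductivity and a low-conductivity inclusion; the grid, conductivities, and boundary values are all illustrative assumptions, and a production calculation would use a proper finite-element code.

```python
# Crude sketch: relax div(sigma * grad V) = 0 on a 2-D grid with a fixed
# potential difference across the left/right edges.  Neighbor-weighted
# averaging is a rough discretization, adequate only as an illustration.
import numpy as np

N = 50
sigma = np.ones((N, N))
sigma[20:30, 20:30] = 0.05              # poorly conducting inclusion (assumed)

V = np.zeros((N, N))
V[:, 0], V[:, -1] = 1.0, 0.0            # applied potential difference

for _ in range(3000):                   # Jacobi-style relaxation sweeps
    w_up, w_dn = sigma[:-2, 1:-1], sigma[2:, 1:-1]
    w_lf, w_rt = sigma[1:-1, :-2], sigma[1:-1, 2:]
    num = (w_up * V[:-2, 1:-1] + w_dn * V[2:, 1:-1]
           + w_lf * V[1:-1, :-2] + w_rt * V[1:-1, 2:])
    V[1:-1, 1:-1] = num / (w_up + w_dn + w_lf + w_rt)
    V[:, 0], V[:, -1] = 1.0, 0.0        # keep boundary values fixed

g0, g1 = np.gradient(-V)                # components of E = -grad V (grid units)
J_mag = sigma * np.hypot(g0, g1)        # |J| = sigma * |E|
print(f"mean |J| over the sample ≈ {J_mag.mean():.4f} (arbitrary units)")
```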
In some applications where the weight of an item is very important, the product of resistivity and density is more important than absolute low resistivity - it is often possible to make the conductor thicker to make up for a higher resistivity, in which case a material with a low resistivity-density product (or equivalently a high conductivity-to-density ratio) is desirable. For example, for long-distance overhead power lines, aluminium is frequently used rather than copper (Cu) because it is lighter for the same conductance.
Silver, although it is the least resistive metal known, has a high density and performs similarly to copper by this measure, but is much more expensive. Calcium and the alkali metals have the best resistivity-density products, but are rarely used for conductors due to their high reactivity with water and oxygen (and lack of physical strength). Aluminium is far more stable. Toxicity excludes the choice of beryllium. (Pure beryllium is also brittle.) Thus, aluminium is usually the metal of choice when the weight or cost of a conductor is the driving consideration.
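A minimal sketch of that comparison is below. The resistivities (near 20 °C) and densities are commonly quoted handbook figures, included as assumptions for illustration rather than values taken from the table that follows.

```python
# Minimal sketch: rank conductor materials by resistivity * density.
# Resistivity (ohm*m, ~20 °C) and density (kg/m^3) are common handbook values.
materials = {
    "silver":    (1.59e-8, 10490),
    "copper":    (1.68e-8, 8960),
    "aluminium": (2.65e-8, 2700),
    "calcium":   (3.36e-8, 1550),
}

cu = materials["copper"][0] * materials["copper"][1]
for name, (rho, density) in sorted(materials.items(), key=lambda kv: kv[1][0] * kv[1][1]):
    product = rho * density
    print(f"{name:10s} rho*density = {product:.2e}  ({product / cu:.2f} x Cu)")
```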
[Table: resistivity × density relative to Cu, with approximate price (9 December 2018) by volume and by mass (USD per kg, and relative to Cu); data rows omitted]
When electrons are conducted through a metal, they interact with imperfections in the lattice and scatter. [...] Thermal energy produces scattering by causing atoms to vibrate. This is the source of resistance of metals.
Wolf–Rayet stars (often referred to as WR stars) are evolved, massive stars (over 20 solar masses initially) which are losing mass rapidly by means of a very strong stellar wind, with speeds up to 2000 km/s. While our own Sun loses approximately 10⁻¹⁴ solar masses every year, Wolf–Rayet stars typically lose 10⁻⁵ solar masses a year.
Wolf–Rayet stars are extremely hot, with surface temperatures in the range of 30,000 K to around 200,000 K. They are also highly luminous, from tens of thousands to several million times the bolometric luminosity of the Sun, although not exceptionally bright visually since most of their output is in far ultraviolet and even soft X-rays.
In 1867, using the 40 cm Foucault telescope at the Paris Observatory, astronomers Charles Wolf and Georges Rayet discovered three stars in the constellation Cygnus (now designated HD191765, HD192103 and HD192641) that displayed broad emission bands on an otherwise continuous spectrum. Most stars display absorption bands in the spectrum, as a result of overlying elements absorbing light energy at specific frequencies. The number of stars with emission lines is quite low, so these were clearly unusual objects.
The nature of the emission bands in the spectra of a Wolf–Rayet star remained a mystery for several decades. Edward C. Pickering theorized that the lines were caused by an unusual state of hydrogen, and it was found that this "Pickering series" of lines followed a pattern similar to the Balmer series when half-integral quantum numbers were substituted. It was later shown that the lines resulted from the presence of helium, a gas that was discovered in 1868.
By 1929, the width of the emission bands was being attributed to Doppler broadening, and hence that the gas surrounding these stars must be moving with velocities of 300–2400 km/s along the line of sight. The conclusion was that a Wolf–Rayet star is continually ejecting gas into space, producing an expanding envelope of nebulous gas. The force ejecting the gas at the high velocities observed is radiation pressure.
In addition to helium, emission lines of carbon, oxygen and nitrogen were identified in the spectra of Wolf–Rayet stars. In 1938, the International Astronomical Union classified the spectra of Wolf–Rayet stars into types WN and WC, depending on whether the spectrum was dominated by lines of nitrogen or carbon-oxygen respectively.
Wolf–Rayet stars are a normal stage in the evolution of very massive stars, in which strong, broad emission lines of helium and nitrogen ("WN" sequence) or helium, carbon, and oxygen ("WC" sequence) are visible. Due to their strong emission lines they can be identified in nearby galaxies. About 500 Wolf–Rayets are catalogued in our own Milky Way Galaxy. This number has changed dramatically during the last few years as the result of photometric and spectroscopic surveys in the near-infrared dedicated to discovering this kind of object in the Galactic plane. Additionally, about 100 are known in the Large Magellanic Cloud, while only 12 have been identified in the Small Magellanic Cloud. Many more are known in the galaxies in the Local Group, such as M33, where 206 Wolf–Rayet stars are known, and M31, where 154 Wolf–Rayet stars are known.
Several astronomers, among them Rublev (1965) and Conti (1976), originally proposed that WR stars as a class are descended from massive O-stars in which the strong stellar winds characteristic of extremely luminous stars have ejected the unprocessed outer hydrogen-rich layers.
The characteristic emission lines are formed in the extended and dense high-velocity wind region enveloping the very hot stellar photosphere, which produces a flood of UV radiation that causes fluorescence in the line-forming wind region. This ejection process uncovers in succession first the nitrogen-rich products of CNO-cycle burning of hydrogen (WN stars), and later the carbon-rich layer due to He burning (WC and WO stars). WO stars, according to some authors, could be still more advanced in their evolution, showing oxygen-rich layers produced during carbon burning. Most of these stars are believed finally to progress to become supernovae of Type Ib or Type Ic. There is however one group of WR stars that have strong hydrogen lines in their spectra, indicating the existence of a hydrogen atmosphere. These are the WNh (and also WNha) stars, and they have not yet shed their hydrogen shells.
Some Wolf–Rayet stars of the carbon sequence ("WC"), especially those belonging to the latest types, are notable for their production of dust. Usually this takes place in those belonging to binary systems, as a product of the collision of the stellar winds of the two components, as in the famous binary WR 104; however, this process also occurs in single stars.
A few (roughly 10%) of the central stars of planetary nebulae are, despite their much lower (typically ~0.6 solar) masses, also observationally of the WR-type; i.e., they show emission line spectra with broad lines from helium, carbon and oxygen. Denoted [WR], they are much older objects descended from evolved low-mass stars and are closely related to white dwarfs, rather than to the very young, very massive stars that comprise the bulk of the WR class.
The majority of WR stars are now understood as being at a natural state in the evolution of massive stars (not counting the less common planetary nebula central stars). WR stars are not thought to form in low metallicity stars because they do not lose enough mass, instead proceeding directly to pair-instability or photodisintegration supernovae. Here are the likely sequences in the evolution of single stars of different masses without high rotation rates. Stars with very high rotation rates and stars in binary systems may skip some steps due to accelerated mass loss. Low-mass stars explode as supernovae before they lose enough mass to become Wolf–Rayet stars.
|Initial Mass (M☉)||Evolutionary Sequence||Supernova Type|
|90+||O → Of → WNLh (→ WNE) → WC||Ib (or IIn?)|
|60–90||O → Of/WNLh ↔ LBV → WNL → WC||Ib (or IIn?)|
|40–60||O → BSG → LBV ↔ WNL (→ WNE) → WC||Ib|
|(rarely) O → BSG → LBV ↔ WNL (→ WNE) → WC → WO||Ic|
|30–40||O → BSG → RSG (↔ LBV)→ WNE → WC||Ib|
|20–30||O (→ BSG) → RSG ↔ BSG (blue loops) → RSG||II-L (or IIb)|
|10–20||O → RSG||II-P|
|O||O-type main-sequence star|
|Of||evolved O-type showing N and He emission|
|Of/WNLh||"slash star", spectrum between Of and WNLh (quiescent LBV?)|
|LBV||luminous blue variable|
|WNL||"late" WN-class Wolf–Rayet star (about WN6 to WN9)|
|WNLh||WNL plus hydrogen lines|
|WNE||"early" WN-class Wolf–Rayet star (about WN2 to WN6)|
|WC||WC-class Wolf–Rayet star|
|WO||WO-class Wolf–Rayet star|
Higher-mass stars are much rarer, both because they form less often and because they only exist for a short time. This means that Wolf–Rayet stars themselves are very rare because they only form from the most massive main sequence stars, but also that type II-P supernovae are the most common amongst massive stars. Although Wolf–Rayet stars form from exceptionally massive stars, most of them are only moderately massive because they only form after losing the bulk of their outer layers. For example, γ2 Velorum A currently has a mass around 9 times that of the Sun, but began with a mass at least 40 times that of the Sun. An exception is the WNh stars, which are spectroscopically similar but are actually much less evolved stars that have only just started to expel their atmospheres. The most massive stars currently known are all WNh stars rather than O-type main sequence stars, an expected situation because such stars start to move away from the main sequence only a few thousand years after they form.
It is possible for a Wolf–Rayet star to progress to a "collapsar" stage in its death throes if it doesn't lose sufficient mass. This is when the core of the star collapses to form a black hole, either directly or by pulling in the surrounding ejected material. This is thought to be the precursor of a long gamma-ray burst.
Observations of supernova progenitors and the known WC stars do not currently support the idea that WC stars evolve from the highest mass stars. An alternate proposal is that they evolve from the highest mass red supergiants, above about 25 M☉, which have not been observed to be supernova progenitors. Most higher mass stars would then explode as supernovae either in a blue supergiant, LBV, or WN phase, before reaching the WC stage. WO stars are observed to be luminous enough to be the end point for stars of 60 M☉ or more, but no intermediate WC equivalents have been observed. It isn't clear if this is simply because of the very low numbers of this type of star, or because WO stars develop by a different mechanism.
Wolf–Rayet stars are united as a group by the prominent emission lines of helium, carbon, nitrogen, and oxygen, and by the comparative lack of any signs of hydrogen in their spectra. However, one group of WR stars does show significant hydrogen. These are the WNh, or WNLh, stars. They show little carbon or oxygen, so there are no "WCh" stars known, and they are generally relatively cool, hence the "late" designation WNLh. Types such as WN9h are the most common, although they have been identified as early as WN5h.
In contrast to the other WR stars, these are not highly evolved stars that have exhausted hydrogen in their cores. They show helium and nitrogen fusion products at their surface because of vigorous convection which dredges up these elements from the core even while the core is still burning hydrogen. This occurs only in the most massive stars, and most likely only in combination with rapid rotation. WNh stars are both initially more massive and have lost relatively little mass compared to other WR stars, hence they are amongst the most luminous stars known. They have similar spectra to the slash stars, which also show helium and nitrogen lines but are more obviously regular supergiant stars. There are also intermediate cases, and most likely a continuum, as massive stars rapidly evolve away from the main sequence.
The most visible example of a Wolf–Rayet star is Gamma 2 Velorum (γ² Vel), which is a naked eye star for those located south of 40 degrees northern latitude. Due to the exotic nature of its spectrum (bright emission lines in lieu of dark absorption lines) it is dubbed the "Spectral Gem of the Southern Skies". There are no other naked eye Wolf–Rayet stars.
The most massive star and probably most luminous star currently known, R136a1, is also a Wolf–Rayet star of the WNh type indicating it has only just started to evolve away from the main sequence. This type of star, which includes many of the most luminous and most massive stars, is very young and usually found only in the centre of the densest star clusters. Occasionally a runaway Wolf–Rayet star such as VFTS 682 is found outside such clusters, probably having been ejected from a multiple system or by interaction with other stars.
MODIS (the Moderate Resolution Imaging Spectroradiometer) is an observational instrument operated by the National Aeronautics and Space Administration (NASA).
It is the name of the sensor aboard 2 satellites, Terra and Aqua, that were launched in 1999 and 2002 respectively. Terra and Aqua follow a sun-synchronous near polar orbit, approximately 705 kilometers above the surface of the earth. Collectively, the satellites observe each point on the Earth's surface 3 to 4 times per day, though the interval between passes varies. Instrument data is used to monitor and process trends in land vegetation, ocean phytoplankton, and cloud temperature and altitude, among other features.3
One of the uses of MODIS data is the detection of active fires. The fire detection strategy is based on absolute detection of a fire (when the fire strength is sufficient to detect) and on detection relative to its background (to account for variability of the surface temperature and reflection by sunlight).4 Scientists at the University of Maryland developed the algorithm to identify areas where the infrared radiation being emitted is significantly different from that of surrounding areas. MODIS can routinely detect both flaming and smoldering fires ∼1000 square meters in size. Under very good observing conditions (e.g., near nadir, little or no smoke, relatively homogeneous land surface, etc.), flaming fires one-tenth this size can be detected. Under pristine (and extremely rare) observing conditions, even smaller flaming fires ∼50 square meters can be detected.5
- 3 NASA. (n.d.). Data. MODIS: Moderate Resolution Imaging Spectroradiometer. Retrieved December 16, 2020 from https://modis.gsfc.nasa.gov/data/
- 4 NASA. (n.d.). MODIS thermal anomalies/fire. MODIS: Moderate Resolution Imaging Spectroradiometer. Retrieved December 16, 2020 from https://modis.gsfc.nasa.gov/data/dataprod/mod14.php
- 5 University of Maryland. (n.d.). Active fire products. MODIS Active Fire and Burned Area Products. Retrieved November 6, 2019 from http://modis-fire.umd.edu/af.html; Giglio, L., Schroeder, W., Hall, J. V., & Justice, C. O. (2018, December). MODIS collection 6 active fire product user’s guide revision B. NASA. http://modis-fire.umd.edu/files/MODIS_C6_Fire_User_Guide_B.pdf |
Precipitation intensity is measured by a ground-based radar that bounces radar waves off of precipitation. The Local Radar base reflectivity product is a display of echo intensity (reflectivity) measured in dBZ (decibels). "Reflectivity" is the amount of transmitted power returned to the radar receiver after hitting precipitation, compared to a reference power density at a distance of 1 meter from the radar antenna. Base reflectivity images are available at several different elevation angles (tilts) of the antenna; the base reflectivity image currently available on this website is from the lowest "tilt" angle (0.5°).
The maximum range of the base reflectivity product is 143 miles (230 km) from the radar location. This image will not show echoes that are more distant than 143 miles, even though precipitation may be occurring at these greater distances. To determine if precipitation is occurring at greater distances, link to an adjacent radar. In addition, the radar image will not show echos from precipitation that lies outside the radar's beam, either because the precipitation is too high above the radar, or because it is so close to the Earth's surface that it lies beneath the radar's beam.
How Doppler Radar Works
NEXRAD (Next Generation Radar) can measure both precipitation and wind. The radar emits a short pulse of energy, and if the pulse strikes an object (raindrop, snowflake, bug, bird, etc.), the radar waves are scattered in all directions. A small portion of that scattered energy is directed back toward the radar.
This reflected signal is then received by the radar during its listening period. Computers analyze the strength of the returned radar waves, the time it took for them to travel to the object and back, and the frequency shift of the pulse. The ability to detect the "shift in the frequency" of the pulse of energy makes NEXRAD a Doppler radar. The frequency of the returning signal typically changes based upon the motion of the raindrops (or bugs, dust, etc.). This Doppler effect was named after the Austrian physicist Christian Doppler, who discovered it. You have most likely experienced the "Doppler effect" around trains.
As a train passes your location, you may have noticed the pitch in the train's whistle changing from high to low. As the train approaches, the sound waves that make up the whistle are compressed making the pitch higher than if the train was stationary. Likewise, as the train moves away from you, the sound waves are stretched, lowering the pitch of the whistle. The faster the train moves, the greater the change in the whistle's pitch as it passes your location.
The same effect takes place in the atmosphere as a pulse of energy from NEXRAD strikes an object and is reflected back toward the radar. The radar's computers measure the frequency change of the reflected pulse of energy and then convert that change to a velocity of the object, either toward or away from the radar. Information on the movement of objects toward or away from the radar can be used to estimate the speed of the wind. This ability to "see" the wind is what enables the National Weather Service to detect the formation of tornadoes, which, in turn, allows us to issue tornado warnings with more advance notice.
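The conversion the computers perform from frequency shift to radial velocity is straightforward; a minimal sketch is below. The 10 cm wavelength is a typical WSR-88D figure and the frequency shift is an illustrative assumption.

```python
# Minimal sketch: radial velocity from a two-way Doppler shift, f = 2*v/lambda.
# Wavelength (10 cm, typical S-band WSR-88D) and shift are illustrative values.
wavelength_m = 0.10
freq_shift_hz = 500.0

v_radial = freq_shift_hz * wavelength_m / 2.0          # m/s, toward the radar
print(f"radial velocity ≈ {v_radial:.1f} m/s ≈ {v_radial * 1.944:.1f} knots")
```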
The National Weather Service's 148 WSR-88D Doppler radars can detect most precipitation within approximately 90 mi of the radar, and intense rain or snow within approximately 155 mi. However, light rain, light snow, or drizzle from shallow cloud weather systems are not necessarily detected.
Radar Products Offered
Included in the NEXRAD data are the following products, all updated every 6 minutes if the radar is in Precipitation Mode or every 10 minutes if the radar is in Clear Air Mode (continue scrolling for further definitions)
- Base Reflectivity
- Composite Reflectivity
- Base Radial Velocity
- Storm Relative Mean Radial Velocity
- Vertically Integrated Liquid Water (VIL)
- Echo Tops
- Storm Total Precipitation
- 1 Hour Running Total Precipitation
- Velocity Azimuth Display (VAD) Wind Profile
Clear Air Mode
In this mode, the radar is in its most sensitive operation. This mode has the slowest antenna rotation rate which permits the radar to sample a given volume of the atmosphere longer. This increased sampling increases the radar's sensitivity and ability to detect smaller objects in the atmosphere than in precipitation mode. A lot of what you will see in clear air mode will be airborne dust and particulate matter. Also, snow does not reflect energy sent from the radar very well. Therefore, clear air mode will occasionally be used for the detection of light snow. In clear air mode, the radar products update every 10 minutes.
When rain is occurring, the radar does not need to be as sensitive as in clear air mode as rain provides plenty of returning signals. In Precipitation Mode, the radar products update every 6 minutes.
The dBZ Scale
The colors on the legend are the different echo intensities (reflectivity) measured in dBZ. "Reflectivity" is the amount of transmitted power returned to the radar receiver. Reflectivity covers a wide range of signals (from very weak to very strong), so for more convenient calculation and comparison a decibel (logarithmic) scale (dBZ) is used.
The dBZ values increase as the strength of the signal returned to the radar increases. Each reflectivity image you see includes one of two color scales. One scale represents dBZ values when the radar is in clear air mode (dBZ values from -28 to +28). The other scale represents dBZ values when the radar is in precipitation mode (dBZ values from 5 to 75).
The scale of dBZ values is also related to the intensity of rainfall. Typically, light rain is occurring when the dBZ value reaches 20. The higher the dBZ, the stronger the rain rate. Depending on the type of weather occurring and the area of the U.S., forecasters use a set of rain rates associated with the dBZ values. These values are estimates of the rainfall per hour, updated each volume scan, with rainfall accumulated over time. Hail is a good reflector of energy and will return very high dBZ values. Since hail can cause the rainfall estimates to be higher than what is actually occurring, steps are taken to prevent these high dBZ values from being converted to rainfall.
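Since the text notes that forecasters use different dBZ-to-rain-rate tables, the sketch below adopts one common convention, a Marshall-Palmer style Z = 200·R^1.6 relation, purely as an illustrative assumption.

```python
# Minimal sketch: dBZ -> rain rate using an assumed Z = 200 * R**1.6 relation
# (one common convention; operational tables vary by region and season).
def rain_rate_mm_per_hr(dbz, a=200.0, b=1.6):
    Z = 10.0 ** (dbz / 10.0)      # convert decibels back to reflectivity factor
    return (Z / a) ** (1.0 / b)   # invert Z = a * R**b

for dbz in (20, 35, 50):
    print(f"{dbz} dBZ -> ≈ {rain_rate_mm_per_hr(dbz):.1f} mm/h")
```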
Ground Clutter, Anomalous Propagation and Other False Echoes
Echoes from objects like buildings and hills appear in almost all radar reflectivity images. This "ground clutter" generally appears within a radius of 25 miles of the radar as a roughly circular region with a random pattern. A mathematical algorithm can be applied to the radar data to remove echoes where the echo intensity changes rapidly in an unrealistic fashion. These "No Clutter" images are available on the web site. Use these images with caution; ground clutter removal techniques can remove some real echoes, too.
Under highly stable atmospheric conditions (typically on calm, clear nights), the radar beam can be refracted almost directly into the ground at some distance from the radar, resulting in an area of intense-looking echoes. This "anomalous propagation" phenomenon (commonly known as AP) is much less common than ground clutter. Certain sites situated at low elevations on coastlines regularly detect "sea return", a phenomenon similar to ground clutter except that the echoes come from ocean waves.
Radar returns from birds, insects, and aircraft are also rather common. Echoes from migrating birds regularly appear during nighttime hours between late February and late May, and again from August through early November. Return from insects is sometimes apparent during July and August. The apparent intensity and areal coverage of these features are partly dependent on radio propagation conditions, but they usually appear within 30 miles of the radar and produce reflectivities of <30 dBZ.
However, during the peaks of the bird migration seasons, in April and early September, extensive areas of the south-central U.S. may be covered by such echoes. Finally, aircraft often appear as "point targets" far from the radar.
This is a display of echo intensity (reflectivity) measured in dBZ. The base reflectivity images in Precipitation Mode are available at four radar "tilt" angles, 0.5°, 1.45°, 2.40° and 3.35° (these tilt angles are slightly higher when the radar is operated in Clear Air Mode). A tilt angle of 0.5° means that the radar's antenna is tilted 0.5° above the horizon. Viewing multiple tilt angles can help one detect precipitation, evaluate storm structure, locate atmospheric boundaries, and determine hail potential.
The maximum range of the "short range" base reflectivity product is 124 nautical miles (about 143 miles) from the radar location. This view will not display echoes that are more distant than 124 nm, even though precipitation may be occurring at greater distances.
This display is of maximum echo intensity (reflectivity) measured in dBZ from all four radar "tilt" angles, 0.5°, 1.45°, 2.40° and 3.35°. This product is used to reveal the highest reflectivity in all echoes. When compared with Base Reflectivity, the Composite Reflectivity can reveal important storm structure features and intensity trends of storms.
The maximum range of the "short range" composite reflectivity product is 124 nm (about 143 miles) from the radar location. This view will not display echoes that are more distant than 124 nm, even though precipitation may be occurring at greater distances.
Base Radial Velocity
This is the velocity of the precipitation either toward or away from the radar (in a radial direction). No information about the strength of the precipitation is given. This product is available for just two radar "tilt" angles, 0.5° and 1.45°. Precipitation moving toward the radar has negative velocity (blues and greens). Precipitation moving away from the radar has positive velocity (yellows and oranges). Precipitation moving perpendicular to the radar beam (in a circle around the radar) will have a radial velocity of zero, and will be colored grey. The velocity is given in knots (10 knots = 11.5 mph).
Where the display is colored pink (coded as "RF" on the color legend on the left side), the radar detected an echo but was unable to determine the wind velocity, due to inherent limitations in the Doppler radar technology. RF stands for "Range Folding".
Storm Relative Mean Radial Velocity
This is the same as the Base Radial Velocity, but with the mean motion of the storm subtracted out. This product is available for four radar "tilt" angles, 0.5°, 1.45°, 2.40° and 3.35°.
Determining True Wind Direction
The true wind direction can be determined on a radial velocity plot only where the radial velocity is zero (grey colors). Where you see a grey area, draw an arrow from negative velocities (greens and blues) to positive velocities (yellows and oranges) so that the arrow is perpendicular to the radar beam. The radar beam can be envisioned as a line connecting the grey point with the center of the radar. To think of it another way, draw the wind direction line so that the wind will be blowing in a circle around the radar (no radial velocity, only tangential velocity).
In order to determine the wind direction everywhere on the plot, a second Doppler radar positioned in a different location would be required. Research programs frequently use such "dual Doppler" techniques to generate a full 3-D picture of the winds over a large area.
If you see a small area of strong positive velocities (yellows and oranges) right next to a small area of strong negative velocities (greens and blues), this may be the signature of a mesocyclone--a rotating thunderstorm. Approximately 40% of all mesocyclones produce tornadoes. 90% of the time, the mesocyclone (and tornado) will be spinning counter-clockwise.
If the thunderstorm is moving rapidly toward or away from you, the mesocyclone may be harder to detect. In these cases, it is better to subtract off the mean velocity of the storm center, and look at the Storm Relative Mean Radial Velocity.
Vertically Integrated Liquid Water (VIL)
VIL is the amount of liquid water that the radar detects in a vertical column of the atmosphere for an area of precipitation. High values are associated with heavy rain or hail. VIL values are computed for each 2.2x2.2 nm grid box for each elevation angle within 124 nm radius of the radar, then vertically integrated. VIL units are in kilograms per square meter--the total mass of water above a given area of the surface. VIL is useful for:
- Finding the presence and approximate size of hail (used in conjunction with spotter reports). VIL is computed assuming that all the echoes are due to liquid water. Since hail has a much higher reflectivity than a rain drop, abnormally high VIL levels are typically indicative of hail.
- Locating the most significant thunderstorms or areas of possible heavy rainfall.
- Predicting the onset of wind damage. Rapid decreases in VIL values frequently indicate wind damage may be occurring.
A handy VIL interpretation guide is available from the Oklahoma Climatological Survey.
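The text does not give the integration formula itself; the sketch below uses the widely cited Greene-Clark form VIL = Σ 3.44×10⁻⁶ [(Zᵢ + Zᵢ₊₁)/2]^(4/7) Δh (Z in mm⁶/m³, layer depth in meters) as an assumption, with an invented reflectivity profile.

```python
# Hedged sketch: vertically integrating a reflectivity profile into VIL using
# the commonly cited Greene-Clark relation (assumed here; not quoted in the text).
def vil_kg_per_m2(dbz_profile, layer_depth_m):
    Z = [10.0 ** (dbz / 10.0) for dbz in dbz_profile]       # dBZ -> mm^6/m^3
    vil = 0.0
    for z_lo, z_hi in zip(Z[:-1], Z[1:]):
        vil += 3.44e-6 * ((z_lo + z_hi) / 2.0) ** (4.0 / 7.0) * layer_depth_m
    return vil

profile_dbz = [45, 50, 55, 52, 40, 25]      # illustrative profile, one value per km
print(f"VIL ≈ {vil_kg_per_m2(profile_dbz, 1000.0):.1f} kg/m²")
```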
The Echo Tops image shows the maximum height of precipitation echoes. The radar will not report echo tops below 5,000 feet or above 70,000 feet, and will only report those tops that are at a reflectivity of 18.5 dBZ or higher. In addition, the radar will not be able to see the tops of some storms very close to the radar. For very tall storms close to the radar, the maximum tilt angle of the radar (19.5 degrees) is not high enough to let the radar beam reach the top of the storm. For example, the radar beam at a distance 30 miles from the radar can only see echo tops up to 58,000 feet.
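The geometric limitation described above can be checked with the standard 4/3-effective-earth-radius beam-height formula; this is textbook radar geometry rather than a formula quoted here, and it gives a comparable figure for the beam centerline at the maximum 19.5° tilt (the upper edge of the beam reaches somewhat higher).

```python
# Sketch: beam-centerline height using the standard 4/3 effective earth radius
# model (textbook radar geometry, used here as an assumption for a rough check).
from math import sin, sqrt, radians

def beam_height_ft(range_miles, elevation_deg):
    r = range_miles * 1609.34                   # slant range, m
    Re = (4.0 / 3.0) * 6.371e6                  # effective earth radius, m
    h = sqrt(r**2 + Re**2 + 2.0 * r * Re * sin(radians(elevation_deg))) - Re
    return h * 3.281                            # meters -> feet

print(f"19.5° tilt at 30 miles -> ≈ {beam_height_ft(30.0, 19.5):,.0f} ft")
```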
Echo top information is useful for identifying areas of strong thunderstorm updrafts. In addition, a sudden decrease in the echo tops inside a thunderstorm can signal the onset of a downburst--a severe weather event where the thunderstorm downdraft rushes down to the ground at high velocities and causes tornado-intensity wind damage.
Storm Total Precipitation
The Storm Total Precipitation image is of estimated accumulated rainfall, continuously updated, since the last one-hour break in precipitation. This product is used to locate flood potential over urban or rural areas, estimate total basin runoff and provide rainfall accumulations for the duration of the event.
1 Hour Running Total Precipitation
The 1 Hour Running Total Precipitation image is an estimate of one-hour precipitation accumulation on a 1.1x1.1 nm grid. This product is useful for assessing rainfall intensities for flash flood warnings, urban flood statements and special weather statements.
Velocity Azimuth Display (VAD) Wind Profile
The VAD Wind Profile image presents snapshots of the horizontal winds blowing at different altitudes above the radar. These wind profiles will be spaced 6 to 10 minutes apart in time, with the most recent snapshot at the far right. If there is no precipitation above the radar to bounce the beam off, an "ND" (Non-Detection) value will be plotted instead of a wind value in knots.
Altitudes are given in thousands of feet (KFT), and the time is GMT (5 hours ahead of EST). The colors of the wind barbs are coded by how confident the radar was that it measured a correct value. High values of the RMS (Root Mean Square) error (in knots) mean that the radar was not very confident that the wind it is displaying is accurate — there was a lot of change in the wind during the measurement.
Storm Attributes Table
The Storm Attributes Table is a NEXRAD derived product which attempts to identify storm cells.
The table contains the following fields:
- ID - This is the ID of the cell. The ID is also printed on the radar image to enable you to reference the table with storms on the radar image. If a triangle is shown in this field, it indicates NEXRAD detection of a possible tornadic cell (this "detection" is called the tornado vortex signature). If a diamond appears in this field, NEXRAD algorithms detect the storm is a mesocyclone. If a yellow-filled square appears, the storm has a 70% or greater chance of containing hail.
- Max DBZ - This is the highest reflectivity found within the storm cell.
- Top (ft) - Storm top elevation in feet.
- VIL (kg/m²) - Vertically Integrated Liquid water. This is an estimation of the mass of water suspended in the storm per square meter.
- Probability of severe hail - Probability that the storm contains severe hail.
- Probability of hail - Probability that the storm contains hail.
- Max hail size (in) - Maximum hail stone diameter.
- Speed (knots) - Speed of the storm movement in knots.
- Direction - Direction of storm movement.
On the radar image, arrows show the forecast movement of storm cells. Each tick mark indicates 20 minutes of time. The arrow length indicates where the cells are forecast to be in 60 minutes.
When choosing the top 5 or top 10 storms from the "Show Storms" select box, the top storms are based on Max DBZ.
This should not be used for protection of life and/or property. Weather Underground's NEXRAD radar product incorporates StrikeStar data. StrikeStar is a network of Boltek lightning detectors around the United States and Canada. These detectors all send their data to our central server where the StrikeStar software developed by Astrogenic Systems triangulates their data and presents the results in near real-time.
Please note: Because of errors in sensor calibration and large distances between some sensors, lightning data may display skewed or be missing in certain regions.
If you have a Boltek detector and run Astrogenic's NexStorm software then we would like to hear from you. There are a small number of simple criteria you need to fulfill to join the network. You can email us at firstname.lastname@example.org for further details.
Terminal Doppler Weather Radar (TDWR)
The Terminal Doppler Weather Radar (TDWR) is an advanced technology weather radar deployed near 45 of the larger airports in the U.S. The radars were developed and deployed by the Federal Aviation Administration (FAA) beginning in 1994, as a response to several disastrous jetliner crashes in the 1970s and 1980s caused by strong thunderstorm winds. The crashes occurred because of wind shear--a sudden change in wind speed and direction. Wind shear is common in thunderstorms, due to a downward rush of air called a microburst or downburst. The TDWRs can detect such dangerous wind shear conditions, and have been instrumental in enhancing aviation safety in the U.S. over the past 15 years. The TDWRs also measure the same quantities as our familiar network of 148 NEXRAD WSR-88D Doppler radars--precipitation intensity, winds, rainfall rate, echo tops, etc. However, the newer Terminal Doppler Weather Radars are higher resolution, and can "see" much finer detail close to the radar. This high-resolution data has generally not been available to the public until now. Thanks to a collaboration between the National Weather Service (NWS) and the FAA, the data for all 45 TDWRs is now available in real time via a free satellite broadcast (NOAAPORT). We're calling them "High-Def" stations on our NEXRAD radar page. Since thunderstorms are uncommon along the West Coast and Northwest U.S., there are no TDWRs in California, Oregon, Washington, Montana or Idaho.
Summary of the TDWR products
The TDWR products are very similar to those available for the traditional WSR-88D NEXRAD sites. There is the standard radar reflectivity image, available at each of three different tilt angles of the radar, plus Doppler velocity of the winds in precipitation areas. There are 16 colors assigned to the short range reflectivity data (same as the WSR-88Ds), but 256 colors assigned to the long range reflectivity data and all of the velocity data. Thus, you will see up to 16 times as many colors in these displays versus the corresponding WSR-88D display, giving much higher detail of storm features. The TDWRs also have storm total precipitation available in the standard 16 colors like the WSR-88D has, or in 256 colors (the new "Digital Precipitation" product). Note, however, that the TDWR rainfall products generally underestimate precipitation, due to attenuation problems (see below). The TDWRs also have such derived products as echo height, vertically integrated liquid water, and VAD winds. These are computed using the same algorithms as the WSR-88Ds use, and thus have no improvement in resolution.
Improved horizontal resolution of TDWRs
The TDWR is designed to operate at short range, near the airport of interest, and has a limited area of high-resolution coverage — just 48 nm, compared to the 124 nm of the conventional WSR-88Ds. The WSR-88Ds use a 10 cm radar wavelength, but the TDWRs use a much shorter 5 cm wavelength. This shorter wavelength allows the TDWRs to see details as small as 150 meters along the beam, at the radar's regular range of 48 nm. This is nearly twice the resolution of the NEXRAD WSR-88D radars, which see details as small as 250 meters at their close range (out to 124 nm). At longer ranges (48 to 225 nm), the TDWRs have a resolution of 300 meters — more than three times better than the 1000 meter resolution the WSR-88Ds have at their long range (124 to 248 nm). The angular (azimuth) resolution of the TDWR is nearly twice what is available in the WSR-88D. Each radial in the TDWR has a beam width of 0.55 degrees. The average beam width for the WSR-88D is 0.95 degrees. At distances within 48 nm of the TDWR, these radars can pick out the detailed structure of tornadoes and other important weather features (Figure 2). Extra detail can also be seen at long ranges, and the TDWRs should give us more detailed depictions of a hurricane's spiral bands as it approaches the coast.
View of a tornado taken by conventional WSR-88D NEXRAD radar (left) and the higher-resolution TDWR system (right). Using the conventional radar, it is difficult to see the hook-shape of the radar echo, while the TDWR clearly depicts the hook echo, as well as the Rear-Flank Downdraft (RFD) curling into the hook. Image credit: National Weather Service.
TDWR attenuation problems
The most serious drawback to using the TDWRs is the attenuation of the signal due to heavy precipitation falling near the radar. Since the TDWRs use the shorter 5 cm wavelength, which is closer to the size of a raindrop than the 10 cm wavelength used by the traditional WSR-88Ds, the TDWR beam is more easily absorbed and scattered away by precipitation. This attenuation means that the radar cannot "see" very far through heavy rain. It is often the case that a TDWR will completely miss seeing tornado signatures when there is heavy rain falling between the radar and the tornado. Hail causes even more trouble. Thus, it is best to use the TDWR in conjunction with the traditional WSR-88D radar to insure nothing is missed.
View of a squall line (left) taken using a TDWR (left column) and a WSR-88D system (right column). A set of three images going from top to bottom shows the squall line's reflectivity as it approaches the TDWR radar, moves over the TDWR, then moves away. Note that when the heavy rain of the squall line is over the TDWR, it can "see" very little of the squall line. On the right, we can see the effect a strong thunderstorm with hail has on a TDWR. The radar (located in the lower left corner of the image) cannot see much detail directly behind the heavy pink echoes that denote the core of the hail region, creating a "shadow". Image credit: National Weather Service.
TDWR range unfolding and aliasing problems
Another serious drawback to using the TDWRs is the high uncertainty of the returned radar signal reaching the receiver. Since the radar is geared towards examining the weather in high detail at short range, echoes that come back from features that lie at longer ranges suffer from what is called range folding and aliasing. For example, for a thunderstorm lying 48 nm from the radar, the radar won't be able to tell if the thunderstorm is at 48 nm, or some multiple of 48 nm, such as 96 or 192 nm. In regions where the software can't tell the distance, the reflectivity display will have black missing data regions extending radially towards the radar. Missing velocity data will be colored pink and labeled "RF" (Range Folded). In some cases, the range folded velocity data will be in the form of curved arcs that extend radially towards the radar.
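The range-folding behaviour follows from the textbook relations for maximum unambiguous range and Nyquist velocity; the sketch below uses an illustrative pulse repetition frequency and the TDWR's 5 cm wavelength class as assumptions.

```python
# Sketch: maximum unambiguous range and Nyquist velocity for a pulsed Doppler
# radar.  PRF is an illustrative assumption; 5 cm is the TDWR wavelength class.
c = 2.998e8                                   # speed of light, m/s

def max_unambiguous_range_km(prf_hz):
    return c / (2.0 * prf_hz) / 1000.0        # echoes beyond this fold in range

def nyquist_velocity_m_s(prf_hz, wavelength_m):
    return prf_hz * wavelength_m / 4.0        # velocities beyond this alias

prf, wavelength = 1600.0, 0.05
print(f"unambiguous range ≈ {max_unambiguous_range_km(prf):.0f} km "
      f"({max_unambiguous_range_km(prf) / 1.852:.0f} nm)")
print(f"Nyquist velocity  ≈ {nyquist_velocity_m_s(prf, wavelength):.1f} m/s")
```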
Typical errors seen in the velocity data (left) and reflectivity data (right) when range folding and aliasing are occurring. Image credit: National Weather Service.
TDWR ground clutter problems
Since the TDWRs are designed to alert airports of low-level wind shear problems, the radar beam is pointed very close to the ground and is very narrow. The lowest elevation angle for the TDWRs ranges from 0.1° to 0.3°, depending upon how close the radar is to the airport of interest. In contrast, the lowest elevation angle of the WSR-88Ds is 0.5°. As a result, the TDWRs are very prone to ground clutter from buildings, water towers, hills, etc. Many radars have permanent "shadows" extending radially outward due to nearby obstructions. The TDWR software is much more aggressive about removing ground clutter than the WSR-88D software is. This means that real precipitation echoes of interest will sometimes get removed.
For more TDWR information
For those of you who are storm buffs that will be regularly using the new TDWR data, you can download the three Terminal Doppler Weather Radar (TDWR) Build 3 Training modules. These three Flash files, totaling about 40 Mb, give one a detailed explanation of how TDWRs work, and their strengths and weaknesses.
Archived Historical Radar Data
The National Climatic Data Center offers free U.S. mosaics for the past 10 years.
Plymouth State College offers single-site radar images of all radar products going back several weeks. |
Learn IT math lessons are designed to help students understand Middle School/High School Algebra as well as improve problem-solving skills. The App makes math friendlier for students who struggle with math. Mathematical concepts are explained through animated examples and voice instruction. The examples are presented with step-by-step instructions. Checkpoints within each lesson allow students to measure their level of understanding. Learn IT: Lessons were developed by Learn IT in partnership with PITSCO Education. PITSCO's math and science products can be used by students to explore math concepts presented in the lessons.

Features: 100% offline. Realize the importance of units in the measurement system. Learn about the different units used to measure Length, Volume, and Weight. Learn about the metric base units, and the different prefixes used. Define dimensional analysis, and use it to convert between units. This unit also ensures that you know temperature conversion between Fahrenheit and Celsius and vice versa.

This unit of instruction includes the following 4 lessons:
Standard Units
-Recognize the standard base units.
-Identify the property each standard base unit measures.
-Choose the correct unit to measure an object.
Metric Units
-Identify the common units of the metric system.
-Use the basic prefixes of the metric system.
Dimensional Analysis
-Evaluate the arrangement of variables in a problem.
-Convert a single unit to a different unit within a system and between systems.
-Convert derived units.
Converting Fahrenheit and Celsius
-From Celsius to Fahrenheit.
-From Fahrenheit to Celsius.

Other Apps from Learn IT: Integer Decimal Operation Intro to Decimals Angles Angle Relationships Calculators Functions Matrices Logic and Sequences Data Graphs II Graphing Calculators Accuracy Linear Equations and Graphing Triangles Units Operations with Fractions II Polynomials Special Equations Quadratics Factoring Exponential Equations Data Graphs I Sets Probability System of Equations Exponents Circles

More Apps coming soon: Intro to Fractions Transformations Polygons Prisms and Pyramids Inequalities Real Number System Properties of Real Numbers Ratios and Percents Radicals Operations with Fractions I Equations

Our Website: http://www.learnitapps.com/
Size: 2.52 MB
Price: 1,83 €
Developed by Learn It Applications LLC
Day of release: 2013-05-12
Recommended age: 4+ |
3.2b: Protocols & Layers
A protocol is a set of rules that allow devices on a network to communicate with each other.
TCP / IP
(Transmission Control Protocol / Internet Protocol)
TCP / IP is actually two separate protocols that combine together.
TCP is a protocol that allows packets to be sent and received between computer systems.
It breaks the data into packets and reassembles them back into the original data at the destination.
IP is a protocol in charge of routing and addressing data packets. This ensures data packets are sent across networks to the correct destination.
It is also an addressing system - every device on a network is given a unique IP address so data packets can be sent to the correct computer system.
HTTP (Hypertext Transfer Protocol) is used to transfer web pages over the Internet so that users can view them in a web browser.
Web page URLs start with either HTTP or HTTPS.
HTTPS is a more secure version of HTTP that works with another protocol called SSL (Secure Sockets Layer) to transfer encrypted data.
You should see a padlock symbol in the URL bar if your connection to that website is secure.
FTP (File Transfer Protocol) is used to transfer files across a network. It is commonly used to upload or download files to/from a web server.
SMTP (Simple Mail Transfer Protocol) is a protocol used to send emails to a mail server and between mail servers.
POP (Post Office Protocol) and IMAP (Internet Message Access Protocol) are both protocols for receiving and storing emails from a mail server.
POP will delete an email once it has been downloaded to a device.
IMAP syncs the message with an email server so it can be accessed by different devices.
IP vs MAC Address
There are two versions of IP addressing currently used - IPv4 and IPv6.
IPv4 uses a 32-bit address that allows for over 4 billion unique addresses.
IPv4 uses a numeric dot-decimal notation like this: 184.108.40.206
4 billion unique addresses may sound like a lot, but there are nearly 8 billion people in the world. Therefore a newer version - IPv6 - was developed with a 128-bit address, represented in hexadecimal, that allows for a mind-boggling number of unique addresses (2^128, which is roughly 3.4 × 10^38).
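As a quick illustration of the difference in address-space size, the snippet below uses Python's built-in ipaddress module; the two addresses shown are illustrative values only:

import ipaddress

print(2 ** 32)     # 4,294,967,296 possible IPv4 addresses
print(2 ** 128)    # roughly 3.4 x 10^38 possible IPv6 addresses

v4 = ipaddress.ip_address("192.168.0.1")     # dot-decimal IPv4 notation
v6 = ipaddress.ip_address("2001:db8::1")     # hexadecimal IPv6 notation
print(v4.version, v6.version)                # 4 6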
A MAC address is a unique hexadecimal number assigned to each network interface card inside a networked device e.g. a router or a laptop.
While an IP address may change, the MAC address can’t be changed.
Networking standards are rules that allow computer systems to communicate across networks. Standards have been created to ensure devices can exchange data and work together.
4-Layer TCP/IP Model
The TCP/IP model is split into 4 layers. The model is used to visualise the different parts of a network as each of the four layers has a specific role.
Splitting a network design into layers is beneficial to programmers as it simplifies design, making it easier to modify and use.
Each layer has a certain purpose and is associated with different protocols.
The four layers are explained below:
Application Layer: allows humans and software applications to use the network, e.g. browsers (HTTP/HTTPS), email (SMTP) and file transfer (FTP).
Transport Layer: TCP breaks the data down into data packets. This layer makes sure the data is sent and received in the correct order and reassembled at the destination without errors.
Network Layer: IP is responsible for addressing and routing data packets. The optimal route for the data to take is calculated in this layer.
Also known as the 'Internet Layer'.
Link Layer (Network Access Layer): Ethernet sets out the format of data packets. This layer handles transmission errors and passes data to the physical layer.
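As a rough illustration of how the top two layers fit together in practice, the Python sketch below opens a TCP connection (Transport Layer) and sends a hand-written HTTP request over it (Application Layer). The host name example.com is an illustrative choice and the script needs a live internet connection to run:

import socket

# Transport Layer: open a TCP connection to a web server (illustrative host)
with socket.create_connection(("example.com", 80)) as s:
    # Application Layer: send an HTTP request over that TCP connection
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = s.recv(200)

print(reply.decode(errors="replace"))   # begins with an HTTP status line such as "HTTP/1.1 200 OK"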
3.2b - Protocols & Layers:
1. Describe each of the following protocols. It might be helpful to also draw an icon or small diagram for each one:
a. TCP
b. IP
c. HTTP & HTTPS
d. FTP
e. SMTP
f. POP3 & IMAP
2. State which protocol would be used in the following scenarios:
a. Transferring a music file to a friend over the internet.
b. Sending an email to a family member in America.
c. Using a webpage to enter a password securely.
d. Receiving an email from a bank.
3a. What are networking standards?
3b. Describe why network designs are split into layers.
4. Create a diagram similar to the one above and describe each layer of the TCP/IP Model.
5. Look at the statements below and name the layer that is being described:
a. This layer ensures data packets are sent and received correctly.
b. This layer checks for errors in transmission and sets out the data packet format.
c. This layer allows software like web browsers to interact with the network.
d. This layer uses addresses to ensure data packets take the correct route. |
Have you ever faced the challenge of not being able to remember a function or variable name while coding? You must have had to scroll to the top of the code to find that name so you could use it. You may have even used the same name for two different variables, which must have caused a lot of confusion. These problems can occur quite often, especially in complex code. Would it not be convenient if you could use a function only one time, execute a command with it, and then forget all about it? Fortunately, you can do just that by using the Python lambda (anonymous) function.
What are Lambda Functions in Python?
Python and other programming languages like Java, C++, and C# have small anonymous functions called lambdas, meant for one-time use. While we use the def keyword to define a function in Python, we use the lambda keyword to define an anonymous function. As the name suggests, the function that is defined using this keyword will have no name. Since we use the lambda keyword, we also call these functions lambda functions.
How to use Lambda Functions in Python?
If you want to use the Python lambda (anonymous) function, you will have to follow this syntax:
lambda [arguments] : expression
The lambda function can have any number of arguments, or no arguments at all, before the colon (:). When you call the lambda function, the expression after the ':' is evaluated and its value is returned. Let's understand this through an example.
square = lambda a : a * a
In the above example, we want to execute a program to find the square of a number. We start the function by using the lambda keyword and follow it by giving it an argument ‘a’. The expression in the above code (a*a) will return the square value of a when we call the function.
We assign the entire lambda function to the variable ‘square’.
When we call the function with a number, for example square(5), we get the square of that number (25) as the output.
The variable name becomes the function name in this example, and we can call it just as we would a regular function.
Characteristics of the Lambda Function
- The function can take any number of arguments (numbers, letters, words, etc.) but can only have one expression.
- The function can be used anywhere in the code where functions are required.
- They are syntactically restricted to a single expression. That means you cannot use statements such as for or while loops, or if/else blocks, inside a lambda function (a conditional expression, however, is allowed).
Use of Lambda Function in Python
# Python code for cube of a number

# Using def keyword
def cube(c):
    return c*c*c

# Using lambda keyword
l_cube = lambda c: c*c*c

# using a normally defined function
print(cube(6))

# using the lambda function
print(l_cube(6))
Consider the code above. Can you see a difference in code for calculating the cube of a number? In the above code, we show the difference between defining and calling a normal function and a lambda function.
Both the cube() function and the l_cube() function perform the same calculations as we expected. So, what is the difference?
Both the functions return the cube of a number. However, we had to define a named function (cube) using the def keyword in the first code and then pass a value in it. We also had to use the return keyword to return the result from where we called the function.
On the other hand, when using the lambda function, we did not have to use the return keyword as it always contains an expression that is returned.
Hence, we use the lambda function or the anonymous function to simplify the code into one-line expressions. We also use them when we require a function for a short period of time. The simplicity of the function also allows us to use it anywhere it is expected, without having to assign a variable to it.
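One common place this comes in handy is supplying a short, throwaway function to a built-in such as sorted(). The example below, with made-up data, uses a lambda as the sort key:

# Sorting a list of (name, score) pairs by score, using a lambda as the key
scores = [("Ana", 72), ("Ben", 91), ("Cal", 64)]      # illustrative data
print(sorted(scores, key=lambda pair: pair[1]))
# [('Cal', 64), ('Ana', 72), ('Ben', 91)]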
Using Built-in Functions like Filter() and Map() with Python Lambda (anonymous) Function
The Filter() Function
In Python, the filter() function takes in a function and a list (or any other iterable) as arguments. This gives us an easy way to filter the elements in a sequence. When a lambda function is passed to filter() along with the list to be evaluated, a new sequence is returned containing only the items for which the lambda returns True. Let's consider the following example:
# Python code to use filter() with lambda()
li = [5, 7, 22, 9, 54, 6, 77, 2, 73, 66]
final_li = list(filter(lambda x: (x % 2 != 0), li))
print(final_li)
In the code, a list of random numbers is assigned to the variable 'li'. We then call the filter() function and pass the lambda function to it as an argument, along with the list. The lambda returns True when a number leaves a remainder after division by 2 (that is, when the number is odd), so only the odd numbers are kept. We use the print function to print the new list.
The output will be:
[5, 7, 9, 77, 73]
The Map() Function
The map() function is similar to the filter() function in that it takes in a function and a list. However, while filter() keeps only the items for which the function returns True, map() returns a new list containing the value the function returns for each item. Let us look at an example:
# Python code to use map() with lambda()
# to double each item in a list
li = [5, 7, 25, 97, 82, 19, 45, 23, 73, 57]
final_li = list(map(lambda x: x*2, li))
print(final_li)
In the above example, we have first created a list of random numbers called li. We then pass the lambda function to the map() function. The expression in the lambda function multiplies every item in the list by 2 (doubles it), and the resulting list is assigned to final_li.
The output of the code will be:
[10, 14, 50, 194, 164, 38, 90, 46, 146, 114]
In conclusion, the Python lambda (anonymous) function is a no-name function declared in a single line. It can have only one expression and is used when a short-term function is required. It is defined using the lambda keyword and is similar to a regular function (defined by using the def keyword). We can also pass a lambda function as an argument to another function.
Question and Answers
1. What is lambda in Python?
Answer: In Python, lambda is a keyword that is used to define a lambda function. You do not have to assign a name to these lambda functions, and you can use them anywhere in the Python code. The lambda expressions in Python have their roots in lambda calculus, a formal system of mathematical logic developed by Alonzo Church in the 1930s.
2. Why is lambda used in Python?
Answer: Coding can sometimes be very confusing as we use a lot of variables and functions to execute tasks. The lambda function helps us reduce the code into one-line expressions. They work the same way as a normal function and give us the same results. However, they do not take up as much space and can be used for one-time executions.
3. How does lambda work in Python?
Answer: To use the lambda in Python, you will have to follow the following syntax:
lambda [arguments] : expression
The lambda keyword in the above syntax is used to define a lambda function. The function can take any number of arguments, which may be integers, strings, floats, etc. However, there can only be one expression after the ':'. That means you cannot use loops or other statements inside the lambda function.
4. Try to write a lambda code that adds 12 to any number passed in the function as an argument. Also, use the lambda function to multiply one argument ‘a’ with another argument ‘b’.
# Adding 12 to a number
num = lambda x: x + 12
print(num(15))

# Multiplying two numbers
number = lambda a, b: a * b
print(number(26, 7))
The output will be:
27
182
In the above code, we first use the lambda function to add 12 to any number that is given as an argument (in this example, the number is 15) and assign the name “num” to the function. When the function is called, the numbers are added and we get the output: 27.
In the next part of the code, we define two arguments (a and b) in the lambda function and write an expression to multiply them. We assign the name “number” to the function. When the function is called, the two numbers passed (26 and 7 in this example) are multiplied and we get the answer: 182 |
Student Robotics: A Fourth-grade Student Explores Virtual Robots with VoxCAD
With open source software and guided directions from Science Buddies, students can explore the ways in which robotics engineers test designs before choosing which designs to prototype. This student put her own robots to the test—on her computer—and walked away with a blue ribbon at a local fair.

With many schools offering extracurricular or after-school robotics clubs and programs, more and more students are exploring robotics engineering. Hands-on projects like building an ArtBot or BristleBot make it easy for families to tackle a robotics building activity at home with fairly easy-to-come-by supplies like toothbrush heads, coin cell batteries, and plastic cups.
While making a cute bot that shuttles about on toothbrush bristles can be empowering and rewarding for kids, designing effective robots involves more than just the mechanics of assembly. Being able to test different approaches to a robot design or its materials before investing time and money in building offers many advantages for engineers. If the goal is to create a robot creature that can move quickly from Point A to Point B, which design will work best?
Building three different working models, each with different approaches to mobility, is not always a practical approach given issues of time, materials, and money. If, instead, an engineer can do some preliminary testing and gauge the benefits or drawbacks of various design options, she may be able to save time and money and invest energy working on the design that shows the most promise for a given challenge or need. One approach to evaluating designs involves using computer software, like VoxCAD, to simulate various designs and conditions. VoxCAD is an open source, cross-platform physics simulation tool originally developed by Jonathan Hiller in the Creative Machines Lab at Cornell University.
In the field, using simulation software can be an important pre-build and testing step for robotics engineers. In the classroom, simulation software allows students to explore robotics without prior engineering experience. With a suite of VoxCAD Project Ideas at Science Buddies, students can experiment with robotics engineering at the virtual level, no circuits, batteries, or soldering required.
"The neat thing about VoxCAD is that kids can jump straight in to the deep end," says Dr. Sandra Slutz, Lead Staff Scientist at Science Buddies. "It takes a lot of mechanical engineering, electronics, and even programming know-how to create robots with different mobility strategies, but using VoxCAD, a student whose curiosity is sparked can start designing those robots in just a few minutes without all the time it takes to develop those skills."
With robotics simulation, exploring robotics and comparing designs doesn't require building multiple robots. Instead, students can get started right at their computers. After mocking up, visualizing, and testing their three dimensional ideas using VoxCAD, students who want to learn more about hands-on robotics engineering can explore circuit-based robot building projects in the robotics area at Science Buddies and move from virtual to real-world robotics design, building, and engineering.
Thinking 3D: Student Robotics
Laura was in 4th grade when her mom showed her a new VoxCAD project at Science Buddies. Laura, who wants to be a website developer in the future, was fascinated by the idea of designing three-dimensional robots and decided to give the introductory "Robot Race! Use a Computer to Design, Simulate, & Race Robots with VoxCAD" project a try.
"My mom showed me a VoxCAD video, and I became attached," says Laura. "I liked the way the creatures moved. I thought that was very interesting that the computer was able to bring them to life. And I wanted to learn how to do that."
When her mom told her about VoxCAD, Laura didn't have a science project assignment due. She chose to experiment with VoxCAD on her own. "I just thought it looked fun and wanted to try it," says the budding engineer, noting that robotics engineering wasn't an area of science she was already interested in or had explored before.
Laura enjoyed working with VoxCAD and trying different robot designs. In a video she created to accompany her project, Laura describes the movement of each design as the three-dimensional block-based robots move around on screen. She refers to the three models she created and tested as the "fastman snail," the "shimmier," and the "sidewinder," and her testing shows clear differences in the effectiveness of each. Using VoxCAD, she was able to bring the three robot designs to life on the screen and put them in motion to see how they would move and which would move farthest.
The best part of the experience, says Laura, was watching her creations move in the VoxCAD Physics Sandbox. "I learned to think in 3D," she adds. After finishing her project, Laura entered it in the science division of the Alameda County Fair where she won a first prize blue ribbon.
Congratulations to Laura!
To learn more about VoxCAD and to experiment with your own three-dimensional robot design and testing, see the following Project Ideas:
- Robot Race! Use a Computer to Design, Simulate, & Race Robots with VoxCAD
- Hard, Soft, or in Between? Changing Material Properties in Robot Design with VoxCAD *
- The World Is Your (Physics) Sandbox: Changing the Settings in a Robot Simulation with VoxCAD *
- Eco-Friendly Robots: Design the Most Energy-Efficient Racing Robot Using VoxCAD *
For suggestions about family robotics projects and activities and ways to engage your students with introductory robotics exploration, see: Bot Building for Kids and Their Parents: Celebrating Student Robotics, Create a Carnival of Robot Critters this Summer, Robot Engineering: Tapping the Artist within the Bot, and Family Robotics: Toothbrush Bots that Follow the Light.
Today, February 20, 2014 is Girl Day, part of Engineers Week. Don't miss the chance to make a difference in a student's life and future by taking the opportunity to introduce students to the world of engineering today and every day.
Electronegative: having a tendency to attract electrons.
What is another way to define electronegativity?
Electronegativity is a measure of how strongly atoms attract bonding electrons to themselves. Its symbol is the Greek letter chi: χ The higher the electronegativity, the greater an atom’s attraction for electrons.
What is electronegativity and give example?
An example of electronegativity is that chlorine has an electronegativity of 3.16 on the Pauling scale, compared to sodium's 0.93 (Pauling-scale values are dimensionless). Therefore, the electronegativity difference between chlorine and sodium is 2.23.
How do you remember electronegativity?
So the mnemonic is: FONCLBRISCH. Again, that’s FONCLBRISCH. This is the most electronegative elements on the periodic table starting with the most electronegative on the top, and decreasing in electronegativity as we work down.
What is the use of electronegativity?
We can use electronegativity as a convenient way to predict the polarization of covalent bonds (in other words, how ionic they are). At the extreme, we can use it to predict whether compounds are covalent or ionic, which suggests also that it correlates roughly with metallic or non-metallic character.
What is electronegativity and how can it be used in determining?
Electronegativity is a measure of the tendency of an atom to attract electrons (or electron density) towards itself. It determines how the shared electrons are distributed between the two atoms in a bond. The more strongly an atom attracts the electrons in its bonds, the larger its electronegativity.
Why is electronegativity important?
Because atoms do not exist in isolation and instead form molecular compounds by combining with other atoms, the concept of electronegativity is important because it determines the nature of bonds between atoms.
Which of the following is an example of electronegativity?
Hydrogen has an electronegativity of 2.0, while oxygen has an electronegativity of 3.5. The difference in electronegativities is 1.5, which means that water is a polar covalent molecule. This means that the electrons are drawn significantly towards the more electronegative element, but the atoms do not become ionized.
What are the types of electronegativity?
- Non Polar Bond. When two atoms with equal electronegativity are bonded together in a molecule, a non polar bond is formed.
- Polar Bond.
- Ionic bond.
- Electronegativity increases across the period.
- Electronegativity decreases down the group.
- Diagonal relationships in the periodic table.
How does electronegativity determine bond?
One way to predict the type of bond that forms between two elements is to compare the electronegativities of the elements. In general, large differences in electronegativity result in ionic bonds, while smaller differences result in covalent bonds.
What causes a polar bond?
A polar bond is a type of covalent bond. A bond between two or more atoms is polar if the atoms have significantly different electronegativities (>0.4). Polar bonds do not share electrons equally, meaning the negative charge from the electrons is not evenly distributed in the molecule. This causes a dipole moment.
Which of the three is most electronegative Why?
Fluorine is the most electronegative element on the periodic table. Its electronegativity value is 3.98. Cesium is the least electronegative element. Its electronegativity value is 0.79.
What factors affect electronegativity?
Electronegativity is a measure of the tendency of an atom to attract a bonding pair of electrons. An atom’s electronegativity is affected by both its atomic number and the distance at which its valence electrons reside from the charged nucleus.
What is the formula of electronegativity?
By doing some careful experiments and calculations, Pauling came up with a slightly more sophisticated equation for the relative electronegativities of two atoms in a molecule: EN(X) – EN(Y) = 0.102 Δ^(1/2), where Δ is the difference (in kJ/mol) between the actual X–Y bond dissociation energy and the average of the X–X and Y–Y bond energies.
Who discovered electronegativity?
electronegativity, in chemistry, the ability of an atom to attract to itself an electron pair shared with another atom in a chemical bond. The commonly used measure of the electronegativities of chemical elements is the electronegativity scale derived by Linus Pauling in 1932.
What is a non polar bond?
A non-polar covalent bond is a type of chemical bond that is formed when electrons are shared equally between two atoms. In such a bond, the two atoms attract the shared electrons equally. The covalent bond is termed nonpolar because the difference in electronegativity between the atoms is negligible.
Is H2O polar or nonpolar?
Water (H2O), like hydrogen fluoride (HF), is a polar covalent molecule.
Why does electronegativity increase up a group?
Electronegativity increases across a period because the number of charges on the nucleus increases. That attracts the bonding pair of electrons more strongly.
Is co2 polar or nonpolar?
Carbon dioxide is a linear molecule while sulfur dioxide is a bent molecule. Both molecules contain polar bonds (see bond dipoles on the Lewis structures below), but carbon dioxide is a nonpolar molecule while sulfur dioxide is a polar molecule.
How does electronegativity affect polarity?
Atoms that are high in EN tend to take electrons and atoms low in EN tend to give up electrons. So, higher electronegativity helps atoms take more control over shared electrons creating partial negative regions and partial positive regions which result in dipoles that cause polarity.
How do you determine whether a bond is polar or nonpolar?
Although there are no hard and fast rules, the general rule is if the difference in electronegativities is less than about 0.4, the bond is considered nonpolar; if the difference is greater than 0.4, the bond is considered polar.
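As a rough sketch of that rule in Python, the function below classifies a bond from the electronegativity difference. The 0.4 boundary is the one quoted above; the 1.7 cutoff for ionic bonds is a commonly quoted textbook value and is an assumption here, since exact thresholds vary from source to source:

def bond_type(en_a, en_b, ionic_cutoff=1.7):
    # Classify a bond from the electronegativity difference (rule of thumb only)
    diff = abs(en_a - en_b)
    if diff < 0.4:
        return "nonpolar covalent"
    elif diff < ionic_cutoff:
        return "polar covalent"
    return "ionic"

print(bond_type(2.0, 3.5))    # H and O (difference 1.5): polar covalent, as in the water example
print(bond_type(0.93, 3.16))  # Na and Cl (difference 2.23): ionic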
What makes a molecule polar and nonpolar?
Polar molecules occur when there is an electronegativity difference between the bonded atoms. Nonpolar molecules occur when electrons are shared equal between atoms of a diatomic molecule or when polar bonds in a larger molecule cancel each other out.
What is the difference between polar and nonpolar bond?
A non-polar covalent bond is a bond in which the electron pair is shared equally between the two bonded atoms, while a polar covalent bond is a bond in which the electron pair is shared unequally between the two bonded atoms. Polar bonds are caused by differences in electronegativity.
Is oxygen polar or nonpolar?
Oxygen is nonpolar. The molecule is symmetric. The two oxygen atoms pull on the electrons by exactly the same amount.
What is the meaning of nonpolar?
Definition of nonpolar : not polar especially : consisting of molecules not having a dipole a nonpolar solvent. |
A lunar rover or Moon rover is a space exploration vehicle (rover) designed to move across the surface of the Moon. Some rovers have been designed to transport members of a human spaceflight crew, such as the U.S. Apollo program's Lunar Roving Vehicle; others have been partially or fully autonomous robots, such as Soviet Lunokhods and Chinese Yutu. As of 2013, three countries have had rovers on the Moon: the Soviet Union, the United States and China.
- Main article: Lunokhod 1
Lunokhod 1 (Луноход) was the first of two unmanned lunar rovers landed on the Moon by the Soviet Union as part of its Lunokhod program, after a previous unsuccessful attempt to launch a rover, Lunokhod 0 (No.201), in 1969. The spacecraft which carried Lunokhod 1 was named Luna 17. The spacecraft soft-landed on the Moon in the Sea of Rains in November 1970. Lunokhod 1 was the first roving remote-controlled robot to land on another celestial body. Having worked for 11 months, Lunokhod 1 held the durability record for space rovers for more than 30 years, until a new record was set by the Mars Exploration Rovers.
- Main article: Lunokhod 2
Lunokhod 2 was the second of two unmanned lunar rovers landed on the Moon by the Soviet Union as part of the Lunokhod program. The Luna 21 spacecraft landed on the Moon and deployed the second Soviet lunar rover, Lunokhod 2, in January 1973. The objectives of the mission were to collect images of the lunar surface, examine ambient light levels to determine the feasibility of astronomical observations from the Moon, perform laser ranging experiments, observe solar X-rays, measure local magnetic fields, and study the soil mechanics of the lunar surface material. Lunokhod 2 was intended to be followed by Lunokhod 3 (No.205) in 1977, but that mission was cancelled.
Apollo Lunar Roving VehicleEdit
- Main article: Lunar Roving Vehicle
The Lunar Roving Vehicle (LRV) was a battery-powered four-wheeled rover used on the Moon during the last three missions of the American Apollo program (15, 16, and 17) during 1971 and 1972. The LRV could carry one or two astronauts, their equipment, and lunar samples.
- Main article: Yutu (rover)
Yutu is a Chinese lunar rover which launched on 1 December 2013 and landed on 14 December 2013 as part of the Chang'e 3 mission. It is China's first lunar rover, part of the second phase of the Chinese Lunar Exploration Program undertaken by China National Space Administration (CNSA). The lunar rover is called Yutu, or Jade Rabbit, a name selected in an online poll.
The rover encountered operational difficulties after the first 14-day lunar night, and was unable to move after the end of the second lunar night, yet it is still gathering some useful data.
The Yutu rover might be the world's first true hibernating robot on the moon.
Barcelona Moon Team roverEdit
Barcelona Moon Team is a team participating in the Google Lunar X Prize, with a planned launch date for the mission in 2015.
Astrobotic Technology roverEdit
Astrobotic Technology, a private company based in Pittsburgh, Pennsylvania, United States, plans to send a rover to the Moon in late 2017, as part of the Google Lunar X Prize.
Chang'e 4 roverEdit
- Main article: Chang'e 4
Chinese mission with a planned launch date before 2020.
- Main article: Chandrayaan-2
The Chandrayaan-2 mission is the first lunar rover mission by India, consisting of a lunar orbiter, a lunar lander, and a rover. The rover, weighing 50 kg, will have six wheels and will run on solar power. It will land near one of the poles and will operate for a year, roving up to 150 km at a maximum speed of 360 m/h. The proposed launch date of the mission is 2017.
- Main article: SELENE-2
Planned Japanese robotic mission to the Moon will include an orbiter, a lander and a rover. It is expected to be launched in 2017.
- Main article: ATHLETE
NASA's plans for future moon missions call for rovers that have a far longer range than the Apollo rovers. The All-Terrain Hex-Legged Extra-Terrestrial Explorer (ATHLETE) is a six-legged robotic lunar rover test-bed under development by the Jet Propulsion Laboratory (JPL). ATHLETE is a testbed for systems and is designed for use on the Moon. The system is in development along with NASA's Johnson and Ames Centers, Stanford University and Boeing. ATHLETE is designed, for maximum efficiency, to be able to both roll and walk over a wide range of terrains.
- Main article: Luna-Glob
Luna-Grunt rover (or Luna-28) is a proposed Russian lunar rover (lunokhod).
- Main article: Scarab (rover)
Scarab is a new generation lunar rover designed to assist astronauts, take rock and mineral samples, and explore the lunar surface. It is being developed by the Robotics Institute of Carnegie Mellon University, supported by NASA.
Space Exploration VehicleEdit
- Main article: Space Exploration Vehicle
The SEV is a proposed successor to the original Lunar Roving Vehicle from the Apollo missions. It combines a living module (a pressurized cabin containing a small bathroom and space for 2 astronauts, or 4 in case of emergency) with a small truck.
- ↑ Chang’e 3: The Chinese Rover Mission
- ↑ "China's Yutu Rover Might Be The World's First True Hibernating Robot" from InvestorSpot.com
- ↑ NASA's Space Exploration Vehicle (SEV)
- ↑ 5.0 5.1 "The ATHLETE Rover". JPL. 2010-02-25. http://www-robotics.jpl.nasa.gov/systems/system.cfm?System=11.
- ↑ "The ATHLETE Rover". NASA. 2010-02-25. http://www.nasa.gov/multimedia/imagegallery/image_feature_748.html.
- ↑ "NASA Day on the Hill". NASA. http://www.nasa.gov/multimedia/imagegallery/image_feature_1696.html.
- ↑ "Snakes, Rovers and Googly Eyes: New Robot Masters Take Many Forms". Wired. 2008-04-04. http://www.wired.com/gadgetlab/2008/04/snakes-rovers-a/.
This page uses Creative Commons Licensed content from Wikipedia (view authors).
When it comes to graphing linear equations, there are a few simple ways to do it. The simplest way is to find the intercept values for both the x-axis and the y-axis. Then just draw a line that passes through both of these points. That line is the solution of the equation and its visual representation. The most appropriate form for this is the slope-intercept form, since it is the easiest one to get the information about intercept values from. Problems that can occur with this approach to graphing linear equations happen when you have almost vertical or almost horizontal lines. In these cases the intercept values will be very high and very difficult to draw. The second approach is just a bit different. Instead of intercepts, choose low values for one of the coordinates (for example: x = 1, 2, 3…) and calculate the other coordinate using these values.
We will use an example to give you a little demonstration. Let us say you have to draw the graph of this equation:
y = 6x + 2
We will first calculate the y-intercept by setting the value of x at 0 and then calculate the x-intercept by setting the value of y at 0.
y = 6*0 + 2
y = 2
0 = 6x + 2
6x = -2 |:6
x = -1/3
So, the intercept value for the y-axis is 2 and for the x-axis is (- 1/3). The only thing left to do is to draw a line that passes through these two points and you found the solution.
Now, let us try the other approach to graphing linear equations. We will use the same equation as before, but this time we will calculate the value of one coordinate using predetermined values of the other coordinate. Since y changes much faster than x, we will insert simple whole-number values of x into the equation (in order to avoid dealing with fractions). We will choose x = 1 and x = 0 in this situation.
y = 6x + 2
If (x = 1),
y = 6*1 + 2
y = 8
If (x = 0),
y = 6*0 + 2
y = 2
So now we have two points – P1 (1, 8) and P2 (0,2). All that is left to do is to draw the line passing through these two points.
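If you would like to check the drawing with a computer, the short sketch below (assuming the matplotlib and numpy libraries are installed) plots y = 6x + 2 and marks the two points P1 (1, 8) and P2 (0, 2) found above:

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-1, 2, 100)
y = 6 * x + 2                           # the equation from the example

plt.plot(x, y, label="y = 6x + 2")
plt.scatter([1, 0], [8, 2], zorder=3)   # the two points P1(1, 8) and P2(0, 2)
plt.axhline(0, color="gray", linewidth=0.5)
plt.axvline(0, color="gray", linewidth=0.5)
plt.legend()
plt.show()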
The same procedure is used when graphing inequalities. The only difference is that the solution is not necessarily the line itself, but the area above or below the line; that depends on the inequality sign. The table below gives a short overview of the possible solutions, and a plotting sketch follows the table. The three dots represent the expression on the other side of the inequality sign.
|y > …||The area above the line|
|y < …||The area under the line|
|y ≥ …||The area above the line and the line itself|
|y ≤ …||The area under the line and the line itself|
|x > …||The area to the right of the line|
|x < …||The area to the left of the line|
|x ≥ …||The area to the right of the line and the line itself|
|x ≤ …||The area to the left of the line and the line itself|
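As an illustration of the first row of the table, the sketch below (again assuming matplotlib and numpy are installed) shades the region y > 6x + 2; the dashed line marks the boundary, which is not part of the solution for a strict inequality:

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-1, 2, 100)
y = 6 * x + 2

plt.plot(x, y, linestyle="--", label="boundary y = 6x + 2 (not included)")
plt.fill_between(x, y, y.max(), alpha=0.3, label="solution of y > 6x + 2")  # area above the line
plt.legend()
plt.show()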
These are the basic rules to graphing linear equations and linear inequalities. If you wish to practice graphing linear equations, please feel free to use the math worksheets below.
Graphing linear equations and inequalities exams for teachers
|Exam Name||File Size||Downloads||Upload date|
|Graphing linear equations – very easy||0 B||12195||January 1, 1970|
|Graphing linear equations – easy||0 B||13819||January 1, 1970|
|Graphing linear equations – medium||0 B||18540||January 1, 1970|
|Graphing linear equations – hard||0 B||9752||January 1, 1970|
|Graphing linear equations – very_hard||0 B||6675||January 1, 1970|
|Graphing linear inequalities – very easy||0 B||3421||January 1, 1970|
|Graphing linear inequalities – easy||0 B||2926||January 1, 1970|
|Graphing linear inequalities – medium||0 B||3583||January 1, 1970|
|Graphing linear inequalities – hard||0 B||2615||January 1, 1970|
Graphing linear equations and inequalities worksheets for students
|Worksheet Name||File Size||Downloads||Upload date|
|Graphing linear inequalities||0 B||3947||January 1, 1970|
|Graphing linear equations||0 B||7366||January 1, 1970| |
Here is my next puzzle! Sorry that it’s been greatly delayed.
I mentioned previously the problem of constructing the midpoint of a given circle, using only ruler and compass. One student solved it nearly immediately (congratulations!), so here I offer a more general problem.
Sylvy’s puzzle #5
Given a parabola in the plane, construct its vertex using straightedge-and-compass construction (sometimes called `ruler-and-compass’ construction).
I want to give a rough description of what it means to perform a `straightedge-and-compass' construction. We work in the Euclidean plane ℝ². You may be familiar with using a ruler and pair of compasses to construct geometrical drawings. For example, if you are given two points A and B, you can use the compasses to draw a circle with centre A and radius |AB|.
Further, you can then construct a regular hexagon inside that circle such that the edge-length of the hexagon is equal to the radius of the circle. If you’ve not done it before, try it! It’s quite fun!
The issue arises: what shapes can one draw in this way? And which points in the plane can arise as intersections of circles/lines that have been constructed in this way?
We start with two points A and B, as above. What can we do with them? We can draw the unique straight line which passes through A and B, we can draw a circle with centre A and radius |AB|, and we can draw a circle with centre B and radius |AB|.
These three `circle-lines' intersect in various points: draw them! The two circles intersect each other in two points; the line intersects the circle centred at A in two points, one of which is B; and the line intersects the circle centred at B in two points, one of which is A.
Anyway, we get four new points. Using these new points, together with A and B, in various ways, we can construct other circle-lines. Then we can consider the intersections of those. And so on. All of the points that we obtain in this way are said to be constructible. The key point is that there must be some finite sequence of points and circle-line constructions to get from A and B to the point in question.
For simplicity (and so that we don't have to worry about our choice of A and B), we usually set A to be the origin (0,0) and set B to be the point (1,0).
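To make this concrete, here is a small numerical sketch in Python (just an illustration; the puzzle itself should of course be solved geometrically) of the first two new constructible points: the intersections of the two circles of radius |AB| centred at A = (0,0) and B = (1,0).

import math

A = (0.0, 0.0)
B = (1.0, 0.0)
r = math.hypot(B[0] - A[0], B[1] - A[1])   # radius |AB| = 1

# The two circles of radius r centred at A and at B intersect where
# x^2 + y^2 = r^2 and (x - 1)^2 + y^2 = r^2, which forces x = 1/2.
x = 0.5
y = math.sqrt(r**2 - x**2)

print((x, y), (x, -y))   # (0.5, ±0.866...), i.e. (1/2, ±sqrt(3)/2)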
Deadline: 11am Thursday 3rd of December
(you can either send your solution by email or put a hand-written solution into the folder on my office door)
Prize: something chocolatey? I’m open to suggestions…
Solution: I’ll post the solution soon after the deadline.
The webpage for these puzzles is http://anscombe.sdf.org/puzzle.html |
As countries try to allocate their resources, they are faced with the questions of what, how, and for whom to produce. The answers to these questions determine what type of economic system a particular country has.
Market economies are also known as capitalist economies. In this type of economic system, the questions facing the economy are answered by the forces of demand and supply.
The price (market) mechanism manipulates the allocation of resources and tries to resolve the three fundamental questions of what, how and for whom to produce. In other words, resources are allocated through changes in relative prices. Adam Smith referred to this as the "invisible hand" of the market.
Everyone in a market economy acts in self-interest. The factors of production are owned by private individuals and companies. Government has a very minimal role in the production.
Advantages of market system
- Market system automatically responds and adjusts to the people’s wants
As we know, in a market system, the price of goods and services are determined by the forces of demand and supply. If consumers want a particular good or a service, they simply demand for it and the prices go up, which gives signal for the producers to produce more of that good. If producers can produce the required amount of that particular good, the price automatically comes down to normal. Likewise, if people no longer wants a particular good, they simply stop demanding for it, so that it is no longer profitable for producers to produce that good, so producers stop producing that good.
- Wider variety of goods and services
In a market system, producers compete with each other by offering wider variety of goods, therefore consumers have more choice, this may even lead to lower prices.
- Competition pushes businesses to be efficient: keeping costs down and production high.
The aim of firms in a market economy is to make as much profits as possible. In order to do this, the firms need to be more efficient. Therefore they often use new and better methods for production, this leads to lower costs and higher output.
- Government does not have to take decisions on basic economic questions
The market system relies on producers and consumers to decide on what, how and for whom to produce. Therefore it does not require the government to employ a group of people to take these decisions
The disadvantages of market system
1. Factors of production are not employed if it is not profitable
In a market system, producers do not produce a good or a service if it is not profitable. But sometimes it may be necessary to produce some goods even if it is not profitable. Therefore Market system will fail in this aspect.
2. Market system may not produce certain goods and services
Private firms in a market system will not be willing to provide certain public goods like street lights because it is almost impossible to charge any payment from the consumers.
3. Free market may encourage harmful goods
If there are people in the market who wish to buy dangerous goods like narcotic drugs, the market will be ready to buy it since private firms will be willing to provide anything that is profitable
4. Production may lead to negative externalities
When firms are always trying to maximize their profits, they may ignore external costs like damages to the environment.
5. Free market economy may increase the gap between the rich and the poor
When firms and individuals are able to produce and consume freely, it may make the rich even richer because they have more decision making power, and the poor may become poorer because they have less decision making power in the market. The market system allocates more goods and services to those consumers who have more money than others.
6. Cyclical fluctuations
Cyclical fluctuations are caused by the ever-changing demand and supply conditions. Sometimes, when producers anticipate a rise in demand for certain goods, they raise investment to produce more. But if demand actually does not rise, a general glut will occur, that is, stock accumulation. Consequently, the affected producers will have to reduce investment and dismiss workers to reduce costs. Both of these have an adverse effect on the economy as a whole. Less investment means lower production, while lower employment means less consumption, lower prices and profits. These cumulative effects lead to a lower national income.
Conclusion: It can be concluded that price mechanism determines allocation of resources as per what consumers want more, which initially sounds right. However, this system cannot be left to itself because of its various imperfections which undoubtedly necessitate government intervention.
|FEATURES||MARKET ECONOMY||COMMAND ECONOMY||MIXED ECONOMY|
|Ownership of property:||Private ownership||Government ownership||Private + Public (government) ownership|
|Motive or objective:||Profit maximization||Collective social welfare||Private Sector: Profit maximization. Public Sector: Social welfare.|
|Allocative mechanism:||Price mechanism ( demand and supply)||Rationing mechanism (central planning & quotas )||Private Sector: Price mechanism. Public sector: Rationing mechanism.|
|Freedom of choice :||Yes||No||Private Sector: Yes. Public sector: No.|
|Competition:||Yes||No||Private Sector: Yes. Public sector: No.|
|The role of the government :||Minimum government intervention in economic affairs. Only limited to maintaining law & order in the country.||All economic & noneconomic affairs are in the hands of government.||The government limits its role to the provision of necessary goods & services & to regulate the private sector for social welfare.|
|Variety of goods & services:||Yes||No||Private Sector: Yes. Public sector: No.|
|Quality of goods & services:||High quality||Usually poor quality||Private sector: High quality. Public sector: Usually poor quality.|
|Response to changes in demand (Consumer sovereignty):||Quick response to changes in consumers’ preferences.||Slow or no response.||Private Sector: Quick response. Public sector: Slow response.|
|Efficiency :||*Usually efficient allocation of resources because of existence of profit motive *Sometimes inefficient, e.g., in the case of private monopolies.||Inefficient allocation of resources because of the absence of a profit motive.||The inefficiency of the private sector is minimized by government policies.|
|Shortages & surpluses :||The price mechanism clears the market and there are no shortages or surpluses.||Central planning is unable to guess exact quantities demanded; shortages & surpluses are present.||Private sector: No shortages and surpluses Public sector: Shortages and surpluses are present.|
|Merit goods :||Under-production & under-consumption.||Socially optimum||*private sector under-provides *Missing markets of merit goods will be supplied through government provision.|
|Public goods :||Non-marketable and, therefore, missing||Provides public goods through government expenditure.||The public sector provides public goods.|
|Demerit goods :||Over-production & over-consumption.||Fewer or no demerit goods||The government discourages consumption by applying high taxes and other imposing other legislative actions.|
|Distribution of income & wealth:||Unequal distribution||Even distribution||Progressive taxation and welfare payments to the poor will reduce the disparity between the rich & poor.|
|Useless duplication of goods & services:||Yes||No||May occur in the private sector, but not in the public sector.|
|Negative externalities :||More than socially optimum level||Socially optimum||The government will regulate negative externalities through taxes and legislative impositions.|
|Necessities & luxury goods:||Fewer necessities & more luxury goods||More necessities & fewer luxury goods||The public sector will provide necessities, even to those who can’t pay for them.|
|Private monopolies:||Develop and exploit consumers by setting high prices||No private monopolies are exist in a command economy.||The government regulates private monopolies and protects consumers from exploitation.|
|Existence in real world:||No pure market economy.||No pure command economy.||All economies are mixed, but their proportion of private & public sectors vary from country to country.|
|Other terms:||Free economy, Capitalism, free market economy, laissez faire.||Communism, socialism, planned economy, centrally planned economy.||–|
Issues of Transition
Eastern European countries, such as Poland, Ukraine, or those forming the Soviet Union, were command economies from the late 1940s until the end of the 1980s. However, during the 1990s these economies started transforming themselves into market-oriented economies. These countries, while they were in a process of transition, were known as transition economies. This transition created a lot of problems for the people living in these nations, and also for the government itself.
Transition economies undergo a set of structural transformations intended to develop market-based institutions. These include economic liberalization, where prices are set by market forces rather than by a central planning organization. In addition to this trade barriers are removed, there is a push to privatize state-owned enterprises and resources, state and collectively run enterprises are restructured as businesses, and a financial sector is created to facilitate macroeconomic stabilization and the movement of private capital.
However, the transition process has its own pains and problems.
Problems of transition when central planning in an economy is reduced
- Sharp fall in GDP:
The first immediate problem that an economy may face will be a sharp fall in its GDP because of the reduction in its output. The fall in output of each East European country during the transition period was because of the fact that the state-owned businesses lost customers as the old network between businesses fell apart. This was because it was all based on state planning—the government told each firm which other firm would receive their output and they would be paid accordingly. They would also be told where it should buy their inputs. During the transition period firms had to actively seek customers, whereas before they didn’t need to do so.
- Lack of entrepreneurial abilities:
Entrepreneurs, due to a lack of experience, may not run firms efficiently. They may not choose the right production methods or output levels to maximize profits. Workers may be dogmatic in setting minimum wage levels, and may demand wages from firms that might be deemed as being excessive in relation to firm revenue. As a result, the relations between a firm and the labour that it employs will deteriorate and resource wastage will occur in nearly all industries in the economy.
- High rate of unemployment:
Due to the transition, now that there is no central planning, factories can produce whatever they want. They will start cutting back on investment because they do not want to take too many risks, which will lead to reduced demand in the economy. Because of this, suppliers will also lay off workers, leading to large-scale unemployment; the resulting fall in demand will further decrease the output of the country.
One of the strengths of a planned economy is that there is virtually no unemployment. However, during the transition many enterprises will be forced out of business because of the lack of funds and demand to support them.
Another reason was that some of the firms were forced to become efficient after the transition, due to the competition from other firms and foreign enterprises. The easiest way of becoming efficient for these firms was by laying-off workers and making the remaining workforce work harder and more productively. All the countries in the transition process found unemployment as one of the heaviest burdens they had to bear.
- Inefficient allocation of resources:
In a state economy the central planning body determines the output level of each industry and specifies the amount of resources to be allocated; consumer preferences are given little importance. As the market economy develops, demand for certain goods may fall, while for others it may rise rapidly. Certain industries may become inept and shut down, while market-oriented businesses will thrive but still face a shortage of resources. Therefore, the economy will produce inside the PPC as an inefficient allocation of resources prevails.
- Rise in the informal sectors:
The countries worst affected by the GDP fall may have a rise in their informal sectors. For example, in 1988, countries like Uzbekistan and Georgia had informal sectors that were greater than their formal sectors. This also affects GDP because the revenue earned in the informal sector is not included in a country’s GDP.
- High inflation:
The transformation is also associated with high inflation. A rise in price is almost inevitable if an economy moves towards a free-market system. In a market system resources are allocated by price. The free market price is inevitably higher than the old state price, since consumers have been rationed in the past, so demand will be greater. So, when a market system is introduced, the price will rise until demand equals supply. There is no shortage because some consumers have been priced out of the market. Higher prices can spark a wage-price spiral. Workers will react to higher prices of goods by demanding higher wages. If firms give higher wages, then they must make up for the cost by increasing the price of their goods, which will, in turn, gives rise to further wage demands. The government must then give firms money to pay the workers and to prevent them from going bankrupt. If they don’t, they will have protests, strikes, and civil unrest to deal with; whereas if they print too much money it will simply make it worthless.
- Process of privatization:
In a command economy land and capital are owned by the state, whereas in a market economy they are predominantly owned by the private sector. So the move from one type of economy to the other involves the sale of state assets to private individuals (privatization). This can be done in a number of ways, such as by giving property and capital away to the individuals and companies currently using them (e.g., tenants of council housing could be given their accommodation). The problem with this is that it is a very arbitrary and unfair way of sharing out the state-owned businesses and capital. This would divide society into the rich and poor. The state could also sell its assets to the highest bidder and use the proceeds to reduce tax or reduce government debt.
- Merit, demerit, and public goods:
In addition, as consumers gain power in the allocation of resources, certain goods which may be deemed good for society (merit goods) will be produced below the socially optimum level whilst those reckoned as harmful (demerit goods) will be produced in greater numbers. Due to the problems of non-excludability and non-rivalry, public goods, although necessary, will no longer be produced in the country.
- Negative externalities:
There may be a time gap before a new framework of government controls can be developed to offset the disadvantages of a market economy. In this period, firms seeking to keep their costs low may create pollution by disposing of their waste in an unsafe manner.
- Market imperfections:
Imperfect competition is also likely to develop, such as monopoly (a single dominant seller), with consequences for prices, output, quality, and consumer sovereignty. Transitioning economies are likely to face these problems, yet their impact can be reduced. The government primarily needs to ensure that its assets are distributed fairly among the people. In order to avoid large income gaps among the population, the government can maintain its social security safety net and introduce tax reforms (such as VAT).
- Corruption and consumer abuse
As the economy starts the transition, the legal system is usually not adequate to prevent corruption. Loopholes in the legal system mean that markets are not properly regulated to protect consumers. Market-driven economies will only develop when citizens are granted extensive property rights and can protect these rights through the legal process. This was largely absent in the former communist transition economies.
All the mathematics discussed in this article was known before Einstein's general theory of relativity.
For an introduction based on the specific physical example of particles orbiting a large mass in circular orbits, see Newtonian motivations for general relativity for a nonrelativistic treatment and Theoretical motivation for general relativity for a fully relativistic treatment.
Vectors and Tensors
Vectors
In mathematics, physics, and engineering, a Euclidean vector (sometimes called a geometric or spatial vector, or, as here, simply a vector) is a geometric object that has both a magnitude (or length) and a direction. A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "one who carries". The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers, such as addition, subtraction, multiplication, and negation, have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity.
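As a concrete illustration (not part of the original article), a vector can be represented in code as a tuple of Cartesian components; the Python sketch below shows addition, scalar multiplication, and magnitude.

```python
import math

# Illustrative sketch: a Euclidean vector as a tuple of Cartesian components.
def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    return tuple(c * a for a in v)

def magnitude(v):
    return math.sqrt(sum(a * a for a in v))

a = (3.0, 4.0)          # carries a point 3 units east and 4 units north
b = (-1.0, 2.0)
print(add(a, b))        # (2.0, 6.0): vector addition is commutative
print(scale(2.0, a))    # (6.0, 8.0)
print(magnitude(a))     # 5.0, the distance from A to B
```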
Tensors
A scalar, that is, a single number without direction, would be shown on a graph as a point, a zero-dimensional object. A vector, which has a magnitude and direction, would appear on a graph as a line, which is a one-dimensional object. A tensor extends this concept to additional dimensions. A two-dimensional tensor would be called a second-order tensor. This can be viewed as a set of related vectors, moving in multiple directions on a plane.
Applications
Vectors are fundamental in the physical sciences. They can be used to represent any quantity that has both a magnitude and a direction, such as velocity, the magnitude of which is speed. For example, the velocity 5 meters per second upward could be represented by the vector (0, 5) (in 2 dimensions with the positive y axis as 'up'). Another quantity represented by a vector is force, since it has a magnitude and direction. Vectors also describe many other physical quantities, such as displacement, acceleration, momentum, and angular momentum. Other physical vectors, such as the electric and magnetic field, are represented as a system of vectors at each point of a physical space; that is, a vector field.
Common applications of tensors in physics include:
- Electromagnetic tensor (or Faraday's tensor) in electromagnetism
- Finite deformation tensors for describing deformations and strain tensor for strain in continuum mechanics
- Permittivity and electric susceptibility are tensors in anisotropic media
- Stress-energy tensor in general relativity, used to represent momentum fluxes
- Spherical tensor operators are the eigenfunctions of the quantum angular momentum operator in spherical coordinates
- Diffusion tensors, the basis of Diffusion Tensor Imaging, represent rates of diffusion in biologic environments
Dimensions
In relativity, four-dimensional vectors, or four-vectors, are required. These four dimensions are length, height, width and time. In this context, a point would be an event, as it has both a location and a time. Similar to vectors, tensors in relativity require four dimensions. One example is the Riemann curvature tensor.
Coordinate transformation
In physics, as well as mathematics, a vector is often identified with a tuple, or list of numbers, which depend on some auxiliary coordinate system or reference frame. When the coordinates are transformed, for example by rotation or stretching, then the components of the vector also transform. The vector itself has not changed, but the reference frame has, so the components of the vector (or measurements taken with respect to the reference frame) must change to compensate. The vector is called covariant or contravariant depending on how the transformation of the vector's components is related to the transformation of coordinates. In general, contravariant vectors are "regular vectors" with units of distance (such as a displacement) or distance times some other unit (such as velocity or acceleration); covariant vectors, on the other hand, have units of one-over-distance, such as a gradient. If you change units (a special case of a change of coordinates) from meters to millimeters, a scale factor of 1/1000, a displacement of 1 m becomes 1000 mm, a contravariant change in numerical value. In contrast, a gradient of 1 K/m becomes 0.001 K/mm, a covariant change in value.
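A minimal numerical sketch of that last point (illustrative only, not from the article): under a change of units from metres to millimetres, contravariant components are multiplied by the scale factor, while covariant components are divided by it.

```python
# Change of units from metres to millimetres: scale factor k = 1000 mm per metre.
k = 1000.0

displacement_m = 1.0          # contravariant quantity: a displacement of 1 m
gradient_K_per_m = 1.0        # covariant quantity: a temperature gradient of 1 K/m

displacement_mm = displacement_m * k        # 1000.0 mm  (components scale with k)
gradient_K_per_mm = gradient_K_per_m / k    # 0.001 K/mm (components scale with 1/k)

print(displacement_mm, gradient_K_per_mm)
```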
Coordinate transformation matters because relativity states that there is no single correct reference point in the universe. On Earth, we use dimensions like north, east, and elevation, which are used throughout the entire planet. There is no such system for space. Without a clear reference grid, it becomes more accurate to describe the four dimensions as towards/away, left/right, up/down and past/future. As an example event, take the signing of the Declaration of Independence. To a modern observer on Mount Rainier looking east, the event is ahead, to the right, below, and in the past. However, to an observer in medieval England looking north, the event is behind, to the left, neither up nor down, and in the future. The event itself has not changed; the location of the observer has.
Curvilinear coordinates and curved spacetime
Curvilinear coordinates are coordinates in which the angles between axes can change from point to point. This means that rather than having a grid of straight lines, the grid instead has curvature.
A good example of this is the surface of the Earth. While maps frequently portray north, south, east and west as a simple square grid, that is not, in fact, the case. Instead, the longitude lines, running north and south, are curved, and meet at the north pole. This is because the Earth is not flat, but instead round.
In general relativity, gravity has curvature effects on the four dimensions of the universe. A common analogy is placing a heavy object on a stretched out rubber sheet which causes the rubber to bend downward. This creates a curved coordinate system around the object, much like an object in the universe creates a curved coordinate system. The mathematics here are much more complex than on Earth, as it results in four dimensions of curved coordinates instead of two as there are on Earth.
The interval in a high-dimensional space
Imagine our four-dimensional, curved spacetime is embedded in a larger N-dimensional flat space. Any true physical vector lies entirely in the curved physical space. In other words, the vector is tangent to the curved physical spacetime. It has no component normal to the four-dimensional, curved spacetime.
In the N dimensional flat space with coordinates the interval between neighboring points is
The relation between neighboring contravariant vectors: Christoffel symbols
The difference in a contravariant vector between two neighboring points in the surface, separated by an infinitesimal coordinate displacement, is
Now shift the vector to the point keeping it parallel to itself. In other words, we hold the components of the vector constant during the shift. The vector no longer lies in the surface because of curvature of the surface.
The shifted vector can be split into two parts, one tangent to the surface and one normal to surface, as
Christoffel symbol of the second kind
The Christoffel symbol of the second kind is defined as
This allows us to write
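The expression that follows has not survived in the text. For reference, a standard equivalent formula (a textbook result, not quoted from this article) gives the Christoffel symbols of the second kind directly in terms of the metric tensor and its partial derivatives:

```latex
\[
\Gamma^{\lambda}{}_{\mu\nu}
  = \tfrac{1}{2}\, g^{\lambda\sigma}
    \left( \partial_{\mu} g_{\sigma\nu}
         + \partial_{\nu} g_{\sigma\mu}
         - \partial_{\sigma} g_{\mu\nu} \right)
\]
```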
The constancy of the length of the parallel displaced vector
From Dirac:
The constancy of the length of the vector follows from geometrical arguments. When we split up the vector into tangential and normal parts ... the normal part is infinitesimal and is orthogonal to the tangential part. It follows that, to the first order, the length of the whole vector equals that of its tangential part.
The covariant derivative
The partial derivative of a vector with respect to a spacetime coordinate is composed of two parts, the normal partial derivative minus the change in the vector due to parallel transport.
The covariant derivative of a product is
Geodesics
Suppose we have a point zμ that moves along a track in physical spacetime. Suppose the track is parameterized with the quantity τ. The "velocity" vector that points in the direction of motion in spacetime is dzμ/dτ.
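The section's central result, the geodesic equation, is not reproduced above. In standard form (stated here for completeness, not quoted from this article) it says that a freely falling particle parallel-transports its own velocity vector:

```latex
\[
\frac{d^{2} z^{\mu}}{d\tau^{2}}
  + \Gamma^{\mu}{}_{\alpha\beta}\,
    \frac{dz^{\alpha}}{d\tau}\,\frac{dz^{\beta}}{d\tau} = 0
\]
```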
Definition
The curvature K of a surface is simply the angle through which a vector is turned as we take it around an infinitesimal closed path. For a two-dimensional Euclidean surface we have
This expression can be reduced to the commutation relation
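The commutation relation itself is missing from the text above. In standard notation (up to sign convention), it reads, and can be taken as the definition of the Riemann curvature tensor:

```latex
\[
\left( \nabla_{\mu}\nabla_{\nu} - \nabla_{\nu}\nabla_{\mu} \right) V^{\lambda}
  = R^{\lambda}{}_{\sigma\mu\nu}\, V^{\sigma}
\]
```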
Symmetries of the curvature tensor
The curvature tensor is antisymmetric in the last two indices
Bianchi identity
The following differential relation, known as the Bianchi identity, holds:
Ricci tensor and scalar curvature
The Ricci tensor is defined as the contraction
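The contractions themselves are missing above; in standard notation they are:

```latex
\[
R_{\mu\nu} = R^{\lambda}{}_{\mu\lambda\nu},
\qquad
R = g^{\mu\nu} R_{\mu\nu}
\]
```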
- Differentiable manifold
- Christoffel symbol
- Riemannian geometry
- Differential geometry and topology
- List of differential geometry topics
- General Relativity
- Gauge gravitation theory
- ^ Ivanov 2001
- ^ Heinbockel 2001
- ^ Latin: vectus, perfect participle of vehere, "to carry" (veho = "I carry"). For the historical development of the word vector, see "vector n.", Oxford English Dictionary, 2nd ed., Oxford University Press, 1989; and Jeff Miller, "Earliest Known Uses of Some of the Words of Mathematics", http://jeff560.tripod.com/v.html, retrieved 2007-05-25.
- P. A. M. Dirac (1996). General Theory of Relativity. Princeton University Press. ISBN 0-691-01146-X.
- Misner, Charles; Thorne, Kip S. & Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 0-7167-0344-0.
- Landau, L. D. and Lifshitz, E. M. (1975). Classical Theory of Fields (Fourth Revised English Edition). Oxford: Pergamon. ISBN 0-08-018176-7.
- R. P. Feynman, F. B. Moringo, and W. G. Wagner (1995). Feynman Lectures on Gravitation. Addison-Wesley. ISBN 0-201-62734-5.
- Einstein, A. (1961). Relativity: The Special and General Theory. New York: Crown. ISBN 0-517-02961-8.
Ideal gas law
The ideal gas law, also called the general gas equation, is the equation of state of a hypothetical ideal gas. It is a good approximation of the behavior of many gases under many conditions, although it has several limitations. It was first stated by Benoît Paul Émile Clapeyron in 1834 as a combination of the empirical Boyle's law, Charles's law, Avogadro's law, and Gay-Lussac's law. The ideal gas law is often written in the empirical form
pV = nRT,
where p, V and T are the pressure, volume and temperature; n is the amount of substance; and R is the ideal gas constant, which is the same for all gases. The law can also be derived from the microscopic kinetic theory, as was achieved (apparently independently) by August Krönig in 1856 and Rudolf Clausius in 1857.
Note that the ideal gas law by itself makes no statement about whether a gas heats or cools during compression or expansion. An ideal gas would not change temperature during a throttling (free) expansion, but most real gases, such as air, do; this is the Joule–Thomson effect.
The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation relates these simply in two main forms. The temperature used in the equation of state is an absolute temperature: the appropriate SI unit is the kelvin.
The most frequently introduced forms are pV = nRT = NkBT, where:
- p is the pressure of the gas,
- V is the volume of the gas,
- n is the amount of substance of gas (also known as number of moles),
- R is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant,
- kB is the Boltzmann constant,
- NA is the Avogadro constant,
- T is the absolute temperature of the gas.
In SI units, p is measured in pascals, V is measured in cubic metres, n is measured in moles, and T in kelvins (the Kelvin scale is a shifted Celsius scale, where 0.00 K = −273.15 °C, the lowest possible temperature). R has the value 8.314 J/(K·mol) ≈ 2 cal/(K·mol), or 0.0821 L·atm/(mol·K).
How much gas is present could be specified by giving the mass instead of the chemical amount of gas. Therefore, an alternative form of the ideal gas law may be useful. The chemical amount (n) (in moles) is equal to total mass of the gas (m) (in kilograms) divided by the molar mass (M) (in kilograms per mole):
By replacing n with m/M and subsequently introducing density ρ = m/V, we get:
Defining the specific gas constant Rspecific (sometimes written r) as the ratio R/M,
This form of the ideal gas law is very useful because it links pressure, density, and temperature in a unique formula independent of the quantity of the considered gas. Alternatively, the law may be written in terms of the specific volume v, the reciprocal of density, as
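As an illustrative numerical sketch (not part of the article), the specific-gas-constant form ρ = p/(Rspecific·T) can be evaluated directly in Python; the molar mass of dry air used below is an assumed example value, not a quantity taken from this text.

```python
# Minimal sketch of rho = p / (R_specific * T), with R_specific = R / M.
R_UNIVERSAL = 8.314          # J/(mol*K), value quoted in the article
M_DRY_AIR = 0.0289647        # kg/mol (assumed example value for dry air)

def density(pressure_pa, temperature_k, molar_mass_kg_per_mol):
    r_specific = R_UNIVERSAL / molar_mass_kg_per_mol   # J/(kg*K)
    return pressure_pa / (r_specific * temperature_k)  # kg/m^3

# Dry air at roughly standard sea-level conditions (101325 Pa, 288.15 K):
print(density(101325.0, 288.15, M_DRY_AIR))   # about 1.22 kg/m^3
```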
It is common, especially in engineering and meteorological applications, to represent the specific gas constant by the symbol R. In such cases, the universal gas constant is usually given a different symbol to distinguish it. In any case, the context and/or units of the gas constant should make it clear whether the universal or specific gas constant is being referred to.
In statistical mechanics the following molecular equation is derived from first principles: P = nkBT,
where P is the absolute pressure of the gas, n is the number density of the molecules (given by the ratio n = N/V, in contrast to the previous formulation in which n is the number of moles), T is the absolute temperature, and kB is the Boltzmann constant relating temperature and energy, given by kB = R/NA,
where NA is the Avogadro constant.
and since ρ = m/V = nμmu, we find that the ideal gas law can be rewritten as
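The rewritten form referred to in the last sentence did not survive extraction. Written out under the identification ρ = nμmu stated above (with μ the mean molecular mass in units of the atomic mass constant mu), it reads:

```latex
\[
P = \frac{\rho}{\mu\, m_{\mathrm{u}}}\, k_{\mathrm{B}} T
\]
```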
Combined gas law
Combining the laws of Charles, Boyle and Gay-Lussac gives the combined gas law, which takes the same functional form as the ideal gas law save that the number of moles is unspecified, and the ratio of pV to T is simply taken as a constant k:
where p is the pressure of the gas, V is the volume of the gas, T is the absolute temperature of the gas, and k is a constant. When comparing the same substance under two different sets of conditions, the law can be written as
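The two relations referred to above did not survive extraction; written out in the notation just given, they are:

```latex
\[
\frac{pV}{T} = k,
\qquad
\frac{p_{1} V_{1}}{T_{1}} = \frac{p_{2} V_{2}}{T_{2}}
\]
```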
Energy associated with a gas
According to assumptions of the kinetic theory of ideal gases, we assume that there are no intermolecular attractions between the molecules of an ideal gas. In other words, its potential energy is zero. Hence, all the energy possessed by the gas is in the kinetic energy of the molecules of the gas.
|Energy of gas||Mathematical expression|
|energy associated with one mole of a monatomic gas||(3/2)RT|
|energy associated with one gram of a monatomic gas||(3/2)RT/M, where M is the molar mass in grams per mole|
|energy associated with one molecule (or atom) of a monatomic gas||(3/2)kBT|
Applications to thermodynamic processes
The table below simplifies the ideal gas equation for particular processes, making the equation easier to solve using numerical methods.
A thermodynamic process is defined as a system moving from state 1 to state 2, where the state number is denoted by a subscript. As shown in the first column of the table, basic thermodynamic processes are defined such that one of the gas properties (P, V, T, S, or H) is constant throughout the process.
For a given thermodynamic process, in order to specify the extent of the process, one of the property ratios (listed under the column labeled "known ratio or delta") must be specified, either directly or indirectly. Also, the property for which the ratio is known must be distinct from the property held constant in the previous column (otherwise the ratio would be unity, and not enough information would be available to simplify the gas law equation).
In the final three columns, the properties (p, V, or T) at state 2 can be calculated from the properties at state 1 using the equations listed.
|Process||Constant||Known ratio or delta||p2||V2||T2|
|Isobaric process||Pressure||V2/V1||p2 = p1||V2 = V1(V2/V1)||T2 = T1(V2/V1)|
|||T2/T1||p2 = p1||V2 = V1(T2/T1)||T2 = T1(T2/T1)|
|Isochoric process||Volume||p2/p1||p2 = p1(p2/p1)||V2 = V1||T2 = T1(p2/p1)|
|||T2/T1||p2 = p1(T2/T1)||V2 = V1||T2 = T1(T2/T1)|
|Isothermal process||Temperature||p2/p1||p2 = p1(p2/p1)||V2 = V1/(p2/p1)||T2 = T1|
|||V2/V1||p2 = p1/(V2/V1)||V2 = V1(V2/V1)||T2 = T1|
|Isentropic process (reversible adiabatic)[a]||Entropy||p2/p1||p2 = p1(p2/p1)||V2 = V1(p2/p1)^(−1/γ)||T2 = T1(p2/p1)^((γ − 1)/γ)|
|||V2/V1||p2 = p1(V2/V1)^(−γ)||V2 = V1(V2/V1)||T2 = T1(V2/V1)^(1 − γ)|
|||T2/T1||p2 = p1(T2/T1)^(γ/(γ − 1))||V2 = V1(T2/T1)^(1/(1 − γ))||T2 = T1(T2/T1)|
|Polytropic process||pV^n||p2/p1||p2 = p1(p2/p1)||V2 = V1(p2/p1)^(−1/n)||T2 = T1(p2/p1)^((n − 1)/n)|
|||V2/V1||p2 = p1(V2/V1)^(−n)||V2 = V1(V2/V1)||T2 = T1(V2/V1)^(1 − n)|
|||T2/T1||p2 = p1(T2/T1)^(n/(n − 1))||V2 = V1(T2/T1)^(1/(1 − n))||T2 = T1(T2/T1)|
|Isenthalpic process (irreversible adiabatic)[b]||Enthalpy||p2 − p1||p2 = p1 + (p2 − p1)|| ||T2 = T1 + μJT(p2 − p1)|
|||T2 − T1||p2 = p1 + (T2 − T1)/μJT|| ||T2 = T1 + (T2 − T1)|
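As an illustration of how the isentropic rows of the table are used (a sketch, not part of the original table), the following Python function advances a state by a given pressure ratio using γ = 1.4, the typical value for air quoted in note a below:

```python
# Isentropic (reversible adiabatic) process, driven by a known pressure ratio:
#   T2 = T1 * (p2/p1)**((gamma - 1)/gamma),  V2 = V1 * (p2/p1)**(-1/gamma)
def isentropic_from_pressure_ratio(p1, v1, t1, pressure_ratio, gamma=1.4):
    p2 = p1 * pressure_ratio
    v2 = v1 * pressure_ratio ** (-1.0 / gamma)
    t2 = t1 * pressure_ratio ** ((gamma - 1.0) / gamma)
    return p2, v2, t2

# Compress air adiabatically and reversibly to 10x the initial pressure:
print(isentropic_from_pressure_ratio(p1=101325.0, v1=1.0, t1=300.0, pressure_ratio=10.0))
```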
^ a. In an isentropic process, system entropy (S) is constant. Under these conditions, p1V1^γ = p2V2^γ, where γ is defined as the heat capacity ratio, which is constant for a calorifically perfect gas. The value used for γ is typically 1.4 for diatomic gases like nitrogen (N2) and oxygen (O2) (and air, which is 99% diatomic). Also γ is typically 1.6 for monatomic gases like the noble gases helium (He) and argon (Ar). In internal combustion engines γ varies between 1.35 and 1.15, depending on the constituent gases and temperature.
^ b. In an isenthalpic process, system enthalpy (H) is constant. In the case of free expansion for an ideal gas, there are no molecular interactions, and the temperature remains constant. For real gasses, the molecules do interact via attraction or repulsion depending on temperature and pressure, and heating or cooling does occur. This is known as the Joule–Thomson effect. For reference, the Joule–Thomson coefficient μJT for air at room temperature and sea level is 0.22 °C/bar.
Deviations from ideal behavior of real gases
The equation of state given here (pV = nRT) applies only to an ideal gas, or as an approximation to a real gas that behaves sufficiently like an ideal gas. There are in fact many different forms of the equation of state. Since the ideal gas law neglects both molecular size and intermolecular attractions, it is most accurate for monatomic gases at high temperatures and low pressures. The neglect of molecular size becomes less important for lower densities, i.e. for larger volumes at lower pressures, because the average distance between adjacent molecules becomes much larger than the molecular size. The relative importance of intermolecular attractions diminishes with increasing thermal kinetic energy, i.e., with increasing temperatures. More detailed equations of state, such as the van der Waals equation, account for deviations from ideality caused by molecular size and intermolecular forces.
The empirical laws that led to the derivation of the ideal gas law were discovered with experiments that changed only two state variables of the gas and kept every other one constant.
All the possible gas laws that could have been discovered with this kind of setup are:
- PV = C1 (1), known as Boyle's law
- V/T = C2 (2), known as Charles's law
- V/N = C3 (3), known as Avogadro's law
- P/T = C4 (4), known as Gay-Lussac's law
- NT = C5 (5)
- P/N = C6 (6)
where "P" stands for pressure, "V" for volume, "N" for number of particles in the gas and "T" for temperature; C1 through C6 are not actual constants, but are treated as constants in this context because each equation requires only the parameters explicitly noted in it to change.
To derive the ideal gas law one does not need to know all 6 formulas: knowing 3 suitably chosen formulas allows the remaining ones to be derived, and 4 of them suffice to obtain the ideal gas law directly.
Since each formula only holds when only the state variables involved in that formula change while the others remain constant, we cannot simply use algebra and directly combine them all. For example, Boyle did his experiments while keeping N and T constant, and this must be taken into account.
Keeping this in mind, to carry the derivation on correctly, one must imagine the gas being altered by one process at a time. The derivation using 4 formulas can look like this:
at first the gas has parameters
- (7) After this process, the gas has parameters
Using then Eq. (5) to change the number of particles in the gas and the temperature,
- (8) After this process, the gas has parameters
Using then Eq. (6) to change the pressure and the number of particles,
- (9) After this process, the gas has parameters
- (10) After this process, the gas has parameters
Using simple algebra on equations (7), (8), (9) and (10) yields the result:
- PV/(NT) = kB, or PV = kBNT, where kB stands for Boltzmann's constant,
- which is known as the ideal gas law.
If you know, or have found by experiment, 3 of the 6 formulas, you can easily derive the rest using the same method explained above; but because each equation contains only 2 variables, not every choice of 3 formulas will do. For example, if you had Eqs. (1), (2) and (4) you would not be able to get any more, because combining any two of them only gives you the third. But if you had Eqs. (1), (2) and (3) you would be able to get all 6 equations without having to do the rest of the experiments, because combining (1) and (2) will yield (4), then (1) and (3) will yield (6), then (4) and (6) will yield (5), as will the combination of (2) and (3), as is explained in the following visual relation:
Where the numbers represent the gas laws numbered above.
If you were to use the same method described above on 2 of the 3 laws on the vertices of any triangle that has an "O" inside it, you would obtain the third.
then as we can choose any value for , if we set , Eq. (2´) becomes: (3´)
combining equations (1´) and (3´) yields , which is Eq. (4), of which we had no prior knowledge until this derivation.
The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made. Chief among these are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and that they undergo only elastic collisions with each other and with the sides of the container, collisions in which both linear momentum and kinetic energy are conserved.
The fundamental assumptions of the kinetic theory of gases imply that
Using the Maxwell–Boltzmann distribution, the fraction of molecules that have a speed in the range to is , where
and denotes the Boltzmann constant. The root-mean-square speed can be calculated by
Using the integration formula
it follows that
from which we get the ideal gas law:
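The intermediate results referred to in this passage did not survive extraction. A compressed version of the standard kinetic-theory chain, consistent with the assumptions stated above, is:

```latex
\[
pV = \tfrac{1}{3} N m\, v_{\mathrm{rms}}^{2},
\qquad
v_{\mathrm{rms}}^{2} = \frac{3 k_{\mathrm{B}} T}{m}
\quad\Longrightarrow\quad
pV = N k_{\mathrm{B}} T
\]
```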
Let q = (qx, qy, qz) and p = (px, py, pz) denote the position vector and momentum vector of a particle of an ideal gas, respectively. Let F denote the net force on that particle. Then the time-averaged kinetic energy of the particle is:
By Newton's third law and the ideal gas assumption, the net force of the system is the force applied by the walls of the container, and this force is given by the pressure P of the gas. Hence
where dS is the infinitesimal area element along the walls of the container. Since the divergence of the position vector q is
the divergence theorem implies that
where dV is an infinitesimal volume within the container and V is the total volume of the container.
Putting these equalities together yields
which immediately implies the ideal gas law for N particles:
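The equalities referred to in this passage are missing from the text. A reconstruction of the standard argument (stated here as a sketch, not a quotation) is: the time-averaged kinetic energy of each particle satisfies a virial relation, the net force is supplied by the walls, and the divergence theorem converts the wall integral into a volume integral,

```latex
\[
\langle KE \rangle = -\tfrac{1}{2}\,\overline{\mathbf{q}\cdot\mathbf{F}},
\qquad
\sum_{k=1}^{N} \overline{\mathbf{q}_{k}\cdot\mathbf{F}_{k}}
  = -P \oint_{\partial V} \mathbf{q}\cdot d\mathbf{S}
  = -P \int_{V} (\nabla\cdot\mathbf{q})\, dV
  = -3PV
\]
\[
\Longrightarrow\quad
N\,\langle KE \rangle = \tfrac{3}{2} P V,
\qquad
\langle KE \rangle = \tfrac{3}{2} k_{\mathrm{B}} T
\;\Longrightarrow\;
PV = N k_{\mathrm{B}} T
\]
```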
For a d-dimensional system, the ideal gas pressure is:
where V is the volume of the d-dimensional domain in which the gas exists. Note that the dimensions of the pressure change with dimensionality.
- Van der Waals equation – Gas equation of state based on plausible reasons why real gases do not follow the ideal gas law.
- Boltzmann constant – Physical constant relating particle kinetic energy with temperature
- Configuration integral – Function in thermodynamics and statistical physics
- Dynamic pressure – Concept in fluid dynamics
- Internal energy
- Clapeyron, E. (1834). "Mémoire sur la puissance motrice de la chaleur". Journal de l'École Polytechnique (in French). XIV: 153–90. Facsimile at the Bibliothèque nationale de France (pp. 153–90).
- Krönig, A. (1856). "Grundzüge einer Theorie der Gase". Annalen der Physik und Chemie (in German). 99 (10): 315–22. Bibcode:1856AnP...175..315K. doi:10.1002/andp.18561751008. Facsimile at the Bibliothèque nationale de France (pp. 315–22).
- Clausius, R. (1857). "Ueber die Art der Bewegung, welche wir Wärme nennen". Annalen der Physik und Chemie (in German). 176 (3): 353–79. Bibcode:1857AnP...176..353C. doi:10.1002/andp.18571760302. Facsimile at the Bibliothèque nationale de France (pp. 353–79).
- "Equation of State". Archived from the original on 2014-08-23. Retrieved 2010-08-29.
- Moran; Shapiro (2000). Fundamentals of Engineering Thermodynamics (4th ed.). Wiley. ISBN 0-471-31713-6.
- Raymond, Kenneth W. (2010). General, organic, and biological chemistry : an integrated approach (3rd ed.). John Wiley & Sons. p. 186. ISBN 9780470504765. Retrieved 29 January 2019.
- J. R. Roebuck (1926). "The Joule-Thomson Effect in Air". Proceedings of the National Academy of Sciences of the United States of America. 12 (1): 55–58. Bibcode:1926PNAS...12...55R. doi:10.1073/pnas.12.1.55. PMC 1084398. PMID 16576959.
- Khotimah, Siti Nurul; Viridi, Sparisoma (2011-06-07). "Partition function of 1-, 2-, and 3-D monatomic ideal gas: A simple and comprehensive review". Jurnal Pengajaran Fisika Sekolah Menengah. 2 (2): 15–18. arXiv:1106.1273. Bibcode:2011arXiv1106.1273N.
- Davis; Masten (2002). Principles of Environmental Engineering and Science. New York: McGraw-Hill. ISBN 0-07-235053-9.
- "Website giving credit to Benoît Paul Émile Clapeyron, (1799–1864) in 1834". Archived from the original on July 5, 2007.
- Configuration integral (statistical mechanics): an alternative statistical-mechanics derivation of the ideal-gas law, using the relationship between the Helmholtz free energy and the partition function but without using the equipartition theorem. Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. The wiki site is down; see the copy in the web archive of 2012 April 28.
- Gas equations in detail
About This Chapter
ELM Algebra: Rational Expressions - Chapter Summary
Review the concepts needed for the ELM exam with this online course. Video lessons in this chapter cover the following topics:
- How to Add and Subtract Rational Expressions
- Practice Adding and Subtracting Rational Expressions
- How to Multiply and Divide Rational Expressions
- Multiplying and Dividing Rational Expressions: Practice Problems
- How to Solve a Rational Equation
- Rational Equations: Practice Problems
- Solving Rational Equations with Literal Coefficients
- Solving Problems using Rational Equations
- Finding Constant and Average Rates
- Solving Problems using Rates
Watch our online videos, which break these concepts down into easy-to-understand parts and provide examples to help you practice. Then test your knowledge with a quiz!
The ELM is a math proficiency exam required for all incoming students to schools in the California State University system. You can learn more about the topics on the ELM here.
The lessons in this chapter cover the Rational Expressions topic in the Algebra portion of the ELM. Algebra comprises approximately 35% of the entire exam. This chapter best addresses the 'evaluate and interpret algebraic expressions' objective. Additional chapters cover other essential objectives, such as:
- Use properties of exponents
- Perform polynomial arithmetic
- Solve linear equations
- Solve systems of linear equations in two unknowns
- Find and use slopes and intercepts of lines
1. How to Add and Subtract Rational Expressions
Adding and subtracting rational expressions brings everything you learned about fractions into the world of algebra. We will mix common denominators with factoring and FOILing.
2. Practice Adding and Subtracting Rational Expressions
Adding and subtracting rational expressions can feel daunting, especially when trying to find a common denominator. Let me show you the process I like to use. I think it will make adding and subtracting rational expressions more enjoyable!
3. How to Multiply and Divide Rational Expressions
Multiplying and dividing rational polynomial expressions is exactly like multiplying and dividing fractions. Like fractions, we will reduce. With polynomial expressions we use factoring and canceling. I also give you a little mnemonic to help you remember when you need a common denominator and when you don't.
4. Multiplying and Dividing Rational Expressions: Practice Problems
Let's continue looking at multiplying and dividing rational polynomials. In this lesson, we will look at a couple longer problems, while giving you some practice multiplying and dividing.
5. How to Solve a Rational Equation
A rational equation is one that contains fractions. Yes, we will be finding a common denominator that has 'x's. But no worries! Together we will use a process that will help us solve rational equations every time!
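As a small illustration of the idea (not taken from the lesson itself), a rational equation can be solved in Python with SymPy, and any candidate solution that would make a denominator zero can then be discarded:

```python
# Illustrative sketch: solve 3/(x - 1) + 2/x = 5 and reject excluded values.
from sympy import symbols, Eq, solve

x = symbols('x')
equation = Eq(3/(x - 1) + 2/x, 5)
solutions = solve(equation, x)

# The denominators vanish at x = 0 and x = 1, so discard those candidates.
valid = [s for s in solutions if s not in (0, 1)]
print(valid)
```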
6. Rational Equations: Practice Problems
Mario and Bill own a local carwash and have several complex tasks that they must use rational equations to solve for an answer. Enjoy learning how they solve these equations to help them with some of their day-to-day tasks.
7. Solving Rational Equations with Literal Coefficients
Watch this video lesson and be amazed at how you can solve a rational equation with numbers and letters in it. Learn how to solve rational equations with literal coefficients, and you will learn how to solve any rational equation you come across!
8. Solving Problems Using Rational Equations
Follow the steps you will learn in this video lesson to help you solve rational equation problems. See how the process turns what looks like a huge problem into a much simpler and easier to manage problem.
9. Finding Constant and Average Rates
Watch this video lesson to learn the differences between constant and average rates. You will learn the steps you need to take to find them and get two great visuals you can keep in mind to help you solve these rate problems.
10. Solving Problems Using Rates
In this video lesson, learn how to set up rate problems so you can easily solve them. You will become familiar with the rate formula and you will see how easy it is to use.
Earning College Credit
Did you know… We have over 200 college courses that prepare you to earn credit by exam, and that credit is accepted by over 1,500 colleges and universities. You can test out of the first two years of college and save thousands off your degree. Anyone can earn credit-by-exam regardless of age or education level.
To learn more, visit our Earning Credit Page
Transferring credit to the school of your choice
Not sure what college you want to attend yet? Study.com has thousands of articles about every imaginable degree, area of study and career path that can help you find the school that's right for you.
Other chapters within the ELM: CSU Math Study Guide course
- ELM Test - Numbers and Data: Basic Arithmetic Calculations
- ELM Test - Numbers and Data: Rational Numbers
- ELM Test - Numbers and Data: Decimals and Percents
- ELM Test - Numbers and Data: Irrational Numbers
- ELM Test - Numbers and Data: Data & Statistics
- ELM Test - Algebra: Basic Expressions
- ELM Test - Algebra: Exponents
- ELM Test - Algebra: Linear Equations & Inequalities
- ELM Test - Algebra: Absolute Value Equations & Inequalities
- ELM Test - Algebra: Polynomials
- ELM Test - Geometry: Perimeter, Area & Volume
- ELM Test - Geometry: Properties of Objects
- ELM Test - Geometry: Graphing Basics
- ELM Test - Geometry: Graphing Functions
- About the ELM Test
- ELM Flashcards
In fluid dynamics, wind waves, or wind-generated waves, are surface waves that occur on the free surface of bodies of water (like oceans, seas, lakes, rivers, canals, puddles or ponds). They result from the wind blowing over an area of fluid surface. Waves in the oceans can travel thousands of miles before reaching land. Wind waves on Earth range in size from small ripples, to waves over 100 ft (30 m) high.
When directly generated and affected by local winds, a wind wave system is called a wind sea. After the wind ceases to blow, wind waves are called swells. More generally, a swell consists of wind-generated waves that are not significantly affected by the local wind at that time. They have been generated elsewhere or some time ago. Wind waves in the ocean are called ocean surface waves.
Wind waves have a certain amount of randomness: subsequent waves differ in height, duration, and shape with limited predictability. They can be described as a stochastic process, in combination with the physics governing their generation, growth, propagation, and decay—as well as governing the interdependence between flow quantities such as: the water surface movements, flow velocities and water pressure. The key statistics of wind waves (both seas and swells) in evolving sea states can be predicted with wind wave models.
The great majority of large breakers seen at a beach result from distant winds. Five factors influence the formation of the flow structures in wind waves:
- Wind speed or strength relative to wave speed—the wind must be moving faster than the wave crest for energy transfer
- The uninterrupted distance of open water over which the wind blows without significant change in direction (called the fetch)
- Width of the area affected by fetch
- Wind duration — the time for which the wind has blown over the water.
- Water depth
All of these factors work together to determine the size of wind waves and the structure of the flow within them.
The main dimensions associated with waves are:
- Wave height (vertical distance from trough to crest)
- Wave length (distance from crest to crest in the direction of propagation)
- Wave period (time interval between arrival of consecutive crests at a stationary point)
- Wave propagation direction
A fully developed sea has the maximum wave size theoretically possible for a wind of a specific strength, duration, and fetch. Further exposure to that specific wind could only cause a dissipation of energy due to the breaking of wave tops and formation of "whitecaps". Waves in a given area typically have a range of heights. For weather reporting and for scientific analysis of wind wave statistics, their characteristic height over a period of time is usually expressed as significant wave height. This figure represents an average height of the highest one-third of the waves in a given time period (usually chosen somewhere in the range from 20 minutes to twelve hours), or in a specific wave or storm system. The significant wave height is also the value a "trained observer" (e.g. from a ship's crew) would estimate from visual observation of a sea state. Given the variability of wave height, the largest individual waves are likely to be somewhat less than twice the reported significant wave height for a particular day or storm.
Wave formation on an initially flat water surface by wind is started by a random distribution of normal pressure of turbulent wind flow over the water. This pressure fluctuation produces normal and tangential stresses in the surface water, which generates waves. It is assumed that:
- The water is originally at rest.
- The water is not viscous.
- The water is irrotational.
- There is a random distribution of normal pressure to the water surface from the turbulent wind.
- Correlations between air and water motions are neglected.
The second mechanism involves wind shear forces on the water surface. John W. Miles suggested a surface wave generation mechanism which is initiated by turbulent wind shear flows based on the inviscid Orr-Sommerfeld equation in 1957. He found the energy transfer from wind to water surface is proportional to the curvature of the velocity profile of the wind at the point where the mean wind speed is equal to the wave speed. Since the wind speed profile is logarithmic to the water surface, the curvature has a negative sign at this point. This relation shows the wind flow transferring its kinetic energy to the water surface at their interface.
- two-dimensional parallel shear flow
- incompressible, inviscid water and wind
- irrotational water
- slope of the displacement of the water surface is small
Generally these wave formation mechanisms occur together on the water surface and eventually produce fully developed waves.
For example, if we assume a flat sea surface (Beaufort state 0), and a sudden wind flow blows steadily across the sea surface, the physical wave generation process follows the sequence:
- Turbulent wind forms random pressure fluctuations at the sea surface. Ripples with wavelengths in the order of a few centimetres are generated by the pressure fluctuations. (The Phillips mechanism)
- The winds keep acting on the initially rippled sea surface causing the waves to become larger. As the waves grow, the pressure differences get larger causing the growth rate to increase. Finally the shear instability expedites the wave growth exponentially. (The Miles mechanism)
- The interactions between the waves on the surface generate longer waves and the interaction will transfer wave energy from the shorter waves generated by the Miles mechanism to the waves which have slightly lower frequencies than the frequency at the peak wave magnitudes, then finally the waves will be faster than the cross wind speed (Pierson & Moskowitz).
|Conditions necessary for a fully developed sea at given wind speeds, and the parameters of the resulting waves|
|Wind conditions||Wave size|
|Wind speed in one direction||Fetch||Wind duration||Average height||Average wavelength||Average period and speed|
|19 km/h (12 mph)||19 km (12 mi)||2 hr||0.27 m (0.89 ft)||8.5 m (28 ft)||3.0 sec, 10.2 km/h (9.3 ft/sec)|
|37 km/h (23 mph)||139 km (86 mi)||10 hr||1.5 m (4.9 ft)||33.8 m (111 ft)||5.7 sec, 21.4 km/h (19.5 ft/sec)|
|56 km/h (35 mph)||518 km (322 mi)||23 hr||4.1 m (13 ft)||76.5 m (251 ft)||8.6 sec, 32.0 km/h (29.2 ft/sec)|
|74 km/h (46 mph)||1,313 km (816 mi)||42 hr||8.5 m (28 ft)||136 m (446 ft)||11.4 sec, 42.9 km/h (39.1 ft/sec)|
|92 km/h (57 mph)||2,627 km (1,632 mi)||69 hr||14.8 m (49 ft)||212.2 m (696 ft)||14.3 sec, 53.4 km/h (48.7 ft/sec)|
|NOTE: Most of the wave speeds calculated from the wave length divided by the period are proportional to the square root of the wave length. Thus, except for the shortest wave length, the waves follow the deep water theory. The 28 ft long wave must be either in shallow water or intermediate depth.|
Types of wind waves
Three different types of wind waves develop over time:
Ripples appear on smooth water when the wind blows, but will die quickly if the wind stops. The restoring force that allows them to propagate is surface tension. Sea waves are larger-scale, often irregular motions that form under sustained winds. These waves tend to last much longer, even after the wind has died, and the restoring force that allows them to propagate is gravity. As waves propagate away from their area of origin, they naturally separate into groups of common direction and wavelength. The sets of waves formed in this manner are known as swells.
Individual "rogue waves" (also called "freak waves", "monster waves", "killer waves", and "king waves") much higher than the other waves in the sea state can occur. In the case of the Draupner wave, its 25 m (82 ft) height was 2.2 times the significant wave height. Such waves are distinct from tides, caused by the Moon and Sun's gravitational pull, tsunamis that are caused by underwater earthquakes or landslides, and waves generated by underwater explosions or the fall of meteorites—all having far longer wavelengths than wind waves.
Yet, the largest ever recorded wind waves are common — not rogue — waves in extreme sea states. For example: 29.1 m (95 ft) high waves have been recorded on the RRS Discovery in a sea with 18.5 m (61 ft) significant wave height, so the highest wave is only 1.6 times the significant wave height. The biggest recorded by a buoy (as of 2011) was 32.3 m (106 ft) high during the 2007 typhoon Krosa near Taiwan.
Ocean waves can be classified based on: the disturbing force(s) that create(s) them; the extent to which the disturbing force(s) continue(s) to influence them after formation; the extent to which the restoring force(s) weaken(s) or flatten(s) them; and their wavelength or period. Seismic sea waves have a period of roughly 20 minutes and speeds of 760 km/h (470 mph). Wind waves (deep-water waves) have periods of up to about 20 seconds.
|Wave type||Typical wavelength||Disturbing force||Restoring force|
|Capillary wave||< 2 cm||Wind||Surface tension|
|Wind wave||60–150 m (200–490 ft)||Wind over ocean||Gravity|
|Seiche||Large, variable; a function of basin size||Change in atmospheric pressure, storm surge||Gravity|
|Seismic sea wave (tsunami)||200 km (120 mi)||Faulting of sea floor, volcanic eruption, landslide||Gravity|
|Tide||Half the circumference of Earth||Gravitational attraction, rotation of Earth||Gravity|
The speed of all ocean waves is controlled by gravity, wavelength, and water depth. Most characteristics of ocean waves depend on the relationship between their wavelength and water depth. Wavelength determines the size of the orbits of water molecules within a wave, but water depth determines the shape of the orbits. The paths of water molecules in a wind wave are circular only when the wave is traveling in deep water. A wave cannot "feel" the bottom when it moves through water deeper than half its wavelength because too little wave energy is contained in the small circles below that depth. Waves moving through water deeper than half their wavelength are known as deep-water waves. On the other hand, the orbits of water molecules in waves moving through shallow water are flattened by the proximity of the bottom. Waves in water shallower than 1/20 their original wavelength are known as shallow-water waves. Transitional waves travel through water deeper than 1/20 their original wavelength but shallower than half their original wavelength.
In general, the longer the wavelength, the faster the wave energy will move through the water. For deep-water waves, this relationship is represented by the formula C = L/T,
where C is speed (celerity), L is wavelength, and T is time, or period (in seconds).
The speed of a deep-water wave may also be approximated by C = √(gL/2π),
where g is the acceleration due to gravity, 9.8 meters (32 feet) per second squared. Because g and π (3.14) are constants, the equation can be reduced to C ≈ 1.25√L,
when C is measured in meters per second and L in meters. Note that in both formulas the wave speed is proportional to the square root of the wavelength.
The speed of shallow-water waves is described by a different equation, C = √(gd),
where C is speed (in meters per second), g is the acceleration due to gravity, and d is the depth of the water (in meters). The period of a wave remains unchanged regardless of the depth of water through which it is moving. As deep-water waves enter the shallows and feel the bottom, however, their speed is reduced and their crests "bunch up," so their wavelength shortens.
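The formulas above translate directly into code. The following Python sketch (illustrative only) evaluates the deep-water and shallow-water speeds in SI units:

```python
import math

g = 9.8  # acceleration due to gravity, m/s^2

def deep_water_speed(wavelength_m):
    # C = sqrt(g * L / (2*pi)), approximately 1.25 * sqrt(L)
    return math.sqrt(g * wavelength_m / (2 * math.pi))

def shallow_water_speed(depth_m):
    # C = sqrt(g * d); independent of wavelength
    return math.sqrt(g * depth_m)

print(deep_water_speed(100.0))   # about 12.5 m/s for a 100 m wavelength
print(shallow_water_speed(5.0))  # about 7.0 m/s in 5 m of water
```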
Wave shoaling and refraction
As waves travel from deep to shallow water, their shape alters (wave height increases, speed decreases, and length decreases as wave orbits become asymmetrical). This process is called shoaling.
Wave refraction is the process that occurs when waves interact with the sea bed as the wave crests align themselves as a result of approaching decreasing water depths at an angle to the depth contours. Varying depths along a wave crest cause the crest to travel at different phase speeds, with those parts of the wave in deeper water moving faster than those in shallow water. This process continues until the crests become (nearly) parallel to the depth contours. Rays—lines normal to wave crests between which a fixed amount of energy flux is contained—converge on local shallows and shoals. Therefore, the wave energy between rays is concentrated as they converge, with a resulting increase in wave height.
Because these effects are related to a spatial variation in the phase speed, and because the phase speed also changes with the ambient current – due to the Doppler shift – the same effects of refraction and altering wave height also occur due to current variations. In the case of meeting an adverse current the wave steepens, i.e. its wave height increases while the wave length decreases, similar to the shoaling when the water depth decreases.
Some waves undergo a phenomenon called "breaking". A breaking wave is one whose base can no longer support its top, causing it to collapse. A wave breaks when it runs into shallow water, or when two wave systems oppose and combine forces. When the slope, or steepness ratio, of a wave is too great, breaking is inevitable.
Individual waves in deep water break when the wave steepness—the ratio of the wave height H to the wavelength λ—exceeds about 0.17, so for H > 0.17 λ. In shallow water, with the water depth small compared to the wavelength, the individual waves break when their wave height H is larger than 0.8 times the water depth h, that is H > 0.8 h. Waves can also break if the wind grows strong enough to blow the crest off the base of the wave.
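A minimal sketch of these two breaking criteria as code (illustrative, using the thresholds quoted above):

```python
# Deep water: waves break when H > 0.17 * wavelength.
# Shallow water: waves break when H > 0.8 * depth.
def breaks_in_deep_water(height_m, wavelength_m):
    return height_m > 0.17 * wavelength_m

def breaks_in_shallow_water(height_m, depth_m):
    return height_m > 0.8 * depth_m

print(breaks_in_deep_water(2.0, 10.0))    # True: steepness 0.2 exceeds 0.17
print(breaks_in_shallow_water(1.5, 2.0))  # False: a 1.5 m wave in 2 m of water
```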
In shallow water the base of the wave is decelerated by drag on the seabed. As a result, the upper parts will propagate at a higher velocity than the base and the leading face of the crest will become steeper and the trailing face flatter. This may be exaggerated to the extent that the leading face forms a barrel profile, with the crest falling forward and down as it extends over the air ahead of the wave.
- Spilling, or rolling: these are the safest waves on which to surf. They can be found in most areas with relatively flat shorelines. They are the most common type of shorebreak. The deceleration of the wave base is gradual, and the velocity of the upper parts does not differ much with height. Breaking occurs mainly when the steepness ratio exceeds the stability limit.
- Plunging, or dumping: these break suddenly and can "dump" swimmers—pushing them to the bottom with great force. These are the preferred waves for experienced surfers. Strong offshore winds and long wave periods can cause dumpers. They are often found where there is a sudden rise in the sea floor, such as a reef or sandbar. Deceleration of the wave base is sufficient to cause upward acceleration and a significant forward velocity excess of the upper part of the crest. The peak rises and overtakes the forward face, forming a "barrel" or "tube" as it collapses.
- Surging: these may never actually break as they approach the water's edge, as the water below them is very deep. They tend to form on steep shorelines. These waves can knock swimmers over and drag them back into deeper water.
When the shoreline is near vertical, waves do not break, but are reflected. Most of the energy is retained in the wave as it returns to seaward. Interference patterns are caused by superposition of the incident and reflected waves, and the superposition may cause localised instability when peaks cross, and these peaks may break due to instability. (see also clapotic waves)
Physics of waves
Wind waves are mechanical waves that propagate along the interface between water and air; the restoring force is provided by gravity, and so they are often referred to as surface gravity waves. As the wind blows, pressure and friction perturb the equilibrium of the water surface and transfer energy from the air to the water, forming waves. The initial formation of waves by the wind is described in the theory of Phillips from 1957, and the subsequent growth of the small waves has been modeled by Miles, also in 1957.
In linear plane waves of one wavelength in deep water, parcels near the surface move not plainly up and down but in circular orbits: forward above and backward below (compared the wave propagation direction). As a result, the surface of the water forms not an exact sine wave, but more a trochoid with the sharper curves upwards—as modeled in trochoidal wave theory. Wind waves are thus a combination of transversal and longitudinal waves.
In reality, for finite values of the wave amplitude (height), the particle paths do not form closed orbits; rather, after the passage of each crest, particles are displaced slightly from their previous positions, a phenomenon known as Stokes drift.
As the depth below the free surface increases, the radius of the circular motion decreases. At a depth equal to half the wavelength λ, the orbital movement has decayed to less than 5% of its value at the surface. The phase speed (also called the celerity) of a surface gravity wave is – for pure periodic wave motion of small-amplitude waves – well approximated by c = √[(gλ/2π) tanh(2πd/λ)], where
- c = phase speed;
- λ = wavelength;
- d = water depth;
- g = acceleration due to gravity at the Earth's surface.
In deep water, where the depth d is greater than half the wavelength λ, the ratio 2πd/λ is large, so the hyperbolic tangent approaches 1 and the speed approximates c ≈ √(gλ/2π).
In SI units, with c in m/s, c ≈ 1.25√λ when λ is measured in metres. This expression tells us that waves of different wavelengths travel at different speeds. The fastest waves in a storm are the ones with the longest wavelength. As a result, after a storm, the first waves to arrive on the coast are the long-wavelength swells.
If the wavelength is very long compared to the water depth, the phase speed (obtained by taking the limit of c as the wavelength approaches infinity) can be approximated by c ≈ √(gd).
When several wave trains are present, as is always the case in nature, the waves form groups. In deep water the groups travel at a group velocity which is half of the phase speed. Following a single wave in a group one can see the wave appearing at the back of the group, growing and finally disappearing at the front of the group.
As the water depth decreases towards the coast, this will have an effect: wave height changes due to wave shoaling and refraction. As the wave height increases, the wave may become unstable when the crest of the wave moves faster than the trough. This causes surf, a breaking of the waves.
The movement of wind waves can be captured by wave energy devices. The energy density (per unit area) of regular sinusoidal waves depends on the water density ρ, the acceleration due to gravity g and the wave height H (which, for regular waves, is equal to twice the amplitude, a): E = (1/8)ρgH² = (1/2)ρga².
The velocity of propagation of this energy is the group velocity.
Wind wave models
Surfers are very interested in the wave forecasts. There are many websites that provide predictions of the surf quality for the upcoming days and weeks. Wind wave models are driven by more general weather models that predict the winds and pressures over the oceans, seas and lakes.
Wind wave models are also an important part of examining the impact of shore protection and beach nourishment proposals. For many beach areas there is only patchy information about the wave climate, therefore estimating the effect of wind waves is important for managing littoral environments.
Ocean water waves generate land seismic waves that propagate hundreds of kilometers inland. These seismic signals usually have a period of 6 ± 2 seconds. Such recordings were first reported and understood in about 1900.
There are two types of seismic "ocean waves". The primary waves are generated in shallow waters by direct water wave-land interaction and have the same period as the water waves (10 to 16 seconds). The more powerful secondary waves are generated by the superposition of ocean waves of equal period traveling in opposite directions, thus generating standing gravity waves, with an associated pressure oscillation at half the period which does not diminish with depth. The theory for microseism generation by standing waves was provided by Michael Longuet-Higgins in 1950, after Pierre Bernard had suggested this relation with standing waves in 1941 on the basis of observations.
- Tolman, H. L. (23 June 2010). Mahmood, M.F., ed. "CBMS Conference Proceedings on Water Waves: Theory and Experiment" (PDF). Howard University, US, 13–18 May 2008: World Scientific Publications. ISBN 978-981-4304-23-8.
- Holthuijsen (2007), page 5.
- Lorenz, R. D.; Hayes, A. G. (2012). "The Growth of Wind-Waves in Titan's Hydrocarbon Seas". Icarus. 219: 468–475. Bibcode:2012Icar..219..468L. doi:10.1016/j.icarus.2012.03.002.
- Young, I. R. (1999). Wind generated ocean waves. Elsevier. p. 83. ISBN 0-08-043317-0.
- Weisse, Ralf; von Storch, Hans (2008). Marine climate change: Ocean waves, storms and surges in the perspective of climate change. Springer. p. 51. ISBN 978-3-540-25316-7.
- Phillips, O. M. (2006). "On the generation of waves by turbulent wind". Journal of Fluid Mechanics. 2 (5): 417. Bibcode:1957JFM.....2..417P. doi:10.1017/S0022112057000233.
- Miles, John W. (2006). "On the generation of surface waves by shear flows". Journal of Fluid Mechanics. 3 (2): 185. Bibcode:1957JFM.....3..185M. doi:10.1017/S0022112057000567.
- Chapter 16, Ocean Waves
- Hasselmann, K.; et al. (1973). "Measurements of wind-wave growth and swell decay during the Joint North Sea Wave Project (JONSWAP)". Ergnzungsheft zur Deutschen Hydrographischen Zeitschrift Reihe A. 8 (12): 95. hdl:10013/epic.20654.
- Pierson, Willard J.; Moskowitz, Lionel (15 December 1964). "A proposed spectral form for fully developed wind seas based on the similarity theory of S. A. Kitaigorodskii". Journal of Geophysical Research. 69 (24): 5181–5190. Bibcode:1964JGR....69.5181P. doi:10.1029/JZ069i024p05181.
- Munk, Walter H. (1950). "Proceedings 1st International Conference on Coastal Engineering". Long Beach, California: ASCE: 1–4.
- Holliday, Naomi P.; Yelland, Margaret J.; Pascal, Robin; Swail, Val R.; Taylor, Peter K.; Griffiths, Colin R.; Kent, Elizabeth (2006). "Were extreme waves in the Rockall Trough the largest ever recorded?". Geophysical Research Letters. 33 (L05613). Bibcode:2006GeoRL..3305613H. doi:10.1029/2005GL025238.
- P. C. Liu; H. S. Chen; D.-J. Doong; C. C. Kao; Y.-J. G. Hsu (11 June 2008). "Monstrous ocean waves during typhoon Krosa" (PDF). Annales Geophysicae. European Geosciences Union. 26 (6): 1327–1329. Bibcode:2008AnGeo..26.1327L. doi:10.5194/angeo-26-1327-2008.
- Tom Garrison (2009). Oceanography: An Invitation to Marine Science (7th Edition). Yolanda Cossio. ISBN 978-0495391937.
- Longuet-Higgins, M. S.; Stewart, R. W. (1964). "Radiation stresses in water waves; a physical discussion, with applications". Deep-Sea Research. 11 (4): 529–562. Bibcode:1964DSROA..11..529L. doi:10.1016/0011-7471(64)90001-4.
- Gulrez, Tauseef; Hassanien, Aboul Ella (2011-11-13). Advances in Robotics and Virtual Reality. Springer Science & Business Media. ISBN 9783642233630.
- R.J. Dean and R.A. Dalrymple (2002). Coastal processes with engineering applications. Cambridge University Press. ISBN 0-521-60275-0. p. 96–97.
- Phillips, O. M. (1957). "On the generation of waves by turbulent wind". Journal of Fluid Mechanics. 2 (5): 417–445. Bibcode:1957JFM.....2..417P. doi:10.1017/S0022112057000233.
- Miles, J. W. (1957). "On the generation of surface waves by shear flows". Journal of Fluid Mechanics. 3 (2): 185–204. Bibcode:1957JFM.....3..185M. doi:10.1017/S0022112057000567.
- Figure 6 from: Wiegel, R. L.; Johnson, J. W. (1950). "Proceedings 1st International Conference on Coastal Engineering". Long Beach, California: ASCE: 5–21.
- For the particle trajectories within the framework of linear wave theory, see for instance:
Phillips (1977), page 44.
Lamb, H. (1994). Hydrodynamics (6th ed.). Cambridge University Press. ISBN 978-0-521-45868-9. Originally published in 1879, the 6th extended edition appeared first in 1932. See §229, page 367.
L. D. Landau and E. M. Lifshitz (1986). Fluid mechanics. Course of Theoretical Physics. 6 (Second revised ed.). Pergamon Press. ISBN 0-08-033932-8. See page 33.
- A good illustration of the wave motion according to linear theory is given by Prof. Robert Dalrymple's Java applet.
- For nonlinear waves, the particle paths are not closed, as found by George Gabriel Stokes in 1847, see the original paper by Stokes. Or in Phillips (1977), page 44: "To this order, it is evident that the particle paths are not exactly closed ... pointed out by Stokes (1847) in his classical investigation".
- Solutions of the particle trajectories in fully nonlinear periodic waves and the Lagrangian wave period they experience can for instance be found in:
J. M. Williams (1981). "Limiting gravity waves in water of finite depth". Philosophical Transactions of the Royal Society A. 302 (1466): 139–188. Bibcode:1981RSPTA.302..139W. doi:10.1098/rsta.1981.0159.
J. M. Williams (1985). Tables of progressive gravity waves. Pitman. ISBN 978-0-273-08733-5.
- Carl Nordling; Jonny Österman (2006). Physics Handbook for Science and Engineering (8th ed.). Studentlitteratur. p. 263. ISBN 978-91-44-04453-8.
- In deep water, the group velocity is half the phase velocity, as is shown here.
- Peter Bormann. Seismic Signals and Noise
- Bernard, P. (1941). "Sur certaines propriétés de la houle étudiées à l'aide des enregistrements séismographiques". Bulletin de l'Institut océanographique de Monaco. 800: 1–19.
- Longuet-Higgins, M. S. (1950). "A theory of the origin of microseisms". Philosophical Transactions of the Royal Society A. 243 (857): 1–35. Bibcode:1950RSPTA.243....1L. doi:10.1098/rsta.1950.0012.
- G. G. Stokes (1880). Mathematical and Physical Papers, Volume I. Cambridge University Press. pp. 197–229.
- Phillips, O. M. (1977). The dynamics of the upper ocean (2nd ed.). Cambridge University Press. ISBN 0-521-29801-6.
- Holthuijsen, Leo H. (2007). Waves in oceanic and coastal waters. Cambridge University Press. ISBN 0-521-86028-8.
- Janssen, Peter (2004). The interaction of ocean waves and wind. Cambridge University Press. ISBN 978-0-521-46540-3.
- Rousmaniere, John (1989). The Annapolis Book of Seamanship (2nd revised ed.). Simon & Schuster. ISBN 0-671-67447-1.
- Carr, Michael (October 1998). "Understanding Waves". Sail. pp. 38–45.
Beyond Earth and its bubble of satellites; past Mars, where rovers explore; past Jupiter and its circling orbiter, two spacecraft are gliding across interstellar space. They have crossed over the invisible boundary that separates our solar system from everything else, into territory nearly untouched by the influence of the sun. People have seen much deeper into the universe, thanks to powerful telescopes that catch the light of distant stars. But this is the farthest a human invention has ever traveled. These hunks of gleaming metal and circuitry—they are the furthermost tangible proof of our existence.
The twin Voyager spacecraft took off in 1977, carrying scientific instruments and golden records stuffed with information. Millions of miles away, they still communicate with Earth. They still collect data. But they are aging.
The spacecraft, traveling in slightly different directions, weaken every year. Their thrusters, which keep them steady, are degrading. Their power generators produce about 40 percent less electricity than they did at launch.
To keep the Voyagers going, engineers make some tough decisions from afar. They tell the Voyagers, in commands carried over radio waves, to shut down systems or switch to backups, rationing every single watt. They prepare for what may be the mission’s final years. “Someday we’re going to have to say goodbye,” says Candy Hansen, a scientist at the Planetary Science Institute who worked on the Voyager mission in the 1970s and 1980s.
But not yet. This summer, engineers instructed Voyager 2 to fire up a set of thrusters that the spacecraft hasn’t used since 1989. Thrusters help keep the spacecraft steady, and the ones Voyager 2 was using were deteriorating, which meant they had to fire more often, until the number of pulses became “untenable,” according to NASA. Without healthy thrusters, the spacecraft would lose its ability to keep its antenna pointed toward Earth, the point of communication between the spacecraft and its stewards back home. Voyager 1 made the same switch last year.
Engineers also shut off a heating component that keeps one of Voyager 2’s instruments warm enough to function in the frigid cold of space. Turning off a heater buys the mission four watts, the same amount it loses in a year. After months of deliberation, scientists decided that sacrificing this instrument, which last year helped confirm that the spacecraft had entered the space between stars, was worth it. Unlike the others, this instrument can point only in certain directions.
Some other instruments have, incredibly, tolerated the loss of their heaters, sometimes for years. According to NASA, the temperature of the cosmic-ray instrument, the most recent target of rationing, has dropped to –74 degrees Fahrenheit (–59 degrees Celsius), far lower than what it withstood during testing on Earth. But it’s still collecting data and beaming them home.
Heaters on instruments on both Voyagers will be next on the chopping block. Suzanne Dodd, the Voyager project manager at NASA’s Jet Propulsion Laboratory, says the team may someday be forced to turn off one of the engineering elements that help the spacecraft communicate with Earth. “If it does work, then we gain two more watts,” Dodd says. “If it doesn’t work, then we lose the mission.”
No one was thinking about interstellar space when the Voyager spacecraft were dispatched. Scientists and engineers had set their sights much closer to home: the other planets, arranged in a rare alignment that allowed the spacecraft to swing from one to the next. “The twin Voyagers, despite all the odds to the contrary, have been our accidental visitors to the beginning of the space between the stars themselves,” says Ralph McNutt, a scientist at Johns Hopkins Applied Physics Laboratory who still works on the mission.
Both spacecraft reached Jupiter first, capturing the planet’s swirling filigree of storms in unprecedented detail. Voyager 1 bopped from there to Saturn. Scientists were particularly interested in the planet’s largest moon, Titan, which would turn out to be one of the most intriguing spots in the solar system and a potential home of extraterrestrial life. If Voyager 1 didn’t collect enough good data before moving on, Voyager 2 would be redirected to try again. But the flyby worked, allowing Voyager 2 to swoop past Saturn and on to Uranus and Neptune.
Engineers took their first energy-saving step not long after that. After the planets, there was little to photograph aside from a handful of distant stars, so engineers turned off the Voyagers’ power-gobbling cameras. Hansen, who worked on the imaging team, says that if engineers resurrected the cameras now, “it would literally kill every other instrument on the spacecraft.”
Voyager 1 entered interstellar space in 2012, and Voyager 2 followed last year. They registered the shift; their scientific instruments detected significant changes in the cosmic environment around them. The touch of the solar wind, invisible particles that envelop the solar system in a protective bubble, had disappeared. Engineers had already shut down a few instruments years earlier. Those still operating today are designed to study the few detectable phenomena of interstellar space, like cosmic rays and magnetic fields.
NASA has asked scientists to study the possibility of a new interstellar probe, but no formal missions exist. For now, the Voyagers are it. And one day, engineers will come into work expecting to hear the faint pings from the spacecraft, messages that take hours—nearly a day, in Voyager 1’s case—to cross the expanse and reach ground-based antennas. They won’t hear anything, and they might not ever be able to decipher why.
It could be that the spacecraft, running without enough heaters, become so cold that the fuel lines freeze, cutting off the power that thrusters need to keep the antenna turned toward home. Or it could be that the transmitters, which send and receive signals, run out of power, a scenario Dodd predicts could happen in the 2020s. The invisible tether that has connected scientists and engineers to the spacecraft for more than four decades, always unspooling further, would run out at last. One or two instruments might still work. The Voyagers would keep chronicling their journey through the cosmos, making history with every mile, but they’d have no way of calling home.
Heap sort is a comparison-based sorting technique based on the Binary Heap data structure. It is similar to selection sort, where we first find the maximum element and place it at the end. We repeat the same process for the remaining elements.
What is Binary Heap?
Let us first define a Complete Binary Tree. A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible (Source Wikipedia)
A Binary Heap is a Complete Binary Tree where items are stored in a special order such that the value in a parent node is greater (or smaller) than the values in its two children nodes. The former is called a max heap and the latter a min heap. The heap can be represented by a binary tree or an array.
Why array based representation for Binary Heap?
Since a Binary Heap is a Complete Binary Tree, it can be easily represented as an array, and the array-based representation is space efficient. If the parent node is stored at index I, the left child can be calculated by 2 * I + 1 and the right child by 2 * I + 2 (assuming the indexing starts at 0).
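As a minimal sketch of that index arithmetic (0-based indexing; the function names are just illustrative):

```python
# Index arithmetic for an array-backed binary heap (0-based indexing).
def parent(i):
    return (i - 1) // 2

def left_child(i):
    return 2 * i + 1

def right_child(i):
    return 2 * i + 2

# For the array [10, 5, 3, 4, 1], the children of index 0 are
# indices 1 and 2 (values 5 and 3), and the parent of index 3 is index 1.
print(left_child(0), right_child(0), parent(3))  # 1 2 1
```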
Heap Sort Algorithm for sorting in increasing order:
1. Build a max heap from the input data.
2. At this point, the largest item is stored at the root of the heap. Replace it with the last item of the heap, and then reduce the size of the heap by 1. Finally, heapify the root of the tree.
3. Repeat the above steps while the size of the heap is greater than 1.
How to build the heap?
The heapify procedure can be applied to a node only if its children nodes are already heapified, so heapification must be performed in bottom-up order.
Let's understand this with the help of an example:
Input data: 4, 10, 3, 5, 1

         4(0)
        /    \
    10(1)     3(2)
    /   \
 5(3)    1(4)

The numbers in brackets represent the indices in the array representation of the data.

Applying the heapify procedure to index 1:

         4(0)
        /    \
    10(1)     3(2)
    /   \
 5(3)    1(4)

Applying the heapify procedure to index 0:

        10(0)
        /    \
     5(1)     3(2)
    /   \
 4(3)    1(4)

The heapify procedure calls itself recursively to build the heap in a top-down manner.
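A minimal Python sketch of the bottom-up build described above (the function names are illustrative, not taken from the original article):

```python
def heapify(arr, n, i):
    """Sift the value at index i down so the subtree rooted at i is a max heap.
    Assumes the subtrees rooted at i's children are already max heaps."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)  # keep sifting the displaced value down

def build_max_heap(arr):
    n = len(arr)
    # Start from the last internal node and heapify each node, bottom-up.
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)

data = [4, 10, 3, 5, 1]
build_max_heap(data)
print(data)  # [10, 5, 3, 4, 1], matching the final tree shown above
```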
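Building on the heapify sketch above, the full sort repeatedly swaps the root with the last element of the shrinking heap and re-heapifies. Here is a minimal, self-contained Python sketch; the identifiers and the sample array are illustrative only:

```python
def heap_sort(arr):
    def sift_down(a, n, i):
        # Move a[i] down until the subtree rooted at i satisfies the max-heap property.
        largest = i
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest != i:
            a[i], a[largest] = a[largest], a[i]
            sift_down(a, n, largest)

    n = len(arr)
    # Step 1: build a max heap (bottom-up).
    for i in range(n // 2 - 1, -1, -1):
        sift_down(arr, n, i)
    # Steps 2-3: repeatedly move the root (maximum) to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(arr, end, 0)

values = [12, 11, 13, 5, 6, 7]
heap_sort(values)
print(values)  # [5, 6, 7, 11, 12, 13]
```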
Heap sort is an in-place algorithm.
Its typical implementation is not stable, but it can be made stable.
Time Complexity: The time complexity of heapify is O(log n). The time complexity of createAndBuildHeap() is O(n), and the overall time complexity of Heap Sort is O(n log n).
The heap sort algorithm has limited uses because Quicksort and Mergesort are better in practice. Nevertheless, the Heap data structure itself is widely used. See Applications of Heap Data Structure.
Other Sorting Algorithms on GeeksforGeeks/GeeksQuiz:
QuickSort, Selection Sort, Bubble Sort, Insertion Sort, Merge Sort, Heap Sort, Radix Sort, Counting Sort, Bucket Sort, ShellSort, Comb Sort, Pigeonhole Sort
Engaging Math Activities for your Child's Learning Style
Updated: Jul 2, 2020
Many students are resistant to learning math, especially at home. Frequently math instruction is thought of as boring, pencil on paper, rote practice. Fortunately, this conception is a misconception. There are many ways to engage your child in math activities that will cater to their individual learning style and are fun!
Howard Gardner, an American psychologist, developed the theory of different learning styles called "Multiple Intelligences." He posited that each student has a unique way in which they learn best. Many of us have seen that children have natural inclinations towards certain activities; some are artistic, some dance, some write, some engage in physical activities, some are social, etc. Multiple Intelligences Theory capitalizes on these inherent interests to make learning accessible to a variety of students through a variety of instructional techniques.
The first step is to identify your child's learning style(s). They could learn:
Musically: through songs, patterns, instruments, and rhythms
Visually/Spatially: through seeing, imagining, drawing, diagraming, pictures
Bodily/Kinesthetically: through movement, dance, interaction with the environment, exercise
Intrapersonally: through feelings, introspection, philosophizing, self-improvement, independent activities
Interpersonally: through social interaction, group activities, relationships, competition
Naturalistically: through nature, plants, animals, being outside
Logically: through math, patterns, reasoning, problem-solving
Linguistically: through language, writing, poetry, listening, speaking
There are several ways to glean which learning style(s) your child embraces. The first, of course, is through observing your child's inherent interests and vocations. Another easy way to find out your child's learning style is to take an online survey. I like this one: https://www.literacynet.org/mi/assessment/findyourstrengths.html.
Math Activities that Cater to Your Child's Learning Style
Now that you have identified your child's learning style, you can choose math activities that will resonate with them. Find a math activity that corresponds to your child's learning style below!
Musical Math (Addition, Subtraction, Multiplication, and Division):
Choose a song that your child likes. It should be a song that has a clear, regular tempo and rhythm.
Play the song and have your child count the number of beats per verse, chorus, and refrain, recording the numbers as they go. You can pause the song or start it over as needed.
Once your child has recorded the number of beats, you can ask questions like:
How many beats are in the song total? (Add them up)
How many beats are in the verses total? (Multiply the number of beats per verse by the number of verses)
How many beats are in the choruses total? (Multiply the number of beats per chorus times the number of choruses)
How many beats are in two verses? (In a song with 3 verses, subtract the beats of 1 verse from the total number of verse beats)
What's another way you could figure out how many beats are in two verses? (Multiply the number of beats in 1 verse times 2)
How could you find the number of beats per verse if you had only counted the total number of verse beats? (Divide the number of total verse beats by the number of verses)
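If you want to double-check the arithmetic behind these questions, the same calculations can be worked out in a few lines of Python. The beat counts below are made-up examples, not from any particular song:

```python
# Hypothetical counts for a song with 3 verses, 2 choruses, and one refrain.
beats_per_verse, num_verses = 32, 3
beats_per_chorus, num_choruses = 24, 2
refrain_beats = 16

verse_total = beats_per_verse * num_verses                # 96
chorus_total = beats_per_chorus * num_choruses            # 48
song_total = verse_total + chorus_total + refrain_beats   # 160
two_verses = verse_total - beats_per_verse                # 64, same as beats_per_verse * 2
print(song_total, verse_total, chorus_total, two_verses)
```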
Visual/Spatial Math (Fraction Equivalency and Operations, requires Legos, tape, and a marker):
Get out your child's legos.
Find a Lego brick that is 2x8 and, using tape, label the side with the number "1."
Find 2 Lego bricks that are 2x4 and, using tape, label each "1/2." Place them next to each other on top of the "1" Lego Brick to represent 2 halves equal 1 whole.
Find 4 Lego bricks that are 2x2 and, using tape, label each "1/4." Place them next to each other on top of the "1/2" bricks to represent that 4 fourths equals 1 whole and 2 fourths equals 1/2.
Find 8 Lego bricks that are 2x1 and, using tape, label each "1/8." Place them next to each other on top of the "1/4" bricks to represent that 8 eighths equal 1 whole, 4 eighths equal 1/2, and 2 eighths equal 1/4.
Perform math operations with the Lego bricks, e.g. 1/8 + 3/8 = 4/8 = 1/2.
Use different Lego brick sizes to represent different fractions. For example, use a 2x10 brick to represent the whole, two 2x5 bricks to represent halves, and five 2x2 bricks to represent fifths.
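The same equivalences the bricks demonstrate can be checked with Python's built-in fractions module, for example:

```python
from fractions import Fraction

# 1/8 + 3/8 = 4/8, which Fraction automatically reduces to 1/2,
# just as four "1/8" bricks line up with one "1/2" brick.
print(Fraction(1, 8) + Fraction(3, 8))                     # 1/2
print(Fraction(1, 4) + Fraction(1, 4) == Fraction(1, 2))   # True
print(2 * Fraction(1, 5) + Fraction(1, 2))                 # 9/10, for the 2x10-brick set
```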
Bodily/Kinesthetic Math (Addition, Subtraction, Multiplication, Division, Fraction, and Decimal Math Facts, requires flash cards which you can purchase, download and print, or create on your own)
Arrange flash cards in a line throughout your home about 1-2 feet apart. You can use addition, subtraction, multiplication, division, fraction, decimal, or any other flash cards you may have.
Your child starts on the first flash card. In order to progress to the next one, they must provide the correct answer to the problem on the card.
The goal is to finish the line as quickly as possible, so time your child.
You can set up two or more lines of cards (with different focuses, if needed, e.g. child A working with multiplication and child B working with addition) for two or more children to race each other.
Intrapersonal Math (Addition, Subtraction, Multiplication, Division, Fraction, and Decimal Math Facts, requires several problems written on individual, small pieces of paper and the answers to these problems written on separate, individual, small pieces of paper)
Shuffle all of the pieces of paper together.
Arrange the pieces of paper into an array, face down.
Your child plays an individual memory game, trying to match the problems with the answers.
When you child finds a match, they remove the problem and answer from the array and place them side-by side in a separate space.
Interpersonal Math (Addition, Multiplication, Division, Decimals, and Fractions, requires a set of 2 dice)
The object of the game is to score 101 points without going over.
Take turns rolling the dice. When rolled and depending on whose turn it is, you or your child chooses to keep the number on the dice at face value, multiply it by 10, or divide it by 2. For example, if a 6 is rolled, they can choose to get 6 points, 60 points, or 3 points. If an odd number is rolled and your child decides to divide, they must account for halves that will be in their answer. To eliminate decimals and fractions, eliminate the divide by 2 rule.
As your child gets closer to 101, they will have to make decisions about whether it is better to keep the dice at face value, multiply by 10, or divide by 2 without going over 101. The exception is if the dice go above 101 and they can divide by two to score 101 points exactly, they may do so.
Whoever scores exactly 101 points first wins.
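For reference, the three scoring choices available on a single roll can be sketched in a few lines of Python; the function name and sample rolls are just illustrative:

```python
from fractions import Fraction

def scoring_options(roll):
    """The three scores a player may choose from for one roll:
    face value, ten times the face value, or half the face value."""
    return [Fraction(roll), Fraction(roll) * 10, Fraction(roll, 2)]

for roll in (6, 5):
    print(roll, "->", [str(option) for option in scoring_options(roll)])
# 6 -> ['6', '60', '3']
# 5 -> ['5', '50', '5/2']   (an odd roll halved leaves a fraction)
```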
Naturalistic Math (Addition, Subtraction, Multiplication, Division, and Fractions)
Take your child to the bosque or some other area with trees and wildlife.
Count the number of a specific species of tree, plant, or animal you see in a set amount of time. For example, if using Cottonwood trees, you might say "count as many as you can in 30 seconds." For ducks or other waterfowl, you might say "count however many waterbirds you see in a 20 minute walk." Have your child record their numbers on a piece of paper with a clipboard.
Ask questions like "If each of the ducks you counted had 6 ducklings, how many ducks would there be total?" or "If 7 Cottonwood trees were struck by lightning and burned down, how many trees would be left?" or "If 1/3 of the pigeons flew away, how many pigeons would remain? How many flew away?"
Have your child record the questions and their answers on their clipboard.
Logical Math (Any Math Concept)
Play a game in which your child tries to come up with a word problem that will take you more than 30 seconds to solve. They solve the problem as well to check for your correctness.
Linguistic Math (Any Math Concept)
Have your child write a math limerick. A limerick is a five line poem. Lines one, two, and five have nine syllables and rhyme with each other. Lines three and four have six syllables and rhyme with each other.
An example of a math limerick is as follows:
A dozen, a gross, plus a score
Plus three times the square root of four
Divided by seven
Plus five times eleven
Is nine squared (and not a bit more).
(12 + 144 + 20 + 3 x √4) ÷ 7 + 5 x 11 = 9^2
This example is quite complex, but hopefully you get the idea!
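You can verify the limerick's claim directly:

```python
from math import sqrt

# (12 + 144 + 20 + 3 * sqrt(4)) / 7 + 5 * 11 should equal 9 squared.
value = (12 + 144 + 20 + 3 * sqrt(4)) / 7 + 5 * 11
print(value, value == 9 ** 2)  # 81.0 True
```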
Catering to your child's individual learning style provides immediate buy-in to subject areas in which they otherwise might be resistant. I hope these math activities are useful for you in your home practice! |
Reading Diagrams Worksheets
Mercantilism during the age of exploration - reading, questions, diagram - this page resource related to the economic system of mercantilism includes a reading, set of analysis questions and a diagram activity which requires students to compare mercantilism with capitalism.
In this reading a diagram worksheet, students read a paragraph and then examine a diagram. students respond to short answer questions regarding the data. Some of the worksheets for this concept are reading diagrams charts graphs and tables, reading diagram ts, information in geometric diagrams, understanding electrical schematics part revised, graphs and charts, diagrams, step by step tape diagram lesson question in a school, electricity unit.
Once you find your worksheet, click on the pop-out icon or print icon to print or download it. Here is a collection of our printable worksheets for the topic Interpret Information from Diagrams, Charts, and Graphs, from the Comprehension and Analysis chapter in the Reading Comprehension section.
List of Reading Diagrams Worksheets
A brief description of the worksheets is on each of the worksheet widgets. Click on the images to view, download, or print them.
This reading diagrams worksheet is suitable for - grade. In this reading diagrams worksheet, students read a diagram about water purification and answer questions about it. Students also read a graph about track and field and fill in the missing information based on the pictures.
Reading charts, graphs, and diagrams from nonfiction texts learners will read and interpret three types of nonfiction text features in this three-page practice worksheet. children will look at examples of a pie chart, a bar graph, and a diagram, then use them to answer questions.
1. Reading Making Pie Graphs Worksheets Worksheet Grade Lesson Planet Diagrams
It begins with the inciting event of the story, shows the rising action, the climax, the falling action and the conclusion. it begins with the inciting event of the story, shows the rising action, the climax, the falling action and the conclusion. Figurative language worksheets reading worksheets httpwww.
2. Worksheet Reading Column Line Graphs Worksheets Find Vertex Axis Symmetry Graphing Lines Standard Form Creating Graph Grade Diagrams
Immune system reading worksheet, visits the immune system is made up of a network of cells, tissues, and organs that work together to protect the body. help kids practice their reading comprehension skills while learning a little bit about bacteria. students will read the short text about bacteria and answer a few simple questions.
3. Reading Graphs Worksheet Diagrams Worksheets
Writing worksheets. cinema and television worksheets. games worksheets. worksheets with songs. teaching resources reading worksheets the plot. Reading comprehension - ensure that you draw the most important information on phase diagrams from the related lesson making connections - use your understanding of the phases of matter to better.
4. Reading Bar Graphs Math Diagrams Worksheets
This worksheet originally. Venn diagram worksheets. -. diagram word problems worksheet one of education template - ideas, to explore this diagram word problems worksheet idea you can browse by and. level basic grades -. the purpose of this module is to introduce language for talking about sets, and some.
5. Circulatory System Unit Reading Diagrams Worksheets Advanced
At the end, a handful of questions test her comprehension. Award-winning reading solution with thousands of leveled readers, lesson plans, worksheets and assessments to teach guided reading, reading proficiency and comprehension to k- students. This page features of my favorite short stories with questions.
these reading activities are perfect for classroom use. written by some of the greatest authors in history, these stories are short enough to cover in a single class period, and rich enough to warrant study. This exercise will help you to improve your diagram completion skills to make you more prepared for the reading test in general.
in this test need to label the diagram after reading the text below. if you want to learn more about this type of questions, you should check out diagram completion lesson. your answering strategy. This includes a one-page reading passage suitable for grades -, as well as a -diagram page.
6. Cross Curricular Reading Comprehension Worksheets Diagrams
Access thousands of high-quality, free k- articles, and create assignments with them for your students. These worksheets contain reading assignments for your seventh grade students. students will read a story or article and then be asked to answer questions about what they have just read.
grade students are ready for a more difficult reading passage. they are also ready to edit the grammar and or spelling within that work. Id language school subject reading - age main content compare and contrast other contents diagram add to my workbooks download file embed in my website or add to classroom.
Some of the worksheets below are free electricity and circuits worksheets definitions of what is electricity, what are circuits, open vs closed circuit, circuit elements switches, resistors, capacitors, inductors, transistors, resistors, electricity unit class notes atoms, electrical charge, electrical current, electrical circuit, types of electrical circuit, conductors of.
7. Interpreting Science Graphs Tables Worksheets Grade Graph Examples Curriculum Reviews Fraction Splat Game Counting Money Reading Diagrams
In these reading comprehension worksheets, students are asked questions about the meaning, significance, intention, structure, inference, and vocabulary used in each passage. each passage reads like an encyclopedic or technical journal article. answers for worksheets in this section can be found at the end of each individual worksheet.
8. Reading Comprehension Spring Themed Key Words Worksheet Diagrams Worksheets
Worksheets cover the following graphs and data topics sets and diagrams, bar graphs, linear graphs, plotting graphs, reading data on graphs, interpreting curves, coordinates (), location of points, plotting data on shopping lists etc. data graphs worksheets for kindergarten and st grade.
9. Reading Interpreting Line Graphs Video Lesson Transcript Diagrams Worksheets
Reading diagrams - my favorite weather. took a poll of his students favorite type of weather. the students had the choice of hot, cold, rain. the results can be seen in diagram below. cold warm . Plot diagram - a great way to visually sum up a story for yourself.
10. Water Cycle Coloring Simple Diagram Sentences Picture Kindergarten Worksheets Read Rainbow Reading Diagrams
Third grade reading worksheet rd grade. About this quiz worksheet. use these quiz questions to make sure you understand how to read electrical circuit diagrams. you will be given a series of examples and asked to successfully read them. Over most of our rd grade reading comprehension worksheets students will read a short, one-page passage, such as a fun short story or informative piece, and be asked to answer multiple-choice questions about it.
11. Reading Worksheet Intended Personal Work Autonomy Download Scientific Diagram Diagrams Worksheets
The greater the materials internal energy, the higher the. Diagrams, quizzes and worksheets of the heart. author molly smith, reviewer , last reviewed, reading time minutes do you want a fun way to learn the structure of the no further got you covered.
12. Reading Circle Graph Worksheet Diagrams Worksheets
. Grammar worksheets. learn about subjects and predicates, nouns, verbs, pronouns,adjectives, and adverbs with our huge collection of grammar. language arts worksheets. teach phonics, grammar, reading, and writing with our -language arts printable resources.
13. Reading Coordinates Map Worksheet Kids Network Diagrams Worksheets
A sentence diagram is a way to graphically represent the structure of a sentence, showing how words in a sentence function and relate to each other. the printable practice worksheets below provide supplemental help in learning the basic concepts of sentence diagramming.
14. Diagram Fiction Nonfiction Interactive Worksheet Reading Diagrams Worksheets
Diagrams and labels worksheet for second grade language arts. look at the diagram. read the labels and answer the related questions. category reading comprehension comprehension and analysis interpret information from diagrams, charts, and graphs. answer key here.. Reading diagram cold lemon loaf cake. ts printable worksheets www.mathworksheetskids.com name ) the diagram below represents customer preferences in a cafe. a) how many customers placed orders for cold at the cafe b) how many of them indulged in both, lemon loaf.
Venn diagram worksheets are a great tool for testing the knowledge of students regarding set theories and its concepts like union, intersection, etc. questions can be asked on the basis of blank diagrams provided and vice use is also possible. however, making such a worksheet is a tedious task.
15. Worksheet Preschool Homework Ideas Graduation Message Parents Reading Diagrams Worksheets Learn Games Thematic Unit Middle School Learning Themed Adults
There are three sheets for each separate reading passage, so be sure to print them all (we have numbered them to help out). Looking for the right diagramming sentences worksheet to engage in more productive learning find printable and free right here for your needs.
16. Fall Bar Graph Worksheet Kids Network Reading Diagrams Worksheets
Throughout the lesson, students develop and use free body diagrams to model forces and how they act on an object. to begin class, i have students do a paired reading with one partner on forces and force diagrams. to complete the paired reading, i ask students to switch off roles each paragraph as reader or writer.
Worksheets reading reading by topic compare contrast. learning to compare and contrast elements of stories. these compare and contrast worksheets help students practice identifying how characters, story details or ideas are alike or different. comparing and contrasting is a fundamental skill both for writing and reading comprehension.
Showing top worksheets in the category - reading diagram. some of the worksheets displayed are practice book o, reading diagram ts, reading diagram ts, work body or force diagrams, diagram view points work, the breakaway, write details that tell how the subjects are different in, muscular system tour skeletal muscle.
17. 5 Images Interpreting Graphs Worksheets Printable Charts Line Worksheet Reading Tables Diagrams
Reading worksheets by grade. Practice reading diagrams favorite dessert worksheet. diagram numbers curved or straight myteachingstation.com. grade math worksheets and problems diagrams global. diagram worksheets rd grade. Reading rockets is a national multimedia project that offers a wealth of research-based reading strategies, lessons, and activities designed to help young children learn how to read and read better.
our reading resources assist parents, teachers, and other educators in helping struggling readers build fluency, vocabulary, and comprehension skills. An extensive collection of diagram worksheets provided here will help students of grade through high school to use their analytical skills and study all possible logical relations between a finite collection of sets.
a number of interesting cut and paste and surveying activity worksheets are up for grabs a plethora of exercises that. Id language school subject reading and writing all grade levels age main content compare and contrast other contents diagram add to my workbooks download file embed in my website or add to classroom.
18. Top Free Body Diagrams Reading Worksheets
Special senses unit reading, diagrams, worksheets - only. price. add to cart view cart. this bundle includes age-appropriate () resources about the special senses (vision, hearing, taste and smell) including reading, color diagrams, activities and assessment ( pages total).
19. Free Maths Worksheets Page Reading Comprehension Grade Worksheet Math Tracing Kindergarten Printing Sheets Print Printable Learning Diagrams
Venn diagram worksheets with answer sheet these diagram worksheets are great for all levels of math. kids will be able to easily review and practice their math skills. simply download and print these diagram worksheets. easily check their work with the answer sheets.
20. Line Graph Definition Examples Video Lesson Transcript Reading Diagrams Worksheets
Printable language worksheets printable language worksheets will help a teacher or student to learn and comprehend the lesson strategy within a a lot quicker way. these workbooks are perfect for the two youngsters and adults to utilize. printable language worksheets can be utilized by anybody at your home for teaching and studying objective.
21. Conversion Graphs Teach Maths Free Resources Reading Diagrams Worksheets
It explains the characteristics of both fiction and non-fiction texts. students then use the information from the reading passage to complete the -diagram. this is a great in-class activity or ho. ) draw and mark a diagram that contains two congruent line segments and three congruent angles.
many answers ) draw and mark a diagram that has two perpendicular lines and four congruent line segments. many answers. ex a square.--create your own worksheets like this one with infinite geometry. free trial available at kutasoftware.com. standards literature.
ccss.ela-literacy.rl. describe the overall structure of a story, including describing how the beginning introduces the story and the ending concludes the action. ccss.ela-literacy.rl. refer to parts of stories, dramas, and poems when writing or speaking about a text, using terms such as chapter, scene, and stanza describe how each successive part.
22. Creating Reading Bar Picture Graphs Helping Math Diagrams Worksheets
Worksheets on diagrams with increasing difficulty. worksheets on diagrams with increasing difficulty. international resources. reading. doc,. drawing. report a problem. this resource is designed for teachers. view us version. categories ages. Reading data tables worksheets.
these worksheets help students who are learning to read data tables and charts for the first time. you will also organize data into data tables as well. you can also use our graphing or survey section. reading ice cream sales data tables analyze the pattern of ice cream sales at carver elementary.
Plan your lesson in nonfiction (reading) and map skills with helpful tips from teachers like you. non-fiction readers use charts to learn more information about their reading identify pieces of information they learned by reading a in. Improve your students reading comprehension with.
23. Grade 3 Maths Worksheets Pictorial Representation Data Handling Diagram Lets Share Knowledge Reading Diagrams
Our diagram worksheets are made for primary and high school math students. diagrams are used to picture the relationship between different groups or things to draw a diagram you start with a big rectangle (called universe) and then you draw to circles overlap each other (or not).
24. Mustang Engine Diagram Wiring Resources Body Beast Excel Worksheets Free Mathematics Estimation Math Telling Time Grade Decimal Games Printable Fraction Reading Comprehension Diagrams
In this science worksheet, your child draws circuit diagrams to represent two series circuits. science grade. print full size. print full size. skills circuit diagram, guided inquiry, observational skills, properties of circuits, science experiment to try, series circuit, understanding power.
25. Text Features Reading Diagrams Worksheets
Whether it is reading coordinates on a grid, drawing bar charts, using diagrams, recording results in a tally chart, sorting with a diagram or interpreting information on a block graph, a line graph or pictogram, in this section of the site find lots of worksheets to help support all numeracy learning at school that involves.
26. Features Informational Book Worksheet Fun Teaching Reading Diagrams Worksheets
Show a diagram from a task type question on an (or handouts) see worksheet for an example. put students into pairs for minutes to discuss what they can understand about the diagram using the information given. (q on worksheet). Venn diagram for grade - displaying top worksheets found for this concept.
some of the worksheets for this concept are diagrams, grade questions diagrams, reading diagram ts, math sets practice work answers, diagram, regular polygons quadrilaterals, shading,. This math worksheet gives your child practice reading a chart and using data to answer word problems.
math grade rd. print full size. print full size. skills interpreting data, reading charts, solving word problems. common core standards grade measurement data. ccss.math.content.b. Showing top worksheets in the category - reading a diagram. some of the worksheets displayed are reading diagram ts, write details that tell how the subjects are different in, reading diagram, answers to work on shading diagrams, diagrams,, math sets practice work answers.
27. Statistics Grade 3 Solutions Examples Videos Worksheets Games Activities Reading Diagrams
Carbon cycle reading diagram. printable version. main core tie. science - earth science standard objective. time frame. class periods of minutes each student worksheet (attached) background for teachers. this could be used as an introductory activity or as a review activity.
28. Image Result Plot Outline Diagram Scholastic Summer Reading Worksheets Worksheet Problem Solving Middle School Grade Math Units Time Money Workbook Diagrams
Shows a criteria diagram. then have questions to answer based on the diagram. this can be displayed as a class for modelling, or printed off and given for to complete as an independent activity. if you find this a useful resource, please rate or recommend.
29. Volume Worksheets Grade Printable Grammar High School Class Counting Dollars Coins Math Games Graders Reading Diagrams
Resources included in this next. Reading a diagram worksheet grade., by admin. posts related to reading a diagram worksheet grade. second grade diagram worksheet. grade plate tectonics diagram worksheet. second grade reading worksheet for grade.
30. Blank Diagram Worksheet Reading Diagrams Worksheets
Displaying top worksheets found for - reading a diagram. some of the worksheets for this concept are write details that tell how the subjects are different in, reading diagram ts, reading diagram ts, electricity unit, information in geometric diagrams, work body or force diagrams, graphs and charts, comprehension.
This page has a set printable diagram worksheets for teaching math. for diagrams used in reading and writing, please see our compare and contrast. level basic grades -. math diagrams. complete each v.d. by copying the numbers from the box into the correct place.
Worksheet practice reading diagrams favorite hobby. analyzing data sounds complicated, but it can be as simple as reading a diagram help your child practice this important skill with this math worksheet that challenges her to analyze a diagram about favorite hobbies.
31. Reading Diagrams Worksheet Grade Lesson Planet Worksheets
Feel free to print them off and duplicate for home or classroom use. Worksheets that motivate students. worksheets that save paper, ink and time. advertise here. grammar worksheets. vocabulary worksheets. listening worksheets. speaking worksheets. reading worksheets.
32. Reading Scales Divisions 9 Worksheets Questions Answers Teaching Resources Diagrams
Ereadingworksheets.comfigurative-languagefigurative-language-worksheets am. Some of the worksheets below are free electricity and circuits worksheets definitions of what is electricity, what are circuits, open vs closed circuit, circuit elements switches, resistors, capacitors, inductors, transistors, resistors, electricity unit class notes atoms, electrical charge, electrical current, electrical circuit, types of electrical circuit, conductors of.
33. Reading Scales Worksheet Math Resource Diagrams Worksheets
Phase changes worksheet name important things to know - do not skip over these sections. read and remember. kinetic theory of matter molecules are always moving. this is known as the kinetic theory of matter. we measure this kinetic energy with a thermometer as temperature.
Student reading page motion diagrams name block a motion diagram is like a composite photo that shows the position of an object at several equally spaced time intervals, like a stroboscopic photograph. we model the position of the object with a small dot or point with reference to a coordinate axis.
. This is reading comprehension, and it is an essential skill for success in school and in the real world. below are our reading comprehension worksheets grouped by grade, that include passages and related questions. click on the title to view the printable activities in each grade range, or to read the details of each worksheet. Worksheets reading comprehension. free reading comprehension worksheets. use these printable worksheets to improve reading comprehension. over free stories followed by comprehension exercises, as well as worksheets focused on specific comprehension topics (main idea, sequencing, etc). |
The Civil War was one of the bloodiest and most gruesome wars in history, and each battle during the war was as important as the next. The war lasted a lengthy four years, from April 12, 1861 until May 9, 1865, and although many efforts were made and documents were signed to prevent it, the war still came. The country, the North and the South, started disagreeing, causing it to split into the Union and the Confederacy. The Confederacy represented the South, which supported and encouraged slavery; people in the South lived in towns, their homes were spread apart from one another, and most lived on farms. People in the South earned money by selling and farming. On the other hand, the Union represented the North. The North was disgusted by slavery and wanted it abolished; people in the North lived in small urban cities in an industrial land and earned money by engineering, building, constructing, or painting. As previously stated, there were many treaties and documents signed to try to put the split country at peace, such as the Missouri Compromise, the Compromise of 1850, the Fugitive Slave Act, and the Kansas-Nebraska Act. However, none of those efforts were forceful enough to prevent the Civil War. After years and years of fighting, nobody was officially declared the winner, though technically the Union won. After the war there were many negative effects; the Civil War left the country in ruins and confused.

First and foremost, slavery. The Civil War was caused by the disagreement over whether slavery should be encouraged as a positive institution or condemned as an awful and cruel way to support the pre-existing economy. After the Union won the war, slavery was abolished and the slaves were freed. Although this was a revolutionary amendment, African Americans were still treated unfairly. In 1865-1866, under President Andrew Johnson, Southern states passed restrictive black codes to control the labor, freedom, and behavior of former slaves and African Americans. The North was outraged; they believed these codes enforced a lesser version of slavery. That made Andrew Johnson unpopular and handed the radical wing of the Republican Party a triumph. Radical Reconstruction, which officially began in 1867, gave Black Americans a new voice in government so they could express their views, and for the first time in history they won election to Southern state legislatures and even the U.S. Congress. Despite those victories, in less than a decade white supremacy took over in the South once again. Earlier, the newly elected President Abraham Lincoln had not made the abolition of slavery a goal of the Union war effort; he feared that doing so would drive the border slave states that were loyal to the Union to join the Confederacy. By the summer of 1862, however, slaves themselves pushed the issue, heading in the thousands to the Union lines as Lincoln's troops traveled into the South, and they insisted that emancipation was a political and military necessity. Then, in 1863, Lincoln was able to free over 3 million slaves.

Furthermore, there were many gunshots, battles, raids, and deaths during the Civil War, which left the split country in ruins. As quoted from nytimes.com, "All wars are environmental catastrophes.
Armies destroy farms and livestock; they go through forests like termites; they foul waters; they spread disease; they bombard the countryside with heavy armaments and leave unexploded shells; they deploy chemical poisons that linger far longer than they do; they leave detritus and garbage behind" (end quote). Additionally, according to a catalog, William T. Sherman wanted the Confederacy to feel the hard hand of war, and he apparently said, "We devoured the land." Philip H. Sheridan had the same attitude toward the environment and his opposing force: in the Shenandoah Valley in September and October of 1864, his army burned down farms and factories and, in addition to that, anything resourceful or useful to the Confederates. Gen. Ulysses S. Grant told him to "eat out Virginia clear and clean as far as they go, so that crows flying over it for the balance of the season will have to carry their provender with them" (quoted from nytimes.com). As that shows, the Union clearly had a hatred for the Confederacy; it drained resources the Confederacy needed and harmed the environment in the process of doing so. However, the destruction of resources committed by the North and South came back to haunt them, as armies were stripped of resources that were a necessity. Both armies barely lived off the land, trying to help themselves by salvaging any food, vegetable, or animal that came across their path. They cut down an unimaginable number of trees to fetch wood for fires, reconstruct army bases, cook food, and build railroads. This was one of the most desperate times for the armies and for people in general.

Last but not least, the financial debt of America. America was already having financial issues, and the war made the country drown in debt even more than it already was. Before the war, America was 65 million dollars in debt; however, the country tried to keep it low, with limited government, few federal expenses, and low taxes. On the eve of the war in 1860, all federal revenue derived from tariff charges. There was no income tax, no excise taxes, and no estate tax. This fulfilled Thomas Jefferson's vision that no worker, laborer, or mechanic, nor anybody for that matter, would pay such taxes. However, after the detrimental war, all of that came to a halt and never once came back. After only 4 years of war, the new national debt was 2.7 billion dollars. The annual interest on the debt was more than twice what the federal government had spent before 1860. What is even more disheartening is that Jefferson's vision was crushed: there was now a progressive income tax, an excise tax, and an estate tax as well. The revenue department had greatly expanded, and tax gatherers were involved and relevant in the great bureaucracy.

To conclude, the Civil War was one of the most important conflicts in history. It caused much sadness, poverty, disappointment, disease, loss, and, lastly, confusion. The country was split into two and financially drowning in debt; slaves were harmed in the process, and the environment was destroyed. Despite all the troubles the country faced, it persevered, reconstructed, and learned. Most importantly, it was finally united once again and stronger than ever.
It forms a downward sloping curve, called the indifference curve, where each point along it represents quantities of good X and good Y that you would be happy substituting for one another. If the marginal rate of substitution is constant, the indifference curve will be a straight line sloping downwards to the right at a 45° angle to either axis, as in Fig. When relative input usages are optimal, the marginal rate of technical substitution is equal to the relative unit costs of the inputs, and the slope of the isoquant at the chosen point equals the slope of the isocost line.
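In standard notation (using L for labor, K for capital, w for the wage rate, and r for the rental rate of capital), that optimality condition can be written as:

$$ MRTS_{L,K} = \frac{MP_L}{MP_K} = \frac{w}{r} $$

so at the cost-minimizing input mix the isoquant is tangent to the isocost line.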
But, as he continues the substitution process, the rate of substitution begins to fall. The same is the case at point I A, where he gets an additional left shoe without another right shoe. For instance, cigarettes and coffee cannot be perfect substitutes for each other. Taking the total differential of (i) above, we have:
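Assuming, as is standard, that the indifference curve is generated by a utility function U(x, y) held constant, the total differential gives:

$$ dU = \frac{\partial U}{\partial x}\,dx + \frac{\partial U}{\partial y}\,dy = 0 \quad\Longrightarrow\quad MRS_{xy} = -\frac{dy}{dx} = \frac{MU_x}{MU_y} $$

That is, the marginal rate of substitution equals the ratio of the marginal utilities, which is the slope of the indifference curve at that point.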
When these combinations are graphed, the slope of the resulting line is negative. If the marginal rate of substitution of hamburgers for hot dogs is 2, then the individual would be willing to give up 2 hot dogs in order to obtain 1 extra hamburger. Consider a graph representing the quantity of apples on the X-axis and the quantity of oranges on the Y-axis. He keeps on sacrificing Y for X until the point of satiety, after which his demand for X starts declining. Likewise, if we compare the combinations B and C, the consumer gave up 3 input units of capital in order to add 1 unit of labor.
What would the cost-minimizing input combination be after the price changes? That the marginal rate of substitution of X for Y diminishes can also be seen by drawing tangents at different points on an indifference curve: the amount of Y the consumer is prepared to give up to get additional units of X becomes smaller and smaller. The marginal rate of substitution is always changing along the curve, and at any given point it mathematically represents the slope of the curve at that point. Therefore, although the producer had sacrificed more units of capital input in the beginning, the rate of substitution fell with additional substitutions.
Do both labor and capital display diminishing marginal products? You are also told that when input prices change such that the wage rate is 8 times the rental rate, the firm adjusts its input combination but leaves total output unchanged. Find the cost-minimizing combination of labor and capital if the manufacturer wants to produce 121,000 airframes. Inadequacy of the factor: substituting one factor for the other continuously causes scarcity of the factor being replaced. As the consumer proceeds to have additional units of X, he is willing to give away fewer and fewer units of Y, so that the marginal rate of substitution falls from 5:1 to 1:1 in the sixth combination (Col.). All producers strive to generate the maximum amount of output for the minimum amount of cost.
Owing to the higher marginal significance of good X and the lower marginal significance of good Y in the beginning, the consumer will be willing to give up a larger amount of Y for a one-unit increase in good X. But as the stock of good X increases and the intensity of desire for it falls, his marginal significance of good X will diminish; on the other hand, as the stock of good Y decreases and the intensity of his desire for it increases, his marginal significance of good Y will go up. Hence, it is implied that the utility of the units foregone or given up is equal to the utility of the additional units of the commodity added to the combination. The rate of substitution will then be the number of units of Y for which one unit of X is a substitute.
If the marginal rate of substitution is increasing, the indifference curve will be concave to the origin, as in Fig. It is because of this fall in the intensity of want for a good, say X, that when its stock increases with the consumer, he is prepared to forego less and less of good Y for every increment in X. This means that the consumer faces a diminishing marginal rate of substitution: the more hamburgers they have relative to hot dogs, the fewer hot dogs the consumer is willing to give up for more hamburgers. On the indifference curve, the marginal rate of substitution is measured by the slope of the curve. Principle of Marginal Rate of Technical Substitution: the marginal rate of technical substitution is based on the principle that the rate at which a producer substitutes the input of one factor for another decreases more and more with every successive substitution. Does the production function display a diminishing marginal rate of technical substitution?
The amount of Y he is prepared to give up to get additional units of X becomes smaller and smaller. We can see that combination A consists of 1 cigarette and 12 cups of coffee.
Gene cloning is a common practice in molecular biology labs that is used by researchers to create copies of a particular gene for downstream applications, such as sequencing, mutagenesis, genotyping or heterologous expression of a protein. The traditional technique for gene cloning involves the transfer of a DNA fragment of interest from one organism to a self-replicating genetic element, such as a bacterial plasmid. This technique is commonly used today for isolating long or unstudied genes and protein expression. A more recent technique is the use of polymerase chain reaction (PCR) for amplifying a gene of interest. The advantage of using PCR over traditional gene cloning, as described above, is the decreased time needed for generating a pure sample of the gene of interest. However, gene isolation by PCR can only amplify genes with predetermined sequences. For this reason, many unstudied genes require initial gene cloning and sequencing before PCR can be performed for further analysis.
Related Topics: Gene Expression Analysis, Mutational Analysis, and Epigenetics and Chromatin Structure.
DNA sequencing is typically the first step in understanding the genetic makeup of an organism.
Sequencing uses biochemical methods to determine the order of nucleotide bases (adenine, guanine, cytosine, and thymine) in a DNA oligonucleotide. Knowing the sequence of a particular gene will assist in further analysis to understand the function of the gene. PCR is used to amplify the gene of interest before sequencing can be performed. Many biotechnology companies offer sequencing instruments, however, these instruments can be expensive. As a result, many researchers usually perform PCR in-house and then send out their samples to sequencing labs.
Site-directed mutagenesis is a widely used procedure for the study of the structure and function of proteins by modifying the encoding DNA. By using this method, mutations can be created at any specific site in a gene whose wild-type sequence is already known. Many techniques are available for performing site-directed mutagenesis. A classic method for introducing mutations, either single base pairs or larger insertions, deletions, or substitutions into a DNA sequence, is the Kunkel method.
The first step in any site-directed mutagenesis method is to clone the gene of interest. For the Kunkel method, the cloned plasmid is then transformed into a dut ung mutant of Escherichia coli. This E. coli strain lacks dUTPase and uracil-DNA glycosylase, which ensures that the plasmid containing the gene of interest will incorporate uracil (U) bases in place of some thymine (T) bases.
The next step is to design a primer that spans the region of the gene you wish to mutate and contains the mutation you want to introduce. PCR can then be used with the mutated primers to create hybrid plasmids; each plasmid will now contain one strand that lacks the mutation and contains uracil bases, and another strand that carries the mutation and lacks uracil.
The final step is to isolate this hybrid plasmid and transform it into a different strain that does contain the uracil-DNA glycosylase (ung) gene. The uracil-DNA glycosylase will degrade the strands that contain uracil, leaving only the strands with your mutation. When the bacteria replicate, the resulting plasmids will contain the mutation on both strands.
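As a rough illustration of the primer-design step only (not part of any kit protocol), the sketch below builds a hypothetical mutagenic primer by copying a window of a wild-type sequence and swapping in the desired base; the sequence, mutation position, and primer length are all invented for the example.

```python
# Hypothetical illustration of designing a mutagenic primer for
# site-directed mutagenesis. The wild-type sequence, mutation site and
# flank length are invented for the example.
wild_type = "ATGGCTAGCAAGGTGACCTTCGAGCTGAAAGGC"
mutation_pos = 13        # 0-based index of the base to change (a T here)
new_base = "A"           # introduce a T -> A substitution
flank = 10               # matching bases on each side so the primer anneals

mutant = wild_type[:mutation_pos] + new_base + wild_type[mutation_pos + 1:]

# The primer spans the mutation with flanking wild-type sequence on both sides.
start = max(0, mutation_pos - flank)
end = min(len(mutant), mutation_pos + flank + 1)
primer = mutant[start:end]

print("wild type:", wild_type)
print("mutant   :", mutant)
print("primer   :", primer)
```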
Genotyping is the process of determining the DNA sequence specific to an individual's genotype. This process can be accomplished by several techniques, such as high resolution melt (HRM) analysis, or any other mutation detection technique. All of these techniques will provide an insight into the individual's genotype, which can help determine specific sequences that can be manipulated and cloned for further analysis.
Heterologous protein expression uses gene cloning to express a protein of interest in a self-replicating genetic element, such as a bacterial plasmid. Heterologous expression is used to produce large amounts of a protein of interest for functional and biochemical analyses.
Two faraway supermassive black holes are on their way towards collision, scientists say. The two are now only one light year apart, and their collision will release as much energy as 100 million supernovae and will most probably bring their galaxy to an end.
According to the researchers, the event will take place in the next million years, a small amount of time on an astronomical time scale. The black holes are situated in the isolated PG 1302-102 galaxy which is located far away from our Milky Way, so the collision will not affect us at all.
Black holes are massive objects, some believed to date from the beginning of the universe. They are believed to form as a result of the gravitational collapse that follows a supernova explosion. The collapsed object is so massive and its gravitational field so strong that no electromagnetic radiation can escape it. The merger of the two black holes could potentially answer one of the hardest questions scientists have dealt with.
Matthew Graham, a scientist at Caltech, explains that the “final parsec problem” refers to the failure of existing theories of black hole mergers to foresee the final stages of the process. The merger of black holes is not yet understood, so the discovery of such a system and the ability to observe its evolution can bring new hope.
But how did the two black holes end up on a collision course in the first place? Apparently, almost all galaxies have black holes at their center. So, when two galaxies merge, the black holes orbit each other and begin to move closer and closer to one another. Scientists believe that when they move close together, they create gravitational waves, which are essentially ripples in the fabric of space-time. These are all theories of course, as no such events have been observed by astronomers.
It seems that the two supermassive black holes of the PG 1302-102 galaxy are moving towards each other at an unprecedented pace. Most black holes are not expected to collide for a few billion years.
It is presumed that ultimately, the Milky Way will merge with its neighbor, Andromeda. The PG 1302-102 system will provide scientists with a lot of new data, and it can help them predict what could happen to our home galaxy in a few billion years.
By the end of this section, you will be able to:
- Describe two important properties of the universe that the simple Big Bang model cannot explain
- Explain why these two characteristics of the universe can be accounted for if there was a period of rapid expansion (inflation) of the universe just after the Big Bang
- Name the four forces that control all physical processes in the universe
The hot Big Bang model that we have been describing is remarkably successful. It accounts for the expansion of the universe, explains the observations of the CMB, and correctly predicts the abundances of the light elements. As it turns out, this model also predicts that there should be exactly three types of neutrinos in nature, and this prediction has been confirmed by experiments with high-energy accelerators. We can’t relax just yet, however. This standard model of the universe doesn’t explain all
the observations we have made about the universe as a whole.
Problems with the Standard Big Bang Model
There are a number of characteristics of the universe that can only be explained by considering further what might have happened before the emission of the CMB. One problem with the standard Big Bang
model is that it does not explain why the density of the universe is equal to the critical density. The mass density could have been, after all, so low and the effects of dark energy so high that the expansion would have been too rapid to form any galaxies at all. Alternatively, there could have been so much matter that the universe would have already begun to contract long before now. Why is the universe balanced so precisely on the knife edge of the critical density?
Another puzzle is the remarkable uniformity
of the universe. The temperature of the CMB is the same to about 1 part in 100,000 everywhere we look. This sameness might be expected if all the parts of the visible universe were in contact at some point in time and had the time to come to the same temperature. In the same way, if we put some ice into a glass of lukewarm water and wait a while, the ice will melt and the water will cool down until they are the same temperature.
However, if we accept the standard Big Bang model, all parts of the visible universe were not
in contact at any time. The fastest that information can go from one point to another is the speed of light. There is a maximum distance that light can have traveled from any point since the time the universe began—that’s the distance light could have covered since then. This distance is called that point’s horizon distance
because anything farther away is "below its horizon"—unable to make contact with it. One region of space separated by more than the horizon distance
from another has been completely isolated from it through the entire history of the universe.
If we measure the CMB in two opposite directions in the sky, we are observing regions that were significantly beyond each other’s horizon distance at the time the CMB was emitted. We can see both regions, but they
can never have seen each other. Why, then, are their temperatures so precisely the same? According to the standard Big Bang model, they have never been able to exchange information, and there is no reason they should have identical temperatures. (It’s a little like seeing the clothes that all the students wear at two schools in different parts of the world become identical, without the students ever having been in contact.) The only explanation we could suggest was simply that the universe somehow started out
being absolutely uniform (which is like saying all students were born liking the same clothes). Scientists are always uncomfortable when they must appeal to a special set of initial conditions to account for what they see.
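As a rough illustration of the horizon-distance idea, the sketch below simply multiplies the speed of light by the age of the universe. It deliberately ignores the expansion of space (so the true comoving horizon is larger), and the two ages used are round, approximate values rather than numbers taken from this chapter.

```python
# Back-of-envelope horizon distance: the farthest a light signal could
# have traveled since the Big Bang, ignoring the expansion of space.
C = 299_792_458            # speed of light, m/s
YEAR_S = 3.156e7           # seconds per year (approx.)
LY_M = C * YEAR_S          # meters per light-year

age_universe_yr = 13.8e9   # approximate age of the universe today
age_at_cmb_yr = 3.8e5      # approximate age when the CMB was emitted

for label, age_yr in [("today", age_universe_yr), ("at CMB emission", age_at_cmb_yr)]:
    horizon_m = C * age_yr * YEAR_S          # distance = speed * time
    print(f"naive horizon {label}: {horizon_m / LY_M:.3g} light-years")

# Two regions separated by more than twice the horizon distance at a given
# time can never have exchanged light signals by that time.
```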
The Inflationary Hypothesis
Some physicists suggested that these fundamental characteristics of the cosmos—its flatness and uniformity—can be explained if shortly after the Big Bang (and before the emission of the CMB), the universe experienced a sudden increase in size. A model universe in which this rapid, early expansion occurs is called an inflationary universe
. The inflationary universe is identical to the Big Bang universe for all time after the first 10⁻³⁰ second. Prior to that, the model suggests that there was a brief period of extraordinarily rapid expansion, or inflation, during which the scale of the universe increased by a factor of about 10⁵⁰ times more than predicted by standard Big Bang models (Figure 1).
Figure 1. Expansion of the Universe:
This graph shows how the scale factor of the observable universe changes with time for the standard Big Bang model (red line) and for the inflationary model (blue line). (Note that the time scale at the bottom is extremely compressed.) During inflation, regions that were very small and in contact with each other are suddenly blown up to be much larger and outside each other’s horizon distance. The two models are the same for all times after 10⁻³⁰ second.
Prior to (and during) inflation, all the parts of the universe that we can now see were so small and close to each other that they could
exchange information, that is, the horizon distance included all of the universe that we can now observe. Before (and during) inflation, there was adequate time for the observable universe to homogenize itself and come to the same temperature. Then, inflation expanded those regions tremendously, so that many parts of the universe are now beyond each other’s horizon.
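As a quick back-of-the-envelope check on what a factor of about 10⁵⁰ means, the short calculation below converts that growth factor into a number of e-foldings; the 10⁻³⁰-second duration is only a rough figure based on the timescale quoted above, so the implied expansion rate is indicative, not a prediction.

```python
import math

# Growth factor of the scale of the universe during inflation (from the text).
growth_factor = 1e50
inflation_duration_s = 1e-30   # rough duration suggested by the text, seconds

# Number of e-foldings N such that e**N = growth_factor.
n_efolds = math.log(growth_factor)
print(f"e-foldings needed: {n_efolds:.0f}")          # about 115

# If the expansion were exponential at a constant rate H, then H = N / t.
H = n_efolds / inflation_duration_s
print(f"implied expansion rate: {H:.2e} per second")
```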
Another appeal of the inflationary model is its prediction that the density of the universe should be exactly equal to the critical density. To see why this is so, remember that curvature of spacetime is intimately linked to the density of matter. If the universe began with some curvature of its spacetime, one analogy for it might be the skin of a balloon. The period of inflation was equivalent to blowing up the balloon to a tremendous size. The universe became so big that from our vantage point, no curvature should be visible (Figure 2). In the same way, Earth’s surface is so big that it looks flat to us no matter where we are. Calculations show that a universe with no curvature is one that is at critical density. Universes with densities either higher or lower than the critical density would show marked curvature. But we saw that the observations of the CMB in The Cosmic Microwave Background
, which show that the universe has critical density, rule out the possibility that space is significantly curved.
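For reference, the critical density mentioned here follows from the standard relation ρ_crit = 3H₀²/(8πG), which is not derived in this chapter; the sketch below assumes a Hubble constant of about 70 km/s/Mpc.

```python
import math

# Critical density of the universe from the standard relation
# rho_crit = 3 * H0**2 / (8 * pi * G). The Hubble constant is an assumed value.
G = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2
H0_km_s_Mpc = 70.0                # assumed Hubble constant, km/s per Mpc
MPC_IN_M = 3.086e22               # meters per megaparsec

H0 = H0_km_s_Mpc * 1000 / MPC_IN_M          # convert to 1/s
rho_crit = 3 * H0**2 / (8 * math.pi * G)    # kg per cubic meter

print(f"H0 = {H0:.3e} 1/s")
print(f"critical density ~ {rho_crit:.2e} kg/m^3")
print(f"~ {rho_crit / 1.67e-27:.1f} hydrogen atoms per cubic meter")
```

With these inputs the result is roughly 9 × 10⁻²⁷ kg/m³, or about five hydrogen atoms per cubic meter, which is why "critical density" corresponds to an almost empty universe.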
Figure 2. Analogy for Inflation:
During a period of rapid inflation, a curved balloon grows so large that to any local observer it looks flat. The inset shows the geometry from the ant’s point of view.
Grand Unified Theories
While inflation is an intriguing idea and widely accepted by researchers, we cannot directly observe events so early in the universe. The conditions at the time of inflation were so extreme that we cannot reproduce them in our laboratories or high-energy accelerators, but scientists have some ideas about what the universe might have been like. These ideas are called grand unified theories (GUTs).
In GUT models, the forces that we are familiar with here on Earth, including gravity and electromagnetism, behaved very differently in the extreme conditions of the early universe than they do today. In physical science, the term force
is used to describe anything that can change the motion of a particle or body. One of the remarkable discoveries of modern science is that all known physical processes can be described through the action of just four forces: gravity, electromagnetism, the strong nuclear force, and the weak nuclear force (Table 1).
Table 1. The Forces of Nature

| Force | Relative Strength Today | Range of Action | Important Applications |
| --- | --- | --- | --- |
| Gravity | Weakest of the four | Whole universe | Motions of planets, stars, galaxies |
| Electromagnetism | Much stronger than gravity | Whole universe | Atoms, molecules, electricity, magnetic fields |
| Weak nuclear force | Stronger than gravity, weaker than the strong force | About the size of an atomic nucleus or less | Radioactive decay, production of neutrinos |
| Strong nuclear force | Strongest of the four | About the size of an atomic nucleus | The existence of atomic nuclei |
Gravity is perhaps the most familiar force, and certainly appears strong if you jump off a tall building. However, the force of gravity between two elementary particles—say two protons—is by far the weakest of the four forces. Electromagnetism—which includes both magnetic and electrical forces, holds atoms together, and produces the electromagnetic radiation that we use to study the universe—is much stronger, as you can see in Table 1. The weak nuclear force is only weak in comparison to its strong "cousin," but it is in fact much stronger than gravity.
Both the weak and strong nuclear forces differ from the first two forces in that they act only over very small distances—those comparable to the size of an atomic nucleus or less. The weak force is involved in radioactive decay and in reactions that result in the production of neutrinos. The strong force holds protons and neutrons together in an atomic nucleus.
Physicists have wondered why there are four forces in the universe—why not 300 or, preferably, just one? An important hint comes from the name electromagnetic force
. For a long time, scientists thought that the forces of electricity and magnetism were separate, but James Clerk Maxwell
(see the chapter on Radiation and Spectra
) was able to unify
these forces—to show that they are aspects of the same phenomenon. In the same way, many scientists (including Einstein) have wondered if the four forces we now know could also be unified. Physicists have actually developed GUTs that unify three of the four forces (but not gravity).
Figure 3. Four Forces That Govern the Universe:
The behavior of the four forces depends on the temperature of the universe. This diagram (inspired by some grand unified theories) shows that at very early times when the temperature of the universe was very high, all four forces resembled one another and were indistinguishable. As the universe cooled, the forces took on separate and distinctive characteristics.
In these theories, the strong, weak, and electromagnetic forces are not three independent forces but instead are different manifestations or aspects of what is, in fact, a single force. The theories predict that at high enough temperatures, there would be only one force. At lower temperatures (like the ones in the universe today), however, this single force has changed into three different forces (Figure 3). Just as different gases or liquids freeze at different temperatures, we can say that the different forces "froze out" of the unified force at different temperatures. Unfortunately, the temperatures at which the three forces acted as one force are so high that they cannot be reached in any laboratory on Earth. Only the early universe, at times prior to 10⁻³⁵ second, was hot enough to unify these forces.
Many physicists think that gravity was also unified with the three other forces at still higher temperatures, and scientists have tried to develop a theory that combines all four forces. For example, in string theory
, the point-like particles of matter that we have discussed in this book are replaced by one-dimensional objects called strings. In this theory, infinitesimal strings, which have length but not height or width, are the building blocks used to construct all the forms of matter and energy in the universe. These strings exist in 11-dimensional space (not the 4-dimensional spacetime with which we are familiar). The strings vibrate in the various dimensions, and depending on how they vibrate, they are seen in our world as matter or gravity or light. As you can imagine, the mathematics of string theory is very complex, and the theory remains untested by experiments. Even the largest particle accelerators on Earth do not achieve high enough energy to show whether string theory applies to the real world.
String theory is interesting to scientists because it is currently the only approach that seems to have the potential of combining all four forces to produce what physicists have termed the Theory of Everything.
Theories of the earliest phases of the universe must take both quantum mechanics and gravity into account, but at the simplest level, gravity and quantum mechanics are incompatible. General relativity, our best theory of gravity, says that the motions of objects can be predicted exactly. Quantum mechanics says you can only calculate the probability (chance) that an object will do something. String theory is an attempt to resolve this paradox. The mathematics that underpins string theory is elegant and beautiful, but it remains to be seen whether it will make predictions that can be tested by observations in yet-to-be-developed, high-energy accelerators on Earth or by observations of the early universe.
The earliest period in the history of the universe, from time zero to 10⁻⁴³ second, is called the Planck time. The universe was unimaginably hot and dense, and theorists believe that at this time, quantum effects of gravity dominated physical interactions—and, as we have just discussed, we have no tested theory of quantum gravity. Inflation is hypothesized to have occurred somewhat later, when the universe was perhaps 10⁻³⁵ second old and the temperature was 10²⁷ K. This rapid expansion took place when three forces (electromagnetic, strong, and weak) are thought to have been unified, and this is when GUTs are applicable.
After inflation, the universe continued to expand (but more slowly) and to cool. An important milestone was reached when the temperature was down to 10¹⁵ K and the universe was 10⁻¹⁰ second old. Under these conditions, all four forces were separate and distinct. High-energy particle accelerators can achieve similar conditions, and so theories of the history of the universe from this point on have a sound basis in experiments.
As yet, we have no direct evidence of what the conditions were during the inflationary epoch, and the ideas presented here are speculative. Researchers are trying to devise some experimental tests. For example, the quantum fluctuations in the very early universe would have caused variations in density and produced gravitational waves that may have left a detectable imprint on the CMB. Detection of such an imprint will require observations with equipment whose sensitivity is improved from what we have today. Ultimately, however, it may provide confirmation that we live in a universe that once experienced an epoch of rapid inflation.
If you are typical of the students who read this book, you may have found this brief discussion of dark matter, inflation, and cosmology a bit frustrating. We have offered glimpses of theories and observations, but have raised more questions than we have answered. What is dark matter? What is dark energy? Inflation explains the observed flatness and uniformity of the universe, but did it actually happen? These ideas are at the forefront of modern science, where progress almost always leads to new puzzles, and much more work is needed before we can see clearly. Bear in mind that less than a century has passed since Hubble demonstrated the existence of other galaxies. The quest to understand just how the universe of galaxies came to be will keep astronomers busy for a long time to come.
Key Concepts and Summary
The Big Bang model does not explain why the CMB has the same temperature in all directions. Neither does it explain why the density of the universe is so close to critical density. These observations can be explained if the universe experienced a period of rapid expansion, which scientists call inflation, about 10⁻³⁵ second after the Big Bang. New grand unified theories (GUTs) are being developed to describe physical processes in the universe before and at the time that inflation occurred.
grand unified theories:
(GUTs) physical theories that attempt to describe the four forces of nature as different manifestations of a single force
inflationary universe: a theory of cosmology in which the universe is assumed to have undergone a phase of very rapid expansion when the universe was about 10⁻³⁵ second old; after this period of rapid expansion, the standard Big Bang and inflationary models are identical
In population genetics, gene flow is the transfer of genetic variation from one population to another. If the rate of gene flow is high enough, the two populations are considered to have equivalent allele frequencies and therefore to be effectively a single population; it has been shown that it takes only "one migrant per generation" to prevent populations from diverging due to drift. Gene flow is an important mechanism for transferring genetic diversity among populations. Migrants change the distribution of genetic diversity within the populations by modifying the allele frequencies. High rates of gene flow can reduce the genetic differentiation between the two groups, increasing homogeneity. For this reason, gene flow has been thought to constrain speciation by combining the gene pools of the groups, thus preventing the development of differences in genetic variation that would have led to full speciation. In some cases migration may result in the addition of novel genetic variants to the gene pool of a species or population.
A number of factors affect the rate of gene flow. Gene flow is expected to be lower in species that have low dispersal or mobility, that occur in fragmented habitats, where distances between populations are long, and where population sizes are small. Mobility plays an important role in the migration rate, as mobile individuals tend to have greater migratory prospects. Animals are generally thought to be more mobile than plants, although seeds may be carried great distances by animals or wind. When gene flow is impeded, there can be an increase in inbreeding, measured by the inbreeding coefficient within a population. For example, many island populations have low rates of gene flow due to geographic isolation and small population sizes; the black-footed rock-wallaby has several inbred populations that live on various islands off the coast of Australia. The populations are so isolated that lack of gene flow has led to high rates of inbreeding. A decrease in population size leads to increased divergence due to drift, while migration reduces divergence and inbreeding.
Gene flow can be measured by using the effective population size (Ne) and the net migration rate per generation (m). Using the approximation based on the island model, the effect of migration can be calculated for a population in terms of the degree of genetic differentiation, FST, which represents the proportion of total molecular marker variation among populations, averaged over loci: FST = 1 / (4Nm + 1). When there is one migrant per generation, this inbreeding coefficient equals 0.2. When there is less than one migrant per generation, FST rises, which can lead to fixation and complete divergence; the most commonly observed FST values are below 0.25. Measures of population structure such as FST range from 0 to 1. When gene flow occurs via migration, the deleterious effects of inbreeding can be ameliorated. The formula can be rearranged to estimate the number of migrants per generation, Nm, when FST is known: Nm = (1 − FST) / (4FST). When gene flow is blocked by physical barriers, the result is allopatric speciation: a geographical isolation that does not allow populations of the same species to exchange genetic material.
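A minimal sketch of the two island-model relations quoted above, with arbitrary example values; this is only an illustration of the arithmetic, not an analysis of any real population.

```python
# Island-model approximation linking gene flow and differentiation:
#   FST = 1 / (4*Nm + 1)   and, rearranged,   Nm = (1 - FST) / (4 * FST)
def fst_from_migrants(nm):
    """Expected FST given Nm migrants per generation."""
    return 1.0 / (4.0 * nm + 1.0)

def migrants_from_fst(fst):
    """Estimated migrants per generation given an observed FST."""
    return (1.0 - fst) / (4.0 * fst)

print(fst_from_migrants(1))       # 0.2, as stated in the text
print(fst_from_migrants(0.1))     # < 1 migrant/generation -> higher FST (~0.71)
print(migrants_from_fst(0.25))    # FST of 0.25 implies ~0.75 migrants/generation
```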
Physical barriers to gene flow are usually, but not always, natural. They may include oceans or vast deserts. In some cases, they can be artificial, man-made barriers, such as the Great Wall of China, which has hindered the gene flow of native plant populations. One of these native plants, Ulmus pumila, demonstrated a lower prevalence of genetic differentiation across the wall than Vitex negundo, Ziziphus jujuba, Heteropappus hispidus, and Prunus armeniaca, which grow in the same habitat on either side of the Great Wall of China. This is because Ulmus pumila relies on wind pollination as its primary means of propagation, while the latter plants carry out pollination through insects. Samples of the same species which grow on either side have been shown to have developed genetic differences, because there is little to no gene flow to provide recombination of the gene pools.
This is a result of a reproductive barrier. For example, two palm species of Howea found on Lord Howe Island were found to have different flowering times correlated with soil preference, resulting in a reproductive barrier inhibiting gene flow. Species can live in the same environment, yet show limited gene flow due to reproductive barriers, specialist pollinators, limited hybridization, or hybridization yielding unfit hybrids. A cryptic species is a species that humans cannot tell apart from another without the use of genetics. Moreover, gene flow between hybrid and wild populations can result in loss of genetic diversity via genetic pollution and assortative mating.
A phylogenetic tree or evolutionary tree is a branching diagram or "tree" showing the evolutionary relationships among various biological species or other entities—their phylogeny—based upon similarities and differences in their physical or genetic characteristics. All life on Earth is part of a single phylogenetic tree. In a rooted phylogenetic tree, each node with descendants represents the inferred most recent common ancestor of those descendants, and the edge lengths in some trees may be interpreted as time estimates. Each node is called a taxonomic unit. Internal nodes are called hypothetical taxonomic units, as they cannot be directly observed. Trees are useful in fields of biology such as bioinformatics and phylogenetics. Unrooted trees illustrate only the relatedness of the leaf nodes and do not require the ancestral root to be known or inferred. The idea of a "tree of life" arose from ancient notions of a ladder-like progression from lower into higher forms of life. Early representations of "branching" phylogenetic trees include a "paleontological chart" showing the geological relationships among plants and animals in the book Elementary Geology, by Edward Hitchcock.
Charles Darwin produced one of the first illustrations and crucially popularized the notion of an evolutionary "tree" in his seminal book The Origin of Species. Over a century later, evolutionary biologists still use tree diagrams to depict evolution because such diagrams convey the concept that speciation occurs through the adaptive and semirandom splitting of lineages. Over time, species classification has become more dynamic. The term phylogenetic, or phylogeny, derives from the two Ancient Greek words φῦλον, meaning "race, lineage", and γένεσις, meaning "origin, source". A rooted phylogenetic tree is a directed tree with a unique node—the root—corresponding to the most recent common ancestor of all the entities at the leaves of the tree; the root node serves as the parent of all other nodes in the tree. The root is therefore a node of degree 2, while other internal nodes have a minimum degree of 3. The most common method for rooting trees is the use of an uncontroversial outgroup—close enough to allow inference from trait data or molecular sequencing, but far enough away to be a clear outgroup.
Unrooted trees illustrate the relatedness of the leaf nodes without making assumptions about ancestry. They do not require the ancestral root to be inferred. Unrooted trees can always be generated from rooted ones by omitting the root. By contrast, inferring the root of an unrooted tree requires some means of identifying ancestry; this is usually done by including an outgroup in the input data so that the root is between the outgroup and the rest of the taxa in the tree, or by introducing additional assumptions about the relative rates of evolution on each branch, such as an application of the molecular clock hypothesis. Both rooted and unrooted phylogenetic trees can be either bifurcating or multifurcating, and either labeled or unlabeled. A rooted bifurcating tree has two descendants arising from each interior node, while an unrooted bifurcating tree takes the form of an unrooted binary tree: a free tree with three neighbors at each internal node. In contrast, a rooted multifurcating tree may have more than two children at some nodes, and an unrooted multifurcating tree may have more than three neighbors at some nodes.
A labeled tree has specific values assigned to its leaves, while an unlabeled tree, sometimes called a tree shape, defines a topology only. The number of possible trees for a given number of leaf nodes depends on the specific type of tree, but there are always more multifurcating than bifurcating trees, more labeled than unlabeled trees, and more rooted than unrooted trees; the last distinction is the most biologically relevant. For labeled bifurcating trees there are (2n − 3)!! total rooted trees for n ≥ 2 and (2n − 5)!! total unrooted trees for n ≥ 3, where n represents the number of leaf nodes and !! denotes the double factorial. Among labeled bifurcating trees, the number of unrooted trees with n leaves is equal to the number of rooted trees with n − 1 leaves. A dendrogram is a general name for a tree, whether phylogenetic or not, and hence for the diagrammatic representation of a phylogenetic tree. A cladogram only represents a branching pattern.
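A small sketch of these counts, assuming the double-factorial formulas stated above; it also checks the identity from the text that the number of unrooted trees with n leaves equals the number of rooted trees with n − 1 leaves.

```python
def double_factorial(k):
    """k!! = k * (k - 2) * (k - 4) * ... down to 1 or 2."""
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

def rooted_trees(n):
    """Labeled bifurcating rooted trees with n leaves (n >= 2)."""
    return double_factorial(2 * n - 3)

def unrooted_trees(n):
    """Labeled bifurcating unrooted trees with n leaves (n >= 3)."""
    return double_factorial(2 * n - 5)

for n in range(3, 8):
    print(n, rooted_trees(n), unrooted_trees(n))
    assert unrooted_trees(n) == rooted_trees(n - 1)   # identity from the text
```

For example, 4 leaves give 15 possible rooted trees but only 3 unrooted ones, which is why the rooted/unrooted distinction matters so much in practice.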
Evolutionary game theory
Evolutionary game theory is the application of game theory to evolving populations in biology. It defines a framework of contests, strategies, and analytics into which Darwinian competition can be modelled. It originated in 1973 with John Maynard Smith and George R. Price's formalisation of contests, analysed as strategies, and the mathematical criteria that can be used to predict the results of competing strategies. Evolutionary game theory differs from classical game theory in focusing more on the dynamics of strategy change; this is influenced by the frequency of the competing strategies in the population. Evolutionary game theory has helped to explain the basis of altruistic behaviours in Darwinian evolution, and it has in turn become of interest to economists, sociologists and philosophers. Classical non-cooperative game theory was conceived by John von Neumann to determine optimal strategies in competitions between adversaries. A contest involves players. Games can be a single round or repetitive; the approach a player takes in making his moves constitutes his strategy.
Rules govern the outcome for the moves taken by the players, and outcomes produce payoffs for the players. Classical theory requires the players to make rational choices; each player must consider the strategic analysis that his opponents are making in order to make his own choice of moves. Evolutionary game theory started with the problem of how to explain ritualized animal behaviour in a conflict situation; the leading ethologists Niko Tinbergen and Konrad Lorenz proposed that such behaviour exists for the benefit of the species. John Maynard Smith considered that incompatible with Darwinian thought, where selection occurs at an individual level, so self-interest is rewarded while seeking the common good is not. Maynard Smith, a mathematical biologist, turned to game theory as suggested by George Price, though Richard Lewontin's attempts to use the theory had failed. Maynard Smith realised that an evolutionary version of game theory does not require players to act rationally—only that they have a strategy.
The results of a game show how good that strategy was, just as evolution tests alternative strategies for the ability to survive and reproduce. In biology, strategies are genetically inherited traits that control an individual's action, analogous with computer programs; the success of a strategy is determined by how good the strategy is in the presence of competing strategies, and by the frequency with which those strategies are used. Maynard Smith described his work in Evolution and the Theory of Games. Participants aim to produce as many replicas of themselves as they can, and the payoff is in units of fitness; it is always a multi-player game with many competitors. Rules include replicator dynamics, in other words how the fitter players will spawn more replicas of themselves into the population and how the less fit will be culled, expressed in a replicator equation; the replicator dynamics models heredity but not mutation, and assumes asexual reproduction for the sake of simplicity. Games are run repetitively with no terminating conditions.
Results include the dynamics of changes in the population, the success of strategies, any equilibrium states reached. Unlike in classical game theory, players do not choose their strategy and cannot change it: they are born with a strategy and their offspring inherit that same strategy. Evolutionary game theory encompasses Darwinian evolution, including competition, natural selection, heredity. Evolutionary game theory has contributed to the understanding of group selection, sexual selection, parental care, co-evolution, ecological dynamics. Many counter-intuitive situations in these areas have been put on a firm mathematical footing by the use of these models; the common way to study the evolutionary dynamics in games is through replicator equations. These show the growth rate of the proportion of organisms using a certain strategy and that rate is equal to the difference between the average payoff of that strategy and the average payoff of the population as a whole. Continuous replicator equations assume infinite populations, continuous time, complete mixing and that strategies breed true.
The attractors of the equations are equivalent to evolutionarily stable states. A strategy which can survive all "mutant" strategies is considered evolutionarily stable. In the context of animal behavior, this means such strategies are programmed and influenced by genetics, thus making any player or organism's strategy determined by these biological factors. Evolutionary games are mathematical objects with different rules and mathematical behaviours; each "game" represents different problems that organisms have to deal with, and the strategies they might adopt to survive and reproduce. Evolutionary games are given colourful names and cover stories which describe the general situation of a particular game. Representative games include hawk-dove, war of attrition, stag hunt, producer-scrounger, tragedy of the commons, and prisoner's dilemma. Strategies for these games include Hawk, Bourgeois, Defector and Retaliator; the various strategies compete under the particular game's rules, and the mathematics is used to determine the results and behaviours.
The first game that Maynard Smith analysed is the classic Hawk–Dove game. It was conceived to analyse a contest over a shareable resource; the contestants can be either a Hawk or a Dove. These are two subtypes, or morphs, of one species.
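To make the replicator idea concrete, here is a minimal sketch of replicator dynamics for a Hawk–Dove contest; the payoff parameters (resource value V = 2, injury cost C = 4) are illustrative assumptions, not values taken from the text.

```python
# Minimal replicator-dynamics sketch for the Hawk-Dove game.
# Payoff parameters below are illustrative assumptions.
V, C = 2.0, 4.0   # value of the resource, cost of injury

def payoff_hawk(p):
    """Expected payoff to a Hawk when Hawks have frequency p."""
    return p * (V - C) / 2 + (1 - p) * V

def payoff_dove(p):
    """Expected payoff to a Dove when Hawks have frequency p."""
    return (1 - p) * V / 2

p = 0.1           # initial frequency of Hawks
dt = 0.01
for _ in range(5000):
    w_h, w_d = payoff_hawk(p), payoff_dove(p)
    w_avg = p * w_h + (1 - p) * w_d
    # Replicator equation: dp/dt = p * (payoff of Hawk - average payoff)
    p += dt * p * (w_h - w_avg)
    p = min(max(p, 0.0), 1.0)

print(f"equilibrium hawk frequency ~ {p:.2f}")   # approaches V / C = 0.5
```

With these assumed payoffs the Hawk frequency settles near V/C, the mixed evolutionarily stable state, regardless of the starting frequency.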
An extinction event is a widespread and rapid decrease in the biodiversity on Earth. Such an event is identified by a sharp change in the diversity and abundance of multicellular organisms; it occurs when the rate of extinction rises sharply relative to the rate of speciation. Estimates of the number of major mass extinctions in the last 540 million years range from as few as five to more than twenty; these differences stem from the threshold chosen for describing an extinction event as "major", and the data chosen to measure past diversity. Because most diversity and biomass on Earth is microbial, and thus difficult to measure, recorded extinction events affect the observed, biologically complex component of the biosphere rather than the total diversity and abundance of life. Extinction occurs at an uneven rate. Based on the fossil record, the background rate of extinctions on Earth is about two to five taxonomic families of marine animals every million years. Marine fossils are used to measure extinction rates because of their superior fossil record and stratigraphic range compared to land animals.
The Great Oxygenation Event, which occurred around 2.45 billion years ago, was the first major extinction event. Since the Cambrian explosion, five further major mass extinctions have exceeded the background extinction rate; the most recent and arguably best-known, the Cretaceous–Paleogene extinction event, which occurred 66 million years ago, was a large-scale mass extinction of animal and plant species in a geologically short period of time. In addition to the five major mass extinctions, there are numerous minor ones as well, and the ongoing mass extinction caused by human activity is sometimes called the sixth extinction. Mass extinctions seem to be a Phanerozoic phenomenon, with extinction rates low before large complex organisms arose. In a landmark paper published in 1982, Jack Sepkoski and David M. Raup identified five mass extinctions. They were identified as outliers to a general trend of decreasing extinction rates during the Phanerozoic, but as more stringent statistical tests have been applied to the accumulating data, it has been established that multicellular animal life has experienced five major and many minor mass extinctions.
The "Big Five" cannot be so defined, but rather appear to represent the largest of a smooth continuum of extinction events. Ordovician–Silurian extinction events: 450–440 Ma at the Ordovician–Silurian transition. Two events occurred that killed off 27% of all families, 57% of all genera and 60% to 70% of all species. Together they are ranked by many scientists as the second largest of the five major extinctions in Earth's history in terms of percentage of genera that became extinct. Late Devonian extinction: 375–360 Ma near the Devonian–Carboniferous transition. At the end of the Frasnian Age in the part of the Devonian Period, a prolonged series of extinctions eliminated about 19% of all families, 50% of all genera and at least 70% of all species; this extinction event lasted as long as 20 million years, there is evidence for a series of extinction pulses within this period. Permian–Triassic extinction event: 252 Ma at the Permian–Triassic transition. Earth's largest extinction killed 57% of all families, 83% of all genera and 90% to 96% of all species.
The successful marine arthropod, the trilobite, became extinct. The evidence regarding plants is less clear; the "Great Dying" had enormous evolutionary significance: on land, it ended the primacy of mammal-like reptiles. The recovery of vertebrates took 30 million years, but the vacant niches created the opportunity for archosaurs to become ascendant. In the seas, the percentage of animals that were sessile dropped from 67% to 50%; the whole late Permian was a difficult time for at least marine life before the "Great Dying". Triassic–Jurassic extinction event: 201.3 Ma at the Triassic–Jurassic transition. About 23% of all families, 48% of all genera and 70% to 75% of all species became extinct. Most non-dinosaurian archosaurs, most therapsids, most of the large amphibians were eliminated, leaving dinosaurs with little terrestrial competition. Non-dinosaurian archosaurs continued to dominate aquatic environments, while non-archosaurian diapsids continued to dominate marine environments; the Temnospondyl lineage of large amphibians survived until the Cretaceous in Australia.
Cretaceous–Paleogene extinction event: 66 Ma at the Cretaceous – Paleogene transition interval. The event called the Cretaceous-Tertiary or K–T extinction or K–T boundary is now named the Cretaceous–Paleogene extinction event. About 17% of all families, 50% of all genera and 75% of all species became extinct. In the seas all the ammonites and mosasaurs disappeared and the percentage of sessile animals was reduced to about 33%. All non-avian dinosaurs became extinct during that time; the boundary event was severe with a significant amount of variability in the rate of extinction between and among different clades. Mammals and birds, the latter descended from theropod dinosaurs, emerged as dominant large land animals. Despite the popularization of these five events, there is no definite line separating them from other extinction events.
In evolution, co-operation is the process where groups of organisms work or act together for common or mutual benefits. It is defined as any adaptation that has evolved, at least in part, to increase the reproductive success of the actor's social partners. For example, territorial choruses by male lions discourage intruders and are likely to benefit all contributors. This process contrasts with intragroup competition, where individuals work against each other for selfish reasons. Cooperation exists not only in humans but in other animals as well; the diversity of taxa that exhibits cooperation is quite large, ranging from zebra herds to pied babblers to African elephants. Many animal and plant species cooperate with both members of their own species and with members of other species. Cooperation in animals appears to occur mostly for direct benefit or between relatives. Spending time and resources assisting a related individual may at first seem destructive to an organism's chances of survival but is beneficial over the long term.
Since relatives share part of the helper's genetic make-up, enhancing each individual's chance of survival may increase the likelihood that the helper's genetic traits will be passed on to future generations. However, some researchers, such as ecology professor Tim Clutton-Brock, assert that cooperation is a more complex process. They state that helpers may receive more direct, and less indirect, gains from assisting others than is commonly reported. These gains include protection from predation and increased reproductive fitness. Furthermore, they insist that cooperation may not be an interaction between two individuals but may be part of the broader goal of unifying populations.
Additionally, some species have been found to perform cooperative behaviors that may at first sight seem detrimental to their own evolutionary fitness. For example, when a ground squirrel sounds an alarm call to warn other group members of a nearby coyote, it draws attention to itself and increases its own odds of being eaten. There have been multiple hypotheses for the evolution of cooperation, all of which are rooted in Hamilton's models based on inclusive fitness; these models hypothesize that cooperation is favored by natural selection due to either direct fitness benefits or indirect fitness benefits. As explained below, direct benefits encompass by-product benefits and enforced reciprocity, while indirect benefits encompass limited dispersal, kin discrimination and the greenbeard effect. One specific form of cooperation in animals is kin selection, which involves animals promoting the reproductive success of their kin, thereby promoting their own fitness. Different theories explaining kin selection have been proposed, including the "pay-to-stay" and "territory inheritance" hypotheses.
The "pay-to-stay" theory suggests that individuals help others rear offspring in order to return the favor of the breeders allowing them to live on their land. The "territory inheritance" theory contends that individuals help in order to have improved access to breeding areas once the breeders depart. Studies conducted on red wolves support previous researchers' contention that helpers obtain both immediate and long-term gains from cooperative breeding. Researchers evaluated the consequences of red wolves' decisions to stay with their packs for extended periods of time after birth. While delayed dispersal helped other wolves' offspring, studies found that it extended male helper wolves' life spans; this suggests that kin selection may not only benefit an individual in the long-term through increased fitness but in the short-term through increased survival chances. Some research suggests; this phenomenon is known as kin discrimination. In their meta-analysis, researchers compiled data on kin selection as mediated by genetic relatedness in 18 species, including the western bluebird, pied kingfisher, Australian magpie, dwarf mongoose.
They found that different species exhibited varying degrees of kin discrimination, with the largest frequencies occurring among those who have the most to gain from cooperative interactions. Cooperation exists not only in animals but in plants. In a greenhouse experiment with Ipomoea hederacea, a climbing plant, results show that kin groups have higher efficiency rates in growth than non-kin groups do; this is expected to rise out of reduced competition within the kin groups. The inclusive fitness theory provides a good overview of possible solutions to the fundamental problem of cooperation; the theory is based on the hypothesis that cooperation helps in transmitting underlying genes to future generations either through increasing the reproductive successes of the individual or of other individuals who carry the same genes. Direct benefits can result from simple by-product of cooperation or enforcement mechanisms, while indirect benefits can result from cooperation with genetically similar individuals.
This is called mutually beneficial cooperation, as both actor and recipient depend on direct fitness benefits, which are broken down into two different types: by-product benefit and enforcement.
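As a small illustration of the inclusive-fitness reasoning referred to above, the sketch below evaluates Hamilton's rule, rb > c, the standard textbook condition under which a helping behaviour can be favoured; the rule itself and the numbers used are general examples, not figures stated in this passage.

```python
# Hamilton's rule: an altruistic act is favoured when r * b > c,
# where r = genetic relatedness, b = benefit to recipient, c = cost to actor.
# The scenarios below use arbitrary example numbers.
def favoured(r, b, c):
    return r * b > c

scenarios = [
    ("full sibling, cheap help",  0.5,   4.0, 1.0),  # 0.5*4 = 2 > 1 -> favoured
    ("cousin, cheap help",        0.125, 4.0, 1.0),  # 0.5 < 1 -> not favoured
    ("full sibling, costly help", 0.5,   4.0, 3.0),  # 2 < 3 -> not favoured
]
for name, r, b, c in scenarios:
    print(f"{name}: favoured = {favoured(r, b, c)}")
```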
Timeline of the evolutionary history of life
This timeline of the evolutionary history of life represents the current scientific theory outlining the major events during the development of life on planet Earth. In biology, evolution is any change across successive generations in the heritable characteristics of biological populations. Evolutionary processes give rise to diversity at every level of biological organization, from kingdoms to species, and down to individual organisms and molecules such as DNA and proteins. The similarities between all present-day organisms indicate the presence of a common ancestor from which all known species, living and extinct, have diverged through the process of evolution. More than 99 percent of all species that ever lived on Earth, amounting to over five billion species, are estimated to be extinct. Estimates of the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million have been documented and over 86 percent have not yet been described. However, a May 2016 scientific report estimates that 1 trillion species are currently on Earth, with only one-thousandth of one percent described.
While the dates given in this article are estimates based on scientific evidence, there has been controversy between the more traditional view of increasing biodiversity through a cone of diversity with the passing of time and the view that the basic pattern on Earth has been one of annihilation and diversification, and that in certain past times, such as the Cambrian explosion, there was great diversity. Species go extinct as environments change, as organisms compete for environmental niches, and as genetic mutation leads to the rise of new species from older ones. Biodiversity on Earth takes a hit in the form of a mass extinction, in which the extinction rate is much higher than usual. A large extinction event often represents an accumulation of smaller extinction events that take place in a relatively brief period of time. The first known mass extinction in Earth's history was the Great Oxygenation Event 2.4 billion years ago. That event led to the loss of most of the planet's obligate anaerobes. Researchers have identified five major extinction events in Earth's history since:
- End of the Ordovician: 440 million years ago, 86% of all species lost, including graptolites
- Late Devonian: 375 million years ago, 75% of species lost, including most trilobites
- End of the Permian, "The Great Dying": 251 million years ago, 96% of species lost, including tabulate corals, most extant trees and synapsids
- End of the Triassic: 200 million years ago, 80% of species lost, including all of the conodonts
- End of the Cretaceous: 66 million years ago, 76% of species lost, including all of the ammonites, ichthyosaurs, plesiosaurs and nonavian dinosaurs
Smaller extinction events have occurred in the periods between these larger catastrophes, with some standing at the delineation points of the periods and epochs recognized by scientists in geologic time.
The Holocene extinction event is under way. Factors in mass extinctions include continental drift, changes in atmospheric and marine chemistry, mountain formation, changes in glaciation, changes in sea level, and impact events. In this timeline, Ma means "million years ago," ka means "thousand years ago," and ya means "years ago."
- Hadean Eon: 4000 Ma and earlier.
- Archean Eon: 4000 Ma – 2500 Ma.
- Proterozoic Eon: 2500 Ma – 542 Ma. Contains the Palaeoproterozoic and Neoproterozoic eras.
- Phanerozoic Eon: 542 Ma – present. The Phanerozoic, the "period of well-displayed life," marks the appearance in the fossil record of abundant, shell-forming and/or trace-making organisms. It is subdivided into three eras, the Paleozoic, Mesozoic and Cenozoic, which are divided by major mass extinctions.
  - Paleozoic Era: 542 Ma – 251.0 Ma. Contains the Cambrian, Ordovician, Silurian, Devonian, Carboniferous and Permian periods.
  - Mesozoic Era: 251.4 Ma – 66 Ma. Contains the Triassic, Jurassic and Cretaceous periods.
  - Cenozoic Era: 66 Ma – present.
Code your space rocket
We begin our space adventure by building a rocket using the programming tool Scratch. First, we will draw our rocket using geometric shapes and then write some code that will send it into space!
The end result could look like this: https://scratch.mit.edu/projects/460478635//
Click on this link to be directed to the Scratch website: https://scratch.mit.edu/
There is a menu at the top of the page. Click on Create.
The cat that you see on screen is created when someone starts a new project. Since we don’t need the cat for our program, we will begin by removing it. To do that, go to the small figure of the cat in a box and click on the trash bin sign.
To create a new sprite, put the mouse on the button with the cat head, but don’t click on it. Instead, put the mouse on the brush and click on that. Go to the place where the cat used to be and name your sprite “Rocket”.
Now it’s time to start!
You want the nose of your rocket to be an equilateral triangle. An equilateral triangle is a triangle where all the sides are the same length. Scratch does not have a tool for drawing a triangle, so instead we will draw one using the tool for drawing lines.
Select the “Line” tool.
Click and drag to draw a line.
Draw a triangle with three sides of equal length. Scratch does not have a ruler, but if you have one close by, you can hold it up to the screen to see whether your lines are the same length.
If you want to delete the last line you drew, click on the “undo” arrow.
Now that you have drawn the outline of your triangle, it’s time to colour it. Select the “Fill” tool, which looks like a paint bucket. Then choose the colour you want to fill your triangle with and click in your triangle. Does the triangle change to the colour you have chosen? Awesome!
Isn’t it working? The Fill tool only works if there are no gaps in the outline of the triangle. Keep drawing lines until there are no gaps, then you can add the colour.
The rocket’s body is made from a rectangle shape. We can draw a rectangle by using the Line tool if we want to – or we can use the Rectangle tool.
Select the Rectangle tool.
Select a colour to fill.
Click and drag it to draw a rectangle.
If you want to move your rectangle or change its size, you don’t need to redo it. You can select the “Select” tool, which looks like a cursor, and then click on the rectangle to move it or change its size.
A rocket needs small fins at the bottom, so it travels straight through the air. You will need two fins, one on each side of the rocket.
A fin is made from a right-angle triangle. Ask your teacher if you are not sure what a right-angle triangle is.
Move the fins to the bottom of the rectangle. Now it's starting to look like a rocket!
So that other spaceships can see that your rocket comes from Sweden, we are going to label our rocket using the Text tool.
Select the Text tool and change the fill colour to the colour you want to write in.
Click inside the rectangle to choose where you will start writing.
Type the text “SNSA”.
SNSA stands for Swedish National Space Agency. In Swedish it is called Rymdstyrelsen.
If you are not happy with where your text is located, you can move it using the Move tool or clicking it and dragging it.
Now the basics of your rocket are finished and you can personalise it by decorating it the way you want.
You can decorate the rocket using the following geometric shapes: rectangle, triangle, circle and square.
Have you seen that there is a Circle tool that works in the same way as the Rectangle tool?
If you want to write anything else on your rocket you can use the Text tool as well.
For example, you could:
Make windows by drawing circles, triangles or squares.
Give your rocket a name and write the name on it.
Draw a star on the rocket by putting two triangles together.
or something completely different. You can choose!
Click on “Choose a Backdrop” in the lower right corner.
Then choose one of the space backdrops in the library.
Now we are going to code our first algorithm. We are going to make our rocket travel straight up into space.
An algorithm, or a script as it’s called here, has to be precise, complete and in the right order. That way, the computer will be able to understand the instructions. You can test this by answering the questions When? What? How? when you write your script.
Choose when the rocket will take off. When you click on the start flag? When you click on the rocket? When you press a key on the keyboard?
Explore the Events category to find a block that says when something will happen.
The rocket must travel upwards, into space. So it must move – look in the category called Motion. Which block of script do you want to move here?
Tip: You can’t use “go” because that is in the wrong direction... what do you want to use? Glide? Change?
How do you want the rocket to move? In what direction should it move?
We want it to travel upward! We need to use coordinates. The rocket has a coordinate to keep track of its location. A coordinate is composed of a number for X and a number for Y. Maybe you’ve already noticed that your rocket has a coordinate?
If you change X, the rocket will move to the left or right. A higher number moves it to the right and a lower one to the left.
If you change Y, the rocket moves up or down. A higher number moves it upwards and a lower one downwards.
We want the rocket to travel upwards. Can you find a block for change y?
Click on the block. Does the rocket move upwards?
How far do you want to travel? Change the value in the block and test what happens when you use a big number or a small number.
There is a block called forever. Try putting your change y block inside this block and see what happens.
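If everything is snapped together, one possible finished script could look like the sketch below (written out as text here; in Scratch you build it by dragging the blocks together). The “when green flag clicked” block comes from Events, “forever” from Control, and “change y by” from Motion; the value 10 is just an example, so try your own numbers.

```
when green flag clicked
forever
    change y by (10)
```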
Now you have made a space rocket. Nice work!
You can use this space rocket for the other tasks in Space theme on kodboken.se.
Don’t forget to save your project! You can give it the same name as this task, so it is easy to find it again.
Test your project
Show someone what you have created and let them test it. Click on SHARE so that others can find your game on Scratch. Go to the project page and let someone else try the game!
From a given point to draw a straight line equal to a given straight line.
Let A be the given point, and BC the given straight line. It is required to draw from the point A a straight line equal to BC.
From the point A to B draw the straight line AB (Post. 1).
Upon AB describe the equilateral triangle DAB (Book I., Prop. 1).
Produce the straight lines DA, DB, to E and F (Post. 2). From the centre B, at the distance BC, describe the circle CGH, meeting DF in G (Post. 3).
From the centre D, at the distance DG, describe the circle GKL, meeting DE in L (Post. 3).
Then AL shall be equal to BC.
PROOF.-Because the point B is the centre of the circle CGH, BC is equal to BG (Def. 15).
Because the point D is the centre of the circle GKL, DL is equal to DG (Def. 15).
But DA, DB, parts of them, are equal (Construction). Therefore the remainder AL is equal to the remainder BG (Ax. 3).
But it has been shown that BC is equal to BG; therefore AL and BC are each of them equal to BG.
Therefore AL is equal to BC (Ax. 1).
Therefore from the given point A a straight line AL has been drawn equal to the given straight line BC. Which was to be done.
From the greater of two given straight lines to cut off a part equal to the less.
Let AB and C be the two given straight lines, of which AB is the greater.
It is required to cut off from AB, the greater, a part equal to C, the less.
CONSTRUCTION.-From the point A draw the straight line AD equal to C (I. 2).
From the centre A, at the distance AD, describe the circle DEF, cutting AB in E (Post. 3).
Then AE shall be equal to C.
PROOF.-Because the point A is the centre of the circle DEF, AE is equal to AD (Def. 15).
But C is also equal to AD (Construction).
Therefore AE and C are each of them equal to AD.
Therefore AE is equal to C (Ax. 1).
Therefore, from AB, the greater of two given straight lines, a part AE has been cut off, equal to C, the less. Q. E. F.*
If two triangles have two sides of the one equal to two sides of the other, each to each, and have also the angles contained by those sides equal to one another: they shall have their bases, or third sides, equal; and the two triangles shall be equal, and their other angles shall be equal, each to each, viz., those to which the equal sides are opposite. Or,
If two sides and the contained angle of one triangle be respectively equal to those of another, the triangles are equal in every respect.
Let ABC, DEF be two triangles which have the two sides AB, AC equal to the two sides DE, DF, each to each, viz., AB equal to DE and AC to DF, and the angle BAC equal to the angle EDF.
The triangle ABC shall be equal to the triangle DEF;
* Q. E. F. is an abbreviation for quod erat faciendum, that is, "which was to be done."
And the other angles to which the equal sides are opposite, shall be equal, each to each, viz., the angle ABC to the angle DEF, and the angle ACB to the angle DFE.
PROOF.-For if the triangle ABC be applied to (or placed upon) the triangle DEF,
So that the point A may be on the point D, and the straight line AB on the straight line DE,
The point B shall coincide with the point E, because AB is equal to DE (Hypothesis).
And AB coinciding with DE, AC shall coincide with DF, because the angle BAC is equal to the angle EDF (Hyp.). Therefore also the point C shall coincide with the point F, because the straight line AC is equal to DF (Hyp.).
But the point B was proved to coincide with the point E. Therefore the base BC shall coincide with the base EF. Because the point B coinciding with E, and C with F, if the base BC do not coincide with the base EF, two straight lines would enclose a space, which is impossible (Ax. 10).
Therefore the base BC coincides with the base EF, and is therefore equal to it (Ax. 8).
Therefore the whole triangle ABC coincides with the whole triangle DEF, and is equal to it (Ax. 8).
And the other angles of the one coincide with the remaining angles of the other, and are equal to them, viz., the angle ABC to DEF, and the angle ACB to DFE.
Therefore, if two triangles have, &c. (see Enunciation). Which was to be shown.
The angles at the base of an isosceles triangle are equal to one another; and if the equal sides be produced, the angles upon the other side of the base shall also be equal.
Let ABC be an isosceles triangle, of which the side AB is equal to the side AC.
Let the straight lines AB, AC (the equal sides of the triangle), be produced to D and E.
The angle ABC shall be equal to the angle ACB (angles at the base),
And the angle CBD shall be equal to the angle BCE (angles upon the other side of the base).
CONSTRUCTION.-In BD take any point F.
From AE, the greater, cut off AG, equal to AF, the less (I. 3).
Join FC, GB.
PROOF.-Because AF is equal to AG (Construction), and AB is equal to AC (Hyp.),
Therefore the two sides FA, AC are equal to the two sides GA, AB, each to each;
And they contain the angle FAG, common to the two triangles AFC, AGB.
Therefore the base FC is equal to the base GB (I. 4);
And the remaining angles of the one are equal to the remaining angles of the other, each to each, to which the equal sides are opposite, viz., the angle ACF to the angle ABG, and the angle AFC to the angle AGB (I. 4).
And because the whole AF is equal to the whole AG, of which the parts AB, AC, are equal (Hyp.),
The remainder BF is equal to the remainder CG (Ax. 3).
Therefore the two sides BF, FC are equal to the two sides CG, GB, each to each;
And the angle BFC was proved equal to the angle CGB;
Therefore the triangles BFC, CGB are equal; and their other angles are equal, each to each, to which the equal sides are opposite (I. 4).
Therefore the angle FBC is equal to the angle GCB, and the angle BCF to the angle CBG.
And since it has been demonstrated that the whole angle ABG is equal to the whole angle ACF, and that the parts of these, the angles CBG, BCF, are also equal,
Therefore the remaining angle ABC is equal to the remaining angle ACB (Ax. 3),
Which are the angles at the base of the triangle ABC,
And it has been proved that the angle FBC is equal to the angle GCB (Dem. 11),
Which are the angles upon the other side of the base, Therefore the angles at the base, &c. (see Enunciation). Which was to be shown.
COROLLARY.-Hence every equilateral triangle is also equiangular.
If two angles of a triangle be equal to one another, the sides also which subtend, or are opposite to, the equal angles, shall be equal to one another.
Let ABC be a triangle having the angle ABC equal to the angle ACB.
The side AB shall be equal to the side AC.
For if AB be not equal to AC, one of them is greater than the other. Let AB be the greater.
CONSTRUCTION.-From AB, the greater, cut off a part DB, equal to AC, the less (I. 3). Join DC.
PROOF.-Because in the triangles DBC, ACB, DB is equal to AC, and BC is common to both,
Therefore the two sides DB, BC are equal
to the two sides AC, CB, each to each;
And the angle DBC is equal to the angle ACB (Hyp.)
Therefore the base DC is equal to the base AB (I. 4).
And the triangle DBC is equal to the triangle ACB (I. 4), the less to the greater, which is absurd.
Therefore AB is not unequal to AC; that is, it is equal to it. Q. E. D.*
* Q. E. D. is an abbreviation for quod erat demonstrandum, that is, "which was to be shown or proved."
In science, as in life, timing can be everything.
So it was when Cassandra Extavour, Professor of Organismic and Evolutionary Biology and of Molecular and Cellular Biology, set out to understand whether horizontal gene transfer — the process of passing genes between organisms without sexual reproduction — might be responsible for part of the makeup of a gene, known as oskar, that plays a critical role in the creation of germ cells in some insects.
Over the last decade, Extavour pitched the project to graduate students more than 10 times. They would investigate the idea but, ultimately, decide it was too complex and unwieldy.
Enter Leo Blondel.
As a first-year graduate student in the Molecules, Cells, and Organisms (MCO) program, in which students complete rotations in various labs before selecting one to join, Blondel was able to provide the strongest suggestive evidence yet that at least part of oskar actually came from bacterial genomes. The findings are described in a paper published in the journal eLife.
“I think this definitely moves the ball forward from where it was before, which was basically that we had no clue, and [our finding] basically says here are these lines of evidence for something that seems very likely,” Blondel said. “And it definitely improves the understanding of this particular gene. This particular gene appears out of nowhere in evolution and in only 480 million years becomes one of the most essential genes in the reproduction of insects, and the question was always: Where does it come from?”
To answer that question, Blondel performed a phylogenetic analysis — long the gold standard in proving horizontal gene transfer — and the results showed part of oskar’s sequence in insects is actually most closely related to a sequence found in bacteria.
It’s a finding that sheds new light on a gene that has long fascinated biologists.
“This gene is really interesting to people who work on germ cells because the phenotype if you have a mutation in this gene is that you have no germ cells,” Extavour said. “It was first identified in fruit flies, where it’s expressed in one corner of the embryo, which is where the germ cells form.”
Later studies revealed that oskar was sufficient by itself to produce functional germ cells, a feat that, even today, no other gene has demonstrated.
“But the one thing that has always been mysterious about oskar is where did it come from,” Extavour said. “For many decades, it was only found in fruit flies, then it was found in a couple mosquitos and a wasp. We found it in crickets a few years ago, and since then we’ve found a number of additional examples, but they’re all insects.
“And when we look at the sequence of the gene, it just doesn’t look like anything around,” she continued. “But if we look closely, there are two domains that look like domains we see in other organisms. One is called the LOTUS domain and the other is called OSK — short for oskar. The LOTUS domain looks like something people have found in many other eukaryotic proteins, but the thing that OSK is most similar to is not a sequence from any animal or eukaryote, but to bacterial sequences.”
Could it be, Extavour wondered, that those two separate parts of a single protein were actually co-opted from two different sources?
“I thought, ‘If that’s true, then, because we don’t think animals and bacteria reproduce with each other, that would have to occur through horizontal gene transfer,’” she said.
Finding evidence for that, however, proved frustratingly tricky.
Extavour first suggested the idea to a graduate student in 2008, but he quickly realized the project would be a lot to tackle, since the analysis would require collecting all the sequences that might be related to oskar, and then using statistics to break down whether and how they are related.
“The problem is that oskar evolves faster than the average gene,” Extavour said. “That means it’s hard to unambiguously identify the sequences that may be oskar’s relatives because they’re changing so fast.”
The project was picked up several years later by another grad student, Extavour said, who suggested that instead of searching for genes whose entire sequence appeared related to oskar, she search for genes that had sequences similar to either the LOTUS or OSK domains.
“She was able to identify about 100 new possible relatives from other insects that we hadn’t known had oskar before,” Extavour said.
Again, though, the project stalled as the student realized it would be too large to take on while she completed her degree.
Extavour later brought the project to Blondel, who remembers being intrigued, but not surprised by the idea.
“To me, there was already enough evidence to start forming a hypothesis that this might be the case,” he said. “It needed a lot more work, but the idea was there. But the one thing I said to her was that I had no idea how to prove it.”
Nevertheless, Blondel went to work and was able to add another 50 possible relatives to the list before creating a phylogenetic analysis, and the results showed that while the LOTUS domain is likely related to a similar sequence found in other animal proteins, the OSK domain is likely related to bacterial sequences.
The analysis didn’t end there. Blondel also found evidence that the insect and bacterial sequences have the same genetic “accent.”
That accent, Extavour said, has to do with how the bacterial and the insect genes use codons — three-base segments of DNA that code for a particular amino acid.
But much like a spoken accent, that codon-use signature can fade over time, and given the age of the oskar gene — scientists believe it to be approximately 500 million years old — Extavour said she and Blondel didn’t expect to detect much, if any, of that signature.
Still, the pair measured codon use in oskar four different ways, and while three turned up no evidence of differences, one did suggest that the OSK domain is different than the rest of the gene.
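Codon-use comparisons of this general kind start from something as simple as counting how often each three-base codon appears in a sequence. The sketch below (Python, with a made-up example sequence; it is not the statistical tests the authors actually used) shows the basic bookkeeping:

```python
from collections import Counter

def codon_usage(dna):
    """Count how often each three-base codon appears in an in-frame DNA sequence."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - len(dna) % 3, 3)]
    counts = Counter(codons)
    total = sum(counts.values())
    return {codon: n / total for codon, n in counts.items()}  # relative frequencies

# Toy example: the same amino acids can be encoded with different codon "accents".
print(codon_usage("ATGGCTGCAGCGTAA"))
```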
Extavour and Blondel were even able to come up with a hypothesis about where the foreign OSK sequence may have come from.
“We noticed that many of the sequences that appear to be closely related to the OSK domain came from bacteria that are sometimes endosymbionts — bacteria that live inside insects,” Extavour said. “There are many insect endosymbionts that don’t just live inside the animal, they live in the cytoplasm of the cells … and not just any cells, but germ cells.
“One of the best-studied insect endosymbionts is called Wolbachia, which lives inside the cytoplasm of germ cells, and some of the best-documented examples of horizontal gene transfer from bacteria to an insect have been from Wolbachia,” she continued. “So if you are living inside the cell that contains the DNA that is going to go into the next generation, maybe some of your DNA can make it into that germ cell nucleus and now you’re contributing to the next generation’s genome.”
While the paper offers important insights into a gene that is critical for insect reproduction, Blondel hopes it may offer other biologists a new approach to finding similar insights in other genes.
“This paper is the best evidence we can gather that this is the most likely scenario, but we can’t prove it in the sense that this is something that happened at least 450 million years ago,” he said. “But to me, one of the most important messages of the paper is to say maybe it’s time to look at things differently, because there may be something we’re completely missing that could change our understanding of biology. There are no tools scanning for this right now — no one is looking at the domain granularity in genes.”
This research was supported with funding from Harvard University.
The hexagonal flow vortex at the north pole of Saturn is one of the most striking and strange phenomena in the solar system. Now a simulation sheds new light on how this structure is created from clouds and storms. Accordingly, the roots of the flow pattern lie at a greater depth than long thought – which could explain the decades-long stability of the hexagon. In the simulation, interactions between zonal flow bands were sufficient to form the central vortex surrounded by cyclones rotating in opposite directions. Superficial turbulence in the Saturn atmosphere changes its arrangement slightly, but the basic pattern remains, as the researchers report.
The strange, hexagonal flow pattern was discovered in the 1980s, when researchers were evaluating the data from the Voyager 2 space probe. The data showed a striking geometric cloud pattern at the north pole of the gas planet that has not been observed on any other planet in the solar system. This pattern consists of a central vortex surrounded by a hexagonal, relatively sharply delineated flow band extending to around 15 degrees from the North Pole. This wind band moves in the direction of rotation around the planet and has remained largely unchanged in shape and position for around 40 years. But why this structure is so distinctly hexagonal, why only Saturn has such a hexagon and what physical laws lie behind it has so far remained unclear. There have already been some ideas. “But there was no model for how such large-scale polygonal flows could arise in the highly turbulent atmosphere of Saturn,” explain Rakesh Yadav and Jeremy Bloxham of Harvard University.
How deep does the hexagon go?
Most planetary researchers assume that this hexagonal pattern is based on a variant of Rossby waves – large-scale waves in a gas or liquid that are created by the interaction of planetary rotation with the so-called Coriolis force. This occurs because the planet rotates faster at the equator than at the poles. In the Earth’s atmosphere, for example, this ensures that air masses and winds are deflected towards the poles. On Saturn, it is commonly assumed that local turbulence in the wind band near the pole, combined with this Coriolis effect, could create the corners in the polar flow pattern. However, the depth of this hexagon and where its roots lie have so far been controversial: “There are two basic theories: in one the hexagon is flat and only extends tens to hundreds of kilometers in depth, in the other it extends for thousands of kilometers down,” say Yadav and Bloxham.
In order to trace the roots of the hexagon, they developed and ran a model simulation that also includes the processes in deep layers. To do this, they constructed a virtual Saturn whose deeper, stable inner layers are surrounded by an atmospheric layer beginning at around 90 percent of the Saturn radius. This outer layer consisted of shells of decreasing fluid density. The researchers then set this virtual gas planet in rotation and observed how the fluids behaved under the influence of the rotation and of the convection arising between the layers of different density.
Shape-forming vortices only visible at depth
It was shown: “In the course of the simulation, strong zonal currents gradually develop from the rotating turbulent convection,” the researchers said. Similar to what is observed on Saturn and also on Jupiter, wind bands arise that move alternately with and against the direction of rotation. “But the dynamics of this system include more than just the zonal currents,” report Yadav and Bloxham. “At the same time as the currents, well-defined, large-scale eddies are created in the middle and high latitudes.” And just as on the real Saturn, a large eddy formed directly over the North Pole in the simulation. Immediately to its south, it was framed by three smaller, counter-rotating cyclones, which in turn were followed by a whole set of even smaller cyclones. These structures resulted in a pattern resembling a polygon with nine corners. All of these structures appeared most clearly at around 95 percent of Saturn’s radius – corresponding to a great depth.
According to the researchers, this demonstrates that the Saturn hexagon must have deep roots – and that simple interactions between rotation, the Coriolis force and fluid density can generate such polygonal patterns. The simulation also revealed that, as the depth decreased, the turbulence and wind speeds in the gas envelope of the virtual Saturn increased. Because the central vortex above the pole is strong and energetic enough to withstand these disturbances, it persists up to the surface. The smaller cyclones, which contribute to shaping the pattern at depth, are masked by the turbulence near the surface. “Similarly, we can imagine the scenario on Saturn, where the hexagonal shape of the jet is stabilized by six neighboring vortices,” explain Yadav and Bloxham. “But they are hidden by the more chaotic convection of the layers near the surface.”
Source: Rakesh Yadav and Jeremy Bloxham (Harvard University), Proceedings of the National Academy of Sciences; doi:10.1073/pnas.2000317117
Unit 1: A Panorama of American Eras
The 11th Grade United States History course will allow for students to focus on the major themes of equality, economics, and foreign policy to help drive and define the Nation's history in the 20th and 21st centuries. This introductory "launch" unit focuses students on many of the reading, research, writing, and historical thinking skills being applied throughout the course. Throughout the two trimesters, using a "workshop-style" model, students will have several days each unit to work with their peers and the instructor as they build a culminating opinion/argumentative research paper and performance-based project in the final unit of the course. Designated activities will also be dedicated to an application of these skills and reflection on the process. In addition, a focus on inquiry, research, sourcing, and communication skills will also be assessed as students will be required in subsequent units to participate in classroom discourse as they challenge their peers' thinking.
Specifically, this initial unit also includes a foundational "panoramic" analysis of American history based on an arrangement of chrono-thematic eras from the nation's founding through the modern day. Students will trace and analyze key events, statistics, and development of ideas/innovation over eras to both determine patterns and inspire research topic selection. Workshop sessions and deeper exploration of historical eras in American history help the students build the foundations of research and opinion/argumentative writing. An end of unit performance allows students to create an informative visualization of a significant era in American history which will be referenced by the class throughout the course.
21st Century Capacities: Analyzing, Synthesizing
Unit 2: Liberty and Equality
The focus of this unit design is to have students challenge themselves and their thinking about why, despite incredible historical efforts of marginalized groups, inequalities have perpetuated for African-Americans, Latino Americans, women, Native American/Indians, LGBT groups, groups with mental or physical disabilities, etc. Students will use an inquiry approach to analyze the definition of equality/inequality, different natures of oppression, alternative approaches for overcoming oppression, the variety of obstacles that have stood in the way of equality, and what subsequent actions a particular group might take. In order to develop a deeper understanding of these trends, students will investigate what equality really means to the individual. Students will question why all Americans have not always experienced equality and what was necessary to change conditions and achieve equality and justice. The unit will conclude with students evaluating their understanding of past civil rights movements and applying this to present obstacles of inequality. The student will be able to make informed decisions through the planning of a grassroots movement designed to challenge a specific form of inequality (i.e. gender pay gap). Reading, research, opinion/argumentative construction, and historical thinking skills will also be of focus during several workshop sessions as students develop a thesis statement for their final research topic, begin to make connections to the themes from unit 2, and select a related book which will be read throughout the course. Students will also choose groups for the collaborative end of course project.
21st Century Capacities: Synthesizing, Perseverance
Unit 3: Economics and the Land of Opportunity
In this unit, students will research major economic trends impacting the US Economy in the 20th and 21st centuries. Through this understanding of historical economic policy, internal factors, external influences, and resultant socio-economic trends within the US population, students will contemplate what a "healthy economy" might really look like and the role that government has played (and should play). A closer look at labor, market, and other historical factors impacting the economy allows students to understand that almost everything can be measured using a cost/benefit analysis. Every economic choice has the potential to impact others and students will investigate these impacts from multiple perspectives. Each concept will have students grappling with compelling questions through the lens of a particular economic focus, ranging from the individual family unit to the global economy. Understanding socioeconomic factors leading to opportunities and/or disparity in wealth as well as the role the government plays in such factors will also help students develop an end of unit economic policy platform that outlines what role the government should play in the 21st century economy. Through the analysis of data, texts, first-hand experiences, and in-class tasks, students will be equipped to make informed decisions based on forecasted economic trends in the United States and the global economy.
21st Century Capacities: Analyzing, Decision Making
This unit provides students with the opportunity to embark on an in-depth, independent study of a topic of personal interest. Throughout this unit, using a "workshop-style" model, students will work with their peers and the instructor as they build a culminating argumentative research paper. Activities will be dedicated to an application of skills as well as reflection on the research and writing processes. Students will begin with the vital task of proper topic selection, followed by careful development of a workable research question and then the construction of a strong thesis statement. Students will narrow, broaden, or shift the focus of their papers as they research using both primary and secondary sources. Students will actively search for, evaluate, and read a variety of sources, take organized notes on evidence that supports their thesis statements, while properly citing all sources. After they organize their evidence using an outline structure, they will begin writing a formal research paper that clearly supports their thesis statement and demonstrates their understanding of the topic. Their papers will not be mere reports on historical facts, but rather argumentative papers that add to the scholarship on their topics. Throughout the process, teachers will conference with students and help guide them through this independent project.
21st Century Capacities: Analyzing, Synthesizing
Unit 4: American Foreign Policy
As American society developed throughout the 20th century, the country's position in the world substantially changed. Inevitably, like all nations, the United States continues to balance its global interests while holding true to its founding principles. With the rapidly changing dynamics of the 21st Century information age, the intricacies of diplomacy, collective security, and globalization challenge the US government and its interests and actions around the globe. A major dilemma, which has become more apparent with global extremism and polarization, is that of determining a particular approach or course of action and the potential impacts of that action. Included in the evaluation of foreign policy is the importance of geographic reasoning and its influence on diplomatic decision-making. This unit encourages students not only to develop a clarity in their underlying beliefs about foreign policy at specific "decision points" in the 20th century, but also to understand the impact of the various foreign policy tools available to preempt, respond to, and influence contemporary global issue and conditions. Continued reading, research, opinion/argumentative writing, and historical thinking skills will also be of focus during several workshop sessions as students further develop their thesis statement and outline, make deeper connections to the themes from unit 4, and begin working with peers and the instructor to draft a paper and reflect on the process.
21st Century Capacities: Synthesizing, Engaging in Global Issues
Unit 5: A Panorama of American Legacies
The culminating unit in the US History course has the dual purpose of finalizing the research paper and also helping students understand how contemporary American culture reflects the themes explored throughout the course. Students initiated the process of research and opinion/argumentative, thesis-based writing in the "launch" unit of the course. Through regular workshop experiences, students have developed and applied historical thinking skills as they prepared to construct a final draft for their research paper. The culminating stages in this workshop process will be for the student to produce a final paper and an annotated works cited, which has incorporated major themes from the course, independent research assignments, and review/feedback from both classmates and instructor.
For the latter half of the unit, students will inquire as to how the course has helped define Americanism and American Culture. What does it mean to be an American and how have we defined ourselves based on the lessons and experiences from our nation's story? An essential aspect of the United States History course has been to dive deeply into the course themes of social and political equality, economics, and foreign policy, requiring that students draw on foundational historical knowledge, research, and historical thinking skills. Through a functional definition of Americanism and American Culture, including its progression and evolution or consistency, students will use this summative "landing" unit as they evaluate and present an analysis of social, political, economic, and foreign policy issues from throughout the US History course. Students will essentially be using the US history course to better understand who we are today as an American people. The "Legacy Project," a final performance-based assessment, will ask students to serve as consultants to the business and entertainment community, communicating the importance of these cultural themes throughout American history.
21st Century Capacities: Synthesizing, Product Creation
Videos, worksheets, examples, solutions, and activities to help Geometry students learn about rotational symmetry.
In these lessons, we will learn
- what is rotational symmetry?
- how to find the order of rotation
- how to find the angle of rotation
Geometry Worksheets for Rotations, Reflections, and Symmetry
What is Rotational Symmetry?
Symmetry in a figure exists if there is a reflection, rotation, or translation that can be performed and the image is identical. Rotational symmetry exists when the figure can be rotated and the image is identical to the original.
Regular polygons have a degree of rotational symmetry equal to 360 degrees divided by the number of sides.
What is the order of rotation and angle of rotation?
A figure has rotational symmetry if it coincides with itself in a rotation less than 360 degrees.
The order of rotation
of a figure is the number of times it coincides with itself in a rotation less than 360 degrees.
The angle of rotation for a regular figure is
360 divided by the order of rotation.
How to find the order of rotational symmetry of a shape?
The order of rotational symmetry is the number of times you can rotate a shape so that it looks the same. The original position is counted only once (i.e., it is not counted again when the shape returns to its original position).
The order of rotational symmetry of a regular polygon is the same as the number of sides of the polygon.
You can also deduce the order of rotational symmetry by knowing the smallest angle you can rotate the shape through to look the same.
180° = order 2,
120° = order 3,
90° = order 4.
The product of the angle and the order would be 360°.
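For anyone who likes to check these facts with a little code, here is a minimal sketch in Python (the function name is just for illustration) that computes the order and angle of rotational symmetry of a regular polygon:

```python
def rotational_symmetry(num_sides):
    """Return (order, angle in degrees) of rotational symmetry for a regular polygon."""
    order = num_sides            # the order equals the number of sides
    angle = 360 / num_sides      # the smallest angle of rotation that maps the shape to itself
    return order, angle

# An equilateral triangle has order 3 (120 degrees), a square order 4 (90 degrees), and so on.
for sides in (3, 4, 6):
    order, angle = rotational_symmetry(sides)
    print(f"{sides} sides: order {order}, angle {angle} degrees")
    assert order * angle == 360  # the product of the order and the angle is always 360 degrees
```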
How to relate a reflection to a rotation and examine rotational symmetry within an individual figure
The following video will give the solutions for the Rotations, Reflections, and Symmetry worksheets (Common Core, Geometry Lesson 15, Module 1).
The original triangle, labeled A, has been reflected across the first line, resulting
in the image labeled B. Reflect the image across the second line.
Carlos looked at the image of the reflection across the second line and said,
“That’s not the image of triangle A after two reflections; that’s the image of
triangle A after a rotation!” Do you agree? Why or why not?
When you reflect a figure across a line, the original figure and its image share a line of symmetry, which we have called
the line of reflection. When you reflect a figure across a line and then reflect the image across a line that intersects the
first line, your final image is a rotation of the original figure. The center of rotation is the point at which the two lines of
reflection intersect. The angle of rotation is determined by connecting the center of rotation to a pair of corresponding
vertices on the original figure and the final image. The figure above is a 210° rotation (or 150° clockwise rotation).
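This relationship between two reflections and a rotation can be checked numerically. The sketch below (Python with NumPy, using lines through the origin for simplicity; the angles 20° and 125° are chosen only so that the resulting rotation matches the 210° mentioned above) confirms that reflecting across a line at angle α and then across a line at angle β is the same as rotating by 2(β − α) about their intersection point:

```python
import numpy as np

def reflection(theta):
    """Matrix that reflects points across a line through the origin at angle theta (radians)."""
    return np.array([[np.cos(2 * theta),  np.sin(2 * theta)],
                     [np.sin(2 * theta), -np.cos(2 * theta)]])

def rotation(phi):
    """Matrix that rotates points counterclockwise by phi (radians) about the origin."""
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

alpha, beta = np.radians(20), np.radians(125)      # two lines of reflection, 105 degrees apart
composed = reflection(beta) @ reflection(alpha)    # reflect across the first line, then the second
print(np.allclose(composed, rotation(2 * (beta - alpha))))  # True: a 210 degree rotation
```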
Line of symmetry of a figure: This is an isosceles triangle. By definition, an isosceles triangle has at least
two congruent sides. A line of symmetry of the triangle can be drawn from the top vertex to the
midpoint of the base, decomposing the original triangle into two congruent right triangles. This line of
symmetry can be thought of as a reflection across itself that takes the isosceles triangle to itself. Every
point of the triangle on one side of the line of symmetry has a corresponding point on the triangle on
the other side of the line of symmetry, given by reflecting the point across the line. In particular, the
line of symmetry is equidistant from all corresponding pairs of points. Another way of thinking about
line symmetry is that a figure has line symmetry if there exists a line (or lines) such that the image of
the figure when reflected over the line is itself.
Does every figure have a line of symmetry?
How to find the angle of rotation for regular polygons?
The angle of rotation of a regular polygons is equal to 360 degrees divided by the number of sides.
The order of Rotational Symmetry tells us how many times a shape looks the same when it rotates 360 degrees. Determine the order of rotational symmetry for a square, a rectangle and an equilateral triangle.
Basic Rotational Symmetry
Introduction to rotational symmetry with fun shapes.
Learn to identify rotational symmetry.
Tell whether each figure has rotational symmetry. If it does, find the smallest fraction of a full turn needed for it to look the same.
Identify and Describe Rotational Symmetry
Learn to identify and describe rotational symmetry.
How many times will the figure show rotational symmetry within one full rotation?
Also, identify the degree of rotational symmetry.
Magnetic levitation, maglev, or magnetic suspension is a method by which an object is suspended with no support other than magnetic fields. Magnetic force is used to counteract the effects of the gravitational acceleration and any other accelerations.
The two primary issues involved in magnetic levitation are lifting force (providing an upward force sufficient to counteract gravity) and stability (ensuring that the system does not spontaneously slide or flip into a configuration where the lift is neutralized).
Magnetic materials and systems are able to attract or repel each other with a force that depends on the magnetic field and on the area of the magnets. For example, the simplest example of lift is a dipole magnet positioned in the magnetic field of another dipole magnet, oriented with like poles facing each other, so that the force between the magnets pushes them apart.
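To get a feel for the numbers, the on-axis repulsive force between two coaxial dipoles with like poles facing falls off with the fourth power of the separation. The sketch below (Python; the dipole moments and spacing are illustrative assumptions, not values from this article) evaluates that standard dipole-dipole formula:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space (T*m/A)

def coaxial_dipole_repulsion(m1, m2, z):
    """Magnitude (N) of the on-axis force between coaxial dipoles m1, m2 (A*m^2) a distance z (m) apart."""
    return 3 * MU_0 * m1 * m2 / (2 * math.pi * z**4)

# Two small magnets, each with a dipole moment of about 1 A*m^2 (roughly a 1 cm^3 neodymium magnet),
# held 2 cm apart with like poles facing: the repulsion is a few newtons.
print(f"{coaxial_dipole_repulsion(1.0, 1.0, 0.02):.2f} N")
```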
Essentially all types of magnets have been used to generate lift for magnetic levitation; permanent magnets, electromagnets, ferromagnetism, diamagnetism, superconducting magnets and magnetism due to induced currents in conductors.
To calculate the amount of lift, a magnetic pressure can be defined.
For example, the magnetic pressure of a magnetic field on a superconductor can be calculated by P = B²/(2μ₀), where B is the magnetic field just above the surface of the superconductor and μ₀ is the permeability of free space.
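As a quick numerical illustration of that relation (a minimal sketch; the 0.1 T field value is an assumption chosen for the example):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space (T*m/A)

def magnetic_pressure(b_field):
    """Magnetic pressure (Pa) exerted on a superconductor by a field of b_field tesla."""
    return b_field ** 2 / (2 * MU_0)

# Even a modest 0.1 T field gives roughly 4 kPa of pressure -- about 400 kg of lift per square metre.
print(f"{magnetic_pressure(0.1):.0f} Pa")
```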
The simple arrangement of two dipole magnets repelling, described above, is highly unstable, since the top magnet can slide sideways or flip over, and it turns out that no static configuration of permanent magnets alone can produce stability.
In some cases the lifting force is provided by magnetic levitation, but stability is provided by a mechanical support bearing little load. This is termed pseudo-levitation.
Static stability means that any small displacement away from a stable equilibrium causes a net force to push it back to the equilibrium point.
Earnshaw's theorem proved conclusively that it is not possible to levitate stably using only static, macroscopic, paramagnetic fields. The forces acting on any paramagnetic object in any combination of gravitational, electrostatic, and magnetostatic fields will make the object's position, at best, unstable along at least one axis, and it can be in unstable equilibrium along all axes. However, several possibilities exist to make levitation viable, for example, the use of electronic stabilization or diamagnetic materials (since their relative magnetic permeability is less than one); it can be shown that diamagnetic materials are stable along at least one axis, and can be stable along all axes. Conductors can have a relative permeability to alternating magnetic fields of below one, so some configurations using simple AC-driven electromagnets are self-stable.
Dynamic stability occurs when the levitation system is able to damp out any vibration-like motion that may occur.
Magnetic fields are conservative forces and therefore in principle have no built-in damping, and in practice many of the levitation schemes are under-damped and in some cases negatively damped. This can permit vibration modes to exist that can cause the item to leave the stable region.
Damping of motion is done in a number of ways:
- external mechanical damping (in the support), such as dashpots, air drag etc.
- eddy current damping (conductive metal influenced by field)
- tuned mass dampers in the levitated object
- electromagnets controlled by electronics
For successful levitation and control of all 6 axes (degrees of freedom; 3 translational and 3 rotational) a combination of permanent magnets and electromagnets or diamagnets or superconductors as well as attractive and repulsive fields can be used. From Earnshaw's theorem at least one stable axis must be present for the system to levitate successfully, but the other axes can be stabilized using ferromagnetism.
The primary ones used in maglev trains are servo-stabilized electromagnetic suspension (EMS) and electrodynamic suspension (EDS).
Mechanical constraint (pseudo-levitation)
With a small amount of mechanical constraint for stability, achieving pseudo-levitation is a relatively straightforward process.
If two magnets are mechanically constrained along a single axis, for example, and arranged to repel each other strongly, this will act to levitate one of the magnets above the other.
Another geometry is where the magnets are attracted, but constrained from touching by a tensile member, such as a string or cable.
Another example is the Zippe-type centrifuge where a cylinder is suspended under an attractive magnet, and stabilized by a needle bearing from below.
The attraction from a fixed-strength magnet decreases with increasing distance and increases at closer distances. This is unstable. For a stable system, the opposite is needed: variations from the stable position should push it back to the target position.
Stable magnetic levitation can be achieved by measuring the position and speed of the object being levitated, and using a feedback loop which continuously adjusts one or more electromagnets to correct the object's motion, thus forming a servomechanism.
Many systems use magnetic attraction pulling upwards against gravity for these kinds of systems as this gives some inherent lateral stability, but some use a combination of magnetic attraction and magnetic repulsion to push upwards.
Either system represents an example of ElectroMagnetic Suspension (EMS). For a very simple example, some tabletop levitation demonstrations use this principle: the object cuts a beam of light, or a Hall effect sensor is used, to measure the position of the object. The electromagnet is above the object being levitated; the electromagnet is turned off whenever the object gets too close, and turned back on when it falls further away. Such a simple system is not very robust; far more effective control systems exist, but this illustrates the basic idea.
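A minimal sketch of that on/off idea in code (Python; a toy one-dimensional model, and every constant here is a made-up illustrative value rather than a real levitation rig):

```python
# Toy 1-D model of the simple system described above: an electromagnet above a small steel
# ball is switched off when the ball rises past a sensor threshold and back on when it
# falls below it again.
MASS, G, DT = 0.01, 9.81, 1e-4      # ball mass (kg), gravity (m/s^2), time step (s)
MAGNET_LIFT = 0.15                  # upward force while the electromagnet is on (N)
THRESHOLD_GAP = 0.010               # sensor threshold: gap below the magnet (m)

gap, velocity = 0.012, 0.0          # start 12 mm below the magnet, at rest
for _ in range(20000):              # simulate 2 seconds
    magnet_on = gap > THRESHOLD_GAP            # "too far away" -> switch the magnet on
    force_up = MAGNET_LIFT if magnet_on else 0.0
    accel = G - force_up / MASS     # gravity opens the gap, the magnet closes it
    velocity += accel * DT
    gap += velocity * DT

# The ball neither falls away nor sticks to the magnet; it keeps oscillating around the threshold.
print(f"gap after 2 s: {gap * 1000:.1f} mm (threshold {THRESHOLD_GAP * 1000:.1f} mm)")
```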
EMS magnetic levitation trains are based on this kind of levitation: The train wraps around the track, and is pulled upwards from below. The servo controls keep it safely at a constant distance from the track.
These schemes work due to repulsion due to Lenz's law. When a conductor is presented with a time-varying magnetic field electrical currents in the conductor are set up which create a magnetic field that causes a repulsive effect.
These kinds of systems typically show an inherent stability, although extra damping is sometimes required.
Relative motion between conductors and magnets
If one moves a base made of a very good electrical conductor such as copper, aluminium or silver close to a magnet, an (eddy) current will be induced in the conductor that will oppose the changes in the field and create an opposite field that will repel the magnet (Lenz's law). At a sufficiently high rate of movement, a suspended magnet will levitate on the metal, or vice versa with suspended metal. Litz wire made of wire thinner than the skin depth for the frequencies seen by the metal works much more efficiently than solid conductors. Figure 8 coils can be used to keep something aligned.
An especially technologically interesting case of this comes when one uses a Halbach array instead of a single pole permanent magnet, as this almost doubles the field strength, which in turn almost doubles the strength of the eddy currents. The net effect is to more than triple the lift force. Using two opposed Halbach arrays increases the field even further.
Oscillating electromagnetic fields
A conductor can be levitated above an electromagnet (or vice versa) with an alternating current flowing through it. This causes any regular conductor to behave like a diamagnet, due to the eddy currents generated in the conductor. Since the eddy currents create their own fields which oppose the magnetic field, the conductive object is repelled from the electromagnet, and most of the field lines of the magnetic field will no longer penetrate the conductive object.
This effect requires non-ferromagnetic but highly conductive materials like aluminium or copper, as the ferromagnetic ones are also strongly attracted to the electromagnet (although at high frequencies the field can still be expelled) and tend to have a higher resistivity giving lower eddy currents. Again, litz wire gives the best results.
The effect can be used for stunts such as levitating a telephone book by concealing an aluminium plate within it.
At high frequencies (a few tens of kilohertz or so) and kilowatt powers small quantities of metals can be levitated and melted using levitation melting without the risk of the metal being contaminated by the crucible.
One source of oscillating magnetic field that is used is the linear induction motor. This can be used to levitate as well as provide propulsion.
Diamagnetically stabilized levitation
Earnshaw's theorem does not apply to diamagnets. These behave in the opposite manner to normal magnets owing to their relative permeability of μr < 1 (i.e. negative magnetic susceptibility). Diamagnetic levitation can be inherently stable.
A permanent magnet can be stably suspended by various configurations of strong permanent magnets and strong diamagnets. When using superconducting magnets, the levitation of a permanent magnet can even be stabilized by the small diamagnetism of water in human fingers.
Diamagnetism is the property of an object which causes it to create a magnetic field in opposition to an externally applied magnetic field, thus causing the material to be repelled by magnetic fields. Diamagnetic materials cause lines of magnetic flux to curve away from the material. Specifically, an external magnetic field alters the orbital velocity of electrons around their nuclei, thus changing the magnetic dipole moment.
According to Lenz's law, this opposes the external field. Diamagnets are materials with a magnetic permeability less than μ0 (a relative permeability less than 1). Consequently, diamagnetism is a form of magnetism that is only exhibited by a substance in the presence of an externally applied magnetic field. It is generally quite a weak effect in most materials, although superconductors exhibit a strong effect.
Direct diamagnetic levitation
A substance that is diamagnetic repels a magnetic field. All materials have diamagnetic properties, but the effect is very weak, and is usually overcome by the object's paramagnetic or ferromagnetic properties, which act in the opposite manner. Any material in which the diamagnetic component is stronger will be repelled by a magnet.
Diamagnetic levitation can be used to levitate very light pieces of pyrolytic graphite or bismuth above a moderately strong permanent magnet. As water is predominantly diamagnetic, this technique has been used to levitate water droplets and even live animals, such as a grasshopper, frog and a mouse. However, the magnetic fields required for this are very high, typically in the range of 16 teslas, and therefore create significant problems if ferromagnetic materials are nearby.
The minimum criterion for diamagnetic levitation is B(dB/dz) = μ₀ρg/χ, where:
- χ is the magnetic susceptibility
- ρ is the density of the material
- g is the local gravitational acceleration (−9.8 m/s2 on Earth)
- μ₀ is the permeability of free space
- B is the magnetic field
- dB/dz is the rate of change of the magnetic field along the vertical axis.
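As a rough illustration of this criterion, the sketch below (Python) estimates the field-gradient product needed to levitate water; the susceptibility and density figures are typical handbook values used as assumptions here, not numbers taken from this article:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space (T*m/A)
G = 9.8                    # magnitude of the gravitational acceleration (m/s^2)

def required_b_dbdz(chi, density):
    """Minimum B * dB/dz (T^2/m) needed to levitate a diamagnetic material against gravity."""
    return MU_0 * density * G / abs(chi)

# Water (and so, roughly, a frog): chi about -9.0e-6, density about 1000 kg/m^3.
print(f"{required_b_dbdz(-9.0e-6, 1000):.0f} T^2/m")  # about 1400 T^2/m, hence ~16 T magnets
```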
Assuming ideal conditions along the z-direction of solenoid magnet:
Superconductors may be considered perfect diamagnets, and completely expel magnetic fields due to the Meissner effect when the superconductivity initially forms; thus superconducting levitation can be considered a particular instance of diamagnetic levitation. In a type-II superconductor, the levitation of the magnet is further stabilized due to flux pinning within the superconductor; this tends to stop the superconductor from moving with respect to the magnetic field, even if the levitated system is inverted.
A very strong magnetic field is required to levitate a train. The JR–Maglev trains have superconducting magnetic coils, but the JR–Maglev levitation is not due to the Meissner effect.
A magnet or properly assembled array of magnets with a toroidal field can be stably levitated against gravity when gyroscopically stabilized by spinning it in a second toroidal field created by a base ring of magnet(s). However, this only works while the rate of precession is between both upper and lower critical thresholds—the region of stability is quite narrow both spatially and in the required rate of precession.
The first discovery of this phenomenon was by Roy M. Harrigan, a Vermont inventor who patented a levitation device in 1983 based upon it. Several devices using rotational stabilization (such as the popular Levitron branded levitating top toy) have been developed citing this patent. Non-commercial devices have been created for university research laboratories, generally using magnets too powerful for safe public interaction.
Earnshaw's theorem strictly only applies to static fields. Alternating magnetic fields, even purely alternating attractive fields, can induce stability and confine a trajectory through a magnetic field to give a levitation effect.
This is used in particle accelerators to confine and lift charged particles, and has been proposed for maglev trains as well.
Maglev, or magnetic levitation, is a system of transportation that suspends, guides and propels vehicles, predominantly trains, using magnetic levitation from a very large number of magnets for lift and propulsion. This method has the potential to be faster, quieter and smoother than wheeled mass transit systems. The technology has the potential to exceed 6,400 km/h (4,000 mi/h) if deployed in an evacuated tunnel. If not deployed in an evacuated tube the power needed for levitation is usually not a particularly large percentage and most of the power needed is used to overcome air drag, as with any other high speed train. Some maglev Hyperloop prototype vehicles are being developed as part of the Hyperloop pod competition in 2015–2016, and are expected to make initial test runs in an evacuated tube later in 2016.
The highest recorded speed of a maglev train is 603 kilometers per hour (374.69 mph), achieved in Japan on April 21, 2015, 28.2 km/h faster than the conventional TGV speed record.
Electromagnetic levitation (EML), patented by Muck in 1923, is one of the oldest levitation techniques used for containerless experiments. The technique enables the levitation of an object using electromagnets. A typical EML coil has reversed winding of upper and lower sections energized by a radio frequency power supply.
- 1842 Earnshaw's theorem showed electrostatic levitation cannot be stable; the theorem was later extended to magnetostatic levitation by others
- 1913 Emile Bachelet awarded a patent in March 1912 for his “levitating transmitting apparatus” (patent no. 1,020,942) for electromagnetic suspension system
- 1933 Superdiamagnetism Walther Meissner and Robert Ochsenfeld (the Meissner effect)
- 1934 Hermann Kemper “monorail vehicle with no wheels attached.” Reich Patent number 643316
- 1939 Braunbeck’s extension showed that magnetic levitation is possible with diamagnetic materials
- 1939 Bedford, Peer, and Tonks aluminum plate placed on two concentric cylindrical coils shows 6-axis stable levitation.
- 1961 James R. Powell and BNL colleague Gordon Danby electrodynamic levitation using superconducting magnets and figure 8 coils
- 1970s Spin stabilized magnetic levitation Roy M. Harrigan
- 1974 Magnetic river Eric Laithwaite and others
- 1979 transrapid train carried passengers
- 1981 First single-tether magnetic levitation system exhibited publicly (Tom Shannon, Compass of Love, collection Musee d'Art Moderne de la Ville de Paris)
- 1984 Low speed maglev shuttle in Birmingham Eric Laithwaite and others
- 1997 Diamagnetically levitated live frog Andre Geim
- 1999 Inductrack permanent magnet electrodynamic levitation (General Atomics)
- 2000 The first man-loading HTS maglev test vehicle “Century” in the world was successfully developed in China.
- 2005 homopolar electrodynamic bearing
- Acoustic levitation
- Aerodynamic levitation
- Electrostatic levitation
- Optical levitation
- Cyclotrons levitate and circulate charged particles in a magnetic field
- Inductrack a particular system based on Halbach arrays and inductive track loops
- Launch loop
- Linear motor
- Magnetic bearing
- Magnetic ring spinning
- Nagahori Tsurumi-ryokuchi Line
- Rapid transits using linear motor propulsion
- StarTram is an extreme proposal for levitation via superconductors over multiple kilometers of distance
- Zippe-type centrifuge uses magnetic lift and a mechanical needle for stability
- calculator for force between two disc magnets (retrieved April 16, 2014)
- Lecture 19 MIT 8.02 Electricity and Magnetism, Spring 2002
- Ignorance = Maglev = Bliss For 150 years scientists believed that stable magnetic levitation was impossible. Then Roy Harrigan came along. By Theodore Gray Posted February 2, 2004
- Braunbeck, W. (1939). "Freischwebende Körper im elektrischen und magnetischen Feld". Zeitschrift für Physik. 112 (11): 753–763. Bibcode:1939ZPhy..112..753B. doi:10.1007/BF01339979.
- Rote, D.M.; Yigang Cai (2002). "Review of dynamic stability of repulsive-force maglev suspension systems". IEEE Transactions on Magnetics. 38 (2): 1383. Bibcode:2002ITM....38.1383R. doi:10.1109/20.996030.
- S&TR | November 2003: Maglev on the Development Track for Urban Transportation. Llnl.gov (2003-11-07). Retrieved on 2013-07-12.
- Thompson, Marc T. Eddy current magnetic levitation, models and experiments. (PDF) . Retrieved on 2013-07-12.
- Levitated Ball-Levitating a 1 cm aluminum sphere. Sprott.physics.wisc.edu. Retrieved on 2013-07-12.
- Mestel, A. J. (2006). "Magnetic levitation of liquid metals". Journal of Fluid Mechanics. 117: 27. Bibcode:1982JFM...117...27M. doi:10.1017/S0022112082001505.
- Diamagnetically stabilized magnet levitation. (PDF) . Retrieved on 2013-07-12.
- "The Frog That Learned to Fly". Radboud University Nijmegen. Retrieved 19 October 2010. For Geim's account of diamagnetic levitation, see Geim, Andrey. "Everyone's Magnetism" (PDF).[permanent dead link] (688 KB). Physics Today. September 1998. pp. 36–39. Retrieved 19 October 2010. For the experiment with Berry, see Berry, M. V.; Geim, Andre. (1997). ""Of flying frogs and levitrons"" (PDF). Archived from the original (PDF) on 2010-11-03. (228 KB). European Journal of Physics 18: 307–313. Retrieved 19 October 2010.
- US patent 4382245, Harrigan, Roy M., "Levitation device", issued 1983-05-03
- Hull, J.R. (1989). "Attractive levitation for high-speed ground transport with large guideway clearance and alternating-gradient stabilization". IEEE Transactions on Magnetics. 25 (5): 3272. Bibcode:1989ITM....25.3272H. doi:10.1109/20.42275.
- Trans-Atlantic MagLev | Popular Science. Popsci.com. Retrieved on 2013-07-12.
- Lavars, Nick (2016-01-31). "MIT engineers win Hyperloop pod competition, will test prototype in mid-2016". www.gizmag.com. Retrieved 2016-02-01.
- Muck, O. German patent no. 42204 (Oct. 30, 1923)
- Nordine, Paul C.; Weber, J. K. Richard & Abadie, Johan G. (2000). "Properties of high-temperature melts using levitation". Pure and Applied Chemistry. 72 (11): 2127–2136. doi:10.1351/pac200072112127.
- Laithwaite, E.R. (1975). "Linear electric machines—A personal view". Proceedings of the IEEE. 63 (2): 250. doi:10.1109/PROC.1975.9734.
- Wang, Jiasu; Wang Suyu; et al. (2002). "The first man-loading high temperature superconducting maglev test vehicle in the world". Physica C. 378-381: 809–814. Bibcode:2002PhyC..378..809W. doi:10.1016/S0921-4534(02)01548-4.
- "Design and Analysis of a Novel Low Loss Homopolar Electrodynamic Bearing." Lembke, Torbjörn. PhD Thesis. Stockholm: Universitetsservice US AB, 2005. Print. ISBN 91-7178-032-7
- Maglev Trains[permanent dead link] Audio slideshow from the National High Magnetic Field Laboratory discusses magnetic levitation, the Meissner Effect, magnetic flux trapping and superconductivity
- Magnetic Levitation – Science is Fun
- Magnetic (superconducting) levitation experiment (YouTube)
- Superconducting Levitation Demos
- Maglev video gallery
- How can you magnetically levitate objects?
- Levitated aluminum ball (oscillating field)
- Instructions to build an optically triggered feedback maglev demonstration
- Videos of diamagnetically levitated objects, including frogs and grasshoppers
- Larry Spring's Mendocino Brushless Magnetic Levitation Solar Motor
- A Classroom Demonstration of Levitation...
- 25kg MAGLEV suspension setup
- 25kg MAGLEV suspension control via Classical control strategy
- 25kg MAGLEV suspension via State feedback control strategy
- Frogs levitate in a strong enough magnetic field
These are called the natural numbers, or sometimes the counting numbers. The use of three dots at the end of the list is a common mathematical notation to indicate that the list keeps going forever. If the farmer does not have any sheep, then the number of sheep that the farmer owns is zero.
I connect the students back to the vocabulary game at the beginning of this unit by asking them to find someone in the class who shared the same vocabulary word that they had. Together, they talk about their word and why they think it belongs on the list.
The benefit of having more than one student with the same word is that collaboration is built in. I write this expression on the board. Students notice that it has brackets and decimals. We solve this expression together. I emphasize that this problem appears to be a bit more challenging at first because it includes decimals.
But as we solve it together, the students can see that the same order of operations applies. After they "prove they can do it on their own" by simplifying three expressions, students are given the choice to use a calculator.
This is an example of when I really ask myself what my objective is.
Through a series of engaging explorations, students learn to read, represent, and interpret decimal numerals (in the context of money); relate decimals to fractions; and relate percents to fractions and decimals.

The Real Number System. The real number system evolved over time by expanding the notion of what we mean by the word "number." At first, "number" meant something you could count, like how many sheep a farmer owns.

Material Estimator Model. The Material Estimator is designed for contractors, trades people, and estimating professionals who need to estimate materials and costs for concrete, fences, decks, bricks, tile, flooring, gravel, painting, drywall and paneling, and more.
Since I want the students to simplify expressions, and they have had sufficient time to practice multiplying and dividing with decimals, I allow them to use a calculator to do the computations so they can focus on the order of operations. The same goes for addition and subtraction.
Calculators are only so helpful. Ticket Out (5 minutes): This is the last day that I plan to focus on order of operations, so as a ticket out, I ask students to write a reflection (3 faces) about evaluating expressions. I can use these reflections to help create warm-ups and small groups to differentiate instruction and help all students master this standard as the year moves on.
Some students are overconfident in their ability to solve problems with more than one operation. I am careful to check their work for common errors. Many times students will work left to right within a set of parentheses.

Write the decimal numbers in ascending order using decimals with 3 decimal places.
To arrange the decimal numbers in ascending order, first check the whole number part of all the decimal numbers.

Dividing Decimals. OBJECTIVES: 1. Divide a decimal by a whole number. 2. Write the division as a fraction (we multiply the numerator and denominator by 10). Recall that the order of operations is always used to simplify a mathematical expression with several operations.
You should recall the order of operations as the following: simplify within parentheses and other grouping symbols first, then evaluate exponents, then multiply and divide from left to right, and finally add and subtract from left to right.

Basic Pre-Algebra Intervention Program, LESSON TOPIC PLAN:
A. Order of Operations (No Calculator); Warm-up A; Exit Ticket A
B. Order of Operations with Decimals (No Calculator); Warm-up B; Decimal Operations Notes & Review Puzzle (with partners); Exit Ticket B
C. Expanded Form
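To connect the order of operations to work with decimals, here is a minimal Python sketch (the expression and the decimal values are made up for illustration); Python follows the same precedence rules, so it can be used to check hand computations and to sort decimals in ascending order:

```python
from decimal import Decimal

# Hypothetical expression with brackets and decimals: 2.5 * (3.1 + 0.9) - 4.2 / 2
# Order of operations: brackets first, then multiplication and division
# from left to right, then subtraction.
step1 = Decimal("3.1") + Decimal("0.9")   # inside the brackets: 4.0
step2 = Decimal("2.5") * step1            # multiplication: 10.0
step3 = Decimal("4.2") / Decimal("2")     # division: 2.1
result = step2 - step3                    # subtraction: 7.9
print(result)                             # 7.9

# Arranging decimals in ascending order (compare the whole-number parts first,
# then each decimal place) is exactly what a numerical sort does:
print(sorted([Decimal("2.305"), Decimal("2.035"), Decimal("2.350")]))
# [Decimal('2.035'), Decimal('2.305'), Decimal('2.350')]
```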
This unit brings students into the world of "math language", learning how to write complex expressions in different forms and convert numbers from one form to another (e.g., decimals to fractions).
Last, students will apply the order of operations to interpret and solve simple algebraic equations.

The sleek TI-Nspire CX handheld is the thinnest and lightest TI graphing calculator model to date. Overlay and color-code math and science concepts on digital images or your own photos.
Decimal Worksheets. These worksheets can help your students review decimal number concepts. Worksheets include place value, naming decimals to the nearest tenth and hundredth place, adding decimals, subtracting decimals, multiplying, dividing, and rounding decimals.
Microeconomics studies the behaviour of individuals and firms in making decisions about the allocation of scarce resources, and the interactions among these individuals and firms. Microeconomics examines the market mechanisms that establish relative prices among goods and services and allocate limited resources among alternative uses. Microeconomics shows the conditions under which free markets lead to desirable allocations. It also studies market failure, where markets fail to produce efficient results.
Economic modeling is at the core of any economic theory. Modeling provides a logical, abstract template that helps the economist isolate and sort out the complicated chains of cause and effect among the numerous interacting elements in an economy. Using a model, the economist can rationally test different scenarios, estimate the effects of alternative policy options, or evaluate the logical reliability of an argument. Certain types of models are extremely useful for presenting the essence of an economic argument visually, and the visual appeal of a model clarifies the explanation.
There are four types of modeling methods used in microeconomic analysis: visual models, mathematical models, empirical models, and simulation models. Here, we will discuss the features of visual modeling, including 3D modeling methods for microeconomics, in detail.
Visual models are simply pictures of an abstract economy: graphs with lines and curves that tell an economic story. They are mainly used in textbooks and teaching. Some visual models are merely diagrammatic, such as those that show the flow of income through the economy from one sector to another; in other words, they use a visual device to present a very general economic concept. Most visual models, though, are visual extensions of mathematical models: hidden in their structure is an underlying mathematical model. These models do not normally require knowledge of mathematics, yet they still allow the presentation of complex relationships between economic variables, and they are relatively easy to understand.
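To make concrete the point that a visual model usually encodes a mathematical one, here is a minimal sketch (not taken from the text; the linear coefficients are hypothetical) of the algebra hidden behind a standard supply-and-demand diagram:

```python
# The curves drawn in a supply-and-demand diagram are just these two functions;
# the coefficients below are hypothetical, chosen only for illustration.

def demand(price: float) -> float:
    """Quantity demanded falls as price rises: Qd = 100 - 2P."""
    return 100.0 - 2.0 * price

def supply(price: float) -> float:
    """Quantity supplied rises with price: Qs = 10 + 4P."""
    return 10.0 + 4.0 * price

# The visual "crossing point" of the two curves is the algebraic solution of
# Qd = Qs:  100 - 2P = 10 + 4P,  so P* = 15 and Q* = 70.
equilibrium_price = (100.0 - 10.0) / (2.0 + 4.0)
equilibrium_quantity = demand(equilibrium_price)
print(equilibrium_price, equilibrium_quantity)  # 15.0 70.0
```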
Technology brings imagination closer to reality, and a good example is how 3D modeling has changed the way microeconomics is presented. With 3D modeling, microeconomic work can be carried all the way through to the prototyping and production stage, generating data on sales and other variables that require spatial dimensions. In 3D computer graphics, 3D modeling is the process of developing a mathematical representation of any surface of an object (either inert or living) in three dimensions via specialized software. In 3D modeling it is easier to see the impact of minor or major changes on the overall design, which helps in finalizing the outcome without costly and time-consuming corrections. 3D modeling is an emerging technology with the capability to change the face of the business environment, and everyone should know how it works. The results of 3D modeling are better intelligence, reduced risk, and greater quantification of uncertainty. Properly using multiple scenarios for risk analysis requires powerful and wide-reaching 3D modeling. From well planning to fault-free analysis, 3D modeling helps immensely in understanding microeconomics.
In October 2001, a sleeping volcano in the remote South Sandwich Islands began spewing ash and lava from its summit. It was Mount Belinda’s first eruption in recorded history. Less than 24 hours after the eruption began, a research team based nearly 9,000 miles away at the University of Hawaii was already estimating how much energy was pouring out of the volcano.
That the researchers were making calculations so soon after the start of Mount Belinda’s eruption is remarkable, considering the volcano’s remote location. The South Sandwich Islands are situated between the southern tip of South America and mainland Antarctica, one of the most isolated areas of volcanic activity on Earth.
More than 1,500 potentially active volcanoes dot the Earth’s landscape, of which approximately 500 are active at any given time. Although scientists keep watch over many of the Earth’s volcanoes using traditional ground observation methods, satellite-based remote sensing is quickly becoming a crucial tool for understanding where, when, and why the Earth’s volcanoes periodically boil over.
Satellite technology now makes it possible to monitor volcanic activity in even the most isolated corners of the globe, and to routinely observe changes in the Earth’s surface that may signal an impending eruption. In addition, remote sensing data offer scientists the chance to prevent catastrophic damage to life and property by determining how and where volcanic debris spreads after an eruption.
“Hot Spots” Around the Globe
Just 10 years ago, Mount Belinda’s eruption might not have been detected for weeks or even months. “Normally, this remote volcano would have gone unmonitored,” said Rob Wright, research scientist at the Hawaii Institute of Geophysics and Planetology (HIGP). “However, when our system detected hot spots on Montagu Island, we decided to take a closer look. Inspection of high-resolution satellite imagery confirmed that the hot spots were indeed the result of volcanic activity.”
The “system” Wright refers to is the MODIS Thermal Alert System, known as MODVOLC, which now enables scientists to detect volcanic activity anywhere in the world within hours of its occurrence. MODVOLC uses data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors, which fly aboard NASA’s Terra and Aqua satellites. “The algorithm we’ve developed scans each 1-kilometer pixel within every MODIS image to see if it contains high-temperature heat sources, or hot spots. These heat sources may be active lava flows, lava domes, or lava lakes. Since MODIS achieves complete global coverage every 48 hours, this means that our system checks every square kilometer of the globe for volcanic activity once every two days,” said Wright.
For each hot spot identified, MODVOLC records the date and time at which it was observed, its geographic coordinates, the position of the satellite and the Sun, and the spectral radiance (the amount of energy emitted by the Earth’s surface at various wavelengths in the electromagnetic spectrum). Since active lava flows or growing lava domes emit vast amounts of energy, these hot spots are relatively easy to detect in MODIS imagery, even when they are smaller than MODIS’ 1-kilometer resolution. “The lava lake at Mount Erebus in Antarctica is only about 10 meters in diameter, but it’s clearly identifiable in MODIS images and, therefore, by our monitoring system,” said Wright.
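The article does not spell out the detection rule itself, but the published MODVOLC approach is based on a normalized index of two MODIS infrared bands. A simplified sketch is shown below; the band numbers and the -0.8 nighttime threshold come from the MODVOLC literature, while the array handling and variable names are illustrative:

```python
import numpy as np

def detect_hot_spots(radiance_b22: np.ndarray, radiance_b32: np.ndarray,
                     threshold: float = -0.8) -> np.ndarray:
    """Return a boolean mask of pixels flagged as volcanic hot spots.

    radiance_b22 and radiance_b32 are per-pixel spectral radiances from
    MODIS band 22 (~3.96 um) and band 32 (~12 um). A lava flow raises the
    short-wavelength radiance far more than the long-wavelength radiance,
    pushing the Normalized Thermal Index (NTI) above the threshold.
    """
    nti = (radiance_b22 - radiance_b32) / (radiance_b22 + radiance_b32)
    return nti > threshold
```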
One potential problem with a near-daily monitoring system like MODVOLC is the large volume of data generated. “If you want to study large regions at high temporal resolution, you’ll have to download a huge amount of MODIS data to get the job done,” said Wright. The problem was overcome, however, by operating MODVOLC via NASA’s Goddard Earth Sciences Data and Information Services Center (GES DISC). “Running the algorithm at GES DISC basically allows us to rapidly compress large amounts of image data into a handful of text files that include the details of only the pixels containing hot spots. As a result, we can monitor the entire globe in near-real time,” said Wright.
The information that MODVOLC records at GES DISC is then sent electronically to HIGP, located at the University of Hawaii on the island of Oahu, where the results are displayed on the system’s web site. “If you go to the web site, you can see all of the hot spots detected during the previous 24-hour period,” said Wright. “It takes about 8 hours from the time MODIS images the erupting volcano to the time the eruption is reported on the web site.”
The web site also allows users to click on any area of the globe and “zoom in” on individual eruptions or make comparisons between two erupting volcanoes. “You can compare, for example, the behavior of the lava dome at Soufriere Hills Volcano on the island of Montserrat with the behavior of the dome at Colima Volcano in Mexico,” said Wright.
Using MODVOLC, Wright and his colleagues have seen many active volcanoes that previously went undetected. An example is the 2002 eruption of a submarine volcano off the coast of the Solomon Islands in the South Pacific. “Kavachi Volcano is usually below sea level, but in late 2002 we began seeing hot spots in the exact location where Kavachi should be,” said Wright. “It turned out that erupting lava caused the volcano to grow so that its summit reached just above sea level, and when it popped its head above the waves, our system detected the emitted heat.”
Wright’s team also detected the first recorded activity at Anatahan Volcano in the Mariana Islands in 2003. “This volcano has no recorded eruption history and is located in an isolated part of the world. It’s not the sort of volcano you would choose to monitor,” said Wright. “However, we have in effect been monitoring it since September, 2000.” In May 2003, an explosive eruption at Anatahan began, accompanied by the growth of a lava dome. “MODVOLC detected the eruption and pinpointed exactly where on the island it was occurring,” Wright added.
Signs of an Impending Eruption
Because of the near-daily global coverage, MODIS data are ideal for quickly providing researchers with information about new eruptions. Other types of satellite data, such as synthetic aperture radar (SAR), are better suited to looking at the geologic changes that often precede an eruption. Although these data don’t yet provide the quick turnaround time required for detecting new activity, they instead provide the spatial coverage necessary for scientists to see how the ground surface is deforming over a broad region.
Surface changes were key to understanding a major volcanic eruption in 2002. Mount Nyiragongo, located in the Democratic Republic of the Congo, is one of Africa’s most active volcanoes. During an eruptive phase in 1994, a lava lake formed in the volcano’s summit crater. Lava lakes consist of large volumes of molten lava contained within a vent, crater, or broad depression. After its lava lake formed, Nyiragongo calmed down for about eight years. Then, on January 17, 2002, a major eruption occurred—with little warning.
Lack of warning at Nyiragongo has grave implications: the city of Goma sits about 9 miles (15 kilometers) south of the volcano. “About 500,000 people live in Goma and its immediate vicinity,” said Michael Poland, geophysicist at Cascades Volcano Observatory in Vancouver, Washington. “Nyiragongo has a reputation for spawning pretty nasty lava flows. The potential hazard to human life there is significant.”
Ground data are not easy to come by in the region of Nyiragongo. “It’s a dangerous place for field research,” said Poland. “There’s the Ebola virus and an ongoing civil war. Top that off with an erupting volcano, and you have a pretty volatile situation for a field researcher.”
Add to those dangers the region’s lack of technology, and it’s no surprise that Nyiragongo has little monitoring history. “Collecting and recovering data in the Congo is made more difficult because there are problems with equipment being stolen,” Poland added. “So in addition to putting monitoring equipment in place, you have to hire three or four people to guard it. It gets to be quite costly.”
Despite the lack of ground data, Poland learned of some anecdotal evidence of deformation from Congolese researchers. “The local townspeople typically wash their clothes in Lake Kivu, which is located adjacent to Goma. One day, they noticed that the rocks they normally used to dry their clothes on the shoreline were actually under water, so the lake level had come up—indicating subsidence,” said Poland. Measurements of the lake level before and after the eruption later confirmed this evidence.
Poland recognized the 2002 Nyiragongo event as an opportunity to use SAR satellite imagery to analyze how the eruption deformed the ground on and around the volcano. Ground deformation refers to surface changes on a volcano, such as subsidence (sinking), tilting, or bulge formation, due to the movement of magma below the surface. Deformation changes at a volcano, such as those related to magnitude or location, may indicate that an eruption is about to occur. An example of visible deformation occurred in 1980 when a bulge appeared on the north flank of Mount St. Helens prior to its May 18 eruption. Scientists estimated that just before the eruption, the bulge was growing at a rate of 5 feet (1.5 meters) per day.
To determine whether deformation preceded the Nyiragongo eruption, Poland requested SAR data from NASA's Alaska Satellite Facility Distributed Active Archive Center (ASF DAAC). SAR interferometry, or InSAR, is one of the few methods available for remotely analyzing ground deformation that accompanies or precedes volcanic activity. The technique operates on the premise that if the radar signal reflected back to the sensor differs between two images of the same object, taken at two different times, then the object has moved or changed.
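A rough sense of how an interferometric phase change converts into ground motion can be captured in a few lines; the wavelength below is typical of a C-band SAR (about 5.6 cm) and is an assumption, since the article does not name the sensor used:

```python
import math

WAVELENGTH_M = 0.056  # typical C-band SAR wavelength (assumed for illustration)

def los_displacement(phase_change_rad: float) -> float:
    """Line-of-sight ground motion implied by an interferometric phase change.

    The signal travels to the ground and back, so each 2*pi of phase
    corresponds to half a wavelength of motion along the line of sight.
    """
    return (WAVELENGTH_M / (4.0 * math.pi)) * phase_change_rad

print(los_displacement(2.0 * math.pi))  # ~0.028 m: one full fringe = 2.8 cm of motion
```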
“It was obvious that an eruption at Nyiragongo had occurred, but the extent and cause of the activity were unclear. Without InSAR, we wouldn’t have learned much about this particular event,” said Poland. “The satellite imagery gave us some clues to what happened in a location where surface-based measurements are scarce.”
Poland’s study showed that significant deformation across the entire rift valley occurred at the time of the eruption. “Based on the data, we determined that all the deformation happened somewhere between 3 days before to about 15 days after the eruption,” he said. “This means there was no long-term deformation warning, which is interesting because typically with volcanoes, you see inflation or uplift that precedes the eruption by weeks, months, or even years.”
Poland explained that an earlier Nyiragongo eruption, in 1977, formed a fracture system that led partway down to the city of Goma. “Then, during the 2002 eruption, that fracture system was reactivated, and the flowing magma propagated the fractures closer to the city. The lava actually flowed right down Main Street—right through the business district,” he said.
According to Poland, seeing deformation across the entire rift zone suggests that the Nyiragongo eruption was no small event; it was a major tectonic episode. “We believe there must have been a large event that allowed magma stored high in the volcano to drain into the old fracture system and head downhill,” he said. “The implication is that you can have a lot of lava come out in a very bad place—like right above your city—with very little warning.”
Twice in the past 30 years, Nyiragongo’s lava flowed along the fracture system on the south flank, and this flow path leads right to Goma. “If we can start using InSAR data to monitor deformation, we might be able to better assess the likelihood of eruption events before they happen,” Poland said.
Tracking Ash Clouds
Studying volcanoes by looking at changes in surface features falls into the category of long-term monitoring, which means that the study is done over a longer time period and doesn’t require the immediate availability of data. Short-term monitoring, on the other hand, demands a quick and rapid response, especially when the objective is to prevent or mitigate hazards.
Kenneson Dean, associate research professor at the University of Alaska’s Geophysical Institute, studies a unique hazard related to volcanic eruptions—one that didn’t concern Alaska until the rise of air traffic across the Bering Strait in the 1980s. The Alaskan skyway is one of the busiest air traffic areas in the world and, according to Dean, sometimes resembles a Los Angeles freeway. The skyway also runs along the northern boundary of the “Ring of Fire,” a zone of frequent earthquakes and volcanic eruptions that encircles the Pacific.
“Large-body jets fly across this region carrying about 2,000 passengers and $1 billion in cargo daily,” said Dean, who heads the satellite monitoring program at the Alaska Volcanoes Observatory (AVO) at the University of Alaska. “If a plane is flying towards an ash cloud, and the cloud is moving towards the plane, they will cross paths very quickly. Even if the cloud is not moving towards the plane, an aircraft still needs plenty of time to adjust its course and avoid the cloud.”
Jet engines operate at a temperature that melts volcanic ash or glass, and this melted material can then cause the engines to slow and shut down. “The problem in this area is that the eruptions tend to be explosive. They eject volcanic material, gas, and ash well into the atmosphere, and many of these eruptions rise to 40,000 feet (about 12,000 meters) in height, which is the height of jet air traffic,” said Dean. “A lot of people and property are at risk.”
The AVO uses satellite data for short-term monitoring, which means that data are received, processed, and analyzed just minutes after a satellite pass. “The region we monitor covers several thousand kilometers and includes about 40 volcanoes in Alaska and about 60 in the Kamchatka Peninsula, Russia,” Dean said. “We get the data directly from the MODIS and Advanced Very High Resolution Radiometer (AVHRR) sensors, and we analyze those data routinely every morning and afternoon.”
In 1993, the University of Alaska’s Geophysical Institute received a NASA grant to purchase its own AVHRR receiving station and, in 2001, a MODIS receiving station. Prior to having its own station, AVO used a Domestic Communications Satellite station at the University of Miami to collect the data and then send them electronically to Fairbanks for analysis. “Having our own stations on site reduced monitoring time from 1 hour to about 10 minutes,” said Dean. And minutes are important when you consider the hazard faced by aircraft that encounter ash clouds.
In 1982, a British Airways Boeing 747 carrying 240 passengers flew into an ash cloud near Indonesia’s Galunggung Volcano. All four of the aircraft’s engines shut down, nearly forcing the aircraft to ditch in the Indian Ocean. In 1989, a KLM 747 encountered an ash cloud over Talkeetna, Alaska. Again, all four engines failed and the jet descended to within a few thousand feet of the mountaintops before pilots were able to restart one of the plane’s engines and make an emergency landing in Anchorage.
Between 1980 and 1999, more than 100 jet airliners sustained some damage after flying through volcanic ash clouds, according to the U.S. Geological Survey (USGS). “Aviation safety is one big reason we need to monitor active volcanoes in Alaska,” said Dean. “Right now we’re seeing hot spots almost daily at Shiveluch, Kliuchevskoi, and Bezymianny Volcanoes. When an explosive eruption occurs, you need an information turnaround time that’s really fine-tuned, so that aircraft pilots have time to make decisions about whether to continue on their route, turn around, or change routes,” said Dean. “Available fuel becomes a critical issue, too.”
AVO’s short-term monitoring program is obviously making a difference. “When an eruption occurs and the warnings go out, the airline industry often contacts us directly,” said Dean. “We also follow Federal Aviation Administration reports, which reveal that aircraft are sometimes re-routed or even returned to their home port if the situation is bad,” said Dean.
Not all eruptions are explosive, like those that tend to occur in the Alaska region. Some volcanoes, such as those in the Hawaiian Islands, are known for more quiet flows of fluid lava. Although Hawaiian eruptions usually do not result in loss of life, they can have devastating effects on land and property.
Kilauea Volcano, on the island of Hawaii, is the most active volcano on Earth. During the past 1,000 years, more than 90 percent of the volcano’s surface has been covered by lava flows. Between 1983 and 1991, lava flows repeatedly struck communities located on the east coast of Hawaii. In 1990, flows covered the village of Kalapana, destroying more than 180 homes, a visitor center in Hawaii Volcanoes National Park, and historical and archaeological sites, according to the USGS.
“Hawaii volcanoes are known for long-term eruptions, wherein you have a small amount of gas emitted year in and year out for decades,” said Peter Mouginis-Mark, research scientist and current acting director at HIGP. Mouginis-Mark heads a HIGP-based program called Hawaii Synergy, a cooperative effort to provide disaster management organizations and federal hazard agencies with access to current satellite data, including imagery from Landsat 7 and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), archived at NASA's Land Processes DAAC (LP DAAC).
According to Mouginis-Mark, perhaps the greatest benefit offered by satellite-monitoring technology will be an enhanced understanding of exactly how volcanoes work. “What’s important is the global perspective and the way volcanoes work on different timescales,” he said. “Some volcanoes produce lava flows, and other volcanoes explode so that you have to worry about big eruption columns. We now have this remote capability to study volcanoes anywhere in the world.”
Although scientists will continue to use ground monitoring techniques to keep an eye on the Earth’s volcanoes, satellite data will increasingly allow scientists to see “the big picture” and, as a result, better predict volcanic activity.
“Satellite data are brilliant for understanding the levels of eruption intensity and for monitoring the impact an eruption is having on the surrounding environment,” said Mouginis-Mark. “The ability to draw on ASTER or MODIS data and put together a one- to three-year sequence of observations really lets us look at whether there are real changes going on in a volcano.”
“Compiling a global database of volcanic thermal unrest has allowed us to look at long-term trends,” said Wright. “We’re currently analyzing the entire MODVOLC data set to identify patterns that help us better understand how all the Earth’s volcanoes behave.”
Wright, R., L. Flynn, H. Garbeil, A. Harris, and E. Pilger. 2002. Automated volcanic eruption detection using MODIS. Remote Sensing of Environment. 82:135-155.
- Miller, T.P., and T.J. Casadevall. 2000. Volcanic Ash Hazards to Aviation. In Encyclopedia of Volcanoes, ed. H. Sigurdsson. San Diego: Academic Press.
The Pu’u ‘O’o-Kupaianaha Eruption of Kilauea Volcano, Hawai’i, 1983 to 2003. December 2002. USGS Fact Sheet. 144-02.
MODVOLC: Global spaceborne thermal monitoring with MODIS. Accessed July 12, 2004.
Alaska Volcano Observatory. Accessed July 12, 2004.
Cascades Volcano Observatory (USGS). Accessed July 12, 2004.
Hawaiian Volcano Observatory (USGS). Accessed July 12, 2004.
For more information
NASA Alaska Satellite Facility Distributed Active Archive Center (ASF DAAC)
NASA Goddard Earth Sciences Data and Information Services Center (GES DISC)
NASA Land Processes DAAC (LP DAAC)
About the remote sensing data used
DAAC: NASA Alaska Satellite Facility Distributed Active Archive Center (ASF DAAC); NASA Goddard Earth Sciences Data and Information Services Center (GES DISC); NASA Land Processes DAAC (LP DAAC)
The Life Cycle of Stars
Space may seem empty, but actually it contains thinly spread gas and dust, called the interstellar medium, that gradually collapses over immense stretches of time and collects into denser clouds of gas and dust. The atoms of gas are mostly hydrogen and are typically about a centimeter apart. The dust is mostly carbon and silicon. In some places, this interstellar medium is collected into particularly dense clouds of gas and dust known as nebulae. A nebula is the birthplace of stars. Our sun was probably born in a nebula around 5 billion years ago.
Within a nebula, there are varying regions where gravity has caused the gas and dust to clump together. The gravitational attraction of these clumps pulls more atoms into the clump. As this accretion continues, the gas pressure increases and the core of the protostar gets hotter and hotter. If the protostar gets dense enough and hot enough, a fusion reaction will ignite and the star lights up. The minimum mass for the formation of a star is about 80 times the mass of Jupiter. A star is a very large, very hot ball of gas in which hydrogen is fusing into helium in the core. Stars spend the majority of their life fusing hydrogen into helium. When the hydrogen is nearly used up, the star can fuse helium into heavier elements. Throughout this process, a battle goes on in the core of the star between gravity trying to collapse the star and temperature-produced gas pressure pushing the material in the star outward. During the life of a star, there is a balance between the gas pressure pushing out and gravity pushing in.
Once a star has achieved nuclear fusion in its core, it radiates energy into space. While the temperature-produced gas pressure balances gravity, the star attains a stable state and enters the main sequence phase of its life. The core temperature of a main sequence star like our sun is about 15,000,000°C. For the major part of its life span, a star stays in this main sequence phase, with hydrogen being fused into helium and a balance between the force pushing out and the force pushing in.
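The balance described here between outward gas pressure and inward gravity is, in standard stellar-physics notation (the equation is not part of the original text), the condition of hydrostatic equilibrium:

$$\frac{dP}{dr} = -\frac{G\,M(r)\,\rho(r)}{r^{2}}$$

where P is the pressure, r the distance from the star's center, M(r) the mass enclosed within radius r, ρ(r) the local density, and G the gravitational constant. A star on the main sequence satisfies this balance at every radius.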
How long a star lives depends on its initial size. Stars can live from many millions of years to many billions of years. The most massive stars (many times the size of our sun) become extremely dense and hot in the core and therefore, have a very high fusion rate. The largest stars use up their hydrogen fuel fastest and therefore live for the shortest time, perhaps only millions of years. Stars that are the size of our sun fuse hydrogen much more slowly and therefore live much longer. Medium sized stars live billions of years.
As a star begins to run low on hydrogen, since the initial quantity has been fused into the denser helium gas, the core will contract due to gravity. The collapsing core raises the temperature to the point that the star can begin to fuse helium into carbon. When that happens, the outer portion of the star expands greatly due to the higher temperature. The star can expand to 1000 times the diameter of the sun. At this point, the star is called a red giant. If our sun became a red giant, its surface would expand out past the orbit of Mars. Red giants are red because the surface of the star is cooler than that of white or blue stars, but they remain highly visible because of their gigantic size.
After a star becomes a red giant, it will take one of several different paths to end its life. Which path is followed by a star after the red giant phase depends on its mass. During the fusion life of a star, its size is the result of a competition between fusion heat pushing the material out and gravity pulling the material in. At the end, gravity always wins. After the star has lived through its red giant stage, the fusion essentially ends (the star runs out of fuel) allowing gravity to collapse the star. Some of the outer layers of material will be blown away and the core becomes smaller and denser. The core will become either a neutron star, a white dwarf, a black dwarf, or a black hole.
Low-mass stars (less than 0.5 times the mass of our sun) become red giants and then blow off some outer material, which dissipates in the interstellar medium after a few hundred thousand years. The remainder of the star shrinks to a white dwarf. After a few billion years, white dwarfs cool to become black dwarfs.
Medium-mass stars (less than 3 times the mass of our sun) become red giants and eventually become supernovae. A supernova is the massive explosion of a star accompanied by emission of light and matter so intense that it can outshine an entire galaxy. After a supernova, when all the accessible fuel in a medium-mass star is exhausted, the iron core collapses and proton-electron pairs are converted into neutrons. Such stars are called neutron stars. Neutron stars might spin rapidly, giving off light and X-rays, or they might emit pulses of energy regularly and be known as pulsars.
The largest-mass stars become black holes. These extremely large stars end their life in the same way as a medium-mass star in that they become a supernova. After the outer layers are blown away in the supernova, however, the core of the star shrinks down in volume but still has a huge mass. The density of this object is extremely high, even denser than a neutron star. This dense object will have a gravitational force so large that not even light can escape from the body. (A companion topic to this occurs in The General Theory of Relativity, where we see that extremely strong gravitational attraction can even attract light.) These objects appear black because light cannot leave them, that is, they pull all light back to their surface. Black holes capture everything nearby due to their massive gravity and so they grow in size. Black holes are a common topic for science fiction but keep in mind, they are simply a very dense ball of matter with intense gravitational attraction.
- A star begins its life in a nebula, struggles to balance gravitational pull and internal pressure during its main sequence period, and ends its life in an explosion to eventually become a white or black dwarf, or a neutron star, or a black hole.
Follow up questions:
- How many stars are there in our galaxy?
- What is the primary component of stars?
- What is the name of the mass of hydrogen gas before fusion begins?
- What is the power source of the stars?
- All stars begin as
- red giants
- white dwarf
- The correct life cycle for a very large mass star is
- main sequence, red giant, white dwarf
- black hole, supernova, red giant, nebulae
- main sequence, red giant, supernova, black hole
- main sequence, red giant, supernova, neutron star
- Which of the following is the fusion occurring in our sun?
- lithium to beryllium
- helium to hydrogen
- hydrogen to helium
- helium to carbon
- During its main sequence life time, a star is kept from collapsing by
- the strong nuclear force
- heat that produces gas pressure
- the fact that stars are made up of very lightweight hydrogen gas
- the weak nuclear force
- Which type of star has the shortest life span?
- the smallest ones
- the middle sized ones
- the most massive ones
- Medium-sized stars end their life as a
- neutron star
- white dwarf
- black dwarf
- black hole
More Statistics Lesson
Data can be organized and summarized using a variety of methods. Tables are commonly used, and there are many graphical and numerical methods as well. The appropriate type of representation for a collection of data depends in part on the nature of the data, such as whether the data are numerical or nonnumerical.
In these lessons, we will learn some common graphical methods for describing and summarizing data: Frequency Distributions, Bar Graphs, Circle Graphs, Histograms, Scatterplots and Timeplots.
The frequency, or count, of a particular category or numerical value is the number of times that the category or value appears in the data. A frequency distribution is a table or graph that presents the categories or numerical values along with their associated frequencies. The relative frequency of a category or a numerical value is the associated frequency divided by the total number of data. Relative frequencies may be expressed in terms of percents, fractions, or decimals. A relative frequency distribution is a table or graph that presents the relative frequencies of the categories or numerical values. Note that the total of the relative frequencies is 100%; if decimals are used instead of percents, the total is 1. In other words, the sum of the relative frequencies in a relative frequency distribution is always 1 (or 100%).
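As a concrete illustration of these definitions, here is a minimal Python sketch (the data values are made up) that builds a frequency distribution and the corresponding relative frequency distribution:

```python
from collections import Counter

# Hypothetical data: favorite cookie flavors reported by a small sample.
data = ["chocolate", "sugar", "chocolate", "oatmeal",
        "chocolate", "sugar", "oatmeal", "chocolate"]

frequencies = Counter(data)   # frequency distribution: category -> count
total = len(data)

# Relative frequency = frequency divided by the total number of data values.
relative_frequencies = {category: count / total
                        for category, count in frequencies.items()}

for category, count in frequencies.items():
    print(f"{category:10s} frequency={count}  relative={relative_frequencies[category]:.2f}")

# The relative frequencies always sum to 1 (that is, 100%).
assert abs(sum(relative_frequencies.values()) - 1.0) < 1e-9
```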
Differences between frequency distribution table and relative frequency distribution table
A commonly used graphical display for representing frequencies, or counts, is a bar graph, or bar chart. In a bar graph, rectangular bars are used to represent the categories of the data, and the height of each bar is proportional to the corresponding frequency or relative frequency. All of the bars are drawn with the same width, and the bars can be presented either vertically or horizontally. Bar graphs enable comparisons across several categories, making it easy to identify frequently and infrequently occurring categories.

Bar graphs are commonly used to compare frequencies. They are sometimes also used to compare numerical data that could be displayed in a table, such as temperatures, dollar amounts, percents, heights, and weights.
A bar graph is a graph that compares amounts in each category to each other using bars.
How to read and interpret a bar graph?
Segmented Bar Graph
A segmented bar graph is used to show how different subgroups or subcategories contribute to an entire group or category. In a segmented bar graph, each bar represents a category that consists of more than one subcategory. Each bar is divided into segments that represent the different subcategories. The height of each segment is proportional to the frequency or relative frequency of the subcategory that the segment represents.
How to interpret percentage segmented bar charts?
Bar graphs can also be used to compare different groups using the same categories. Such a graph is sometimes called a double bar graph.
Interpreting Double Bar Graphs
How to interpret data shown in a double bar graph?
Circle graphs, often called pie charts, are used to represent data with a relatively small number of categories. They illustrate how a whole is separated into parts. The area of the circle graph representing each category is proportional to the part of the whole that the category represents.
Each part of a circle graph is called a sector. Because the area of each sector is proportional to the percent of the whole that the sector represents, the measure of the central angle of a sector is proportional to the percent of 360 degrees that the sector represents.
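For example, the central angle of each sector can be computed directly from the relative frequencies; a short sketch with made-up category counts:

```python
# Central angle of a sector = (relative frequency of the category) * 360 degrees.
# The category counts below are hypothetical.
counts = {"rock": 10, "pop": 25, "jazz": 5, "classical": 10}
total = sum(counts.values())  # 50

for category, count in counts.items():
    relative_frequency = count / total
    angle = relative_frequency * 360.0
    print(f"{category:10s} {relative_frequency:5.0%}  central angle = {angle:5.1f} degrees")

# The angles sum to 360 degrees, just as the relative frequencies sum to 100%.
```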
Creating a Circle Graph
How to create a circle graph, or "pie chart" from some given data?
When a list of data is large and contains many different values of a numerical variable, it is useful to organize it by grouping the values into intervals, often called classes. To do this, divide the entire interval of values into smaller intervals of equal length and then count the values that fall into each interval. In this way, each interval has a frequency and a relative frequency. The intervals and their frequencies (or relative frequencies) are often displayed in a histogram.
How to create a histogram from the given data?
How to create a relative frequency histogram?
Histograms are graphs of frequency distributions that are similar to bar graphs, but they have a number line for the horizontal axis. Also, in a histogram, there are no regular spaces between the bars. Any spaces between bars in a histogram indicate that there are no data in the intervals represented by the spaces.

A relative frequency histogram has the percentage of data values on the vertical axis rather than the frequency.
Step 1: Find the total number of data values.
Step 2: Find the percent of data values in each interval (organize in a table)
Step 3: Draw Histogram.
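The three steps can be mirrored in a few lines of code. The sketch below uses a hypothetical list of 20 grades (consistent with the Kyle example that follows, where one grade of 6 and five grades of 7 are mentioned) grouped into equal-width classes:

```python
# Hypothetical list of 20 homework grades.
grades = [6, 7, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 10, 10, 10, 10]

# Step 1: find the total number of data values.
total = len(grades)

# Classes (intervals) of equal length.
classes = [(6, 7), (7, 8), (8, 9), (9, 10), (10, 11)]

for low, high in classes:
    frequency = sum(1 for g in grades if low <= g < high)
    # Step 2: find the percent of data values in each interval.
    relative_frequency = frequency / total
    # Step 3: these counts (or percents) become the bar heights of the histogram.
    print(f"[{low},{high}): frequency={frequency:2d}  relative={relative_frequency:.0%}")
```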
To study the connection between a histogram and the corresponding relative frequency histogram, consider the histogram below showing Kyle's 20 homework grades for a semester. Notice that since each bar represents a single whole number (6, 7, 8, 9, or 10), those numbers are best placed in the middle of the bars on the horizontal axis. In this case Kyle has one grade of 6 and five grades of 7.
a) Make a relative frequency histogram of these grades by copying the histogram but making a scale that shows proportion of all grades on the vertical axis rather than frequency.
b) Compare the shape, centre, and spread of the two histograms.
Differences between a bar graph and a histogram
• Bar graph shows the number of items in specific categories.
• Drawn with space between the columns.
• Do not have to be organized into equal intervals of data.
• Bars show categories of data.
• Histogram shows frequency of data divided into equal intervals.
• No space between the columns.
• Must be organized into equal intervals of data.
• Bars show continuous data.
1. At Texas Middle School, a sample of 1,500 students was surveyed to determine which flavor of cookie they prefer to be served in their cafeteria.
Based on the data shown in the bar graph which of the following statements is true?
2. The histogram below shows the number of movie tickets sold before 5.00 P.M. last Saturday. Which of the following sets of data could have been used to create the histogram?
3. The manager of a carnival was trying to figure out which rides people liked the best. He recorded his information below. Use his bar graph to answer the questions.
a) Which ride did exactly 5 people like the best?
b) What percent of people said the pirate ship was their favorite ride?
c) Which ride did the least people like?
d) What percent of people said the Tilt O' Wheel was their favorite ride?
e) How many more people liked the Ferris Wheel than liked the Tilt O' Wheel?
f) What is the difference between the number of people who like the Pirate Ship and the number who like the Tilt O' Wheel?
g) Did more people like the Tilt O' Wheel or the Bumper cars?
h) What ride did exactly 15 people like the best?
i) If you combine the amount of people who liked the Tilt O' Wheel and the amount who liked the Pirate ship how many people would you have?
j) Which ride did the most people like?
All examples used thus far have involved data resulting from a single characteristic or variable. These types of data are referred to as univariate, that is, data observed for one variable. Sometimes data are collected to study two different variables in the same population of individuals or objects. Such data are called bivariate data. We might want to study the variables separately or investigate a relationship between the two variables. If the variables were to be analyzed separately, each of the graphical methods for univariate numerical data presented above could be applied.
To show the relationship between two numerical variables, the most useful type of graph is a scatterplot. In a scatterplot, the values of one variable appear on the horizontal axis of a rectangular coordinate system and the values of the other variable appear on the vertical axis. For each individual or object in the data, an ordered pair of numbers is collected, one number for each variable, and the pair is represented by a point in the coordinate system.

A scatterplot makes it possible to observe an overall pattern, or trend, in the relationship between the two variables. Also, the strength of the trend as well as striking deviations from the trend are evident. In many cases, a line or a curve that best represents the trend is also displayed in the graph and is used to make predictions about the population.
Scatter Plots : Introduction to Positive and Negative Correlation
A scatter plot is a graph of a collection of ordered pairs (x, y). The graph looks like a bunch of dots, but some of the graphs show a general shape or move in a general direction.

If the x-coordinates and the y-coordinates both increase, then there is positive correlation. This means that as the value of one variable increases, the other increases as well; the variables are related.

If one of the coordinates increases while the other decreases, then there is negative correlation. This means that as one variable increases, the other decreases.

If there seems to be no pattern and the points look scattered, then there is no correlation. This means that the two variables are not related: as one variable increases, there is no effect on the other variable.
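A quick numerical check of these three cases uses the Pearson correlation coefficient, whose sign matches the direction of the trend; the data lists below are made up for illustration:

```python
from statistics import correlation  # Pearson's r; available in Python 3.10+

x = [1, 2, 3, 4, 5]

y_positive = [2, 4, 5, 7, 9]   # both increase together       -> r close to +1
y_negative = [9, 7, 6, 4, 2]   # one increases, one decreases -> r close to -1
y_none     = [5, 1, 8, 2, 6]   # no pattern                   -> r close to 0

for label, y in [("positive", y_positive), ("negative", y_negative), ("none", y_none)]:
    print(f"{label:8s} correlation r = {correlation(x, y):+.2f}")
```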
Which scatterplots below show a linear trend?
Sometimes data are collected in order to observe changes in a variable over time. For example, sales for a department store may be collected monthly or yearly.

A time plot (sometimes called a time series) is a graphical display useful for showing changes in data collected at regular intervals of time. A time plot of a variable plots each observation corresponding to the time at which it was measured. A time plot uses a coordinate plane similar to a scatterplot, but the time is always on the horizontal axis, and the variable measured is always on the vertical axis. Additionally, consecutive observations are connected by a line segment to emphasize increases and decreases over time.
What is a time plot?
Time Series Plot
The table below lists the U.S. population from 1790 to 1990. The data come from the U.S. Census Bureau. Create a time series plot to display the data graphically.
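A minimal sketch of such a time series plot is shown below; since the census table itself is not reproduced here, the values used are an approximate subset given only for illustration:

```python
import matplotlib.pyplot as plt

# Approximate U.S. population figures (millions) for a few census years,
# used only to illustrate the shape of a time plot.
years = [1790, 1840, 1890, 1940, 1990]
population_millions = [3.9, 17.1, 62.9, 132.2, 248.7]

# Time always goes on the horizontal axis; consecutive observations are
# connected by line segments to emphasize increases and decreases over time.
plt.plot(years, population_millions, marker="o")
plt.xlabel("Year")
plt.ylabel("U.S. population (millions)")
plt.title("Time plot of U.S. population, 1790-1990")
plt.show()
```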
This story is adapted from The Human Cosmos: Civilization and the Stars, by Jo Marchant.
In February 1954, a US biologist named Frank Brown discovered something so remarkable, so inexplicable, that his peers essentially wrote it out of history. Brown had dredged a batch of Atlantic oysters from the seabed off New Haven, Connecticut, and shipped them hundreds of miles inland to Northwestern University in Evanston, Illinois. Then he put them into pans of brine inside a sealed darkroom, shielded from any changes in temperature, pressure, water currents, or light. Normally, these oysters feed with the tides. They open their shells to filter plankton and algae from the seawater, with rest periods in between when their shells are closed. Brown had already established that they are most active at high tide, which arrives roughly twice a day. He was interested in how the mollusks time this behavior, so he devised the experiment to test what they would do when kept far from the sea and deprived of any information about the tides. Would their normal feeding rhythm persist?
For the first two weeks, it did. Their feeding activity continued to peak 50 minutes later each day, in time with the tides on the oysters’ home beach in New Haven. That in itself was an impressive result, suggesting that the shellfish could keep accurate time. But then something unexpected happened, which changed Brown’s life forever.
The oysters gradually shifted their feeding times later and later. After two more weeks, a stable cycle reappeared, but it now lagged three hours behind the New Haven tides. Brown was mystified, until he checked an astronomical almanac. High tides occur each day when the moon is highest in the sky or lowest below the horizon. Brown realized that the oysters had corrected their activity according to the local state of the moon; they were feeding when Evanston—if it had been by the sea—would experience high tide. He had isolated these organisms from every obvious environmental cue. And yet, somehow, they were following the moon.
For a while, Brown’s experiment became infamous, one of the most controversial results in biology. Scientists were just starting to appreciate that living processes vary according to environmental cycles such as the time of day, but every other major figure in the field was convinced these rhythms are ultimately driven by internal clocks; Brown’s lone insistence that organisms are plugged into mysterious cosmic cues was widely dismissed. The disagreement reflected a deeper, philosophical split regarding the relationship that living creatures have with our planet and the wider cosmos. Are we autonomous, self-running machines, or is life in constant, subtle communication with the Earth, sun, moon, and even stars?
Brown’s critics never relented, and his decades of work were thrown out as flawed. Conventional scientific accounts barely mention him except as a cautionary tale: a warning about the dangers of straying too far from common sense. The field of “chronobiology” has since exploded, with researchers uncovering intricate networks of molecular cogs and gears that keep time inside our cells, allowing virtually all creatures on this planet to anticipate the daily and seasonal movements of the sun. But there’s still a fundamental mystery at the heart of biological clocks that has never been explained. And the stubborn trickle of evidence hinting that Brown might have been onto something is fast becoming a flood.
That life on Earth moves in harmony with the sun’s daily path through the sky has been known for thousands of years. We wake in the morning and sleep at night. Flowers open and close depending on the time of day. Birdsong heralds the dawn.
But these daily cycles were generally seen as passive responses to changing environmental signals such as temperature or light. It wasn’t until 1832 that a Swiss botanist named Augustin de Candolle first suggested that sleep-wake movements in plants such as Mimosa might result from an internal timer.
By the early 1950s, a handful of other scientists were taking an interest in biological rhythms. In Germany, eminent botanist Erwin Bünning recorded the “sleep movements” of bean-seedling leaves, while physiologist Jürgen Aschoff discovered a 24-hour rhythm in human body temperature by experimenting on himself. In the United States, the British-born biologist Colin Pittendrigh started investigating insect rhythms after noticing daily cycles in mosquito activity while tackling malaria in Trinidad, and Romanian-born Franz Halberg entered the field after his drug test results were ruined by daily fluctuations in the levels of white blood cells in mice.
Whereas Brown was captivated by the influence of the moon, his rivals focused on 24-hour cycles. Whatever species they studied, they too found that in constant conditions, the rhythms continued. But in their experiments, the speed of the rhythms changed slightly in the absence of external cues, so that the peaks and troughs gradually drifted with respect to the solar day.
Different individuals ended up with varying cycle lengths—each close to, but not exactly, 24 hours. The researchers concluded that the rhythms must be driven by private, internal timers within the organism’s cells. Under normal conditions, the cycles were nudged by environmental cues such as light and temperature. But they were perfectly capable of running on their own.
At first, Brown thought this too. But he started to doubt it was possible. In his lab, crabs’ lunar and solar cycles persisted accurately for months, even when apparently isolated from the surrounding environment; he couldn’t imagine how an independent, internal clock could keep such good time. And then, in 1954, came the experiment with the time-shifting oysters. Despite being in a sealed darkroom, they had adjusted their activity according to the local movements of the moon. Instead of relying on inner timers, he was convinced they were sensing signals from the sky.
Brown decided to investigate the most fundamental biological process he could think of: metabolism. He studied sprouting potatoes—in experiments that ran for years—as well as bean seeds, mealworm larvae, chick eggs, and hamsters, shielding them all from changes in temperature, pressure, and light. Although they were supposedly cut off from the outside world, he saw patterns in their metabolic rate that matched not just the movements of the sun and moon, but pressure and weather changes in the Earth’s atmosphere. Even the potatoes “knew” not just the hour but the season of the year. It was as if life were pulsing in time with the planet.
Brown concluded that the organisms were sensitive to external geophysical factors, perhaps minute fluctuations in gravity, or even subtle forces that hadn’t yet been discovered. In his rivals’ experiments, supposedly proving the existence of independent clocks, Brown argued that the subjects weren’t cut off from the environment after all. They were bathed in—and influenced by—subtle, rhythmic fields that varied as the Earth turned.
Such ideas were viewed as threatening by his peers. Several of them had fought to have their own work on daily cycles taken seriously by other scientists. Their professional respectability hinged on using rigorous, reproducible methods, and basing their theories on impeccable physical principles of cause and effect; Brown’s claims of mysterious forces were dangerous nonsense that jeopardized the field. His measurements weren’t accurate enough, they insisted, or he was seeing patterns in his highly complex data that simply weren’t there. Yet Brown was charismatic and articulate, and he was swaying public opinion.
Something had to be done.
The first major blow came in 1957, with an extraordinary paper in the leading US scientific journal, Science, in which a respected ecologist named LaMont Cole claimed that by juggling random numbers, he had “discovered the exogenous rhythm of the unicorn.” The satire was aimed at Brown and his team, and its message was clear: Their results were as imaginary as the unicorn itself. It was an unprecedented, personal attack and it “hit us very hard,” Brown recalled later. “We were everywhere encountering innuendos from this article.” In 1959, Halberg followed up by coining the term that now defines the field: “circadian.”
It’s often said to refer to 24-hour cycles, but that isn’t quite right. It comes from the Latin meaning “about a day,” and Halberg chose it precisely to emphasize the key flaw in Brown’s theory: that most free-running daily rhythms are not exactly 24 hours long. Tensions came to a head in June 1960, at a prestigious conference on biological clocks held at Cold Spring Harbor, near New York City.
This event is now seen as chronobiology’s defining moment, at which Pittendrigh and the others set out their vision of circadian rhythms as internal and self-sustaining, controlled by oscillating biochemical mechanisms analogous to the cogs and gears of a clock. Everything was looking good for the young field, with its new terminology and a robust theoretical framework. There was just one problem: Brown. He wasn’t invited at first, but he went anyway, the only speaker to argue for a core pacemaker driven instead by cosmic cues. He faced a largely hostile audience.
One of Brown’s arguments came down to temperature. Everyone agreed that the timing of the biological rhythms was surprisingly resistant to even quite dramatic temperature changes. Crabs switch color and flies emerge from their pupae at the correct time, regardless of how hot or cold they are. Dried seeds stored in constant conditions still showed an annual rhythm in their capacity to germinate, whether kept at 20 degrees below freezing or 50 degrees above. Yet the speeds of biochemical reactions vary hugely with temperature, Brown pointed out; as a general rule, rates double with every 10 degree C rise. His rivals could provide no explanation of how any biochemical mechanism could create a clock that was immune to such influences, whereas external, unvarying cues driven by the sun and moon would explain this property perfectly. By insisting that an internal timer existed, he warned, they risked “chasing a ghost.” Pittendrigh retorted that it was Brown, with his mysterious, subtle influences, who was chasing ghosts.
After the meeting, Brown noticed that his papers were increasingly rejected, and that others in the field now no longer cited his research at all. According to Brown, Halberg eventually admitted that at around this time, his rivals privately agreed to block, ignore, or discredit him for the sake of the field’s development. Whether or not that’s correct, they certainly stepped back from engaging with Brown and his ideas; from then on it was almost as if he didn’t exist.
Brown and his cosmic cues were cast out, and the study of biological rhythms became the study of circadian clocks. The resulting field has since transformed our understanding of how life works. Aschoff, for example, embarked on a pioneering series of experiments to investigate what happens when people are cut off from the sun. After conducting pilot studies in an old World War II bunker, he built a dedicated isolation facility into a Bavarian hillside in 1964. Working with a physicist colleague named Rütger Wever, he shut students inside it for weeks at a time, tracking them with a battery of instruments including motion sensors and a rectal probe. The soundproof chamber was comfortable, with a sitting room, shower, and small kitchen, but all clues to the time of day—such as a clock, radio, or telephone—were banished.
Aschoff himself was the first volunteer, observed by Wever. At the end of his 10-day stay, he was “highly surprised” to discover on release that his last waking time was 3 pm. After that, more than 300 volunteers “went underground” for three to four weeks each.
Just as in other species, Aschoff found that the volunteers' daily rhythms continued even in constant conditions, showing that humans have innate circadian clocks too. When deprived of information from the outside world, the sleep-wake cycle usually ran slightly slower than the solar day, with a period averaging around 25 hours.
Over the years, he and Wever showed that the cycles could be trained to follow signals such as bright light, temperature, and social cues. For a few of the volunteers, sleep patterns varied wildly, with day lengths reaching up to 50 hours, even though they didn’t realize it. But their physiology—such as body temperature or excretion of metabolites—almost always continued to oscillate within a narrow band of 24 to 26 hours. This meant that their sleep-wake patterns fell out of step with their physiology, a phenomenon that Aschoff called “desynchronization.” It was one of his most important discoveries—the first hint that there are multiple clocks in the body, which drive different functions, and that without appropriate external cues, they can become uncoupled. Volunteers reported feeling less well when this happened, leading Aschoff to warn that cutting ties with the sun, for example with regular shift work, might have damaging consequences for health.
The first clue to how the clocks actually work came across the Atlantic in 1971, from a Californian graduate student studying daily rhythms in fruit flies. Ronald Konopka isolated three mutant fly strains that lost the ability to keep time: one with a slowed-down rhythm of 29 hours, one with a too-short period of 19 hours, and one with no cycles at all. All three, it turned out, had different errors in the same gene, which was subsequently identified by other researchers in 1984. They named the gene “period,” and found that the protein it encodes rises and falls in abundance every 24 hours. At last, they had a glimpse of the machinery inside the biological clock. Chronobiologists had found their ghost.
Since then, many other clock genes have been identified. They encode proteins that regulate each other in a complex network of feedback loops, ultimately creating what Brown had thought impossible: a steady cycle that pulses roughly once each day, in time with the sun. Similar systems are found not just in fruit flies but in every type of life from bacteria to people. These sun clocks tell animals when to feed, when to run, when to sleep, and when to digest. They allow plants to ration their starch reserves through the night, and to get their photosynthesis machinery up and running for dawn. They tell fungi when to form spores; insects when to emerge from their pupae; and signal thousands of species of ocean plankton to sink before dawn and rise to the surface each night—the largest movement of biomass on the planet. By tracking the shifting times of sunrise and sunset, the clocks can also drive seasonal changes, telling organisms precisely when to migrate, molt, or reproduce.
Meanwhile, in humans, the study of circadian rhythms has become one of the hottest areas in medicine. Inner clocks regulate our sleep patterns, as well as body functions such as digestion, blood pressure, temperature, blood sugar levels, immune responses, and even cell division. As Aschoff warned, we ignore these rhythms at our peril. In the two centuries since the first artificial lights switched on, our lifestyles have become increasingly detached from the 24-hour cycle of sunrise and sunset. Many of us stay up late, work varying shifts, hop between time zones. We work in gloomy offices during the day and are exposed to light from computers, TVs, and smartphones at night. That is a problem, because although our body clocks can run independently, if they aren’t reinforced by external cues they can veer wildly off course.
In 2017, the field of chronobiology received the ultimate scientific recognition: a Nobel Prize, for the researchers who identified the period gene. “We on this planet are slaves to the sun,” commented the prominent biologist Paul Nurse. “The circadian clock is embedded in our mechanisms of working, our metabolism, it’s embedded everywhere.”
What, then, was the mysterious force that allowed Brown’s oysters to track the heavens? Brown knew his rivals wouldn’t take him seriously unless he could suggest a mechanism. So he spent the summer of 1959 carefully monitoring the creepings of 34,000 snails, collected from the mud flats of the New England coast. He was astounded to find that the snails could distinguish between different compass directions.
Not only that, their preferred orientation varied over time, following both the solar and lunar day. He could influence or disrupt this behavior using magnets. Finally, he believed he could explain how animals might detect local time even in a sealed lab: They were sensing daily changes in Earth’s magnetic field.
This field is mostly generated by molten iron that circulates within Earth’s outer core. Overall, it’s shaped as if the planet contains a huge bar magnet, with north at one pole and south at the other. But it is also influenced by external factors such as weather and magnetic storms—as well as the movements of the sun and moon. Radiation from the sun ionizes atoms in the upper atmosphere, producing free electrons. Meanwhile the sun’s heat causes atmospheric tidal winds that move these charged particles across the Earth’s field lines. The resulting electric current creates its own magnetism: a 24-hour ripple superimposed on the larger magnetic field of the Earth. A similar, smaller ripple occurs every lunar day due to the gravity of the moon. These effects interact with each other, creating peaks and troughs during spring and neap tides. They’re also dependent on the amount of sunlight falling on the upper atmosphere, so they vary with latitude and with the seasons.
The Earth’s geomagnetic field is extremely weak, around a hundred times smaller than that of a standard fridge magnet, and the solar and lunar tides are even tinier. Brown was one of the very first researchers to suggest that animals might have a magnetic sense, and he had no idea how his snails might detect such subtle changes.
But he knew it could be clinching evidence for his theory of external cosmic cues. He excitedly presented his results at the biological clocks symposium in 1960, telling the audience that living things are fantastically sensitive to very weak magnetic fields. Although we can’t see it, we’re all immersed in an electromagnetic ocean, he insisted, with waves, tides, and ripples that shift according to the relative positions of the Earth, sun, and moon, keeping organisms in constant touch with the state of the solar system and the time of day.
The bombshell didn’t convince his rivals, though; in fact it hardened them against him. Just as they were setting out rigorous principles for the study of biochemical internal clocks, the notion of a subtle magnetic sense was beyond the pale. And yet, though they shunned Brown in public, they didn’t completely ignore his idea: quite the reverse.
Aschoff didn’t mention the shielding experiment or the apparent role of electromagnetic fields in any of his own papers on desynchronization.
What’s rarely mentioned today about Aschoff and Wever’s famous bunker, built just a few years later, is that it contained not just one underground apartment, but two. The parallel units were almost identical, with matching beds, kitchens, and record players. But there was a very important difference: One of them was completely enclosed within a hefty capsule of cork, coiled wire, glass wool, and steel, through which no electromagnetic radiation could pass; anyone living inside was completely cut off from the Earth’s magnetic field. The aim was to show that the shielding made no difference to the volunteers’ biological clocks, and prove, once and for all, that Brown was wrong.
Between 1964 and 1970, more than 80 volunteers stayed in the two units. As Aschoff predicted, their circadian rhythms did continue. But there was a problem; the results in the two groups were not the same. In the unshielded bunker, isolated from clocks and sunlight but still exposed to magnetic fields, people’s sleep and waking patterns departed from the solar day, reaching an average period of 24.8 hours.
But when magnetic fields were also blocked, the volunteers’ circadian cycles deteriorated further. Their day length slipped even longer. There was significantly more variation between individuals. And their different rhythms were much more likely to become uncoupled. As mentioned earlier, Aschoff championed desynchronization as one of his key discoveries. Yet over those six years, it only ever occurred in the shielded bunker, cut off from the Earth’s magnetic field. Wever found that if he exposed the volunteers to a similar artificial field, all of these effects were reversed.
The results proved that we do have an inner clock that runs independently, regardless of any electromagnetic information from the outside world. And yet, that clearly wasn’t the whole story. Even though the volunteers couldn’t consciously perceive the Earth’s vanishingly weak magnetism, the results suggested that their bodies could somehow sense it, and that this had a profound impact on the workings of their biological clocks. Wever published the data in a series of now-obscure papers in the 1970s; it was a “remarkable” result, he said, the first scientific evidence that humans are influenced by natural magnetic fields. But Aschoff didn’t put his name to them, and he didn’t mention the shielding experiment or the apparent role of electromagnetic fields in any of his own papers on desynchronization.
The existence of the second chamber was largely forgotten, and circadian rhythm research continued as if the magnetism experiment never happened.
Just as these researchers were ignoring any links to magnetic fields, however, other biologists were being forced to address their effects, despite the dubious connotations. These scientists were studying the impressive ability of many animals to navigate across the planet, from turtles and salamanders to birds and bees. How did millions of monarch butterflies each year find their way thousands of miles from North America to a particular patch of fir groves in central Mexico?
How did female loggerhead turtles, after growing up in the open ocean, return to lay their eggs at the very same beach where they hatched more than 10 years earlier? How did racing pigeons fly straight home from distant places they had never previously visited?
Since the 1950s, biologists had been realizing that many species are expert in deciphering celestial signals. Butterflies track the sun; moths follow the moon. Starlings orient north from the celestial pole around which the stars turn. Dung beetles roll their dung balls in straight lines by orienting against the glowing streak of the Milky Way. Animals often combine these visual cues with information from their circadian clocks, allowing them to compensate for the time of day. They are clued in to their place in the wider cosmos, using the circling heavens not just to tell the time, but to navigate around the globe.
But this wasn’t enough to explain the behavior of many species, some of which could still find their way even when the sky was overcast. It turned out that some detect patterns in the polarized light of sunlight and even moonlight, allowing them to pinpoint the sun or moon’s position even through clouds. Then in 1972, a German graduate student named Wolfgang Wiltschko showed that artificial magnetic fields, similar in strength to Earth’s, could disrupt or alter the direction in which robins tried to migrate. It was the start of a flood of evidence that animals, from pigeons and sparrows to lobsters and newts, are sensitive to the magnetic field lines generated as the Earth spins in space. Wood mice and mole rats use them when siting their nests; cattle and deer orient their bodies along them while grazing; dogs—for unknown reasons—prefer to point themselves north or south when they wee or poo. Other species, such as turtles, even appeared to have a magnetic map sense, telling them not just direction but position. Life, it seems, really is plugged into the invisible, electromagnetic world.
There was huge skepticism at first, of course. Natural magnetic fields were thought to be far too weak to influence biological tissue, so how could the signal possibly be detected? Well, life finds a way. Or, as it turns out, several ways. Fish have an electrical solution: They use networks of jelly-filled canals to measure the flow of current as they swim through a field. Another method involves physical forces. In 1975, researchers discovered “magnetotactic” bacteria that use chains of tiny magnetic crystals as compass needles, to steer themselves down magnetic field lines.
In 1978, German biophysicist Klaus Schulten suggested a third possibility after studying a class of obscure chemical reactions influenced by quantum effects. Electrons have a quantum property that physicists call “spin,” and Schulten was investigating how light energy can trigger the formation of short-lived pairs of “radicals”—molecules with lone electrons, which can spin in either the same or opposite directions.
These two spin states are chemically different, and the amount of time the electrons spend in each state can be influenced by magnetic fields. So even if a field is too weak to influence a chemical reaction directly, light creates an excited state in which it can then nudge the outcome one way or another. Imagine a fly unable to move a stone block by flying into it. If you balance the block on its edge, a fly striking at just the right position and moment might be enough to tip it and create a much larger effect.
It was an example of what everyone had thought impossible: a mechanism by which an extremely weak magnetic field can produce chemical cues large enough to be detected by the nervous system. “I thought, well, maybe that’s the internal compass the biologists were looking for,” said Schulten.
But no receptors capable of forming radical pairs were known in living organisms, and when he submitted his theory to the prominent journal Science, it was swiftly rejected. “A less bold scientist,” said one reviewer, “would have designated this piece of work for the waste paper basket.”
Two decades later, biologists studying fruit flies discovered proteins called cryptochromes, which form radical pairs when exposed to blue light. Cryptochromes are now known to be common in organisms from plants to fish; they’re found in insect antennas, and the retinas of mammals and birds. And there’s good evidence in several species that cryptochromes are indeed involved in magnetosensing and navigation. Researchers have suggested that they enable birds to “see” magnetic field lines, perhaps by perceiving a brighter image if they are facing in a certain direction.
Humans have cryptochromes too. Until recently, most scientists agreed that people can’t sense magnetic fields. But in 2011, researchers put the human cryptochrome protein into fruit flies that lacked their own version, and found that it restored the flies’ magnetosensing ability perfectly. The finding hints that Wever was right: Even if we don’t consciously perceive it, our bodies are sensitive to magnetic fields. And here’s the really interesting thing: Cryptochromes are actually best known not as magnetosensors but for quite a different reason. They are also crucial components of biological clocks.
This discovery that biological clock machinery is sensitive to magnetism is still very new, and it’s not yet clear precisely if or how magnetic fields are influencing our sense of time. One theory is that at least some species tell the time using daily tidal variations in Earth’s field, just as Brown originally suggested. Others think the link may relate to how clocks resist temperature changes: a question posed by Brown and never fully answered. If you lose temperature compensation, you’d expect the body’s rhythms to continue but to become less stable, more variable, and to start uncoupling, just as Wever found in the bunker shielded from magnetic fields.
Perhaps it is an external cue after all—the Earth’s magnetic field, as influenced by the sun and moon—that enables biological clocks to run regardless of temperature: not necessarily driving behavior directly, but providing the fundamental “tick” of the clock.
From The Human Cosmos: Civilization and the Stars, by Jo Marchant, published by Dutton, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2020 by Jo Marchant
About This Chapter
Who's it for?
This unit of our High School Precalculus Homeschool course will benefit any student who is trying to learn about working with quadratic functions and equations. There is no faster or easier way to learn about quadratic functions and equations. Among those who would benefit are:
- Students who require an efficient, self-paced course of study to learn to complete the square, understand parabolas, graph quadratic inequalities, and to factor quadratic equations.
- Homeschool parents looking to spend less time preparing lessons and more time teaching.
- Homeschool parents who need a precalculus curriculum that appeals to multiple learning types (visual or auditory).
- Gifted students and students with learning differences.
How it works:
- Students watch a short, fun video lesson that covers a specific unit topic.
- Students and parents can refer to the video transcripts to reinforce learning.
- Short quizzes and a Quadratic Functions and Equations unit exam confirm understanding or identify any topics that require review.
Quadratic Functions and Equations Unit Objectives:
- Learn to solve quadratics with complex numbers.
- Discover how to graph a system of quadratic inequalities.
- Study parabolas in intercept, standard and vertex form.
- Use the FOIL in reverse method.
- Apply quadratic functions to motion.
- Solve polynomial problems with non-1 leading coefficients.
1. What is a Parabola?
A parabola is the U shape that we get when we graph a quadratic equation. We actually see parabolas all over the place in real life. In this lesson, learn where, and the correct vocab to use when talking about them.
2. Parabolas in Standard, Intercept, and Vertex Form
By rearranging a quadratic equation, you can end up with an infinite number of ways to express the same thing. Learn about the three main forms of a quadratic and the pros and cons of each.
3. How to Factor Quadratic Equations: FOIL in Reverse
So, you know how to multiply binomials with the FOIL method, but can you do it backwards? That's exactly what factoring is, and it can be pretty tricky. Check out this lesson to learn a method that will allow you to factor quadratic trinomials with a leading coefficient of 1.
4. Factoring Quadratic Equations: Polynomial Problems with a Non-1 Leading Coefficient
Once you get good at factoring quadratics with 1x squared in the front of the expression, it's time to try ones with numbers other than 1. It will be the same general idea, but there are a few extra steps to learn. Do that here!
5. How to Complete the Square
Completing the square can help you learn where the maximum or minimum of a parabola is. If you're running a business and trying to make some money, it might be a good idea to know how to do this! Find out what I'm talking about here.
6. Completing the Square Practice Problems
Completing the square is one of the most confusing things you'll be asked to do in an algebra class. Once you get the general idea, it's best to get in there and actually do a few practice problems to make sure you understand the process. Do that here!
7. How to Solve a Quadratic Equation by Factoring
If your favorite video game, 'Furious Fowls,' gave you the quadratic equation for each shot you made, would you be able to solve the equation to make sure every one hit its target? If not, you will after watching this video!
8. How to Solve Quadratics with Complex Numbers as the Solution
When you solve a quadratic equation with the quadratic formula and get a negative on the inside of the square root, what do you do? The short answer is that you use an imaginary number. For the longer, more helpful answer, check out this lesson!
9. Graphing & Solving Quadratic Inequalities: Examples & Process
When you finish watching this video lesson, you will be able to graph and solve your own quadratic inequality. Learn what steps you need to take and what to watch for.
10. Graphing a System of Quadratic Inequalities: Examples & Process
What is a system of quadratic inequalities? What are the steps to graphing them? Find the answers to these questions by watching this video lesson. When you finish, you will be able to graph these yourself.
11. Applying Quadratic Functions to Motion Under Gravity & Simple Optimization Problems
Quadratic functions are very useful in the real world. Watch this video lesson and learn how these functions are used to model real world events such as a falling ball.
12. Using Quadratic Functions to Model a Given Data Set or Situation
Watch this video lesson to learn how quadratic functions are used to model situations and data gathered from real world scenarios. See how our function fits our data and situation well.
One of Rust's core concepts is the function, a building block that lets you organize and reuse your code. Functions in Rust follow a few simple rules, making them easy to create and understand. Let's jump in and learn the basics of Rust functions, how to write them, and how to use them effectively.
What is a Function?
In Rust, a function is a named sequence of statements that takes a set of input values, performs some operations, and returns an output value. Functions provide a way to break up your code into reusable, modular components. They're similar to methods in other programming languages.
Defining a Function in Rust
In Rust, you create a function using the fn keyword, followed by the function name, a parenthesized list of input parameters, a return type (optional), and a code block. Here's a simple example of a Rust function that takes two integers as input and returns their sum:
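A minimal sketch of such a function, using the names the surrounding text refers to (add, a, b, i32); the exact original listing may have differed:

```rust
fn add(a: i32, b: i32) -> i32 {
    // The last expression (no semicolon) is the value that gets returned.
    a + b
}
```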
This function is called add, and it takes two parameters, a and b, both of type i32. The return type is also i32. The function body consists of a single expression, a + b, which is the value that gets returned.
Calling a Function
To call a function in Rust, you simply write the function name followed by a parenthesized list of arguments. Here's an example of how to call our add function:
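A minimal sketch of the call site described below; the second argument (10) is an assumed value, since only the 5 survives in the surrounding text:

```rust
fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // 10 is an assumed value for the second argument.
    let sum = add(5, 10);
    println!("The sum is: {}", sum);
}
```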
This program defines a main function, which is the entry point of every Rust program. Inside main, we call the add function with two arguments (5 and, in the sketch above, an assumed 10) and store the result in a variable called sum. We then use the println! macro to print the sum.
Functions with No Return Value
In Rust, functions that don't explicitly return a value are considered to return the unit type (). This is similar to void in languages like C and Java. Here's an example of a function that doesn't return anything and just prints a message:
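A minimal sketch of such a function; the exact message printed is an assumption:

```rust
fn greet() {
    // No return type is specified, so this function returns the unit type ().
    println!("Hello from Rust!");
}

fn main() {
    greet();
}
```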
The greet function takes no parameters and has no return type specified. It simply prints a greeting message. In the main function, we call the greet function to display the message.
Function Ownership and Borrowing
In Rust, functions also interact with the ownership and borrowing system. When you pass a value to a function, the ownership of that value is transferred to the function. Similarly, when a function returns a value, ownership is transferred back to the caller.
Here's an example that demonstrates ownership transfer between functions:
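A minimal sketch of the move described below; the function and variable names follow the surrounding text, while the string contents are assumptions:

```rust
fn consume_string(s: String) {
    // `s` now owns the String; it is dropped when this function returns.
    println!("Consumed: {}", s);
}

fn main() {
    let my_string = String::from("hello");
    consume_string(my_string);
    // Using `my_string` again here would not compile:
    // println!("{}", my_string); // error[E0382]: borrow of moved value
}
```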
In this example, consume_string takes a String as a parameter. When we call this function in main, the ownership of my_string is transferred to consume_string. After the function call, my_string is no longer valid, and trying to use it again would result in a compile-time error.
To avoid transferring ownership in a function, you can use borrowing by passing a reference to the value instead. For more information on borrowing, check out the Rust borrowing article.
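For instance, a small sketch of passing a reference instead; the function name print_length is illustrative, not from the original article:

```rust
fn print_length(s: &String) {
    // Borrowed: we can read `s`, but the caller keeps ownership.
    println!("Length: {}", s.len());
}

fn main() {
    let my_string = String::from("hello");
    print_length(&my_string);
    // `my_string` is still valid here because it was only borrowed.
    println!("{}", my_string);
}
```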
Now that you have a solid grasp of Rust functions, you can start building modular and reusable Rust programs. Don't forget to explore other Rust concepts like pattern matching and error handling to further improve your Rust-fu!
What is a Rust function and why do we use them?
A Rust function is a named sequence of statements that takes a specific set of inputs, performs a task, and returns a value. Functions are essential building blocks in Rust, allowing you to write modular, reusable, and maintainable code. By using functions, you can break down complex tasks into smaller, more manageable pieces, which makes your code easier to understand and debug.
How do I define a function in Rust?
To define a function in Rust, you use the fn keyword followed by the function name, a parenthesized list of input parameters, a return type (if the function returns a value), and a block of code. Here's a simple example:
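A minimal sketch matching the description that follows; the greeting text itself is an assumption:

```rust
fn greet(name: &str) {
    // Takes a string slice and prints a greeting; returns nothing.
    println!("Hello, {}!", name);
}
```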
In this example, we define a function called greet that takes a single input parameter name of type &str (a string slice) and has no return value (as indicated by the absence of an arrow -> and a return type).
How do I call a function in Rust?
To call a function in Rust, you simply use its name followed by a parenthesized list of argument values that match the function's input parameters. Here's an example of how to call the greet function defined earlier:
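A minimal sketch of that call; the greeting text is an assumption:

```rust
fn greet(name: &str) {
    println!("Hello, {}!", name);
}

fn main() {
    // Call greet with the argument "Alice".
    greet("Alice");
}
```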
In this example, we call the greet function with the argument "Alice" within the main function, which is the entry point of a Rust program.
How do I return a value from a Rust function?
To return a value from a Rust function, you need to specify the return type using the arrow -> followed by the type, and then use the return keyword followed by the value you want to return. Alternatively, you can omit the return keyword and end the function with an expression that evaluates to the return value. Here's an example:
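A minimal sketch of such a function, using the names mentioned in the explanation that follows:

```rust
fn add(a: i32, b: i32) -> i32 {
    // Equivalent to `return a + b;` — the final expression is returned.
    a + b
}
```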
In this example, we define a function called add that takes two input parameters, a and b, both of type i32 (32-bit integers), and returns a value of the same type. The function simply adds the two input values and returns the result.
How can I use functions to create more modular and maintainable code in Rust?
By breaking down complex tasks into smaller functions, you can make your Rust code more modular and maintainable. Each function should have a single, well-defined purpose, making it easier to understand, test, and debug. You can also reuse functions in multiple places within your code, which helps to reduce duplication and improve code maintainability. Here's an example of using functions to create a more modular Rust program:
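A minimal sketch of what such a program might look like; only print_result is named in the surrounding text, so the helper name compute_sum and the input values are assumptions:

```rust
// Assumed helper name; only print_result is named in the original text.
fn compute_sum(a: i32, b: i32) -> i32 {
    a + b
}

fn print_result(value: i32) {
    println!("The result is: {}", value);
}

fn main() {
    let result = compute_sum(3, 4);
    print_result(result);
}
```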
In this example, we define two functions, each with a specific purpose: a small helper that computes a value (shown as compute_sum in the sketch above) and print_result, which prints it. The main function then calls these functions to perform the desired task, making the code more modular and easier to understand.
Some basic geometry concepts, words and notations that you would need to know are points, lines, line segments, rays, planes, and angles.
The following table gives some geometry concepts, words and notations. Scroll down the page for examples, explanations and solutions.
We may think of a point as a "dot" on a piece of paper or the pinpoint on a board. In geometry, we usually identify this point with a number or letter. A point has no length, width, or height - it just specifies an exact location. It is zero-dimensional.
Every point needs a name. To name a point, we can use a single capital letter. The following is a diagram of points A, B, and M:
We can use a line to connect two points on a sheet of paper. A line is one-dimensional. That is, a line has length, but no width or height. In geometry, a line is perfectly straight and extends forever in both directions. A line is uniquely determined by two points.
Lines need names just like points do, so that we can refer to them easily. To name a line, pick any two points on the line.
The line passing through the points A and B is denoted by AB with a two-headed arrow drawn above it.
A set of points that lie on the same line are said to be collinear.
Pairs of lines can form intersecting lines, parallel lines, perpendicular lines and skew lines.
Because the length of any line is infinite, we sometimes use parts of a line. A line segment connects two endpoints. A line segment with two endpoints A and B is denoted by AB with a bar drawn above it. A line segment can also be drawn as part of a line.
The midpoint of a segment divides the segment into two segments of equal length. The diagram below shows the midpoint M of the line segment AB. Since M is the midpoint, we know that the lengths AM and MB are equal.
A ray is part of a line that extends without end in one direction. It starts from one endpoint and extends forever in that direction. A ray starting from point A and passing through B is denoted by AB with an arrow drawn above it, pointing from A toward B.
Planes are two-dimensional. A plane has length and width, but no height, and extends infinitely on all sides. Planes are thought of as flat surfaces, like a tabletop. A plane is made up of an infinite number of lines. Two-dimensional figures are called plane figures.
All the points and lines that lie on the same plane are said to be coplanar.
Space is the set of all points in the three dimensions - length, width and height. It is made up of an infinite number of planes. Figures in space are called solids.
Fundamental Concepts of Geometry
This video explains and demonstrates the fundamental concepts (undefined terms) of geometry: points, lines, ray, collinear, planes, and coplanar. The basic ideas in geometry and how we represent them with symbols.
A point is an exact location in space. Points are shown as dots on a plane in 2 dimensions or as a dot in space in 3 dimensions. A point is labeled with a capital letter. It does not take up any space.
A line is a geometric figure that consists of an infinite number of points lined up straight that extend in both directions forever (indicated by the arrows at the end). A line is identified by a lower case letter or by two points that the line passes through. There is exactly 1 line through two points. All points on the same line are called collinear. Points not on the same line are noncollinear.
Two lines are either parallel or they will meet at a point of intersection.
A line segment is a part of a line with two endpoints. A line segment starts and stops at two endpoints.
A ray is part of a line with one endpoint and extends in one direction forever.
A plane is a flat 2-dimensional surface. A plane can be identified by 3 points in the plane or by a capital letter. There is exactly 1 plane through three points. The intersection of two planes is a line.
Coplanar points are points in one plane.
How to measure angles and types of angles
An angle consists of two rays with a common endpoint. The two rays are called the sides of the angle and the common endpoint is the vertex of the angle.
Each angle has a measure generated by the rotation about the vertex. The measure is determined by the rotation of the terminal side about the initial side. A counterclockwise rotation generates a positive angle measure. A clockwise rotation generates a negative angle measure. The units used to measure an angle are either in degrees or radians.
Angles can be classified based upon their measure: acute angle, right angle, obtuse angle, and straight angle.
If the sum of measures of two positive angles is 90°, the angles are called complementary
If the sum of measures of two positive angles is 180°, the angles are called supplementary
1) Two angles are complementary. One angle measures 5x degrees and the other angle measures 4x degrees. What is the measure of each angle?
2) Two angles are supplementary. One angle measures 7x degrees and the other measures (5x + 36) degrees. What is the measure of each angle?
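One possible worked solution for each problem (a quick check, not taken from the original lesson): for problem 1, 5x + 4x = 90, so 9x = 90 and x = 10; the angles measure 50° and 40°. For problem 2, 7x + (5x + 36) = 180, so 12x = 144 and x = 12; the angles measure 84° and 96°.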
The Opposite Angle Theorem (OAT)
When two straight lines cross, opposite angles are equal
The Angle Sum of a Triangle Theorem
The interior angles of any triangle have a sum of 180°
The Exterior Angle Theorem (EAT)
Any exterior angle of a triangle is equal to the sum of the opposite interior angles.
Parallel Lines Theorem (PLT)
Whenever a pair of parallel lines is cut by a transversal
a) corresponding angles are equal (PLT-F)
b) alternate angles are equal (PLT-Z)
c) interior angles have a sum of 180° (PLT-C)
An emulsion is a mixture of two or more liquids that are normally immiscible (unmixable or unblendable). Emulsions are part of a more general class of two-phase systems of matter called colloids. Although the terms colloid and emulsion are sometimes used interchangeably, emulsion should be used when both phases, dispersed and continuous, are liquids. In an emulsion, one liquid (the dispersed phase) is dispersed in the other (the continuous phase). Examples of emulsions include vinaigrettes, homogenized milk, mayonnaise, and some cutting fluids for metal working.
The word "emulsion" comes from the Latin word for "to milk", as milk is an emulsion of fat and water, along with other components.
Two liquids can form different types of emulsions. As an example, oil and water can form, first, an oil-in-water emulsion, wherein the oil is the dispersed phase, and water is the dispersion medium. (Lipoproteins, as used by all complex living organisms, are one example of this.) Second, they can form a water-in-oil emulsion, wherein water is the dispersed phase and oil is the external phase. Multiple emulsions are also possible, including a "water-in-oil-in-water" emulsion and an "oil-in-water-in-oil" emulsion.
Emulsions, being liquids, do not exhibit a static internal structure. The droplets dispersed in the liquid matrix (called the “dispersion medium”) are usually assumed to be statistically distributed.
The term "emulsion" is also used to refer to the photo-sensitive side of photographic film. Such a photographic emulsion consist of silver halide colloidal particles dispersed in a gelatin matrix. Nuclear emulsions are similar to photographic emulsions, except that they are used in particle physics to detect high-energy elementary particles.
Appearance and properties
Emulsions contain both a dispersed and a continuous phase, with the boundary between the phases called the "interface". Emulsions tend to have a cloudy appearance because the many phase interfaces scatter light as it passes through the emulsion. Emulsions appear white when all light is scattered equally. If the emulsion is dilute enough, higher-frequency and low-wavelength light will be scattered more, and the emulsion will appear bluer – this is called the "Tyndall effect". If the emulsion is concentrated enough, the color will be distorted toward comparatively longer wavelengths, and will appear more yellow. This phenomenon is easily observable when comparing skimmed milk, which contains little fat, to cream, which contains a much higher concentration of milk fat. One example would be a mixture of water and oil.
Two special classes of emulsions – microemulsions and nanoemulsions, with droplet sizes below 100 nm – appear translucent. This property is due to the fact that lightwaves are scattered by the droplets only if their sizes exceed about one-quarter of the wavelength of the incident light. Since the visible spectrum of light is composed of wavelengths between 390 and 750 nanometers (nm), if the droplet sizes in the emulsion are below about 100 nm, the light can penetrate through the emulsion without being scattered. Due to their similarity in appearance, translucent nanoemulsions and microemulsions are frequently confused. Unlike translucent nanoemulsions, which require specialized equipment to be produced, microemulsions are spontaneously formed by “solubilizing” oil molecules with a mixture of surfactants, co-surfactants, and co-solvents. The required surfactant concentration in a microemulsion is, however, several times higher than that in a translucent nanoemulsion, and significantly exceeds the concentration of the dispersed phase. Because of many undesirable side-effects caused by surfactants, their presence is disadvantageous or prohibitive in many applications. In addition, the stability of a microemulsion is often easily compromised by dilution, by heating, or by changing pH levels.
Common emulsions are inherently unstable and, thus, do not tend to form spontaneously. Energy input – through shaking, stirring, homogenizing, or exposure to power ultrasound – is needed to form an emulsion. Over time, emulsions tend to revert to the stable state of the phases comprising the emulsion. An example of this is seen in the separation of the oil and vinegar components of vinaigrette, an unstable emulsion that will quickly separate unless shaken almost continuously. There are important exceptions to this rule – microemulsions are thermodynamically stable, while translucent nanoemulsions are kinetically stable.
Whether an emulsion of oil and water turns into a "water-in-oil" emulsion or an "oil-in-water" emulsion depends on the volume fraction of both phases and the type of emulsifier (surfactant) (see Emulsifier, below) present. In general, the Bancroft rule applies. Emulsifiers and emulsifying particles tend to promote dispersion of the phase in which they do not dissolve very well. For example, proteins dissolve better in water than in oil, and so tend to form oil-in-water emulsions (that is, they promote the dispersion of oil droplets throughout a continuous phase of water).
The geometric structure of an emulsion mixture of two lyophobic liquids with a large concentration of the secondary component is fractal: Emulsion particles unavoidably form dynamic inhomogeneous structures on small length scale. The geometry of these structures is fractal. The size of elementary irregularities is governed by a universal function which depends on the volume content of the components. The fractal dimension of these irregularities is 2.5.
Emulsion stability refers to the ability of an emulsion to resist change in its properties over time. There are four types of instability in emulsions: flocculation, creaming, coalescence, and Ostwald ripening. Flocculation occurs when there is an attractive force between the droplets, so they form flocs, like bunches of grapes. Coalescence occurs when droplets bump into each other and combine to form a larger droplet, so the average droplet size increases over time. Emulsions can also undergo creaming, where the droplets rise to the top of the emulsion under the influence of buoyancy, or under the influence of the centripetal force induced when a centrifuge is used.
An appropriate "surface active agent" (or "surfactant") can increase the kinetic stability of an emulsion so that the size of the droplets does not change significantly with time. It is then said to be stable.
Monitoring physical stability
The stability of emulsions can be characterized using techniques such as light scattering, focused beam reflectance measurement, centrifugation, and rheology. Each method has advantages and disadvantages.
Accelerating methods for shelf life prediction
The kinetic process of destabilization can be rather long – up to several months, or even years for some products. Often the formulator must accelerate this process in order to test products in a reasonable time during product design. Thermal methods are the most commonly used – these consist of increasing the emulsion temperature to accelerate destabilization (if below critical temperatures for phase inversion or chemical degradation). Temperature affects not only the viscosity but also the inter-facial tension in the case of non-ionic surfactants or, on a broader scope, interactions of forces inside the system. Storing an emulsion at high temperatures not only enables the simulation of realistic conditions for a product (e.g., a tube of sunscreen emulsion in a car in the summer heat), but also accelerates destabilization processes up to 200 times.
Mechanical methods of acceleration, including vibration, centrifugation, and agitation, can also be used.
These methods are almost always empirical, without a sound scientific basis.
An emulsifier (also known as an "emulgent") is a substance that stabilizes an emulsion by increasing its kinetic stability. One class of emulsifiers is known as "surface active agents", or surfactants.
Examples of food emulsifiers are:
- Egg yolk – in which the main emulsifying agent is lecithin. In fact, lecithos is the Greek word for egg yolk.
- Mustard – where a variety of chemicals in the mucilage surrounding the seed hull act as emulsifiers
- Soy lecithin is another emulsifier and thickener
- Pickering stabilization – uses particles under certain circumstances
- Sodium phosphates
- Sodium stearoyl lactylate
- DATEM (Diacetyl Tartaric (Acid) Ester of Monoglyceride) – an emulsifier used primarily in baking
Detergents are another class of surfactants, and will physically interact with both oil and water, thus stabilizing the interface between the oil and water droplets in suspension. This principle is exploited in soap, to remove grease for the purpose of cleaning. Many different emulsifiers are used in pharmacy to prepare emulsions such as creams and lotions. Common examples include emulsifying wax, cetearyl alcohol, polysorbate 20, and ceteareth 20. Sometimes the inner phase itself can act as an emulsifier, and the result is a nanoemulsion, where the inner state disperses into "nano-size" droplets within the outer phase. A well-known example of this phenomenon, the "Ouzo effect", happens when water is poured into a strong alcoholic anise-based beverage, such as ouzo, pastis, absinthe, arak, or raki. The anisolic compounds, which are soluble in ethanol, then form nano-size droplets and emulsify within the water. The resulting color of the drink is opaque and milky white.
Mechanisms of emulsification
A number of different chemical and physical processes and mechanisms can be involved in the process of emulsification:
- Surface tension theory – according to this theory, emulsification takes place by reduction of interfacial tension between two phases
- Repulsion theory – the emulsifying agent creates a film over one phase that forms globules, which repel each other. This repulsive force causes them to remain suspended in the dispersion medium
- Viscosity modification – emulgents like acacia and tragacanth, which are hydrocolloids, as well as PEG (or polyethylene glycol), glycerine, and other polymers like CMC (carboxymethyl cellulose), all increase the viscosity of the medium, which helps create and maintain the suspension of globules of dispersed phase
Oil-in-water emulsions are common in food products:
- Crema (foam) in espresso – coffee oil in water (brewed coffee), unstable emulsion
- Mayonnaise and Hollandaise sauce – these are oil-in-water emulsions that are stabilized with egg yolk lecithin, or with other types of food additives, such as sodium stearoyl lactylate
- Homogenized milk – an emulsion of milk fat in water and milk proteins
- Vinaigrette – an emulsion of vegetable oil in vinegar. If this is prepared using only oil and vinegar (i.e., without an emulsifier), an unstable emulsion results
Water-in-oil emulsions are less common in food but still exist:
- Butter – an emulsion of water in butterfat
In pharmaceutics, hairstyling, personal hygiene, and cosmetics, emulsions are frequently used. These are usually oil and water emulsions, but which phase is dispersed and which is continuous depends in many cases on the pharmaceutical formulation. These emulsions may be called creams, ointments, liniments (balms), pastes, films, or liquids, depending mostly on their oil-to-water ratios, other additives, and their intended route of administration. The first 5 are topical dosage forms, and may be used on the surface of the skin, transdermally, ophthalmically, rectally, or vaginally. A highly liquid emulsion may also be used orally, or may be injected in some cases. Popular medications occurring in emulsion form include cod liver oil, Polysporin, cortisol cream, Canesten, and Fleet.
Microemulsions are used to deliver vaccines and kill microbes. Typical emulsions used in these techniques are nanoemulsions of soybean oil, with particles that are 400–600 nm in diameter. The process is not chemical, as with other types of antimicrobial treatments, but mechanical. The smaller the droplet the greater the surface tension and thus the greater the force required to merge with other lipids. The oil is emulsified with detergents using a high-shear mixer to stabilize the emulsion so, when they encounter the lipids in the cell membrane or envelope of bacteria or viruses, they force the lipids to merge with themselves. On a mass scale, in effect this disintegrates the membrane and kills the pathogen. The soybean oil emulsion does not harm normal human cells, or the cells of most other higher organisms, with the exceptions of sperm cells and blood cells, which are vulnerable to nanoemulsions due to the peculiarities of their membrane structures. For this reason, these nanoemulsions are not currently used intravenously (IV). The most effective application of this type of nanoemulsion is for the disinfection of surfaces. Some types of nanoemulsions have been shown to effectively destroy HIV-1 and tuberculosis pathogens on non-porous surfaces.
Emulsifying agents are effective at extinguishing fires on small, thin-layer spills of flammable liquids (Class B fires). Such agents encapsulate the fuel in a fuel-water emulsion, thereby trapping the flammable vapors in the water phase. This emulsion is achieved by applying an aqueous surfactant solution to the fuel through a high-pressure nozzle. Emulsifiers are not effective at extinguishing large fires involving bulk/deep liquid fuels, because the amount of emulsifier agent needed for extinguishment is a function of the volume of the fuel, whereas other agents such as aqueous film-forming foam (AFFF) need cover only the surface of the fuel to achieve vapor mitigation.
Emulsions are used to manufacture polymer dispersions – polymer production in an emulsion 'phase' has a number of process advantages, including prevention of coagulation of product. Products produced by such polymerisations may be used as the emulsions – products including primary components for glues and paints. Synthetic latexes (rubbers) are also produced by this process.
Thermodynamics: Enthalpy
Enthalpy (H) - The sum of the internal energy of the system plus the product of the pressure of the gas in the system and its volume:
H = E + PV
where E is the internal energy, P is the pressure, and V is the volume.
After a series of rearrangements, and if pressure is kept constant, we can arrive at the following equation:
ΔH = q (at constant pressure)
where ΔH is Hfinal minus Hinitial and q is the heat absorbed or released by the system.
Enthalpy of Reaction (ΔH)
- The difference between the sum of the enthalpies of the products and the sum of the enthalpies of the reactants:
ΔH = Σ n·H(products) − Σ m·H(reactants)
In the above reaction, n and m are the coefficients of the products and the reactants in the balanced equation.
Exothermic - Reaction in which a system releases heat to its surroundings.
ΔH is negative (ΔH < 0)
Ea is the activation energy, which is discussed in a later section.
Endothermic - Reaction in which a system absorbs heat from its surroundings.
ΔH is positive (ΔH > 0)
Let's distinguish various phase changes of water as either endothermic or exothermic.
1) H2O(l) → H2O(s): This reaction is EXOTHERMIC because heat is released when liquid water freezes to form ice.
2) H2O(l) → H2O(g): This reaction is ENDOTHERMIC because there must be an input of energy in order for water molecules in the liquid phase to have enough energy to escape into the gas phase.
3) H2O(s) → H2O(l): This reaction is ENDOTHERMIC because there must be an input of energy to break the bonds holding water molecules together as ice.
Standard-State Enthalpy of Reaction (ΔH°)
Three factors can affect the enthalpy of reaction:
- The concentrations of the reactants and the products
- The temperature of the system
- The partial pressures of the gases involved (if any)
The effects of changes in these factors can be shown relative to the standard-state enthalpy of reaction (ΔH°), which is the change in the enthalpy during a chemical reaction that begins and ends under standard-state conditions.
- The partial pressures of any gases involved in the reaction are 0.1 MPa.
- The concentrations of all aqueous solutions are 1 M.
Measurements are also generally taken at a temperature of 25 °C (298 K).
Hess's Law - Germain Henri Hess (1840): the enthalpy change of an overall reaction is the same whether it occurs in one step or in a series of steps.
1) Nitrogen and oxygen gas combine to form nitrogen dioxide according to the following reaction: N2(g) + 2 O2(g) → 2 NO2(g)
Calculate the change in enthalpy for the above overall reaction, given:
This problem is very simple. If we simply add up the two reactions keeping all the reactants on the left and all the products on the right, we end up with the overall equation that we are given. Since we didn't make any changes to the individual reactions, we don't make any changes to ΔH. If we add up the ΔH values as well, we find the change in enthalpy:
Let's try one that is a bit more complicated.
2) From the following enthalpy changes:
calculate the value of ΔH for the reaction:
If we look at the final reaction, we see that we need 2 S atoms on the reactants side. The only reaction with S atoms is the third reaction, and in order to get 2 S atoms, we need to multiply the whole reaction by a factor of 2. The next reactant in the final reaction is 2 OF2 molecules. The only reaction with an OF2 molecule is the first reaction, and in order to get 2 OF2 molecules, we need to multiply the whole reaction by a factor of 2. On the products side of the final reaction, there is 1 SF4 molecule, and the only possible source of the SF4 molecule is the second reaction. However, the SF4 molecule is on the reactants side, which is not the side we need it on. So we'll have to FLIP the second reaction to get the SF4 molecule where we need it.
Now if we total up the reactions, we should end up with the given overall reaction:
Remember that everything we did to each reaction, we have to do to each respective ΔH. So we have to multiply the first and third ΔH values by a factor of 2. We also have to reverse the sign of the second ΔH. When we add these up we get:
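As a quick arithmetic check, this bookkeeping can be scripted in R. The ΔH values below are placeholders (the values given in the original problem are not reproduced here); only the multiply-by-2 and flip logic mirrors the worked example.
# Hess's law bookkeeping in R (placeholder step enthalpies, kJ)
dH1 <- -50    # hypothetical ΔH for the first given reaction
dH2 <- -120   # hypothetical ΔH for the second given reaction
dH3 <- -300   # hypothetical ΔH for the third given reaction
# Multiply reactions 1 and 3 by 2, flip (reverse) reaction 2
dH_overall <- 2 * dH1 + (-1) * dH2 + 2 * dH3
dH_overall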
Enthalpy of formation (ΔHf)
- The enthalpy associated with the reaction that forms a compound from its elements in their most thermodynamically stable states. These are measured on a relative scale where zero is the enthalpy of formation of the elements in their most thermodynamically stable states.
The standard-state enthalpy of reaction is equal to the sum of the enthalpies of formation of the products minus the sum of the enthalpies of formation of the reactants: ΔH° = Σ ΔHf°(products) − Σ ΔHf°(reactants)
Sample enthalpy of formation calculation
Calculate the heat given off when one mole of B5H9 reacts with excess oxygen according to the following reaction:
In the reaction above, 2 moles of B5H9 react with 12 moles of O2 to yield 5 moles of B2O3 and 9 moles of H2O. We find ΔH° by subtracting the sum of the enthalpies of formation of the reactants from the sum of the enthalpies of formation of the products:
- NOTE: The heat of formation of O2 is zero because this is the form of the oxygen in its most thermodynamically stable state.
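A minimal R sketch of this calculation, using commonly tabulated standard enthalpies of formation; the exact values from the original table are not shown above, so treat the numbers below as assumptions.
# Assumed standard enthalpies of formation, kJ/mol
dHf_B5H9 <- 73.2
dHf_O2   <- 0         # element in its most thermodynamically stable state
dHf_B2O3 <- -1273.5
dHf_H2O  <- -285.8    # liquid water
# 2 B5H9 + 12 O2 -> 5 B2O3 + 9 H2O
dH_rxn <- (5 * dHf_B2O3 + 9 * dHf_H2O) - (2 * dHf_B5H9 + 12 * dHf_O2)
dH_rxn        # enthalpy change for the reaction as written (2 mol B5H9)
dH_rxn / 2    # heat given off per mole of B5H9 (roughly -4500 kJ with these values)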
Bond Energy
- The energy required to break a bond. Bond energy is always a positive number because the breaking of a bond requires an input of energy (endothermic). When a bond is formed, an amount of energy equal to the bond energy is released.
The bonds broken are the reactant bonds. The bonds formed are the product bonds. The enthalpy change is ΔH = Σ(bond energies of bonds broken) − Σ(bond energies of bonds formed).
Find ΔH for the following reaction, 2 H2 + O2 → 2 H2O, given the following bond energies:
We have to figure out which bonds are broken and which bonds are formed.
2 H-H bonds are broken.
1 O=O bond is broken
2 O-H bonds are formed per water molecule, and there are 2 water molecules formed, therefore 4 O-H bonds are formed
Now we can substitute the values given into the equation:
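A short R sketch of the substitution, using typical average bond energies; the specific values supplied in the original problem are not reproduced, so these numbers are assumptions.
# Assumed average bond energies, kJ/mol
E_HH <- 436
E_OO <- 498   # O=O double bond
E_OH <- 463
# 2 H2 + O2 -> 2 H2O
bonds_broken <- 2 * E_HH + 1 * E_OO   # reactant bonds
bonds_formed <- 4 * E_OH              # 2 O-H bonds per water x 2 water molecules
dH <- bonds_broken - bonds_formed
dH   # about -482 kJ with these values, i.e. the reaction is exothermic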
Bond-dissociation enthalpy - The energy needed to break an X-Y bond to give X and Y atoms in the gas phase, such as in the following reaction: X-Y(g) → X(g) + Y(g)
Vectors are fundamental data structures in R, allowing you to store elements of the same data type efficiently. In this tutorial, we’ll explore various ways to create vectors in R using functions such as c(), vector(), and more, building a comprehensive understanding of this essential concept in R programming.
Create Vector in R using c()
The c() function in R is a versatile tool for creating vectors. Its syntax is straightforward:
# Syntax of c() function
c(...)
Example: Creating Vectors
Let’s start with some practical examples of creating vectors using the c() function:
# Create Numeric Vector
id <- c(10, 11, 12, 13)
# Create Character Vector
name <- c('sai', 'ram', 'deepika', 'sahithi')
# Create Date Vector
dob <- as.Date(c('1990-10-02', '1981-3-24', '1987-6-14', '1985-8-16'))
In this example, we’ve created three different vectors: a numeric vector id, a character vector name, and a date vector dob. Each vector contains elements of the same data type.
Create Named Vector
You can assign names to values while creating a vector, resulting in a named vector. Here’s an example:
# Create Named Vector
x <- c(C1 = 'A', C2 = 'B', C3 = 'C')
print(x)
In this case, we’ve created a named vector x with three elements, each associated with a unique name (C1, C2, C3).
Create Vector from List
Sometimes, you may have a list and want to convert it into a vector. You can achieve this using the unlist() function:
# Create Vector from List
li <- list('A', 'B', 'C')
v <- unlist(li)
print(v)
The unlist() function takes a list as an argument and converts it into a vector. It’s a handy way to work with lists in R.
Vector of Zeros
To create a vector filled with zeros, you can use the integer() function:
# Create Vector of Zeros
v <- integer(6)
print(v)
In this example, we’ve created a vector v with six elements, all initialized to zero.
Vector of Specified Length
You can create a vector of a specified length with default values. For instance, to create a character vector filled with empty strings, you can use the character() function:
# Create Vector of Specified Length
v <- character(5)
print(v)
Here, we’ve created a character vector v with five empty strings, each representing an element.
In this tutorial, we’ve explored various methods to create vectors in R. We’ve covered the c() function, named vectors, converting lists to vectors, creating vectors of zeros, and vectors of specified lengths. Vectors are essential in R programming and serve as building blocks for more complex data structures and operations.
By understanding these fundamental concepts, you’ll be well-equipped to work with vectors and manipulate data effectively in R. Keep practicing and experimenting with different vector creation methods to enhance your R programming skills.
What are the steps to create a numeric vector in R?
To create a numeric vector in R, follow these steps:
- Open R or RStudio.
- Use the c() function to combine numeric values into a vector. For example:
my_vector <- c(1, 2, 3, 4, 5)
- You now have a numeric vector named my_vector.
How do I use the vector() function to create a vector in R?
The vector() function in R allows you to create a vector of a specific data type and length. Here’s how to use it:
- Call vector(mode = "data_type", length = N), where "data_type" is the type of vector you want (e.g., "numeric", "character") and N is the desired length.
- For example, my_vector <- vector(mode = "numeric", length = 5) creates a numeric vector of length 5.
Are there any best practices for creating vectors in R?
Yes, there are some best practices for creating vectors in R:
- Use the c() function for combining values into vectors.
- Ensure that all elements in a vector have the same data type (mixed types are silently coerced; see the sketch after this list).
- Assign meaningful variable names to your vectors.
- Comment your code to explain the purpose of the vector.
- Be consistent with your coding style and indentation.
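To see why a consistent data type matters, here is a minimal sketch (the variable names are just illustrative) of how R silently coerces mixed elements to a common type:
# Mixing numbers and text: everything is coerced to character
mixed <- c(1, 2, "three")
class(mixed)   # "character"
mixed          # "1" "2" "three"
# Mixing numbers and logicals: logicals are coerced to numeric
nums <- c(1, 2, TRUE)
class(nums)    # "numeric"
nums           # 1 2 1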
Can I assign names to values in a vector in R?
Yes, you can assign names to values in a vector in R using the names() function. Here’s how:
- Create a vector, e.g., my_vector <- c(1, 2, 3).
- Use the names() function to assign names: names(my_vector) <- c("value1", "value2", "value3").
Now, you can access elements in the vector by their names, e.g., my_vector["value1"] will return 1.
How do I create a vector of specified length in R?
To create a vector of a specified length in R, you can use the vector() function as mentioned earlier. Specify the desired length using the length parameter, and R will create an empty vector of that length. You can then fill it with values as needed.
my_vector <- vector(mode = "numeric", length = 10) creates an empty numeric vector with a length of 10, which you can populate with values later.
Social Education 58(4), 1994, pp. 219-225
National Council for the Social Studies
Circumterral vs. Circumthalassic
The region is best viewed as circumterral, i.e., rimmed by interpenetrating inland seas and waterways along whose coasts population, commerce, and communication concentrate. Circumterral is the obverse of circumthalassic, a term used to describe how the lands surrounding the Mediterranean Sea are tied together in a geographical unity of culturally diverse states based on their proximity to a central water highway (see Figure 1). The circumthalassic idea describes a single, interior sea surrounded by a power core or cores located on the adjoining land. The Roman Empire can best be described as circumthalassic, as was Mussolini's dream of mare nostrum. The circumterral concept inverts this relationship to describe the Middle East as depicted in Figures 2 and 3. We make only one additional assumption: that the several interpenetrating seas of the region constitute one sea, in the same way that Mahan viewed the world as having a single interconnected ocean (Mahan 1890, 1957, 22).
Thus in the Mediterranean, land points surround the sea, whereas in the Middle East, interconnecting seas and their land points envelope the land. Instead of the sea forming the center, the sea highway and its ports that rim the region's land parts serve as the unifying outer ring. The map further reveals that the concentric ring of seaports at the interface of land and sea form the regional core. This ring of seaports provides the avenue for dynamic interaction along the connecting sea highway among the small coastal states and the larger, more isolated land units with their interior capitals.
In the circumthalassic setting, because of the physical proximity of the land points, it is possible for one power to gain dominance by overwhelming the other points by cutting off their connections with one another through control of the interior sea lanes. Even if Rome was the only power to unify the Mediterranean in its entirety, the Greeks, Carthaginians, Arabs or Moors, French, and Italians did gain control of large parts of the sea as they sought to dominate the entire region.
In the circumterral setting, single power dominance of the seas is made more difficult by the weight of the land masses that project into the seas. Thus, historically, the rivalry between the Nilotic and Mesopotamian, the Levantine Roman/Byzantine and Persian, and the modern British and French empires was so evenly balanced that one power could not dominate the entire region. Even the Ottomans did not hold effective sway over Egypt and Arabia because of the geographical disadvantage of their inland Anatolian base.
Despite these intrusive land masses, the many opportunities for exchange along the sea highway rim and the ocean beyond promote unity among the Middle East's separate power cores. What Ellen Churchill Semple said about the Mediterranean also applies to the Middle East:
The Mediterranean is a great gulf of the Atlantic cutting back into the land mass of the Eastern Hemisphere. . . . This is the mare internum, enclosed by three continents which it helps to divide. . . . It gives Asia an Atlantic seaboard. . . . The line of the Aegean, Marmara, Black Sea, and Azov divides Asia and Europe in a physical sense, but unites them in a historical sense. (Semple 1931, 4-5)
Not only does the Middle East have an Atlantic coast via the Mediterranean but, as Semple pointed out, its Red Sea, Persian Gulf, and Arabian Sea are part of a global belt of seas that include the Indian Ocean and the Caribbean, thereby facilitating contact between Europe and Asia in particular and the maritime world in general (Semple 1931, 1971, 13-14). The Middle East thus sits astride the world's most vital water crossroads and has served historically as a major cultural hearth. The geography of the Middle East also sets the stage for political conflict, described by Derwent Whittlesey as an "unending struggle for control of this crossroads of landpower and seapower where dry, barren Africa and West Asia are slit by a wet scratch linking the humid, productive Orient with Europe" (1959, 268).
Vital Role of Water Transportation
Traditional geographic descriptions of the Middle East fail to take into account the vital role played by water. To describe the region merely as possessing interpenetrating seas (Cressey 1960, 23) ignores the dynamics of the sea highway and the pressures for interaction among land points. In the traditional landpower point of view, the Middle East's power cores are states with their backs to the sea that are connected by land routes. These land routes through deserts, however, because of difficulty in traversing them, offer little proximity, and the great distances over the land bridges strengthen the remoteness of the land points from one another. The landpower cores do not face each other but instead front the sea highway that helps connect them, often in circuitous fashion. Egypt and Israel, for example, not only alternately share or divide a contiguous Mediterranean coast, but they also are connected by outlets on the Red Sea-Suez-Aqaba waterway. The sea highway crescent from Suez to the Persian Gulf-Arabian Sea is shared further with Saudi Arabia, Sudan, Yemen, Oman, the United Arab Emirates, Qatar, Bahrain, Kuwait, Iraq, and Iran. The Eastern Mediterranean is shared by Turkey, Cyprus, Syria, Lebanon, and Israel. Therefore, while the landpower view leaves the Middle East essentially landlocked, the seapower view stresses the opportunities for dynamic spatial interaction.
It is important to distinguish this circumterral view of the Middle East from a simple center-periphery concept. In the former, the enveloping sea offers the prime medium of communication between relatively equal power centers; in the latter, the center is described as dominating outlying areas.
This physical and human geographical configuration has important implications for the interests of the region's major players and their economic and political connections with the outside world. Although the Middle East remains a region with six enmeshed power centers (Egypt, Iraq, Iran, Israel, Syria, and Turkey) competing in a highly volatile environment, the circumterral nature of the region accentuates the multipower nature of these power relationships. The sea highway strengthens centrifugal forces among the six by encouraging links to the outside maritime world. It also reinforces the incipient and countervailing centripetal forces that will contribute to stronger regional ties. These contradictory forces are intertwined, operate simultaneously, and, depending on the historical period, may be imbalanced with one directional force having greater weight than the other. In the aftermath of the Persian Gulf War, we may be entering an era when the centripetal forces are in ascendance in the Middle East. The external powers are less competitive with one another, enabling the United States to take the lead in encouraging a dynamic equilibrium among the region's several power cores that emerges in part from their proximity over the sea highway.
The circumterral conception builds upon traditional geographic descriptions of the Middle East. These correctly described the region as a world "crossroads" at the juncture of three continents and as the "land of the six seas" (the Mediterranean, Aegean, Black, Caspian, Red, and Arabian seas) with several vital connecting gulfs and straits. As Beaumont et al. (1988, 7) have pointed out, it is possible to conceive of the entire Middle East as composed of four great isthmuses lying between the arms of the ocean and the inland Caspian Sea. The most famous is the Isthmus of Suez now cut by the Suez Canal. Other land bridges include the Syrian Desert connecting the Mediterranean and the Arab/Persian Gulf via the Syrian Saddle, the Euphrates Valley, and the Shatt al Arab; the mountains of Armenia and Azerbaijan standing between the Black Sea port of Trabzon and the Iranian and Azerbaijani ports on the Caspian Sea; and the Iranian Plateau separating the Caspian and Arabian seas.
Our view of the Middle East is further enhanced by focusing on the waterways, either independent of or in conjunction with the land bridges. The circumterral concept describes a region surrounded by water and concentrated around inland seas and river valleys that, with the exceptions of the "King's Highway" land route from the Nile to Damascus via Palestine and the Syrian Saddle route from the Gulf of Iskenderun to Mesopotamia, serve as major avenues of movement and settlement.1 As disjointed as several of these maritime waterways may be, they can provide greater unity to the region's scattered cores and isolated population clusters than any other physical feature. Arabia has little relevance as a center of the land region, and none of the other major regional capitals, save Cairo, is easily accessible from all major subregions (i.e., Ankara, Tehran, Baghdad, Damascus, Amman, Jerusalem, and Khartoum). Rather the interpenetrating seas perform this centering function.
This circumterral nature of the region operates as a permanent locational quality with important implications. Each of the several cores has internal contacts to the region and external contacts to the world that result from the highly complex geography of interrupted seas and short land bridges. If the Middle East were clustered around a single sea, all powers would compete intensely in a closed environment creating flux and disequilibrium, or conversely a rigid dominance-subordination system. In addition to having both an internal and external reach, each regional power has its own unique orientation. Turkey's location on the Aegean and Black seas, for example, encourages a European perspective and a secondary concern for Mesopotamia and the Caucasus; Israel, Egypt, and Lebanon share a common Mediterranean setting; Iran's maritime orientation, on the other hand, faces south via the Arabian Sea toward either the Indian Ocean or the Red Sea and Suez Canal, whereas in the north Iran's secondary interests lie landward into Central Asia and not over the Caspian into Russia and Kazakhstan.2 Thus the levels of interaction at the local level are highly complex and serve to reinforce the existing multipolarity in the region's political relationships.
This maritime orientation further implies a sharper definition of the Middle East as a region. It is composed only of states that have an outlet on one of the six interpenetrating seas that form the outer or circumferential physical regional core and, in addition, have proximate access to a second sea either directly or through a neighbor. Pure Mediterranean states in southern Europe and the Maghreb (northwesternmost Africa-Morocco, Algeria, Tunisia) as well as continental Afghanistan are thereby excluded. Technically Greece and possibly Pakistan fit the definition if only their physical location is taken into account, but they clearly belong to other realms. Sudan and Djibouti are included because of the close traditional ties between the two coasts of the Red Sea; once conditions are stabilized in Ethiopia, an independent Eritrea must be added to the region. It is within this relatively compact maritime region that we find most of the major sources and transportation networks of the region's primary industry, petroleum.
The eighteen states of the Middle East thus defined, stretching from Turkey, Cyprus, and Egypt in the west to Iran in the east and the Arabian Peninsula in the south, contain 10,825,890 square kilometers (4,179,764 square miles), approximately the same size as the United States or China (UN 1992a, 65-68). The sea penetrates deeply in the region: four states (Egypt, Israel, Saudi Arabia, and Iran) face two seas, and Turkey fronts on three. Several narrow gulfs and strategic straits connect vital waterways and core subregions. The Strait of Hormuz links the Arab/Persian Gulf and the Arab-Iranian oil basin with the Gulf of Oman and the Arabian Sea; Bab el Mandeb is the gateway from the Gulf of Aden through the Red Sea to Suez; the Suez Canal in turn carries maritime traffic from the Mediterranean to the Red Sea and Indian Ocean; the Gulf of Aqaba offers Israel, Jordan (as well as Iraq via Jordan), and Egypt's eastern Sinai an outlet to the Red Sea; the Turkish Straits tie the Black Sea to the Aegean; and the Volga-Don Canal provides another human-made waterway across the Caucasian isthmus into the Sea of Azov, thus linking the Black and Caspian seas. Moreover, the basins of the Nile and Tigris-Euphrates form riverine arteries that determine the location of settlement and the structure of communications in Egypt and Iraq.
Physical Factors and the Maritime Perspective
A variety of other factors support this maritime perspective. All eighteen states have at least one significant seaport. Only Iraq has significant constraints on an outlet to one of the region's interpenetrating seas, since its main harbor at Basra is limited by an inland location. Iraq's specialized port at Umm Qasr is hemmed in by Kuwaiti-owned Bubiyan and Warbah islands and has been cut, in part, by the new United Nations-imposed boundary with Kuwait. Moreover, most of the region's land routes are short and often converge on ports that terminate at the head of narrow gulfs (Beaumont et al. 1988, 323). A solid chain of mountains across the northern tier of Turkey and Iran and the eastern border of Iran limit the land routes. Although the coastal highlands of Syria, Lebanon, and Israel present less formidable obstacles, the eastern Mediterranean coast invites contact with Europe and the Red and Arabian seas, for centuries the transit route for a thriving slave trade, offering opportunities for commerce with Africa and Asia.
Deserts abound in the region's large interior expanses and serve to isolate the multiple power centers on the continental periphery along the coasts and inland waterways. These include the Libyan, Arabian, and Nubian deserts in Egypt and Sudan; the Syrian, Negev, and Sinai deserts south of the fertile crescent; the Nafud and Ar Rub' al Khali in Saudi Arabia; and the Dasht-e Kavir and Dasht-e-Lut in Iran.
Middle East and European Maritime Orientations
Indicative of the maritime orientation of the Middle East is its extensive coastline measured in both absolute and relative terms. The Mediterranean, Black, and Red seas form a 5,472 kilometer (3,400 mile) coastline from Trabzon to Aden alone. Excluding irregularities, the entire coastline of Southwest Asia from Turkey to Iran stretches 12,876 kilometers (8,000 miles), thus providing a ratio of one kilometer of coast for every 483 square kilometers of land (Drysdale and Blake 1985, 111). Adding Cyprus and Egypt (but not Sudan with its nearly 2.6 million square kilometers of land [967,500 square miles] and only 718 kilometers of coast), the ratio drops to one kilometer (about 3/5 of a mile) of coast for every 418 square kilometers (161 square miles) of area. Including Sudan, the ratio is still only one kilometer of coast for every 547 square kilometers (211 square miles). Continental realms have much higher ratios. China, for example, has one kilometer of coastline for every 1,479 square kilometers (571 square miles) of land. The ratio for the Middle East compares favorably with some of the world's maritime regions; there is one kilometer of coast for every 172 square kilometers (66.4 square miles) of Europe and for every 470 square kilometers (181 square miles) of the United States. To be sure, these coast-to-land ratios say nothing about the quality of seaports, accessibility to them, and the paucity of good natural harbors along extensive parts of the coast. However, the universal presence of warm water and the ease of constructing artificial ports along the emerged shoreline emphasize the region's maritime quality. The very high percentage of population concentrated along the coasts and waterways further reflects the importance of the coastline.
A comparison with Europe, a maritime region par excellence and another multipolar geopolitical region, also supports a view of the Middle East as a region with a uniquely circumterral core. The Middle East's position at the crossroads of three continents across the vital sea lanes between Europe and Asia corresponds to Europe's unique position of the western extremity of the Eurasian land mass. Since "water interdigitates with the land as it does nowhere else," Europe's maritime configuration offers maximum opportunity for global contact over the ocean highway (de Blij and Muller 1992, 63). Much of southern and western Europe is composed of peninsulas and islands, ensuring the circulation of trade and ideas.
This pattern is partially true of the Middle East as well, especially if we add the several isthmuses to the peninsulas and islands. The seas, gulfs, and straits in both regions are generally narrow, whereas offshore waters are rarely the endless expanses found off the shores of East and South Asia or the Americas. This proximity to land and sea invites maritime trade and exploration since the coasts beyond the waterways are either visible or known. During the Gulf War, the unified allied sea and air fleet, operating on the sea highway from the Eastern Mediterranean, the Red Sea, the Arabian Sea, and the Gulf itself, and augmented by land-based aircraft from Turkey as well as more distant NATO bases, penetrated Iraqi airspace from four bodies of water via three cardinal points of the compass.
The circumterral nature of the Middle East thus creates a complementary balance of forces that reinforces both internal regional unity and the external maritime orientation to the outside world. The interpenetrating seas provide a set of openings, not a seal, and Middle Eastern states can usually communicate by both land and sea. This feature, when combined with the traditional settlement pattern of several nodes of power and population centers separated by large empty spaces, strengthens multipolarity. All regional powers, including Turkey, have military and economic interests in their neighbors, adding to the complexity of building a regional balance of power.
Locally, these competing interests can create flashpoints, but since geopolitical equilibrium is dynamic rather than static, these local conflicts can also provide certain safety valves for the regional system as a whole. Israel and Syria, for example, at loggerheads over Lebanon and the Golan, did act with restraint in Persian/Arab Gulf conflicts. Iraq could not intervene in Kuwait without challenging the interests of other regional powers such as Iran and Egypt in the Persian/Arab Gulf-Red Sea transitway. Turkey looks across its narrow seas to Europe for trade and emigration and, together with Syria and Iraq, has vital interests in the Euphrates. Egypt concentrates on the region east of the Gulf of Suez and the Red Sea and north on the Nile instead of continental Africa. The world maritime powers, the United States, the European Community, and Japan, have a vital stake in the Persian/Arab Gulf. These relationships reflect the growing unity of the region as a whole.
This interaction between the interpenetrating seas and the multipolar cores produces a maritime orientation in the Middle East as seen in the distribution and movement of people and goods throughout the region. A defining characteristic of the region's demography is the remarkable variation in population densities. The large semi-arid interiors are scarcely inhabited, while the bulk of the population clusters along either the coasts and adjoining mountain ranges or the large river valleys that penetrate the interior. The Middle East is among the least densely populated regions of the world with 27 persons per square kilometer (70 persons per square mile) in 1989 (United Nations 1992a, 65-68). Yet the Nile Delta has a population density of over 1,000 people per square kilometer (2590 people per square mile) (de Blij and Muller 1992, 363); parts of the Delta average 2,000 per square kilometer (5,180 people per square mile) (Cambridge Atlas 1987).
The nature of the economy is also clearly maritime. Petroleum exports, which constitute 70 percent of the Asian Middle East's exports, are shipped via the region's interpenetrating seas (United Nations 1992a, 875). The Middle East is thus trade-dependent with regional pipeline networks, marine terminals, and oil tanker routes corresponding to the exploitive railroads of a nineteenth-century colony supporting this overseas trade. In addition, extra-regional trade normally represents approximately 90 percent of all imports and exports in value. Jordan and Turkey have the highest intra-regional trade, more than 40 percent of all trade. Intra-regional trade is also significant (approaching 15-20 percent) for such Gulf oil states as Kuwait, Oman, and when allowed, Iraq. Still, all states trade much more with the outside world than within the region. Moreover, intra-regional trade figures for oil countries often include petroleum transshipped through the region (by Iraq, for example, when it was able to make its shipments, through pipelines in Turkey) for destinations in Europe and the maritime world.
The current relationship of the region with the developed maritime world is also one of mutual dependence. In 1989, the Group of Seven developed countries (the United States, Canada, Japan, Germany, France, Italy, and Britain) consumed 32.7 million of the 63.5 million barrels of daily world oil production (Standard and Poor's 1990, 17). The Middle East currently supplies 40 percent of world oil production (Standard and Poor's 1992, 20). In 1990, the United States imported $64 billion in crude and refined petroleum, with approximately 25 percent of the total coming from Saudi Arabia alone (United Nations 1992b, 950).
The maritime oil transportation network has furthered the development of intra-regional forces that strengthen regional bonds. Although largely closed as a result of current political tensions, an extensive network of oil pipelines exists that was built to short-circuit long maritime routes or avoid narrow gulfs and straits threatened by a hostile power (Cohen 1992, 6-7). This network provides a motive for cooperation between Iraq and Saudi Arabia, Syria, and Turkey; between Saudi Arabia and Jordan, Syria, Lebanon, Egypt, and Israel; and between Iran and both Turkey and the Islamic republics of the former Soviet Union and Russia.
The pipelines reinforce the rest of the transportation system, including the extensive rail network built with the help of colonial powers that links Cairo, Damascus, Baghdad, and Teheran with Turkey's extensive system. The influx of huge amounts of oil revenues has also permitted the construction of elaborate road and air networks connecting the major power centers of the region. The oil pipelines also serve as a model for the future development of water pipelines throughout the water-starved region from surplus areas such as the Euphrates catch basin in the Turkish highlands, the Yarmuk River in Jordan, the Litani River in Lebanon, and the Nile to deficit areas in Jordan and Israel, the Arab/Persian Gulf, and northern Syria and Iraq (Cohen 1992, 6). When viewed with the extensive movements of intra-regional migration and capital flows, these elements build an impressive list of centripetal forces that could support future regional equilibrium and stability.
Projecting traditional geopolitical analysis against the backdrop of a circumterral Middle East further clarifies the emerging position of the region. Sir Halford Mackinder early on viewed "Arabia" as the sole gap in a "broad curving belt" of desert and wilderness separating Europe, Asia, and Africa and "inaccessible to seafaring people, except by the three Arabian waterways" (Mackinder 1919, 77). He viewed the Sahara rather than the Mediterranean as the southern boundary of Europe and saw the geographic key to Arabia in its waterways:
What, however, most distinguishes Arabia both from the Heartland and the Sahara is the fact that it is transversed by three great waterways in connection with the ocean-the Nile, the Red Sea, and the Euphrates and the Persian Gulf. None of these three ways, however, affords naturally a complete passage across the arid belt. (Mackinder 1919, 76)
Today it is possible to expand upon Mackinder and view these three interrupted waterways as interconnected through the construction of the Suez Canal and the waterways as including the several coasts of Turkey, the eastern Mediterranean, Egypt, Arabia, and Iran.3
Although in 1919, Mackinder viewed Arabia as dominated by the "horsemen and camelmen" of the steppe of the continental heartland, he could not have anticipated that the discovery of oil would transform desert horsemen into maritime merchants and industrialists creating new coastal landscapes in the Persian/Arab Gulf and modernizing the Red Sea coast. His landmark 1904 article, however, still recognized the maritime nature of what he then termed the "Nearer East":
In some degree [it] partakes of the characteristics both of the marginal belt and of the central area of Euro-Asia. It is mainly devoid of forest, is patched with desert, and is therefore suitable for the operations of the nomad. Dominantly, however, it is marginal (i.e., maritime), for sea-gulfs and oceanic rivers lay it open to sea power and permit of the exercise of such power from it. (Mackinder 1904, 431)
Subsequent analyses gave greater weight to the maritime connections of the Middle East than Mackinder accorded it. James Fairgrieve (1927, 329-330) perceived the region as a "crush zone" between overseas European empires and remnant continental states. Nicholas Spykman (1944, 41) argued that the Suez Canal represented the penetration of sea power through the land isthmus between Europe and Asia and thus permanently incorporated the Middle East into the maritime world. The result was a rimland of amphibious states stretching from Europe to the monsoon coasts of Asia via the interpenetrating seas of the Middle East. This rimland served as a "vast buffer zone of conflict between sea power and land power" as well as a global maritime highway formed by a string of marginal and mediterranean seas connecting the world's two great ocean belts, the Atlantic and Pacific. Thus, the Middle East has long been viewed as an integral part of the maritime world even before the emergence of the region's petroleum industry and the world's dependence on it.
Saul Cohen (1973) later described the Middle East as a "shatterbelt" caught between the two superpowers, one continental and the other maritime. Superpower competition was profoundly disturbing to the region as emerging regional power centers (except Israel) frequently switched allegiances with the two outside powers and routinely broke agreements with each other. The development of the Middle East's vast oil reserves further increased the strategic interest of the maritime realm in the region and, in turn, increased the region's connections with the developed world.
Today, with the falling away of the continental former Soviet superpower, that connection to the maritime realm is even stronger and serves as a stabilizing force in a region yet to evolve out of its shatterbelt status. With ambitious regional states no longer able to play the two superpowers off against each other and with six regional powers composing such a complex constellation of forces that none can act freely, the fragmented region has some prospects for a dynamic power equilibrium. Just as the region's physical geography and political relationships are complicated, so the new system of power will be complicated, involving a hierarchy of relationships between the six regional cores and the external maritime powers, especially the United States and maritime Europe. At some future point, it may be possible to create a community of states despite the great ethnic, religious, political, and economic diversity in the region.
As Cohen (1992, 9) has pointed out, the Middle East's power structure is based on multiple power cores in rough balance with "little comparative advantage to those who would seek gain through war." He describes this balance through a seesaw framework with Turkey at the fulcrum (see Figure 4). These relationships have not yet developed into a classical balance of power system where states mutually recognize the territoriality of all other powers. Yet the region's connections to the maritime world have been strengthened most recently by the Gulf War and by the remarkable growth of a complex network of relationships because of the development of the region's oil reserves over the last three decades. With a continuing vital interest in the Middle East even after the end of the Cold War, the maritime world "sets limits on the ambitions of regional overlords" and has an important role to play in reinforcing a dynamic but stable equilibrium (Cohen 1992, 9).
In forging a regional peace, we need to pay particular attention to the circumterral core of the Middle East as it relates to both the network of connections between the region and the maritime world and those between the scattered clusters of power cores within the region. Each isolated power center will have its own particular set of interests determined by its own unique location on the interpenetrating seas and waterways of the circumterral core.
Egypt, for example, has for the last two decades opted for close ties with the United States, the leading maritime power, as a century earlier its ties were to England and France and in classical times to Greece, Rome, and finally Byzantium. Since the Gulf War, Egypt has been building close financial, political, and emigration ties with Saudi Arabia. Syria, somewhat belatedly, is turning westward to the Mediterranean, building closer ties with the maritime world and consolidating its hold on Lebanon. Turkey, located on three of the region's seas, faces Europe and cultivates its ties with NATO and the Western world. Moreover, it is competing with Iran for influence over the newly created Central Asian sovereign states. Anatolia also serves as a potentially vital transshipment point westward for oil from Iraq and Iran and possesses the most important source of untapped water reserves in the region. Iraq continues as a significant regional power because of its great oil wealth and its battered but not completely shattered military machine. However, it continues to suffer from the geographical liability that limits its access to the region's maritime highway.
Highly dependent on land-based pipelines for exporting oil, the Gulf War demonstrated Iraq's vulnerability to blockade. Iran, the one Middle Eastern country that combines huge oil reserves with a large population, has the potential to become the most influential of the six power clusters. Iran's maritime location is also less favorable than the other countries because it has no direct outlet to the Mediterranean and the west and borders a northern sea that leads to a relatively empty continental heartland. The last power cluster, Israel, must remain a regional power in order to survive. Situated on the Mediterranean and with access to the Red Sea, Israel can sustain close economic, military, and political ties with the maritime world and especially the United States but increasingly with the European Community. Thus, the entire network of intra- and inter-regional connections can be viewed as a stabilizing influence to promote long-term peace in the region.
1 Historical desert-oases routes also existed such as the ones that ran through Palmyra (Syria), Petra (Jordan), and Bosra. The modern oases are trans-desert pipeline pumping stations or truck stops.
2 The notion of the Caspian Sea as a dead end is a relative one. Only a single circuitous railroad connection exists through the troubled Caucasus to carry Iran's overland trade to Russia. The Caspian, however, offers a direct water route via Russia's inland waterway system to the Volga industrial region and to the Black Sea via the Volga-Don Canal. Thus a physical route for future Russo-Iranian trade exists once both countries have developed beyond the stage of exporting mainly primary products, especially oil and natural gas.
3 Mackinder's comment about the "persistent north winds of the trade-wind current" (1904, 77) blowing down the Red Sea and inhibiting sailing ships from using this route is long out of date. Strong westerlies called the subtropical jet stream exist in the upper atmosphere, but light and variable winds created by the subtropical high pressure system occur at the surface. Strong spring and fall winds create large dust storms, but they last only a few days. In the winter, the passage of depressions through the regions can create occasional steep pressure gradients (Beaumont et al. 1988, 51-55).
Beaumont, Peter, Gerald H. Blake, and J. Malcolm Wagstaff. The Middle East: A Geographic Study. 2nd ed. New York: Wiley, 1988.
Cambridge Atlas of the Middle East and North Africa. New York: Cambridge University Press, 1987.
Cohen, Saul B. Geography and Politics in a World Divided. 2nd ed. New York: Oxford University Press, 1973.
-----. "Middle East Geopolitical Transformation: The Disappearance of a Shatterbelt." Journal of Geography 91 (January/February 1992): 2-10.
Cressey, George B. Crossroads: Land and Life in Southwest Asia. New York: Lippincott, 1960.
de Blij, Harm J., and Peter O. Muller. Geography: Regions and Concepts. 6th ed. New York: Wiley, 1992.
Drysdale, Alasdair, and Gerald H. Blake. The Middle East and North Africa: A Political Geography. New York: Oxford, 1985.
Fairgrieve, James. Geography and World Power. London: University of London Press, 1927.
Mackinder, Halford. Democratic Ideals and Reality. New York: Holt, 1919.
-----. "The Geographic Pivot of History." The Geographic Journal 23, no. 4 (April 1904): 421-437.
Mahan, Alfred Thayer. The Influence of Sea Power Upon History, 1660-1783. New York: Hill and Wang, 1890 and 1957.
Semple, Ellen Churchill. The Geography of the Mediterranean Region. New York: AMS Press, 1931 and 1971.
Spykman, Nicholas. The Geography of Peace. New York: Harcourt Brace, 1944.
Standard and Poor's. Industry Surveys. New York: Standard and Poor's, October 11, 1990 and July 16, 1992.
United Nations. International Trade Statistics Yearbook, 1990. Vol. II. New York: United Nations, 1992.
-----. Statistical Yearbook, 1988-89. 37th ed. New York: United Nations, 1992.
Whittlesey, Derwent. The Earth and the State: A Study of Political Geography. New York: Henry Holt, 1959.
Richard W. Fox is professor of history, political science, and geography at Suffolk Community College. Saul B. Cohen is university professor of geography at Hunter College and the City University of New York. |
Math ~ Unit 2
Unit 2: Number Stories and Arrays
In this unit, our class will:
- Use basic facts to solve fact extensions;
- Solve number stories using question marks for the unknown;
- Solve multi-step number stories;
- Solve number stories using representations;
- Solve equal-groups number stories;
- Solve number stories using number models and arrays;
- Create mathematical representations to solve problems;
- Solve division number stories;
- Create arrays to practice division with and without remainders;
- Use Frames-and-Arrows diagrams to solve problems;
- Multiply and divide within 100 fluently;
- Know all products of 1-digit numbers multiplied by 1, 2, 5, and 10;
- Add and subtract within 1,000 fluently
Everyday Mathematics Games:
Throughout Unit 2, our class will play the games listed below to practice skills and develop strategic thinking. Check your child's homework folder for information as we play these games in class!
- Addition Top-it (see Student Reference Book page 260-261)
- Subtraction Top-it (see Student Reference Book page 260-261)
- Salute! (see Student Reference Book page 255)
- Multiplication Draw (see Student Reference Book page 248)
- Roll to 1, 000 (see Student Reference Book page 253)
- Array Bingo (see Student Reference Book page 232)
- Division Arrays (see Student Reference Book page 238)
Choose one of the sites below to find great activities to work on:
- Math Adding- Work with a friend to solve addition problems.
- Math Baseball- Practice your addition/subtraction skills by playing a game of baseball.
- MathCar Racing- Race around the track by solving math problems
- Circus Fun- Practice your ability to answer number stories
- Number Family Practice- Practice your number family skills with this website
- Math 500 Race- Practice Addition/Subtraction and Multiplication/Division as you race your car around the track! What is your fastest score?? (Press "Start Your Engine" to begin)
- Math Fact Cafe- Practice multiplication and division facts using flashcards (scroll to the middle of the page to find the "flashcards")
- Clear it! Addition Game- Create an addition number sentence that equals the target number (sum)
- Clear it! Multiplication Game- Create a multiplication number sentence that equals the target number (product)
- Multiplication Mine- Practice multiplication facts while mining for gems.
- Math Lines- Multiplication: Practice multiplying different factors to get the same product
- Math Lines- Addition: Practice adding different addends to get the same sum |
Updated: Jun 20
A two-way ANOVA test is a statistical test used to determine the effect of two nominal predictor variables on a continuous outcome variable. ANOVA stands for analysis of variance and tests for differences in the effects of independent variables on a dependent variable.
An ANOVA test is the first step in identifying factors that influence a given outcome. Once an ANOVA test is performed, a tester may be able to perform further analysis on the systematic factors that are statistically contributing to the data set's variability. A two-way ANOVA test reveals the results of two independent variables on a dependent variable. ANOVA test results can then be used in an F-test on the significance of the regression formula overall.
Assumptions for ANOVA
To use the ANOVA test, we make the following assumptions:
Each group sample is drawn from a normally distributed population
All populations have a common variance
All samples are drawn independently of each other
Within each sample, the observations are sampled randomly and independently of each other
Factor effects are additive
The presence of outliers can also cause problems. In addition, we need to make sure that the F statistic is well behaved. In particular, the F statistic is relatively robust to violations of normality provided:
The populations are symmetrical and uni-modal.
The sample sizes for the groups are equal and greater than 10
In general, as long as the sample sizes are equal (called a balanced model) and sufficiently large, the normality assumption can be violated provided the samples are symmetrical or at least similar in shape (e.g. all are negatively skewed). The F statistic is not so robust to violations of homogeneity of variances. A rule of thumb for balanced models is that if the ratio of the largest variance to smallest variance is less than 3 or 4, the F-test will be valid. If the sample sizes are unequal then smaller differences in variances can invalidate the F-test. Much more attention needs to be paid to unequal variances than to non-normality of data.
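As a quick sanity check on this rule of thumb, a sketch like the following (with made-up group data) computes the ratio of the largest to the smallest group variance in R:
# Rule-of-thumb check: largest-to-smallest variance ratio (illustrative data)
groups <- list(
  a = c(23, 25, 28, 30, 27),
  b = c(31, 29, 35, 33, 30),
  c = c(22, 26, 24, 27, 25)
)
vars  <- sapply(groups, var)
ratio <- max(vars) / min(vars)
ratio   # for a balanced design, a value below roughly 3-4 suggests the F-test is valid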
Two Factor ANOVA without Replication
A new fertilizer has been developed to increase the yield on crops, and the makers of the fertilizer want to better understand which of the three formulations (blends) of this fertilizer are most effective for wheat, corn, soy beans and rice (crops). They test each of the three blends on one sample of each of the four types of crops. The crop yields for the 12 combinations are as shown in Figure 1.
There are two null hypotheses: one for the rows and the other for the columns.
The null hypothesis for the rows says that all the blends are the same. The F statistic for rows = 12.826 and F critical = 5.14. Because 12.826 > 5.14, we have to reject the null hypothesis that all the blends are equal. Also, the P-value is less than the significance level: if p is low, the null must go. The differences between the fertilizer blends are statistically significant.
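The same layout can be reproduced in R. The yields below are placeholders (Figure 1 is not reproduced here); what matters is the structure — one observation per blend-crop combination, so the model has no interaction term:
# Two-factor ANOVA without replication (illustrative yields)
yield <- c(120, 135, 128, 140,
           150, 162, 155, 168,
           110, 118, 112, 125)
blend <- factor(rep(c("Blend1", "Blend2", "Blend3"), each = 4))
crop  <- factor(rep(c("wheat", "corn", "soybeans", "rice"), times = 3))
fit <- aov(yield ~ blend + crop)   # additive model: one observation per cell
summary(fit)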
Let’s assume that McDonald's in the Czech Republic uses secret quality agents who pretend to be customers to enter a store and document their experience in terms of customer service, cleanliness, and quality. For its locations in the Czech cities of Prague, Brno and Ostrava, McDonald’s has trained 6 quality agents. Each of the 6 agents will be assigned to visit the same store in each of the 3 cities. The visit sequence will be random. We would like to know if a difference in quality agent ratings exists among the cities. Are they all about the same?
The overall mean is 70. Prague and Brno are below the mean and Ostrava is above it. Are these ratings significantly different or not?
F critical = 4.1
F statistic = 5.5
5.5 > 4.1: Reject the null hypothesis
P-value = 0.02 < α (0.05): Reject the null hypothesis
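Because every agent rates every city, this is a randomized block design (agents are blocks, cities are the treatment of interest). A minimal R sketch with invented ratings:
# Randomized block layout: cities as treatment, agents as blocks (invented ratings)
rating <- c(68, 72, 65, 70, 66, 71,   # Prague
            64, 69, 67, 66, 68, 65,   # Brno
            78, 75, 80, 74, 77, 76)   # Ostrava
city  <- factor(rep(c("Prague", "Brno", "Ostrava"), each = 6))
agent <- factor(rep(paste0("agent", 1:6), times = 3))
summary(aov(rating ~ city + agent))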
Conducting a Two-Way ANOVA with replication in SPSS
In this exercise I am using fictitious data. There are two independent variables. The first is Duration of treatment, which has 3 levels: 6, 12, and 18 weeks. The second is Gender, a separate independent variable with two levels: male and female. There is also a dependent variable containing the results of treatment; a lower symptom level indicates fewer symptoms.
We have one dependent variable SymptomLevel and two independent variables.
Click on Plots...
(i) Move Duration to the Horizontal Axis box and press the Add button.
(ii) Move Gender to the Horizontal Axis box and press the Add button.
(iii) Move Duration to the Horizontal Axis box and Gender to Separate Lines, press the Add button, and then the Continue button.
In the Univariate window click on Post Hoc...
We are going to do a post hoc test for the Duration variable because it has 3 levels. I pick R-E-G-W-Q. You can also try LSD or Tukey or any other test.
We also set Estimated Marginal Means under the EM Means… button.
Confidence interval adjustment: Bonferroni.
In Options tick Descriptive statistics, Estimates of effect size, Homogeneity tests. Click on the Continue button and then OK to run the tests.
Between - Subjects Factors:
We have 3 levels for duration and 2 levels for gender. The sample sizes are identical for all the different levels of duration and gender.
You can look at the means for 6, 12, and 18 weeks by Gender. A couple of the scores stand out. The 18-week mean for males is low (36), and the 18-week total is also low. At 12 weeks, females have a significantly lower mean score than males.
Based on this test we would reject the null hypothesis
Tests of Between-Subjects Effects
Duration: Sig. 0.009: there is a statistically significant difference on the dependent variable SymptomLevel for the independent variable Duration.
Gender: Sig. 0.894: the differences between genders are not statistically significant. There is no difference in SymptomLevel by Gender.
Partial Eta Squared: 0.000. Essentially none of the variance in SymptomLevel can be explained by Gender.
Duration*Gender: Sig. 0.000: the interaction between Duration and Gender is statistically significant. The effect size is 0.281, so 28.1% of the variance in SymptomLevel can be explained by this interaction. When the lines on the profile plot cross, we should be cautious about interpreting the main effects on their own, because we cannot say whether there is a simple upward or downward pattern.
Duration, Estimates: we have a low mean for the 18-week treatment. We can say that the 18-week treatment reduces the number of symptoms.
We know from the Tests of Between-Subjects Effects table that there is a statistically significant difference between the levels of Duration on the dependent variable SymptomLevel (Duration Sig. = 0.009), which means there are significant differences between the weeks of treatment. The Pairwise Comparisons table can tell us where these differences occur.
We do not have a statistically significant difference between the 6-week and 12-week conditions [Sig. 1.000]. But we do have a significant difference between 6 weeks and 18 weeks [Sig. 0.012]. The other comparisons are not statistically significant because their values are above 0.05.
The differences between genders are not statistically significant.
Duration: 18-week treatment seems to be much more effective than the other weeks in terms of removing symptoms.
At the 6-week level, the symptom level was roughly the same for males and females. For the 12-week program, the male symptom level was higher and the female level was lower, meaning females were responding better. For the 18-week program, female symptoms increase while male symptoms show a marked decrease. Females benefit most from the 12-week treatment, males from the 18-week treatment.
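The same model can be fitted outside SPSS. A minimal R sketch, with fictitious data generated only to mirror the structure described above (three durations crossed with two genders), so the workflow rather than the numbers is what carries over:
# Fictitious data mirroring the SPSS example
set.seed(1)
df <- data.frame(
  Duration = factor(rep(c("6wk", "12wk", "18wk"), each = 20)),
  Gender   = factor(rep(rep(c("male", "female"), each = 10), times = 3)),
  SymptomLevel = round(rnorm(60, mean = rep(c(60, 55, 45), each = 20), sd = 8))
)
fit <- aov(SymptomLevel ~ Duration * Gender, data = df)
summary(fit)                                                # main effects and interaction
interaction.plot(df$Duration, df$Gender, df$SymptomLevel)   # profile plot
TukeyHSD(fit, which = "Duration")                           # post hoc comparisons across durations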
Conducting a Two-Way ANOVA with replication in Excel
You are a researcher interested in the effectiveness of 3 different cotton plant foods: Awesome Advantage (AA), Big Buds (BB), and Copious Cotton (CC). Therefore, you design an experiment in which you will test each plant food with two feedings per day for 75 days after planting. 8 plants will be tested for each combination. The variable of interest is the plant height in centimetres.
Is there any difference in plant growth when comparing 1 feeding to 2 feedings per day? (feeding frequency). Do some plant foods work better with one feeding per day and others with two? (feeding frequency together with food type).
Rather than running formal tests for outliers, normality, and homoscedasticity (homogeneity of variance), we will visually inspect the charts. Let's start with homoscedasticity and a boxplot.
If we look at the interquartile ranges, we can see they are not very close to each other. The variances are unequal. This is called heteroskedasticity (or heteroscedasticity): it occurs when the standard errors of a variable, monitored over a specific amount of time, are non-constant. We can also check for outliers: there is no score below or above any of the whiskers.
Now we try to evaluate normality. We can look at a histogram or run descriptive statistics to get an idea of whether the data are normally distributed.
As we can see, Excel gives us kurtosis and skewness values that are relatively close to zero, which suggests that the data tail off appropriately and are relatively symmetrical. The mean and median values are also close to each other. These descriptive statistics are consistent with a normal distribution.
In the beginning we had a couple of questions. For AA, BB, and CC, how many feedings produce more growth? Do two feedings produce more growth across ALL plant foods consistently? It appears that two feedings [blue line] are better for AA and BB but not for CC. This type of situation is called an interaction. An interaction occurs when the effect of one factor changes for different levels of the other factor. In this case, the most effective feeding frequency changes across plant food types. On a marginal means graph, as a rule, we look at whether the lines cross or "would" cross. According to the chart we have a significant interaction here, meaning our two factors are too tied together to look at them individually.
Conducting a Two-Way ANOVA with replication in Excel.
ANOVA table. The first row to interpret is called Sample; this is one feeding versus two feedings per day. The F value is lower than F critical, meaning the two feeding frequencies produce equivalent growth. We fail to reject the null hypothesis because the p-value is 0.45 > α = 0.05.
Columns are the plant foods [AA, BB, CC]. The F value is greater than F critical: the plant foods are not equal, and there are statistically significant differences between them. We reject the null hypothesis and conclude that the plants react differently to the different foods.
Interaction. If the p-value is lower than α = 0.05, the interaction is statistically significant. When considering one versus two feedings per day together with plant food type, there are significant differences.
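To make the arithmetic behind those rows concrete, here is a minimal Java sketch. The sums of squares are hypothetical placeholders, not the values from this experiment; the point is only how each row of Excel's table turns a sum of squares into a mean square and an F ratio, which is then compared with F critical (or its p-value with α = 0.05).

```java
public class TwoWayAnovaRows {
    // Mean square = sum of squares divided by its degrees of freedom
    static double meanSquare(double ss, int df) {
        return ss / df;
    }

    public static void main(String[] args) {
        // Hypothetical sums of squares (placeholders, not the cotton data)
        double ssSample = 12.0, ssColumns = 340.0, ssInteraction = 95.0, ssWithin = 410.0;

        // Degrees of freedom for 2 feeding levels x 3 foods x 8 plants per cell
        int dfSample = 2 - 1;                      // feedings: 1
        int dfColumns = 3 - 1;                     // plant foods: 2
        int dfInteraction = dfSample * dfColumns;  // 2
        int dfWithin = 2 * 3 * (8 - 1);            // 42

        double msWithin = meanSquare(ssWithin, dfWithin);

        // Each effect's F ratio is its mean square over the Within (error) mean square
        System.out.printf("Sample      F = %.2f%n", meanSquare(ssSample, dfSample) / msWithin);
        System.out.printf("Columns     F = %.2f%n", meanSquare(ssColumns, dfColumns) / msWithin);
        System.out.printf("Interaction F = %.2f%n", meanSquare(ssInteraction, dfInteraction) / msWithin);
    }
}
```

Excel's p-values and F critical values then come from the F distribution with the effect's degrees of freedom and the Within degrees of freedom.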
Some shorelines experience two almost equal high tides and two low tides each day, called a semi-diurnal tide. Some locations experience only one high and one low tide each day, called a diurnal tide. Some locations experience two uneven tides a day, or sometimes one high and one low each day; this is called a mixed tide. The times and amplitude of the tides at a locale are influenced by the alignment of the Sun and Moon, by the pattern of tides in the deep ocean, by the amphidromic systems of the oceans, and by the shape of the coastline and near-shore bathymetry (see Timing).
Tides vary on timescales ranging from hours to years due to numerous influences. To make accurate records, tide gauges at fixed stations measure the water level over time. Gauges ignore variations caused by waves with periods shorter than minutes. These data are compared to the reference (or datum) level usually called mean sea level.
While tides are usually the largest source of short-term sea-level fluctuations, sea levels are also subject to forces such as wind and barometric pressure changes, resulting in storm surges, especially in shallow seas and near coasts.
Tidal phenomena are not limited to the oceans, but can occur in other systems whenever a gravitational field that varies in time and space is present. For example, the solid part of the Earth is affected by tides, though this is not as easily seen as the water tidal movements.
Tide changes proceed via the following stages:
- Sea level rises over several hours, covering the intertidal zone; flood tide.
- The water rises to its highest level, reaching high tide.
- Sea level falls over several hours, revealing the intertidal zone; ebb tide.
- The water stops falling, reaching low tide.
Tides produce oscillating currents known as tidal streams. The moment that the tidal current ceases is called slack water or slack tide. The tide then reverses direction and is said to be turning. Slack water usually occurs near high water and low water. But there are locations where the moments of slack tide differ significantly from those of high and low water.
Tides are commonly semi-diurnal (two high waters and two low waters each day), or diurnal (one tidal cycle per day). The two high waters on a given day are typically not the same height (the daily inequality); these are the higher high water and the lower high water in tide tables. Similarly, the two low waters each day are the higher low water and the lower low water. The daily inequality is not consistent and is generally small when the Moon is over the equator.
Observation and prediction
From ancient times, tidal observation and discussion have increased in sophistication, first noting the daily recurrence, then the tides' relationship to the sun and moon. Pytheas travelled to the British Isles about 325 BC and mentions spring tides.
In the 2nd century BC, the Babylonian astronomer, Seleucus of Seleucia, correctly described the phenomenon of tides in order to support his heliocentric theory. He correctly theorized that tides were caused by the moon, although he believed that the interaction was mediated by the pneuma. He noted that tides varied in time and strength in different parts of the world. According to Strabo (1.1.9), Seleucus was the first to link tides to the lunar attraction, and that the height of the tides depends on the moon's position relative to the sun.
The Naturalis Historia of Pliny the Elder collates many tidal observations, e.g., the spring tides are a few days after (or before) new and full moon and are highest around the equinoxes, though Pliny noted many relationships now regarded as fanciful. In his Geography, Strabo described tides in the Persian Gulf having their greatest range when the moon was furthest from the plane of the equator. All this despite the relatively small amplitude of Mediterranean basin tides. (The strong currents through the Euripus Strait and the Strait of Messina puzzled Aristotle.) Philostratus discussed tides in Book Five of The Life of Apollonius of Tyana. Philostratus mentions the moon, but attributes tides to "spirits". In Europe around 730 AD, the Venerable Bede described how the rising tide on one coast of the British Isles coincided with the fall on the other and described the time progression of high water along the Northumbrian coast.
In the 9th century, the Arabian earth-scientist, Al-Kindi (Alkindus), wrote a treatise entitled Risala fi l-Illa al-Failali l-Madd wa l-Fazr (Treatise on the Efficient Cause of the Flow and Ebb), in which he presents an argument on tides which "depends on the changes which take place in bodies owing to the rise and fall of temperature." He describes a clear and precise laboratory experiment in order to prove his argument. Al-Kindi explained the influence of Sun and Moon on the tide phenomenon, according to the Islamic Encyclopedia, as follows: "The sun and the moon warm the water and hence cause it to expand. It is this expansion that makes the water spring out of the center of the earth, and depending on the yearly, monthly and daily movement of the sun and the moon, it also causes the ebb and flow of the sea water known as tides."
The first tide table in China was recorded in 1056 AD primarily for visitors wishing to see the famous tidal bore in the Qiantang River. The first known British tide table is thought to be that of John Wallingford, who died Abbot of St. Albans in 1213, based on high water occurring 48 minutes later each day, and three hours earlier at the Thames mouth than upriver at London.
William Thomson (Lord Kelvin) led the first systematic harmonic analysis of tidal records starting in 1867. The main result was the building of a tide-predicting machine using a system of pulleys to add together six harmonic time functions. It was "programmed" by resetting gears and chains to adjust phasing and amplitudes. Similar machines were used until the 1960s.
The first known sea-level record of an entire spring–neap cycle was made in 1831 on the Navy Dock in the Thames Estuary. Many large ports had automatic tide gauge stations by 1850.
William Whewell first mapped co-tidal lines ending with a nearly global chart in 1836. In order to make these maps consistent, he hypothesized the existence of amphidromes where co-tidal lines meet in the mid-ocean. These points of no tide were confirmed by measurement in 1840 by Captain Hewett, RN, from careful soundings in the North Sea.
The tidal forces due to the Moon and Sun generate very long waves which travel all around the ocean following the paths shown in co-tidal charts. The time when the crest of the wave reaches a port then gives the time of high water at the port. The time taken for the wave to travel around the ocean also means that there is a delay between the phases of the moon and their effect on the tide. Springs and neaps in the North Sea, for example, are two days behind the new/full moon and first/third quarter moon. This is called the tide's age.
The ocean bathymetry greatly influences the tide's exact time and height at a particular coastal point. There are some extreme cases; the Bay of Fundy, on the east coast of Canada, is often stated to have the world's highest tides because of its shape, bathymetry, and its distance from the continental shelf edge. Measurements made in November 1998 at Burntcoat Head in the Bay of Fundy recorded a maximum range of 16.3 meters (53 ft) and a highest predicted extreme of 17 meters (56 ft). Similar measurements made in March 2002 at Leaf Basin, Ungava Bay in northern Quebec gave similar values (allowing for measurement errors), a maximum range of 16.2 meters (53 ft) and a highest predicted extreme of 16.8 meters (55 ft). Ungava Bay and the Bay of Fundy lie similar distances from the continental shelf edge, but Ungava Bay is free of pack ice for only about four months every year while the Bay of Fundy rarely freezes.
Southampton in the United Kingdom has a double high water caused by the interaction between the region's different tidal harmonics, caused primarily by the east/west orientation of the English Channel and the fact that when it is high water at Dover it is low water at Land's End (some 300 nautical miles distant) and vice versa. This is contrary to the popular belief that the flow of water around the Isle of Wight creates two high waters. The Isle of Wight is important, however, since it is responsible for the 'Young Flood Stand', which describes the pause of the incoming tide about three hours after low water.
Because the oscillation modes of the Mediterranean Sea and the Baltic Sea do not coincide with any significant astronomical forcing period, the largest tides are close to their narrow connections with the Atlantic Ocean. Extremely small tides also occur for the same reason in the Gulf of Mexico and Sea of Japan. Elsewhere, as along the southern coast of Australia, low tides can be due to the presence of a nearby amphidrome.
Isaac Newton's theory of gravitation first enabled an explanation of why there were generally two tides a day, not one, and offered hope for detailed understanding. Although it may seem that tides could be predicted via a sufficiently detailed knowledge of the instantaneous astronomical forcings, the actual tide at a given location is determined by astronomical forces accumulated over many days. Precise results require detailed knowledge of the shape of all the ocean basins—their bathymetry and coastline shape.
Current procedure for analysing tides follows the method of harmonic analysis introduced in the 1860s by William Thomson. It is based on the principle that the astronomical theories of the motions of sun and moon determine a large number of component frequencies, and at each frequency there is a component of force tending to produce tidal motion, but that at each place of interest on the Earth, the tides respond at each frequency with an amplitude and phase peculiar to that locality. At each place of interest, the tide heights are therefore measured for a period of time sufficiently long (usually more than a year in the case of a new port not previously studied) to enable the response at each significant tide-generating frequency to be distinguished by analysis, and to extract the tidal constants for a sufficient number of the strongest known components of the astronomical tidal forces to enable practical tide prediction. The tide heights are expected to follow the tidal force, with a constant amplitude and phase delay for each component. Because astronomical frequencies and phases can be calculated with certainty, the tide height at other times can then be predicted once the response to the harmonic components of the astronomical tide-generating forces has been found.
The main patterns in the tides are
- the twice-daily variation
- the difference between the first and second tide of a day
- the spring–neap cycle
- the annual variation
The Highest Astronomical Tide is the perigean spring tide when both the sun and the moon are closest to the Earth.
When confronted by a periodically varying function, the standard approach is to employ Fourier series, a form of analysis that uses sinusoidal functions as a basis set, having frequencies that are zero, one, two, three, etc. times the frequency of a particular fundamental cycle. These multiples are called harmonics of the fundamental frequency, and the process is termed harmonic analysis. If the basis set of sinusoidal functions suit the behaviour being modelled, relatively few harmonic terms need to be added. Orbital paths are very nearly circular, so sinusoidal variations are suitable for tides.
For the analysis of tide heights, the Fourier series approach has in practice to be made more elaborate than the use of a single frequency and its harmonics. The tidal patterns are decomposed into many sinusoids having many fundamental frequencies, corresponding (as in the lunar theory) to many different combinations of the motions of the Earth, the moon, and the angles that define the shape and location of their orbits.
For tides, then, harmonic analysis is not limited to harmonics of a single frequency. In other words, the harmonics are multiples of many fundamental frequencies, not just of the fundamental frequency of the simpler Fourier series approach. Their representation as a Fourier series having only one fundamental frequency and its (integer) multiples would require many terms, and would be severely limited in the time-range for which it would be valid.
The study of tide height by harmonic analysis was begun by Laplace, William Thomson (Lord Kelvin), and George Darwin. A.T. Doodson extended their work, introducing the Doodson Number notation to organise the hundreds of resulting terms. This approach has been the international standard ever since, and the complications arise as follows: the tide-raising force is notionally given by sums of several terms. Each term is of the form
- A·cos(w·t + p)
where A is the amplitude, w is the angular frequency usually given in degrees per hour corresponding to t measured in hours, and p is the phase offset with regard to the astronomical state at time t = 0 . There is one term for the moon and a second term for the sun. The phase p of the first harmonic for the moon term is called the lunitidal interval or high water interval. The next step is to accommodate the harmonic terms due to the elliptical shape of the orbits. Accordingly, the value of A is not a constant but also varying with time, slightly, about some average figure. Replace it then by A(t) where A is another sinusoid, similar to the cycles and epicycles of Ptolemaic theory. Accordingly,
- A(t) = A·(1 + Aa·cos(wa·t + pa)) ,
which is to say an average value A with a sinusoidal variation about it of magnitude Aa , with frequency wa and phase pa . Thus the simple term is now the product of two cosine factors:
- A·[1 + Aa·cos(wa ·t + pa)]·cos(w·t + p)
Given that for any x and y
- cos(x)·cos(y) = ½·cos( x + y ) + ½·cos( x–y ) ,
it is clear that a compound term involving the product of two cosine terms each with their own frequency is the same as three simple cosine terms that are to be added at the original frequency and also at frequencies which are the sum and difference of the two frequencies of the product term. (Three, not two terms, since the whole expression is (1 + cos(x))·cos(y) .) Consider further that the tidal force on a location depends also on whether the moon (or the sun) is above or below the plane of the equator, and that these attributes have their own periods also incommensurable with a day and a month, and it is clear that many combinations result. With a careful choice of the basic astronomical frequencies, the Doodson Number annotates the particular additions and differences to form the frequency of each simple cosine term.
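As a rough illustration of how a predicted tide height is assembled from terms of the form A·cos(w·t + p), the following Java sketch sums two constituents. The amplitudes, phases, and mean level are made-up placeholder values, not constants for any real port; only the two periods (12.4206 h for the principal lunar semi-diurnal constituent and 12 h for the principal solar one) are standard.

```java
public class TideSketch {
    // One harmonic constituent: amplitude (metres), period (hours), phase (radians)
    record Constituent(double amplitude, double periodHours, double phase) {}

    public static void main(String[] args) {
        // Placeholder amplitudes, phases, and mean level, for illustration only
        Constituent m2 = new Constituent(1.20, 12.4206, 0.0);  // principal lunar semi-diurnal
        Constituent s2 = new Constituent(0.55, 12.0000, 1.0);  // principal solar semi-diurnal
        Constituent[] constituents = { m2, s2 };
        double meanLevel = 2.0;  // height of mean sea level above the chart datum

        // Predicted height = mean level + sum over constituents of A*cos(w*t + p)
        for (int hour = 0; hour <= 24; hour++) {
            double height = meanLevel;
            for (Constituent c : constituents) {
                double w = 2.0 * Math.PI / c.periodHours();  // angular frequency, radians per hour
                height += c.amplitude() * Math.cos(w * hour + c.phase());
            }
            System.out.printf("t = %2d h   height = %.2f m%n", hour, height);
        }
    }
}
```

In practice the constants for each constituent are extracted from a long tide-gauge record at the port of interest, as described above, and many more than two constituents are carried.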
Remember that astronomical tides do not include weather effects. Also, changes to local conditions (sandbank movement, dredging harbour mouths, etc.) away from those prevailing at the measurement time affect the tide's actual timing and magnitude. Organisations quoting a "highest astronomical tide" for some location may exaggerate the figure as a safety factor against analytical uncertainties, distance from the nearest measurement point, changes since the last observation time, ground subsidence, etc., to avert liability should an engineering work be overtopped. Special care is needed when assessing the size of a "weather surge" by subtracting the astronomical tide from the observed tide.
Careful Fourier data analysis over a nineteen-year period (the National Tidal Datum Epoch in the U.S.) uses frequencies called the tidal harmonic constituents. Nineteen years is preferred because the Earth, moon and sun's relative positions repeat almost exactly in the Metonic cycle of 19 years, which is long enough to include the 18.613 year lunar nodal tidal constituent. This analysis can be done using only the knowledge of the forcing period, but without detailed understanding of the mathematical derivation, which means that useful tidal tables have been constructed for centuries. The resulting amplitudes and phases can then be used to predict the expected tides. These are usually dominated by the constituents near 12 hours (the semi-diurnal constituents), but there are major constituents near 24 hours (diurnal) as well. Longer-term constituents are fortnightly (14 day), monthly, and semiannual. Most coastlines are dominated by semi-diurnal tides, but some areas such as the South China Sea and the Gulf of Mexico are primarily diurnal. In the semi-diurnal areas, the primary constituents M2 (lunar) and S2 (solar) periods differ slightly, so that the relative phases, and thus the amplitude of the combined tide, change fortnightly (14 day period).
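That fortnightly modulation can be checked directly from the two periods (12.4206 h for M2 and 12.00 h for S2): the beat period of two nearby frequencies is

\[
\frac{1}{T_{\mathrm{beat}}} = \frac{1}{12.0000\ \mathrm{h}} - \frac{1}{12.4206\ \mathrm{h}} \approx 2.82\times10^{-3}\ \mathrm{h}^{-1},
\qquad
T_{\mathrm{beat}} \approx 354\ \mathrm{h} \approx 14.8\ \mathrm{days},
\]

which is the familiar spring–neap period of roughly a fortnight.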
In the M2 plot above, each cotidal line differs by one hour from its neighbors, and the thicker lines show tides in phase with equilibrium at Greenwich. The lines rotate around the amphidromic points counterclockwise in the northern hemisphere so that from Baja California Peninsula to Alaska and from France to Ireland the M2 tide propagates northward. In the southern hemisphere this direction is clockwise. On the other hand M2 tide propagates counterclockwise around New Zealand, but this is because the islands act as a dam and permit the tides to have different heights on the islands' opposite sides. (The tides do propagate northward on the east side and southward on the west coast, as predicted by theory.)
The exception is at Cook Strait where the tidal currents periodically link high to low water. This is because cotidal lines 180° around the amphidromes are in opposite phase, for example high water across from low water at each end of Cook Strait. Each tidal constituent has a different pattern of amplitudes, phases, and amphidromic points, so the M2 patterns cannot be used for other tide components.
Because the moon is moving in its orbit around the earth and in the same sense as the Earth's rotation, a point on the earth must rotate slightly further to catch up so that the time between semidiurnal tides is not twelve but 12.4206 hours—a bit over twenty-five minutes extra. The two peaks are not equal. The two high tides a day alternate in maximum heights: lower high (just under three feet), higher high (just over three feet), and again lower high. Likewise for the low tides.
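The 12.4206-hour figure can be recovered with a one-line calculation. Relative to the Sun, the Moon completes a circuit of the sky in one synodic month of about 29.53 days, so the interval between successive upper transits of the Moon (the tidal or lunar day) is

\[
T_{\mathrm{lunar\ day}} \approx 24\ \mathrm{h}\times\frac{29.53}{29.53 - 1} \approx 24.84\ \mathrm{h},
\qquad
\tfrac{1}{2}\,T_{\mathrm{lunar\ day}} \approx 12.42\ \mathrm{h},
\]

matching the interval between successive semidiurnal high tides quoted above.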
When the Earth, moon, and sun are in line (sun–Earth–moon, or sun–moon–Earth) the two main influences combine to produce spring tides; when the two forces are opposing each other as when the angle moon–Earth–sun is close to ninety degrees, neap tides result. As the moon moves around its orbit it changes from north of the equator to south of the equator. The alternation in high tide heights becomes smaller, until they are the same (at the lunar equinox, the moon is above the equator), then redevelop but with the other polarity, waxing to a maximum difference and then waning again.
The tides' influence on current flow is much more difficult to analyse, and data is much more difficult to collect. A tidal height is a simple number which applies to a wide region simultaneously. A flow has both a magnitude and a direction, both of which can vary substantially with depth and over short distances due to local bathymetry. Also, although a water channel's center is the most useful measuring site, mariners object when current-measuring equipment obstructs waterways. A flow proceeding up a curved channel is the same flow, even though its direction varies continuously along the channel. Surprisingly, flood and ebb flows are often not in opposite directions. Flow direction is determined by the upstream channel's shape, not the downstream channel's shape. Likewise, eddies may form in only one flow direction.
Nevertheless, current analysis is similar to tidal analysis: in the simple case, at a given location the flood flow is in mostly one direction, and the ebb flow in another direction. Flood velocities are given positive sign, and ebb velocities negative sign. Analysis proceeds as though these are tide heights.
In more complex situations, the main ebb and flood flows do not dominate. Instead, the flow direction and magnitude trace an ellipse over a tidal cycle (on a polar plot) instead of along the ebb and flood lines. In this case, analysis might proceed along pairs of directions, with the primary and secondary directions at right angles. An alternative is to treat the tidal flows as complex numbers, as each value has both a magnitude and a direction.
Tide flow information is most commonly seen on nautical charts, presented as a table of flow speeds and bearings at hourly intervals, with separate tables for spring and neap tides. The timing is relative to high water at some harbour where the tidal behaviour is similar in pattern, though it may be far away.
As with tide height predictions, tide flow predictions based only on astronomical factors do not incorporate weather conditions, which can completely change the outcome.
The tidal flow through Cook Strait between the two main islands of New Zealand is particularly interesting, as the tides on each side of the strait are almost exactly out of phase, so that one side's high water is simultaneous with the other's low water. Strong currents result, with almost zero tidal height change in the strait's center. Yet, although the tidal surge normally flows in one direction for six hours and in the reverse direction for six hours, a particular surge might last eight or ten hours with the reverse surge enfeebled. In especially boisterous weather conditions, the reverse surge might be entirely overcome so that the flow continues in the same direction through three or more surge periods.
A further complication for Cook Strait's flow pattern is that the tide at the north side (e.g. at Nelson) follows the common bi-weekly spring–neap tide cycle (as found along the west side of the country), but the south side's tidal pattern has only one cycle per month, as on the east side: Wellington, and Napier.
The graph of Cook Strait's tides shows separately the high water and low water height and time, through November 2007; these are not measured values but instead are calculated from tidal parameters derived from years-old measurements. Cook Strait's nautical chart offers tidal current information. For instance the January 1979 edition for 41°13·9’S 174°29·6’E (north west of Cape Terawhiti) refers timings to Westport while the January 2004 issue refers to Wellington. Near Cape Terawhiti in the middle of Cook Strait the tidal height variation is almost nil while the tidal current reaches its maximum, especially near the notorious Karori Rip. Aside from weather effects, the actual currents through Cook Strait are influenced by the tidal height differences between the two ends of the strait and as can be seen, only one of the two spring tides at the north end (Nelson) has a counterpart spring tide at the south end (Wellington), so the resulting behaviour follows neither reference harbour.
Tidal energy can be extracted by two means: inserting a water turbine into a tidal current, or building ponds that release/admit water through a turbine. In the first case, the energy amount is entirely determined by the timing and tidal current magnitude. However, the best currents may be unavailable because the turbines would obstruct ships. In the second, the impoundment dams are expensive to construct, natural water cycles are completely disrupted, and ship navigation is impeded. However, with multiple ponds, power can be generated at chosen times. So far, there are few installed systems for tidal power generation (most famously, La Rance at Saint Malo, France), and they face many difficulties. Aside from environmental issues, simply withstanding corrosion and biological fouling poses engineering challenges.
Tidal power proponents point out that, unlike wind power systems, generation levels can be reliably predicted, save for weather effects. While some generation is possible for most of the tidal cycle, in practice turbines lose efficiency at lower operating rates. Since the power available from a flow is proportional to the cube of the flow speed, the times during which high power generation is possible are brief.
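The cubic dependence mentioned here is just the kinetic power flux of a moving fluid: for water of density ρ passing at speed v through a turbine of swept area A,

\[
P = \tfrac{1}{2}\,\rho\,A\,v^{3},
\]

so halving the current speed cuts the available power by a factor of eight, which is why useful generation is confined to the fastest portion of each tidal cycle.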
History of tidal physics
Investigation into tidal physics was important in the early development of heliocentrism and celestial mechanics, with the existence of two daily tides being explained by the Moon's gravity. Later the daily tides were explained more precisely by the interaction of the Moon's and the sun's gravity.
Galileo Galilei in his 1632 Dialogue Concerning the Two Chief World Systems, whose working title was Dialogue on the Tides, gave an explanation of the tides. The resulting theory, however, was incorrect as he attributed the tides to the sloshing of water caused by the Earth's movement around the sun. He hoped to provide mechanical proof of the Earth's movement – the value of his tidal theory is disputed. At the same time Johannes Kepler correctly suggested that the Moon caused the tides, which he based upon ancient observations and correlations, an explanation which was rejected by Galileo. It was originally mentioned in Ptolemy's Tetrabiblos as having derived from ancient observation.
Isaac Newton (1642–1727) explained tides as the product of the gravitational attraction of astronomical masses. His explanation of the tides (and many other phenomena) was published in the Principia (1687) and used his theory of universal gravitation to explain the lunar and solar attractions as the origin of the tide-generating forces. Newton and others before Pierre-Simon Laplace worked the problem from the perspective of a static system (equilibrium theory), that provided an approximation that described the tides that would occur in a non-inertial ocean evenly covering the whole Earth. The tide-generating force (or its corresponding potential) is still relevant to tidal theory, but as an intermediate quantity (forcing function) rather than as a final result; theory must also consider the Earth's accumulated dynamic tidal response to the applied forces, which response is influenced by bathymetry, Earth's rotation, and other factors.
Maclaurin used Newton's theory to show that a smooth sphere covered by a sufficiently deep ocean under the tidal force of a single deforming body is a prolate spheroid (essentially a three-dimensional oval) with major axis directed toward the deforming body. Maclaurin wrote about the Earth's rotational effects on motion. Euler realized that the tidal force's horizontal component (more than the vertical) drives the tide. In 1744 Jean le Rond d'Alembert studied tidal equations for the atmosphere which did not include rotation.
Pierre-Simon Laplace formulated a system of partial differential equations relating the ocean's horizontal flow to its surface height, a major dynamic theory for water tides. The Laplace tidal equations are still in use today. William Thomson, 1st Baron Kelvin, rewrote Laplace's equations in terms of vorticity which allowed for solutions describing tidally driven coastally trapped waves, known as Kelvin waves.
Others including Kelvin and Henri Poincaré further developed Laplace's theory. Based on these developments and the lunar theory of E W Brown describing the motions of the Moon, Arthur Thomas Doodson developed and published in 1921 the first modern development of the tide-generating potential in harmonic form: Doodson distinguished 388 tidal frequencies. Some of his methods remain in use.
The tidal force produced by a massive object (Moon, hereafter) on a small particle located on or in an extensive body (Earth, hereafter) is the vector difference between the gravitational force exerted by the Moon on the particle, and the gravitational force that would be exerted on the particle if it were located at the Earth's center of mass. The solar gravitational force on the Earth is on average 179 times stronger than the lunar, but because the Sun is on average 389 times farther from the Earth, its field gradient is weaker. The solar tidal force is 46% as large as the lunar. More precisely, the lunar tidal acceleration (along the Moon–Earth axis, at the Earth's surface) is about 1.1 × 10−7 g, while the solar tidal acceleration (along the Sun–Earth axis, at the Earth's surface) is about 0.52 × 10−7 g, where g is the gravitational acceleration at the Earth's surface. Venus has the largest effect of the other planets, at 0.000113 times the solar effect.
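The 46% figure is consistent with the two numbers just quoted. Because the tide-generating force falls off as the cube of the distance while the direct gravitational pull falls off as the square, the solar-to-lunar tidal ratio is the ratio of the direct pulls divided by one further factor of the distance ratio:

\[
\frac{F_{\mathrm{tide},\odot}}{F_{\mathrm{tide},\mathrm{Moon}}} \approx \frac{179}{389} \approx 0.46 .
\]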
The ocean's surface is closely approximated by an equipotential surface, (ignoring ocean currents) commonly referred to as the geoid. Since the gravitational force is equal to the potential's gradient, there are no tangential forces on such a surface, and the ocean surface is thus in gravitational equilibrium. Now consider the effect of massive external bodies such as the Moon and Sun. These bodies have strong gravitational fields that diminish with distance in space and which act to alter the shape of an equipotential surface on the Earth. This deformation has a fixed spatial orientation relative to the influencing body. The Earth's rotation relative to this shape causes the daily tidal cycle. Gravitational forces follow an inverse-square law (force is inversely proportional to the square of the distance), but tidal forces are inversely proportional to the cube of the distance. The ocean surface moves because of the changing tidal equipotential, rising when the tidal potential is high, which occurs on the parts of the Earth nearest to and furthest from the Moon. When the tidal equipotential changes, the ocean surface is no longer aligned with it, so the apparent direction of the vertical shifts. The surface then experiences a down slope, in the direction that the equipotential has risen.
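The inverse-cube behaviour can be seen by differencing the attracting body's pull at the Earth's surface and at its centre. For a body of mass M at distance d, a point on the near side lies a distance R (one Earth radius) closer, so the residual (tidal) acceleration is approximately

\[
\Delta g = \frac{GM}{(d-R)^{2}} - \frac{GM}{d^{2}} \approx \frac{2\,G M R}{d^{3}} \qquad (R \ll d),
\]

which is the expansion to first order in R/d.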
Laplace's tidal equations
- The vertical (or radial) velocity is negligible, and there is no vertical shear—this is a sheet flow.
- The forcing is only horizontal (tangential).
- The Coriolis effect appears as an inertial force (fictitious) acting laterally to the direction of flow and proportional to velocity.
- The surface height's rate of change is proportional to the negative divergence of velocity multiplied by the depth. As the horizontal velocity stretches or compresses the ocean as a sheet, the volume thins or thickens, respectively.
The boundary conditions dictate no flow across the coastline and free slip at the bottom.
The Coriolis effect (inertial force) steers currents moving towards the equator to the west and toward the east for flows moving away from the equator, allowing coastally trapped waves. Finally, a dissipation term can be added which is an analog to viscosity.
Tsunamis, the large waves that occur after earthquakes, are sometimes called tidal waves, but this name is given by their resemblance to the tide, rather than any actual link to the tide. Other phenomena unrelated to tides but using the word tide are rip tide, storm tide, hurricane tide, and black or red tides.
- ↑ Reddy, M.P.M. & Affholder, M. (2002). Descriptive physical oceanography: State of the Art. Taylor and Francis. p. 249. ISBN 90-5410-706-5. OCLC 223133263 47801346. http://books.google.com/?id=2NC3JmKI7mYC&pg=PA436&dq=tides+centrifugal+%22equilibrium+theory%22+date:2000-2010.
- ↑ Hubbard, Richard (1893). Boater's Bowditch: The Small Craft American Practical Navigator. McGraw-Hill Professional. p. 54. ISBN 0-07-136136-7. OCLC 44059064. http://books.google.com/?id=nfWSxRr8VP4C&pg=PA54&dq=centrifugal+revolution+and+rotation+date:1970-2009.
- ↑ Coastal orientation and geometry affects the phase, direction, and amplitude of amphidromic systems, coastal Kelvin waves as well as resonant seiches in bays. In estuaries seasonal river outflows influence tidal flow.
- ↑ "Tidal lunar day". NOAA. http://www.oceanservice.noaa.gov/education/kits/tides/media/supp_tide05.html. Do not confuse with the astronomical lunar day on the Moon. A lunar zenith is the Moon's highest point in the sky.
- ↑ Mellor, George L. (1996). Introduction to physical oceanography. Springer. p. 169. ISBN 1-56396-210-1.
- ↑ Tide tables usually list mean lower low water (mllw, the 19 year average of mean lower low waters), mean higher low water (mhlw), mean lower high water (mlhw), mean higher high water (mhhw), as well as perigean tides. These are mean values in the sense that they derive from mean data."Glossary of Coastal Terminology: H–M". Washington Department of Ecology, State of Washington. http://www.ecy.wa.gov/programs/sea/swces/products/publications/glossary/words/H_M.htm. Retrieved on 5 April 2007.
- ↑ Flussi e riflussi. Milano: Feltrinelli. 2003. ISBN 88-07-10349-4.
- ↑ van der Waerden, B.L. (1987). "The Heliocentric System in Greek, Persian and Hindu Astronomy". Annals of the New York Academy of Sciences 500 (1): 525–545 . doi:10.1111/j.1749-6632.1987.tb37224.x. Bibcode: 1987NYASA.500..525V.
- ↑ "Baike.baidu.com". Baike.baidu.com. 2012-05-02. http://baike.baidu.com/view/135336.htm. Retrieved on 2012-08-28.
- ↑ Al-Kindi, FSTC
- ↑ Plinio Prioreschi, "Al-Kindi, A Precursor Of The Scientific Revolution", Journal of the International Society for the History of Islamic Medicine, 2002 (2): 17-19
- ↑ The Times of al-Kindi, Islamic Encyclopedia
- ↑ Cartwright, D.E. (1999). Tides, A Scientific History: 11, 18
- ↑ "The Doodson–Légé Tide Predicting Machine". Proudman Oceanographic Laboratory. http://www.pol.ac.uk/home/insight/doodsonmachine.html. Retrieved on 2008-10-03.
- ↑ 15.0 15.1 Zuosheng, Y.; Emery, K.O. & Yui, X. (July 1989). "Historical Development and Use of Thousand-Year-Old Tide-Prediction Tables". Limnology and Oceanography 34 (5): 953–957. doi:10.4319/lo.1989.34.5.0953.
- ↑ Glossary of Meteorology American Meteorological Society.
- ↑ Webster, Thomas (1837). The elements of physics. Printed for Scott, Webster, and Geary. p. 168. http://books.google.com/books?id=dUwEAAAAQAAJ.
- ↑ "FAQ". http://www.waterlevels.gc.ca/english/FrequentlyAskedQuestions.shtml#importantes. Retrieved on June 23, 2007.
- ↑ 19.0 19.1 O'Reilly, C.T.R.; Ron Solvason and Christian Solomon (2005). "Where are the World's Largest Tides". BIO Annual Report "2004 in Review": 44–46. Washington, D.C.: Biotechnol. Ind. Org..
- ↑ 20.0 20.1 Charles T. O'reilly, Ron Solvason, and Christian Solomon. "Resolving the World's largest tides", in J.A Percy, A.J. Evans, P.G. Wells, and S.J. Rolston (Editors) 2005: The Changing Bay of Fundy-Beyond 400 years, Proceedings of the 6th Bay of Fundy Workshop, Cornwallis, Nova Scotia, Sept. 29, 2004 to October 2, 2004. Environment Canada-Atlantic Region, Occasional Report no. 23. Dartmouth, N.S. and Sackville, N.B.
- ↑ "English Channel double tides". Bristolnomads.org.uk. http://www.bristolnomads.org.uk/stuff/double_tides.htm. Retrieved on 2012-08-28.
- ↑ To demonstrate this Tides Home Page offers a tidal height pattern converted into an .mp3 sound file, and the rich sound is quite different from a pure tone.
- ↑ Center for Operational Oceanographic Products and Services, National Ocean Service, National Oceanic and Atmospheric Administration (January 2000). "Tide and Current Glossary". Silver Spring, MD. http://tidesandcurrents.noaa.gov/publications/glossary2.pdf.
- ↑ Harmonic Constituents, NOAA.
- ↑ 25.0 25.1 Lisitzin, E. (1974). "2 "Periodical sea-level changes: Astronomical tides"". Sea-Level Changes, (Elsevier Oceanography Series). 8. p. 5.
- ↑ "What Causes Tides?". U.S. National Oceanic and Atmospheric Administration (NOAA) National Ocean Service (Education section). http://oceanservice.noaa.gov/education/kits/tides/tides02_cause.html.
- ↑ See for example, in the 'Principia' (Book 1) (1729 translation), Corollaries 19 and 20 to Proposition 66, on pages 251–254, referring back to page 234 et seq.; and in Book 3 Propositions 24, 36 and 37, starting on page 255.
- ↑ Wahr, J. (1995). Earth Tides in "Global Earth Physics", American Geophysical Union Reference Shelf #1,. pp. 40–46.
- ↑ Cartwright, David E. (1999). Tides: A Scientific History. Cambridge, UK: Cambridge University Press.
- ↑ Case, James (March 2000). "Understanding Tides—From Ancient Beliefs to Present-day Solutions to the Laplace Equations". SIAM News 33 (2).
- ↑ Doodson, A.T. (December 1921). "The Harmonic Development of the Tide-Generating Potential". Proceedings of the Royal Society of London. Series A 100 (704): 305–329. doi:10.1098/rspa.1921.0088. Bibcode: 1921RSPSA.100..305D.
- ↑ Casotto, S. & Biscani, F. (April 2004). "A fully analytical approach to the harmonic development of the tide-generating potential accounting for precession, nutation, and perturbations due to figure and planetary terms". AAS Division on Dynamical Astronomy 36 (2).
- ↑ Moyer, T.D. (2003) "Formulation for observed and computed values of Deep Space Network data types for navigation", vol. 3 in Deep-space communications and navigation series, Wiley, pp. 126–8, ISBN 0-471-44535-5.
- ↑ According to NASA the lunar tidal force is 2.21 times larger than the solar.
- ↑ See Tidal force – Mathematical treatment and sources cited there.
- 150 Years of Tides on the Western Coast: The Longest Series of Tidal Observations in the Americas NOAA (2004).
- Eugene I. Butikov: A dynamical picture of the ocean tides
- Earth, Atmospheric, and Planetary Sciences MIT Open Courseware; Ch 8 §3
- Myths about Gravity and Tides by Mikolaj Sawicki (2005).
- Ocean Motion: Open-Ocean Tides
- Oceanography: tides by J. Floor Anthoni (2000).
- Our Restless Tides: NOAA's practical & short introduction to tides.
- Planetary alignment and the tides (NASA)
- Tidal Misconceptions by Donald E. Simanek.
- Tides and centrifugal force: Why the centrifugal force does not explain the tide's opposite lobe (with nice animations).
- O. Toledano et al. (2008): Tides in asynchronous binary systems
- Gif Animation of TPX06 tide model based on TOPEX/Poseidon (T/P) satellite radar altimetry
- Gaylord Johnson "How Moon and Sun Generate the Tides" Popular Science, April 1934
- Tide gauge observation reference networks (French designation REFMAR: Réseaux de référence des observations marégraphiques)
- NOAA Tide Predictions
- NOAA Tides and Currents information and data
- History of tide prediction
- Department of Oceanography, Texas A&M University
- Mapped, graphical and tabular tide charts for US displayed as calendar months
- Mapped, graphical US tide tables/charts in calendar form from NOAA data
- SHOM Tide Predictions
- UK Admiralty Easytide
- UK, South Atlantic, British Overseas Territories and Gibraltar tide times from the UK National Tidal and Sea Level Facility
- Tide Predictions for Australia, South Pacific & Antarctica
- Tide and Current Predictor, for stations around the world
- World Tide Tables
- Tidely U.S. Tide Predictions
- Famous Tidal Prediction Pioneers and Notable Contributions
Data and Variables in Java
Computer programs, including Java programs, are written to operate on data. For example:
• the data for an action game might be keys pressed or the position of the cursor when the mouse is clicked;
• the data for a word processing program are the keys pressed while you are typing a letter;
• the data for an accounting program would include, among other things, expenses and income;
• the data for a program that teaches Spanish could be an English word that you type in response to a question.
For a program to be run, it must be stored in the computer's memory. When data is supplied to a program, that data is also stored in memory. Thus we think of memory as a place for holding programs and data. One of the nice things about programming in a high-level language (like C or Java) is that you don’t have to worry about which memory locations are used to store your data. But how do we refer to an item of data, given that there may be many data items in memory? We use a variable. In Java, data and variable go hand in hand.
Think of memory as a set of boxes (or storage locations). Each box can hold one item of data, for example, one number. We can give a name to a box, and we will be able to refer to that box by the given name. In our example, we will need two boxes, one to hold the side of the square and one to hold the area. We will call these boxes s and a, respectively.
If we wish, we can change the value in a box at any time; since the values can vary, s and a are called variable names, or simply variables. Thus a variable is a name associated with a particular memory location or, if you wish, it is a label for the memory location. We can speak of giving a variable a value, or setting a variable to a specific value, 1, say. Important points to remember are:
• a box can hold only one value at a time; if we put in a new value, the old one is lost;
• we must not assume that a box contains any value unless we specifically store a value in the box. In particular, we must not assume that the box contains 0.
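A minimal sketch in Java, assuming nothing beyond what has been said so far (the variable names s and a anticipate the square example), makes both points concrete:

```java
public class BoxDemo {
    public static void main(String[] args) {
        int s = 3;                 // the box named s now holds the value 3
        s = 7;                     // storing 7 discards the old value; the box holds only 7
        System.out.println(s);     // prints 7

        int a;                     // a box has been named, but nothing stored in it yet
        // System.out.println(a);  // not allowed: Java refuses to read a box that was never given a value
        a = s * s;                 // now a holds 49
        System.out.println(a);     // prints 49
    }
}
```

Note that Java enforces the second point for local variables at compile time: it will not let you read a variable that has never been assigned a value.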
Variables are a common feature of computer programs. It is very difficult to imagine what programming would be like without them. In everyday life, we often use variables. For example, we speak of an ‘address’. Here, ‘address’ is a variable whose value depends on the person under consideration. Other common variables are telephone number, name of school, subject, size of population, type of car, television model, etc. (What are some possible values of these variables?)
Now that we know a little bit about variables, we are ready to develop the algorithm for calculating the area of a square.
Develop the algorithm (continued)
Using the notion of an algorithm and the concept of a variable, we develop the following algorithm for calculating the area of a square, given one side:
Algorithm for calculating area of square, given one side
(1) Ask the user for the length of a side
(2) Store the value in the box s
(3) Calculate the area of the square (s x s)
(4) Store the area in the box a
(5) Print the value in box a, appropriately labelled
When an algorithm is developed, it must be checked to make sure that it is doing its intended job correctly. We can test an algorithm by ‘playing computer’, that is, we execute the instructions by hand, using appropriate data values. This process is called dry running or desk checking the algorithm. It is used to pinpoint any errors in logic before the computer program is actually written. We should never start to write programming code unless we are confident that the algorithm is correct. Here, the algorithm is fairly simple, so it is easy to check that it is indeed correct.
In the next article, Java Example: Algorithm and Program For Area Of Square, we show how to write the program to implement this algorithm.
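Purely as a preview of what that program might look like (the class name SquareArea and the use of Scanner for keyboard input are choices of this sketch, not necessarily what the next article uses), the five steps translate roughly as follows:

```java
import java.util.Scanner;

public class SquareArea {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);              // lets us read what the user types
        System.out.print("Enter the length of a side: "); // step 1: ask the user for a side
        double s = in.nextDouble();                       // step 2: store the value in the box s
        double a = s * s;                                 // steps 3 and 4: calculate the area and store it in a
        System.out.println("The area of the square is " + a);  // step 5: print, appropriately labelled
    }
}
```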
Introduction to Java Programming
- Introduction to Java Programming - An Overview
- Java - Data, Variable and Algorithm Explained To A Beginner
- Java Example: Algorithm and Program For Area of Square
- Java Programming For Beginners - Test, Debug, Document, Maintain
- JDK Java Compiler: The Java Development Kit
- Java Programming For Beginners - How To Compile And Run Java Programs
- Data Types, Constants And Variables
- Java Programming For Beginners - Characters and printf
- Java Programming For Beginners - Part 9
- Java Programming For Beginners - Part 10
- Java Programming For Beginners - Part 11
- Java Programming For Beginners - Part 12
- Java Programming For Beginners - Part 13
- Java Programming For Beginners - Part 14
- Java Programming For Beginners - Integer Data Types
- Java Programming for Beginners - Part 16
- Java Integer Arithmetic For Beginners
- Java Programming For Beginners - Part 18
- Java Programming For Beginners - Part 19
- Java double to int and Other Conversions
Welcome to Mars.
Since the textbook coverage is quite comprehensive, I will focus these notes on a few aspects of particular interest, some of which are only superficially discussed in the text. I will describe the interesting history of our understanding of Mars; the discoveries made by the space probes visiting Mars; and the question of whether there is life on Mars, with the last of these topics presented in a separate section of the notes.
Let us remind ourselves of our expectations. Mars is smaller than the Earth and Venus, so has weaker gravity; but on the other hand, it is farther from the sun, so somewhat cooler. Thus you might expect Mars to be able to hang onto an atmosphere, although probably a thin one. Moreover, you might expect it to be geologically intermediate in activity between Mercury (now essentially inert) and the Earth (very active). Are these expectations borne out? We will see.
Historical Perspectives. The text describes the problem of studying Mars from the Earth -- it is small, and even at its closest it shows very few details. An extra problem is introduced by the fact that the orbit of Mars is distinctly eccentric. This means that there are times when it is overhead at midnight -- that is, there are times at which the Earth lies directly between Mars and the sun -- but still so far away from us that it cannot be studied in any real detail, even with a powerful telescope. (The top-left part of the figure on page 202 of the text will help you understand this.) On the other hand, there can be favourable oppositions during which Mars is overhead at night and relatively close, so that it is a better target; this occurred in 1988, for example. Despite these difficulties, early observations revealed unambiguously the existence of white polar caps which change in size as the seasons progress. (The Martian year is not quite twice the length of our own year.) Moreover, there are dark regions on the face of the planet which come and go according to the seasons. This seems clear evidence for the existence of an atmosphere (since the polar caps presumably melt in the summer season and reappear as the winter returns, with at least some of the frozen material appearing in the form of a vapour). The dark patches, however, were a bit of a mystery. Early speculation included the notion that these regions were great areas of vegetation, like huge grasslands or forests, proof of life on Mars. As it happens, we now realize that the dark patches are simply areas of dark rock which can be exposed or covered by the wind-blown dust on Mars. As the text explains, windstorms on Mars come and go in a seasonal way which explains the regularity in these changes. In fact, enormous windstorms on Mars have long been known. Sometimes the entire face of the planet is obscured by dust which has been whipped into the air by the wind. Even at the very best, however, Mars is a small object upon which one can scarcely hope to see any details of consequence. The very largest mountains imaginable, for instance, would not be easily seen over such a great distance, and hopes of the direct study of the surface, and the detection of any signs of life, were not great a century ago. But then along came Percival Lowell.
The Legacy of Percival Lowell. We have already met Percival Lowell, in the context of the search for Pluto. Lowell was fascinated to learn that an Italian astronomer named Schiaparelli had reported the discovery of `canali' on Mars. As it happens, the Italian word `canali' means nothing more than ``channels'', a word which could imply something like the English Channel -- a geographical feature of natural origin. Understandably, however, the word was mis-translated into the English word `canals', which seems to imply artifacts, features built by intelligent creatures. The impact of this apparent discovery was heightened by the state of human technology at the time. A century or more ago, canals were seen as almost the greatest imaginable engineering achievements of mankind -- the Suez canal, the Panama canal, the canal systems in England and New England. It is little exaggeration to say that the discovery of canals on Mars had an impact which might only be equalled today by the unexpected discovery of space centers (like the Kennedy Center in Florida) on Mars. People got very excited to learn that the supposed Martians were `just as technologically advanced as us,' apparently able to build great networks of canals. Prompted by this, Percival Lowell became single-minded about carrying out further observations. As I have already noted in my discussion of the search for Pluto, at this stage he made one undeniably important contribution to astronomy. He recognized, for the first time, the need for exceptionally fine astronomical images, and selected a site for an observatory on the basis of the quality of the seeing and transparency of the atmosphere. So was built the Lowell Observatory near Flagstaff, Arizona. The establishment of this new observatory led to a scientific impasse. When Lowell was able to `see' canals on Mars that others could not, he simply shrugged off the negative results with the explanation that their telescopes were not so well placed as his and that they simply could not resolve the subtle details because of the blurriness imposed by the Earth's atmosphere. (He also often commented on the imperfect skills of the other observers). Lowell drew maps of the network of lines which he perceived on Mars. As the years passed, these maps became unbelievably intricate, with a fantastic tracery of canals, lines, and oases. Then, in trying to explain the origin of the canals, Lowell conjured up a remarkable and evocative image of a race of creatures who were desperately struggling to survive on a dying planet. According to him, Mars was drying up -- hence its red appearance, with sandy desert-like conditions prevailing almost everywhere -- and the Martians were using the canals to transport water from the polar caps to the arid equatorial regions. (If you think about it, you will see a parallel to the canals which are used to transport water to Los Angeles and environs from much farther north. The idea itself is not implausible.) As we will see in a bit, the space probes which were sent to Mars indeed revealed the presence of some very interesting geological findings -- a huge canyon much larger than the Grand Canyon, enormous extinct volcanoes, what appear to be ancient river beds, and so on. Just so there is no later ambiguity, let me emphasise that these features have essentially nothing in common with what Lowell imagined he was seeing. Essentially all of the features in his maps -- oases, canals, and all -- were in the eye of the beholder. Needless to say, this is a sad legacy.
By the way, Lowell's claims about the canals on Mars were hotly contested by many, although not all, of the astronomers and physicists of his day. There were excellent reasons for doubting the reality of the canals: first of all, from the Earth you would not even be able to see a canal on Mars if it were comparable in size to the Suez or Panama Canal. Lowell got around this criticism by claiming that the lines he saw on the surface were in fact broad bands of irrigated vegetation, stretching for many tens of kilometers to each side of the canals. (We see something like this along the Nile, so it is a reasonable defence.) secondly, it was known already that the atmosphere of Mars was very thin, and because of the low Martian gravity one can easily show that the water in any open body -- a lake, a river, a canal -- would quickly evaporate off into the thin air. This is an almost insurmountable objection which Lowell, in effect, merely shrugged off. In the end, Lowell's self-advertisement and popular appeal persuaded the vast majority of lay people, partly because of the excitement he whipped up as a public speaker, even if most scientists were dubious. Of course, the acid test is in what we find when we finally visit Mars, a development which lay many decades in the future. (Lowell did not live to see the development of space flight.)
War of the Worlds: Moral Questions. Given Percival Lowell's widespread and evocative descriptions of a race of creatures on Mars struggling bravely to survive the extremes of a hostile and worsening climate, it is perhaps not surprising that people often think first of Mars when visualizing extraterrestrials - we speak, almost without active thought, of `creatures from Mars.' Lowell's treatment of them was sympathetic, or at least benign; but a strikingly different development came in the form of a novel by H.G. Wells, a British historian and science fiction writer. His story, `War of the Worlds', imagined the Martians looking on the Earth with envy as a lush abode compared to the dying surface of Mars. They actually invade the Earth, and turn out to be loathsome octopus-like blood-drinking creatures of the most repellent kind. It was for dramatic effect, of course, that Wells chose to depict the invading Martians in the most repulsive way he could. If we were to make contact with a race of extraterrestrial creatures which look like puppy dogs or baby kittens, that would provoke quite a different initial response from us than we would feel upon meeting a race of creatures which looked, say, like giant tarantulas or scorpions. (I hope, however, that we would eventually get over these initial atavistic responses, at least intellectually.) By the way, a scientific purist might point out that Wells made his story a little less plausible in describing the invading Martians as large rounded creatures as bulky as bears. (Probably he did not know the scientific arguments, or perhaps he considered that the impact provided by his description warranted his taking some liberties.) The point is again one of simple scaling arguments: in the reduced gravity of Mars, creatures which are about the size of us (or a bear) would need little structural rigidity to support themselves, and would likely have evolved to be rather spindly and long-limbed instead of bulky and rounded, although there is nothing that forbids a more bulky form. Still, that is a quibble in what is in fact a very powerful story. The Martians are eventually defeated, after all the resources of mankind fail, by disease, pure and simple: they have no resistance to the strains of bacteria found on the Earth, and die. Historically, you can see a parallel to the decimation of the indigenous people of North America who succumbed to diseases like measles and smallpox after the arrival of the Europeans. H.G. Wells' story, and the real-life North American tragedy, both point to a profound moral. We must be likewise attentive to the possibility that any samples of Martian soil returned to Earth might contain some micro-organism which could devastate life on our planet. (A modern reworking of this theme is to be found in the early novel `The Andromeda Strain' by Michael Crichton, the author of 'Jurassic Park.') This is not a danger to be taken lightly when future missions to Mars are planned. There are even speculations, some by the British astronomer Fred Hoyle, that the Earth may occasionally be bombarded by `viruses from space,' perhaps carried in the tails of comets. Hoyle, who is not a biologist or epidemiologist, has tried to argue that the way in which flu epidemics start and persist can best be understood if the Earth's surface is randomly sprinkled all at once with a lot of viruses introduced in this way. His arguments are not taken very seriously by the experts in the field, however.
You may know that considerations of contamination actually influenced part of the American space program. The Apollo 11 astronauts, the first to walk on the moon, were put into a strict quarantine following their return to Earth, just in case they had carried back some dangerous infection. Of course, that might have been a case of ``too little too late,'' in the sense that they had already splashed down into the ocean on their return. Could NASA really have bottled up any dangerous microbes in such circumstances, or would they have inevitably escaped into the biosphere? In any event, I don't know just how seriously the threat was considered, given that we believe the airless moon to be so very sterile. The quarantining of the astronauts was mostly a public-relations exercise. It is, however, just as important to recognize that the argument works in both directions. We will face a moral question of real importance in the not-too-distant future. When we finally go to Mars, we will almost surely eventually infect it with our own micro-organisms. The Viking landers, discussed later, were carefully sterilized before launch and while in space to prevent such an occurrence. But prolonged visits by living creatures like ourselves obviously make it impossible to guarantee the security of the indigenous biosphere of Mars. If there is indeed life on Mars which has sprung up completely independently, we face a very nice moral and ethical dilemma. However simple the life form may be, do we have the right to impose ourselves on it, and almost certainly destroy it in the process? Finally, you may know of the Hallowe'en trick played in 1938 by the actor Orson Welles (no relation to H.G. Wells): he broadcast the story of the invasion of the Earth by Martians over North American radio, and succeeded in persuading many listeners that the story was true. There was quite a lot of panic as a result.
Early Flybys of Mars.Considering the rich history of speculation about life on Mars, it had always seemed an important part of any investigation of Mars to test for its existence. But hopes were a little dashed when the first probes were sent past Mars in the 1960's: the images they sent back showed an apparently desolate planet, more like the moon than anything else. The probes that were sent were simply moving ballistically (coasting freely) as they passed Mars because they did not have rockets and fuel on board to permit them to go into orbit around the planet. (In the 1960's, there were no rockets powerful enough to launch a space probe big enough to carry all that extra mass all the way to Mars.) The probes were near Mars just long enough to take a series of photographs as they swept past; then the information was radioed back to Earth. By bad luck, the photographs showed regions of Mars which were geologically quite uninteresting. They looked very much like parts of the moon: barren, desert-like landscapes, with impact craters here and there on the surface. There was certainly no evidence for an elaborate system of canals (not that anyone seriously expected to see that), but it looked moreover as if the planet had been more or less geologically inactive for a very long time. The probes also revealed the discouraging news that the Martian atmosphere is very thin indeed - considerably less than one one-hundredth of the density of the Earth's. Earlier spectroscopic work from the Earth had correctly revealed the presence of carbon dioxide in the Martian atmosphere, but the working assumption had been that Mars had a lot of nitrogen as well, just as here on Earth - our atmosphere is eighty percent nitrogen - so that the two together would make up a moderately rich atmosphere. (The nitrogen would have been difficult to detect using Earth-based telescopes and equipment, so its presence or absence on Mars was not easily demonstrated.) But the probes revealed that there is no important nitrogen component in the Martian atmosphere, and that the air, almost pure carbon dioxide, is very thin. This seemed to further diminish the prospects for life.
The Orbiters and the Martian Moons. In the early 1970s, NASA managed to put Mariner 9 into orbit around Mars. (It was the first artificial satellite ever to circle another planet.) I told you in class an interesting anecdote about the preparation for this mission. The astronomer Carl Sagan had suggested that the probe be equipped with a bit more ``attitude-control gas'' (the gas which can be squirted out to make the satellite turn on its axis so that it can be made to face the right way for photographs to be obtained), but his suggestion had been turned down by NASA as a cost-saving measure. When Mariner 9 reached Mars, however, the planet was completely enshrouded in one of its big dust storms, so that nothing could be seen of the surface. NASA took the opportunity to photograph Deimos and Phobos, the tiny moons of Mars. By the time the dust had settled, Mariner might well have run out of the needed gas, since it gradually leaks out; indeed, the predictions were that it should have by that time. But by good fortune this did not happen. Still, it is a remarkable story because the saving in the cost of the gas was minuscule, relative to the cost of the whole project, yet it could have compromised the entire program.

There is an interesting sequel to this story, since the photographs taken of the moons settled a rather peculiar suggestion which had been made about their origin. The moons of Mars, Phobos and Deimos (Fear and Panic), are named after the two horses which pull the chariot of Mars, the god of war, in the mythological tales. They are so tiny -- mere kilometers across -- that they were not even discovered until the 1800's. But in the early 1960's, before the space probes were sent to Mars, a suggestion was made by Schklovskii, a Russian astrophysicist, that they might not be moons at all, but perhaps ancient space stations! His reasoning was as follows: observations from the Earth appeared to show that Phobos and Deimos were slowing in their orbits, and might eventually spiral down to crash on the surface of Mars. (It is now realized, by the way, that those observations were wrong, or at least wrongly interpreted.) Schklovskii knew, as you all should by now, that a small body orbiting under only the gravitational influence of a bigger one should not have a decaying orbit - it should orbit essentially unchangingly for aeons - so there must be some other force acting, if the interpretation is correct. The obvious candidate is air resistance. The Martian satellites are indeed fairly close to Mars, and might be moving through the tenuous outer parts of the Martian atmosphere (although the atmosphere turned out to be even more tenuous than Schklovskii thought). The air resistance a body feels depends on how big a cross-sectional area it has (which is why downhill skiers get into a crouch, to minimize their area as they knife into the wind). This resistance provides a force which will tend to slow the moving object, but a big solid chunk of rock like Phobos or Deimos has so much inertia (mass) that it should not be slowed perceptibly by the predicted tiny amount of air resistance. This seems to rule out that explanation. Schklovskii realised, however, that if the moons were much lighter, the same force would slow them more noticeably. But how could these orbiting bodies be much lighter than we thought? (Naturally enough, it had been assumed from the start that they were lumps of rock.) Schklovskii's imaginative idea was to suggest that they were hollow!
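The heart of Schklovskii's argument is a simple scaling relation: for a body of a given size, shape, and speed, the drag force from the thin upper atmosphere is fixed, so the deceleration it produces varies inversely with the body's mass. Here is a minimal sketch of that argument in Python; every number in it (atmospheric density, size, speed, rock density, hollowness factor) is an illustrative assumption, not a measured value for Phobos or Deimos.

```python
import math

def drag_deceleration(air_density, drag_coeff, radius, speed, mass):
    """Deceleration from air resistance: a = F/m = 0.5 * rho * Cd * A * v^2 / m."""
    area = math.pi * radius ** 2                      # cross-sectional area of a sphere
    force = 0.5 * air_density * drag_coeff * area * speed ** 2
    return force / mass

# Assumed, illustrative values for a Phobos-sized body skimming a very tenuous atmosphere.
rho = 1e-12      # kg/m^3, an assumed (tiny) upper-atmosphere density
Cd = 2.0         # drag coefficient for a blunt body
radius = 11e3    # m, roughly Phobos-sized
speed = 2.1e3    # m/s, roughly Phobos's orbital speed

rock_density = 2000.0                                         # kg/m^3, if the moon were solid rock
solid_mass = rock_density * (4 / 3) * math.pi * radius ** 3   # about 1e16 kg
hollow_mass = solid_mass / 1000.0                             # a thin artificial shell would be far lighter

print(drag_deceleration(rho, Cd, radius, speed, solid_mass))    # negligible deceleration
print(drag_deceleration(rho, Cd, radius, speed, hollow_mass))   # 1000 times larger, same force
```

The same drag force acting on a body one thousand times lighter produces a deceleration one thousand times larger, which is exactly the loophole Schklovskii exploited.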
Since no natural process could readily produce a ten-kilometer chunk of hollow rock, he suggested further that they were artificial, built as space stations by a Martian civilization seeking a last safe retreat to allow their species to survive as the planet itself died around them. In this way, we came full circle to the picture of Martians as bravely struggling to carry on in a hostile environment - a return to the style of Percival Lowell. But this too is not correct. The moons are merely moons. (Look at the pictures on page 239 of the text.)
The Martian Surface Unveiled: The First Discoveries. As the dust clouds settled, NASA turned their cameras back to Mars and away from its moons. Right away, some amazing pictures were acquired, pictures which demonstrated that the surface of Mars is very different from the sterile moon-like body which had been implied by the flyby photos. Among other things, Mariner 9 returned pictures which showed signs of recent, or perhaps continuing, geological activity. These features included: Olympus Mons, a great extinct volcano three times the height of the largest mountains on Earth. As I noted in class, Carl Sagan had already calculated, using simple scaling arguments and the known strengths of materials, that such a mountain could exist in the reduced gravity of Mars: and here it was. Other volcanic cones, not much smaller, were found as well. There is an enormous `rift valley', now called the Valles Marineris, several thousand kilometers in length. It is so big, in fact, that the whole Grand Canyon could easily sit in one of its side features. If transplanted to the Earth, Valles Marineris would cross the entire North American continent! It is reminiscent of the Rift Valley in East Africa, a geological feature which on Earth is caused by continental drift which is pulling one part of the crust away from another, opening the rift. Valles Marineris is apparently not geologically active now, but presumably has a similar origin. (The Grand Canyon, on the other hand, was sculpted by erosion: it is a river bed.)

Even more interesting was the discovery of what looked like sinuous (i.e. winding) river beds, and numerous other features which seemed to have been caused in geologically recent times by the flow of large amounts of water. This is not absolutely clearcut. There are, for instance, a few features rather like this on the moon, features which are interpreted as being caused by the flow of particularly runny, low-viscosity lavas. But the features on Mars are so wide-spread, and so consistent with the behaviour of water, that there seems little doubt about the interpretation. Moreover, there is detectable water vapour in the atmosphere of Mars, and there is certainly some water ice in at least one of the polar caps of Mars (although most of the material is frozen carbon dioxide). Remote study of the polar caps revealed that they consist both of frozen water and of frozen carbon dioxide (``dry ice''). Clearly, there is now at least some water on Mars, although perhaps only a tiny amount, and the topographic and geological evidence seems to suggest that in the past there were large amounts of it actually flowing on the surface.

Putting this all together, we deduce that at one time Mars had large amounts of liquid water, and a geology active enough to build huge volcanoes and to drive some incipient plate tectonics (hence the rift valley). The volcanoes in turn might have outgassed sufficient material to provide Mars with a thickish atmosphere, at least for a while (although it would eventually be lost to space, thanks to the planet's weak gravity). Such an atmosphere, rich in carbon dioxide, would have induced a mild greenhouse effect, pushing the temperatures on Mars up to some moderately warm level. In short, Mars might indeed have been a very pleasant place for some primitive life forms to come into existence and flourish. Given that, NASA's plans for the Viking landers due to reach Mars in 1976 were expanded to allow room for some simple experiments to search for life - a topic which we will explore in detail in the next section of the notes.
First, though, we consider the logistics of the missions which included plans to drop landers onto the Martian surface.
The Viking Landers.There were in fact two independent Viking landers, both of which were wonderfully successful. They were designed to reach the surface of Mars in stages, as follows: In each case, a moderately large capsule, mounted on top of a rocket, is launched towards Mars. On arrival, it is slowed in its trajectory (using retro-rockets - rockets that fire in the forward direction) so that it goes into orbit around the planet. Photographs and radar images of the surface are acquired by the orbiting capsule and studied so that NASA can refine its plans for the best landing site. There is, after all, no merit in dropping the lander into a field of big boulders if you can avoid it! This is not as simple as it sounds: from high orbit, it is very hard to study the ground in enough detail to make the decision clearcut. The limited resolution of the cameras on board, coupled with the fact that you are surveying the planet from many kilometers up, means that you would not, for instance, detect a boulder the size of a car - or a landing area strewn with boulders of such a size. Clearly, dropping the lander safely onto the surface requires some luck as well as good judgement. Next, a piece of the orbiter is detached and slowed (again with rockets) so that it starts a freefall towards the planet. As it enters the atmosphere, friction with the thin air slows it to an extent, but not nearly enough: it is, after all, falling from tens of kilometers up. For this reason, a large parachute is deployed to slow it to a safe landing speed. As the lander nears the ground, but while it is still fairly high, the parachute is ejected so that there is no danger that it will settle over and completely cover the machine when it finally comes to rest. (After all, the lander will need to see the surroundings, to get sunlight onto the solar panels for power, and to be able to aim its radio antenna to send news of its findings back to Earth.) But without the parachute the lander picks up speed again and is soon falling so fast that it might be smashed to pieces on impact. To avoid this fate, small rockets are turned on to slow the descent some more. This is an unwelcome (although enforced) solution, because the exhaust gases being squirted out might consist of organic compounds (if you are burning some simple petroleum fuel in the rocket engine) which will contaminate the soil beneath you and lead to confusing detections of substances in the soil. For this reason, the landers were provided with very special fuels, chosen to minimise any such contamination. Finally, to keep the surface soil as pristine as possible, the engines are shut off early so that the lander falls freely the rest of the way to the ground. The final impact of each lander is fairly vigorous, but is cushioned by shock-absorbing legs. And so they come to the surface of Mars. In the textbook, you will find reproductions of some of the first direct images sent back from the surface of Mars. Interesting measurements were made of the air pressure and temperature, and it was noted that the sky was not blue but pink in colour, thanks to the reddish dust thrown into the atmosphere by the winds. For me, however, the most interesting experiments carried out were the direct searches for evidence of life on Mars. In the lecture, I left you with the following thought. If you had to design a piece of equipment to search for life on Mars, what would it consist of? 
You might want to send along something like a powerful electron microscope, to search for microbes and viruses in a soil sample, but there is a crucial restriction. Whatever apparatus you send has to be housed in a box about one foot square (i.e. about the size of a recycling `blue box') and must not weigh a great deal if the Viking landers are going to carry it all the way to Mars. This is a real challenge, as you can imagine! In the next section, we will discover how this was accomplished, and learn about the interesting and somewhat perplexing findings which were made.
Psychologists do more than just wonder about human behavior: they conduct research to understand exactly why people think, feel, and behave the way they do. Like other scientists, psychologists use the scientific method, a standardized way to conduct research. A scientific approach is used in order to avoid bias or distortion of information. After collecting data, psychologists organize and analyze their observations, make inferences about the reliability and significance of their data, and develop testable hypotheses and theories.
Psychological research has an enormous impact on all facets of our lives, from how parents choose to discipline their children to how companies package and advertise their products to how governments choose to punish or rehabilitate criminals. Understanding how psychologists do research is vital to understanding psychology itself.
Scientists use the following terms to describe their research:
- Variables: the events, characteristics, behaviors, or conditions that researchers measure and study.
- Subject or Participant: an individual person or animal a researcher studies.
- Sample: a collection of subjects researchers study. Researchers use samples because they cannot study the entire population.
- Population: the collection of people or animals from which researchers draw a sample. Researchers study the sample and generalize their results to the population.
The Purpose of Research
Psychologists have three main goals when doing research:
- to find ways to measure and describe behavior.
- to understand why, when, and how events occur.
- to apply this knowledge to solving real-world problems.
The Scientific Method
Psychologists use the scientific method to conduct their research. The scientific method is a standardized way of making observations, gathering data, forming theories, testing predictions, and interpreting results.
Researchers make observations in order to describe and measure behavior. After observing certain events repeatedly, researchers come up with a theory that explains these observations. A theory is an explanation that organizes separate pieces of information in a coherent way. Researchers generally develop a theory only after they have collected a lot of evidence and made sure their research results can be reproduced by others.
A psychologist observes that some college sophomores date a lot, while others do not. He observes that some sophomores have blonde hair, while others have brown hair; he also observes that in most sophomore couples at least one person has brown hair. In addition, he notices that most of his brown-haired friends date regularly, but his blonde friends don’t date much at all. He explains these observations by theorizing that brown-haired sophomores are more likely to date than those that have blonde hair. Based on this theory he develops a hypothesis that more brown-haired sophomores than blonde-haired sophomores will make dates with people they meet at a party. He then conducts an experiment to test his hypothesis. In this experiment, he has twenty people go to a party, ten with blonde hair and ten with brown hair. He makes observations and gathers data by watching what happens at the party and counting how many people with each hair color actually make dates. If, contrary to his hypothesis, the blonde-haired people make more dates, he’ll have to think about why this occurred and revise his theory and hypothesis. If the data he collects from further experiments still do not support the hypothesis, he’ll have to reject this theory.
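If it helps to see the bookkeeping behind such an experiment, here is a minimal sketch of how the party data could be tallied; the records below are invented placeholders, not real observations.

```python
# Hypothetical records: one entry per participant (placeholders, not real data).
participants = [
    {"hair": "brown", "made_date": True},
    {"hair": "brown", "made_date": False},
    {"hair": "blonde", "made_date": True},
    # ... the remaining participants of each hair colour would be listed here ...
]

def date_rate(people, hair_colour):
    """Fraction of the given hair-colour group that made a date."""
    group = [p for p in people if p["hair"] == hair_colour]
    return sum(p["made_date"] for p in group) / len(group)

brown_rate = date_rate(participants, "brown")
blonde_rate = date_rate(participants, "blonde")

# The hypothesis predicts brown_rate > blonde_rate; if the data say otherwise,
# the researcher revises (or eventually rejects) the theory, as described above.
print(brown_rate, blonde_rate)
```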
Making Research Scientific
Psychological research , like research in other fields, must meet certain criteria in order to be considered scientific. Research must be:
Research Must be Replicable
Research is replicable when researchers can repeat it and get the...
The five practices to model from the article are as follows:
- Anticipate Student Responses: As a teacher, before introducing the activity in class, work through it yourself and push yourself to come up with the multiple ways it could be solved and the ways your students might solve it. Also consider misconceptions students might have and how those could influence their work. This allows you to be better prepared to address the answers students give.
- Monitor Students' Work & Engagement: Walk around the classroom and observe what students are saying and writing. Notice when students are nearly finished. Ask students about what they're doing and question them so that they think more deeply or consider different aspects.
- Select Particular Student Work to Share: Having monitored their work, you have a good idea of what each student is doing and can pick particular students to share, based on the ideas you want the class to discuss and consider, and perhaps to highlight different techniques used in solving.
- Specific Order to Student Sharing: Depending on what you want to highlight or address first, have the order students share in match this. Perhaps to help the flow of the discussion or to clear up misconceptions first.
- Connect Student Responses to One Another & to Mathematical Ideas: What's the takeaway of everything? What should students be learning from this? What can they learn from one another?
So 2-4 I won't be able to do, but I'll be applying and practicing 1 and 5 in relation to the problem of the Sticky Triangles. The problem shows triangles composed out of match sticks like the pictures below:
The problem asks students to explore the pattern that is occurring by finding how many matchsticks are in each, how to make the next triangle, and how to know how many matchsticks are needed to create a triangle of any size.
Anticipating Student Responses:
- All students will likely find the number of match sticks in the 3 triangles given: 3, 9, 18.
- For triangles at the next sizes, students would likely draw to figure it out, or, if they notice the pattern quickly, would calculate.
- For triangles of any size, students then will have to find an equation or way to describe what is happening.
- Might see the center of the second triangle as being the original first triangle. So it's 3 + 2 (3) where the first 3 is the matchsticks in the first triangle, 2 is the number of matchsticks at each extended corner and 3 for the three corners. But then the student would hopefully realize that the small triangle is not in the center of the third. So this does not work
- Might come up with a recursive formula. You are adding a new row of triangles to the bottom of the previous one. The number of triangles added to the bottom equals the size of the triangle, and each triangle uses 3 matchsticks. So: the previous total plus 3 times the size of the triangle (a small sketch checking this against the given values appears after this list).
- Might look at the numbers, not the pictures, to come up with a formula. From 3 to 9 is plus 6, from 9 to 18 is plus 9.
- Might look at the matchsticks around the edge and then the number in the center. So first is 3+0, second is 6+3, and third is 9+9. For the outside it is 3 times the size of the triangle. The inside is almost 3^s-1 where s is the size. Or again could use recursive.
- Might notice how the number of size 1 triangles in each size are changing.
- Likely some students will come up with formulas that apply to the first and second triangles but not beyond, and some students will not check to see whether their formula can predict further answers.
- From this activity, I would hope students would take away the multiple ways in which a pattern can be described within both words and equations to represent the situation. Students hopefully can learn from others and the different techniques their classmates used to solve the problem. Understand how we can represent a pattern with an equation, and how to use variables in this case (what do they represent, where can we see it). Highlight recursive formulas and how they are represented in an equation.
- The problem also at the end asks students to consider how their findings would change if we were composing squares in a similar manner. I might have students do this after the discussion or take home so that they can apply the ideas we talked about as a group and practice different ways they heard.
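To connect the responses above to the underlying mathematics (practice 5), here is a minimal sketch in Python that checks the recursive rule against a closed-form expression. The closed form 3s(s+1)/2 is my own consolidation of the reasoning above, not something given in the article, so treat it as one possible answer rather than the intended one.

```python
def matchsticks_recursive(size):
    """Build up the total using the recursive idea: each new bottom row of a
    size-s triangle adds s small triangles, each made of 3 matchsticks."""
    total = 0
    for s in range(1, size + 1):
        total += 3 * s
    return total

def matchsticks_closed_form(size):
    """One possible closed form consistent with the pattern: 3*s*(s+1)/2."""
    return 3 * size * (size + 1) // 2

for s in range(1, 6):
    assert matchsticks_recursive(s) == matchsticks_closed_form(s)
    print(s, matchsticks_recursive(s))   # prints 3, 9, 18, 30, 45, matching the given 3, 9, 18
```

Checking the formula against the given values 3, 9, 18 (and beyond) also addresses the anticipated issue of students who do not test whether their formula predicts further answers.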
After looking at the problem in this way, I feel it really pushes me to consider and see what really can be taken away from a problem. It allowed me to think about different ideas, some of which I might not have thought of, and maybe students wouldn't either. It also shows whether a problem has multiple viewpoints, and whether there would be anything to discuss about the problem. If a problem were fairly simple, using the five practices would reveal this. You likely would not be able to think up many different student responses. It gets you to consider your students and how they might think. I don't have students, but if I did I can see how you'd be able to think back to work they've done and how ideas they have may influence their techniques in the given situation. It also gets you to see what things should be discussed. If the problem is very computational, you can focus on the different ways students think through the computations, what order they do things in, and how they can check their work. You can get at why students are doing what they are doing, because you'll have an idea beforehand and can prepare appropriate questions.
The five practices really get you to focus in on what you are having students do, what they are doing, and why it matters. You can better plan and be better prepared.
The context: Some of the worst wildfires in decades have been burning across Australia in recent months, exacerbated by hot, dry, and windy conditions and rising global temperatures. Almost 15 million acres of land have burned so far, compared to two million acres in California in 2018. But to get a visual sense of the sheer scale of the fires, it’s worth looking at them from space. This NASA image, taken on Saturday, shows smoke billowing from the country’s east coast.
How are they used? Satellites are often the first tool to detect wildfires in remote areas, capturing how actively they are burning, their precise location, and the direction of the resulting smoke using cameras, LIDAR, and infrared sensors. NASA has a fleet of satellites it uses to observe Earth, with equipment in geostationary orbit providing imagery every five to 15 minutes. This data is sent to fire management authorities worldwide, and used both for operations and for mapping the scale and type of the damage once the fires have burned out.
See for yourself: This image of the fires shows the areas that have been affected since December. It was created by photographer Anthony Hearsey using data from NASA’s Fire Information for Resource Management System (FIRMS), covering the month up to January 5 2020. The FIRMS system sends data to those who need it within three hours of it having been captured by two NASA satellites.
More bad news: There’s been some temporary respite in southeast Australia over the weekend, thanks to rain and cooler temperatures. However, that brings its own hazards in the form of dangerous levels of smoke, and the weather is likely to warm up again towards the end of the week.
We puny humans think we can accelerate particles? Look how proud we are of the Large Hadron Collider. But any particle accelerator we build will pale in comparison to Quasars, nature’s champion accelerators.
Those things are beasts.
The first confirmed discovery of a planet beyond our Solar System (aka. an Extrasolar Planet) was a groundbreaking event. And while the initial discoveries were made using only ground-based observatories, and were therefore few and far between, the study of exoplanets has grown considerably with the deployment of space-based telescopes like the Kepler space telescope.
As of February 1st, 2018, 3,728 planets have been confirmed in 2,794 systems, with 622 systems having more than one planet. But now, thanks to a new study by a team of astrophysicists from the University of Oklahoma, the first planets beyond our galaxy have been discovered! Using a technique predicted by Einstein’s Theory of General Relativity, this team found evidence of planets in a galaxy roughly 3.8 billion light years away.
The study which details their discovery, titled “Probing Planets in Extragalactic Galaxies Using Quasar Microlensing“, recently appeared in The Astrophysical Journal Letters. The study was conducted by Xinyu Dai and Eduardo Guerras, a professor and postdoctoral researcher, respectively, from the Homer L. Dodge Department of Physics and Astronomy at the University of Oklahoma.
For the sake of their study, the pair used the Gravitational Microlensing technique, which relies on the gravitational force of a foreground object to bend and focus the light coming from a background source. When a planet accompanies the foreground lens, it adds a brief, measurable deviation to the magnified light, which can then be used to determine the presence of a planet.
In this respect, Gravitational Microlensing is a scaled-down version of Gravitational Lensing, where an intervening object (like a galaxy cluster) is used to focus light coming from a galaxy or other large object located beyond it. It also incorporates a key element of the highly-effective Transit Method, where stars are monitored for dips in brightness to indicate the presence of an exoplanet.
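For a sense of why an intervening mass produces a measurable signal at all, here is a minimal sketch of the standard point-source, point-lens magnification formula, A(u) = (u² + 2) / (u√(u² + 4)), where u is the source-lens separation in Einstein radii. This is only the textbook single-lens case, not the quasar-microlensing model the team actually computed, and the separations below are arbitrary.

```python
import math

def magnification(u):
    """Point-source, point-lens microlensing magnification as a function of the
    lens-source separation u (in units of the Einstein radius)."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

# As the separation shrinks, the background source brightens sharply.
for u in (2.0, 1.0, 0.5, 0.1):
    print(f"u = {u:4.1f}  ->  A = {magnification(u):6.2f}")
# u = 1.0 gives A of about 1.34; u = 0.1 gives A of about 10.
```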
In addition to this method, which is the only one capable of detecting extra-solar planets at truly great distances (on the order of billions of light years), the team also used data from NASA’s Chandra X-ray Observatory to study a distant quasar known as RX J1131–1231. Specifically, the team relied on the microlensing properties of the supermassive black hole (SMBH) located at the center of RX J1131–1231.
They also relied on the OU Supercomputing Center for Education and Research to calculate the microlensing models they employed. From this, they observed line energy shifts that could only be explained by the presence of about 2000 unbound planets – ranging in mass from the Moon to Jupiter – per main-sequence star in the lensing galaxy.
As Xinyu Dai explained in a recent University of Oklahoma press release:
“We are very excited about this discovery. This is the first time anyone has discovered planets outside our galaxy. These small planets are the best candidate for the signature we observed in this study using the microlensing technique. We analyzed the high frequency of the signature by modeling the data to determine the mass.”
While 53 planets have been discovered within the Milky Way galaxy using the Microlensing technique, this is the first time that planets have been observed in other galaxies. Much like the first confirmed discovery of an extra-solar planet, scientists were not even certain planets existed in other galaxies prior to this study. This discovery has therefore brought the study of planets beyond our Solar System to a whole new level!
And as Eduardo Guerras indicated, the discovery was possible thanks to improvements made in both modelling and instrumentation in recent years:
“This is an example of how powerful the techniques of analysis of extragalactic microlensing can be. This galaxy is located 3.8 billion light years away, and there is not the slightest chance of observing these planets directly, not even with the best telescope one can imagine in a science fiction scenario. However, we are able to study them, unveil their presence and even have an idea of their masses. This is very cool science.”
In the coming years, more sophisticated observatories will be available, which will allow for even more in the way of discoveries. These include space-based instruments like the James Webb Space Telescope (which is scheduled to launch in Spring of 2019) and ground-based observatories like the ESO’s OverWhelmingly Large (OWL) Telescope, the Very Large Telescope (VLT), the Extremely Large Telescope (ELT), and the Colossus Telescope.
At this juncture, the odds are good that some of these discoveries will be in neighboring galaxies. Perhaps then we can begin to determine just how common planets are in our Universe. At present, it is estimated that there could be as many as 100 billion planets in the Milky Way Galaxy alone! But with an estimated 1 to 2 trillion galaxies in the Universe… well, you do the math!
It is a well known fact among astronomers and cosmologists that the farther into the Universe you look, the further back in time you are seeing. And the closer astronomers are able to see to the Big Bang, which took place 13.8 billion years ago, the more interesting the discoveries tend to become. It is these finds that teach us the most about the earliest periods of the Universe and its subsequent evolution.
For instance, scientists using the Wide-field Infrared Survey Explorer (WISE) and the Magellan Telescopes recently observed the earliest Supermassive Black Hole (SMBH) to date. According to the discovery team’s study, this black hole is roughly 800 million times the mass of our Sun and is located more than 13 billion light years from Earth. This makes it the most distant, and youngest, SMBH observed to date.
The study, titled “An 800-million-solar-mass black hole in a significantly neutral Universe at a redshift of 7.5“, recently appeared in the journal Nature. Led by Eduardo Bañados, a researcher from the Carnegie Institution for Science, the team included members from NASA’s Jet Propulsion Laboratory, the Max Planck Institute for Astronomy, the Kavli Institute for Astronomy and Astrophysics, the Las Cumbres Observatory, and multiple universities.
As with other SMBHs, this particular discovery (designated J1342+0928) is a quasar, a class of super bright objects that consist of a black hole accreting matter at the center of a massive galaxy. The object was discovered during the course of a survey for distant objects, which combined infrared data from the WISE mission with ground-based surveys. The team then followed up with data from the Carnegie Observatory’s Magellan telescopes in Chile.
As with all distant cosmological objects, J1342+0928’s distance was determined by measuring its redshift. By measuring how much the wavelength of an object’s light is stretched by the expansion of the Universe before it reaches Earth, astronomers are able to determine how far it had to travel to get here. In this case, the quasar had a redshift of 7.54, which means that it took more than 13 billion years for its light to reach us.
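As a worked example of what a redshift of 7.54 means in practice, here is a small sketch converting a rest-frame wavelength to the wavelength we observe; the choice of the hydrogen Lyman-alpha line is mine, purely for illustration, and is not how the paper quotes its measurement.

```python
# lambda_observed = (1 + z) * lambda_emitted
z = 7.54
lyman_alpha_rest_nm = 121.567      # rest wavelength of hydrogen Lyman-alpha, in nanometres

observed_nm = (1 + z) * lyman_alpha_rest_nm
print(f"Light emitted at {lyman_alpha_rest_nm:.1f} nm is observed near {observed_nm:.0f} nm")
# About 1038 nm: ultraviolet light stretched into the infrared by cosmic expansion.
```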
As Xiaohui Fan of the University of Arizona’s Steward Observatory (and a co-author on the study) explained in a Carnegie press release:
“This great distance makes such objects extremely faint when viewed from Earth. Early quasars are also very rare on the sky. Only one quasar was known to exist at a redshift greater than seven before now, despite extensive searching.”
Given its age and mass, the discovery of this quasar was quite the surprise for the study team. As Daniel Stern, an astrophysicist at NASA’s Jet Propulsion Laboratory and a co-author on the study, indicated in a NASA press release, “This black hole grew far larger than we expected in only 690 million years after the Big Bang, which challenges our theories about how black holes form.”
Essentially, this quasar existed at a time when the Universe was just beginning to emerge from what cosmologists call the “Dark Ages”. During this period, which lasted from roughly 380,000 years to 150 million years after the Big Bang, most of the photons in the Universe were interacting with electrons and protons. As a result, the radiation of this period is undetectable by our current instruments – hence the name.
The Universe remained in this state, without any luminous sources, until gravity condensed matter into the first stars and galaxies. This period is known as the “Reionization Epoch”, which lasted from 150 million to 1 billion years after the Big Bang and was characterized by the first stars, galaxies and quasars forming. It is so-named because the energy released by these ancient galaxies caused the neutral hydrogen of the Universe to get excited and ionize.
Once the Universe became reionized, photons could travel freely throughout space and the Universe officially became transparent to light. This is what makes the discovery of this quasar so interesting. As the team observed, much of the hydrogen surrounding it is neutral, which means it is not only the most distant quasar ever observed, but also the only example of a quasar that existed before the Universe became reionized.
In other words, J1342+0928 existed during a major transition period for the Universe, which happens to be one of the current frontiers of astrophysics. As if this wasn’t enough, the team was also confounded by the object’s mass. For a black hole to have become so massive during this early period of the Universe, there would have to be special conditions to allow for such rapid growth.
What these conditions are, however, remains a mystery. Whatever the case may be, this newly-found SMBH appears to be consuming matter at the center of a galaxy at an astounding rate. And while its discovery has raised many questions, it is anticipated that the deployment of future telescopes will reveal more about this quasar and its cosmological period. As Stern said:
“With several next-generation, even-more-sensitive facilities currently being built, we can expect many exciting discoveries in the very early universe in the coming years.”
These next-generation missions include the European Space Agency’s Euclid mission and NASA’s Wide-field Infrared Survey Telescope (WFIRST). Whereas Euclid will study objects located 10 billion years in the past in order to measure how dark energy influenced cosmic evolution, WFIRST will perform wide-field near-infrared surveys to measure the light coming from a billion galaxies.
Both missions are expected to reveal more objects like J1342+0928. At present, scientists predict that there are only 20 to 100 quasars as bright and as distant as J1342+0928 in the sky. As such, they were most pleased with this discovery, which is expected to provide us with fundamental information about the Universe when it was only 5% of its current age.
Ever since the discovery of Sagittarius A* at the center of our galaxy, astronomers have come to understand that most massive galaxies have a Supermassive Black Hole (SMBH) at their core. These are evidenced by the powerful electromagnetic emissions produced at the nuclei of these galaxies – which are known as “Active Galactic Nuclei” (AGN) – that are believed to be caused by gas and dust accreting onto the SMBH.
For decades, astronomers have been studying the light coming from AGNs to determine how large and massive their black holes are. This has been difficult, since this light is subject to the Doppler effect, which causes its spectral lines to broaden. But thanks to a new model developed by researchers from China and the US, astronomers may be able to study these Broad Line Regions (BLRs) and make more accurate estimates about the mass of black holes.
The study, “Tidally disrupted dusty clumps as the origin of broad emission lines in active galactic nuclei“, recently appeared in the scientific journal Nature. The study was led by Jian-Min Wang, a researcher from the Institute of High Energy Physics (IHEP) at the Chinese Academy of Sciences, with assistance from the University of Wyoming and the University of Nanjing.
To break it down, SMBHs are known for having a torus of gas and dust that surrounds them. The black hole’s gravity accelerates gas in this torus to velocities of thousands of kilometers per second, which causes it to heat up and emit radiation at different wavelengths. This energy can eventually outshine the entire surrounding galaxy, which is what allows astronomers to determine the presence of an SMBH.
As Michael Brotherton, a UW professor in the Department of Physics and Astronomy and a co-author on the study, explained in a UW press release:
“People think, ‘It’s a black hole. Why is it so bright?’ A black hole is still dark. The discs reach such high temperatures that they put out radiation across the electromagnetic spectrum, which includes gamma rays, X-rays, UV, infrared and radio waves. The black hole and surrounding accreting gas the black hole is feeding on is fuel that turns on the quasar.”
The problem with observing these bright regions comes from the fact that the gases within them are moving so quickly in different directions. Whereas gas moving away (relative to us) is shifted towards the red end of the spectrum, gas that is moving towards us is shifted towards the blue end. This is what leads to a Broad Line Region, where the emission lines become broadened and smeared together, making accurate readings difficult to obtain.
Currently, the measurement of the mass of SMBHs in active galactic nuclei relies on the “reverberation mapping technique”. In short, this involves using computer models to examine the symmetrical spectral lines of a BLR and measuring the time delays between them. These lines are believed to arise from gas that has been photoionized by the gravitational force of the SMBH.
However, since there is little understanding of broad emission lines and the different components of BLRs, this method gives rise to uncertainties of between 200 and 300%. “We are trying to get at more detailed questions about spectral broad-line regions that help us diagnose the black hole mass,” said Brotherton. “People don’t know where these broad emission line regions come from or the nature of this gas.”
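For context on where such mass estimates come from, here is a minimal sketch of the textbook virial estimate that underlies reverberation mapping: the BLR radius follows from the light-echo delay, R = cτ, and the mass from M = f R (Δv)² / G. The delay, velocity, and virial factor below are assumed illustrative values, not numbers from this study.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

tau_days = 30.0      # assumed delay between continuum and emission-line variations
delta_v = 3.0e6      # assumed line-of-sight gas velocity from the line width (3000 km/s)
f = 1.0              # virial factor; its poorly known value drives much of the quoted 200-300% uncertainty

R_blr = c * tau_days * 86400.0          # light-echo radius of the broad line region, in metres
M_bh = f * R_blr * delta_v ** 2 / G     # virial black hole mass, in kilograms

print(f"BLR radius ~ {R_blr:.2e} m, black hole mass ~ {M_bh / M_sun:.2e} solar masses")
```

With these assumed inputs the estimate comes out at a few tens of millions of solar masses, which is the right general scale for the SMBHs discussed here.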
In contrast, the team led by Dr. Wang adopted a new type of computer model that considered the dynamics of the gas torus surrounding a SMBH. This torus, they assume, would be made up of discrete clumps of matter that would be tidally disrupted by the black hole, resulting in some gas flowing into it (aka. accreting on it) and some being ejected as outflow.
From this, they found that the emission lines in a BLR are subject to three characteristics – “asymmetry”, “shape” and “shift”. After examining various emission lines – both symmetrical and asymmetrical – they found that these three characteristics were strongly dependent on how bright the gas clumps were, which they interpreted as being a result of the angle of their motion within the torus. Or as Dr. Brotherton put it:
“What we propose happens is these dusty clumps are moving. Some bang into each other and merge, and change velocity. Maybe they move into the quasar, where the black hole lives. Some of the clumps spin in from the broad-line region. Some get kicked out.”
In the end, their new model suggests that tidally disrupted clumps of matter from a black hole torus may represent the source of the BLR gas. Compared to previous models, the one devised by Dr. Wang and his colleagues establishes a connection between different key processes and components in the vicinity of a SMBH. These include the feeding of the black hole, the source of photoionized gas, and the dusty torus itself.
While this research does not resolve all the mysteries surrounding AGNs, it is an important step towards obtaining accurate mass estimates of SMBHs based on their spectral lines. From these, astronomers should be able to more accurately determine what role these black holes played in the evolution of large galaxies.
The study was made possible thanks to support provided by the National Key Program for Science and Technology Research and Development, and the Key Research Program of Frontier Sciences, both of which are administered by the Chinese Academy of Sciences.
Back in November, a team of researchers from the Swinburne University of Technology and the University of Cambridge published some very interesting findings about a galaxy located about 8 billion light years away. Using the Very Large Telescope (VLT) at ESO's Paranal Observatory, they examined the light coming from the supermassive black hole (SMBH) at its center.
In so doing, they were able to determine that the electromagnetic energy coming from this distant galaxy was the same as what we observe here in the Milky Way. This showed that a fundamental force of the Universe (electromagnetism) is constant over time. And on Monday, Dec. 4th, the ESO followed-up on this historic find by releasing the color spectrum readings of this distant galaxy – known as HE 0940-1050.
To recap, most large galaxies in the Universe have SMBHs at their center. These huge black holes are known for consuming the matter that orbits all around them, expelling tremendous amounts of radio, microwave, infrared, optical, ultra-violet (UV), X-ray and gamma ray energy in the process. Because of this, they are some of the brightest objects in the known Universe, and are visible even from billions of light years away.
But because of their distance, the energy which they emit has to pass through the intergalactic medium, where it comes into contact with incredible amount of matter. While most of this consists of hydrogen and helium, there are trace amounts of other elements as well. These absorb much of the light that travels between distant galaxies and us, and the absorption lines this creates can tell us of lot about the kinds of elements that are out there.
At the same time, studying the absorption lines produced by light passing through space can tell us how much light was removed from the original quasar spectrum. Using the Ultraviolet and Visual Echelle Spectrograph (UVES) instrument aboard the VLT, the Swinburne and Cambridge team were able to do just that, thus sneaking a peak at the “fingerprints of the early Universe“.
What they found was that the energy coming from HE 0940-1050 was very similar to that observed in the Milky Way galaxy. Basically, they obtained evidence that the strength of electromagnetism is consistent over time, something which had previously been an open question for scientists. As they state in their study, which was published in the Monthly Notices of the Royal Astronomical Society:
“The Standard Model of particle physics is incomplete because it cannot explain the values of fundamental constants, or predict their dependence on parameters such as time and space. Therefore, without a theory that is able to properly explain these numbers, their constancy can only be probed by measuring them in different places, times and conditions. Furthermore, many theories which attempt to unify gravity with the other three forces of nature invoke fundamental constants that are varying.“
Since it is 8 billion light years away and has a strong intervening metal-absorption-line system, probing the electromagnetic spectrum put out by HE 0940-1050's central quasar – together with the ability to correct for all the light that was absorbed by the intervening intergalactic medium – provided a unique opportunity to precisely measure how this fundamental force can vary over a very long period of time.
On top of that, the spectral information they obtained happened to be of the highest quality ever observed from a quasar. As they further indicated in their study:
“The largest systematic error in all (but one) previous similar measurements, including the large samples, was long-range distortions in the wavelength calibration. These would add a ≈2 ppm systematic error to our measurement and up to ≈10 ppm to other measurements using Mg and Fe transitions.”
However, the team corrected for this by comparing the UVES spectra to well-calibrated spectra obtained from the High Accuracy Radial velocity Planet Searcher (HARPS) – which is located at ESO's La Silla Observatory. By combining these readings, they were left with a residual systematic uncertainty of just 0.59 ppm, the lowest margin of error from any spectrographic survey to date.
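To make the parts-per-million language concrete, here is a tiny sketch of how a fractional shift in the fine-structure constant is expressed in ppm; the 0.3 ppm offset used below is an arbitrary illustrative number, not a result from the paper.

```python
alpha_lab = 7.2973525693e-3                  # laboratory value of the fine-structure constant
alpha_absorber = alpha_lab * (1 + 0.3e-6)    # an assumed 0.3 ppm offset, purely for illustration

delta_ppm = (alpha_absorber - alpha_lab) / alpha_lab * 1e6
print(f"delta alpha / alpha = {delta_ppm:.2f} ppm")
# A shift this small would sit comfortably inside a 0.59 ppm residual systematic uncertainty.
```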
This is exciting news, and for more reasons than one. On the one hand, precise measurements of distant galaxies allow us to test some of the trickiest aspects of our current cosmological models. On the other, determining that electromagnetism behaves in a consistent way over time is a major find, largely because it is responsible for so much of what goes on in our daily lives.
But perhaps most importantly of all, understanding how a fundamental force like electromagnetism behaves across time and space is intrinsic to finding out how it – as well as the weak and strong nuclear forces – unifies with gravity. This too has been a preoccupation of scientists, who are still at a loss when it comes to explaining how the laws governing particle interactions (i.e. quantum theory) unify with explanations of how gravity works (i.e. general relativity).
Finding measurements which show that these forces do not vary could help in creating a working Grand Unified Theory (GUT). One step closer to truly understanding how the Universe works!
Further Reading: ESO
Today we’re going to have the most surreal conversation. I’m going to struggle to explain it, and you’re going to struggle to understand it. And only Stephen Hawking is going to really, truly, understand what’s actually going on.
But that’s fine, I’m sure he appreciates our feeble attempts to wrap our brains around this mind bending concept.
All right? Let’s get to it. Black holes again. But this time, we’re going to figure out their temperature.
The very idea that a black hole could have a temperature strains the imagination. I mean, how can something that absorbs all the matter and energy that falls into it have a temperature? When you feel the warmth of a toasty fireplace, you’re really feeling the infrared photons radiating from the fire and surrounding metal or stone.
And black holes absorb all the energy falling into them. There is absolutely no infrared radiation coming from a black hole. No gamma radiation, no radio waves. Nothing gets out.
Now, supermassive black holes can shine with the energy of billions of stars when they become quasars, that is, when they're actively feeding on stars and clouds of gas and dust. This material piles up into an accretion disk around the black hole, where friction heats it to enormous temperatures and makes it blaze across the spectrum.
But that’s not the kind of temperature we’re talking about. We’re talking about the temperature of the black hole’s event horizon, when it’s not absorbing any material at all.
The temperature of black holes is connected to this whole concept of Hawking Radiation. The idea that over vast periods of time, black holes will generate virtual particles right at the edge of their event horizons. The most common kind of particles are photons, aka light, aka heat.
Normally these virtual particles are able to recombine and disappear in a puff of annihilation as quickly as they appear. But when a pair of these virtual particles appear right at the event horizon, one half of the pair drops into the black hole, while the other is free to escape into the Universe.
From your perspective as an outside observer, you see these particles escaping from the black hole. You see photons, and therefore, you can measure the temperature of the black hole.
The temperature of the black hole is inversely proportional to the mass of the black hole and the size of the event horizon. Think of it this way. Imagine the curved surface of a black hole’s event horizon. There are many paths that a photon could try to take to get away from the event horizon, and the vast majority of those are paths that take it back down into the black hole’s gravity well.
But for a few rare paths, when the photon is traveling perfectly perpendicular to the event horizon, then the photon has a chance to escape. The larger the event horizon, the less paths there are that a photon could take.
Since energy is being released into the Universe at the black hole's event horizon, but energy can neither be created nor destroyed, the black hole itself provides the mass that supplies the energy to release these photons.
The black hole evaporates.
The most massive black holes in the Universe, the supermassive black holes with millions of times the mass of the Sun, will have a temperature of 1.4 x 10^-14 Kelvin. That's low. Almost absolute zero, but not quite.
A solar mass black hole might have a temperature of only 0.00000006 Kelvin. We're getting warmer.
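Those numbers follow from the Hawking temperature formula, T = ħc³ / (8πGMk_B). Here is a short sketch that reproduces them to within rounding; I have used 4.1 million solar masses as a representative supermassive value, since that is the figure quoted later for the Milky Way's central black hole.

```python
import math

hbar = 1.055e-34     # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23      # Boltzmann constant, J/K
M_sun = 1.989e30     # solar mass, kg
M_moon = 7.35e22     # lunar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature of a black hole of the given mass: smaller mass, hotter hole."""
    return hbar * c ** 3 / (8 * math.pi * G * mass_kg * k_B)

print(hawking_temperature(4.1e6 * M_sun))   # roughly 1.5e-14 K for a supermassive black hole
print(hawking_temperature(M_sun))           # roughly 6e-8 K for a solar-mass black hole
print(hawking_temperature(M_moon))          # roughly 1.7 K for a Moon-mass one, near the 2.7 K background
```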
Since these temperatures are much lower than the background temperature of the Universe – about 2.7 Kelvin, all the existing black holes will have an overall gain of mass. They’re absorbing energy from the Cosmic Background Radiation faster than they’re evaporating, and will for an incomprehensible amount of time into the future.
Until the background temperature of the Universe goes below the temperature of these black holes, they won’t even start evaporating.
A black hole with the mass of the Earth is still too cold.
Only a black hole with about the mass of the Moon is warm enough to be evaporating faster than it’s absorbing energy from the Universe.
As they get less massive, they get even hotter. A black hole with the mass of the asteroid Ceres would be 122 Kelvin. Still freezing, but getting warmer.
A black hole with half the mass of Vesta would blaze at more than 1,200 Kelvin. Now we’re cooking!
Less massive, higher temperatures.
When black holes have lost most of their mass, they release the final material in a tremendous blast of energy, which should be visible to our telescopes.
Some astronomers are actively searching the night sky for blasts from black holes which were formed shortly after the Big Bang, when the Universe was hot and dense enough that black holes could just form.
It took them billions of years of evaporation to get to the point that they’re starting to explode now.
This is just conjecture, though, no explosions have ever been linked to primordial black holes so far.
It’s pretty crazy to think that an object that absorbs all energy that falls into it can also emit energy. Well, that’s the Universe for you. Thanks for helping us figure it out Dr. Hawking.
Want to hear something cool? There’s a black hole at the center of the Milky Way. And not just any black hole, it’s a supermassive black hole with more than 4.1 million times the mass of the Sun.
It’s right over there, in the direction of the Sagittarius constellation. Located just 26,000 light-years away. And as we speak, it’s in the process of tearing apart entire stars and star systems, occasionally consuming them, adding to its mass like a voracious shark.
Wait, that doesn’t sound cool, that sort of sounds a little scary. Right?
Don’t worry, you have absolutely nothing to worry about, unless you plan to live for quadrillions of years, which I do, thanks to my future robot body. I’m ready for my singularity, Dr. Kurzweil.
Is the supermassive black hole going to consume the Milky Way? If not, why not? If so, why so?
The discovery of a supermassive black hole at the heart of the Milky Way, and really almost all galaxies, is one of my favorite discoveries in the field of astronomy. It’s one of those insights that simultaneously answered some questions, and opened up even more.
Back in the 1970s, the astronomers Bruce Balick and Robert Brown realized that there was an intense source of radio emissions coming from the very center of the Milky Way, in the constellation Sagittarius.
They designated it Sgr A*. The asterisk stands for exciting. You think I’m joking, but I’m not. For once, I’m not joking.
In 2002, astronomers observed that there were stars zipping past this object, like comets on elliptical paths going around the Sun. Imagine the mass of our Sun, and the tremendous power it would take to wrench a star like that around.
The only objects with that much density and gravity are black holes, but in this case, a black hole with millions of times the mass of our own Sun: a supermassive black hole.
With the discovery of the Milky Way’s supermassive black hole, astronomers found evidence that there are black holes at the heart of every galaxy.
At the same time, the discovery of supermassive black holes helped answer one of the big questions in astronomy: what are quasars? We did a whole article on them, but they’re intensely bright objects, generating enough light they can be seen billions of light-years away. Giving off more energy than the rest of their own galaxy combined.
It turns out that quasars and supermassive black holes are the same thing. Quasars are just black holes in the process of actively feeding; gobbling up so much material it piles up in an accretion disk around it. Once again, these do sound terrifying. But are we in any danger?
In the short term, no. The black hole at the center of the Milky Way is 26,000 light-years away. Even if it turned into a quasar and started eating stars, you wouldn’t even be able to notice it from this distance.
A black hole is just a concentration of mass in a very small region, which things orbit around. To give you an example, you could replace the Sun with a black hole with the exact same mass, and nothing would change. I mean, we’d all freeze because there wasn’t a Sun in the sky anymore, but the Earth would continue to orbit this black hole in exactly the same orbit, for billions of years.
Same goes with the black hole at the center of the Milky Way. It’s not pulling material in like a vacuum cleaner, it serves as a gravitational anchor for a group of stars to orbit around, for billions of years.
In order for a black hole to actually consume a star, the star needs to make a direct hit, passing within the event horizon, which is only about 17 times bigger than the Sun. If a star gets close without hitting, it'll get torn apart, but even that doesn't happen very often.
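If you want to check that "17 times bigger than the Sun" figure, here's a quick back-of-the-envelope sketch (not from the video itself; just the standard Schwarzschild radius formula with rounded constants):

```python
# Schwarzschild radius of a 4.1-million-solar-mass black hole,
# compared with the radius of the Sun. Constants are rounded values.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m

M = 4.1e6 * M_sun
r_s = 2 * G * M / c**2
print(f"event horizon radius: {r_s / 1e9:.1f} million km "
      f"= {r_s / R_sun:.1f} solar radii")   # roughly 17 solar radii
```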
The problem happens when these stars interact with one another through their own gravity, and mess with each other’s orbits. A star that would have been orbiting happily for billions of years might get deflected into a collision course with the black hole. But this happens very rarely.
Over the short term, that supermassive black hole is totally harmless. Especially from out here in the galactic suburbs.
But there are a few situations that might cause some problems over vast periods of time.
The first panic will happen when the Milky Way collides with Andromeda in about 4 billion years – let’s call this mess Milkdromeda. Suddenly, you’ll have two whole clouds of stars interacting in all kinds of ways, like an unstable blended family. Stars that would have been safe will careen past other stars and be deflected down into the maw of either of the two supermassive black holes on hand. Andromeda’s black hole could be 100 million times the mass of the Sun, so it’s a bigger target for stars with a death wish.
Over the coming billions, trillions and quadrillions of years, more and more galaxies will collide with Milkdromeda, bringing new supermassive black holes and more stars to the chaos.
So many opportunities for mayhem.
Of course, the Sun will die in about 5 billion years, so this future won’t be our problem. Well, fine, with my eternal robot body, it might still be my problem.
After our neighborhood is completely out of galaxies to consume, then there will just be countless eons of time for stars to interact for orbit after orbit. Some will get flung out of Milkdromeda, some will be hurled down into the black hole.
And others will be safe, assuming they can avoid this fate over the googol years it'll take for the supermassive black hole to finally evaporate. That's a 1 followed by 100 zeroes. That's a really, really long time, so now I don't like those odds.
For our purposes, the black hole at the heart of the Milky Way is completely and totally safe. In the lifetime of the Sun, it won’t interact with us in any way, or consume more than a handful of stars.
But over the vast eons, it could be a different story. I hope we can be around to find out the answer.
Way up in the constellation Cancer there’s a 14th magnitude speck of light you can claim in a 10-inch or larger telescope. If you saw it, you might sniff at something so insignificant, yet it represents the final farewell of chewed up stars as their remains whirl down the throat of an 18 billion solar mass black hole, one of the most massive known in the universe.
Astronomers know the object as OJ 287, a quasar that lies 3.5 billion light years from Earth. Quasars or quasi-stellar objects light up the centers of many remote galaxies. If we could pull up for a closer look, we’d see a brilliant, flattened accretion disk composed of heated star-stuff spinning about the central black hole at extreme speeds.
As matter gets sucked down the hole, jets of hot plasma and energetic light shoot out perpendicular to the disk. And if we're so privileged that one of those jets happens to point directly at us, we call the quasar a "blazar". The light streaming from the heart of a blazar varies so constantly that the object practically flickers.
A recent observational campaign involving more than two dozen optical telescopes and NASA's space-based Swift X-ray telescope allowed a team of astronomers to measure very accurately the rotational rate of the black hole powering OJ 287: about one third of the maximum spin rate — roughly 56,000 miles per second (90,000 km/s) — allowed by General Relativity. A careful analysis of these observations shows that OJ 287 has produced close-to-periodic optical outbursts at intervals of approximately 12 years dating back to around 1891. A close inspection of newer data sets reveals the presence of double peaks in these outbursts.
To explain the blazar's behavior, Prof. Mauri Valtonen of the University of Turku (Finland) and colleagues developed a model that beautifully explains the data if the quasar OJ 287 harbors not one but two unequal-mass black holes — an 18-billion-solar-mass one orbited by a smaller black hole.
OJ 287 is visible due to matter in the accretion disk streaming onto the larger black hole. The smaller black hole passes through the larger's accretion disk during its orbit, causing the disk material to briefly heat up to very high temperatures. This heated material flows out from both sides of the accretion disk and radiates strongly for weeks, causing the double peak in brightness.
The orbit of the smaller black hole also precesses similar to how Mercury’s orbit precesses. This changes when and where the smaller black hole passes through the accretion disk. After carefully observing eight outbursts of the black hole, the team was able to determine not only the black holes’ masses but also the precession rate of the orbit. Based on Valtonen’s model, the team predicted a flare in late November 2015, and it happened right on schedule.
The timing of this bright outburst allowed Valtonen and his co-workers to directly measure the rotation rate of the more massive black hole to be nearly 1/3 the speed of light. I’ve checked around and as far as I can tell, this would make it the fastest spinning object we know of in the universe. Getting dizzy yet?
The merger of the Milky Way and Andromeda galaxy won’t happen for another 4 billion years, but the recent discovery of a massive halo of hot gas around Andromeda may mean our galaxies are already touching. University of Notre Dame astrophysicist Nicholas Lehner led a team of scientists using the Hubble Space Telescope to identify an enormous halo of hot, ionized gas at least 2 million light years in diameter surrounding the galaxy.
The Andromeda Galaxy is the largest member of a ragtag collection of some 54 galaxies, including the Milky Way, called the Local Group. With a trillion stars — twice as many as the Milky Way — it shines 25% brighter and can easily be seen with the naked eye from suburban and rural skies.
Think about this for a moment. If the halo extends at least a million light years in our direction, our two galaxies are MUCH closer to touching than previously thought. Granted, we're only talking halo interactions at first, but the two may be mingling molecules even now if our galaxy is similarly cocooned.
Lehner describes halos as the "gaseous atmospheres of galaxies". Despite its enormous size, Andromeda's nimbus is virtually invisible. To find and study the halo, the team sought out quasars, distant star-like objects that radiate tremendous amounts of energy as matter funnels into the supermassive black holes in their cores. The brightest quasar, 3C273 in Virgo, can be seen in a 6-inch telescope! Their brilliant, pinpoint nature makes them perfect probes.
“As the light from the quasars travels toward Hubble, the halo’s gas will absorb some of that light and make the quasar appear a little darker in just a very small wavelength range,” said J. Christopher Howk , associate professor of physics at Notre Dame and co-investigator. “By measuring the dip in brightness, we can tell how much halo gas from M31 there is between us and that quasar.”
Astronomers have observed halos around 44 other galaxies but never one as massive as Andromeda where so many quasars are available to clearly define its extent. The previous 44 were all extremely distant galaxies, with only a single quasar or data point to determine halo size and structure.
Andromeda’s close and huge with lots of quasars peppering its periphery. The team drew from about five years’ worth of observations of archived Hubble data to find many of the 18 objects needed for a good sample.
The halo is estimated to contain half the mass of the stars in the Andromeda galaxy itself, in the form of a hot, diffuse gas. Simulations suggest that it formed at the same time as the rest of the galaxy. Although mostly composed of ionized hydrogen — naked protons and electrons — Andromeda’s aura is also rich in heavier elements, probably supplied by supernovae. They erupt within the visible galaxy and violently blow good stuff like iron, silicon, oxygen and other familiar elements far into space. Over Andromeda’s lifetime, nearly half of all the heavy elements made by its stars have been expelled far beyond the galaxy’s 200,000-light-year-diameter stellar disk.
You might wonder whether galactic halos could account for some or much of the still-mysterious dark matter. Probably not. While dark matter still makes up the bulk of the unseen mass in the universe, astronomers have also been trying to account for missing normal matter in galaxies, and halos now seem a likely contributor to that.
The next clear night you look up to spy Andromeda, know this: It’s closer than you think!
There’s a supermassive black hole in the center of our Milky Way galaxy. Could this black hole become a Quasar?
Previously, we answered the question, “What is a Quasar”. If you haven’t watched that one yet, you might want to pause this video and click here. … or you could bravely plow on ahead because you already know or because clicking is hard.
Should you fall in the latter category. I’m here to reward your laziness. A quasar is what you get when a supermassive black hole is actively feeding on material at the core of a galaxy. The region around the black hole gets really hot and blasts out radiation that we can see billions of light-years away.
Our Milky Way is a galaxy, it has a supermassive black hole at the core. Could this black hole feed on material and become a quasar? Quasars are actually very rare events in the life of a galaxy, and they seem to happen early on in a galaxy’s evolution, when it’s young and filled with gas.
Normally material in the galactic disk orbits well away from the supermassive black hole, and the black hole is starved for material. The occasional gas cloud or stray star gets too close, is torn apart, and we see a brief flash as it's consumed. But you don't get a quasar when a black hole is snacking on stars. You need a tremendous amount of material to pile up, so it chokes on all the gas, dust, planets and stars. An accretion disk grows; a swirling maelstrom of material bigger than our Solar System that's as hot as a star. This disk creates the bright quasar, not the black hole itself.
Quasars might only happen once in the lifetime of a galaxy. And if it does occur, it only lasts for a few million years, while the black hole works through all the backed up material, like water swirling around a drain. Once the black hole has finished its “stuff buffet”, the accretion disk disappears, and the light from the quasar shuts off.
Sounds scary. According to New York University research scientist Gabe Perez-Giz, even though a quasar might be emitting more than 100 trillion times as much energy as the Sun, we’re far enough away from the core of the Milky Way that we would receive very little of it – like, one hundredth of a percent of the intensity we get from the Sun.
Since the Milky Way is already a middle aged galaxy, its quasaring days are probably long over. However, there’s an upcoming event that might cause it to flare up again. In about 4 billion years, Andromeda is going to cuddle with the Milky Way, disrupting the cores of both galaxies. During this colossal event, the supermassive black holes in our two galaxies will interact, messing with the orbits of stars, planets, gas and dust.
Some will be thrown out into space, while others will be torn apart and fed to the black holes. And if enough material piles up, maybe our Milky Way will become a quasar after all. Which as I just mentioned, will be totally harmless to us. The galactic collision? Well that’s another story.
It’s likely our Milky Way already was a quasar, billions of years ago. And it might become one again billions of years from now. And that’s interesting enough that I think we should stick around and watch it happen. How do you feel about the prospects for our Milky Way becoming a quasar? Are you a little nervous by an event that won’t happen for another 4 billion years?
Statistics Definitions > ANOVA
- The ANOVA Test
- One Way ANOVA
- Two Way ANOVA
- What is MANOVA?
- What is Factorial ANOVA?
- How to run an ANOVA
- ANOVA vs. T Test
- Repeated Measures ANOVA
- Related Articles
What is ANOVA?
An ANOVA test is a way to find out if survey or experiment results are significant. In other words, it helps you figure out if you need to reject the null hypothesis or accept the alternate hypothesis.
Basically, you’re testing groups to see if there’s a difference between them. Examples of when you might want to test different groups:
- A group of psychiatric patients are trying three different therapies: counseling, medication and biofeedback. You want to see if one therapy is better than the others.
- A manufacturer has two different processes to make light bulbs. They want to know if one process is better than the other.
- Students from different colleges take the same exam. You want to see if one college outperforms the other.
What Does “One-Way” or “Two-Way” Mean?
One-way or two-way refers to the number of independent variables (IVs) in your Analysis of Variance test.
- One-way has one independent variable (with two or more levels). For example: brand of cereal.
- Two-way has two independent variables (each can have multiple levels). For example: brand of cereal, calories.
What are “Groups” or “Levels”?
Groups or levels are different groups within the same independent variable. In the above example, your levels for “brand of cereal” might be Lucky Charms, Raisin Bran, Cornflakes — a total of three levels. Your levels for “Calories” might be: sweetened, unsweetened — a total of two levels.
Let’s say you are studying whether medication, counseling, or a combination of the two is the most effective treatment for lowering alcohol consumption. You might split the study participants into three groups or levels:
- Medication only,
- Medication and counseling,
- Counseling only.
Your dependent variable would be the number of alcoholic beverages consumed per day.
If your groups or levels have a hierarchical structure (each level has unique subgroups), then use a nested ANOVA for the analysis.
What Does “Replication” Mean?
It’s whether you are replicating (i.e. duplicating) your test(s) with multiple groups. With a two way ANOVA with replication, you have two groups and individuals within that group are doing more than one thing (e.g. two groups of students from two colleges taking two tests). If you only have one group taking two tests, you would use without replication.
Types of Tests.
There are two main types: one-way and two-way. Two-way tests can be with or without replication.
- One-way ANOVA between groups: used when you want to test two or more groups to see if there’s a difference between them.
- Two way ANOVA without replication: used when you have one group and you’re double-testing that same group. For example, you’re testing one set of individuals before and after they take a medication to see if it works or not.
- Two way ANOVA with replication: Two groups, and the members of those groups are doing more than one thing. For example, two groups of patients from different hospitals trying two different therapies.
A one way ANOVA is used to compare the means of two or more independent (unrelated) groups using the F-distribution. The null hypothesis for the test is that all the group means are equal. Therefore, a significant result means that at least two of the means are unequal.
Examples of when to use a one way ANOVA
Situation 1: You have a group of individuals randomly split into smaller groups and completing different tasks. For example, you might be studying the effects of tea on weight loss and form three groups: green tea, black tea, and no tea.
Situation 2: Similar to situation 1, but in this case the individuals are split into groups based on an attribute they possess. For example, you might be studying leg strength of people according to weight. You could split participants into weight categories (obese, overweight and normal) and measure their leg strength on a weight machine.
Limitations of the One Way ANOVA
A one way ANOVA will tell you that at least two groups were different from each other. But it won’t tell you which groups were different. If your test returns a significant f-statistic, you may need to run a post hoc test (like the Least Significant Difference test) to tell you exactly which groups had a difference in means.
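If you'd rather let software do the arithmetic, here is a minimal sketch of a one way ANOVA in Python using scipy (the tea-group numbers are invented purely for illustration):

```python
# One-way ANOVA on invented weight-loss data for three tea groups.
# Requires scipy; the numbers are made up for illustration only.
from scipy import stats

green_tea = [3.2, 4.1, 2.8, 3.9, 4.4, 3.0]
black_tea = [2.1, 2.9, 1.8, 2.5, 3.0, 2.2]
no_tea    = [0.8, 1.2, 0.4, 1.5, 0.9, 1.1]

f_stat, p_value = stats.f_oneway(green_tea, black_tea, no_tea)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A small p-value (e.g. p <= .05) says at least two group means differ,
# but not which ones; that still needs a post hoc comparison.
```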
A Two Way ANOVA is an extension of the One Way ANOVA. With a One Way, you have one independent variable affecting a dependent variable. With a Two Way ANOVA, there are two independents. Use a two way ANOVA when you have one measurement variable (i.e. a quantitative variable) and two nominal variables. In other words, if your experiment has a quantitative outcome and you have two categorical explanatory variables, a two way ANOVA is appropriate.
For example, you might want to find out if there is an interaction between income and gender for anxiety level at job interviews. The anxiety level is the outcome, or the variable that can be measured. Gender and Income are the two categorical variables. These categorical variables are also the independent variables, which are called factors in a Two Way ANOVA.
The factors can be split into levels. In the above example, income level could be split into three levels: low, middle and high income. Gender could be split into three levels: male, female, and transgender. Treatment groups are all possible combinations of the factors. In this example there would be 3 x 3 = 9 treatment groups.
Main Effect and Interaction Effect
The results from a Two Way ANOVA will calculate a main effect and an interaction effect. The main effect is similar to a One Way ANOVA: each factor’s effect is considered separately. With the interaction effect, all factors are considered at the same time. Interaction effects between factors are easier to test if there is more than one observation in each cell. For the above example, multiple anxiety scores could be entered into cells. If you do enter multiple observations into cells, the number in each cell must be equal.
Two null hypotheses are tested if you are placing one observation in each cell. For this example, those hypotheses would be:
H01: All the income groups have equal mean anxiety.
H02: All the gender groups have equal mean anxiety.
For multiple observations in cells, you would also be testing a third hypothesis:
H03: The factors are independent or the interaction effect does not exist.
An F-statistic is computed for each hypothesis you are testing.
Assumptions for Two Way ANOVA
- The population must be close to a normal distribution.
- Samples must be independent.
- Population variances must be equal.
- Groups must have equal sample sizes.
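A minimal sketch of the income/gender example above in Python, using statsmodels with simulated, balanced data (all the numbers are made up; the point is only the structure of the call):

```python
# Two-way ANOVA (gender x income on anxiety) on simulated, balanced data.
# Requires numpy, pandas and statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
for gender in ["male", "female", "transgender"]:
    for income in ["low", "middle", "high"]:
        for _ in range(5):   # 5 observations per cell keeps the design balanced
            rows.append({"gender": gender, "income": income,
                         "anxiety": rng.normal(loc=50, scale=5)})
df = pd.DataFrame(rows)

model = ols("anxiety ~ C(gender) * C(income)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # two main effects plus the interaction
```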
MANOVA is just an ANOVA with several dependent variables. It’s similar to many other tests and experiments in that its purpose is to find out if the response variable (i.e. your dependent variable) is changed by manipulating the independent variable. The test helps to answer many research questions, including:
- Do changes to the independent variables have statistically significant effects on dependent variables?
- What are the interactions among dependent variables?
- What are the interactions among independent variables?
Suppose you wanted to find out if a difference in textbooks affected students’ scores in math and science. Improvements in math and science mean that there are two dependent variables, so a MANOVA is appropriate.
An ANOVA will give you a single (univariate) f-value while a MANOVA will give you a multivariate F value. MANOVA tests the multiple dependent variables by creating new, artificial, dependent variables that maximize group differences. These new dependent variables are linear combinations of the measured dependent variables.
Interpreting the MANOVA results
If the multivariate F value indicates the test is statistically significant, this means that something is significant. In the above example, you would not know if math scores have improved, science scores have improved (or both). Once you have a significant result, you would then have to look at each individual component (the univariate F tests) to see which dependent variable(s) contributed to the statistically significant result.
Advantages and Disadvantages of MANOVA vs. ANOVA
- MANOVA enables you to test multiple dependent variables.
- MANOVA can protect against Type I errors.
- MANOVA is many times more complicated than ANOVA, making it a challenge to see which independent variables are affecting dependent variables.
- One degree of freedom is lost with the addition of each new variable.
- The dependent variables should be uncorrelated as much as possible. If they are correlated, the loss in degrees of freedom means that there isn’t much advantage to including more than one dependent variable in the test.
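For the textbook example above, a MANOVA call might look like this sketch in Python with statsmodels (simulated scores; the "textbook", "math" and "science" column names are made up):

```python
# MANOVA sketch: does textbook choice affect math and science scores jointly?
# Simulated data; requires numpy, pandas and statsmodels.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 30
df = pd.DataFrame({
    "textbook": ["A"] * n + ["B"] * n,
    "math":     np.concatenate([rng.normal(70, 8, n), rng.normal(75, 8, n)]),
    "science":  np.concatenate([rng.normal(68, 9, n), rng.normal(69, 9, n)]),
})

fit = MANOVA.from_formula("math + science ~ textbook", data=df)
print(fit.mv_test())   # Wilks' lambda, Pillai's trace, etc.
```

The mv_test() output reports the multivariate statistics; if they are significant, follow-up univariate ANOVAs show which dependent variable is driving the result.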
A factorial ANOVA is an Analysis of Variance test with more than one independent variable, or “factor”. Each factor can have two or more levels. For example, an experiment with a treatment group and a control group has one factor (the treatment) with two levels (the treatment and the control). The terms “two-way” and “three-way” refer to the number of factors in your test. Four-way ANOVA and above are rarely used because the results of the test are complex and difficult to interpret.
- A two-way ANOVA has two factors (independent variables) and one dependent variable. For example, time spent studying and prior knowledge are factors that affect how well you do on a test.
- A three-way ANOVA has three factors (independent variables) and one dependent variable. For example, time spent studying, prior knowledge, and hours of sleep are factors that affect how well you do on a test
Factorial ANOVA is an efficient way of conducting a test. Instead of performing a series of experiments where you test one independent variable against one dependent variable, you can test all independent variables at the same time.
In a one-way ANOVA, variability is due to the differences between groups and the differences within groups. In factorial ANOVA, each level and factor are paired up with each other (“crossed”). This helps you to see what interactions are going on between the levels and factors. If there is an interaction then the differences in one factor depend on the differences in another.
Let’s say you were running a two-way ANOVA to test male/female performance on a final exam. The subjects had either had 4, 6, or 8 hours of sleep.
- IV1: SEX (Male/Female)
- IV2: SLEEP (4/6/8)
- DV: Final Exam Score
A two-way factorial ANOVA would help you answer the following questions:
- Is sex a main effect? In other words, do men and women differ significantly on their exam performance?
- Is sleep a main effect? In other words, do people who have had 4,6, or 8 hours of sleep differ significantly in their performance?
- Is there a significant interaction between factors? In other words, how do hours of sleep and sex interact with regards to exam performance?
- Can any differences in sex and exam performance be found in the different levels of sleep?
Assumptions of Factorial ANOVA
- Normality: the dependent variable is normally distributed.
- Independence: Observations and groups are independent from each other.
- Equality of Variance: the population variances are equal across factors/levels.
These tests are very time-consuming by hand. In nearly every case you’ll want to use software. For example, Excel’s Data Analysis ToolPak offers single factor ANOVA and two-factor ANOVA with or without replication.
ANOVA tests in statistics packages are run on parametric data. If you have rank or ordered data, you’ll want to run a non-parametric ANOVA (usually found under a different heading in the software, like “nonparametric tests”).
It is unlikely you’ll want to do this test by hand, but if you must, these are the steps you’ll want to take:
- Find the mean for each of the groups.
- Find the overall mean (the mean of the groups combined).
- Find the Within Group Variation; the total deviation of each member’s score from the Group Mean.
- Find the Between Group Variation: the deviation of each Group Mean from the Overall Mean.
- Find the F statistic: the ratio of Between Group Variation to Within Group Variation.
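The steps above, written out as a small self-contained Python script (invented numbers; note that step 5 divides each variation by its degrees of freedom before taking the ratio, which the list glosses over):

```python
# Hand-style one-way ANOVA: group means, grand mean, within- and
# between-group sums of squares, then the F ratio. Standard library only.
groups = {
    "green tea": [3.2, 4.1, 2.8, 3.9],
    "black tea": [2.1, 2.9, 1.8, 2.5],
    "no tea":    [0.8, 1.2, 0.4, 1.5],
}

group_means = {name: sum(g) / len(g) for name, g in groups.items()}   # step 1
all_values = [x for g in groups.values() for x in g]
grand_mean = sum(all_values) / len(all_values)                        # step 2

ss_within = sum((x - group_means[name]) ** 2                          # step 3
                for name, g in groups.items() for x in g)
ss_between = sum(len(g) * (group_means[name] - grand_mean) ** 2       # step 4
                 for name, g in groups.items())

df_between = len(groups) - 1
df_within = len(all_values) - len(groups)
f_stat = (ss_between / df_between) / (ss_within / df_within)          # step 5
print(f"F = {f_stat:.2f}")
```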
A Student’s t-test will tell you if there is a significant difference between two group means. ANOVA extends the idea to two or more groups: it compares the group means by weighing the variation between groups against the variation within groups.
You could technically perform a series of t-tests on your data. However, as the groups grow in number, you may end up with a lot of pair comparisons that you need to run. ANOVA will give you a single number (the f-statistic) and one p-value to help you support or reject the null hypothesis.
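In fact, for exactly two groups the two tests agree: the one way ANOVA F-statistic equals the square of the pooled-variance t-statistic. A quick check in Python with scipy (made-up data):

```python
# For two groups, one-way ANOVA and a pooled-variance t-test are equivalent:
# F = t^2 and the p-values match. Requires scipy.
from scipy import stats

group_a = [5.1, 4.8, 6.0, 5.5, 4.9]
group_b = [6.2, 6.8, 5.9, 7.1, 6.5]

t_stat, p_t = stats.ttest_ind(group_a, group_b)   # equal variances assumed
f_stat, p_f = stats.f_oneway(group_a, group_b)

print(f"t^2 = {t_stat**2:.4f}   F = {f_stat:.4f}")        # identical
print(f"p (t-test) = {p_t:.4f}   p (ANOVA) = {p_f:.4f}")  # identical
```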
A repeated measures ANOVA is almost the same as one-way ANOVA, with one main difference: you test related groups, not independent ones. It’s called Repeated Measures because the same group of participants is being measured over and over again. For example, you could be studying the cholesterol levels of the same group of patients at 1, 3, and 6 months after changing their diet. For this example, the independent variable is “time” and the dependent variable is “cholesterol.” The independent variable is usually called the within-subjects factor.
Repeated measures ANOVA is similar to a simple multivariate design. In both tests, the same participants are measured over and over. However, with repeated measures the same characteristic is measured with a different condition. For example, blood pressure is measured over the condition “time”. For simple multivariate design it is the characteristic that changes. For example, you could measure blood pressure, heart rate and respiration rate over time.
Reasons to use Repeated Measures ANOVA
- When you collect data from the same participants over a period of time, individual differences (a source of between group differences) are reduced or eliminated.
- Testing is more powerful because the sample size isn’t divided between groups.
- The test can be economical, as you’re using the same participants.
Assumptions for Repeated Measures ANOVA
The results from your repeated measures ANOVA will be valid only if the following assumptions haven’t been violated:
- There must be one independent variable and one dependent variable.
- The dependent variable must be a continuous variable, on an interval scale or a ratio scale.
- The independent variable must be categorical, either on the nominal scale or ordinal scale.
- Ideally, levels of dependence between pairs of groups is equal (“sphericity”). Corrections are possible if this assumption is violated.
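A sketch of the cholesterol example in Python using statsmodels' AnovaRM, with simulated long-format data (one row per patient per time point; all column names and values are invented):

```python
# Repeated measures ANOVA for cholesterol measured at 1, 3 and 6 months.
# Requires numpy, pandas and statsmodels; data must be balanced.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
records = []
for patient in range(12):
    baseline = rng.normal(210, 15)
    for months, drop in [(1, 0), (3, 8), (6, 15)]:
        records.append({"patient": patient, "time": months,
                        "cholesterol": baseline - drop + rng.normal(0, 5)})
df = pd.DataFrame(records)

result = AnovaRM(data=df, depvar="cholesterol",
                 subject="patient", within=["time"]).fit()
print(result)   # F test for the within-subjects factor "time"
```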
Repeated Measures ANOVA in SPSS: Steps
Step 1: Click “Analyze”, then hover over “General Linear Model.” Click “Repeated Measures.”
Step 2: Replace the “factor1” name with something that represents your independent variable. For example, you could put “age” or “time.”
Step 3: Enter the “Number of Levels.” This is how many times the dependent variable has been measured. For example, if you took measurements every week for a total of 4 weeks, this number would be 4.
Step 4: Click the “Add” button and then give your dependent variable a name.
Step 6: Use the arrow keys to move your variables from the left box to the right box.
Step 7: Click “Plots” and use the arrow keys to transfer the factor from the left box onto the Horizontal Axis box.
Step 9: Click “Options”, then transfer your factors from the left box to the Display Means for box on the right.
Step 10: Click the following check boxes:
In statistics, sphericity (ε) is an assumption about your data that is checked with Mauchly’s sphericity test, which was developed in 1940 by John W. Mauchly, who co-developed the first general-purpose electronic computer.
Sphericity is used as an assumption in repeated measures ANOVA. The assumption states that the variances of the differences between all possible group pairs are equal. If your data violates this assumption, it can result in an increase in a Type I error (the incorrect rejection of the null hypothesis).
It’s very common for repeated measures ANOVA to result in a violation of the assumption. If the assumption has been violated, corrections have been developed that can avoid increases in the type I error rate. The correction is applied to the degrees of freedom in the F-distribution.
Mauchly’s Sphericity Test
Mauchly’s test for sphericity can be run in the majority of statistical software, where it tends to be the default test for sphericity. Mauchly’s test is ideal for mid-size samples. It may fail to detect violations of sphericity in small samples and it may over-detect them in large samples.
If the test returns a small p-value (p ≤ .05), this is an indication that your data has violated the assumption. For example, in one set of SPSS output for a repeated measures ANOVA, the significance (“sig”) attached to Mauchly’s test is .274, which means that the assumption has not been violated for that set of data.
You would report the above result as “Mauchly’s Test indicated that the assumption of sphericity had not been violated, χ2(2) = 2.588, p = .274.”
If your test returned a small p-value, you should apply a correction, usually either the Greenhouse-Geisser or the Huynh-Feldt correction:
- When ε ≤ 0.75 (or you don’t know the value of the statistic), use the Greenhouse-Geisser correction.
- When ε > 0.75, use the Huynh-Feldt correction.
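Mechanically, the correction just multiplies both degrees of freedom of the F-test by ε before looking up the p-value. A small illustration in Python with scipy (the F value, ε and design sizes are invented):

```python
# Applying a sphericity correction: scale both degrees of freedom by epsilon
# and recompute the p-value from the F distribution. Requires scipy.
from scipy import stats

k, n = 3, 12                    # levels of the within-subjects factor, subjects
df1, df2 = k - 1, (k - 1) * (n - 1)
f_value = 4.20                  # F statistic from the repeated measures ANOVA
epsilon = 0.70                  # e.g. a Greenhouse-Geisser estimate

p_uncorrected = stats.f.sf(f_value, df1, df2)
p_corrected = stats.f.sf(f_value, epsilon * df1, epsilon * df2)
print(f"uncorrected p = {p_uncorrected:.4f}, corrected p = {p_corrected:.4f}")
```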
On January 27, 2012, Curiosity’s spacecraft was hit by the most intense solar radiation storm since 2005. The event began when sunspot AR1402 produced an X2-class solar flare. (On the “Richter Scale of Solar Flares”, X-flares are the most powerful kind.) The explosion accelerated a fusillade of protons and electrons to nearly light speed; these subatomic bullets were guided by the sun’s magnetic field almost directly toward Curiosity. Researchers say it’s all part of Curiosity’s job: the Mini Cooper-sized rover is playing the role of “stunt double” for NASA astronauts.
When the particles hit the outer walls of the spacecraft, they shattered other atoms and molecules in their path, producing a secondary spray of radiation that Curiosity both absorbed and measured.
“Curiosity is riding to Mars in the belly of a spacecraft, where an astronaut would be,” explains Don Hassler of the Southwest Research Institute in Boulder, Colorado. “This means the rover experiences deep-space radiation storms in the same way that a real astronaut would.” “Curiosity was in no danger,” says Hassler. “In fact, we intended all along for the rover to experience these storms en route to Mars.”
Unlike previous Mars rovers, Curiosity is equipped with a Radiation Assessment Detector. The instrument, nicknamed “RAD,” counts cosmic rays, neutrons, protons and other particles over a wide range of biologically-interesting energies. RAD’s prime mission is to investigate the radiation environment on the surface of Mars, but researchers have turned it on early so that it can also probe the radiation environment on the way to Mars as well.
RAD's primary science objectives are to:
- Characterize the energetic particle spectrum at the surface of Mars
- Determine the radiation dose for humans on the surface of Mars
- Enable validation of Mars atmospheric transmission models and radiation transport codes
- Provide input to the determination of the radiation hazard and mutagenic influences to life, past and present, at and beneath the Martian surface
- Provide input to the determination of the chemical and isotopic effects of energetic particles on the Martian surface and atmosphere
Curiosity’s location inside the spacecraft is key to the experiment.
“We have a pretty good idea what the radiation environment is like outside,” says Hassler, who is the principal investigator for RAD. “Inside the spacecraft, however, is still a mystery.”
Even supercomputers have trouble calculating exactly what happens when high-energy cosmic rays and solar energetic particles hit the walls of a spacecraft. One particle hits another; fragments fly; the fragments themselves crash into other molecules.
“It’s very complicated. Mars rover Curiosity is giving us a chance to actually measure what happens.”
In the aftermath of the Jan. 27th X-flare, RAD detected a surge of particles several times more numerous than the usual cosmic ray counts. Hassler’s team is still analyzing the data to understand what it is telling them about the response of the spacecraft to the storm.
More X-flares will help by adding to the data set. Hassler expects the sun to cooperate, because the solar cycle is trending upward toward a maximum expected in early 2013.
As of February 2012, “we still have 6 months to go before we reach Mars. That’s plenty of time for more solar storms.”
Cosmic microwave background
The cosmic microwave background (CMB, CMBR), in Big Bang cosmology, is electromagnetic radiation which is a remnant from an early stage of the universe, also known as "relic radiation". The CMB is faint cosmic background radiation filling all space. It is an important source of data on the early universe because it is the oldest electromagnetic radiation in the universe, dating to the epoch of recombination. With a traditional optical telescope, the space between stars and galaxies (the background) is completely dark. However, a sufficiently sensitive radio telescope shows a faint background noise, or glow, almost isotropic, that is not associated with any star, galaxy, or other object. This glow is strongest in the microwave region of the radio spectrum. The accidental discovery of the CMB in 1965 by American radio astronomers Arno Penzias and Robert Wilson was the culmination of work initiated in the 1940s, and earned the discoverers the 1978 Nobel Prize in Physics.
CMB is landmark evidence of the Big Bang origin of the universe. When the universe was young, before the formation of stars and planets, it was denser, much hotter, and filled with an opaque fog of hydrogen plasma. As the universe expanded the plasma grew cooler and the radiation filling it expanded to longer wavelengths. When the temperature had dropped enough, protons and electrons combined to form neutral hydrogen atoms. Unlike the plasma, these newly conceived atoms could not scatter the thermal radiation by Thomson scattering, and so the universe became transparent. Cosmologists refer to the time period when neutral atoms first formed as the recombination epoch, and the event shortly afterwards when photons started to travel freely through space is referred to as photon decoupling. The photons that existed at the time of photon decoupling have been propagating ever since, though growing less energetic, since the expansion of space causes their wavelength to increase over time (and wavelength is inversely proportional to energy according to Planck's relation). This is the source of the alternative term relic radiation. The surface of last scattering refers to the set of points in space at the right distance from us so that we are now receiving photons originally emitted from those points at the time of photon decoupling.
Importance of precise measurement
Precise measurements of the CMB are critical to cosmology, since any proposed model of the universe must explain this radiation. The CMB has a thermal black body spectrum at a temperature of 2.72548±0.00057 K. The spectral radiance dEν/dν peaks at 160.23 GHz, in the microwave range of frequencies, corresponding to a photon energy of about 6.626 × 10^-4 eV. Alternatively, if spectral radiance is defined as dEλ/dλ, then the peak wavelength is 1.063 mm (282 GHz, 1.168 × 10^-3 eV photons). The glow is very nearly uniform in all directions, but the tiny residual variations show a very specific pattern, the same as that expected of a fairly uniformly distributed hot gas that has expanded to the current size of the universe. In particular, the spectral radiance at different angles of observation in the sky contains small anisotropies, or irregularities, which vary with the size of the region examined. They have been measured in detail, and match what would be expected if small thermal variations, generated by quantum fluctuations of matter in a very tiny space, had expanded to the size of the observable universe we see today. This is a very active field of study, with scientists seeking both better data (for example, the Planck spacecraft) and better interpretations of the initial conditions of expansion. Although many different processes might produce the general form of a black body spectrum, no model other than the Big Bang has yet explained the fluctuations. As a result, most cosmologists consider the Big Bang model of the universe to be the best explanation for the CMB.
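As a rough check, the quoted peak frequency, peak wavelength and photon energy all follow from Wien's displacement laws for a 2.72548 K black body. A short Python sketch with rounded constants (a verification sketch, not part of the original text):

```python
# Peak of a 2.72548 K black-body spectrum via Wien's displacement laws.
h  = 6.626e-34        # Planck constant, J s
eV = 1.602e-19        # J per electronvolt
T  = 2.72548          # K

nu_peak  = 58.789e9 * T      # Hz, Wien's law in frequency form
lam_peak = 2.898e-3 / T      # m,  Wien's law in wavelength form
E_peak   = h * nu_peak / eV  # eV, photon energy at the frequency peak

print(f"frequency peak  ~ {nu_peak / 1e9:.1f} GHz")   # ~160 GHz
print(f"wavelength peak ~ {lam_peak * 1e3:.3f} mm")   # ~1.06 mm
print(f"photon energy   ~ {E_peak * 1e3:.2f} meV")    # ~0.66 meV
```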
The high degree of uniformity throughout the observable universe and its faint but measured anisotropy lend strong support for the Big Bang model in general and the ΛCDM ("Lambda Cold Dark Matter") model in particular. Moreover, the fluctuations are coherent on angular scales that are larger than the apparent cosmological horizon at recombination. Either such coherence is acausally fine-tuned, or cosmic inflation occurred.
Other than the temperature and polarization anisotropy, the CMB frequency spectrum is expected to feature tiny departures from the black-body law known as spectral distortions. These are also at the focus of an active research effort with the hope of a first measurement within the forthcoming decades, as they contain a wealth of information about the primordial universe and the formation of structures at late time.
The cosmic microwave background radiation is an emission of uniform, black body thermal energy coming from all parts of the sky. The radiation is isotropic to roughly one part in 100,000: the root mean square variations are only 18 μK, after subtracting out a dipole anisotropy from the Doppler shift of the background radiation. The latter is caused by the peculiar velocity of the Sun relative to the comoving cosmic rest frame as it moves at some 369.82 ± 0.11 km/s towards the constellation Leo (galactic longitude 264.021° ± 0.011°, galactic latitude 48.253° ± 0.005°). The CMB dipole and aberration at higher multipoles have been measured, consistent with galactic motion.
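To first order, the dipole amplitude is (v/c) times the mean CMB temperature. The following sketch (using the velocity quoted above) shows it comes out at a few millikelvin, far larger than the roughly 18 μK intrinsic variations:

```python
# Size of the CMB dipole from the Sun's motion relative to the CMB rest frame.
c = 299_792.458        # km/s
v = 369.82             # km/s, solar velocity from the paragraph above
T_cmb = 2.7255         # K

dT = (v / c) * T_cmb
print(f"dipole amplitude ~ {dT * 1e3:.2f} mK")   # a few millikelvin
```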
In the Big Bang model for the formation of the universe, inflationary cosmology predicts that after about 10^-37 seconds the nascent universe underwent exponential growth that smoothed out nearly all irregularities. The remaining irregularities were caused by quantum fluctuations in the inflaton field that caused the inflation event. Long before the formation of stars and planets, the early universe was smaller, much hotter and, starting 10^-6 seconds after the Big Bang, filled with a uniform glow from its white-hot fog of interacting plasma of photons, electrons, and baryons.
As the universe expanded, adiabatic cooling caused the energy density of the plasma to decrease until it became favorable for electrons to combine with protons, forming hydrogen atoms. This recombination event happened when the temperature was around 3000 K or when the universe was approximately 379,000 years old. As photons did not interact with these electrically neutral atoms, the former began to travel freely through space, resulting in the decoupling of matter and radiation.
The color temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to 2.7260±0.0013 K, it will continue to drop as the universe expands. The intensity of the radiation corresponds to black-body radiation at 2.726 K because red-shifted black-body radiation is just like black-body radiation at a lower temperature. According to the Big Bang model, the radiation from the sky we measure today comes from a spherical surface called the surface of last scattering. This represents the set of locations in space at which the decoupling event is estimated to have occurred and at a point in time such that the photons from that distance have just reached observers. Most of the radiation energy in the universe is in the cosmic microwave background, making up a fraction of roughly 6×10^-5 of the total density of the universe.
Two of the greatest successes of the Big Bang theory are its prediction of the almost perfect black body spectrum and its detailed prediction of the anisotropies in the cosmic microwave background. The CMB spectrum has become the most precisely measured black body spectrum in nature.
The energy density of the CMB is 0.260 eV/cm^3 (4.17×10^-14 J/m^3), which yields about 411 photons/cm^3.
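Both numbers follow from the black-body formulas for energy density and photon number density at T ≈ 2.725 K. A short verification sketch in Python (standard constants; nothing taken from outside the text except the formulas themselves):

```python
# CMB energy density u = a*T^4 and photon number density
# n = (2*zeta(3)/pi^2) * (kT/(hbar*c))^3, evaluated at T ~ 2.7255 K.
import math

k_B   = 1.380649e-23    # J/K
hbar  = 1.054572e-34    # J s
c     = 2.99792458e8    # m/s
sigma = 5.670374e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
eV    = 1.602177e-19    # J
T     = 2.7255          # K

a = 4 * sigma / c                                              # J m^-3 K^-4
u = a * T**4                                                   # J/m^3
n = (2 * 1.2020569 / math.pi**2) * (k_B * T / (hbar * c))**3   # photons/m^3

print(f"u ~ {u:.2e} J/m^3 = {u / eV / 1e6:.3f} eV/cm^3")   # ~0.260 eV/cm^3
print(f"n ~ {n / 1e6:.0f} photons/cm^3")                   # ~411 photons/cm^3
```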
The cosmic microwave background was first predicted in 1948 by Ralph Alpher and Robert Herman, in close relation to work performed by Alpher's PhD advisor George Gamow. Alpher and Herman were able to estimate the temperature of the cosmic microwave background to be 5 K, though two years later they re-estimated it at 28 K. This high estimate was due to a misestimate of the Hubble constant by Alfred Behr, which could not be replicated and was later abandoned for the earlier estimate. Although there were several previous estimates of the temperature of space, these suffered from two flaws. First, they were measurements of the effective temperature of space and did not suggest that space was filled with a thermal Planck spectrum. Next, they depend on our being at a special spot at the edge of the Milky Way galaxy and they did not suggest the radiation is isotropic. The estimates would yield very different predictions if Earth happened to be located elsewhere in the universe.
The 1948 results of Alpher and Herman were discussed in many physics settings through about 1955, when both left the Applied Physics Laboratory at Johns Hopkins University. The mainstream astronomical community, however, was not intrigued at the time by cosmology. Alpher and Herman's prediction was rediscovered by Yakov Zel'dovich in the early 1960s, and independently predicted by Robert Dicke at the same time. The first published recognition of the CMB radiation as a detectable phenomenon appeared in a brief paper by Soviet astrophysicists A. G. Doroshkevich and Igor Novikov, in the spring of 1964. In 1964, David Todd Wilkinson and Peter Roll, Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background. In 1964, Arno Penzias and Robert Woodrow Wilson at the Crawford Hill location of Bell Telephone Laboratories in nearby Holmdel Township, New Jersey had built a Dicke radiometer that they intended to use for radio astronomy and satellite communication experiments. On 20 May 1964 they made their first measurement clearly showing the presence of the microwave background, with their instrument having an excess 4.2K antenna temperature which they could not account for. After receiving a telephone call from Crawford Hill, Dicke said "Boys, we've been scooped." A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background. Penzias and Wilson received the 1978 Nobel Prize in Physics for their discovery.
The interpretation of the cosmic microwave background was a controversial issue in the 1960s with some proponents of the steady state theory arguing that the microwave background was the result of scattered starlight from distant galaxies. Using this model, and based on the study of narrow absorption line features in the spectra of stars, the astronomer Andrew McKellar wrote in 1941: "It can be calculated that the 'rotational temperature' of interstellar space is 2 K." However, during the 1970s the consensus was established that the cosmic microwave background is a remnant of the big bang. This was largely because new measurements at a range of frequencies showed that the spectrum was a thermal, black body spectrum, a result that the steady state model was unable to reproduce.
Harrison, Peebles, Yu and Zel'dovich realized that the early universe would have to have inhomogeneities at the level of 10^-4 or 10^-5. Rashid Sunyaev later calculated the observable imprint that these inhomogeneities would have on the cosmic microwave background. Increasingly stringent limits on the anisotropy of the cosmic microwave background were set by ground-based experiments during the 1980s. RELIKT-1, a Soviet cosmic microwave background anisotropy experiment on board the Prognoz 9 satellite (launched 1 July 1983) gave upper limits on the large-scale anisotropy. The NASA COBE mission clearly confirmed the primary anisotropy with the Differential Microwave Radiometer instrument, publishing their findings in 1992. The team received the Nobel Prize in physics for 2006 for this discovery.
Inspired by the COBE results, a series of ground and balloon-based experiments measured cosmic microwave background anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the scale of the first acoustic peak, which COBE did not have sufficient resolution to resolve. This peak corresponds to large scale density variations in the early universe that are created by gravitational instabilities, resulting in acoustical oscillations in the plasma. The first peak in the anisotropy was tentatively detected by the Toco experiment and the result was confirmed by the BOOMERanG and MAXIMA experiments. These measurements demonstrated that the geometry of the universe is approximately flat, rather than curved. They ruled out cosmic strings as a major component of cosmic structure formation and suggested cosmic inflation was the right theory of structure formation.
The second peak was tentatively detected by several experiments before being definitively detected by WMAP, which has tentatively detected the third peak. As of 2010, several experiments to improve measurements of the polarization and the microwave background on small angular scales are ongoing. These include DASI, WMAP, BOOMERanG, QUaD, Planck spacecraft, Atacama Cosmology Telescope, South Pole Telescope and the QUIET telescope.
Relationship to the Big Bang
The cosmic microwave background radiation and the cosmological redshift-distance relation are together regarded as the best available evidence for the Big Bang theory. Measurements of the CMB have made the inflationary Big Bang theory the Standard Cosmological Model. The discovery of the CMB in the mid-1960s curtailed interest in alternatives such as the steady state theory.
In the late 1940s Alpher and Herman reasoned that if there was a big bang, the expansion of the universe would have stretched the high-energy radiation of the very early universe into the microwave region of the electromagnetic spectrum, and down to a temperature of about 5 K. They were slightly off with their estimate, but they had the right idea. They predicted the CMB. It took another 15 years for Penzias and Wilson to stumble into discovering that the microwave background was actually there.
The CMB gives a snapshot of the universe when, according to standard cosmology, the temperature dropped enough to allow electrons and protons to form hydrogen atoms, thereby making the universe nearly transparent to radiation because light was no longer being scattered off free electrons. When it originated some 380,000 years after the Big Bang—this time is generally known as the "time of last scattering" or the period of recombination or decoupling—the temperature of the universe was about 3000 K. This corresponds to an energy of about 0.26 eV, which is much less than the 13.6 eV ionization energy of hydrogen.
Since decoupling, the color temperature of the background radiation has dropped by an average factor of 1090 due to the expansion of the universe. As the universe expands, the CMB photons are redshifted, causing them to decrease in energy. The color temperature of this radiation stays inversely proportional to a parameter that describes the relative expansion of the universe over time, known as the scale length. The color temperature Tr of the CMB as a function of redshift, z, can be shown to be proportional to the color temperature of the CMB as observed in the present day (2.725 K or 0.2348 meV):
- Tr = 2.725 ⋅ (1 + z)
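Plugging in the redshift of last scattering (z ≈ 1089, a commonly quoted value rather than one stated in this section) recovers the roughly 3000 K temperature mentioned earlier:

```python
# CMB temperature at the redshift of last scattering, from T(z) = T0 * (1 + z).
T0 = 2.725       # K, present-day CMB temperature
z_rec = 1089     # approximate redshift of last scattering

print(f"T at recombination ~ {T0 * (1 + z_rec):.0f} K")   # roughly 3000 K
```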
For details about the reasoning that the radiation is evidence for the Big Bang, see Cosmic background radiation of the Big Bang.
The anisotropy, or directional dependency, of the cosmic microwave background is divided into two types: primary anisotropy, due to effects that occur at the surface of last scattering and before; and secondary anisotropy, due to effects such as interactions of the background radiation with intervening hot gas or gravitational potentials, which occur between the last scattering surface and the observer.
The structure of the cosmic microwave background anisotropies is principally determined by two effects: acoustic oscillations and diffusion damping (also called collisionless damping or Silk damping). The acoustic oscillations arise because of a conflict in the photon–baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons, moving at speeds much slower than light, makes them tend to collapse to form overdensities. These two effects compete to create acoustic oscillations, which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude.
The peaks contain interesting physical signatures. The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The next peak—ratio of the odd peaks to the even peaks—determines the reduced baryon density. The third peak can be used to get information about the dark-matter density.
The locations of the peaks give important information about the nature of the primordial density perturbations. There are two fundamental types of density perturbations called adiabatic and isocurvature. A general density perturbation is a mixture of both, and different theories that purport to explain the primordial density perturbation spectrum predict different mixtures.
- Adiabatic density perturbations
- In an adiabatic density perturbation, the fractional additional number density of each type of particle (baryons, photons ...) is the same. That is, if at one place there is a 1% higher number density of baryons than average, then at that place there is a 1% higher number density of photons (and a 1% higher number density in neutrinos) than average. Cosmic inflation predicts that the primordial perturbations are adiabatic.
- Isocurvature density perturbations
- In an isocurvature density perturbation, the sum (over different types of particle) of the fractional additional densities is zero. That is, a perturbation where at some spot there is 1% more energy in baryons than average, 1% more energy in photons than average, and 2% less energy in neutrinos than average, would be a pure isocurvature perturbation. Hypothetical cosmic strings would produce mostly isocurvature primordial perturbations.
The CMB spectrum can distinguish between these two because these two types of perturbations produce different peak locations. Isocurvature density perturbations produce a series of peaks whose angular scales (ℓ values of the peaks) are roughly in the ratio 1 : 3 : 5 : ..., while adiabatic density perturbations produce peaks whose locations are in the ratio 1 : 2 : 3 : ... Observations are consistent with the primordial density perturbations being entirely adiabatic, providing key support for inflation, and ruling out many models of structure formation involving, for example, cosmic strings.
Collisionless damping is caused by two effects, when the treatment of the primordial plasma as fluid begins to break down:
- the increasing mean free path of the photons as the primordial plasma becomes increasingly rarefied in an expanding universe,
- the finite depth of the last scattering surface (LSS), which causes the mean free path to increase rapidly during decoupling, even while some Compton scattering is still occurring.
These effects contribute about equally to the suppression of anisotropies at small scales and give rise to the characteristic exponential damping tail seen in the very small angular scale anisotropies.
The depth of the LSS refers to the fact that the decoupling of the photons and baryons does not happen instantaneously, but instead requires an appreciable fraction of the age of the universe up to that era. One method of quantifying how long this process took uses the photon visibility function (PVF). This function is defined so that, denoting the PVF by P(t), the probability that a CMB photon last scattered between time t and t + dt is given by P(t) dt.
The maximum of the PVF (the time when it is most likely that a given CMB photon last scattered) is known quite precisely. The first-year WMAP results put the time at which P(t) has a maximum as 372,000 years. This is often taken as the "time" at which the CMB formed. However, to figure out how long it took the photons and baryons to decouple, we need a measure of the width of the PVF. The WMAP team finds that the PVF is greater than half of its maximal value (the "full width at half maximum", or FWHM) over an interval of 115,000 years. By this measure, decoupling took place over roughly 115,000 years, and when it was complete, the universe was roughly 487,000 years old.
Late time anisotropy
Since the CMB came into existence, it has apparently been modified by several subsequent physical processes, which are collectively referred to as late-time anisotropy, or secondary anisotropy. When the CMB photons became free to travel unimpeded, ordinary matter in the universe was mostly in the form of neutral hydrogen and helium atoms. However, observations of galaxies today seem to indicate that most of the volume of the intergalactic medium (IGM) consists of ionized material (since there are few absorption lines due to hydrogen atoms). This implies a period of reionization during which some of the material of the universe was broken into hydrogen ions.
The CMB photons are scattered by free charges such as electrons that are not bound in atoms. In an ionized universe, such charged particles have been liberated from neutral atoms by ionizing (ultraviolet) radiation. Today these free charges are at sufficiently low density in most of the volume of the universe that they do not measurably affect the CMB. However, if the IGM was ionized at very early times when the universe was still denser, then there are two main effects on the CMB:
- Small scale anisotropies are erased. (Just as when looking at an object through fog, details of the object appear fuzzy.)
- The physics of how photons are scattered by free electrons (Thomson scattering) induces polarization anisotropies on large angular scales. This broad angle polarization is correlated with the broad angle temperature perturbation.
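A rough sketch of the first effect: sub-horizon temperature anisotropies are suppressed in amplitude by a factor e^(−τ), where τ is the optical depth to Thomson scattering through the reionized medium, so their power is suppressed by e^(−2τ). The value τ = 0.1 below is purely illustrative and not tied to any particular measurement.

```python
import numpy as np

# Suppression of small-scale CMB anisotropy by reionization.
# tau = 0.1 is an illustrative optical depth, not a measured value.
tau = 0.1

print("amplitude suppression e^-tau  : %.3f" % np.exp(-tau))       # ~0.905
print("power suppression     e^-2tau : %.3f" % np.exp(-2 * tau))   # ~0.819
```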
Both of these effects have been observed by the WMAP spacecraft, providing evidence that the universe was ionized at very early times, at a redshift greater than 17. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. It may have included starlight from the very first population of stars (Population III stars), supernovae when these first stars reached the end of their lives, or the ionizing radiation produced by the accretion disks of massive black holes.
The time following the emission of the cosmic microwave background—and before the observation of the first stars—is semi-humorously referred to by cosmologists as the Dark Age, and is a period which is under intense study by astronomers (see 21 centimeter radiation).
Two other effects which occurred between reionization and our observations of the cosmic microwave background, and which appear to cause anisotropies, are the Sunyaev–Zel'dovich effect, where a cloud of high-energy electrons scatters the radiation, transferring some of its energy to the CMB photons, and the Sachs–Wolfe effect, which causes photons from the Cosmic Microwave Background to be gravitationally redshifted or blueshifted due to changing gravitational fields.
Polarization
The cosmic microwave background is polarized at the level of a few microkelvin. There are two types of polarization, called E-modes and B-modes. This is in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. The E-modes arise naturally from Thomson scattering in a heterogeneous plasma. The B-modes are not produced by standard scalar type perturbations. Instead they can be created by two mechanisms: the first one is by gravitational lensing of E-modes, which has been measured by the South Pole Telescope in 2013; the second one is from gravitational waves arising from cosmic inflation. Detecting the B-modes is extremely difficult, particularly as the degree of foreground contamination is unknown, and the weak gravitational lensing signal mixes the relatively strong E-mode signal with the B-mode signal.
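On a small patch of sky the decomposition can be sketched with ordinary Fourier transforms: the Stokes parameters (Q, U) are rotated by twice the angle of the Fourier wavevector, which isolates the curl-free (E) and divergence-free (B) parts. The Q and U maps below are white noise, used only to exercise the transform; this is a flat-sky sketch, not the full spherical analysis used in practice.

```python
import numpy as np

# Flat-sky E/B decomposition:
#   E(l) =  Q(l) cos(2*phi_l) + U(l) sin(2*phi_l)
#   B(l) = -Q(l) sin(2*phi_l) + U(l) cos(2*phi_l)
# where phi_l is the angle of the Fourier wavevector l.
rng = np.random.default_rng(0)
n = 128
Q = rng.normal(size=(n, n))      # illustrative Stokes Q map (white noise)
U = rng.normal(size=(n, n))      # illustrative Stokes U map (white noise)

lx = np.fft.fftfreq(n)[None, :]
ly = np.fft.fftfreq(n)[:, None]
phi = np.arctan2(ly, lx)         # angle of each Fourier mode

Qk, Uk = np.fft.fft2(Q), np.fft.fft2(U)
Ek =  Qk * np.cos(2 * phi) + Uk * np.sin(2 * phi)
Bk = -Qk * np.sin(2 * phi) + Uk * np.cos(2 * phi)

E = np.fft.ifft2(Ek).real
B = np.fft.ifft2(Bk).real
print("rms E-mode: %.3f   rms B-mode: %.3f" % (E.std(), B.std()))
```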
E-modes were first seen in 2002 by the Degree Angular Scale Interferometer (DASI).
Primordial gravitational waves
Primordial gravitational waves are gravitational waves that could be observed in the polarisation of the cosmic microwave background and that have their origin in the early universe. Models of cosmic inflation predict that such gravitational waves should appear; thus, their detection supports the theory of inflation, and their strength can confirm or exclude different models of inflation. The predicted signal is the result of three things: inflationary expansion of space itself, reheating after inflation, and turbulent fluid mixing of matter and radiation.
On 17 March 2014 it was announced that the BICEP2 instrument had detected the first type of B-modes, consistent with inflation and gravitational waves in the early universe at the level of r = 0.20 (+0.07, −0.05), where r, the tensor-to-scalar ratio, is the amount of power present in gravitational waves compared to the amount of power present in other scalar density perturbations in the very early universe. Had this been confirmed, it would have provided strong evidence for cosmic inflation and the Big Bang and against the ekpyrotic model of Paul Steinhardt and Neil Turok. However, on 19 June 2014, considerably lowered confidence in confirming the findings was reported, and on 19 September 2014 new results from the Planck experiment showed that the BICEP2 signal can be fully attributed to cosmic dust.
The second type of B-modes was discovered in 2013 using the South Pole Telescope with help from the Herschel Space Observatory. In October 2014, a measurement of the B-mode polarization at 150 GHz was published by the POLARBEAR experiment. Compared to BICEP2, POLARBEAR focuses on a smaller patch of the sky and is less susceptible to dust effects. The team reported that POLARBEAR's measured B-mode polarization was of cosmological origin (and not just due to dust) at a 97.2% confidence level.
Microwave background observations
Subsequent to the discovery of the CMB, hundreds of cosmic microwave background experiments have been conducted to measure and characterize the signatures of the radiation. The most famous experiment is probably the NASA Cosmic Background Explorer (COBE) satellite that orbited in 1989–1996 and which detected and quantified the large scale anisotropies at the limit of its detection capabilities. Inspired by the initial COBE results of an extremely isotropic and homogeneous background, a series of ground- and balloon-based experiments quantified CMB anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the angular scale of the first acoustic peak, for which COBE did not have sufficient resolution. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory. During the 1990s, the first peak was measured with increasing sensitivity and by 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree. Together with other cosmological data, these results implied that the geometry of the universe is flat. A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array, Degree Angular Scale Interferometer (DASI), and the Cosmic Background Imager (CBI). DASI made the first detection of the polarization of the CMB and the CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum.
In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large scale anisotropies over the full sky. WMAP used symmetric, rapid-multi-modulated scanning, rapid switching radiometers to minimize non-sky signal noise. The first results from this mission, disclosed in 2003, were detailed measurements of the angular power spectrum at a scale of less than one degree, tightly constraining various cosmological parameters. The results are broadly consistent with those expected from cosmic inflation as well as various other competing theories, and are available in detail at NASA's data bank for Cosmic Microwave Background (CMB) (see links below). Although WMAP provided very accurate measurements of the large scale angular fluctuations in the CMB (structures about as broad in the sky as the moon), it did not have the angular resolution to measure the smaller scale fluctuations which had been observed by former ground-based interferometers.
A third space mission, the ESA (European Space Agency) Planck Surveyor, was launched in May 2009 and performed an even more detailed investigation until it was shut down in October 2013. Planck employed both HEMT radiometers and bolometer technology and measured the CMB at a smaller scale than WMAP. Its detectors were trialled in the Antarctic Viper telescope as ACBAR (Arcminute Cosmology Bolometer Array Receiver) experiment—which has produced the most precise measurements at small angular scales to date—and in the Archeops balloon telescope.
On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background. The map suggests the universe is slightly older than researchers had expected. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 370,000 years old. The imprint reflects ripples that arose as early as the first nonillionth of a second of the universe's existence. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. Based on the 2013 data, the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. On 5 February 2015, new data was released by the Planck mission, according to which the age of the universe is 13.799 ± 0.021 billion years and the Hubble constant is 67.74 ± 0.46 (km/s)/Mpc.
Additional ground-based instruments such as the South Pole Telescope in Antarctica and the proposed Clover Project, Atacama Cosmology Telescope and the QUIET telescope in Chile will provide additional data not available from satellite observations, possibly including the B-mode polarization.
Data reduction and analysis
Raw CMBR data, even from space vehicles such as WMAP or Planck, contain foreground effects that completely obscure the fine-scale structure of the cosmic microwave background. The fine-scale structure is superimposed on the raw CMBR data but is too small to be seen at the scale of the raw data. The most prominent of the foreground effects is the dipole anisotropy caused by the Sun's motion relative to the CMBR background. The dipole anisotropy and others due to Earth's annual motion relative to the Sun and numerous microwave sources in the galactic plane and elsewhere must be subtracted out to reveal the extremely tiny variations characterizing the fine-scale structure of the CMBR background.
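The dipole (and monopole) can be removed from pixelized data with a simple linear least-squares fit of a monopole-plus-dipole template. The sketch below invents its own pixelization, amplitudes, and noise level purely for illustration; it is not the pipeline used by any particular experiment.

```python
import numpy as np

# Toy monopole + dipole removal from simulated sky pixels.
rng = np.random.default_rng(3)
npix = 3072
n_hat = rng.normal(size=(npix, 3))                     # stand-in pixel directions
n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)

T0 = 2.7255                                            # K, monopole
dipole_vec = np.array([0.0, 0.0, 3.36e-3])             # K, assumed dipole
sky = T0 + n_hat @ dipole_vec + 80e-6 * rng.normal(size=npix)  # plus "intrinsic" signal

# Least-squares fit: one column for the monopole, three for the dipole.
A = np.column_stack([np.ones(npix), n_hat])
coeffs, *_ = np.linalg.lstsq(A, sky, rcond=None)
residual = sky - A @ coeffs

print("fitted monopole: %.4f K, dipole (mK): %s" % (coeffs[0], 1e3 * coeffs[1:]))
print("residual rms: %.1f microkelvin" % (1e6 * residual.std()))
```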
The detailed analysis of CMBR data to produce maps, an angular power spectrum, and ultimately cosmological parameters is a complicated, computationally difficult problem. Although computing a power spectrum from a map is in principle a simple Fourier transform, it begins by decomposing the map of the sky into spherical harmonics,

T(\theta, \varphi) = \bar{T} + \sum_{\ell=1}^{\infty} \sum_{m=-\ell}^{\ell} a_{\ell m} Y_{\ell m}(\theta, \varphi),

where the \bar{T} term measures the mean temperature and the a_{\ell m} terms account for the fluctuations, Y_{\ell m}(\theta, \varphi) is a spherical harmonic, ℓ is the multipole number and m is the azimuthal number.
By applying the angular correlation function, the sum can be reduced to an expression that involves only ℓ and the power spectrum term C_\ell = \langle |a_{\ell m}|^2 \rangle. The angled brackets indicate the average over all observers in the universe; since the universe is homogeneous and isotropic, there is no preferred observing direction, and C_\ell is therefore independent of m. Different choices of ℓ correspond to different multipole moments of the CMB.
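A minimal sketch of that relation: if the a_ℓm are Gaussian random variables with variance C_ℓ, the standard full-sky estimator Ĉ_ℓ = (1/(2ℓ+1)) Σ_m |a_ℓm|² recovers the input spectrum. The input C_ℓ used below is an arbitrary power law chosen only for illustration.

```python
import numpy as np

# Draw a_lm from an assumed input spectrum and recover C_l with the
# estimator C_hat_l = sum_m |a_lm|^2 / (2l + 1).
rng = np.random.default_rng(1)
lmax = 32
C_in = 1.0 / np.arange(1, lmax + 1) ** 2       # illustrative input C_l, l = 1..lmax

C_hat = np.empty(lmax)
for i, l in enumerate(range(1, lmax + 1)):
    # m = 0 is real; m > 0 are complex, with m < 0 fixed by conjugation,
    # giving 2l + 1 independent real numbers per multipole.
    a_l0 = rng.normal(scale=np.sqrt(C_in[i]))
    a_lm = (rng.normal(scale=np.sqrt(C_in[i] / 2), size=l)
            + 1j * rng.normal(scale=np.sqrt(C_in[i] / 2), size=l))
    C_hat[i] = (a_l0**2 + 2 * np.sum(np.abs(a_lm) ** 2)) / (2 * l + 1)

print("input C_2 = %.4f   estimated C_2 = %.4f" % (C_in[1], C_hat[1]))
```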
In practice it is hard to take the effects of noise and foreground sources into account. In particular, these foregrounds are dominated by galactic emissions such as Bremsstrahlung, synchrotron, and dust that emit in the microwave band; in practice, the galaxy has to be removed, resulting in a CMB map that is not a full-sky map. In addition, point sources like galaxies and clusters represent another source of foreground which must be removed so as not to distort the short scale structure of the CMB power spectrum.
Constraints on many cosmological parameters can be obtained from their effects on the power spectrum, and results are often calculated using Markov chain Monte Carlo sampling techniques.
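As a toy illustration of what such sampling looks like, the snippet below runs a one-parameter Metropolis-Hastings chain on a Gaussian pseudo-likelihood. The parameter name, the pretended constraint of 0.30 ± 0.02, and the step size are all invented; real CMB analyses use far more elaborate likelihoods and samplers.

```python
import numpy as np

# Toy Metropolis-Hastings sampler for a single parameter "omega".
rng = np.random.default_rng(2)

def log_like(omega, mean=0.3, sigma=0.02):
    # Gaussian pseudo-likelihood standing in for a real CMB likelihood.
    return -0.5 * ((omega - mean) / sigma) ** 2

n_steps, step = 20_000, 0.02
chain = np.empty(n_steps)
omega, logL = 0.5, log_like(0.5)          # deliberately poor starting point

for i in range(n_steps):
    prop = omega + step * rng.normal()
    logL_prop = log_like(prop)
    if np.log(rng.uniform()) < logL_prop - logL:   # Metropolis accept/reject
        omega, logL = prop, logL_prop
    chain[i] = omega

burn = chain[n_steps // 4:]               # discard burn-in
print("omega = %.3f +/- %.3f" % (burn.mean(), burn.std()))
```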
CMBR monopole anisotropy (ℓ = 0)
When ℓ = 0, the spherical-harmonic term reduces to a constant, and what is left is just the mean temperature of the CMB. This "mean" is called the CMB monopole, and it is observed to have an average temperature of about Tγ = 2.7255 ± 0.0006 K with one standard deviation confidence. The accuracy of this mean temperature is limited by the scatter among the absolute measurements made by different instruments. Such measurements demand absolute temperature devices, such as the FIRAS instrument on the COBE satellite. The measured kTγ is equivalent to 0.234 meV or 4.6 × 10^−10 me c^2. The photon number density of a blackbody at this temperature is n_γ ≈ 411 cm^−3. Its energy density is about 0.26 eV/cm^3 (equivalently about 4.6 × 10^−34 g/cm^3), and the ratio to the critical density is Ωγ = 5.38 × 10^−5.
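The number density and energy density quoted above follow directly from the monopole temperature through the standard blackbody formulas n_γ = (2ζ(3)/π²)(kT/ħc)³ and u_γ = (π²/15)(kT)⁴/(ħc)³. The short check below should reproduce roughly 411 photons/cm³ and about 0.26 eV/cm³.

```python
import numpy as np
from scipy import constants as const
from scipy.special import zeta

T = 2.7255                                   # K, CMB monopole temperature
kT = const.k * T                             # J
hbar_c = const.hbar * const.c                # J m

n_gamma = 2 * zeta(3) / np.pi**2 * (kT / hbar_c) ** 3    # photons / m^3
u_gamma = np.pi**2 / 15 * kT * (kT / hbar_c) ** 3        # J / m^3

print("n_gamma   ≈ %.0f photons/cm^3" % (n_gamma * 1e-6))
print("u_gamma   ≈ %.3f eV/cm^3" % (u_gamma * 1e-6 / const.e))
print("rho_gamma ≈ %.2e g/cm^3" % (u_gamma / const.c**2 * 1e-3))
```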
CMBR dipole anisotropy (ℓ = 1)
The CMB dipole represents the largest anisotropy, which is in the first spherical harmonic (ℓ = 1). When ℓ = 1, the spherical-harmonic term reduces to a single cosine function of angle across the sky, and what it encodes is the amplitude of the dipole fluctuation. The amplitude of the CMB dipole is around 3.3621 ± 0.0010 mK. Since the universe is presumed to be homogeneous and isotropic, an observer at rest with respect to the CMB should see a blackbody spectrum with the same temperature T at every point in the sky. The spectrum of the dipole has been confirmed to be the differential of a blackbody spectrum.
The CMB dipole is frame-dependent: it can be interpreted as the result of the peculiar motion of the Earth relative to the CMB. Its amplitude depends on time because of the Earth's orbit about the barycenter of the Solar System, which allows a time-dependent term to be added to the dipole expression. The period of this modulation is one year, which fits the observations made by COBE FIRAS. The dipole moment does not encode any primordial information.
From the CMB data, it is seen that the Sun appears to be moving at 368±2 km/s relative to the reference frame of the CMB (also called the CMB rest frame, or the frame of reference in which there is no motion through the CMB). The Local Group — the galaxy group that includes our own Milky Way galaxy — appears to be moving at 627±22 km/s in the direction of galactic longitude ℓ = 276°±3°, b = 30°±3°. This motion results in an anisotropy of the data (CMB appearing slightly warmer in the direction of movement than in the opposite direction). The standard interpretation of this temperature variation is a simple velocity redshift and blueshift due to motion relative to the CMB, but alternative cosmological models can explain some fraction of the observed dipole temperature distribution in the CMB.
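The quoted velocity and dipole amplitude are consistent with the first-order Doppler relation ΔT ≈ (v/c)·T₀; a quick back-of-the-envelope check using the numbers given in the text:

```python
from scipy import constants as const

T0 = 2.7255          # K, CMB monopole temperature
v = 368e3            # m/s, Sun's speed relative to the CMB rest frame

delta_T = (v / const.c) * T0
print("predicted dipole ≈ %.2f mK (measured: 3.3621 mK)" % (delta_T * 1e3))
```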
Multipole (ℓ ≥ 2)
The temperature variation in the CMB temperature maps at higher multipoles, ℓ ≥ 2, is considered to be the result of perturbations of the density in the early universe, before the recombination epoch. Before recombination, the universe consisted of a hot, dense plasma of electrons and baryons. In such a hot, dense environment, electrons and protons could not form neutral atoms. The baryons in this early universe remained highly ionized and so were tightly coupled to photons through Thomson scattering. Pressure and gravity acted against each other and drove oscillations in the photon-baryon plasma. Shortly after the recombination epoch, the rapid expansion of the universe caused the plasma to cool, and these fluctuations were "frozen into" the CMB maps we observe today. This took place at a redshift of around z ≈ 1100.
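Because the CMB temperature scales with redshift as T(z) = T₀(1 + z), the quoted redshift translates directly into the plasma temperature at recombination; a simple check using the present-day monopole temperature from the previous section:

```python
# Temperature of the photon-baryon plasma at recombination.
T0 = 2.7255            # K, present-day CMB temperature
z_rec = 1100           # approximate recombination redshift quoted above
print("T at recombination ≈ %.0f K" % (T0 * (1 + z_rec)))   # roughly 3000 K
```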
With the increasingly precise data provided by WMAP, there have been a number of claims that the CMB exhibits anomalies, such as very large scale anisotropies, anomalous alignments, and non-Gaussian distributions. The most longstanding of these is the low-ℓ multipole controversy. Even in the COBE map, it was observed that the quadrupole (ℓ = 2, spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. In particular, the quadrupole and octupole (ℓ = 3) modes appear to have an unexplained alignment with each other and with both the ecliptic plane and equinoxes. A number of groups have suggested that this could be the signature of new physics at the greatest observable scales; other groups suspect systematic errors in the data.
Ultimately, due to the foregrounds and the cosmic variance problem, the greatest modes will never be as well measured as the small angular scale modes. The analyses were performed on two maps that have had the foregrounds removed as far as possible: the "internal linear combination" map of the WMAP collaboration and a similar map prepared by Max Tegmark and others. Later analyses have pointed out that these are the modes most susceptible to foreground contamination from synchrotron, dust, and Bremsstrahlung emission, and from experimental uncertainty in the monopole and dipole.
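The cosmic-variance limit mentioned here is quantitative: a full-sky, noiseless experiment still measures only 2ℓ+1 modes per multipole, giving an irreducible fractional uncertainty of sqrt(2/(2ℓ+1)) on C_ℓ. A short check shows why the quadrupole can never be pinned down as well as the high-ℓ modes:

```python
import numpy as np

# Irreducible cosmic-variance uncertainty on C_l: sqrt(2 / (2l + 1)).
for l in (2, 3, 10, 100, 1000):
    print("l = %4d : cosmic variance = %4.1f %%" % (l, 100 * np.sqrt(2 / (2 * l + 1))))
```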
A full Bayesian analysis of the WMAP power spectrum demonstrates that the quadrupole prediction of Lambda-CDM cosmology is consistent with the data at the 10% level and that the observed octupole is not remarkable. Carefully accounting for the procedure used to remove the foregrounds from the full sky map further reduces the significance of the alignment by about 5%. Recent observations with the Planck telescope, which is much more sensitive than WMAP and has finer angular resolution, record the same anomaly, so instrumental error (but not foreground contamination) appears to be ruled out. Coincidence is a possible explanation: Charles L. Bennett, chief scientist for WMAP, suggested that coincidence and human psychology were involved, saying "I do think there is a bit of a psychological effect; people want to find unusual things."
Assuming the universe keeps expanding and does not suffer a Big Crunch, a Big Rip, or another similar fate, the cosmic microwave background will continue redshifting until it is no longer detectable, and will be superseded first by the background produced by starlight and perhaps, later, by the background radiation fields of processes that may take place in the far future of the universe, such as proton decay, evaporation of black holes, and positronium decay.
Timeline of prediction, discovery and interpretation
Thermal (non-microwave background) temperature predictions
- 1896 – Charles Édouard Guillaume estimates the "radiation of the stars" to be 5–6K.
- 1926 – Sir Arthur Eddington estimates the non-thermal radiation of starlight in the galaxy "... by the formula E = σT4 the effective temperature corresponding to this density is 3.18° absolute ... black body"
- 1930s – Cosmologist Erich Regener calculates that the non-thermal spectrum of cosmic rays in the galaxy has an effective temperature of 2.8 K
- 1931 – Term microwave first used in print: "When trials with wavelengths as low as 18 cm. were made known, there was undisguised surprise that the problem of the micro-wave had been solved so soon." Telegraph & Telephone Journal XVII. 179/1
- 1934 – Richard Tolman shows that black-body radiation in an expanding universe cools but remains thermal
- 1938 – Nobel Prize winner (1920) Walther Nernst reestimates the cosmic ray temperature as 0.75K
- 1946 – Robert Dicke predicts "... radiation from cosmic matter" at <20 K, but did not refer to background radiation
- 1946 – George Gamow calculates a temperature of 50 K (assuming a 3-billion year old universe), commenting it "... is in reasonable agreement with the actual temperature of interstellar space", but does not mention background radiation.
- 1953 – Erwin Finlay-Freundlich in support of his tired light theory, derives a blackbody temperature for intergalactic space of 2.3K with comment from Max Born suggesting radio astronomy as the arbitrator between expanding and infinite cosmologies.
Microwave background radiation predictions and measurements
- 1941 – Andrew McKellar detected the cosmic microwave background as the coldest component of the interstellar medium by using the excitation of CN doublet lines measured by W. S. Adams in a B star, finding an "effective temperature of space" (the average bolometric temperature) of 2.3 K
- 1946 – George Gamow calculates a temperature of 50 K (assuming a 3-billion year old universe), commenting it "... is in reasonable agreement with the actual temperature of interstellar space", but does not mention background radiation.
- 1948 – Ralph Alpher and Robert Herman estimate "the temperature in the universe" at 5 K. Although they do not specifically mention microwave background radiation, it may be inferred.
- 1949 – Ralph Alpher and Robert Herman revise their estimate of the temperature to 28 K.
- 1953 – George Gamow estimates 7 K.
- 1955 – Émile Le Roux of the Nançay Radio Observatory, in a sky survey at λ = 33 cm, reported a near-isotropic background radiation of 3 kelvins, plus or minus 2.
- 1956 – George Gamow estimates 6 K.
- 1957 – Tigran Shmaonov reports that "the absolute effective temperature of the radioemission background ... is 4±3 K". It is noted that the "measurements showed that radiation intensity was independent of either time or direction of observation ... it is now clear that Shmaonov did observe the cosmic microwave background at a wavelength of 3.2 cm"
- 1960s – Robert Dicke re-estimates a microwave background radiation temperature of 40 K
- 1964 – A. G. Doroshkevich and Igor Dmitrievich Novikov publish a brief paper suggesting microwave searches for the black-body radiation predicted by Gamow, Alpher, and Herman, where they name the CMB radiation phenomenon as detectable.
- 1964–65 – Arno Penzias and Robert Woodrow Wilson measure the temperature to be approximately 3 K. Robert Dicke, James Peebles, P. G. Roll, and D. T. Wilkinson interpret this radiation as a signature of the big bang.
- 1966 – Rainer K. Sachs and Arthur M. Wolfe theoretically predict microwave background fluctuation amplitudes created by gravitational potential variations between observers and the last scattering surface (see Sachs-Wolfe effect)
- 1968 – Martin Rees and Dennis Sciama theoretically predict microwave background fluctuation amplitudes created by photons traversing time-dependent potential wells
- 1969 – R. A. Sunyaev and Yakov Zel'dovich study the inverse Compton scattering of microwave background photons by hot electrons (see Sunyaev–Zel'dovich effect)
- 1983 – Researchers from the Cambridge Radio Astronomy Group and the Owens Valley Radio Observatory first detect the Sunyaev–Zel'dovich effect from clusters of galaxies
- 1983 – RELIKT-1 Soviet CMB anisotropy experiment was launched.
- 1990 – FIRAS on the Cosmic Background Explorer (COBE) satellite measures the black body form of the CMB spectrum with exquisite precision, and shows that the microwave background has a nearly perfect black-body spectrum and thereby strongly constrains the density of the intergalactic medium.
- January 1992 – Scientists that analysed data from the RELIKT-1 report the discovery of anisotropy in the cosmic microwave background at the Moscow astrophysical seminar.
- 1992 – Scientists that analysed data from COBE DMR report the discovery of anisotropy in the cosmic microwave background.
- 1995 – The Cosmic Anisotropy Telescope performs the first high resolution observations of the cosmic microwave background.
- 1999 – First measurements of acoustic oscillations in the CMB anisotropy angular power spectrum from the TOCO, BOOMERANG, and Maxima Experiments. The BOOMERanG experiment makes higher quality maps at intermediate resolution, and confirms that the universe is "flat".
- 2002 – Polarization discovered by DASI.
- 2003 – E-mode polarization spectrum obtained by the CBI. The CBI and the Very Small Array produce yet higher quality maps at high resolution (covering small areas of the sky).
- 2003 – The Wilkinson Microwave Anisotropy Probe spacecraft produces an even higher quality map at low and intermediate resolution of the whole sky (WMAP provides no high-resolution data, but improves on the intermediate resolution maps from BOOMERanG).
- 2004 – E-mode polarization spectrum obtained by the CBI.
- 2004 – The Arcminute Cosmology Bolometer Array Receiver produces a higher quality map of the high resolution structure not mapped by WMAP.
- 2005 – The Arcminute Microkelvin Imager and the Sunyaev–Zel'dovich Array begin the first surveys for very high redshift clusters of galaxies using the Sunyaev–Zel'dovich effect.
- 2005 – Ralph A. Alpher is awarded the National Medal of Science for his groundbreaking work in nucleosynthesis and prediction that the universe expansion leaves behind background radiation, thus providing a model for the Big Bang theory.
- 2006 – The long-awaited three-year WMAP results are released, confirming previous analysis, correcting several points, and including polarization data.
- 2006 – Two of COBE's principal investigators, George Smoot and John Mather, received the Nobel Prize in Physics in 2006 for their work on precision measurement of the CMBR.
- 2006–2011 – Improved measurements from WMAP, new supernova surveys ESSENCE and SNLS, and baryon acoustic oscillations from SDSS and WiggleZ, continue to be consistent with the standard Lambda-CDM model.
- 2010 – The first all-sky map from the Planck telescope is released.
- 2013 – An improved all-sky map from the Planck telescope is released, improving the measurements of WMAP and extending them to much smaller scales.
- 2014 – On March 17, 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-mode power spectrum, which if confirmed, would provide clear experimental evidence for the theory of inflation. However, on 19 June 2014, lowered confidence in confirming the cosmic inflation findings was reported.
- 2015 – On January 30, 2015, the same team of astronomers from BICEP2 withdrew the claim made on the previous year. Based on the combined data of BICEP2 and Planck, the European Space Agency announced that the signal can be entirely attributed to dust in the Milky Way.
- 2018 – The final data and maps from the Planck telescope are released, with improved measurements of the polarization on large scales.
- 2019 – Planck telescope analyses of their final 2018 data continue to be released.
In popular culture
- In the Stargate Universe TV series (2009-2011), an Ancient spaceship, Destiny, was built to study patterns in the CMBR which indicate that the universe as we know it might have been created by some form of sentient intelligence.
- In Wheelers, a novel (2000) by Ian Stewart & Jack Cohen, CMBR is explained as the encrypted transmissions of an ancient civilization. This allows the Jovian "blimps" to have a society older than the currently-observed age of the universe.
- In The Three-Body Problem, a 2008 novel by Liu Cixin, a probe from an alien civilization compromises instruments monitoring the CMBR in order to deceive a character into believing the civilization has the power to manipulate the CMBR itself.
- The 2017 issue of the Swiss 20-franc banknote lists several astronomical objects with their distances – the CMB is mentioned with 430 · 10^15 light-seconds.
- In the 2021 Marvel series WandaVision, a mysterious television broadcast is discovered within the Cosmic Microwave Background.
See also
- Computational packages for cosmologists
- Cosmic neutrino background
- Cosmic microwave background spectral distortions – Fluctuations in the energy spectrum of the microwave background
- Cosmological perturbation theory
- Axis of evil (cosmology) – Name given to an anomaly in astronomical observations of the Cosmic Microwave Background
- Gravitational wave background
- Heat death of the universe – Possible "fate" of the universe.
- Horizons: Exploring the Universe
- Lambda-CDM model – Model of big-bang cosmology
- Observational cosmology – Study of the origin of the universe (structure and evolution)
- Observation history of galaxies – Gravitationally bound astronomical structure
- Physical cosmology – Branch of astronomy
- Timeline of cosmological theories – Timeline of theories about physical cosmology
External links
- Student Friendly Intro to the CMB – A pedagogic, step-by-step introduction to the cosmic microwave background power spectrum analysis, suitable for those with an undergraduate physics background; more in depth than typical online sites and less dense than cosmology texts.
- CMBR Theme on arxiv.org
- Audio: Fraser Cain and Dr. Pamela Gay – Astronomy Cast. The Big Bang and Cosmic Microwave Background – October 2006
- Visualization of the CMB data from the Planck mission
- Copeland, Ed. "CMBR: Cosmic Microwave Background Radiation". Sixty Symbols. Brady Haran for the University of Nottingham.
Foot and mouth disease
Foot-and-mouth disease (FMD), also known as Aphthenseuche, aphthae epizooticae and epidemic stomatitis, is a highly contagious viral disease of cattle and pigs and a notifiable animal disease. Other ungulates such as deer, goats and sheep, but also elephants, rats and hedgehogs, can become infected. Horses are not susceptible to FMD, and humans only rarely.
In 1898, Friedrich Loeffler and Paul Frosch were the first to show that the FMD pathogen is a virus, making it the first animal virus ever identified. The two bacteriologists discovered that lymph from infected animals, administered intravenously, caused disease in healthy calves even after prior filtration through bacteria-proof diatomite. The foot-and-mouth disease virus, the causative agent of FMD, is a highly contagious single-stranded positive-sense RNA virus [ss(+)RNA]. It belongs to the genus Aphthovirus of the virus family Picornaviridae. The members of this family are non-enveloped, small (25–30 nm) viruses with an icosahedral capsid (protein shell) that contains single-stranded ribonucleic acid (RNA) as the viral genetic material. After the protein shell has been dissolved, virus replication takes place in the cytoplasm of an infected host cell. The newly formed virions are released by lysis after the cell membrane has dissolved.
Foot-and-mouth disease usually occurs locally, and the virus is primarily transmitted by contact and smear infection through direct contact with infected animals, contaminated pens, or cattle transport vehicles. However, the virus can also spread through the air. The clothing and skin of farmers and other people who handle animals, standing water, uncooked feed waste, feed additives containing infected animal products, and animal products such as cheese or meat can all harbor the virus. Cows can contract foot-and-mouth disease from infected bulls through semen. Control measures include quarantine, the destruction of infected cattle herds, and an export ban on animal products to countries not affected by the disease.
Humans become infected with foot-and-mouth disease only in exceptional cases. For consumers of beef and pork, as well as pasteurized milk or products made from them, there is no danger even in the event of an epidemic. However, as the disease spreads extremely quickly among animals, FMD is a serious threat to agriculture.
FMD is widespread almost worldwide. Only in New Zealand have no foot-and-mouth disease outbreaks been registered; in Australia the last outbreak was in 1872. The United States (last occurrence in 1929), Canada (1952), Mexico (1954) and Chile (1988) are also considered to be FMD-free. In Europe, only the Scandinavian countries of Norway (last occurrence in 1952), Finland (1959) and Sweden (1966) have been spared from outbreaks in recent decades. The last epidemic occurred in Austria in 1973. The disease last occurred in Germany in 1988. There were cases of FMD in 2001 in Spain, France, the Netherlands and Ireland. Outbreaks of FMD were last seen in Great Britain in 2001 and 2007. In Switzerland the epidemic occurred in the years 1871/72, 1899–1900, 1911–14, 1920/21, 1939/40 and 1965 with devastating effects. After the last outbreaks in 1968 and 1980, Switzerland is officially free of FMD.
FMD is still widespread in Africa, Asia, and parts of South America; these regions are considered enzootic. In Europe, the disease is largely under control through state veterinary surveillance, but it is repeatedly reintroduced. The diverse transmission routes and the rapid rate of spread of FMD quickly lead to epizootics, so the disease remains a constant threat to Europe.
The infection is characterized by a strongly pronounced viraemic phase, during which the pathogen spreads generally throughout the host and settles in its target organs. The FMD virus has a high affinity for the skin and mucous membranes (epitheliotropism). Affected are the mucous membranes of the oral cavity, esophagus, and rumen pillars, as well as the hairless skin of the nostrils, muzzle, snout, udder, and claws. The pathogen also affects the skeletal and heart muscles (myotropism). Neurotropic properties are rarely observed.
Cattle are mainly infected by air, while infection in pigs usually occurs via the oral route. The aphtha arising at the portal of entry (the "primary aphtha") usually escapes observation. From the primary site of multiplication (in pigs the tonsils, in cattle the pharyngeal mucosa and bronchioles), the virus reaches the lymphoreticular system (especially liver and spleen) in an initial viraemic phase via lymph and blood. The course of the disease is determined by how successfully the virus continues to multiply in the organs for which it has primary affinity. In the case of strong multiplication, colliquative necrosis occurs there, followed by generalized viraemia, in the course of which the FMD pathogen reaches the muscles, skin, mucous membranes, and occasionally the CNS. The viraemia lasts about four days. After generalization, viral RNA is widely detectable in various epithelia. A visible sign of the organ manifestation of the virus is the formation of "secondary aphthae". Predisposing factors such as particular mechanical stress favor their development. The aphthae form in the stratum spinosum of the epidermis: the infected keratinocytes are destroyed, and the resulting cavities fill with clear fluid that coalesces into a large blister. The base of the blister is formed by the intact stratum basale with the underlying, well-perfused papillary body. After bursting, the aphthae tend to leave flat erosions. Depending on the myotropic affinity of the pathogen strain, cell damage of variable extent occurs in heart and skeletal muscle cells.
Clinical symptoms and course of disease
In cattle, the incubation period is two to seven days. The outbreak of the disease is characterized by a fever phase of up to three days (up to 42 degrees Celsius), accompanied by severe disturbance of the general condition. In dairy cows, a sudden drop in milk yield, up to the point where the milk dries up entirely, can be observed. Already in the fever phase, a strong production of viscous saliva (the "FMD beard") begins, with simultaneous reddening of the oral mucosa. Loss of appetite, rumination disorders, and the appearance of pools of saliva around the animals characterize the progression of the disease.
As the disease progresses, fluid-filled blisters (aphthae) the size of peas to pigeon eggs form on the muzzle, throughout the mucous membrane of the mouth, and on the tongue. At the same time, further aphthae develop in the claw area, on the udder skin, and on the teats. The general condition is severely disturbed. The animals show expressions of pain in the form of a tightly closed mouth, smacking jaw movements, and lameness. After the blisters burst, erosions form, in some cases over large areas, and the healing process begins. At the same time, other animals in the herd continue to fall ill. Healing of the lesions in the mouth area takes up to 14 days. The claw aphthae heal within a month. The convalescence phase is often disturbed by secondary bacterial infections.
In addition to benign variants of the disease with partially mild symptoms (mortality 2–5%), there is also a malignant form (mortality up to 80%). The cause is highly virulent pathogens with pronounced myotropism. This form occurs preferentially in calves. Even with mild courses, heart muscle damage caused by myocarditis is the dominant feature in this age group. Affected animals die within 24 hours with the signs of a severe general illness.
Natural infection leaves resilient immunity against the respective virus type for up to 12 months. Feared and expected sequelae and long-term damage after surviving an infection include bacterial secondary infections, mastitis, changes to the sole horn and claws up to "shoeing" (complete detachment of the claw horn), muscle damage, myocarditis, as well as persistent severe depression and loss of performance.
In pigs, the incubation period is one to three days (in individual cases up to a maximum of 12). The initial feverish phase lasts up to four days. Overall, the disease is less dramatic than in cattle.
The claws are primarily affected by the aphthae, which are mainly visible on the coronary band and in the interdigital cleft. The changes to the snout and oral mucosa are rather inconspicuous. In sows, aphthae also form on the udder.
The clinical picture in older animals is characterized by (supporting-leg) lameness of varying severity, up to complete recumbency. Initially only a few animals are affected; the disease spreads through the herd within a few days. Sudden deaths occur in suckling and weaner pigs due to heart muscle damage. The epithelial lesions on the snout and teats heal within two weeks; with coronary band and sole defects, the course is usually complicated by purulent secondary infections.
Post-infection immunity lasts five to seven months. Late and consequential damage includes mastitis, metritis, and abortions, shoeing (loss of the claw horn), and loss of performance caused by myocarditis.
Other diseases of the vesicular disease complex in pigs, all of which are associated with vesicle formation, cannot be distinguished from FMD by the clinical picture alone: vesicular stomatitis (VS), swine vesicular disease (SVD), and vesicular exanthema of swine (VES). Selenium poisoning must also be considered in the differential diagnosis.
Sheep and goat
The incubation period of the so-called "benign mouth disease" is 2–14 days. In sheep and goats, the signs of illness are less noticeable than in cattle.
In sheep, the main feature is the formation of aphthae on the coronary band and in the interdigital cleft. Changes to the oral mucosa and lips are often uncharacteristic. Infestation of the herd proceeds slowly and incompletely (within three to six weeks). Often the only signs of the disease are painful, severe lameness. From about the third day of lameness, the reddened sores become clearly visible after the aphthae have burst. Mortality in adult animals is low. In lambs, the malignant myocarditic form dominates, with fever, diarrhea, and apathy. Losses reach up to 80%; the animals die without any signs of aphthae formation. The lesions heal within two to three weeks; secondary infections complicate the course.
In goats, the disease is either mild or severe, the latter combined with myocardial damage and high mortality. There is a febrile phase with general disturbances and a drop in milk yield. The formation of aphthae in the mucous membrane of the mouth is distinct, but because they rupture early, they are visible only for a short time. In the head area there can be swelling with erection of the hair (so-called "Dickkopf", thick head). Rhinitis is common. Claw aphthae are rarely seen.
Late and consequential damage in small ruminants includes hoof infections, abortions, metritis, and mastitis. Type-specific immunity after field virus infection lasts one to two years or longer.
Due to their low susceptibility, humans are affected by the disease only extremely rarely, and the prognosis is favorable. Infection occurs directly through contact with infected animals or as a laboratory infection. Indirect transmission via infected milk is also possible.
With an incubation period of two to six days, the disease proceeds as a biphasic, cyclic infection, just as it does in cloven-hoofed animals. After a short, moderate phase of fever and unspecific general symptoms such as nausea, exhaustion, headache, and body aches, painful aphthae can appear on the reddened oral mucosa, but preferably on the skin of the hands (fingertips) and feet and in the genital area. The skin erosions that develop after the aphthae dry out usually heal completely within ten days.
In the differential diagnosis, FMD must be distinguished from hand, foot and mouth disease, which is also viral and is associated with very similar symptoms. The latter is described far more often in humans, especially in young children, and is caused by a different virus from the family Picornaviridae, the enterovirus Coxsackie A.
In addition to the externally visible changes, further aphthae at various stages of healing are found in the throat and esophagus. On the rumen pillars and omasal leaves, often only scab-covered lesions are visible. In the absence of aphthae in the mucous membranes, catarrhal swellings or minor hemorrhages appear. Petechial bleeding may also be visible under the epicardium. No lesions are observed in the myocardium and skeletal muscles of older animals, and the virus only appears to multiply in these locations in young animals. The high mortality in young animals is attributed to acute myocarditis, which can be recognized with the naked eye by venous congestion and large blood clots, especially in the left ventricle. The damaged heart muscle is soft, poorly contracted, and shows gray-white stripes and spots of variable size ("tiger heart"). The left ventricle and the cardiac septum are particularly affected. This form of the disease is often accompanied by the complete absence of aphthous changes at the usual predilection sites. In a peracute course, there may even be no visible changes in the heart muscle. Occasionally the skeletal muscles are also affected by streaky changes.
In the early stages, the lesions can only be made visible under the microscope. The first histological changes in the stratum spinosum are characterized by vacuolar degeneration, increasing eosinophilic staining of the cell cytoplasm, and the formation of edema in the intercellular space. This is followed by cell necrosis and reactive infiltration with leukocytes (monocytes, granulocytes). From the lesions, which by then have become visible, the actual aphthae develop as the epithelium separates from the underlying tissue and the cavity fills with clear vesicular fluid. In some cases the amount of fluid can be minimal. The epithelium can also become necrotic or tear apart from mechanical trauma without aphthae forming. In the heart muscle, the histological picture is that of lymphohistiocytic myocarditis with hyaline-lumpy degeneration ("Zenker's degeneration") and necrosis of the heart muscle cells (myocytes).
If FMD is suspected, an actual outbreak must be officially established. According to Section 1 of the "Ordinance on Protection against Foot and Mouth Disease" (FMD-VO), an outbreak of the disease is only considered proven when the pathogen is detected in the form of viral antigen or viral RNA. This also applies in the absence of clinical symptoms. In addition, the serological detection of antibodies against FMD, or a rise in titer, in animals that are demonstrably unvaccinated is binding. If there is an epizootiological connection to a primary focus of the epidemic (secondary outbreak), the results of clinical or pathological-anatomical examinations alone may be sufficient.
In cattle, the clinical picture is usually clear. Diagnosis in small ruminants is often made difficult by inapparent and mild forms. The simultaneous occurrence of sudden lameness in most of the herd and an increased peracute mortality of newborn and/or very young lambs are the first evidence of FMD, before the pathognomonic aphthae formation becomes apparent. In pig herds, foot-and-mouth disease must be considered in the differential diagnosis as soon as frequent lameness occurs in connection with blisters at the sites predisposed to aphthae.
A thorough examination of the population under suitable conditions (good lighting, mechanical cleaning of soiled predilection sites) in the event of suspicion, precise knowledge of the clinical picture, and a veterinarian who is sufficiently sensitized to the epidemic are prerequisites for rapid detection of the disease.
FMD is a notifiable animal disease. In the event of suspicion, the animal owner, the care staff, or the attending veterinarian must immediately call in the official veterinarian, who examines the herd, takes samples for further laboratory diagnostic clarification if necessary, and issues additional initial animal health orders in accordance with the FMD Regulation (see under measures).
The samples taken must be forwarded to the national FMD reference laboratory as quickly as possible (by courier or helicopter), without any loss of time. Suspect samples are to be announced to the examining facility in advance so that immediate further processing can be ensured.
For the detection of infectious virus, antigen, or nucleic acid, aphthal fluid (lymph) and covering material from fresh aphthae are best suited. If no aphthae are present, swabs can alternatively be taken from the transition to healthy tissue. In addition, nasal swabs as well as organ samples from killed animals (e.g., altered areas of the rumen pillars, heart, and udder) can be used as sample material. The samples are mixed with PBS buffer and glycerine and shipped cooled at a neutral pH. If more than a week has passed since infection, virus detection from pharyngeal mucus samples (so-called "probang" samples) replaces swabbing of the nose; these samples must be shipped frozen. Blood samples not only contain antibodies to the virus (from the 5th day after infection) but are also used for virus detection during the viraemic phase. The additional submission of serum samples is mandatory, especially for small ruminants.
Laboratory diagnostic procedures
Approved detection methods in FMD diagnostics, as well as reference laboratory protocols for carrying out the tests, can be found in the "Manual of Diagnostic Tests and Vaccines for Terrestrial Animals", Chapter 2.1.1 (Foot and mouth disease) of the OIE. At the national level, information on the diagnosis of foot-and-mouth disease is set out in the "Federal Catalog of Measures for Animal Diseases", Part III.2.
Because the pathogen is highly contagious, FMD diagnostics may only be carried out by high-security laboratories of biosafety level 4. The German FMD reference laboratory is part of the Friedrich Loeffler Institute, whose research facility is located on the island of Riems. The international reference laboratory is located at the Institute of Animal Health in Pirbright, England.
The most urgent task of laboratory diagnostics is to establish a primary outbreak so that the culling of infected herds and the imposition of restriction measures can be initiated as quickly as possible. Once the suspicion has been confirmed, the virus is characterized so that a vaccine recommendation can be made if necessary. In addition, epidemiological studies on a molecular genetic basis are carried out to determine the geographical origin of the pathogen ("tracing back"). The laboratory diagnostic methods are divided into pathogen detection and antibody detection.
The detection of viral antigens or viral nucleic acids of the FMD pathogen is sufficient for a positive test result.
The preferred method for viral antigen detection with simultaneous determination of the serotype is the enzyme-linked immunosorbent assay (ELISA). Owing to its 10–100 times higher sensitivity (better sensitivity and specificity) and lower susceptibility to interference, it has replaced the classic complement fixation reaction (KBR) as the rapid detection method (< 1 day). It is carried out as an indirect, so-called "double sandwich" ELISA. Rabbit antibodies bound to the microtiter test plate capture the viral antigen to be detected. After a subsequent incubation with guinea pig antibodies, the antigen-antibody reaction is made optically visible by adding a peroxidase conjugate and a specific substrate. The rows of the microtiter plate contain antisera against the seven known FMD serotypes. The ELISA thus allows a preliminary typing, which, however, requires further confirmation with monoclonal antibodies to be certain. The free eighth row can be used for differential diagnosis of further viral pathogens (e.g., swine vesicular disease). Questionable results can be checked again in a repeated ELISA run after the sample material has been passed through cell culture.
Only 80% of all samples that are positive in cell culture also react in the ELISA. Because of the serious consequences of a confirmed suspicion, virus isolation in cell culture is therefore always used in Germany in parallel with the ELISA. Infectious sample material from suspected clinical cases is inoculated into susceptible cell cultures (e.g., calf thyroid cells, hamster kidney cells) or into two- to seven-day-old baby mice in order to multiply any infectious virus present. At least one to three days pass before a cytopathic effect occurs or the test animal dies. A subsequent virus replication phase is followed by identification and characterization using ELISA or reverse transcriptase polymerase chain reaction (RT-PCR).
In order to have a second laboratory method with a sensitivity comparable to that of cell culture, tests for the detection of viral nucleic acid have been established. RT-PCR can be used to detect genome fragments of the FMD virus in a wide variety of diagnostic materials. In combination with real-time quantitative PCR, a sensitivity comparable to virus isolation can be achieved. Process steps that can be automated enable a higher sample throughput. The classic PCRs were designed for type-independent FMD detection by selecting a genome region that is largely conserved across all serotypes for amplification. The use of specific primers, however, also allows reliable differentiation of the seven serotypes. The detection of viral RNA in tissue samples is also possible by in situ hybridization. This technique is only used by a few specialized laboratories, although simplified test systems for "field use" are already in development.
The virus genome is also used for molecular epidemiology. The basis is the comparison of genetic differences between individual virus variants. Based on sequencing data for the 1D gene (which codes for the capsid protein VP1), phylogenetic trees have been created that show the genetic relationship between vaccine and field virus strains of all seven serotypes. Many laboratories have developed their own methods here. Amplification of the viral RNA by RT-PCR, followed by decoding of the nucleotide sequence (sequencing), is usually the method of choice for generating these data. The databases of the reference laboratories now contain more than 3000 partial sequences.
Serological tests for FMD are mainly used to answer the following questions: a) certification of animals as free of FMD infection and vaccination for export; b) confirmation of suspected cases of disease; c) demonstration of freedom from infection; and d) confirmation of the effectiveness of a vaccination. Serological detection methods for FMD antibodies are divided into those that detect immunoglobulins against structural proteins (SP) and those that detect antibodies directed against non-structural proteins (NSP).
Detection of structural protein antibodies
The virus neutralization test and various types of antibody ELISA ("solid-phase competition ELISA" (SPCE), "liquid-phase blocking ELISA" (LPBE)) for the detection of antibodies against structural proteins are used as serotype-specific serological tests. They are mandatory in international animal trade. They recognize antibodies produced in response to vaccination as well as to natural FMD infection. As a result, they are useful for detecting recent infections in non-vaccinated individuals or for monitoring the immune status of a population in vaccination programs.
As highly sensitive methods, they require a strong similarity between the circulating field strain and the FMD virus antigen used in the test. Using poly- or monoclonal antibodies, the ELISA tests are versatile, quick to perform, and can also be prepared with inactivated antigens. False-positive reactions in the ELISA are to be expected with a small percentage of samples in the low titer range. The virus neutralization test depends on the use of cell cultures and is therefore less versatile; it also takes longer (four days) and is more susceptible to contamination. Combined use of the ELISA as a screening method, with subsequent testing of reactors in the VNT, reduces the probability of false-positive results to a minimum.
Detection of non-structural protein antibodies
Assays for antibodies to non-structural proteins are useful in detecting recent or ongoing virus replication in the host, regardless of its vaccination status. In contrast to structural proteins, non-structural proteins are highly conserved within the FMD virus species . The detection of these antibodies is consequently not serotype-specific, which is an advantage as a detection method on an international level.
The traditional test for the detection of antibodies against FMD non-structural proteins was the immunodiffusion test. It was used to detect the "virus infection associated antigen" (VIAA), whose main component is protein 3D. However, antibodies against the latter can also be formed by vaccinated individuals.
Modern methods use the genetically engineered proteins 3ABC and 3AB. Antibodies against these proteins are generally regarded as reliable indicators of an FMD infection, since they show high immunogenicity in experiments. Antibodies against expressed recombinant viral NSPs (e.g., 3A, 3B, 2B, 2C, 3ABC) can be detected by various ELISA variants or by immunoblotting. Three indirect ELISA methods are approved for FMD (South America, Denmark, Brescia). These ELISAs either use purified antigens bound directly to the test plate or use polyclonal/monoclonal antibodies to "capture" specific antigens from semi-purified preparations.
The enzyme-linked immunoelectrotransfer blot assay (EITB) was first published in 1993. It is widely used in South America, where it also serves as a confirmatory test for reactors found in ELISA screening.
The methods have a specificity of 99%. However, a lack of purity of the vaccines can adversely affect the diagnostic specificity. The presence of NSP in some vaccine preparations can lead to incorrect assignment of animals that have already been vaccinated repeatedly.
The test sensitivity is considered unsatisfactory in experimentally vaccinated cattle that were exposed to a field virus infection and subsequently showed virus persistence (vaccinated "carriers"). These animals show only a local immune response, but no immunoglobulin G against NSP. However, it appears that carrier animals can be detected using IgA from saliva; these antibodies are evidently produced over a longer period only by actually infected animals.
Until December 31, 1991, compulsory vaccination of cattle was carried out in the EU to prevent an FMD epidemic. Vaccination leads to serious barriers to trade: like infected animals, vaccinated animals carry antibodies in their blood and can therefore only be differentiated from infected animals by using specially marked (marker) vaccines. There is also the risk of the pathogen being spread by vaccinated animals. Vaccination in the EU was therefore stopped. Treatment is also generally not permitted.
If FMD is suspected, the farm in question is closed, sheep are usually culled as a precaution, and samples are examined for FMD. Furthermore, a restricted zone with a radius of at least three kilometers is set up, all animal holdings within it are examined for FMD, and animal transport is prohibited.
If the suspicion is confirmed, the herd is culled and disposed of safely, as are the neighboring herds within a radius of one kilometer. In the three-kilometer restricted zone, no animals or semen may be transported for 15 days, and the roads are closed. After these 15 days, animals may only be transported with a permit (and only for slaughter). Milk may only be used for separate processing. An observation zone is set up within a radius of 10 km of the outbreak; there, animals may be transported within the zone with a permit. If no further cases have occurred within 30 days of the outbreak, rats and mice are controlled and cleaning and disinfection are carried out.
Since the virus is very resilient, it can persist for months in soil, stalls, waste, and straw. In the event of an outbreak, extensive disinfection with formic acid or heat (at least 60 °C) must therefore be carried out.
Measures such as a ban on animal transport are often taken when an FMD epidemic occurs in neighboring countries. Because of its high resistance, even the wheels of cars are disinfected when crossing the border in the event of major epidemics.
In February 2001, an epizootic broke out in the United Kingdom. During this outbreak, which spread sporadically to mainland Europe, more than four million animals were culled. It was not until January 14, 2002, after three months without reports of new cases, that the island was again declared free of the disease.
There are always isolated outbreaks in the rest of Europe, Africa, Asia and South America, as most recently on August 3, 2007 in Surrey (Great Britain).
- Wolfgang Bisping: Compendium of the state control of animal diseases. Stuttgart: Enke 1999, ISBN 3-7773-1423-4, pp. 101–104.
- Hans Plonait, Klaus Bickhardt (eds.): Textbook of pig diseases. 2nd, revised edition. Berlin: Parey 1997, ISBN 3-8263-3149-4, pp. 66–68.
- Hartwig Bostedt, Kurt Dedié: Sheep and Goat Diseases. 2nd, revised and expanded edition. Stuttgart: Ulmer 1996, ISBN 3-8252-8008-X, pp. 35–37 (Diseases of domestic animals; UTB for science: large series).
- Gustav Rosenberger (ed.): Diseases of the Cattle. 3rd, unchanged edition. Berlin [et al.]: Blackwell-Wiss.-Verl. 1994, ISBN 3-8263-3029-3, pp. 835–843 (Blackwell Science).
- Winfried Hofmann, Hartwig Bostedt: Cattle Diseases. Vol. 1: Internal and surgical diseases. Stuttgart: Ulmer 1992, ISBN 3-8252-8044-6, p. 243 f. (Diseases of pets).
- Anton Mayr (ed.): Rolle/Mayr. Medical microbiology, infection and epidemic science for veterinarians, biologists and agricultural scientists and those interested in related fields: textbook for practice and study. 6th, revised edition. Stuttgart: Enke 1993, ISBN 3-432-84686-X, pp. 311–317.
- Animal diseases in the tropics and subtropics. Edited by the British Veterinary Association. Konstanz: Terra-Verlag 1968, pp. 51–57.
- Joachim Beer: Foot and Mouth Disease. In: J. Beer (ed.): Infectious diseases of domestic animals. Jena: Fischer-Verlag 1974.
- Karl Wurm, A. M. Walter: Infectious Diseases. In: Ludwig Heilmeyer (ed.): Textbook of internal medicine. Springer-Verlag, Berlin/Göttingen/Heidelberg 1955; 2nd edition, ibid. 1961, pp. 9–223, here: pp. 209 f.
- FLI : Foot and Mouth Disease: Official method and case definition
- Daniel Baumann, Daniel Freudenreich: One virus particle is enough for an infection. In: Berliner Zeitung, August 7, 2007, accessed June 19, 2015.
- NÖN: 3,732 cattle had to be culled, week 32/2013
- Animal Disease Report 2011 by the BMELV. In: Deutsches Tierärzteblatt (DTBL), Volume 60, May 2012, pp. 714–715.
- ADNS (Animal Disease Notification System): Animal disease situation per country and per disease, different years.
- Article cattle diseases in the Historical Lexicon of Switzerland
- Foot and Mouth Disease
- Swiss Association for the History of Veterinary Medicine SVGVM: Fighting Foot and Mouth Disease in the 20th Century (Memento of January 1, 2016 in the Internet Archive)
- Keystone: Foot and Mouth Disease (1965/1966) (Memento of November 30, 2018 in the Internet Archive)
- OIE : OIE Members' official FMD status map
- Stern, August 6, 2007: From the laboratory to the stable. Retrieved December 25, 2014.
Presentation on theme: "Coordinates and Design"— Presentation transcript:
1 Coordinates and Design Chapter OneCoordinates and Design
2 What is a Cartesian Plane? The Cartesian Plane (or coordinate grid) is made up of two directed real lines that intersect perpendicularly at their respective zero points. ORIGIN: the point where the x-axis and the y-axis cross, (0, 0).
3 Parts of a Cartesian Plane The horizontal axis is called the x-axis.The vertical axis is called the y-axis.
4 Quadrants The Coordinate Grid is made up of 4 Quadrants: Quadrant I, Quadrant II, Quadrant III, Quadrant IV.
5 Signs of the Quadrants The signs of the quadrants are either positive (+) or negative (-): Quadrant I (+, +), Quadrant II (-, +), Quadrant III (-, -), Quadrant IV (+, -).
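As a small aside for readers following along in code (not part of the original lesson), the sign rules above can be written as a tiny Python function; the example points are borrowed from the plotting exercise later in the deck.

```python
def quadrant(x, y):
    """Return which quadrant the point (x, y) lies in, based on the sign rules above."""
    if x > 0 and y > 0:
        return "Quadrant I"
    if x < 0 and y > 0:
        return "Quadrant II"
    if x < 0 and y < 0:
        return "Quadrant III"
    if x > 0 and y < 0:
        return "Quadrant IV"
    return "on an axis"   # points with x == 0 or y == 0 sit on an axis, not in a quadrant

print(quadrant(5, 3))    # Quadrant I
print(quadrant(-7, 4))   # Quadrant II
```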
6 1.1 The Cartesian Plane Identify Points on a Coordinate Grid: A: (x, y), B: (x, y), C: (x, y), D: (x, y). HINT: To find the x coordinate, count how many units to the right if positive, or how many units to the left if negative.
7 1.1 The Cartesian Plane Identify Points on a Coordinate Grid. B: (5, 3), C: (9, 3), D: (9, 7).
8 When we read coordinates we read them in the order x then y. Plot the following points on the smart board: A: (9, -2), B: (7, -5), C: (2, -4), D: (2, -1), E: (0, 1), F: (-2, 3), G: (-7, 4).
9 What are common mistakes when constructing a Coordinate Plane? Units that are not the same in terms of intervals; switching the order in which the coordinates appear; wrong signs for the quadrants.
10 Assignment Textbook: Page 9 #5, 7; for questions 9 and 10, plot on two separate graphs. Graph paper is provided for you. Challenge: #14, 16.
11 1.2 Create Designs Put your thinking cap on! What is the following question asking us to find? Label each vertex of each shape. Question: What is a vertex?
12 1.2 Create Designs What is a vertex? A vertex is a point where two sides of a figure meet. The plural is vertices! The vertices of the triangle are A (4, 4), B (0, 4), C (2, 0).
13 Create Designs Graphic artists use coordinate grids to help them make certain designs. Flags and corporate logos can all be constructed through the use of coordinate grids.
14 1.2 Create Designs Study the following flag. How many vertices can you find in the design? Imagine seeing this on a coordinate grid. Notice how it is centered and equally distributed on each side.
19 1.2 Create Designs Assignment: You have been hired to create a flag for the company "Flags R Us!" They are looking for a new creative design that can be based on an interest or hobby of yours. The flag design can be a cool pattern or related to any sport, hobby, or activity you are involved with. The flag needs to have a minimum of 10 vertices. They want a detailed location of any 10 vertices located on the bottom of your design (list the coordinates). It is your responsibility to use a coordinate grid to create your own pattern.
20 Evaluation Your flag will be evaluated as follows: Neatness (have you made sure to color inside the lines?); Vertices (do you have at least 10?); Design (have you used designs and shapes to create an image?); Handout (do you have all the vertices clearly labeled in a legend?).
23 1.3 TRANSFORMATIONSThis section will focus on the use of Translations, Reflections, Rotations, and describe the image resulting from a transformation.
24 1.3 Transformations Transformations include translations, reflections, and rotations.
25 Translation Translations are SLIDES!!! Let's examine some translations related to coordinate geometry.
26 1.3 Transformations Translation: a slide along a straight line. Count the number of horizontal units and vertical units represented by the translation arrow. Label the vertices A, B, C. Label the new translation A', B', C'. The horizontal distance is 8 units to the right, and the vertical distance is 2 units down: (+8, -2).
27 1.3 Transformations Translation: Count the number of horizontal units the image has shifted. Count the number of vertical units the image has shifted. We would say the transformation is: 1 unit left, 6 units up, or (-1, +6).
28 In this example we have moved each vertex 6 units along a straight line. If you have noticed the corresponding A is now labeled A’What about the other letters?
29 A translation "slides" an object a fixed distance in a given direction A translation "slides" an object a fixed distance in a given direction. The original object and its translation have the same shape and size, and they face in the same direction
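For readers who like to see this in code, a minimal Python sketch of a translation follows; the triangle's vertices are made up for illustration, and the shift (+8, -2) matches the earlier slide.

```python
# A translation just adds the same (dx, dy) to every vertex,
# so the shape and size of the figure are unchanged.
def translate(vertices, dx, dy):
    return [(x + dx, y + dy) for (x, y) in vertices]

triangle = [(-3, 1), (-1, 4), (0, 1)]        # example vertices (made up)
print(translate(triangle, 8, -2))            # slide 8 units right and 2 units down
```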
30 When you are sliding down a water slide, you are experiencing a translation. Your body is moving a given distance (the length of the slide) in a given direction. You do not change your size, shape or the direction in which you are facing.
32 1.3 Translations 4 a) What is the translation shown in this picture? 6 units right, 5 units up, or (+6, +5).
33 1.3 Translations 4 b) What is the translation in the diagram below? The horizontal distance is 6 units left; the vertical distance is 4 units up. Or (-6, +4).
34 1.3 Translations #5 B) The coordinates of the translation image are P'(+7, +4), Q'(+7, -2), R'(+6, +1), S'(+5, +2). C) The translation arrow is shown: 3 units right and 6 units down, (+3, -6).
35 Reflections Is figure A'B'C'D' a reflection image of figure ABCD in the line of reflection, n? How do you know? Figure A'B'C'D' IS a reflection image of figure ABCD in the line of reflection, n. Each vertex in the red figure is the same distance from the line of reflection, n, as its reflected vertex in the blue image.
36 A reflection is often called a flip. Under a reflection, the figure does not change size; it is simply flipped over the line of reflection. Reflecting over the x-axis: when you reflect a point across the x-axis, the x-coordinate remains the same, but the y-coordinate is transformed into its opposite.
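A small Python sketch of reflections follows; the slide above states only the x-axis rule, and the y-axis rule (x changes sign) is added here for completeness. The square's vertices are made up.

```python
# Reflections flip a point over an axis; the coordinate along that axis keeps its sign.
def reflect_x(vertices):
    return [(x, -y) for (x, y) in vertices]   # over the x-axis: y changes sign

def reflect_y(vertices):
    return [(-x, y) for (x, y) in vertices]   # over the y-axis: x changes sign

square = [(1, 1), (4, 1), (4, 3), (1, 3)]     # example vertices (made up)
print(reflect_x(square))   # [(1, -1), (4, -1), (4, -3), (1, -3)]
```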
43 Transformations Rotation: A turn about a fixed point called “the center of rotation” The rotation can be clockwise or counterclockwise
44 1.3 Transformations Rotation: A turn about a fixed point called "the center of rotation". The rotation can be clockwise or counterclockwise.
45 Transformations Assignment Page 27: Let's go over #13 and #14 as a class. #15, 16, 17, and 18 on your own!
46 1.3 Transformations Pg 27 #13 a) The coordinates for ∆HAT are H(–3, –2), A(–1, –3), and T(–3, –6). b) The rotation is 180° counterclockwise. Discuss the different angles of rotation: 90°, 180°, 270°, 360°.
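For rotations, the standard coordinate rules can be sketched in Python as below. Note this assumes the center of rotation is the origin (0, 0), whereas some of the exercises (such as #15 on the next slide) use a different center.

```python
# Counterclockwise rotations about the origin (0, 0):
#   90 degrees:  (x, y) -> (-y,  x)
#  180 degrees:  (x, y) -> (-x, -y)
#  270 degrees:  (x, y) -> ( y, -x)
def rotate_180(vertices):
    return [(-x, -y) for (x, y) in vertices]

hat = [(-3, -2), (-1, -3), (-3, -6)]          # triangle HAT from the slide
print(rotate_180(hat))                         # [(3, 2), (1, 3), (3, 6)]
```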
47 RotationsPg 27 #15.a) The coordinates for the centre of rotation are (–4, –4).b) Rotating the figure 90° clockwise will produce the same image as rotating it 270° in the opposite direction, or counterclockwise.
48 Homework Questions#16a) The coordinates for the centre of rotation are (+2, –1).b) The direction and angle of the rotation could be 180° clockwise or 180° counterclockwise.
49 Homework Questions#17a) The figure represents the parallelogram rotated about C, 270° clockwise.b) The coordinates for Q'R'S'T' are Q'(–1, –1), R'(–1, +2), S'(+1, +1), and T'(+1, –2).
50 Homework Questions# 18b) The rotation image is identical to the original image.
51 Geometric Transformations REFLECTION by D. Fisher. Please send feedback to the author. Answers and discussion are in the notes for each slide.
52 Reflection, Rotation, or Translation 1.Reflection, Rotation, or TranslationRotation
53 Reflection, Rotation, or Translation 1.Reflection, Rotation, or TranslationRotation
54 Reflection, Rotation, or Translation 2.Reflection, Rotation, or TranslationTranslation
55 Reflection, Rotation, or Translation 2.Reflection, Rotation, or TranslationTranslation
56 Reflection, Rotation, or Translation 3.Reflection, Rotation, or TranslationReflection
57 Reflection, Rotation, or Translation 3.Reflection, Rotation, or TranslationREFLECTIONReflection
58 Reflection, Rotation, or Translation 4.Reflection, Rotation, or TranslationReflection
59 Reflection, Rotation, or Translation 4.Reflection, Rotation, or TranslationReflectionREFLECTION
60 Reflection, Rotation, or Translation 5.Reflection, Rotation, or TranslationRotation
61 Reflection, Rotation, or Translation 5.Reflection, Rotation, or TranslationRotationROTATION
62 Reflection, Rotation, or Translation 7.Reflection, Rotation, or TranslationReflection
63 Reflection, Rotation, or Translation 7.Reflection, Rotation, or TranslationReflection
64 Reflection, Rotation, or Translation 6.Reflection, Rotation, or TranslationTranslation
65 TRANSLATION – MOVE FROM ONE POINT TO ANOTHER 8.Reflection, Rotation, or TranslationTRANSLATION – MOVE FROM ONE POINT TO ANOTHERTranslation
66 Why is this not perfect reflection? 10.Why is this not perfect reflection?The zebras have slightly different striping. One has its nose closer to the ground.
67 Why is this not perfect reflection? 10.Why is this not perfect reflection?ZEBRAS HAVE SLIGHTLY DIFFERENT STRIPINGThe zebras have slightly different striping. One has its nose closer to the ground.
68 Reflection, Rotation, or Translation 11.Reflection, Rotation, or TranslationPROBABLY DOESN’T FIT ANY CATEGORYReflection is probably the best answer because the inside part of the bird’s foot is slightly shorter than the outside part. However, this example from nature does not really fit exactly in any of the categories.
69 Reflection, Rotation, or Translation 12.Reflection, Rotation, or TranslationTranslation.TRANSLATION
70 Reflection, Rotation, or Translation 13. Reflection. However, a rotation of 180° would look the same. Why possibly both? Either reflected or rotated 180°.
71 Reflection, Rotation, or Translation 14.Reflection, Rotation, or TranslationROTATIONRotation
72 Reflection, Rotation, or Translation 15.Reflection, Rotation, or TranslationREFLECTION IN SEVERAL DIRECTIONSReflection in several directions.
73 Reflection, Rotation, or Translation 16.Reflection, Rotation, or TranslationRotationROTATION
74 Reflection, Rotation, or Translation 17. Reflection. Note the position of the purple tips; a rotation of 180° would cause the top purple tip to be on the bottom.
75 Reflection, Rotation, or Translation 18.Reflection, Rotation, or TranslationTranslation.
76 Reflection, Rotation, or Translation 19.Reflection in multiple mirrors.Reflection in multiple mirrors.
77 Reflection, Rotation, or Translation 20.Reflection, Rotation, or TranslationTranslation. Watch the colors.
78 Reflection, Rotation, or Translation 21.Reflection, Rotation, or TranslationReflection. Note the position of the red parts.
79 Reflection, Rotation, or Translation 22.Reflection, Rotation, or TranslationRotation. Note the red parts.
80 Transformations Assignment Page # 1-10, 12, 15, 16, 18 and 21 on your own!
82 BattleGraph Directions Each team will hide their 4 battleships in their HIDDEN Mathematical Ocean by writing the correct number of points for each battleship with its corresponding letter. All ships must be either horizontal or vertical. Ships may not overlap. Draw a rectangle around the correct number of points for each battleship.
83 BattleGraph Example Keep this board HIDDEN from the other team! This is the INSIDE board.
84 ATTACKERS & DEFENDERS Teams will take turns being the ATTACKERS and the DEFENDERS. The ATTACKERS will select a place to attack by giving an ordered pair of numbers to the DEFENDERS. The ATTACKERS will then write the ordered pair in the box to the side and circle that point on their VISIBLE Mathematical Ocean. The DEFENDERS will find the coordinate on their HIDDEN Mathematical Ocean and circle it. The DEFENDERS will say if the attack was a HIT (ATTACKERS fill in the circle) or a MISS (ATTACKERS leave the circle empty). Teams will then switch roles.
85 Winning BattleGraph If the coordinate is not written in the box on the side, the attack is automatically a MISS. If the coordinate is not in the Mathematical Ocean, the attack is automatically a MISS. If the ATTACKERS sink one of your battleships, you must tell them; otherwise you will LOSE one turn. The ATTACKERS will connect the points once the entire ship is SUNK. To WIN the game you must sink all of the other team's battleships before they sink all of yours.
86 BattleGraph Example Keep this board VISIBLE! This is the OUTSIDE board.Use this board to ATTACK.
87 Get Ready to Hide Your Battleships: Aircraft Carrier (5 A points), Cruiser (4 C points), Destroyer (3 D points), Submarine (2 S points) on the HIDDEN Mathematical Ocean.
88 Battleships 1 Aircraft Carrier (AAAAA), 1 Cruiser (CCCC), 1 Destroyer (DDD), 1 Submarine (SS). Use this board to HIDE your battleships. Keep this board HIDDEN from the other team! This is the INSIDE board. Home Page
89 Use this board to ATTACK. Keep this board VISIBLE! This is the OUTSIDE board.
This unit of 1st grade math worksheets will focus on the concept of one more than and one less than. It addresses CCSS.MATH.CONTENT.1.NBT.A.1, CCSS.MATH.CONTENT.1.NBT.C.5, and CCSS.MATH.CONTENT.2.NBT.B.8.
Students should be provided a 100-bead abacus or base-ten blocks for these two-digit addition problems.
Math worksheets grade 1 more less. This is a good way to work on number order and place value with your students. 1 more or 1 less. Free grade 1 math worksheets.
Practice skills include identifying more or less quantities, choosing items that are fewer or more in number, coloring and drawing activities, a.m. and p.m. reading-time practice, and more. First grade math made easy provides practice at all the major topics for grade 1 with emphasis on addition and subtraction concepts.
These printable 1st grade math worksheets help students master basic math skills. The initial focus is on numbers and counting, followed by arithmetic and concepts related to fractions, time, money, measurement and geometry; simple word problems review all these concepts. Each piece of candy has a number on it. 1st grade math worksheets, printable PDFs.
We hope you find them very useful and interesting. We have crafted many worksheets covering various aspects of this topic and many more. There are 1st grade math worksheets on addition (add one to other numbers, adding double-digit numbers, addition with carrying, etc.), subtraction (subtraction word problems, subtraction of small numbers, subtracting double digits, etc.), numbers (number lines, ordering numbers, comparing numbers, ordinal numbers, etc.), and telling time (a.m. and p.m.).
Add two single-digit numbers using mental math strategies such as the trick with nine and eight, doubles, or doubles plus one more. Worksheets: math, grade 1. Grade 1: Number and Operations in Base Ten; Grade 2: Number and Operations in Base Ten.
Below you will find a wide range of our printable worksheets in chapter More and Less of section Mixed Operations. More or less / fewer worksheets: the printable worksheets on this page strengthen the knowledge of kindergarten and 1st grade kids in comparing two or more quantities. Learn how the workbook correlates to the Common Core State Standards for Mathematics.
This coloring math worksheet gives your child practice finding 1 more and 1 less than numbers up to 20. These worksheets are appropriate for first grade math. It also helps to introduce the concept of adding and subtracting, as each problem is essentially a +1 or -1 math problem.
Your child will connect the spaceships and rockets to the planets and stars with 1 or 10 more or less in this coloring math worksheet. Students are not required to remember these sums by heart in 1st grade. Download free printable math worksheet and practice maths quickly.
More or less weight worksheet for grade 1 kids to learn maths in an easy and fun way. It includes a review of kindergarten topics and a preview of topics in grade 2.
Vector projection equation
What is projection formula?
: a perspective formula projected so as to represent it in two dimensions — compare structural formula.
How do you calculate projection?
To forecast sales, multiply the number of units by the price you sell them for. Create projections for each month. Your sales forecast will show a projection of $12,000 in car wash sales for April. As the projected month passes, look at the difference between expected outcomes and actual results.
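A minimal Python sketch of this month-by-month calculation; the unit counts and price below are assumed, chosen only so that April matches the $12,000 figure in the answer.

```python
# Hypothetical numbers for illustration (e.g., 1,200 washes at $10 each gives $12,000).
units_sold_per_month = {"April": 1200, "May": 1400}
price_per_unit = 10.00

projected_sales = {month: units * price_per_unit
                   for month, units in units_sold_per_month.items()}
print(projected_sales)   # {'April': 12000.0, 'May': 14000.0}
```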
How do you calculate a vector?
Vectors: A vector has magnitude (size) and direction. A vector is often written in bold, like a or b. A vector can be broken up into components. We can then add vectors by adding the x parts and adding the y parts. When we break up a vector like that, each part is called a component. The magnitude of a vector a is written |a| or ||a||.
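In code, component-wise addition and the magnitude might look like the following Python sketch; the example vectors are made up.

```python
import math

def add(a, b):
    # Add vectors by adding the x parts and adding the y parts.
    return (a[0] + b[0], a[1] + b[1])

def magnitude(a):
    # |a| = sqrt(x^2 + y^2)
    return math.hypot(a[0], a[1])

a, b = (3, 4), (1, -2)        # example vectors (made up)
print(add(a, b))              # (4, 2)
print(magnitude(a))           # 5.0
```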
What is projection rule?
The projection law states that in any triangle, each side equals the algebraic sum of the projections of the other two sides onto it, where A, B, C are the three angles of the triangle and a, b, c are the corresponding opposite sides.
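For reference, the standard statement of the projection law, with a, b, c opposite to A, B, C as above, is:

\[
a = b\cos C + c\cos B, \qquad
b = c\cos A + a\cos C, \qquad
c = a\cos B + b\cos A .
\]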
What is the projection of B vector on a vector?
The vector projection of b onto a is the vector whose length is the scalar projection of b onto a, which begins at the point A and points in the same direction as a (or the opposite direction if the scalar projection is negative).
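A small Python sketch of the scalar and vector projection formulas follows; the example vectors are chosen arbitrarily for illustration.

```python
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def scalar_projection(b, a):
    """Signed length of the shadow of b on a: (a . b) / |a|."""
    return dot(a, b) / math.hypot(a[0], a[1])

def vector_projection(b, a):
    """proj_a b = ((a . b) / |a|^2) * a, the vector described above."""
    k = dot(a, b) / dot(a, a)
    return (k * a[0], k * a[1])

a, b = (3, 0), (2, 2)                     # example vectors (made up)
print(scalar_projection(b, a))            # 2.0
print(vector_projection(b, a))            # (2.0, 0.0)
```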
What is vector projection used for?
Vector projection is useful in physics applications involving force and work. When a box is pulled by a force vector at an angle, some of the force is wasted pulling up against gravity. In real life this may be useful because of friction, but for now this portion of the effort contributes nothing to the horizontal movement of the box.
Is dot product a projection?
The dot product as projection. The dot product of the vectors a (in blue) and b (in green), when divided by the magnitude of b, is the projection of a onto b.
What is the vector projection of U onto V?
The distance we travel in the direction of v while traversing u is called the component of u with respect to v and is denoted comp_v u. The vector parallel to v, with magnitude comp_v u, in the direction of v is called the projection of u onto v and is denoted proj_v u. Note that proj_v u is a vector and comp_v u is a scalar.
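In symbols, consistent with the description above:

\[
\operatorname{comp}_{\mathbf v}\mathbf u = \frac{\mathbf u \cdot \mathbf v}{\lVert \mathbf v\rVert},
\qquad
\operatorname{proj}_{\mathbf v}\mathbf u = \frac{\mathbf u \cdot \mathbf v}{\lVert \mathbf v\rVert^{2}}\,\mathbf v .
\]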
What is a vector in math?
Definition of a vector. A vector is an object that has both a magnitude and a direction. Geometrically, we can picture a vector as a directed line segment, whose length is the magnitude of the vector and with an arrow indicating the direction. The direction of the vector is from its tail to its head.
What is the length of a vector?
The magnitude of a vector is the length of the vector. The magnitude of the vector a is denoted as ∥a∥. See the introduction to vectors for more about the magnitude of a vector. Formulas for the magnitude of vectors in two and three dimensions in terms of their coordinates are derived in this page.
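The coordinate formulas referred to above are, in two and three dimensions:

\[
\lVert \mathbf a\rVert = \sqrt{a_1^2 + a_2^2}
\quad\text{(two dimensions)},
\qquad
\lVert \mathbf a\rVert = \sqrt{a_1^2 + a_2^2 + a_3^2}
\quad\text{(three dimensions)}.
\]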
What is projection types of projection?
Following are the types of projections: One-point (one principal vanishing point), Two-point (two principal vanishing points), Three-point (three principal vanishing points), Cavalier, Cabinet, Multi-view, Axonometric (Isometric, Dimetric, Trimetric), Parallel projections, Perspective projections, Orthographic.
What is projection in Triangle?
The projection formula gives the relation between the angles and sides of a triangle. We can find the length of a side of the triangle if the other two sides and the corresponding angles are given, using the projection formula.
In the common-emitter section of this chapter, we saw a SPICE analysis where the output waveform resembled a half-wave rectified shape: only half of the input waveform was reproduced, with the other half being completely cut off. Since our purpose at that time was to reproduce the entire waveshape, this constituted a problem. The solution to this problem was to add a small bias voltage to the amplifier input so that the transistor stayed in active mode throughout the entire wave cycle. This addition was called a bias voltage.
A half-wave output is not a problem for every application; some applications actually require this kind of amplification. Because it is possible to operate an amplifier in modes other than full-wave reproduction, and because different applications require different ranges of reproduction, it is useful to describe the degree to which an amplifier reproduces the input waveform by designating it according to class. Amplifier class operation is categorized with alphabetical letters: A, B, C, and AB.
For Class A operation, the entire input waveform is faithfully reproduced.
Class A operation can only be obtained when the transistor spends its entire time in the active mode, never reaching either cutoff or saturation. To achieve this, sufficient DC bias voltage is usually set at the level necessary to drive the transistor exactly halfway between cutoff and saturation. This way, the AC input signal will be perfectly “centered” between the amplifier’s high and low signal limit levels.
Class A: The amplifier output is a faithful reproduction of the input.
Class B operation is what we had the first time an AC signal was applied to the common-emitter amplifier with no DC bias voltage: the transistor spent half its time in active mode and the other half in cutoff, with the input voltage too low (or even of the wrong polarity!) to forward-bias its base-emitter junction.
Class B: Bias is such that half (180°) of the waveform is reproduced.
By itself, an amplifier operating in class B mode is not very useful. In most circumstances, the severe distortion introduced into the waveshape by eliminating half of it would be unacceptable. However, class B operation is a useful mode of biasing if two amplifiers are operated as a push-pull pair, each amplifier handling only half of the waveform at a time:
Class B push pull amplifier: Each transistor reproduces half of the waveform. Combining the halves produces a faithful reproduction of the whole wave.
Transistor Q1 “pushes” (drives the output voltage in a positive direction with respect to ground), while transistor Q2 “pulls” the output voltage (in a negative direction, toward 0 volts with respect to ground). Individually, each of these transistors is operating in class B mode, active only for one half of the input waveform cycle. Together, however, the pair functions as a team to produce an output waveform identical in shape to the input waveform.
A decided advantage of class B (push-pull) amplifier design over the class A design is greater output power capability. With a class A design, the transistor dissipates considerable energy in the form of heat because it never stops conducting current. At all points in the wave cycle, it is in the active (conducting) mode, conducting substantial current and dropping substantial voltage. There is substantial power dissipated by the transistor throughout the cycle. In a class B design, each transistor spends half the time in cutoff mode, where it dissipates zero power (zero current = zero power dissipation). This gives each transistor a time to “rest” and cool while the other transistor carries the burden of the load. Class A amplifiers are simpler in design but tend to be limited to low-power signal applications for the simple reason of transistor heat dissipation.
Another class of amplifier operation known as class AB is somewhere between class A and class B: the transistor spends more than 50% but less than 100% of the time conducting current.
If the input signal bias for an amplifier is slightly negative (opposite of the bias polarity for class A operation), the output waveform will be further “clipped” than it was with class B biasing, resulting in an operation where the transistor spends most of the time in cutoff mode:
Class C: Conduction is for less than a half cycle (< 180°).
At first, this scheme may seem utterly pointless. After all, how useful could an amplifier be if it clips the waveform as badly as this? If the output is used directly with no conditioning of any kind, it would indeed be of questionable utility. However, with the application of a tank circuit (parallel resonant inductor-capacitor combination) to the output, the occasional output surge produced by the amplifier can set in motion a higher-frequency oscillation maintained by the tank circuit. This may be likened to a machine where a heavy flywheel is given an occasional “kick” to keep it spinning:
Class C amplifier driving a resonant circuit.
Called class C operation, this scheme also enjoys high power efficiency since the transistor(s) spend the vast majority of time in the cutoff mode, where they dissipate zero power. The rate of output waveform decay (decreasing oscillation amplitude between “kicks” from the amplifier) is exaggerated here for the benefit of illustration. Because of the tuned tank circuit on the output, this circuit is usable only for amplifying signals of definite, fixed amplitude. A class C amplifier may be used in an FM (frequency modulation) radio transmitter. However, the class C amplifier may not directly amplify an AM (amplitude modulated) signal due to distortion.
Another kind of amplifier operation, significantly different from Class A, B, AB, or C, is called Class D. It is not obtained by applying a specific measure of bias voltage as are the other classes of operation, but requires a radical re-design of the amplifier circuit itself. It is a little too early in this chapter to investigate exactly how a class D amplifier is built, but not too early to discuss its basic principle of operation.
A class D amplifier reproduces the profile of the input voltage waveform by generating a rapidly pulsing square-wave output. The duty cycle of the output waveform (time “on” versus total cycle time) varies with the instantaneous amplitude of the input signal. The plots in the figure below demonstrate this principle.
Class D amplifier: Input signal and unfiltered output.
The greater the instantaneous voltage of the input signal, the greater the duty cycle of the output square-wave pulse. If there can be any goal stated of the class D design, it is to avoid active-mode transistor operation. Since the output transistor of a class D amplifier is never in the active mode, only cutoff or saturated, there will be little heat energy dissipated by it. This results in very high power efficiency for the amplifier. Of course, the disadvantage of this strategy is the overwhelming presence of harmonics on the output. Fortunately, since these harmonic frequencies are typically much greater than the frequency of the input signal, these can be filtered out by a low-pass filter with relative ease, resulting in an output more closely resembling the original input signal waveform. Class D technology is typically seen where extremely high power levels and relatively low frequencies are encountered, such as in industrial inverters (devices converting DC into AC power to run motors and other large devices) and high-performance audio amplifiers.
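As a rough numerical illustration of the duty-cycle idea described above (not a circuit design), here is a small Python sketch that compares an input sine wave with a high-frequency triangle carrier, which is one common way to generate such a pulse stream. The sample rate, signal frequency, and carrier frequency are assumed values chosen for demonstration.

```python
import numpy as np

fs = 100_000            # sample rate, Hz (assumed)
f_in = 1_000            # input signal frequency, Hz (assumed)
f_carrier = 20_000      # triangle carrier frequency, Hz (assumed)

t = np.arange(0, 0.002, 1 / fs)
signal = np.sin(2 * np.pi * f_in * t)                       # input waveform, -1..+1
carrier = 2 * np.abs(2 * ((t * f_carrier) % 1) - 1) - 1     # triangle wave, -1..+1

# The "output transistor" is only ever fully on or fully off:
pwm = np.where(signal > carrier, 1.0, 0.0)

# Low-pass filtering (here a crude moving average) recovers the input's shape.
window = int(fs / f_carrier)
recovered = np.convolve(pwm, np.ones(window) / window, mode="same")
```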
A term you will likely come across in your studies of electronics is something called quiescent, which is a modifier designating the zero input condition of a circuit. Quiescent current, for example, is the amount of current in a circuit with zero input signal voltage applied. Bias voltage in a transistor circuit forces the transistor to operate at a different level of collector current with zero input signal voltage than it would without that bias voltage. Therefore, the amount of bias in an amplifier circuit determines its quiescent values.
Quiescent Current of Amplifiers
In a class A amplifier, the quiescent current should be exactly half of its saturation value (halfway between saturation and cutoff, cutoff by definition being zero). Class B and Class C amplifiers have quiescent current values of zero since these are supposed to be cut-off with no signal applied. Class AB amplifiers have very low quiescent current values, just above cutoff. To illustrate this graphically, a “load line” is sometimes plotted over a transistor’s characteristic curves to illustrate its range of operation while connected to a load resistance of specific value shown in the figure below.
Example load line drawn over transistor characteristic curves from Vsupply to saturation current.
A load line is a plot of collector-to-emitter voltage over a range of collector currents. At the lower-right corner of the load line, voltage is at maximum and current is at zero, representing a condition of cutoff. At the upper-left corner of the line, voltage is at zero while current is at a maximum, representing a condition of saturation. Dots marking where the load line intersects the various transistor curves represent realistic operating conditions for those base currents given.
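As a small numerical illustration of a load line (a sketch only; the supply voltage, load resistance and β below are assumed round numbers, not values taken from the figures), each operating point satisfies V_CE = V_supply - I_C x R_load, and the collector current cannot exceed the saturation value V_supply / R_load:

v_supply = 15.0      # supply voltage in volts (assumed)
r_load = 2_000.0     # load resistance in ohms (assumed)
beta = 100.0         # current gain (assumed)

i_sat = v_supply / r_load    # collector current at saturation (upper-left end of the load line)
print(f"cutoff point:     V_CE = {v_supply:.1f} V, I_C = 0 mA")
print(f"saturation point: V_CE = 0.0 V, I_C = {i_sat * 1e3:.1f} mA")

for i_b_uA in (10, 20, 30, 40, 50, 75):
    i_c = min(beta * i_b_uA * 1e-6, i_sat)   # transistor regulates I_C = beta * I_B until saturation
    v_ce = v_supply - i_c * r_load
    note = "  (saturated)" if i_c >= i_sat else ""
    print(f"I_B = {i_b_uA:3d} uA -> I_C = {i_c * 1e3:4.1f} mA, V_CE = {v_ce:5.2f} V{note}")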
Quiescent operating conditions may be shown on this graph in the form of a single dot along the load line. For a class A amplifier, the quiescent point will be in the middle of the load line, as in the figure below.
Quiescent point (dot) for class A.
In this illustration, the quiescent point happens to fall on the curve representing a base current of 40 µA. If we were to change the load resistance in this circuit to a greater value, it would affect the slope of the load line, since a greater load resistance would limit the maximum collector current at saturation, but would not change the collector-emitter voltage at cutoff. Graphically, the result is a load line with a different upper-left point and the same lower-right point, as in the figure below.
Load line resulting from increased load resistance.
Note how the new load line doesn’t intercept the 75 µA curve along its flat portion as before. This is very important to know because the non-horizontal portion of a characteristic curve represents a condition of saturation. Having the load line intercept the 75 µA curve outside of the curve’s horizontal range means that the amplifier will be saturated at that amount of base current. Increasing the load resistor value is what caused the load line to intercept the 75 µA curve at this new point, and it indicates that saturation will occur at a lesser value of base current than before.
With the old, lower-value load resistor in the circuit, a base current of 75 µA would yield a proportional collector current (base current multiplied by β). In the first load line graph, a base current of 75 µA gave a collector current almost twice what was obtained at 40 µA, as the β ratio would predict. With the new load line, however, collector current increases only marginally between base currents of 40 µA and 75 µA, because the transistor begins to lose sufficient collector-emitter voltage to continue regulating collector current.
To maintain linear (no-distortion) operation, transistor amplifiers shouldn't be operated at points where the transistor will saturate; that is, the load line should intersect the collector current curves only along their horizontal portions. We'd have to add a few more curves to the graph in the figure below before we could tell just how far we could “push” this transistor with increased base currents before it saturates.
More base current curves show saturation detail.
It appears in this graph that the highest-current point on the load line falling on the straight portion of a curve is the point on the 50 µA curve. This new point should be considered the maximum allowable input signal level for class A operation. Also for class A operation, the bias should be set so that the quiescent point is halfway between this new maximum point and cutoff, as shown in the figure below.
New quiescent point avoids saturation region.
Now that we know a little more about the consequences of different DC bias voltage levels, it is time to investigate practical biasing techniques. So far, we have used a DC voltage source (battery) connected in series with the AC input signal to bias the amplifier for whatever desired class of operation. In real life, the connection of a precisely-calibrated battery to the input of an amplifier is simply not practical. Even if it were possible to customize a battery to produce just the right amount of voltage for any given bias requirement, that battery would not remain at its manufactured voltage indefinitely. Once it started to discharge and its output voltage drooped, the amplifier would begin to drift toward class B operation.
Take, for instance, the circuit illustrated in the common-emitter section for SPICE simulation, shown again in the figure below.
Impractical base battery bias.
That 2.3-volt “Vbias” battery would not be practical to include in a real amplifier circuit. A far more practical method of obtaining bias voltage for this amplifier would be to develop the necessary 2.3 volts using a voltage divider network connected across the 15-volt battery. After all, the 15-volt battery is already there by necessity, and voltage divider circuits are easy to design and build. Let’s see how this might look in the figure below.
Voltage divider bias.
If we choose a pair of resistor values for R2 and R3 that will produce 2.3 volts across R3 from a total of 15 volts (such as 8466 Ω for R2 and 1533 Ω for R3), we should have our desired value of 2.3 volts between base and emitter for biasing with no signal input. The only problem is, this circuit configuration places the AC input signal source directly in parallel with R3 of our voltage divider. This is not acceptable, as the AC source will tend to overpower any DC voltage dropped across R3. Parallel components must have the same voltage, so if an AC voltage source is directly connected across one resistor of a DC voltage divider, the AC source will “win” and there will be no DC bias voltage added to the signal.
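As a quick check of the divider arithmetic, using the resistor values just quoted, a short sketch:

v_supply = 15.0
r2 = 8466.0     # upper divider resistor, in ohms
r3 = 1533.0     # lower divider resistor, in ohms

v_bias = v_supply * r3 / (r2 + r3)    # unloaded voltage-divider output
print(f"unloaded bias voltage = {v_bias:.3f} V")   # prints roughly 2.300 V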
One way to make this scheme work, although it may not be obvious why it will work, is to place a coupling capacitor between the AC voltage source and the voltage divider as in Figure below.
Coupling capacitor prevents voltage divider bias from flowing into signal generator.
The capacitor forms a high-pass filter between the AC source and the DC voltage divider, passing almost all of the AC signal voltage on to the transistor while blocking all DC voltage from being shorted through the AC signal source. This makes much more sense if you understand the superposition theorem and how it works. According to superposition, any linear, bilateral circuit can be analyzed in a piecemeal fashion by only considering one power source at a time, then algebraically adding the effects of all power sources to find the final result. If we were to separate the capacitor and R2—R3 voltage divider circuit from the rest of the amplifier, it might be easier to understand how this superposition of AC and DC would work.
With only the AC signal source in effect, and a capacitor with arbitrarily low impedance at the signal frequency, almost all the AC voltage appears across R3:
Due to the coupling capacitor’s very low impedance at the signal frequency, it behaves much like a piece of wire, thus it can be omitted for this step in superposition analysis.
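The claim that the coupling capacitor behaves like a piece of wire at the signal frequency can be checked with the standard reactance formula X_C = 1 / (2πfC). Using the 100 µF capacitor and 2000 Hz signal that appear in the SPICE example below (this is a side calculation, not output from the simulation):

import math

c = 100e-6    # coupling capacitor in farads (the 100 uF value used below)
f = 2000.0    # signal frequency in hertz

x_c = 1.0 / (2 * math.pi * f * c)    # capacitive reactance
print(f"X_C at {f:.0f} Hz = {x_c:.3f} ohms")   # roughly 0.796 ohms: effectively a short at this frequency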
With only the DC source in effect, the capacitor appears to be an open circuit, and thus neither it nor the shorted AC signal source will have any effect on the operation of the R2—R3 voltage divider in the figure below.
The capacitor appears to be an open circuit as far as the DC analysis is concerned.
Combining these two separate analyses in Figure below, we get a superposition of (almost) 1.5 volts AC and 2.3 volts DC, ready to be connected to the base of the transistor.
Combined AC and DC circuit.
Enough talk—it's about time for a SPICE simulation of the whole amplifier circuit in the figure below. We will use a capacitor value of 100 µF to obtain an arbitrarily low (0.796 Ω) impedance at 2000 Hz:
SPICE simulation of voltage divider bias.
voltage divider biasing
vinput 1 0 sin (0 1.5 2000 0 0)
c1 1 5 100u
r1 5 2 1k
r2 4 5 8466
r3 5 0 1533
q1 3 2 0 mod1
rspkr 3 4 8
v1 4 0 dc 15
.model mod1 npn
.tran 0.02m 0.78m
.plot tran v(1,0) i(v1)
.end
Note the substantial distortion in the output waveform in Figure above. The sine wave is being clipped during most of the input signal’s negative half-cycle. This tells us the transistor is entering into cutoff mode when it shouldn’t (I’m assuming a goal of class A operation as before). Why is this? This new biasing technique should give us exactly the same amount of DC bias voltage as before, right?
With the capacitor and R2—R3 resistor network unloaded, it will provide exactly 2.3 volts worth of DC bias. However, once we connect this network to the transistor, it is no longer unloaded. Current drawn through the base of the transistor will load the voltage divider, thus reducing the DC bias voltage available for the transistor. Using the diode current source transistor model in Figure below to illustrate, the bias problem becomes evident.
Diode transistor model shows loading of voltage divider.
A voltage divider output depends not only on the size of its constituent resistors but also on how much current is being divided away from it through a load. The base-emitter PN junction of the transistor is a load that decreases the DC voltage dropped across R3, due to the fact that both the bias current and IR3 are drawn through the R2 resistor, upsetting the divider ratio formerly set by the resistance values of R2 and R3. To obtain a DC bias voltage of 2.3 volts, the values of R2 and/or R3 must be adjusted to compensate for the effect of base current loading. To increase the DC voltage dropped across R3, lower the value of R2, raise the value of R3 or both.
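To get a rough feel for how badly the base loads the divider, the base can be modelled (very crudely) as a 0.7 V base-emitter drop reached through the 1 kΩ base resistor R1 of the SPICE circuit; the 0.7 V figure and the single-node model are simplifying assumptions, not results taken from the simulation. Solving the node equation for both resistor pairs shows the original 8466 Ω / 1533 Ω divider sagging well below 2.3 volts once loaded, while the 6 kΩ / 4 kΩ pair lands close to it:

def loaded_divider_voltage(v_supply, r2, r3, r_base, v_be=0.7):
    # Node equation at the divider output:
    #   (v_supply - V) / r2 = V / r3 + (V - v_be) / r_base
    num = v_supply / r2 + v_be / r_base
    den = 1.0 / r2 + 1.0 / r3 + 1.0 / r_base
    return num / den

v_supply, r_base = 15.0, 1000.0
for r2, r3 in ((8466.0, 1533.0), (6000.0, 4000.0)):
    v_unloaded = v_supply * r3 / (r2 + r3)
    v_loaded = loaded_divider_voltage(v_supply, r2, r3, r_base)
    print(f"R2 = {r2:6.0f}, R3 = {r3:6.0f}: unloaded {v_unloaded:.2f} V, loaded roughly {v_loaded:.2f} V")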
No distortion of the output after adjusting R2 and R3.
voltage divider biasing
vinput 1 0 sin (0 1.5 2000 0 0)
c1 1 5 100u
r1 5 2 1k
r2 4 5 6k <--- R2 decreased to 6 k
r3 5 0 4k <--- R3 increased to 4 k
q1 3 2 0 mod1
rspkr 3 4 8
v1 4 0 dc 15
.model mod1 npn
.tran 0.02m 0.78m
.plot tran v(1,0) i(v1)
.end
The new resistor values of 6 kΩ and 4 kΩ (R2 and R3, respectively) in the figure above result in class A waveform reproduction, just the way we wanted.
Grade Level: 6 (5-7)
Time Required: 1 hour
Expendable Cost/Group: US $0.00
Group Size: 2
Activity Dependency: None
Subject Areas: Chemistry, Physical Science
Summary
Students learn about the periodic table and how pervasive the elements are in our daily lives. After reviewing the table organization and facts about the first 20 elements, they play an element identification game. They also learn that engineers incorporate these elements into the design of new products and processes. Acting as computer and animation engineers, students creatively express their new knowledge by creating a superhero character based on one of the elements they now know so well. They will then pair with another superhero and create a dynamic duo out of the two elements, which will represent a molecule.
Information in the periodic table of the elements helps engineers in all disciplines, because they use elements in all facets of materials design. Exploiting the characteristics of the various elements helps engineers design stronger bridges, lighter airplanes, non-corrosive buildings, as well as agriculture, food, drinking water and medical products. Since everything known to humans is composed of these elements, everything that engineers create uses this knowledge.
After this activity, students should be able to:
- Identify three elements and several of their characteristics.
- Describe how engineers always use their knowledge about element properties when designing and creating virtually everything we see around us.
- Use the superhero analogy to make models of both atoms and molecules.
Each TeachEngineering lesson or activity is correlated to one or more K-12 science,
technology, engineering or math (STEM) educational standards.
All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN),
a project of D2L (www.achievementstandards.org).
In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics;
within type by subtype, then by grade, etc.
NGSS Performance Expectation:
MS-PS1-1. Develop models to describe the atomic composition of simple molecules and extended structures. (Grades 6 - 8)

This activity focuses on the following Three Dimensional Learning aspects of NGSS:

Science & Engineering Practices: Develop a model to predict and/or describe phenomena.

Disciplinary Core Ideas: Substances are made from different types of atoms, which combine with one another in various ways. Atoms form molecules that range in size from two to thousands of atoms. Solids may be formed from molecules, or they may be extended structures with repeating subunits.

Crosscutting Concepts: Time, space, and energy phenomena can be observed at various scales using models to study systems that are too large or too small.
Develop an evidence based scientific explanation of the atomic model as the foundation for all chemistry
All matter is made of atoms, which are far too small to see directly through a light microscope. Elements have unique atoms and thus, unique properties. Atoms themselves are made of even smaller particles
For Part 1: Engineering the Elements Matching Game, the teacher needs:
- Elements Matching Game Images PowerPoint file
- A computer projector or overhead projector to show the PowerPoint slides
For Part 1: Engineering the Elements Matching Game, each group needs:
- 1 set of Elements Matching Game Cards
For Part 2: Designing Element Superheroes, teacher needs:
- 1 set of either Elements Matching Game Cards (okay to re-use from Part 1) or Mystery Elements Cards (this option requires students to do more research)
For Part 2: Designing Element Superheroes and Dynamic Duos, each group needs:
- Student access to information about all the elements (such as physical science books or the Internet)
- Colored pencils or markers
Worksheets and Attachments
Visit [ ] to print or download.
More Curriculum Like This
Students examine the periodic table and the properties of elements. They learn the basic definition of an element and the 18 elements that compose most of the matter in the universe. The periodic table is described as one method of organization for the elements.
Students learn how to classify materials as mixtures, elements or compounds and identify the properties of each type. The concept of separation of mixtures is also introduced since nearly every element or compound is found naturally in an impure state such as a mixture of two or more substances, and...
A basic understanding of the periodic table of the elements. A basic understanding of the structure of an atom is helpful, as presented in the Fundamental Building Blocks of Matter lesson in the Mixtures & Solutions unit.
Let's make a list of all the elements we can think of and write them on the board (or on an overhead transparency). Remember that the elements in the periodic table cannot be further broken down to form a different element. Think of elements as the most basic building blocks. These building blocks are what combine to create everything we see around us. (If some students suggest compounds [such as water or air], clarify the difference between elements and compounds [water is a compound of hydrogen and oxygen elements; air is mostly nitrogen and oxygen].)
Who remembers that the periodic table organizes the elements based on their properties? Today let's learn about some of those properties. (See Figure 2. Show the periodic table, poster size or via overhead projector using the attached Periodic Table Visual Aid or from the Internet using the dynamic periodic table at http://www.dayah.com/periodic/.) Let's find the elements you already know. (Point out the locations of all the elements in the student-generated list.)
The periodic table tells us a lot of information about the elements. First of all, elements are arranged in different groups (vertical) and periods (horizontal). So, the elements with similar properties are grouped together. The periodic table has several categories, such as: non-metals, halogens, noble gases, metalloids, alkali metals, alkaline earth metals and poor metals. What else can we learn by looking at the periodic table? (Possible answers: element names, element abbreviations, atomic numbers, numbers of protons, rare earth elements, etc.) What can we learn from how they are arranged in the table? (They are arranged by their number of protons, or atomic number.)
Why do you think engineers must understand the periodic table? (Answer: Understanding the elements of the periodic table and how they interact with each other is important for engineers because they work with all types of materials. Knowledge of the characteristics of the various elements helps them design stronger bridges, lighter airplanes, non-corrosive buildings, the buttons on your toys and games, as well as food and medical applications.) It is essential for engineers to understand the properties of the different elements so that they know what to expect or look for when designing something new. Engineers are always trying to improve things — like airplanes, air conditioning systems, computers or cell phones. Better designs often include an improvement in the materials used, and materials are made of elements, or compounds of one or more elements. An engineer keeps the different element properties in mind when designing.
Today we are going to learn more about the properties of elements in the periodic table. We will learn about the engineering applications of many of the elements. With this information, we will work as computer and animation engineers who are designing a superhero who has similar characteristics to an element. Then we will make a periodic table of superheroes that could be used in a TV show or computer game!
We will then discover the nature of atoms interacting as molecules by forming pairs of elemental characters to make superhero groups and describe the combined behavior of our two different elements.
Looking at the periodic table in a science book (or Figure 2 or the attached Periodic Table Visual Aid or the dynamic periodic table at http://www.dayah.com/periodic/), the vertical columns are referred to as groups while the horizontal rows are known as periods. From left to right, the groups are classified as alkali metals (group 1), alkaline earth metals (group 2), transition metals (groups 3-12), poor metals/metalloids/non metals (groups 13-16), halogens (group 17), and noble gases (group 18). In most periodic tables, the different groups are labeled with different colors. In addition to the main groups and rows, two mini-periods are often separated from the main table and placed below it. The lanthanide and actinide series are known as rare earth elements. All of the elements occur either naturally in that form, arise from the decay of those natural elements, or are synthetic (human-made). Depending on the age of the table, the number of synthetic elements may vary.
Elements are arranged based on their number of protons, which is commonly referred to as their atomic number. They increase in number from left to right and from top to bottom. In addition, the order corresponds to the atomic mass of the element as well (from smallest to largest) for most of the elements. Elements are further arranged based on common properties between elements, such as electron configurations and electronegativity. In this activity, students learn about the first 20 elements on the periodic table, all of which are present in our daily lives.
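For classes that want to extend the matching game onto a computer, here is a tiny sketch (not part of the original activity) that stores the first 20 elements in atomic-number order, so a name or symbol can be looked up by number:

elements = [
    ("H", "Hydrogen"), ("He", "Helium"), ("Li", "Lithium"), ("Be", "Beryllium"),
    ("B", "Boron"), ("C", "Carbon"), ("N", "Nitrogen"), ("O", "Oxygen"),
    ("F", "Fluorine"), ("Ne", "Neon"), ("Na", "Sodium"), ("Mg", "Magnesium"),
    ("Al", "Aluminum"), ("Si", "Silicon"), ("P", "Phosphorus"), ("S", "Sulfur"),
    ("Cl", "Chlorine"), ("Ar", "Argon"), ("K", "Potassium"), ("Ca", "Calcium"),
]

# atomic numbers 1-20 correspond to positions in the list
for atomic_number, (symbol, name) in enumerate(elements, start=1):
    print(f"{atomic_number:2d}  {symbol:2s}  {name}")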
When two or more atoms join together they create a molecule. For example, one molecule of water is made up of two hydrogen atoms and one oxygen atom.
Before the Activity
- Gather materials.
- For Part 1, prepare a computer projector or overhead projector to show the attached Elements Matching Game Images Power Point presentation to the class. Also, print and cut apart three sets of the attached Elements Matching Game Cards. Shuffle each set.
- For Part 2, either re-use one of the Part 1 sets of Elements Matching Game Cards, or print out and cut apart one set of the attached Mystery Elements Cards. Student teams will each blindly choose a card from the set you provide. The first set provides property and characteristic information on the first 20 elements. The second set provides more of a challenge; its cards provide property and characteristic information on the first 30 elements without identifying the element names, so students must first identify "their" element before proceeding with the activity.
With the Students: Part 1 — Engineering the Elements Matching Game
- To conduct the Elements Matching Game, divide the class into three teams.
- Give each team a set of element game cards to distribute evenly among their teammates.
- Explain the activity to the students.
- To begin, the teacher shows pictures and clues of 20 unidentified elements (using the attached Elements Matching Game Images PowerPoint file). (Answers may be found on the last slide.)
- Students look at their game cards until someone discovers that they are holding the card that matches the unknown element.
- The first person who raises their hand, (and correctly) declares that they have the matching element, scores a point for their team. (Each team has the same set of cards, so teams are competing to identify each element first).
- The student who correctly identifies the element reads the rest of their card to the class.
- The teacher shows the next slide (image of another element), and the game continues.
- After 20 elements are matched, the team with the most points is declared the winner.
- Reiterate the point that the elements combine together to create many different compounds that are used by engineers. Ask students if they're surprised to learn how many engineering applications the elements have when put together in different combinations. For example, engineers use lithium in cell phone batteries and aircraft parts.
With the Students: Part 2 — Designing Element Superheroes
- Explain to students that some engineers are involved in graphic design, special effects and computer animation. They develop handheld electronic and computer games as well as the animated movies and TV shows that students might watch. Often, these graphics and animations are designed for educational purposes — to teach viewers about a school subject. Today, students act as computer and animation engineers and develop a new educational character based on the elements in the periodic table.
- Divide the class into teams of two students each. Assign one element per team by having them randomly choose an element from either the Elements Matching Game Cards (the first 20 elements, identified) or the Mystery Elements Cards (the first 30 elements, unidentified). (If using the mystery cards, the students must conduct research to determine the name of their element. Provide resources such as physical science books, periodic table handouts or Internet access. Students may need assistance for some of the more unfamiliar elements.)
- After each team has an element, ask them to design a superhero based on the characteristics of that element to use for a new educational animation series. Before designing, direct them to choose a specific audience for their character (elementary, middle or high school students), and keep that audience in mind when determining the nature, aesthetics (the looks) and super power of their character. For example, think of the various animated characters that are popular today with younger kids (perhaps Dora the Explorer or Sponge Bob), compared to those popular with teens (perhaps football/skateboard game characters or Japanese anime). What are the differences in the visual look and nature of these characters? (Perhaps colors [bright vs. dark], shapes [simple vs. complex], nature [childlike vs. mature], etc.) The point is to create something that appeals to your target audience.
- Direct students to refer to the properties on their cards, and design their superhero to have similar characteristics. The superhero's main power should be related to an item on the information card. Guide students to brainstorm together to come up with creative ideas. If it helps to generate ideas, show the attached Element Superhero Example (also Figure 1). Remind students of the brainstorming tips used by engineers:
- No negative comments allowed.
- Encourage wild ideas.
- Record all ideas.
- Build on the ideas of others.
- Stay focused on the topic.
- Allow only one conversation at a time.
- Remind students that each group must come up with a name for the superhero that relates to the name of the element.
- Have students draw their superhero (as time permits). Each team drawing should include the element's symbol and atomic number, as well as a short description of the hero's powers and properties.
- Once the students have finished or made reasonable progress on their super heroes, remind them that not all superheroes work alone and have them brainstorm different pairs or groups of superheroes.
- Now pair each group of students with another group of students. Have them come up with a new super group. They should be thinking about what strengths and weaknesses each elemental superhero brings to the group and what the group is best at, based on the individual strengths of the elements.
- Have each team make a quick "engineering design" presentation of their first individual elements and superheroes and then their dynamic duo to the class. See the Assessment section for suggested presentation requirements.
- Arrange the superhero element drawings on a wall by having the students arrange them to mimic their relative periodic table locations.
- Conclude by having the entire class participate in a Human Periodic Table, as described in the Assessment section.
atom: The basic unit of matter; the smallest unit of an element, having all the characteristics of that element; consists of negatively-charged electrons and a positively-charged center called a nucleus.
atomic number: The number of positive charges (or protons) in the nucleus of an atom of a given element, and therefore also the number of electrons normally surrounding the nucleus.
brainstorming: A method of shared problem solving in which all members of a group quickly and spontaneously contribute many ideas.
compound: (chemistry) A pure substance composed of two or more elements whose composition is constant.
electron: Particle with a negative charge orbiting the nucleus of an atom.
element: (chemistry) A substance that cannot be separated into a simpler substance by chemical means.
engineer: A person who applies their understanding of science and math to creating things for the benefit of humanity and our world.
Materials science: The study of the characteristics and uses of various materials, such as glass, plastics and metals.
molecule: A group of atoms bonded together.
nucleus: Dense, central core of an atom (made of protons and neutrons).
periodic table: (chemistry) A table in which the chemical elements are arranged in order of increasing atomic number. Elements with similar properties are arranged in the same column (called a group), and elements with the same number of electron shells are arranged in the same row (called a period).
proton: Particle in the nucleus of an atom with a positive charge. Elements are arranged in the periodic table based on their number of protons (or atomic number).
synthetic element: (chemistry) An element too unstable to be found naturally on Earth.
Information Pooling: Ask the class to think of all the elements they know. Compile a list on an overhead projector transparency or the classroom chalk board as the students make suggestions. If some students suggest compounds (such as water or air), clarify the difference between elements and compounds. When no more suggestions are forthcoming, bring out the periodic table, and point out the locations of all the elements suggested by the students.
Activity Embedded Assessment
Pairs Check: After student teams create their superhero character from their element card, have them check with another group to verify that they have the correct information included in their design sketch.
Engineering Design Presentations: Have each team present their design of an element superhero. Require the presentations to include: the name of the element, the element clues that were given, specifically how the element was identified, the chemical symbol, the atomic number, the name of the superhero, how the superhero's look relates to the element, how the superhero's powers relate to the element, the audience for the character, and how they designed it for that audience, and a drawing of the superhero.
Human Periodic Table: Ask students to clear an area in the classroom (move desks aside or go outside) and arrange themselves like the common periodic table. As time permits, go around (as they are arranged) and ask them to explain the logic of their element position in the table, using what they learned during the activity.
Extra Fun Facts: If students have access to more science books and/or the Internet, have them, in addition to determining the name of their element, find out another fun fact about the element. At the end of the activity, in their class presentation or while they are describing their position in the Human Periodic Table (see Assessment section), have teams share this fact with the class.
Complete the Table: Make a full superhero periodic table by assigning the rest of the elements to the students. Have them research the elements enough to design a superhero with similar characteristics. Then hang these on the wall with the original 20 element superheroes.
- For lower grades, add the chemical symbol and/or atomic number to each element card to provide an easy clue.
- For more advanced, modify the element cards by removing the first clue from each one (this is the easiest clue).
Additional Multimedia Support
A great online resource is the "dynamic periodic table" at Michael Dayah's website. It provides colorful, interactive and current information on series, properties, electrons, isotopes, element characteristics (and more), and in the language of your choice. If possible, project it on your classroom wall from a computer/Internet connection as you discuss the periodic table and elements with the class. Or, use the PDF letter and legal sizes for color handouts. Click on "About" to fully explore the capabilities of this resource. See: http://www.dayah.com/periodic/.
Dictionary.com. Lexico Publishing Group, LLC., http://www.dictionary.com, accessed July 24, 2007. (Source of some vocabulary definitions, with some adaptation)
Periodic Table of the Naturally Occurring Elements. Publications Warehouse, U.S. Geological Survey Circular 1143, Version 1.0, USGS Online Publications. Accessed July 24, 2007. http://pubs.usgs.gov/
Copyright© 2006 by Regents of the University of Colorado.
Contributors: Megan Podlogar; Lauren Cooper; Brian Kay; Malinda Schaefer Zarske; Denise W. Carlson
Supporting Program: Integrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder
The contents of this digital library curriculum were developed under a grant from the Fund for the Improvement of Postsecondary Education (FIPSE), U.S. Department of Education, and National Science Foundation GK-12 grant no 0338326. However, these contents do not necessarily represent the policies of the Department of Education or National Science Foundation, and you should not assume endorsement by the federal government.
Last modified: January 23, 2021
In Making the grade: Part I we considered what the gradient of a curve might mean, and how to find it by appealing directly to the definition. In particular, we used direct arguments - which were really quite involved - to calculate the gradients of the curves x2 and sin(x). To perform this kind of calculation every time we need to calculate such a gradient would be a nightmare - especially if we had a complicated function. In this article we think about the process of manipulating the algebraic expressions with which we usually describe functions in order to perform this calculation. This is differentiation as we know and love it!
The other kind of gradient.
Image DHD Photo Gallery
The whole point of having a set of formal rules is to allow us to temporarily forget the exact meaning and to concentrate on calculation. After all, we can only concentrate on a few things at a time. Of course, it is vital to keep the meaning in the back of our minds, as a check that the answer is sensible. Furthermore, by having a set of rules disjoint from a particular context, we can apply the rules in many different settings.
The rules allow us to differentiate just about any algebraic expression we care to write down. Of course we have to decide which formal rules to apply in a given situation, and in what order. Sometimes it is not clear which rule we should apply - there are a number of things we could do correctly. What then to do? How should we decide? As we shall see, the answer to these questions is surprising and illustrates the intimate way in which calculus and algebra interact.
Calculating gradients using the calculus

In the previous article we calculated the gradient by considering the change in $y$ divided by the change in $x$, that is,

$$\frac{f(x+h)-f(x)}{h}. \qquad (1)$$

Let's generalize this and consider $f(x) = x^n$ where $n$ is any natural number, that is, $n = 1, 2, 3, \dots$. Of course, we have already considered the cases when $n = 1$ (the straight line) and when $n = 2$ (the quadratic).

In order to calculate (1) when $f(x) = x^n$ we need to consider

$$\frac{(x+h)^n - x^n}{h}.$$

To simplify this we need to expand out the term $(x+h)^n$.

When we do this for small values of $n$ we get

$$(x+h)^2 = x^2 + 2xh + h^2,$$
$$(x+h)^3 = x^3 + 3x^2h + 3xh^2 + h^3,$$
$$(x+h)^4 = x^4 + 4x^3h + 6x^2h^2 + 4xh^3 + h^4.$$

We could do this by hand for other values of $n$ by multiplying out the brackets, but this is tricky, time-consuming and it is all too easy to slip up. In fact, there is a very regular pattern to the coefficients of the terms in the expansions above. If we ignore the $x$'s and $h$'s we obtain the pattern known as Pascal's Triangle, part of which is shown below.

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1

In general, if we count both rows and positions from zero, the number in position $k$ from the left on row $n$ is given by the formula

$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}.$$

These numbers are known as the binomial coefficients, because $\binom{n}{k}$ is the coefficient of $x^{n-k}h^k$ when we expand $(x+h)^n$. This result is known as the binomial theorem and it allows us to exploit this pattern to write $(x+h)^n$ as

$$(x+h)^n = x^n + nx^{n-1}h + \binom{n}{2}x^{n-2}h^2 + \dots + h^n.$$

We are currently interested in calculating the quantity (1) when $f(x) = x^n$. To do this we note that, for all values of $h$,

$$(x+h)^n - x^n = nx^{n-1}h + (\text{stuff})\times h^2.$$

Using this we have

$$\frac{(x+h)^n - x^n}{h} = nx^{n-1} + (\text{stuff})\times h.$$

We don't really need to know exactly what the "stuff" is in the above expression in this case. This is because it is multiplied by $h$, and letting $h$ tend to zero wipes out all these terms. What we are left with is the function $nx^{n-1}$. So we express this result as

$$\frac{d}{dx}\left(x^n\right) = nx^{n-1} \qquad (2)$$

whenever $n$ is a natural number (i.e., $n = 1, 2, 3, \dots$). If $n = 1$ in this argument we have

$$\frac{d}{dx}\left(x\right) = 1,$$

which confirms that the gradient of the straight line $y = x$ is constant.

This result may be extended so that (2) holds whenever $n$ is a real number, although this takes a little more work.
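A quick numerical check of this result (a sketch, not part of the original article): the difference quotient (1) should approach $nx^{n-1}$ as $h$ shrinks. For example, with $n = 4$ and $x = 2$ the quotient should approach $4 \times 2^3 = 32$:

def difference_quotient(f, x, h):
    # the quantity (1): change in f divided by change in x
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 4
x = 2.0
for h in (1.0, 0.1, 0.01, 0.001, 0.0001):
    print(f"h = {h:<8} quotient = {difference_quotient(f, x, h):.6f}")
print("exact gradient 4 * x**3 =", 4 * x ** 3)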
To take another example, let $f(x) = 1/x$ (remember that $1/x = x^{-1}$ for all $x \neq 0$). So we have the formula

$$\frac{d}{dx}\left(\frac{1}{x}\right) = (-1)\,x^{-2} = -\frac{1}{x^2}.$$
Functions are fundamental to modern mathematics and you simply can’t avoid using them. The idea of a function is to take two sets of objects known as the inputs and outputs. To every input the function assigns a unique output:
Most often the inputs and outputs are sets of numbers, such as the real line. The function is also most often described using a formula, in the form of an algebraic expression. This is exactly the idea of a function we have considered so far, although we haven’t been explicit about it! This is also the way we will continue to think about functions.
The reason we pause now to think of functions in a more abstract way is simply to acknowledge that a function is much more general than a formula. In fact, the function introduced in the previous article was built from two formulae bolted together. Recall these were
The trigonometric functions $\sin(x)$, $\cos(x)$, etc. are constructed with reference to a geometrical shape - in this case a circle of radius 1. Other ways of building functions involve an infinite series (that is a sum) or a sequence of formulae. We won't consider these in this article, but just concentrate on how we can build up functions from simple operations.

Let us assume our input is a number $x$. The simplest operations we could perform on our variable are the arithmetic ones. That is addition, multiplication and the two inverse operations of subtraction and division. Because we can manipulate the formulae using algebra we can often write one formula in different ways.
For example, consider

$$f(x) = 3x^2 - x^4. \qquad (3)$$

Figure 1: The function (3)

We can think of (3) as $x^2$ multiplied by $3 - x^2$. Both these functions are shown in Figure 2. Try to imagine what happens when you multiply the values on each graph together.

Figure 2: The functions $x^2$ and $3 - x^2$

Alternatively, to calculate (3) we might subtract $x^4$ from $3x^2$. The graphs of these functions are shown in Figure 3. Since $3x^2 - x^4 = f(x)$, we can recreate Figure 1 by subtracting one from the other. No doubt there are other ways of constructing the same function $f$.

Figure 3: The functions $x^4$ and $3x^2$

Functions can also be applied in order, one after the other, as in $\sin(x^2)$.

Figure 4: The function $\sin(x^2)$

Of course, we have to ensure that any output from Function 1 is a legitimate input for Function 2. In this case we say the two functions have been composed. For example, a function such as $\sin(x^2)$ can be thought of as the function that maps $x$ to $\sin(x)$ applied to the result of the function that maps $x$ to $x^2$. Note that the order really does matter here and $\sin(x^2)$ and $\sin(x)^2$ are very different functions: see Figures 4 and 5.

Figure 5: The function $\sin(x)^2$
Given the numerous ways we could express a function such as (3), how should we go about differentiating it? This is the question we address in the rest of this article.
Linearity of the differential calculus
You don't always get the same result if you do things in a different order!
The first general rule allows us to calculate the derivative of two functions which have been added together. If we want to find the gradient of f(x)+g(x) we simply find the gradients of f(x) and g(x) separately and then add the results. In a more condensed (and easier to read) form this may be expressed as:

$$\frac{d}{dx}\Big(f(x)+g(x)\Big) = \frac{d}{dx}f(x) + \frac{d}{dx}g(x), \qquad (4)$$

and similarly for the difference of two functions,

$$\frac{d}{dx}\Big(f(x)-g(x)\Big) = \frac{d}{dx}f(x) - \frac{d}{dx}g(x). \qquad (5)$$

For example, to calculate the gradient of the function defined in (3), we write it in the unfactored form $3x^2 - x^4$ and can then apply the rules as follows:

$$\frac{d}{dx}\left(3x^2 - x^4\right) = \frac{d}{dx}\left(3x^2\right) - \frac{d}{dx}\left(x^4\right) = 6x - 4x^3.$$

(The gradient of $3x^2$ is three times the gradient of $x^2$, because multiplying a function by a constant multiplies its gradient by the same constant.)
Any book on calculus will contain many similar examples and exercises for you to practice.
Before we go any further, we need a word of warning about notation. In particular, there are many ways of writing the derivative of a function $f$ at the point $x$. Different authors have different preferences. So far we have used the notation

$$\frac{d}{dx}f(x),$$

which was promoted by Leibnitz. Another notation, used by Newton, has two forms:

$$\dot{f}(x) \quad \text{or} \quad f'(x).$$

Although neater in some circumstances, it is very easy to misread a dot or apostrophe and so care is needed. We will use both kinds of notation.
Linearity, which is expressed in the formulae (4) and (5), together with our result (2) allows us to calculate the derivative of any polynomial by breaking it into separate parts. In fact (4) and (5) involve two general functions. What would be really useful would be two rules which allow us to calculate gradients when general functions are multiplied or composed together, that is to say, rules which allow us to find

$$\frac{d}{dx}\Big(f(x)\,g(x)\Big) \qquad (6)$$

and

$$\frac{d}{dx}\,f(g(x)), \qquad (7)$$

where $f$ and $g$ are any differentiable functions. We make a huge assumption in believing that such general rules really exist. However, if they do then the rules applied to $x^n$ in various different ways must respect the result (2). For example, $x^4$ may be written as $x \times x^3$, or as $x^2 \times x^2$. The rule for (6), if it exists, must give $4x^3$ when applied to each of these ways of writing $x^4$. Otherwise we could obtain different answers for the derivative. So, we look at different ways of writing $x^n$ as a product, and try to find a rule which is consistent, at least for these.

Let's start by defining $F(x) = x^{a+b}$ and split this up into

$$F(x) = f(x)\,g(x), \quad \text{where} \quad f(x) = x^a \quad \text{and} \quad g(x) = x^b.$$

We know using (2) that

$$\frac{dF}{dx} = (a+b)\,x^{a+b-1}.$$

Our task is to write $\frac{dF}{dx}$ in terms of $\frac{df}{dx}$ and $\frac{dg}{dx}$ as an attempt to gain some insight into what the general rule (6) might be. That is, we write

$$\frac{dF}{dx} = A\,\frac{df}{dx} + B\,\frac{dg}{dx},$$

where A and B are unknown functions of x. Now, using algebra we can confirm that

$$(a+b)\,x^{a+b-1} = x^b \times a\,x^{a-1} + x^a \times b\,x^{b-1}.$$

Thus if we take $A = g(x)$ and $B = f(x)$ we have a correct general rule whenever we split $x^{a+b}$ into the product $x^a \times x^b$. This rule may be written as

$$\frac{d}{dx}\Big(f(x)\,g(x)\Big) = g(x)\,\frac{df}{dx} + f(x)\,\frac{dg}{dx}. \qquad (8)$$

Immediately, by linearity, it follows that (8) holds for any polynomial.

Can we find a rule for general functions, like $\sin(x)$, which are not polynomials? Certainly, if a general rule exists, when we apply it to powers of $x$, the rule should reduce to (8). In fact, the rule for general functions turns out to be precisely (8), and is better known as the Product Rule.
Similarly, if we write

$$F(x) = x^{ab} = \left(x^a\right)^b,$$

we have $F(x) = f(g(x))$, where $g(x) = x^a$ and $f(x) = x^b$. Again, we know using (2) that

$$\frac{dF}{dx} = ab\,x^{ab-1}.$$

Our task is to write $\frac{dF}{dx}$ in terms of $\frac{df}{dx}$ and $\frac{dg}{dx}$ as an attempt to gain some insight into what the general rule (7) might be. But

$$ab\,x^{ab-1} = b\left(x^a\right)^{b-1} \times a\,x^{a-1},$$

which suggests a general rule

$$\frac{d}{dx}\,f(g(x)) = \frac{df}{dg}\times\frac{dg}{dx}. \qquad (9)$$
There's a rule for everything - even chains!
Image DHD Photo Gallery
In fact, the rule for general functions turns out to be precisely (9) again, and is better known as the Chain Rule.
These rules are really very comprehensive, and proofs may be found in any text book on advanced calculus. A huge range of functions can be built up, and conversely decomposed, using these rules. When trying to differentiate a complicated function, the method is to decompose it into simpler components, and work with these separately. The derivatives of these simple parts can be recombined using the general rules to find the derivative of the original function. For completeness we state these rules again below, in two common forms of notation, and for each give a worked example.
The chain rule
The following rule allows us to differentiate functions built up by compositions. Let's assume that we have some function $F(x)$ which we can choose to write as a composition $F(x) = f(g(x))$. Let $u = g(x)$, then

$$\frac{dF}{dx} = \frac{df}{du}\times\frac{du}{dx}, \qquad (10)$$

or, using alternative notation,

$$F'(x) = f'(g(x))\,g'(x).$$

Let us return and consider the function $\sin(x^2)$. Writing $u = g(x) = x^2$ and $f(u) = \sin(u)$, we already know that

$$\frac{du}{dx} = 2x \quad \text{and} \quad \frac{df}{du} = \cos(u).$$

Substituting these into (10) gives

$$\frac{d}{dx}\sin\left(x^2\right) = \cos\left(x^2\right)\times 2x = 2x\cos\left(x^2\right).$$

Note: we have $\cos(x^2)$ here, not $\cos(x)$, as we have substituted $u = x^2$.
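A short numerical sanity check of this chain-rule calculation (a sketch, not from the article), comparing a central difference quotient of $\sin(x^2)$ at a point with $2x\cos(x^2)$:

import math

def numerical_derivative(f, x, h=1e-6):
    # central difference approximation to the gradient
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: math.sin(x ** 2)
x = 1.3
print("difference quotient:       ", numerical_derivative(f, x))
print("chain rule, 2*x*cos(x**2): ", 2 * x * math.cos(x ** 2))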
The product rule
Functions can be built up by components which are multiplied together. For example, $x^2\sin(x)$. To differentiate this we use the formula

$$\frac{d}{dx}\Big(f(x)\,g(x)\Big) = \frac{df}{dx}\,g(x) + f(x)\,\frac{dg}{dx}, \qquad (11)$$

or, using alternative notation,

$$(fg)'(x) = f'(x)\,g(x) + f(x)\,g'(x).$$

This time consider the function $x^2\sin(x)$. This can be decomposed into the two functions $x^2$ and $\sin(x)$, each of which we know how to differentiate. Therefore we can use the rule (11) immediately to write

$$\frac{d}{dx}\Big(x^2\sin(x)\Big) = 2x\sin(x) + x^2\cos(x).$$
The quotient rule
Functions can be built up by components which are divided one by the other. Actually, since dividing by $g(x)$ is identical to multiplying by $1/g(x)$, to work out the derivative of $f(x)/g(x)$ we could just apply the product rule to $f(x)\times\frac{1}{g(x)}$, and the chain rule to $\frac{1}{u}$ composed with $u = g(x)$. But it is convenient to have a separate rule for this, which is

$$\frac{d}{dx}\left(\frac{f(x)}{g(x)}\right) = \frac{\dfrac{df}{dx}\,g(x) - f(x)\,\dfrac{dg}{dx}}{g(x)^2}, \qquad (12)$$

or, using alternative notation,

$$\left(\frac{f}{g}\right)'(x) = \frac{f'(x)\,g(x) - f(x)\,g'(x)}{g(x)^2}.$$
This time we differentiate $\dfrac{\sin(x^2)}{x}$. We know how to differentiate $\sin(x^2)$, even though it is itself a composition. We applied (10) to this and it has derivative $2x\cos(x^2)$. Using (12) gives

$$\frac{d}{dx}\left(\frac{\sin(x^2)}{x}\right) = \frac{2x\cos(x^2)\times x - \sin(x^2)\times 1}{x^2} = 2\cos(x^2) - \frac{\sin(x^2)}{x^2}.$$
Applying the rules
To finish, instead of giving lots of different examples (which can be found in any calculus text), we take the reverse approach and think about one example in more detail. In particular, we return to the function $x^4$. We know from the previous result (2) that the derivative of $x^4$ is $4x^3$. We write this as

$$\frac{d}{dx}\left(x^4\right) = 4x^3.$$

Using the rules of algebra we could write this function in a number of ways. We can also think of $x^4$ as $x$ multiplied by $x^3$. Alternatively, $x^2 \times x^2$, or even $x^3 \times x$.

Taking the first of these, since the derivative of $x^3$ is $3x^2$, we may apply the product rule to give

$$\frac{d}{dx}\left(x\times x^3\right) = 1\times x^3 + x\times 3x^2 = 4x^3.$$

We could also apply the product rule to one of the other representations of this function. In particular, we could calculate the derivative as

$$\frac{d}{dx}\left(x^2\times x^2\right) = 2x\times x^2 + x^2\times 2x = 4x^3.$$

Similarly we could think of $x^4$ as being a composition of two functions: as "$x$-squared, all squared", that is to say, $\left(x^2\right)^2$. In this case we may apply the chain rule to see that

$$\frac{d}{dx}\left(x^2\right)^2 = 2\left(x^2\right)\times 2x = 4x^3.$$
Notice that in each case we get the same correct answer.
We need not stop there. For example, we could write $x^4 = \dfrac{x^5}{x}$ and apply the quotient rule. If we do this we have

$$\frac{d}{dx}\left(\frac{x^5}{x}\right) = \frac{5x^4\times x - x^5\times 1}{x^2} = \frac{4x^5}{x^2} = 4x^3.$$

The point is that in order to find the derivative of $x^4$ we may do anything algebraically legitimate and apply any of the rules for differentiation correctly. The way we choose to find the derivative, as long of course as it is applied correctly, does not matter. We could ask: How many ways are there of differentiating $x^4$? Can you think of others?
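If a computer algebra system is available, these different routes can be checked mechanically. The sketch below uses the sympy library (an illustration, not part of the original article) to differentiate several algebraically equivalent forms of $x^4$ and confirm that each derivative simplifies to $4x^3$:

import sympy as sp

x = sp.symbols('x')

forms = [
    x**4,            # power rule directly
    x * x**3,        # product rule
    x**2 * x**2,     # product rule again
    (x**2)**2,       # chain rule ("x-squared, all squared")
    x**5 / x,        # quotient rule
]

for expr in forms:
    print("d/dx of", expr, "=", sp.simplify(sp.diff(expr, x)))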
About the author
Chris in the Volcanoes National Park, Hawaii, Summer 2003
Chris Sangwin is a member of staff in the School of Mathematics and Statistics at the University of Birmingham. He is a Research Fellow in the Learning and Teaching Support Network centre for Mathematics, Statistics, and Operational Research. His interests lie in mathematical Control Theory.
Chris would like to thank Mr Martin Brown, of Thomas Telford School, for his helpful advice and encouragement during the writing of this article.
Who Goes There?
print "Halt!" s = raw_input("Who Goes there? ") print "You may pass,", s
When I ran it here is what my screen showed:
Halt!
Who Goes there? Josh
You may pass, Josh
Of course when you run the program your screen will look different because of the raw_input statement. When you ran the program you probably noticed (you did run the program, right?) how you had to type in your name and then press Enter. Then the program printed out some more text and also your name. This is an example of input. The program reaches a certain point and then waits for the user to input some data that the program can use later.
Of course, getting information from the user would be useless if we didn't have anywhere to put that information and this is where variables come in. In the previous program s is a variable. Variables are like a box that can store some piece of data. Here is a program to show examples of variables:
a = 123.4
b23 = 'Spam'
first_name = "Bill"
b = 432
c = a + b
print "a + b is", c
print "first_name is", first_name
print "Sorted Parts, After Midnight or", b23
And here is the output:
a + b is 555.4
first_name is Bill
Sorted Parts, After Midnight or Spam
Variables store data. The variables in the above program are a, b23, first_name, b, and c. The two basic types are strings and numbers. Strings are a sequence of letters, numbers and other characters. In this example b23 and first_name are variables that are storing strings. Spam, Bill, a + b is, and first_name is are the strings in this program. The characters are surrounded by " or '. The other type of variables are numbers.

Okay, so we have these boxes called variables and also data that can go into the variable. The computer will see a line like first_name = "Bill" and it reads it as Put the string Bill into the box (or variable) first_name. Later on it sees the statement c = a + b and it reads it as Put a + b or 123.4 + 432 or 555.4 into c.
Here is another example of variable usage:
a = 1
print a
a = a + 1
print a
a = a * 2
print a
And of course here is the output:
1
2
4
Even if it is the same variable on both sides, the computer still reads it as: First find out the data to store and then find out where the data goes.
One more program before I end this chapter:
num = input("Type in a Number: ") str = raw_input("Type in a String: ") print "num =", num print "num is a ",type(num) print "num * 2 =",num*2 print "str =", str print "str is a ",type(str) print "str * 2 =",str*2
The output I got was:
Type in a Number: 12.34
Type in a String: Hello
num = 12.34
num is a  <type 'float'>
num * 2 = 24.68
str = Hello
str is a  <type 'string'>
str * 2 = HelloHello
num was gotten with input while str was gotten with raw_input. The difference is that raw_input returns a string while input returns a number. When you want the user to type in a number use input, but if you want the user to type in a string use raw_input.

The second half of the program uses type, which tells what a variable is. Numbers are of type int or float (which are short for 'integer' and 'floating point' respectively). Strings are of type string. Integers and floats can be worked on by mathematical functions, strings cannot. Notice how when Python multiplies a number by an integer the expected thing happens. However, when a string is multiplied by an integer the string has that many copies of itself added, i.e. str * 2 = HelloHello.
The operations with strings do slightly different things than operations with numbers. Here are some interactive mode examples to show that some more.
>>> "This"+" "+"is"+" joined." 'This is joined.' >>> "Ha, "*5 'Ha, Ha, Ha, Ha, Ha, ' >>> "Ha, "*5+"ha!" 'Ha, Ha, Ha, Ha, Ha, ha!' >>>
Here is a list of some string operations: "+" joins (concatenates) two strings together, as in "Hello, " + "World!", and "*" repeats a string a given number of times, as in "Ha, " * 5.
#This program calculates rate and distance problems
print "Input a rate and a distance"
rate = input("Rate:")
distance = input("Distance:")
print "Time:", distance/rate
> python rate_times.py Input a rate and a distance Rate:5 Distance:10 Time: 2 > python rate_times.py Input a rate and a distance Rate:3.52 Distance:45.6 Time: 12.9545454545
#This program calculates the perimeter and area of a rectangle
print "Calculate information about a rectangle"
length = input("Length:")
width = input("Width:")
print "Area", length*width
print "Perimeter", 2*length+2*width
> python area.py Calculate information about a rectangle Length:4 Width:3 Area 12 Perimeter 14 > python area.py Calculate information about a rectangle Length:2.53 Width:5.2 Area 13.156 Perimeter 15.46
#Converts Fahrenheit to Celsius
temp = input("Farenheit temperature:")
print (temp-32.0)*5.0/9.0
> python temperature.py Farenheit temperature:32 0.0 > python temperature.py Farenheit temperature:-40 -40.0 > python temperature.py Farenheit temperature:212 100.0 > python temperature.py Farenheit temperature:98.6 37.0
Write a program that gets 2 string variables and 2 integer variables from the user, concatenates (joins them together with no spaces) and displays the strings, then multiplies the two numbers on a new line.
Data can be organized and summarized using a variety of methods. Tables are commonly used, and there are many graphical and numerical methods as well. The appropriate type of representation for a collection of data depends in part on the nature of the data, such as whether the data are numerical or nonnumerical.
In these lessons, we will learn some common graphical methods for describing and summarizing data: Frequency Distributions, Bar Graphs, Circle Graphs, Histograms, Scatterplots and Timeplots.
The frequency, or count, of a particular category or numerical value is the number of times that the category or value appears in the data. A frequency distribution is a table or graph that presents the categories or numerical values along with their associated frequencies. The relative frequency of a category or a numerical value is the associated frequency divided by the total number of data.
Relative frequencies may be expressed in terms of percents, fractions, or decimals. A relative frequency distribution is a table or graph that presents the relative frequencies of the categories or numerical values. Note that the total for the relative frequencies is 100%. If decimals were used instead of percents, the total would be 1. The sum of the relative frequencies in a relative frequency distribution is always 1.
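As a small illustration (the data values below are made up), frequencies and relative frequencies can be tabulated directly from a list of observations:

from collections import Counter

data = ["red", "blue", "red", "green", "blue", "red", "blue", "blue"]   # made-up categorical data
counts = Counter(data)          # frequency of each category
total = len(data)

print("Category   Frequency   Relative frequency")
for category, frequency in counts.items():
    relative = frequency / total
    print(f"{category:<10} {frequency:^9}   {relative:.3f} ({relative:.0%})")

print("Sum of relative frequencies:", sum(f / total for f in counts.values()))   # always 1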
Differences between frequency distribution table and relative frequency distribution table
A commonly used graphical display for representing frequencies, or counts, is a bar graph, or bar chart. In a bar graph, rectangular bars are used to represent the categories of the data, and the height of each bar is proportional to the corresponding frequency or relative frequency. All of the bars are drawn with the same width, and the bars can be presented either vertically or horizontally. Bar graphs enable comparisons across several categories, making it easy to identify frequently and infrequently occurring categories.
Bar graphs are commonly used to compare frequencies. They are sometimes used to compare numerical data that could be displayed in a table, such as temperatures, dollar amounts, percents, heights, and weights.
A bar graph is a graph that compares amounts in each category to each other using bars.
How to read and interpret a bar graph?
A segmented bar graph is used to show how different subgroups or subcategories contribute to an entire group or category. In a segmented bar graph, each bar represents a category that consists of more than one subcategory. Each bar is divided into segments that represent the different subcategories. The height of each segment is proportional to the frequency or relative frequency of the subcategory that the segment represents. How to interpret percentage segmented bar charts?
Bar graphs can also be used to compare different groups using the same categories. It is sometimes called a double bar graph.
Interpreting Double Bar Graphs
How to interpret data shown in a double bar graph?
Circle graphs, often called pie charts, are used to represent data with a relatively small number of categories. They illustrate how a whole is separated into parts. The area of the circle graph representing each category is proportional to the part of the whole that the category represents.
Each part of a circle graph is called a sector. Because the area of each sector is proportional to the percent of the whole that the sector represents, the measure of the central angle of a sector is proportional to the percent of 360 degrees that the sector represents.
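Because each sector's central angle is proportional to its share of 360 degrees, the angles can be computed directly from the data. A small sketch with made-up spending categories:

categories = {"Rent": 900, "Food": 450, "Transport": 300, "Other": 150}   # made-up amounts
total = sum(categories.values())

for name, amount in categories.items():
    fraction = amount / total
    angle = fraction * 360                      # central angle of this sector, in degrees
    print(f"{name:<10} {fraction:6.1%}   {angle:5.1f} degrees")

print("Angles sum to:", sum(360 * a / total for a in categories.values()), "degrees")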
Creating a Circle Graph
How to create a circle graph, or “pie chart” from some given data?
When a list of data is large and contains many different values of a numerical variable, it is useful to organize it by grouping the values into intervals, often called classes. To do this, divide the entire interval of values into smaller intervals of equal length and then count the values that fall into each interval. In this way, each interval has a frequency and a relative frequency. The intervals and their frequencies (or relative frequencies) are often displayed in a histogram.
Histograms are graphs of frequency distributions that are similar to bar graphs, but they have a number line for the horizontal axis. Also, in a histogram, there are no regular spaces between the bars. Any spaces between bars in a histogram indicate that there are no data in the intervals represented by the spaces.
How to create a histogram from the given data?
How to create a relative frequency histogram?
A relative frequency histogram has the percentage of data values on the vertical axis rather than the frequency. The steps are listed below, followed by a short code sketch.
Step 1: Find the total number of data values.
Step 2: Find the percent of data values in each interval (organize in a table)
Step 3: Draw Histogram.
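The first two steps can be sketched in code as follows (a minimal sketch with made-up data values and an assumed interval width of 10; it prints the table from Step 2 rather than drawing the bars of Step 3):

```python
# Minimal sketch of Steps 1-2: group made-up values into equal intervals
# and compute the percent of values in each interval.
data = [3, 7, 12, 15, 18, 22, 24, 29, 31, 35, 38, 44]  # hypothetical values
width = 10  # assumed interval (class) width

total = len(data)                 # Step 1: total number of data values
low = min(data) // width * width  # start the first interval at a multiple of the width

counts = {}
for value in data:
    start = low + (value - low) // width * width
    counts[start] = counts.get(start, 0) + 1

# Step 2: relative frequency (percent) for each interval, organized as a table.
for start in sorted(counts):
    pct = counts[start] / total * 100
    print(f"[{start}, {start + width}): {counts[start]:2d} values, {pct:5.1f}%")
# Step 3 would draw bars with these percents on the vertical axis.
```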
To study the connection between a histogram and the corresponding relative frequency histogram, consider the histogram below showing Kyle's 20 homework grades for a semester. Notice that since each bar represents a single whole number (6, 7, 8, 9, or 10), those numbers are best placed in the middle of the bars on the horizontal axis. In this case Kyle has one grade of 6 and five grades of 7.
a) Make a relative frequency histogram of these grades by copying the histogram but making a scale that shows proportion of all grades on the vertical axis rather than frequency.
b) Compare the shape, centre, and spread of the two histograms.
Differences between a bar graph and a histogram
• Bar graph shows the number of items in specific categories.
• Drawn with space between the columns.
• Do not have to be organized into equal intervals of data.
• Bars show categories of data.
• Histogram shows frequency of data divided into equal intervals.
• No space between the columns.
• Must be organized into equal intervals of data.
• Bars show continuous data.
All examples used thus far have involved data resulting from a single characteristic or variable. These types of data are referred to as univariate, that is, data observed for one variable. Sometimes data are collected to study two different variables in the same population of individuals or objects. Such data are called bivariate data. We might want to study the variables separately or investigate a relationship between the two variables. If the variables were to be analyzed separately, each of the graphical methods for univariate numerical data presented above could be applied.
To show the relationship between two numerical variables, the most useful type of graph is a scatterplot. In a scatterplot, the values of one variable appear on the horizontal axis of a rectangular coordinate system and the values of the other variable appear on the vertical axis. For each individual or object in the data, an ordered pair of numbers is collected, one number for each variable, and the pair is represented by a point in the coordinate system.
A scatterplot makes it possible to observe an overall pattern, or trend, in the relationship between the two variables. Also, the strength of the trend as well as striking deviations from the trend are evident. In many cases, a line or a curve that best represents the trend is also displayed in the graph and is used to make predictions about the population.
Scatter Plots : Introduction to Positive and Negative Correlation
A scatter plot is a graph of a collection of ordered pairs (x, y).
The graph looks like a bunch of dots, but the points often form a general shape or move in a general direction.
If the x-coordinates and the y-coordinates both increase, then it is positive correlation. This means that as the value of one variable increases, the other increases as well. The variables are related.
If the x-coordinates and the y-coordinates have one increasing and one decreasing, then it is negative correlation. This means that as one increases, the other decreases.
If there seems to be no pattern and the points look scattered, then there is no correlation. This means that the two variables are not related: as one variable increases, there is no effect on the other variable.
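One simple numerical companion to this visual check (a minimal sketch with made-up (x, y) pairs, not data from this page) is the sign of the sample covariance, which is positive for an upward trend and negative for a downward trend:

```python
# Minimal sketch: classify a scatterplot's trend by the sign of the covariance.
def covariance(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in points) / (len(points) - 1)

increasing = [(1, 2), (2, 3), (3, 5), (4, 6)]   # hypothetical upward trend
decreasing = [(1, 9), (2, 7), (3, 6), (4, 3)]   # hypothetical downward trend

print("increasing:", covariance(increasing))  # positive -> positive correlation
print("decreasing:", covariance(decreasing))  # negative -> negative correlation
```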
Which scatterplots below show a linear trend?
Sometimes data are collected in order to observe changes in a variable over time. For example, sales for a department store may be collected monthly or yearly.
A time plot (sometimes called a time series) is a graphical display useful for showing changes in data collected at regular intervals of time. A time plot of a variable plots each observation corresponding to the time at which it was measured. A time plot uses a coordinate plane similar to a scatterplot, but the time is always on the horizontal axis, and the variable measured is always on the vertical axis. Additionally, consecutive observations are connected by a line segment to emphasize increases and decreases over time.
What is a time plot?
Observations from NASA’s Wide-field Infrared Survey Explorer (WISE) mission indicate the family of asteroids some believed was responsible for the demise of the dinosaurs is not likely the culprit, keeping open the case on one of Earth’s greatest mysteries.
While scientists are confident a large asteroid crashed into Earth approximately 65 million years ago, leading to the extinction of dinosaurs and some other life forms on our planet, they do not know exactly where the asteroid came from or how it made its way to Earth. A 2007 study using visible-light data from ground-based telescopes first suggested the remnant of a huge asteroid, known as Baptistina, as a possible suspect.
According to that theory, Baptistina crashed into another asteroid in the main belt between Mars and Jupiter about 160 million years ago. The collision sent shattered pieces as big as mountains flying. One of those pieces was believed to have impacted Earth, causing the dinosaurs’ extinction.
Since this scenario was first proposed, evidence developed that the so-called Baptistina family of asteroids was not the responsible party. With the new infrared observations from WISE, astronomers say Baptistina may finally be ruled out.
“As a result of the WISE science team’s investigation, the demise of the dinosaurs remains in the cold case files,” said Lindley Johnson, program executive for the Near Earth Object (NEO) Observation Program at NASA Headquarters in Washington. “The original calculations with visible light estimated the size and reflectivity of the Baptistina family members, leading to estimates of their age, but we now know those estimates were off. With infrared light, WISE was able to get a more accurate estimate, which throws the timing of the Baptistina theory into question.”
WISE surveyed the entire celestial sky twice in infrared light from January 2010 to February 2011. The asteroid-hunting portion of the mission, called NEOWISE, used the data to catalogue more than 157,000 asteroids in the main belt and discovered more than 33,000 new ones.
Visible light reflects off an asteroid. Without knowing how reflective the surface of the asteroid is, it’s hard to accurately establish size. Infrared observations allow a more accurate size estimate. They detect infrared light coming from the asteroid itself, which is related to the body’s temperature and size. Once the size is known, the object’s reflectivity can be re-calculated by combining infrared with visible-light data.
The NEOWISE team measured the reflectivity and the size of about 120,000 asteroids in the main belt, including 1,056 members of the Baptistina family. The scientists calculated the original parent Baptistina asteroid actually broke up closer to 80 million years ago, half as long as originally proposed.
This calculation was possible because the size and reflectivity of the asteroid family members indicate how much time would have been required to reach their current locations — larger asteroids would not disperse in their orbits as fast as smaller ones. The results revealed a chunk of the original Baptistina asteroid needed to hit Earth in less time than previously believed, in just about 15 million years, to cause the extinction of the dinosaurs.
“This doesn’t give the remnants from the collision very much time to move into a resonance spot, and get flung down to Earth 65 million years ago,” said Amy Mainzer, a co-author of a new study appearing in the Astrophysical Journal and the principal investigator of NEOWISE at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, Calif. “This process is thought to normally take many tens of millions of years.” Resonances are areas in the main belt where gravity nudges from Jupiter and Saturn can act like a pinball machine to fling asteroids out of the main belt and into the region near Earth.
The asteroid family that produced the dinosaur-killing asteroid remains at large. Evidence that a 10-kilometer (about 6.2-mile) asteroid impacted Earth 65 million years ago includes a huge, crater-shaped structure in the Gulf of Mexico and rare minerals in the fossil record, which are common in meteorites but seldom found in Earth’s crust. In addition to the Baptistina results, the NEOWISE study shows various main belt asteroid families have similar reflective properties. The team hopes to use NEOWISE data to disentangle families that overlap and trace their histories.
“We are working on creating an asteroid family tree of sorts,” said Joseph Masiero, the lead author of the study. “We are starting to refine our picture of how the asteroids in the main belt smashed together and mixed up.” |
In 3D computer graphics, 3D modeling is the process of developing a mathematical representation of any surface of an object (either inanimate or living) in three dimensions via specialized software. The product is called a 3D model. Someone who works with 3D models may be referred to as a 3D artist. It can be displayed as a two-dimensional image through a process called 3D rendering or used in a computer simulation of physical phenomena. The model can also be physically created using 3D printing devices.
Three-dimensional (3D) models represent a physical body using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. Being a collection of data (points and other information), 3D models can be created by hand, algorithmically (procedural modeling), or scanned. Their surfaces may be further defined with texture mapping.
3D models are widely used anywhere in 3D graphics and CAD. Their use predates the widespread use of 3D graphics on personal computers. Many computer games used pre-rendered images of 3D models as sprites before computers could render them in real-time. The designer can then see the model from various directions and views, which helps the designer check whether the object is created as intended compared to their original vision. Seeing the design this way can help the designer or company figure out changes or improvements needed to the product.
Today, 3D models are used in a wide variety of fields. The medical industry uses detailed models of organs; these may be created with multiple 2-D image slices from an MRI or CT scan. The movie industry uses them as characters and objects for animated and real-life motion pictures. The video game industry uses them as assets for computer and video games. The science sector uses them as highly detailed models of chemical compounds. The architecture industry uses them to demonstrate proposed buildings and landscapes in lieu of traditional, physical architectural models. The engineering community uses them as designs of new devices, vehicles and structures as well as a host of other uses. In recent decades the earth science community has started to construct 3D geological models as a standard practice. 3D models can also be the basis for physical devices that are built with 3D printers or CNC machines.
Almost all 3D models can be divided into two categories.
- Solid – These models define the volume of the object they represent (like a rock). Solid models are mostly used for engineering and medical simulations, and are usually built with constructive solid geometry
- Shell/boundary – these models represent the surface, e.g. the boundary of the object, not its volume (like an infinitesimally thin eggshell). Almost all visual models used in games and film are shell models.
Solid and shell modeling can create functionally identical objects. Differences between them are mostly variations in the way they are created and edited and conventions of use in various fields and differences in types of approximations between the model and reality.
Shell models must be manifold (having no holes or cracks in the shell) to be meaningful as a real object. Polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation. Level sets are a useful representation for deforming surfaces which undergo many topological changes such as fluids.
The process of transforming representations of objects, such as the middle point coordinate of a sphere and a point on its circumference, into a polygon representation of a sphere is called tessellation. This step is used in polygon-based rendering, where objects are broken down from abstract representations ("primitives") such as spheres, cones etc., to so-called meshes, which are nets of interconnected triangles. Meshes of triangles (instead of e.g. squares) are popular as they have proven to be easy to rasterise (the surface described by each triangle is planar, so the projection is always convex). Polygon representations are not used in all rendering techniques, and in these cases the tessellation step is not included in the transition from abstract representation to rendered scene.
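As a rough illustration of what tessellation produces (a minimal sketch, not tied to any particular modeling package), the code below approximates a sphere primitive by a mesh of triangles built from a latitude/longitude grid of vertices:

```python
# Minimal sketch: tessellate a sphere primitive into a simple triangle mesh.
import math

def tessellate_sphere(radius=1.0, stacks=8, slices=16):
    vertices, triangles = [], []
    for i in range(stacks + 1):                      # latitude rings, pole to pole
        theta = math.pi * i / stacks
        for j in range(slices + 1):                  # longitude divisions
            phi = 2 * math.pi * j / slices
            vertices.append((radius * math.sin(theta) * math.cos(phi),
                             radius * math.sin(theta) * math.sin(phi),
                             radius * math.cos(theta)))
    for i in range(stacks):
        for j in range(slices):
            a = i * (slices + 1) + j                 # indices of one grid cell
            b = a + slices + 1
            triangles.append((a, b, a + 1))          # split each cell into two triangles
            triangles.append((a + 1, b, b + 1))
    return vertices, triangles

verts, tris = tessellate_sphere()
print(len(verts), "vertices,", len(tris), "triangles")
```

A finer grid (more stacks and slices) approximates the curved surface more closely, at the cost of more triangles, which is the trade-off mentioned for polygonal models below.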
There are three popular ways to represent a model:
- Polygonal modeling – Points in 3D space, called vertices, are connected by line segments to form a polygon mesh. The vast majority of 3D models today are built as textured polygonal models, because they are flexible and because computers can render them so quickly. However, polygons are planar and can only approximate curved surfaces using many polygons.
- Curve modeling – Surfaces are defined by curves, which are influenced by weighted control points. The curve follows (but does not necessarily interpolate) the points. Increasing the weight for a point will pull the curve closer to that point. Curve types include nonuniform rational B-spline (NURBS), splines, patches, and geometric primitives
- Digital sculpting – Still a fairly new method of modeling, 3D sculpting has become very popular in the few years it has been around. There are currently three types of digital sculpting: Displacement, which is the most widely used among applications at this moment, uses a dense model (often generated by subdivision surfaces of a polygon control mesh) and stores new locations for the vertex positions through use of an image map that stores the adjusted locations. Volumetric, loosely based on voxels, has similar capabilities as displacement but does not suffer from polygon stretching when there are not enough polygons in a region to achieve a deformation. Dynamic tessellation is similar to voxel but divides the surface using triangulation to maintain a smooth surface and allow finer details. These methods allow for very artistic exploration, as the model will have a new topology created over it once the model's form and possibly details have been sculpted. The new mesh will usually have the original high-resolution mesh information transferred into displacement data or normal map data if intended for a game engine.
The modeling stage consists of shaping individual objects that are later used in the scene. There are a number of modeling techniques.
Modeling can be performed by means of a dedicated program (e.g., Cinema 4D, Maya, 3ds Max, Blender, LightWave, Modo) or an application component (Shaper, Lofter in 3ds Max) or some scene description language (as in POV-Ray). In some cases, there is no strict distinction between these phases; in such cases modeling is just part of the scene creation process (this is the case, for example, with Caligari trueSpace and Realsoft 3D).
3D models can also be created using the technique of Photogrammetry with dedicated programs such as RealityCapture, Metashape, 3DF Zephyr, and Meshroom, and cleanup applications such as MeshLab, netfabb or MeshMixer. Photogrammetry creates models using algorithms to interpret the shape and texture of real-world objects and environments based on photographs taken from many angles of the subject.
Complex materials such as blowing sand, clouds, and liquid sprays are modeled with particle systems, and are a mass of 3D coordinates which have either points, polygons, texture splats, or sprites assigned to them.
The first widely available commercial application of human virtual models appeared in 1998 on the Lands' End web site. The human virtual models were created by the company My Virtual Mode Inc. and enabled users to create a model of themselves and try on 3D clothing. There are several modern programs that allow for the creation of virtual human models (Poser being one example).
The development of cloth simulation software such as Marvelous Designer, CLO3D and Optitex, has enabled artists and fashion designers to model dynamic 3D clothing on the computer. Dynamic 3D clothing is used for virtual fashion catalogs, as well as for dressing 3D characters for video games, 3D animation movies, for digital doubles in movies as well as for making clothes for avatars in virtual worlds such as SecondLife.
Compared to 2D methods
3D photorealistic effects are often achieved without wireframe modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers.
Advantages of wireframe 3D modeling over exclusively 2D methods include:
- Flexibility, ability to change angles or animate images with quicker rendering of the changes;
- Ease of rendering, automatic calculation and rendering photorealistic effects rather than mentally visualizing or estimating;
- Accurate photorealism, less chance of human error in misplacing, overdoing, or forgetting to include a visual effect.
Disadvantages compared to 2D photorealistic rendering may include a software learning curve and difficulty achieving certain photorealistic effects. Some photorealistic effects may be achieved with special rendering filters included in the 3D modeling software. For the best of both worlds, some artists use a combination of 3D modeling followed by editing the 2D computer-rendered images from the 3D model.
3D model market
A large market for 3D models (as well as 3D-related content, such as textures, scripts, etc.) still exists – either for individual models or large collections. Several online marketplaces for 3D content allow individual artists to sell content that they have created, including TurboSquid, CGStudio, CreativeMarket, Sketchfab, CGTrader and Cults. Often, the artists' goal is to get additional value out of assets they have previously created for projects. By doing so, artists can earn more money out of their old content, and companies can save money by buying pre-made models instead of paying an employee to create one from scratch. These marketplaces typically split the sale between themselves and the artist that created the asset, with artists getting 40% to 95% of the sales according to the marketplace. In most cases, the artist retains ownership of the 3D model; the customer only buys the right to use and present the model. Some artists sell their products directly in their own stores, offering their products at a lower price by not using intermediaries.
Over the last several years numerous marketplaces specialized in 3D printing models have emerged. Some of the 3D printing marketplaces are combination of models sharing sites, with or without a built in e-com capability. Some of those platforms also offer 3D printing services on demand, software for model rendering and dynamic viewing of items, etc. 3D printing file sharing platforms include Shapeways, Sketchfab, Pinshape, Thingiverse, TurboSquid, CGTrader, Threeding, MyMiniFactory, and GrabCAD.
3D printing is a form of additive manufacturing technology in which a three-dimensional object is created by laying down successive layers of material.
3D printing is a great way to create objects because it allows you to make objects that you could not make otherwise without having complex, expensive molds created or by having the objects made from multiple parts. A 3D printed part can be changed by simply editing the 3D model, which avoids additional tooling and can save time and money. 3D printing is also useful for testing out an idea without having to go through the full production process, which makes it easy to get a physical form of a person's or company's idea.
In recent years, there has been an upsurge in the number of companies offering personalized 3D printed models of objects that have been scanned, designed in CAD software, and then printed to the customer's requirements. As previously mentioned, 3D models can be purchased from online marketplaces and printed by individuals or companies using commercially available 3D printers, enabling the home-production of objects such as spare parts, mathematical models, and even medical equipment.
3D modeling is used in various industries like film, animation and gaming, interior design and architecture. It is also used in the medical industry to create interactive representations of anatomy. A wide range of 3D software is also used in constructing digital representations of mechanical models or parts before they are actually manufactured. CAD/CAM-related software is used in such fields, and with this software one can not only construct the parts, but also assemble them and observe their functionality.
3D modeling is also used in the field of Industrial Design, wherein products are 3D modeled before representing them to the clients. In Media and Event industries, 3D modeling is used in Stage/Set Design.
The OWL 2 translation of the vocabulary of X3D can be used to provide semantic descriptions for 3D models, which is suitable for indexing and retrieval of 3D models by features such as geometry, dimensions, material, texture, diffuse reflection, transmission spectra, transparency, reflectivity, opalescence, glazes, varnishes, and enamels (as opposed to unstructured textual descriptions or 2.5D virtual museums and exhibitions using Google Street View on Google Arts & Culture, for example). The RDF representation of 3D models can be used in reasoning, which enables intelligent 3D applications which, for example, can automatically compare two 3D models by volume.
Testing a 3D solid model
3D solid models can be tested in different ways depending on what is needed, by using simulation, mechanism design, and analysis. If a motor is designed and assembled correctly (this can be done differently depending on what 3D modeling program is being used), using the mechanism tool the user should be able to tell if the motor or machine is assembled correctly by how it operates. Different designs will need to be tested in different ways. For example, a pool pump would need a simulation run of the water flowing through the pump to see how the water moves through it. These tests verify whether a product is developed correctly or whether it needs to be modified to meet its requirements.
- List of 3D modeling software
- List of common 3D test models
- List of file formats : 3D graphics
- 3D computer graphics software
- 3D printing
- 3D scanner
- 3D scanning
- Additive Manufacturing File Format
- Building information modeling
- Cloth modeling
- Computer facial animation
- Digital geometry
- Edge loop
- Geological modeling
- Industrial CT scanning
- Marching cubes
- Open CASCADE
- Polygon mesh
- Polygonal modeling
- Scaling (geometry)
- Stanford Bunny
- Triangle mesh
- Utah teapot
- "ERIS Project Starts". ESO Announcement. Retrieved 14 June 2013.
- "What is Solid Modeling? 3D CAD Software. Applications of Solid Modeling". Brighthub Engineering. Retrieved 2017-11-18.
- "3D Scanning Advancements in Medical Science". Konica Minolta. Archived from the original on 2011-09-07. Retrieved 24 October 2011.
- Jon Radoff, Anatomy of an MMORPG Archived 2009-12-13 at the Wayback Machine, August 22, 2008
- "Lands' End First With New 'My Virtual Model' Technology: Takes Guesswork Out of Web Shopping for Clothes That Fit". PRNewswire. Lands' End. February 12, 2004. Retrieved 2013-11-24.
- "All About Virtual Fashion and the Creation of 3D Clothing". CGElves. Retrieved 25 December 2015.
- "3D Clothes made for The Hobbit using Marvelous Designer". 3DArtist. Retrieved 9 May 2013.
- "What is 3D Printing? The definitive guide". 3D Hubs. Retrieved 2017-11-18.
- "3D Printing Toys". Business Insider. Retrieved 25 January 2015.
- "Printout3D—Wolfram Language Documentation". reference.wolfram.com. Retrieved 2016-08-06.
- "New Trends in 3D Printing – Customized Medical Devices". Envisiontec. Retrieved 25 January 2015.
- "3D virtual reality models help yield better surgical outcomes: Innovative technology improves visualization of patient anatomy, study finds". ScienceDaily. Retrieved 2019-09-19.
- Sikos, L. F. (2016). Rich Semantics for Interactive 3D Models of Cultural Artifacts. Communications in Computer and Information Science. 672. Springer International Publishing. pp. 169–180. doi:10.1007/978-3-319-49157-8_14.
- Yu, D.; Hunter, J. (2014). "X3D Fragment Identifiers—Extending the Open Annotation Model to Support Semantic Annotation of 3D Cultural Heritage Objects over the Web". International Journal of Heritage in the Digital Era. 3 (3): 579–596. doi:10.1260/2047-49126.96.36.1999. |
Whether you are a science teacher who wants to amaze your students or just an amateur chemist, the iodine clock reaction is an experiment worth performing. A common purpose of the experiment is to determine the order of H2O2, a reactant in the iodine clock reaction; the reaction studied in that case is the acid-buffered oxidation of iodide to triiodide by hydrogen peroxide. In the persulfate variation, the iodine clock reaction is a classical chemical clock demonstration used to display chemical kinetics in action; it was first described by Hans Heinrich Landolt in 1886. A clock reaction produced by mixing chlorate and iodine solutions in perchloric acid media has also been reported, the first example of a clock reaction using chlorate as a reagent. Clock reactions investigate reaction kinetics by mixing substances which, after a delay, suddenly start to change colour.
Two colorless solutions are mixed and at first there is no visible reaction. In the persulfate version the overall reaction is S2O8^2−(aq) + 2 I^−(aq) → I2(aq) + 2 SO4^2−(aq); to measure the rate of this reaction we must measure the rate of concentration change of one of the reactants or products. The experiment is simple to perform, which is part of why it is such a popular introductory chemistry demonstration. The demonstration of an “iodine clock” involves a chemical reaction that suddenly turns blue due to the formation of the starch–iodine complex.
The iodide and iodate anions are often used for quantitative volumetric analysis, for example in iodometry and the iodine clock reaction (in which iodine also serves as a test for starch, forming a dark blue complex), and aqueous alkaline iodine solution is used in the iodoform test for methyl ketones. In a typical general chemistry laboratory, the rate constant for an iodine clock reaction is determined in order to learn about integrated rate laws. Analyzing the kinetic data usually involves determining the rate law, including the reaction order for reagents such as potassium persulfate (K2S2O8).
First discovered in 1886 by Hans Heinrich Landolt, the iodine clock reaction is one of the classic chemical kinetics experiments: two clear solutions are mixed and, after a delay, the mixture suddenly changes colour. The reaction can be used to examine how temperature, the concentration of reactants, and the presence of a catalyst affect the rate of reaction; a typical experiment uses several runs, each involving the reaction of two prepared mixtures.
In the demonstration form of the hydrogen peroxide/potassium iodide ‘clock’ reaction, a solution of hydrogen peroxide is mixed with one containing potassium iodide, starch and sodium thiosulfate. When determining the rate law, an unknown order appearing as an exponent can be solved for by taking logarithms of both sides of the rate equation. A related ‘vitamin C clock’ relies on a delayed colour change in a mixture of iodine and starch when vitamin C is added: iodide ions react with hydrogen peroxide to produce elemental iodine, which is blue in the presence of starch, but before that can happen the vitamin C quickly reacts with and consumes the elemental iodine.
A reaction that readily lends itself to kinetic investigations is the iodine clock reaction, so called because its kinetics are well known and reliable; in reality, two reactions are involved. To determine the complete rate law, the order of reaction with respect to the iodate ion, m, must be determined from rate measurements.
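A common way to extract such an order is the method of initial rates. The sketch below uses hypothetical data (not measurements from this text) for two runs in which only the concentration of one reactant is changed; the order with respect to that reactant follows from the ratio of the measured initial rates.

```python
# Minimal sketch: method of initial rates with hypothetical data.
import math

# Two runs differing only in the concentration of the reactant of interest.
conc1, rate1 = 0.010, 1.2e-6   # mol/L and mol/(L*s), hypothetical
conc2, rate2 = 0.020, 4.9e-6   # doubling the concentration roughly quadruples the rate

# rate = k * [X]^n  =>  rate2/rate1 = (conc2/conc1)^n  =>  n = log(rate ratio) / log(conc ratio)
order = math.log(rate2 / rate1) / math.log(conc2 / conc1)
print(f"order with respect to X ≈ {order:.2f}")   # close to 2 for these numbers

# With the order known, the rate constant k follows from either run.
k = rate1 / conc1 ** round(order)
print(f"k ≈ {k:.3g}")
```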
The order of a reaction can be found experimentally. The study of kinetics using the iodine clock reaction has provided an interesting experience for students for many years, although it sometimes proves frustrating for those preparing solutions for the laboratory, because solutions that perform properly in one lab may degrade before other lab sections meet. The iodine clock reaction times how long it takes for a fixed amount of thiosulphate ions to be used up, i.e. the time taken for a fixed number of moles of iodine to be produced in the reaction between potassium iodide and an oxidising agent (usually hydrogen peroxide). |
An equatorial bulge is a difference between the equatorial and polar diameters of a planet, due to the centrifugal force of its rotation. A rotating body tends to form an oblate spheroid rather than a sphere. The Earth has an equatorial bulge of 42.77 km (26.58 mi): that is, its diameter measured across the equatorial plane (12,756.27 km (7,926.38 mi)) is 42.77 km more than that measured between the poles (12,713.56 km (7,899.84 mi)). An observer standing at sea level on either pole, therefore, is 21.36 km closer to Earth's centrepoint than if standing at sea level on the equator. The value of Earth's radius may be approximated by the average of these radii.
An often-cited result of Earth's equatorial bulge is that the highest point on Earth, measured from the center outwards, is the peak of Mount Chimborazo in Ecuador, rather than Mount Everest. But since the ocean, like the Earth and the atmosphere, bulges, Chimborazo is not as high above sea level as Everest is.
Basing this on centripetal force, the relationship F = Mv²/R applies. Viewing the globe as a series of rotating discs, the mass M and radius R at the poles get very small at the same time and thus produce a smaller force for the same velocity. Moving towards the equator, while R gets much bigger, M increases more quickly than R, thus producing a greater force at the equator. This may be because Earth's core is included in the cross-sectional disc at the equator; the density of the Earth's core is significantly higher than that of the Earth's outer layers, so it contributes more to the mass of the disc. There is a bulge in the water envelope of the oceans surrounding the Earth. The fact that water is a fluid, that the Earth experiences its greatest centrifugal force at the equator, and that the greatest bulge of that water envelope occurs at the equator demonstrates that the centrifugal force of Earth's rotation helps to produce that bulge independent of tides. Sea level at the equator is 21.36 km higher than sea level at the poles in terms of their distances from the center of the planet.
The equilibrium as a balance of energies
Gravity tends to contract a celestial body into a sphere, the shape for which all the mass is as close to the center of gravity as possible. Rotation causes a distortion from this spherical shape; a common measure of the distortion is the flattening (sometimes called ellipticity or oblateness), which can depend on a variety of factors including the size, angular velocity, density, and elasticity.
To get a feel for the type of equilibrium that is involved, imagine someone seated in a spinning swivel chair, with weights in their hands. If the person in the chair pulls the weights towards them, they are doing work and their rotational kinetic energy increases: because angular momentum is conserved, the rotation rate goes up as the weights are pulled inward. The increase of rotation rate is so strong that at the faster rotation rate the required centripetal force is larger than with the starting rotation rate.
Something analogous to this occurs in planet formation. Matter first coalesces into a slowly rotating disk-shaped distribution, and collisions and friction convert kinetic energy to heat, which allows the disk to self-gravitate into a very oblate spheroid.
As long as the proto-planet is still too oblate to be in equilibrium, the release of gravitational potential energy on contraction keeps driving the increase in rotational kinetic energy. As the contraction proceeds the rotation rate keeps going up, hence the required force for further contraction keeps going up. There is a point where the increase of rotational kinetic energy on further contraction would be larger than the release of gravitational potential energy. The contraction process can only proceed up to that point, so it halts there.
As long as there is no equilibrium there can be violent convection, and as long as there is violent convection friction can convert kinetic energy to heat, draining rotational kinetic energy from the system. When the equilibrium state has been reached then large scale conversion of kinetic energy to heat ceases. In that sense the equilibrium state is the lowest state of energy that can be reached.
The Earth's rotation rate is still slowing down, though gradually, by about two thousandths of a second per rotation every 100 years. Estimates of how fast the Earth was rotating in the past vary, because it is not known exactly how the moon was formed. Estimates of the Earth's rotation 500 million years ago are around 20 modern hours per "day".
The Earth's rate of rotation is slowing down mainly because of tidal interactions with the Moon and the Sun. Since the solid parts of the Earth are ductile, the Earth's equatorial bulge has been decreasing in step with the decrease in the rate of rotation.
Differences in gravitational acceleration
Because of a planet's rotation around its own axis, the gravitational acceleration is less at the equator than at the poles. In the 17th century, following the invention of the pendulum clock, French scientists found that clocks sent to French Guiana, on the northern coast of South America, ran slower than their exact counterparts in Paris. Measurements of the acceleration due to gravity at the equator must also take into account the planet's rotation. Any object that is stationary with respect to the surface of the Earth is actually following a circular trajectory, circumnavigating the Earth's axis. Pulling an object into such a circular trajectory requires a force. The acceleration that is required to circumnavigate the Earth's axis along the equator at one revolution per sidereal day is 0.0339 m/s². Providing this acceleration decreases the effective gravitational acceleration. At the equator, the effective gravitational acceleration is 9.7805 m/s². This means that the true gravitational acceleration at the equator must be 9.8144 m/s² (9.7805 + 0.0339 = 9.8144).
At the poles, the gravitational acceleration is 9.8322 m/s². The difference of 0.0178 m/s² between the gravitational acceleration at the poles and the true gravitational acceleration at the equator is because objects located on the equator are about 21 kilometers further away from the center of mass of the Earth than at the poles, which corresponds to a smaller gravitational acceleration.
In summary, there are two contributions to the fact that the effective gravitational acceleration is less strong at the equator than at the poles. About 70 percent of the difference is contributed by the fact that objects circumnavigate the Earth's axis, and about 30 percent is due to the non-spherical shape of the Earth.
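The 0.0339 m/s² figure and the split of the pole-to-equator difference can be checked with a short calculation (a minimal sketch using the sidereal day and the equatorial radius quoted earlier in this article):

```python
# Minimal sketch: centripetal acceleration at the equator and the split of the difference.
import math

sidereal_day = 86164.1          # seconds for one rotation of the Earth
r_equator = 12_756_270 / 2      # equatorial radius in metres (from the diameter above)

omega = 2 * math.pi / sidereal_day
a_centripetal = omega ** 2 * r_equator
print(f"centripetal acceleration at the equator: {a_centripetal:.4f} m/s^2")  # about 0.0339

g_pole = 9.8322                  # gravitational acceleration at the poles
g_equator_effective = 9.7805     # effective value at the equator
total_difference = g_pole - g_equator_effective

print(f"share from rotation: {a_centripetal / total_difference:.0%}")       # roughly two thirds
print(f"share from the bulge: {1 - a_centripetal / total_difference:.0%}")  # the remainder
```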
The diagram illustrates that on all latitudes the effective gravitational acceleration is decreased by the requirement of providing a centripetal force; the decreasing effect is strongest on the equator.
The fact that the Earth's gravitational field slightly deviates from being spherically symmetrical also affects the orbits of satellites. The principal effect is to cause nodal precession, so that the plane of the orbit does not remain fixed in inertial space. Smaller effects include deviation of orbits away from pure ellipses. This is especially important in the case of the trajectories of GPS satellites.
Other celestial bodies
Generally any celestial body that is rotating (and that is sufficiently massive to draw itself into spherical or near spherical shape) will have an equatorial bulge matching its rotation rate. Saturn is the planet with the largest equatorial bulge in the Solar System (11808 km, 7337 miles). However, Haumea is the dwarf planet with the largest equatorial bulge, indeed greater than that of Saturn. This gives it the shape of a tri-axial ellipsoid.
The following is a table of the equatorial bulge of some major celestial bodies of the Solar System:
|Body||Equatorial diameter||Polar diameter||Equatorial bulge||Flattening ratio|
|Earth||12,756.27 km||12,713.56 km||42.77 km||1:298.2575|
|Mars||6,805 km||6,754.8 km||50.2 km||1:135.56|
|Ceres||975 km||909 km||66 km||1:14.77|
|Jupiter||143,884 km||133,709 km||10,175 km||1:14.14|
|Saturn||120,536 km||108,728 km||11,808 km||1:10.21|
|Uranus||51,118 km||49,946 km||1,172 km||1:43.62|
|Neptune||49,528 km||48,682 km||846 km||1:58.54|
The flattening coefficient for the equilibrium configuration of a self-gravitating spheroid, composed of uniform density incompressible fluid, rotating steadily about some fixed axis, for a small amount of flattening, is approximated by:
f = (a_e − a_p) / a = (5/4) ω² a³ / (G M) = 15π / (4 G T² ρ),
where a_e and a_p are respectively the equatorial and polar radius, a is the mean radius, ω = 2π/T is the angular velocity, T is the rotation period, G is the universal gravitational constant, M is the total body mass, and ρ is the body density.
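Plugging Earth's rotation period and mean density into this uniform-density approximation (a minimal sketch; the result overestimates Earth's actual flattening of about 1:298 because the real Earth is denser toward its centre than a uniform fluid):

```python
# Minimal sketch: uniform-density equilibrium flattening, f = 15*pi / (4*G*T^2*rho).
import math

G = 6.674e-11        # universal gravitational constant, m^3 kg^-1 s^-2
T = 86164.1          # Earth's rotation period (sidereal day), s
rho = 5514.0         # Earth's mean density, kg/m^3

f = 15 * math.pi / (4 * G * T ** 2 * rho)
print(f"flattening f ≈ {f:.5f} ≈ 1:{1 / f:.0f}")   # about 1:230 for a uniform-density Earth
```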
- Hadhazy, Adam. "Fact or Fiction: The Days (and Nights) Are Getting Longer". Scientific American. Retrieved 5 December 2011.
- "Rotational Flattening". utexas.edu. |
Asteroids: Sizes and Shapes
Asteroids come in all different shapes and sizes. A few of the largest asteroids are nearly spherical, like the Earth. Ceres is a good example:
Most asteroids, however, are very irregular, like Eros (which was visited by the NEAR spacecraft in 2000):
Ceres is also the largest asteroid, but it is still much smaller than any of the planets. The other asteroids are smaller still. This diagram shows many of the largest asteroids in comparison to each other and to Mars. The location of each asteroid indicates its place in the Main Belt and its location above or below the plane of the planets’ orbits:
Credit: Clark R. Chapman, “Asteroids,” in The New Solar System.
The smallest asteroids are just small rocks orbiting through space.
To find out the size and other information about any numbered asteroid, visit the Small Body Database Browser at NASA’s Jet Propulsion Laboratory (JPL) web site.
Just type in the name or number of an asteroid like 1 Ceres or 433 Eros in the box and hit “Return.” That will take you to a page with lots of information about the asteroid. |
|dB||Power ratio||Amplitude ratio|
|6||3.981 ≈ 4||1.995 ≈ 2|
|3||1.995 ≈ 2||1.413 ≈ √2|
|−3||0.501 ≈ 1⁄2||0.708 ≈ 1⁄√2|
|−6||0.251 ≈ 1⁄4||0.501 ≈ 1⁄2|
|An example scale showing power ratios x, amplitude ratios √x, and dB equivalents 10 log10 x.|
The decibel (symbol: dB) is a relative unit of measurement corresponding to one tenth of a bel (B). It is used to express the ratio of one value of a power or field quantity to another, on a logarithmic scale, the logarithmic quantity being called the power level or field level, respectively. Two signals whose levels differ by one decibel have a power ratio of 10^(1/10) (approximately 1.25893) and an amplitude (field quantity) ratio of 10^(1/20) (approximately 1.12202).
It can be used to express a change in value (e.g., +1 dB or −1 dB) or an absolute value. In the latter case, it expresses the ratio of a value to a fixed reference value; when used in this way, a suffix that indicates the reference value is often appended to the decibel symbol. For example, if the reference value is 1 volt, then the suffix is "V" (e.g., "20 dBV"), and if the reference value is one milliwatt, then the suffix is "m" (e.g., "20 dBm").
Two different scales are used when expressing a ratio in decibels, depending on the nature of the quantities: power and field (root-power). When expressing a power ratio, the number of decibels is ten times its logarithm in base 10; that is, a change in power by a factor of 10 corresponds to a 10 dB change in level. When expressing field (root-power) quantities, a change in amplitude by a factor of 10 corresponds to a 20 dB change in level. The decibel scales differ by a factor of two so that the related power and field levels change by the same number of decibels with linear loads.
The definition of the decibel is based on the measurement of power in telephony of the early 20th century in the Bell System in the United States. One decibel is one tenth (deci-) of one bel, named in honor of Alexander Graham Bell; however, the bel is seldom used. Today, the decibel is used for a wide variety of measurements in science and engineering, most prominently in acoustics, electronics, and control theory. In electronics, the gains of amplifiers, attenuation of signals, and signal-to-noise ratios are often expressed in decibels.
In the International System of Quantities, the decibel is defined as a unit of measurement for quantities of type level or level difference, which are defined as the logarithm of the ratio of power- or field-type quantities.
The decibel originates from methods used to quantify signal loss in telegraph and telephone circuits. The unit for loss was originally Miles of Standard Cable (MSC). 1 MSC corresponded to the loss of power over a 1 mile (approximately 1.6 km) length of standard telephone cable at a frequency of 5000 radians per second (795.8 Hz), and matched closely the smallest attenuation detectable to the average listener. The standard telephone cable implied was "a cable having uniformly distributed resistance of 88 Ohms per loop-mile and uniformly distributed shunt capacitance of 0.054 microfarads per mile" (approximately corresponding to 19 gauge wire).
In 1924, Bell Telephone Laboratories received favorable response to a new unit definition among members of the International Advisory Committee on Long Distance Telephony in Europe and replaced the MSC with the Transmission Unit (TU). 1 TU was defined such that the number of TUs was ten times the base-10 logarithm of the ratio of measured power to a reference power. The definition was conveniently chosen such that 1 TU approximated 1 MSC; specifically, 1 MSC was 1.056 TU. In 1928, the Bell system renamed the TU into the decibel, being one tenth of a newly defined unit for the base-10 logarithm of the power ratio. It was named the bel, in honor of the telecommunications pioneer Alexander Graham Bell. The bel is seldom used, as the decibel was the proposed working unit.
The naming and early definition of the decibel is described in the NBS Standard's Yearbook of 1931:
Since the earliest days of the telephone, the need for a unit in which to measure the transmission efficiency of telephone facilities has been recognized. The introduction of cable in 1896 afforded a stable basis for a convenient unit and the "mile of standard" cable came into general use shortly thereafter. This unit was employed up to 1923 when a new unit was adopted as being more suitable for modern telephone work. The new transmission unit is widely used among the foreign telephone organizations and recently it was termed the "decibel" at the suggestion of the International Advisory Committee on Long Distance Telephony.
The decibel may be defined by the statement that two amounts of power differ by 1 decibel when they are in the ratio of 10^0.1 and any two amounts of power differ by N decibels when they are in the ratio of 10^(0.1N). The number of transmission units expressing the ratio of any two powers is therefore ten times the common logarithm of that ratio. This method of designating the gain or loss of power in telephone circuits permits direct addition or subtraction of the units expressing the efficiency of different parts of the circuit ...
In 1954, J. W. Horton argued that the use of the decibel as a unit for quantities other than transmission loss led to confusion, and suggested the name logit for "standard magnitudes which combine by multiplication", to contrast with the name unit for "standard magnitudes which combine by addition".
In April 2003, the International Committee for Weights and Measures (CIPM) considered a recommendation for the inclusion of the decibel in the International System of Units (SI), but decided against the proposal. However, the decibel is recognized by other international bodies such as the International Electrotechnical Commission (IEC) and International Organization for Standardization (ISO). The IEC permits the use of the decibel with field quantities as well as power and this recommendation is followed by many national standards bodies, such as NIST, which justifies the use of the decibel for voltage ratios. The term field quantity is deprecated by ISO 80000-1, which favors root-power. In spite of their widespread use, suffixes (such as in dBA or dBV) are not recognized by the IEC or ISO.
ISO 80000-3 describes definitions for quantities and units of space and time.
The ISO Standard 80000-3:2006 defines the following quantities. The decibel (dB) is one-tenth of a bel: 1 dB = 0.1 B. The bel (B) is 1⁄2 ln(10) nepers: 1 B = 1⁄2 ln(10) Np. The neper is the change in the level of a field quantity when the field quantity changes by a factor of e, that is 1 Np = ln(e) = 1, thereby relating all of the units as nondimensional natural logarithms of field-quantity ratios, with 1 dB = 0.11513… Np. Finally, the level of a quantity is the logarithm of the ratio of the value of that quantity to a reference value of the same kind of quantity.
Therefore, the bel represents the logarithm of a ratio between two power quantities of 10:1, or the logarithm of a ratio between two field quantities of √10:1.
Two signals whose levels differ by one decibel have a power ratio of 10^(1/10), which is approximately 1.25893, and an amplitude (field quantity) ratio of 10^(1/20) (approximately 1.12202).
The bel is rarely used either without a prefix or with SI unit prefixes other than deci ; it is preferred, for example, to use hundredths of a decibel rather than millibels. Thus, five one-thousandths of a bel would normally be written '0.05 dB', and not '5 mB'.
The method of expressing a ratio as a level in decibels depends on whether the measured property is a power quantity or a root-power quantity; see Power, root-power, and field quantities for details.
When referring to measurements of power quantities, a ratio can be expressed as a level in decibels by evaluating ten times the base-10 logarithm of the ratio of the measured quantity to the reference value. Thus, the ratio of P (measured power) to P0 (reference power) is represented by LP, that ratio expressed in decibels, which is calculated using the formula: LP = 10 log10(P / P0) dB.
The base-10 logarithm of the ratio of the two power quantities is the number of bels. The number of decibels is ten times the number of bels (equivalently, a decibel is one-tenth of a bel). P and P0 must measure the same type of quantity, and have the same units before calculating the ratio. If P = P0 in the above equation, then LP = 0. If P is greater than P0 then LP is positive; if P is less than P0 then LP is negative.
Rearranging the above equation gives the following formula for P in terms of P0 and LP: P = P0 · 10^(LP/10).
When referring to measurements of field quantities, it is usual to consider the ratio of the squares of F (measured field) and F0 (reference field). This is because in most applications power is proportional to the square of field, and historically their definitions were formulated to give the same value for relative ratios in such typical cases. Thus, the following definition is used: LF = 10 log10(F² / F0²) dB = 20 log10(F / F0) dB.
The formula may be rearranged to give F = F0 · 10^(LF/20).
Similarly, in electrical circuits, dissipated power is typically proportional to the square of voltage or current when the impedance is constant. Taking voltage as an example, this leads to the equation for power gain level LG: LG = 20 log10(Vout / Vin) dB,
where Vout is the root-mean-square (rms) output voltage, Vin is the rms input voltage. A similar formula holds for current.
The term root-power quantity is introduced by ISO Standard 80000-1:2009 as a substitute of field quantity. The term field quantity is deprecated by that standard.
Although power and field quantities are different quantities, their respective levels are historically measured in the same units, typically decibels. A factor of 2 is introduced to make changes in the respective levels match under restricted conditions such as when the medium is linear and the same waveform is under consideration with changes in amplitude, or the medium impedance is linear and independent of both frequency and time. This relies on the relationship (P(t) / P0) = (F(t) / F0)² holding. In a nonlinear system, this relationship does not hold by the definition of linearity. However, even in a linear system in which the power quantity is the product of two linearly related quantities (e.g. voltage and current), if the impedance is frequency- or time-dependent, this relationship does not hold in general, for example if the energy spectrum of the waveform changes.
For differences in level, the required relationship is relaxed from that above to one of proportionality (i.e., the reference quantities P0 and F0 need not be related), or equivalently, P2 / P1 = (V2 / V1)²
must hold to allow the power level difference to be equal to the field level difference from power P1 and V1 to P2 and V2. An example might be an amplifier with unity voltage gain independent of load and frequency driving a load with a frequency-dependent impedance: the relative voltage gain of the amplifier is always 0 dB, but the power gain depends on the changing spectral composition of the waveform being amplified. Frequency-dependent impedances may be analyzed by considering the quantities power spectral density and the associated field quantities via the Fourier transform, which allows elimination of the frequency dependence in the analysis by analyzing the system at each frequency independently.
Since logarithm differences measured in these units often represent power ratios and field ratios, values for both are shown below. The bel is traditionally used as a unit of logarithmic power ratio, while the neper is used for logarithmic field (amplitude) ratio.
|Unit||In decibels||In bels||In nepers||Power ratio||Field ratio|
|1 dB||1 dB||0.1 B||0.11513 Np||10^(1⁄10) ≈ 1.25893||10^(1⁄20) ≈ 1.12202|
|1 Np||8.68589 dB||0.868589 B||1 Np||e² ≈ 7.38906||e ≈ 2.71828|
|1 B||10 dB||1 B||1.1513 Np||10||10^(1⁄2) ≈ 3.16228|
The unit dBW is often used to denote a ratio for which the reference is 1 W, and similarly dBm for a 1 mW reference point.
For example, a power gain of 30 dB corresponds to a power ratio of 1 kW / 1 W and to an amplitude ratio of 31.62, since (31.62 V / 1 V)² ≈ 1 kW / 1 W, illustrating the consequence from the definitions above that LG has the same value, 30 dB, regardless of whether it is obtained from powers or from amplitudes, provided that in the specific system being considered power ratios are equal to amplitude ratios squared.
A change in power ratio by a factor of 10 corresponds to a change in level of 10 dB. A change in power ratio by a factor of 2 or 1⁄2 is approximately a change of 3 dB. More precisely, the change is ±3.0103 dB, but this is almost universally rounded to "3 dB" in technical writing. This implies an increase in voltage by a factor of √2 ≈ 1.4142. Likewise, a doubling or halving of the voltage, and quadrupling or quartering of the power, is commonly described as "6 dB" rather than ±6.0206 dB.
Should it be necessary to make the distinction, the number of decibels is written with additional significant figures. 3.000 dB corresponds to a power ratio of 10^(3⁄10), or 1.9953, about 0.24% different from exactly 2, and a voltage ratio of 1.4125, 0.12% different from exactly √2. Similarly, an increase of 6.000 dB corresponds to a power ratio of 10^(6⁄10) ≈ 3.9811, about 0.5% different from 4.
The decibel is useful for representing large ratios and for simplifying representation of multiplied effects such as attenuation from multiple sources along a signal chain. Its application in systems with additive effects is less intuitive.
The logarithmic scale nature of the decibel means that a very large range of ratios can be represented by a convenient number, in a manner similar to scientific notation. This allows one to clearly visualize huge changes of some quantity. See Bode plot and Semi-log plot . For example, 120 dB SPL may be clearer than "a trillion times more intense than the threshold of hearing".
Level values in decibels can be added instead of multiplying the underlying power values, which means that the overall gain of a multi-component system, such as a series of amplifier stages, can be calculated by summing the gains in decibels of the individual components, rather than multiplying the amplification factors; that is, log(A × B × C) = log(A) + log(B) + log(C). Practically, this means that, armed only with the knowledge that 1 dB is a power gain of approximately 26%, 3 dB is approximately 2× power gain, and 10 dB is 10× power gain, it is possible to determine the power ratio of a system from the gain in dB with only simple addition and multiplication. For example, a gain of 16 dB can be read as 10 dB + 3 dB + 3 dB, corresponding to a power ratio of about 10 × 2 × 2 = 40.
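A short numeric check of this additivity (a minimal sketch; the stage gains are arbitrary example values, not taken from this text):

```python
# Minimal sketch: summing stage gains in dB equals multiplying the power ratios.
import math

def db_to_power_ratio(db):
    return 10 ** (db / 10)

def power_ratio_to_db(ratio):
    return 10 * math.log10(ratio)

stage_gains_db = [10.0, 3.0, -6.0, 14.0]   # arbitrary amplifier/attenuator stages

total_db = sum(stage_gains_db)             # addition on the decibel scale
total_ratio = math.prod(db_to_power_ratio(g) for g in stage_gains_db)  # multiplication on the linear scale

print(f"total gain: {total_db:.1f} dB")
print(f"power ratio: {total_ratio:.2f}")
print(f"back to dB: {power_ratio_to_db(total_ratio):.1f} dB")   # matches the simple sum
```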
However, according to its critics, the decibel creates confusion, obscures reasoning, is more related to the era of slide rules than to modern digital processing, and is cumbersome and difficult to interpret.
According to Mitschke, "The advantage of using a logarithmic measure is that in a transmission chain, there are many elements concatenated, and each has its own gain or attenuation. To obtain the total, addition of decibel values is much more convenient than multiplication of the individual factors." However, for the same reason that humans excel at additive operation over multiplication, decibels are awkward in inherently additive operations: "if two machines each individually produce a sound pressure level of, say, 90 dB at a certain point, then when both are operating together we should expect the combined sound pressure level to increase to 93 dB, but certainly not to 180 dB!"; "suppose that the noise from a machine is measured (including the contribution of background noise) and found to be 87 dBA but when the machine is switched off the background noise alone is measured as 83 dBA. [...] the machine noise [level (alone)] may be obtained by 'subtracting' the 83 dBA background noise from the combined level of 87 dBA; i.e., 84.8 dBA."; "in order to find a representative value of the sound level in a room a number of measurements are taken at different positions within the room, and an average value is calculated. [...] Compare the logarithmic and arithmetic averages of [...] 70 dB and 90 dB: logarithmic average = 87 dB; arithmetic average = 80 dB."
Addition on a logarithmic scale is called logarithmic addition, and can be defined by taking exponentials to convert to a linear scale, adding there, and then taking logarithms to return. For example, the machine-noise case above is a logarithmic subtraction: 87 dBA ⊖ 83 dBA = 10 log10(10^8.7 − 10^8.3) dBA ≈ 84.8 dBA. Here the operations on decibels are logarithmic addition and subtraction, while the operations on the linear scale are the usual addition and subtraction.
Note that the logarithmic mean of two levels is obtained from the logarithmic sum by subtracting 10 log10 2 ≈ 3 dB, since logarithmic division is linear subtraction.
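A minimal Python sketch of logarithmic addition, subtraction and averaging, using the figures quoted above (two 90 dB machines; the 87 dBA combined level with an 83 dBA background; the average of 70 dB and 90 dB):

```python
import math

def db_sum(*levels_db):
    """Logarithmic addition: convert levels to linear power, add, convert back."""
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels_db))

def db_difference(total_db, part_db):
    """Logarithmic subtraction: remove one incoherent contribution from a combined level."""
    return 10 * math.log10(10 ** (total_db / 10) - 10 ** (part_db / 10))

print(db_sum(90, 90))                       # ~93.0 dB: two 90 dB machines operating together
print(db_difference(87, 83))                # ~84.8 dBA: machine noise with background removed
print(db_sum(70, 90) - 10 * math.log10(2))  # ~87.0 dB: logarithmic average of 70 dB and 90 dB
```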
Quantities in decibels are not necessarily additive, thus being "of unacceptable form for use in dimensional analysis".
The human perception of the intensity of sound and light more nearly approximates the logarithm of intensity rather than a linear relationship (see Weber–Fechner law), making the dB scale a useful measure.
The decibel is commonly used in acoustics as a unit of sound pressure level. The reference pressure for sound in air is set at the typical threshold of perception of an average human, and there are common comparisons used to illustrate different levels of sound pressure. Sound pressure is a field quantity, therefore the field version of the unit definition is used: Lp = 20 log10(prms/pref) dB, where prms is the root mean square of the measured sound pressure and pref is the standard reference sound pressure of 20 micropascals in air or 1 micropascal in water.
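For illustration only, a short sketch of this field-quantity definition with the reference values stated above:

```python
import math

P_REF_AIR = 20e-6    # 20 micropascals, standard reference in air
P_REF_WATER = 1e-6   # 1 micropascal, standard reference in water

def sound_pressure_level(p_rms, p_ref=P_REF_AIR):
    """Sound pressure level: 20 * log10(p_rms / p_ref) dB."""
    return 20 * math.log10(p_rms / p_ref)

print(sound_pressure_level(20e-6))  # 0.0  -> the nominal threshold of hearing in air
print(sound_pressure_level(1.0))    # ~94  -> 1 Pa rms corresponds to roughly 94 dB SPL
```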
Use of the decibel in underwater acoustics leads to confusion, in part because of this difference in reference value.
The human ear has a large dynamic range in sound reception. The ratio of the sound intensity that causes permanent damage during short exposure to that of the quietest sound that the ear can hear is greater than or equal to 1 trillion (10^12). Such large measurement ranges are conveniently expressed in a logarithmic scale: the base-10 logarithm of 10^12 is 12, which is expressed as a sound pressure level of 120 dB re 20 μPa.
Since the human ear is not equally sensitive to all sound frequencies, the acoustic power spectrum is modified by frequency weighting (A-weighting being the most common standard) to get the weighted acoustic power before converting to a sound level or noise level in decibels.
The decibel is used in telephony and audio. Similarly to its use in acoustics, a frequency-weighted power is often used. For audio noise measurements in electrical circuits, the weightings are called psophometric weightings.
In electronics, the decibel is often used to express power or amplitude ratios (as for gains) in preference to arithmetic ratios or percentages. One advantage is that the total decibel gain of a series of components (such as amplifiers and attenuators) can be calculated simply by summing the decibel gains of the individual components. Similarly, in telecommunications, decibels denote signal gain or loss from a transmitter to a receiver through some medium (free space, waveguide, coaxial cable, fiber optics, etc.) using a link budget.
The decibel unit can also be combined with a reference level, often indicated via a suffix, to create an absolute unit of electric power. For example, it can be combined with "m" for "milliwatt" to produce the "dBm". A power level of 0 dBm corresponds to one milliwatt, and 1 dBm is one decibel greater (about 1.259 mW).
In professional audio specifications, a popular unit is the dBu. This is relative to the root mean square voltage which delivers 1 mW (0 dBm) into a 600-ohm resistor, or √0.6 V ≈ 0.775 VRMS. When used in a 600-ohm circuit (historically, the standard reference impedance in telephone circuits), dBu and dBm are identical.
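A small sketch of the dBm and dBu conversions just described (0 dBm corresponds to 1 mW, 0 dBu to about 0.775 V RMS); the example levels are arbitrary:

```python
import math

def dbm_to_milliwatts(level_dbm):
    """Absolute power in mW from a dBm level (reference: 1 mW)."""
    return 10 ** (level_dbm / 10)

def dbu_to_volts_rms(level_dbu):
    """RMS voltage from a dBu level (reference: the voltage giving 1 mW into 600 ohms)."""
    v_ref = math.sqrt(0.001 * 600)  # ~0.7746 V
    return v_ref * 10 ** (level_dbu / 20)

print(dbm_to_milliwatts(0))  # 1.0 mW
print(dbm_to_milliwatts(1))  # ~1.259 mW, one decibel above 1 mW
print(dbu_to_volts_rms(0))   # ~0.775 V rms
```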
In an optical link, if a known amount of optical power, in dBm (referenced to 1 mW), is launched into a fiber, and the losses, in dB (decibels), of each component (e.g., connectors, splices, and lengths of fiber) are known, the overall link loss may be quickly calculated by addition and subtraction of decibel quantities.
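As a hedged sketch with made-up component losses, the link-loss bookkeeping reduces to simple addition and subtraction:

```python
# Hypothetical fiber link: 3 dBm launched, followed by a series of component losses in dB
launched_power_dbm = 3.0
losses_db = [0.5, 0.3, 0.3, 3.5 * 2.0]  # two connectors, one splice, 2 km of 3.5 dB/km fiber

received_power_dbm = launched_power_dbm - sum(losses_db)
print(received_power_dbm)  # -5.1 dBm at the far end of the link
```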
In spectrometry and optics, the blocking unit used to measure optical density is equivalent to −1 B.
In connection with video and digital image sensors, decibels generally represent ratios of video voltages or digitized light intensities, using 20 log of the ratio, even when the represented intensity (optical power) is directly proportional to the voltage generated by the sensor, not to its square, as in a CCD imager where response voltage is linear in intensity. Thus, a camera signal-to-noise ratio or dynamic range quoted as 40 dB represents a ratio of 100:1 between optical signal intensity and optical-equivalent dark-noise intensity, not a 10,000:1 intensity (power) ratio as 40 dB might suggest. Sometimes the 20 log ratio definition is applied to electron counts or photon counts directly, which are proportional to sensor signal amplitude without the need to consider whether the voltage response to intensity is linear.
However, as mentioned above, the 10 log intensity convention prevails more generally in physical optics, including fiber optics, so the terminology can become murky between the conventions of digital photographic technology and physics. Most commonly, quantities called "dynamic range" or "signal-to-noise" (of the camera) would be specified in 20 log dB, but in related contexts (e.g. attenuation, gain, intensifier SNR, or rejection ratio) the term should be interpreted cautiously, as confusion of the two units can result in very large misunderstandings of the value.
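The numerical gap between the two conventions is easy to see; a brief sketch:

```python
# A camera dynamic range quoted as "40 dB" normally uses the 20 log (voltage) convention:
ratio_20log = 10 ** (40 / 20)  # 100.0   -> a 100:1 optical signal to dark-noise ratio
# Read with the 10 log (power) convention, the same figure would wrongly suggest:
ratio_10log = 10 ** (40 / 10)  # 10000.0 -> a 10000:1 intensity ratio

print(ratio_20log, ratio_10log)
```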
Photographers typically use an alternative base-2 log unit, the stop, to describe light intensity ratios or dynamic range.
Suffixes are commonly attached to the basic dB unit in order to indicate the reference value by which the ratio is calculated. For example, dBm indicates power measurement relative to 1 milliwatt.
In cases where the unit value of the reference is stated, the decibel value is known as "absolute". If the unit value of the reference is not explicitly stated, as in the dB gain of an amplifier, then the decibel value is considered relative.
The SI does not permit attaching qualifiers to units, whether as suffix or prefix, other than standard SI prefixes. Therefore, even though the decibel is accepted for use alongside SI units, the practice of attaching a suffix to the basic dB unit, forming compound units such as dBm, dBu, dBA, etc., is not. The proper way, according to IEC 60027-3, is either as Lx (re xref) or as Lx/xref, where x is the quantity symbol and xref is the value of the reference quantity, e.g., LE (re 1 μV/m) = LE/(1 μV/m) for the electric field strength E relative to a 1 μV/m reference value.
Outside of documents adhering to SI units, the practice is very common as illustrated by the following examples. There is no general rule, with various discipline-specific practices. Sometimes the suffix is a unit symbol ("W","K","m"), sometimes it is a transliteration of a unit symbol ("uV" instead of μV for microvolt), sometimes it is an acronym for the unit's name ("sm" for square meter, "m" for milliwatt), other times it is a mnemonic for the type of quantity being calculated ("i" for antenna gain with respect to an isotropic antenna, "λ" for anything normalized by the EM wavelength), or otherwise a general attribute or identifier about the nature of the quantity ("A" for A-weighted sound pressure level). The suffix is often connected with a hyphen, as in "dB‑Hz", or with a space, as in "dB HL", or with no intervening character, as in "dBm", or enclosed in parentheses, as in "dB(sm)".
Since the decibel is defined with respect to power, not amplitude, conversions of voltage ratios to decibels must square the amplitude, or use the factor of 20 instead of 10, as discussed above.
Probably the most common usage of "decibels" in reference to sound level is dB SPL, sound pressure level referenced to the nominal threshold of human hearing: The measures of pressure (a field quantity) use the factor of 20, and the measures of power (e.g. dB SIL and dB SWL) use the factor of 10.
See also dBV and dBu above.
Attenuation constants, in fields such as optical fiber communication and radio propagation path loss, are often expressed as a fraction or ratio to distance of transmission. dB/m represents decibel per meter, dB/mi represents decibel per mile, for example. These quantities are to be manipulated obeying the rules of dimensional analysis, e.g., a 100-meter run with a 3.5 dB/km fiber yields a loss of 0.35 dB = 3.5 dB/km × 0.1 km.
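A minimal sketch of this dimensional bookkeeping, using the 3.5 dB/km figure from the example above:

```python
def span_loss_db(attenuation_db_per_km, length_km):
    """Total loss of a run, treating dB/km * km -> dB."""
    return attenuation_db_per_km * length_km

loss = span_loss_db(3.5, 0.1)  # the 100-meter run from the text
print(loss)                    # 0.35 dB
print(10 ** (-loss / 10))      # ~0.92 -> about 92% of the launched power survives the run
```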
dBm is a unit of level used to indicate that a power ratio is expressed in decibels (dB) with reference to one milliwatt (mW). It is used in radio, microwave and fiber-optical communication networks as a convenient measure of absolute power because of its capability to express both very large and very small values in a short form compared to dBW, which is referenced to one watt (1000 mW).
The neper is a logarithmic unit for ratios of measurements of physical field and power quantities, such as gain and loss of electronic signals. The unit's name is derived from the name of John Napier, the inventor of logarithms. As is the case for the decibel and bel, the neper is a unit defined in the international standard ISO 80000. It is not part of the International System of Units (SI), but is accepted for use alongside the SI.
In telecommunications, return loss is the loss of power in the signal returned/reflected by a discontinuity in a transmission line or optical fiber. This discontinuity can be a mismatch with the terminating load or with a device inserted in the line. It is usually expressed as a ratio in decibels (dB): RL = 10 log10(Pi/Pr), where Pi is the incident power and Pr the reflected power.
Signal-to-noise ratio is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR is defined as the ratio of signal power to the noise power, often expressed in decibels. A ratio higher than 1:1 indicates more signal than noise.
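A small illustrative sketch of the definition, with arbitrary power values:

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio expressed in decibels."""
    return 10 * math.log10(signal_power / noise_power)

print(snr_db(1.0, 1.0))    # 0.0 dB  -> signal and noise at equal power (the 1:1 boundary)
print(snr_db(100.0, 1.0))  # 20.0 dB -> a 100:1 power ratio in favour of the signal
```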
In telecommunications, a third-order intercept point (IP3 or TOI) is a specific figure of merit associated with the more general third-order intermodulation distortion (IMD3), which is a measure for weakly nonlinear systems and devices, for example receivers, linear amplifiers and mixers. It is based on the idea that the device nonlinearity can be modeled using a low-order polynomial, derived by means of Taylor series expansion. The third-order intercept point relates nonlinear products caused by the third-order nonlinear term to the linearly amplified signal, in contrast to the second-order intercept point that uses second-order terms.
In electronics, gain is a measure of the ability of a two-port circuit to increase the power or amplitude of a signal from the input to the output port by adding energy converted from some power supply to the signal. It is usually defined as the mean ratio of the signal amplitude or power at the output port to the amplitude or power at the input port. It is often expressed using the logarithmic decibel (dB) units. A gain greater than one, that is amplification, is the defining property of an active component or circuit, while a passive circuit will have a gain of less than one.
In information theory, the Shannon–Hartley theorem tells the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise. It is an application of the noisy-channel coding theorem to the archetypal case of a continuous-time analog communications channel subject to Gaussian noise. The theorem establishes Shannon's channel capacity for such a communication link, a bound on the maximum amount of error-free information per time unit that can be transmitted with a specified bandwidth in the presence of the noise interference, assuming that the signal power is bounded, and that the Gaussian noise process is characterized by a known power or power spectral density. The law is named after Claude Shannon and Ralph Hartley.
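As a hedged sketch (the bandwidth and SNR are hypothetical example values), the Shannon–Hartley bound can be evaluated directly:

```python
import math

def channel_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley bound: C = B * log2(1 + S/N), with the SNR supplied in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical example: a 3 kHz channel with a 20 dB signal-to-noise ratio
print(channel_capacity_bps(3_000, 20))  # ~19,975 bit/s
```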
A logarithmic scale is a way of displaying numerical data over a very wide range of values in a compact way—typically the largest numbers in the data are hundreds or even thousands of times larger than the smallest numbers. Such a scale is nonlinear: the numbers 10 and 20, and 60 and 70, are not the same distance apart on a log scale. Rather, the numbers 10 and 100, and 60 and 600 are equally spaced. Thus moving a unit of distance along the scale means the number has been multiplied by 10. Often exponential growth curves are displayed on a log scale, otherwise they would increase too quickly to fit within a small graph. Another way to think about it is that the number of digits of the data grows at a constant rate. For example, the numbers 10, 100, 1000, and 10000 are equally spaced on a log scale, because their numbers of digits is going up by 1 each time: 2, 3, 4, and 5 digits. In this way, adding two digits multiplies the quantity measured on the log scale by a factor of 100.
Sound pressure or acoustic pressure is the local pressure deviation from the ambient atmospheric pressure, caused by a sound wave. In air, sound pressure can be measured using a microphone, and in water with a hydrophone. The SI unit of sound pressure is the pascal (Pa).
• Intensity of Sound: Sound energy flowing per second through a unit area held perpendicular to the direction of a sound wave is called the intensity of sound. Units: its unit is the watt per square meter (W/m2). • It is a physical quantity and can be measured accurately.
Sound power or acoustic power is the rate at which sound energy is emitted, reflected, transmitted or received, per unit time. It is defined as "through a surface, the product of the sound pressure, and the component of the particle velocity, at a point on the surface in the direction normal to the surface, integrated over that surface." The SI unit of sound power is the watt (W). It relates to the power of the sound force on a surface enclosing a sound source, in air. For a sound source, unlike sound pressure, sound power is neither room-dependent nor distance-dependent. Sound pressure is a property of the field at a point in space, while sound power is a property of a sound source, equal to the total power emitted by that source in all directions. Sound power passing through an area is sometimes called sound flux or acoustic flux through that area.
Line level is the specified strength of an audio signal used to transmit analog sound between audio components such as CD and DVD players, television sets, audio amplifiers, and mixing consoles.
Crest factor is a parameter of a waveform, such as alternating current or sound, showing the ratio of peak values to the effective value. In other words, crest factor indicates how extreme the peaks are in a waveform. Crest factor 1 indicates no peaks, such as direct current or a square wave. Higher crest factors indicate peaks, for example sound waves tend to have high crest factors.
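A minimal sketch of the crest-factor calculation for two synthetic waveforms (a sampled sine and a square wave):

```python
import math

def crest_factor(samples):
    """Ratio of peak magnitude to RMS value of a sampled waveform."""
    peak = max(abs(x) for x in samples)
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return peak / rms

sine = [math.sin(2 * math.pi * n / 1000) for n in range(1000)]
square = [1.0 if n < 500 else -1.0 for n in range(1000)]

print(crest_factor(square))  # 1.0   -> no peaks, as for direct current or a square wave
print(crest_factor(sine))    # ~1.41 -> a sine wave peaks at sqrt(2) times its RMS value
```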
A link budget is an accounting of all of the power gains and losses that a communication signal experiences in a telecommunication system; from a transmitter, through a medium to the receiver. It is an equation giving the received power from the transmitter power, after the attenuation of the transmitted signal due to propagation, as well as the antenna gains and feedline and other losses, and amplification of the signal in the receiver or any repeaters it passes through. A link budget is a design aid, calculated during the design of a communication system to determine the received power, to ensure that the information is received intelligibly with an adequate signal-to-noise ratio. Randomly varying channel gains such as fading are taken into account by adding some margin depending on the anticipated severity of its effects. The amount of margin required can be reduced by the use of mitigating techniques such as antenna diversity or frequency hopping.
Decibels relative to full scale is a unit of measurement for amplitude levels in digital systems, such as pulse-code modulation (PCM), which have a defined maximum peak level. The unit is similar to the units dBov and decibels relative to overload (dBO).
dBc is the power ratio of a signal to a carrier signal, expressed in decibels. For example, phase noise is expressed in dBc/Hz at a given frequency offset from the carrier. dBc can also be used as a measurement of Spurious-Free Dynamic Range (SFDR) between the desired signal and unwanted spurious outputs resulting from the use of signal converters such as a digital-to-analog converter or a frequency mixer.
In telecommunications, the carrier-to-noise ratio, often written CNR or C/N, is the signal-to-noise ratio (SNR) of a modulated signal. The term is used to distinguish the CNR of the radio frequency passband signal from the SNR of an analog base band message signal after demodulation, for example an audio frequency analog message signal. If this distinction is not necessary, the term SNR is often used instead of CNR, with the same definition.
A minimum detectable signal is a signal at the input of a system whose power allows it to be detected over the background electronic noise of the detector system. It can alternately be defined as a signal that produces a signal-to-noise ratio of a given value m at the output. In practice, m is usually chosen to be greater than unity. In some literature, the name sensitivity is used for this concept.
In science and engineering, a power level and a field level are logarithmic measures of certain quantities referenced to a standard reference value of the same type.
A power quantity is a power or a quantity directly proportional to power, e.g., energy density, acoustic intensity, and luminous intensity. Energy quantities may also be labelled as power quantities in this context.
We recently wrote about the picture of Saturn captured by the James Webb Space Telescope. As pointed out several times, this new scientific tool can help scientists study not only nearby celestial objects (within the solar system, for example), but also far more distant ones, dating to soon after, in cosmic terms, the Big Bang.
One of the latest pieces of news concerns the discovery of three supermassive black holes whose electromagnetic radiation was captured by JWST only 1.1 billion years, 1 billion years and 570 million years after the Big Bang. In human terms that is a great deal of time, but on the scale of the universe it is actually rather short. That was the result of the first analyses.
The James Webb Space Telescope and supermassive black holes
According to what has been reported, the discovery was made possible by the survey called Cosmic Evolution Early Release Science (or simply CEERS). The most distant active supermassive black hole ever observed (570 million years after the Big Bang) is named CEERS 1019. Among its distinctive features is that it is the lowest-mass supermassive black hole among those currently known in the early Universe. In particular, its properties indicate about 9 million solar masses, i.e. similar to (albeit more massive than) the one present at the center of the Milky Way.
One of the questions scientists need to answer is “How could a black hole with these properties form so early?”. Answering such questions, in addition to being able to observe other primordial black holes and galaxies, is crucial for understanding the early stages of the evolution of the Universe, but also its future. One of the assumptions about CEERS 1019 is that it formed after the merger of two or more galaxies, which determines its mass but also its activity, including star formation.
Thanks to the James Webb Space Telescope, two further supermassive black holes have also been identified: CEERS 2782, which existed when the universe was only 1.1 billion years old, and CEERS 746, which existed 1 billion years after the Big Bang. Both have a mass of about 10 million solar masses and are therefore comparatively "light". The second is surrounded by a dust cover, which is probably also related to a particularly intense phase of star formation.
The overall NIRCam image of the CEERS survey has a width of 23 arcminutes and was obtained with observations made on December 21, 22 and 24 last year. In order to capture the greatest possible amount of information, the F115W, F150W, F200W, F277W, F356W and F444W filters were used, while the analysis of the black holes to correctly determine their properties (including distance and age) was done with NIRSpec. For those who are curious, the full-resolution NIRCam image can be downloaded from the ESA JWST website; it is 525 MB in size, but gives incredible detail of the space telescope's field of view.
What is Multiplication? There are four elementary arithmetic operations in mathematics and all the calculations are based on them. The basic operations include addition, subtraction, division, and multiplication. Addition and subtraction are the basic ones and the easiest out of these four. Students face difficulty in grasping the concept of multiplication and division. Multiplication is an operation that is symbolized using the following ways: a×b, a.b, a*b, ab. The first case is when two positive integers are multiplied; here, the result is always positive. The second case is when two negative integers are multiplied, and the result is always positive. If we multiply a positive and a negative integer, the result is always negative. All the sections I promised are here. Let me know if you have an area in your curriculum that I don't cover at all.
- Commutative Property - That's the one that states that order does not change the math.
- Decimal Multiplication - Paying attention to place value is paramount to doing well here.
- Distributive Property - This is where parentheses and brackets first begin to appear.
- Factor Pairs to 100 - These skills help towards mastering algebra quickly.
- Factors of an Integer - We learn how to regroup a formed product.
- Fixed Number Multiplication Facts - The multiplier on these sheets is always a set number.
- Fraction Multiplication (Unlike Denominators) - I sense a little cross multiplication in your future.
- Lattice Method of Multiplication - When you are finding the product of really big numbers, this comes in handy.
- Long Multiplication - This is a nice way to cover a wide range of product values.
- Math Fact Families - This should be your first step before times tables.
- Missing Factors Multiplication - This can also be thought of as one-step algebra, in most problems.
- Mixed Multiplication Review - Once your students have some confidence, send them over to this section.
- Multiplication as Scaling - For when we are modelling something that grows very quickly.
- Multiplication of Whole Numbers - This is more of a concept than a topic.
- Multiplication 2 Digit (In Grid) - The grid style worksheets make it much easier to stay organized.
- Multiplication 3 by 2 Digit (In Grid) - Hundreds multiplied by tens.
- Multiplication 3 by 3 Digit (In Grid) - Multiplying two sets of hundreds.
- Multiplication 3 by 4 Digit (In Grid) - Thousands times hundreds.
- Multiplication 4 Digit (In Grid) - Thousands by thousands in a nice simple grid.
- Multiplication as Repeated Addition - This is a great way to introduce multiplication to students.
- Multiplication of Complex Numbers - You can work off of the FOIL method. We show you how through the lessons.
- Multiplication Puzzles - A nice way to review and have fun with math.
- Multiplication Word Problems - These worksheets show you how to spot multiplication in story-based problems.
- Multiply and Divide Within 100 - You can work on these operations at the same time.
- Multiply or Divide Word Problems - We work on spotting keywords with the questions themselves to get an idea of what operation applies.
- Multiplying and Adding Rational and Irrational Numbers - Things get a little more complex in this upper High School material.
- Multiplying Binomials - A little algebra to get you going along.
- Multiplying by Powers of 10 - It is really nice if all those zeroes are on a paycheck.
- Multiplying Fractions by Whole Numbers - This is confusing, at first, but very simple once you get the hang of it.
- Multiplying Mixed Numbers Word Problems - Where would you go with this one? Sometimes we see mixed numbers and we get lost. We show you how to push forward.
- Multiply Multi-Digit Whole Numbers - You can approach these problems from a wide number of places.
- Multiplying Single Digits by Multiples of 10 - It is not just adding a zero; it is a place value change.
- Multiplying Using Arrays - A great way to help you learn your times tables.
- Products of Exponents - This topic is covered from all possible angles.
- Rapid Single Digit Horizontal Multiplication - This is how we start to form mental math mastery.
- Rapid Single Digit Vertical Multiplication - Another way of looking at it.
- Rewriting Multiplication - Many different ways to look at these problems.
- Solving Multiplication and Division Equations - The first step into algebra shows how these operations are polar opposites.
- The Properties of Multiplication - We look at properties that we can use to our advantage mostly when we are working with algebra.
- Times Tables - This is how I committed multiplying to memory.
- Unique Properties of Multiplication - We look at the four most commonly explored forms.
- Visual Multiplication - A great way to show the growth of product.
How to Approach Teaching Kids This Operation
I remember a few decades back when I was preparing my first unit with students on this operation. It was a bit overwhelming trying to make it seamless for students. My first unit went okay, my second one went better, and so on and so forth. As long as you keep fixing what could have been better, it will get there. Let me share what I have learned about teaching this operation to students.
1. Always Start with Manipulatives - When you provide students with things to put in their hands, make sure it is something simple and uniform. Pennies are great, so are toy cars and blocks. They need to be not too big or too small. I always draw the numbers next to the manipulatives and I have students do so as well.
2. Introduce Multiplication as Repeated Addition - I start by having students convert addition problems into multiplication, such as 4 + 4 + 4 = 4 x 3. I tell them that this is just another way to write it. This gives them something they are very comfortable with to build on.
3. Products of Zero and One - These are just rules that they will become comfortable with quickly. Anything multiplied by zero is equal to zero. Anything multiplied by one is unchanged.
4. Get to Those Times Tables - I do a daily five-minute challenge in my room. Leave 5-10 missing areas and have students come up and fill them in. At this point we are relying pretty heavily on students practicing at home. After a few days of my class challenge, I have a good idea of who is studying on their own and who is not. I also coach them on how to approach this. They should master the 0-3 scale first, then 4-7, followed by 8-10. Once they have that cold, it is time to move on to 11 and 12.
5. The Commutative Property Eases It Up - That is the old property that says order does not matter when we use this operation. This means that 4 x 7 and 7 x 4 are equivalent. I find that this particularly helps a great many students. When they have difficulty with a problem, I encourage them to reorder the multiplicand and the multiplier. This does not always save the day, but it is helpful very often.
Civics Online hopes to not only provide a rich array of multi-media
primary sources, but to also give teachers ideas on using those
sources in the classroom. Explore the general activities
below, or investigate the activities created
for specific grade levels and core democratic values.
The following strategies are based on the classroom use of primary documents
and the incorporation of interactive learning. As far as possible,
these strategies should be integrated into the social studies classroom
with the goal of placing students in learning situations that will
promote critical thinking and application of knowledge. These strategies
are intended as a springboard for dialogue and discussion. Teachers
are encouraged to adapt and modify the strategies for their own classrooms.
The practice of close textual reading and analysis of historical documents should
be a regular feature of the social studies classroom. Considerations
in this activity are: establishing historical context, targeting
the purpose of the document, identifying the social-political bias,
and recognizing what is at stake in the issue. Language analysis
should consider key words, tone, and intent.
As far as possible, students should be placed in collaborative groups
to dialogue their responses to documents (print, on line, video).
An inductive method should be the framework for these discussions.
At times the teacher may lead Socratic discussions or might conduct
a debriefing. However, it is desirable to establish an ongoing framework
for analysis and evaluation.
Such analysis should be a component of other classroom activities with
the goal of developing a more articulate and well informed civic discourse.
A role playing activity calls upon students to take various roles associated with
an historical, social, or civics related issue. Students will research
the point of view of their assigned character and will participate
in one of a number of short or long term activities that might include:
simulated media interviews, a debate, a court case, a panel discussion,
an historical simulation, or a simple debriefing.
This activity might involve a short term one period pro-con debate on
a focused historical or constitutional issue. It could also be part
of a longer project based on more extensive research used by students
working in teams resulting in a more formal closure project.
A variation is a Thesis-Antithesis-Synthesis writing activity.
Starting from a set of documents and information, the class (working in collaborative
groups) must design an activity in which their group recreates an
event and analyzes it from multiple perspectives. The event could
be an historical situation, a current event, or a constitutional
issue. Through dramatic scripting and role playing, the group will
prepare for class presentation a briefing on their topic. This activity
could be part of a larger unit as a major project or could be utilized
as part of a daily activity on a short term basis.
As a unit closure activity or long term project, the simulation could
take the form of a One Act Play.
Students would work together in news teams digesting a number of documentary
materials. The materials would be presented in print packets or
online. Each student would be assigned a job simulating a team
of reporters or television news magazine staffers. At the end of
the activity (single period or longer term), the teams will present
their findings to the class. Many creative options may be utilized
for the reports (video, online, role playing, etc.).
Constitutional, social, and historical issues should, from time to time, be considered
in the form of a classroom court. The court should engage every student
in the class in some formal role (judge, court reporter, expert
witness, etc.). The case may be worked up from on line or print
documents. If desired, an appeals process may be used. Student "reporters"
will debrief the class in a "Court TV" simulation.
The class might consider an issue or social problem through the simulation
of a congressional hearing. The class might prepare by watching
one of the many such hearings shown on C-Span.
Research (print as well as online) should be part of the preparation
for this activity. Teams of students representing various sides
of the issue should collaborate to produce testimony.
A group of students would role play the Senate or Congressional subcommittee.
They would share their conclusions with the class. Another group
would role play a news team covering the hearings for a television broadcast.
A follow-up activity might be the writing of legislation based on the findings
of the hearings.
This activity is a longer term project that would involve a broader social
or constitutional issue (such as freedom of speech, civil rights,
or the right to bear arms). Role playing might be used or students
could work up their own points of view.
The symposium discussion would center around a few (3-5) central questions.
These questions could be posed by the teacher or worked up by a
student committee. The class would do its research (print and online) in teams which would work together to digest their findings
in light of the key questions.
The symposium itself could be set up in round table fashion with the
entire class or could be focused on a smaller group of representatives
selected from the research teams. Input from the teams could be
rotated among members.
Perspective segments could be researched and pretaped by students.
Classroom link ups
Social, historical, or constitutional issues could be considered
by individually paired classrooms in different districts. Ideally,
these online linkages would pair diverse districts such as an urban
classroom with a rural or suburban one.
The classrooms would work together on commonly accessed documents, or
they could focus on the ways in which each classroom perceives important
civic questions differently.
Long term or short term projects could be worked up including many of
the formats outlined above. Interactive links (video conferences,
URL exchanges, etc) would provide opportunities for ongoing dialogue.
Actual on site student exchanges could be arranged for further collaboration.
Students would be assigned roles of representatives to the Constitutional
Convention of 1787, only as citizens of contemporary American society.
Their task would be to revise and rewrite the original constitution
to better reflect the civic needs and demands of a 21st century society.
Each student would receive an index card with a brief description of
his assigned role. Diverse teams of students would be created to
act as revision committees to reexamine the original document section
by section. The Bill of Rights would be the only part of the original
document not subject to change.
After completing their reexamination, the teams will report their findings
to the class. Then through debate and compromise, the committee
of the whole will decide on what (if any) revisions in the original
document should be made. The entire class must reach consensus on
the wording of any changes.
As much as possible, writing activities should connect to the standards,
especially the Core Democratic Values. Also, each activity should
emphasize stating a position clearly and using specific evidence
to support the position.
A citizen's journal (An ongoing free response collection of personal
reactions to historical documents, films, online materials, class
speakers, and artifacts.) The journal would allow complete freedom
of expression and would be "graded" only as a required activity.
The journal would encourage students to identify issues of interest
and to react to them informally.
A jackdaw collection (Role playing a number of documents focusing
on a civic or constitutional issue. May include letters, editorials,
public documents, broadsides, pictures, cartoons, and artifacts).
The collection may be a closure activity for a research project
or connected to a debate or other activity.
Role-specific position papers (Written in response to a series of
documents, these letters may be written from the viewpoint of roles
assigned by the teacher). By providing a point of view, the assignment
will encourage students to look at issues from multiple perspectives.
A legislative draft (After consideration of a local, state, or national
issue, small collaborative groups will brainstorm new legislation
in response to the issue. They will act as subcommittees for their
town council, county commission, zoning board, state legislature,
or congress. The final product will be presented to the whole for
debate and a final vote).
This activity could be a closure activity in a legislative decision making
unit, or it could be part of a current events unit or a social problems unit.
A citizen's letter (This is assigned in response to a local, state,
or national issue.)
After researching the issue, the student must draft a letter to the appropriate
governmental agency or elected official. Before the letters are
mailed, they must be presented in small feedback groups of peers
for evaluation. They should be critiqued for logic, clarity, and
persuasive use of language and historical-legal precedent.
The letters and responses to them should be posted on the classroom
bulletin board. A variation of this activity would be a request
for information or a clarification of a policy from a governmental
agency or official.
An interview (In response to a guest speaker or outside subject.) The
assignment would include framing several focused questions based
on some research, notes summarizing the responses, and a synthesis
paragraph summarizing the results of the interview. The interview
could be keyed to a topic determined by the teacher or a focus group.
A descriptive response (To an artifact or photograph.) As part of
an array or collection, the artifact becomes the subject of a descriptive
narrative in which the student connects the content of the artifact
to larger democratic issues and values.
Using a dialectical approach, the student breaks down an issue into three
components starting with its main thesis, its opposing thesis, and
a future focus synthesis fusing the two issues in a new way.
This method can be used by discussion groups or as a three paragraph
format for written expression. The synthesis would include creative
solutions that point toward building a consensus. It encourages
the development of multiple perspectives.
Life: This activity could be done in connection with the values of diversity or justice. The concept of the right to life can be presented in the context of The Golden Rule and the right of everyone to be respected as an individual. The Golden Rule can be taught as both a classroom rule and as a legal right. The teacher might display (in poster form) and discuss the Golden Rule as interpreted by various world religions. The teacher might supplement with a story time selection such as The Rag Coat by Lauren Mills (Little Brown, 1991), which addresses issues of fair treatment. Also, Chicken Sunday by Patricia Polacco (Paperstar, 1992) is a fine multicultural treatment.
Life: At this level, it might be appropriate to introduce multicultural pictures of children from a variety of societies. The purpose of this activity is to compare and contrast the economic, social, educational, and physical well-being of children in a world-wide perspective. Questions can be raised about whether all children have equal access to food, shelter, education, and family support. After the discussion, a writing and art activity would allow students to express their thoughts about quality of life issues as they confront children. Students could choose a picture and write a story about the young person depicted and what they might be facing. Students should be encouraged to put themselves in the place of the child in their chosen photograph. The papers may be illustrated as part of a parallel art activity.
Life: The right to life is a motif in many adolescent novels. However, the issue might be most memorably presented in The Diary of Anne Frank, which is often required reading in middle school. The Holocaust is a horrific example of what happens when a political regime is based on the systematic disenfranchisement of citizens who have no constitutional protection of their basic rights. Whether presented in a literary or social studies context, the right to life is a key to understanding the difference between the rights of citizens in a constitutional democracy and the plight of the victims of totalitarian genocide. After a debriefing discussion, the class might write essays on the right to life that would be shared in small groups and posted in the classroom.
Life: The concept of life as a constitutional precept should be established in its full historical sense. Advanced classes might benefit from an overview of Enlightenment philosophers such as John Locke and Jean-Jacques Rousseau, who influenced Thomas Jefferson. The students should fully understand what "natural rights" means as a basis for the American constitutional and legal system. The connection should then be made to a contemporary issue that connects the concept to the student. These might include: cloning, bioethics, or capital punishment. Discussion, debate, and writing should follow. This is a good issue for outside interviews or class speakers from the professional community.
Liberty: Post a large color photograph of the restored Statue of Liberty. Discuss the location, meaning, and details of the statue and why so many Americans contributed so much for the restoration of Lady Liberty. Additional photographs of immigrants at Ellis Island may be added to the discussion. Art supplies could be furnished to allow the class to create their own poster of the statue and what it means to them. The finished posters should be displayed and discussed by the class.
Liberty: Announce an individual "liberty" period of class time (15-20 minutes) in which each student will be able to spend the time at his/her own discretion on an activity of their own design. Limits should be broad and choices unlimited (within the teacher's classroom code). A debriefing would follow to discuss the individual choices and productivity of the period.
On a subsequent day, another “liberty” period would be held, this time with a democratic decision making model to determine the activity for the entire class. Again, a debriefing would analyze the success of the period.
On a third day, the teacher would dictate the activity of the special period with no liberty or democratic choice. The debriefing of this activity would include a comparison and contrast of the three different experiences. The teacher at this time would introduce the concepts of liberty, democracy, and tyranny to describe each of the three experiences. Follow up activities could include planning more student designed activities based on the liberty and democracy models. Writing activities could include the creation of "definitions" posters and
Liberty: The class will read and consider the explicit and implicit meaning of "The Pledge of Allegiance", especially the phrase "...with liberty and justice for all." The teacher will introduce some selected documents and case studies into follow-up discussions. These cases should focus on the ways in which the concepts of liberty and justice interface. Are they the same? Are there times when liberty and justice conflict? Are there limits on personal freedom in a democratic society? How do the constitution and the rule of law help to determine these limits? Sample cases could include the Elian Gonzalez matter, free speech issues, or the American Revolution.
Liberty: After reading and discussing several seminal documents that address the concept of liberty in American democracy, students should write a personal essay in which they define and defend their own ideas about liberty and personal freedom as citizens. The essay must address the problem of how to adjudicate disputes between individual "liberties" and whether our constitution places limits on personal freedom. Grading rubrics for the essay should include citation of historical examples and references to the constitution and court cases. A good place to begin the class discussion is the 1919 Schenck v. United States case and the famous Holmes opinion on free speech ("clear and present danger"). Also, the writings of Henry David Thoreau and Ralph Waldo Emerson may prove provocative.
The Pursuit of Happiness: Create a classroom collection of pictures showing Americans at work and at play. The collection should reflect racial, ethnic, regional, economic, and gender diversity. The phrase "pursuit of happiness" should head the collection. After discussing the concept in plain terms and looking at the pictures, the class could do an art project in which each student would create a collage of fun things that his family does to pursue happiness. Students could draw, paint, or make a montage of happiness from their own personal perspective.
The Pursuit of Happiness: After discussing the preamble to The Declaration of Independence, especially the phrase "life, liberty, and the pursuit of happiness", the class will be assigned an interview questionnaire. The questions will constitute a simple survey to be given to people at home asking them to try to define "pursuit of happiness" in a variety of ways. Categories might include: economic, educational, personal, family, political, and travel interests. After bringing their survey results back, the class will create a colorful statistical and graphic chart on the bulletin board. This chart will act as a working definition of the varied ways Americans pursue happiness.
The activity could also be part of a basic statistics introduction and part of a graphic design project. Also, it could be part of a history unit involving the American Revolution and the ideas that motivated our fight for independence from English rule.
The Pursuit of Happiness: The class might consider "pursuit of happiness" from the standpoint of immigration. The teacher might compile a packet of documents consisting of first hand testimony from first generation immigrants. Letters, interviews, and oral history sources should be included. Ellis Island, the "New" immigration of the late 19th century, and The Great Migration of southern Blacks to northern industrial cities should be considered. In each case, the conditions facing each group prior to migration should be detailed. Also, comparisons should be drawn between their old and new condition.
Evaluative activities might include: writing, role playing, enactments of historic scenarios, and graphic design of bulletin boards.
The Pursuit of Happiness: The class will conduct a debate on the subject of gun control. After researching and discussing Amendment II of The Constitution and the intended meaning of "the right to bear arms", the class will be divided into two teams to prepare their debate. One significant aspect of the debate should include whether gun ownership should be included in a citizen's right to "pursue happiness" if the owner uses his firearm for hunting, competitive shooting, collecting or other peaceful activity. Are there times when "pursuit of happiness" might conflict with other rights such as "life" or "liberty"? How should such conflicts be resolved in our democratic system?
Part of the research for the debate could include interviews with guest speakers representing both sides of the issue. A closure activity might be a position paper defending one side of the argument and pointing toward possible solutions.
Common Good: Display and discuss pictures of significant American historic moments in which the country came together for the common good. These might include: the Pilgrims at Plymouth, the minute men at Concord, the signing of The Constitution, Martin Luther King and the 1963 March on Washington, a World War II victory parade, Earth Day, Habitat For Humanity, and a Red Cross blood drive. Appropriate holidays might be selected to consider pictures that reflect the commitment of citizens to the greater good of all.
Common Good: A suitable holiday or week of observance might be chosen to develop a class service project. For example, Earth Week could be a good time to begin a class recycling project. The class could build recycling centers for the school and surrounding neighborhood. Plastic, aluminum, and paper could be collected with the goals of beautification and contribution of profits toward a charitable purpose.
The project should begin with a definition of "common good" and the design of a poster symbolizing the concept. The class could then brainstorm and develop a service project of its own design.
Common Good: In connection with its American history study, the class should focus on the development and purpose of several Utopian communities. Examples might include: Brook Farm, New Harmony, Amana, or the Shakers. An alternate focus could be the development of the Israeli Kibbutz system or the many efforts to forge the common good by the pioneers. John Smith's efforts to save the Jamestown settlement or the Lewis and Clark expedition would work well. Twentieth Century examples might include ways that American citizens responded to The Great Depression or World War II.
Supplemental research should include biographies of key figures and a tally list of specific ways that individual citizens contributed to the greater benefit of society to meet a common threat or need.
Project ideas include: skits, poster-charts, essays, and oral presentations. Debriefing should include discussion of contemporary parallels to the responsibility of citizenship in today’s world.
Common Good: Discuss the concept of "common good" as a basic tenet of civic responsibility alongside the concept of individualism. The class should then be presented with a question: "How should a society of individuals dedicated to the notion of pursuing their own happiness also meet its commitments to work together for the greater benefit of others?"
The class will brainstorm the question by working in small groups to fill out a dichotomy sheet listing individual, contemporary, and historical examples of individualism on the one side and common good on the other. Then, each group will select and research one example of a situation in which the needs of both the individual and the common good were met at the same time.
Library time should be provided. The groups will creatively demonstrate their findings.
Justice: The class will examine the Golden Rule as the basis for understanding the concept of justice. The focus for this should be the classroom rules about respecting others, waiting your turn to speak, and being a good listener. While the concept of justice as a constitutional principle might be too advanced for this grade level, it can be embedded in a discussion of respect, cooperation, and fair treatment through class procedures.
Justice: Most children know the meaning of the phrases "that's not fair" or "no fair". Fairness, a key component of the broader idea of "justice", is a daily feature of playground ethics. In this way, the teacher could approach the idea of justice through reviewing the rules of a particular sport (like baseball or basketball) and the role that the umpire or referee plays in adjudicating disputes. This could be done in a class discussion of situations involving rules violations, or it could be introduced in an actual playing situation where one side might be given an unfair advantage (say unlimited double dribbles or 5 outs) over the other. The class would then debrief after the game to discuss the "unfair" or "unjust" nature of the rules and the impact that those uneven rules have on the outcome of the game. Analogies to real life situations and the rule of law would follow.
Justice: The formal concept of justice as a constitutional and legal concept should be introduced. The teacher could use a case study like the Elian Gonzalez situation or another current situation such as school violence, pollution cases, or the Diallo shooting and trial in New York. After some research, consideration of pertinent documents, and study of possible redress, the class would then debate whether justice (as they understand it) has been achieved. Does justice involve changing laws, providing material compensation, or formal apologies? How is justice ultimately achieved in a democracy?
Justice: The class will do a comparative study of three historical events which involve racial injustice and the constitutional process of redress. These are: Indian removal, slavery, and Japanese-American internment. The class will research the historical context, constitutional issues, and documentation of legal redress in each case. Then the class will be divided into debate groups to define and argue key issues that cut across all three cases. The ultimate question to be determined is whether justice was finally meted out to all three oppressed groups. The groups must compare and contrast the constitutional, economic, legislative, and legal redress in each historical case. A good closure activity would be a position paper defending a position on the nature of justice and legal redress involving minorities in American democracy.
Equality: Equality in America is not about sameness. Each person in our society is a unique individual who is encouraged to reach their full potential with equal protection under The Constitution. Therefore, a good activity to demonstrate this equality of opportunity for individuals is the creation of a classroom display of pictures showcasing each member of the class. The display should be organized around an American flag or other patriotic symbol. Students can bring a picture from home, or school pictures may be used if available. A connected activity might be the creation of self portraits created by the students in an art lesson. After completing the display, the class might discuss the display in connection with the ideals of equality, fairness, tolerance for others, and individualism.
Students might share a personal interest, hobby, pet, or favorite toy in discussing their self portrait. The teacher should endeavor to link the presentations to the importance of the individual in the American system and how our Constitution guarantees equality of opportunity for all to pursue their interests within the law.
Equality: The tricky relationship between equality and individualism might be demonstrated through a class "olympics" competition. Set up a series of competitions that test a variety of physical and mental skills. Some ideas include: a softball throw, stationary jump, walking a line, free throw shooting, bird identification, spelling contest, geography competition, vocabulary definitions, math skills test, etc. Be sure to select a variety of safe games that will allow each student to be successful in one or two areas. Also, create enough challenges that each student may not be successful in some areas.
After tabulating the results, ask the class to discuss the fairness of the competition. Were all students given an equal chance to compete? What determined the success of the winners?
What factors influenced the outcomes of the competitions? Were the games chosen so that each student might have a chance to succeed? How might individual students improve their results if the events were held again? The debriefing might involve creation of a chart showing ways that students might improve their performance. Analogies might be drawn from history that show how equality of opportunity has not always existed. How have these inequities been addressed? A good case study might be the integration of major league baseball by Jackie Robinson in 1947 or the opportunities opened for women in the space program by astronaut Sally Ride. Research into celebrities who overcame initial failure or disadvantage to eventually succeed through individual initiative will complete the unit. The class might choose individual subjects from a list that includes such names as: Michael Jordan, Roger Staubach, Jim Abbott, Glenn Cunningham, Mildred "Babe" Zaharias, Oprah Winfrey, Gloria Estefan, Albert Einstein, Thomas Edison, Selena, and Colin Powell.
Equality: Read and discuss The Declaration of Independence. What did Jefferson mean when he wrote that "all men are created equal"? What exceptions to this statement existed in 1776? How long did it take women, slaves, native Americans, and non-property owners to achieve "equality"? Does equality mean equality of condition or equality of opportunity?
A good brainstorming activity is to make a chart of the ways people are and aren't equal.
Then compare this chart to The Bill of Rights. What inequities does The Constitution address? What inequities are a function of individualism and lie outside our constitutional system? This discussion might be the focus of small groups.
After reporting the results of their discussions to the whole, the class might be assigned an impromptu essay on the relationship between equality and individualism in America. How can we promote equality while protecting the rights of individual citizens? An alternate topic might be to define equality as an American value. Essays should include concrete examples from history or current events.
EqualityBreak the class into several study groups. Assign each one of the following fairness and equity laws: The Civil Rights Act of 1964 (Public Law 88-352), The Voting Rights Act of 1965 (Public Law 89-110), Title VII of the Civil Rights Act of 1964,
Title IX of the Educational Amendments of 1972, the Rehabilitation Act of 1973 and the Americans With Disabilities Act of 1990, and The Equal Rights Amendment (ERA) written by Alice Paul in 1921.
After researching the assignment, the groups should report to the class orally. The report should outline the conditions that led to the legislation and the specific ways that the legislation was designed to remediate an inequity. The presentations might include a creative component: a skit, a debate, a comic book, a poster, or a series of role playing interviews.
A follow-up activity would assign the same groups the task of researching a current social inequity that might be addressed by new legislation. After more research and planning, the groups would write a proposal for new laws that would remedy the inequity. Each proposal must show either constitutional precedent or demonstrate the need for a constitutional amendment. A formal written proposal should be submitted by each group.
Some good web sites are:
The Southern Poverty Law Center - http://www.splcenter.org/teaching tolerance/tt-index.html
Other organizations are:
Anti-Defamation League - http://www.adl.org
NAACP - http://www.naacp.org
National Organization for Women - http://www.now.org
DiversityThe class can create a diversity map of The United States. The teacher will put a large outline map of The United States (6-8 feet long) on one of the bulletin boards. The class will collect colorful pictures (from magazines) of Americans doing different jobs.
Each day, the class will discuss the different jobs that Americans do and will add cut out pictures of these diverse Americans to a collage inside the map.
When finished, the class should discuss their impressions of all of the different jobs and kinds of people who help to make America work. The pictures should include a diversity of professions, jobs, and kinds of people.
DiversityThe class can create a number of "diversity circles". The teacher will help the class decide the number and kinds of circles they want to create. The circles may include: sports, science, American history, politics, entertainment, etc. The circles will be posted in large spaces on classroom walls or bulletin boards (3 feet or more in diameter). The circles will consist of pictures and biographical blurbs researched and written by teams of students.
The circles may be set up by chronology, important contributions, or other criteria determined by the teams. Ideally, 4-6 different themes should be traced with 4-6 students in each team. The only general rule for each circle is that of diversity. Each team must strive to find and include the widest possible range of important contributors to their thematic circle as possible.
This activity combines historical research and cross-disciplinary thinking.
DiversityThe class will undertake a study of American immigration. The teacher should organize the statistical and historical documentation of the key phases of American growth.
The activity should begin with the introduction of "the melting pot" metaphor coined by Hector St. Jean de Crevecoeur. After studying the documentary evidence, students will look into their own connection to immigration by interviewing family members to determine the facts of their own "coming to America".
As a group activity, the class will jointly create a "living" time line by tracing the history of immigration patterns from colonial times to the present. They will then connect immigration to other key trends and events in American history. Finally, each student will add his/her own family's immigration story to the line as specifically as they can. The student stories will be in the form of pictograms and written blurbs.
The time line should be large enough to include each student's story and several concurrent broad historical trend lines.
The debriefing discussion should pose the question of whether the "melting pot" really works as a way to describe immigration. Does diversity imply that our differences really "melt" away? Would a "stew pot" or "tossed salad" be a better metaphor?
The debriefing could culminate in a written essay response.
DiversityAfter studying the Declaration of Independence, in particular the second paragraph regarding the precepts of equality that it presents, the class will look at documents from 3 or 4 subsequent historical situations that call into question the idea that "all men are created equal" in our society. The teacher may select these situations from such examples as: Indian removal, Asian exclusion, anti immigrant nativism, gender exclusion, the Jim Crow era, integration and civil rights, etc.
The class will be divided up into 3-4 teams to study the historical context of their assigned topic and packets (or online) documents pertaining to their topic. Each group will create a one act play or series of dramatic vignettes that will be presented to the rest of the class. Each presentation must show how subsequent history resolved their situation.
A follow-up debriefing should address the following questions. Was justice achieved? Has America always lived up to its ideal of equality? Is America a more diverse society today? Why has diversity in our population caused so many problems? Are the concepts of equality and diversity compatible? How has The Constitution grown to make America more diverse since 1787? What do population growth and increasing diversity mean for America's future?
The debriefing could take the form of a panel discussion, a debate, or a written response.
TruthJust as there is a bond between citizens and the government, there is a bond between students and a teacher. Thus, it might be emphasized that telling the truth and refraining from lying is an important ethical rule that must be followed in the classroom. To illustrate this principle, the teacher might choose an appropriate selection for reading time from a trade publication. Aesop’s Fables or "The Boy Who Cried Wolf" might be effective in illustrating the point.
TruthFree speech is not a license to lie, cheat, or deceive. The class might benefit from creating a list of ten great reasons to tell the truth. To prepare for the activity, the teacher might display pictures of people who have been known for their honesty. The class could create an honesty mural to go with their list. Other related activities might include: a school wide survey, brainstorming some case studies involving moral reasoning, and researching how other cultures, past and present, view honesty. There are some excellent ideas in What Do You Stand For?: A Kid’s Guide to Building Character by Barbara Lewis (Free Spirit Publishing, 1998).
TruthThe relationship of trust between a government and its citizens rests on the free flow of information and public discussion of issues grounded in reliable facts. After considering some key historical cases involving governmental attempts to suppress the truth (e.g. the Peter Zenger case, deceptions involving the Vietnam War, the Watergate cover-up, the Clinton impeachment, and human rights violations in the People's Republic of China), the class could conduct a survey of local political leaders and government officials. The teacher and a class committee could invite a panel of community leaders and journalists to participate in a question and answer session and discussion of truth in government. The class might select a controversial current local issue as the focus of the discussion. Closure activities might include a class debriefing, writing editorials on the topic, and creating a video news program on the guest speakers and their comments.
TruthA unit on consumerism might prove effective in studying the relationship between truth and the government. Ralph Nader’s Unsafe at Any Speed, Upton Sinclair’s muckraking classic - The Jungle - , or a recent 20/20 expose might kick off the unit. The teacher might prepare a packet of cases involving government action based on social research (e.g. The Triangle Shirtwaist fire, fire retardant child sleep-wear, the DDT ban, the tobacco litigation and settlement).
Students would then work in investigative teams, researching recent legislation, the history of research behind the law, and current enforcement. The teams will present a brief on their findings to the class. The teacher should prepare an initial list of possible topics for the project. An option would be videotaped "news magazine" presentations. Students should provide a list of sources used in their research.
Popular SovereigntyAs an integral part of class procedure, the teacher might consider "voting time" as a weekly activity. The decisions should involve simple choices that the students will have an interest in: treats on Friday, quiet time music selections, book selections for reading time, or recess activities. Former Speaker of the House "Tip" O'Neill has written that "all politics is local". By learning to exercise free choice at the grassroots, students may develop a lifelong appreciation of democratic choice.
Popular SovereigntyIt might be fun and instructive at election time to build and decorate a voting booth and ballot box for class decision making. The class could view pictures of voting and, if the school itself is a polling place, could visit the polls to witness democracy in action (if officials permit). The class voting booth could be used during the year for special decision making events (special activities, class government, student of the week, special class rules, etc) or a mock election. Student groups could make the ballots, establish the choices, and count the results. The goal, of course, is to establish an understanding of majority rule and the collective power of the people in a democracy. As a supplemental unit, the class could learn about the history of elections tracing the results of local, state, or federal elections over time. The presidency might be a useful focus for tracing the evolution of political parties, the evolution of the popular vote, and voter participation. A class project could involve the creation of an extensive bulletin board display on the history of presidential elections.
Popular SovereigntyMiddle school “mock elections” could be held, especially during the fall general elections. A full slate of candidates should be developed, after some research into the issues (local, state, and federal). As many students as possible should take roles as candidates preparing platforms and speeches. Other students might act as news persons to conduct interviews. Others would act as election officials to supervise voting and to count ballots. Leading up to election day, art activities might include poster, bumper sticker, banner making. Bulletin boards could display an array of election memorabilia and campaign art. If feasible, the election could involve other classes to participate in a campaign rally and the election itself. Results of the election could be published in a classroom newspaper written by the entire class. Video could be a part of the experience with interviews, speeches, and news coverage of the election.
Popular SovereigntyVoting patterns could be studied by criteria such as: age, race, education and gender. A good historical case is the Lincoln -Douglas debate regarding the extension of slavery. A related issue is the problem of redistricting congressional boundaries along more equitable lines for minorities. A statistical comparison of voting in redistricted areas might provoke good discussion and debate about the impact of popular sovereignty in local areas. Another vital aspect of popular sovereignty is the constitutional recourse available to citizens when their wishes are violated by elected officials. Cases of initiative, referendum, and recall might be studied (especially those available on the local level). Discussion, debate, and writing activities should follow.
PatriotismStudents will learn the Pledge of Allegiance and the “Star Spangled Banner” in group recitation and singing. In addition, the class should learn about the history of the American flag and its proper display. To support this activity, the class can create a collection of flag art and pictures in the classroom. Other patriotic songs like “My Country Tis of Thee” and “America the Beautiful” could be also sung and discussed with art projects developed around the lyrics.
PatriotismStudents will study a variety of patriotic images and art (from The New England Patriots football logo to Uncle Sam to Norman Rockwell to World War II posters). They will then consider the word patriotism in brainstorming an inductive definition of the concept by determining what each of the images and paintings has in common. This definition will be compared to the formal dictionary definition. Both definitions and a visual display will be posted in the classroom.
PatriotismThe class will read and consider some traditional patriotic stories like those of Nathan Hale or Barbara Fritchie. Then, after discussing the stories and the qualities of individual patriotism, the class will brainstorm and research ways in which they (as individual citizens) might act patriotically. The teacher should encourage students to think broadly about patriotism as good citizenship in showing love and devotion to their country and its values. The class will then decide on a “good citizenship” project which enacts the values of patriotism that they have learned. This could involve writing letters of appreciation to war veterans, cleaning up a park memorial, or establishing a patriotic window display for a downtown business. The class could invite a veteran or elected official as guest speaker for the dedication of their project.
PatriotismThe class will respond to the question "My Country, Right or Wrong?" in a debate/discussion of whether patriotism and love of one's country is always blind and unconditional. To prepare for the debate, the class should consider a series of historical cases in which the actions of the American government might be questioned on moral or ethical grounds. Examples might include: Indian Removal, the Spanish-American War, the My Lai massacre, use of Agent Orange, the Gulf of Tonkin Resolution, the Alien and Sedition Acts, conscientious objectors, Thoreau's night in jail, etc.
The purpose of the debate is to provoke higher level thinking about patriotism and its connection to moral and ethical values. For example, is it possible to be both a dissenter and a patriot? What separates a patriot from a zealot? How do our traditions of individualism and free speech interface with our value for patriotism and love of country?
The activity could involve cluster groups which nominate representatives to the class debate. The debate could involve role playing of historical figures from the cases studied. Class moderators and questioners would supervise the debate. The teacher would conduct a debriefing. An essay assignment on the question would follow as a closure activity.
Rule of LawTraffic signs and pedestrian rules provide opportunities for an introduction to the rule of law. In reviewing the traffic signs, stop lights, and crossing lanes around the school, the teacher should stress the importance of knowing what traffic signs mean and why it is important to obey them. Posters of traffic signs should be posted in the classroom. Students should be able to explain their rights as pedestrians and why traffic laws exist for the good of all. Art activities might include drawings of important traffic signs, stop lights, and mapping of each student's route home from school with street crossings and signs included.
Rule of LawThe evolution of law (in civil rights cases for example) might prove useful in learning about how law evolves from a living constitution. Starting with the 3/5 clause and moving through the Fugitive Slave Law, the constitutional amendments, Reconstruction, the Jim Crow era, Plessy v. Ferguson, Brown v. Board of Education, the Civil Rights Act of 1964, and the Voting Rights Act of 1965, the class will create an illustrated chart of the changes in civil rights law. The chart should contain a section for noting "causes" in recognition of the fact that each change in law is the result of a demonstrated need or omission in the existing law.
A future focus activity would be to brainstorm and research areas of the law that might need to be changed to meet new problems (e.g. cloning, parent rights, the rights of children). The teacher might supply news stories for further research. The class might write their own legal codes to address these social problems.
Rule of LawMiddle school is a good place to introduce a comparative study of how the rule of law is or is not implemented in countries around the world. Cases of free speech and human rights violations in China, Latin America, Africa, and in the former Soviet Bloc countries are well documented. Executions, illegal searches, political imprisonment, and genocide might be contrasted to how political problems are handled in a constitutional system as in the United States. Even in the United States, there are cases like Japanese Internment or the Red Scare in which the rule of law has been violated for political and perceived national security reasons. After doing the reading and the research, student groups will prepare panel discussions and drama groups to enact the scenarios under study. Student news groups will prepare “60 Minutes” segments briefing the class on various cases and their resolution. An important segment would be tracing the rule of law in the American Constitutional system.
Rule of LawDepending upon whether the group is a history class or a government class, several cases might prove stimulating in reaching a deep understanding of the rule of law in our constitutional system. The Watergate story with an emphasis on the documentation of President Nixon’s violation of law is a classic study of how elected officials are not above the constitution. Another approach might be to look at the evolution of the rights of the accused in the Brown/ Miranda/ Gideon cases. Also, a study of the conditions of women and African Americans before and after “protective” laws might prove useful. In addition, government classes might do comparative studies of constitutions (current and historic) from other countries. The emphasis should be on close study of primary documents. Small group discussion should be followed by large group debriefing. A writing activity on a critical question might provide closure.
Separation of PowersThe three branches of government may be introduced through large pictures of public buildings in Washington D.C. Pictures of The White House, The Supreme Court, and The Capitol Building may be placed on a bulletin board. The display might also include photographs of the president, Supreme Court justices, and members of the House and Senate. Above the three-part display, an American flag might be displayed to symbolize the unifying quality of The Constitution and the way in which the three branches make up our federal government. In discussing the display, the teacher should explain in broad terms the role that each branch of government plays. As an activity, the students might create drawings or posters expressing their impressions of each branch of government.
Separation of PowersThe separation of powers can be studied through an interdisciplinary presentation of Sir Isaac Newton's Third Law, "For every action, there is an equal and opposite reaction." By using a balance scale and weights, the teacher can demonstrate not only an important physical law but a key principle of democracy, the separation of powers. Once the scientific principle is understood, the teacher might introduce Montesquieu's idea of "checks and balances", which is based on Newtonian thinking. A useful focus might be to consider the power to wage war in The Constitution. Why must the president ask Congress for a "declaration of war"? What power does Congress have to check the president's use of the military? What powers does the Supreme Court have to check Congress and the president? As a case study, President Roosevelt's December 8, 1941 speech to a joint session of Congress asking for a declaration of war against Japan might be considered. What would the response of The Supreme Court have been if the president had declared war without the approval of Congress? How could the Congress have checked the president if he had acted without their approval?
After consideration of the case and related questions, the class could create news headlines and brief stories reporting each scenario. An alternative project would be to do simulated interviews and CNN style news briefs.
Separation of PowersWith the assistance of the teachers, the class should read Articles I, II, and III of The Constitution. After discussion, the class may be divided into three teams (legislative, executive, and judicial) to create a chart that outlines the defined powers of their assigned branch of government. The charts should be posted on the class bulletin board.
Next, the teacher should distribute packets on a simulated case scenario. Some examples might be: "The President Sends U.S. Troops into Battle", "Congress Votes to Jail Political Dissidents", and "Supreme Court To Decide on Free Speech Rights of Middle School Students". The packets should detail the scenario and action taken or recommended by the respective branch of government. For evaluative purposes, the teacher should draw up a rubric referencing the case to Articles I, II, or III of The Constitution.
After studying each case, the student teams should analyze the three cases from the point of view of their assigned branch of government. Their report to the class should point out the constitutional problems in each case and should recommend action as justified in The Constitution. A final activity might be an essay assignment focusing on the proper sharing of power among the three branches of government and what the separation of power means to average citizens.
Separation of PowersThe class should read and review Articles I, II, and III of The Constitution. Then, using the Legal Information Institute web site (http://supct.law.cornell.edu/supct/cases/historic.htm), students should study briefs of the Marbury v. Madison (1803) and McCulloch v. Maryland (1819) cases to fully understand the concepts of judicial review and broad congressional authority "within the scope of the constitution." Now, the class might do an in-depth study of one or more cases involving questions of the separation of power between the three branches.
Suggested cases are: President Jackson's war against the Bank of the United States (1832-36), President Roosevelt's handling of the Northern Securities Trust (1902), Plessy v. Ferguson (1896), and the War Powers Act (1973). The class could be divided into four research/study groups, each taking one of the cases. The groups would prepare a brief tracing the history of the case and the constitutional issues at stake. Their presentation should also identify the resolution of the case and link the resolution to issues of separation of power.
Some key discussion topics: How might these cases be resolved today? Does the balance of power among the three branches shift over time? How do politics and social change affect the balance? Is there equilibrium among the branches or does power shift over time? What are some issues today that reveal the shifting balance? Can we trace the history of the shifting balance of power? Which of the three branches seems to be in ascendance today?
A good closure activity might be an impromptu position paper or take home essay based on some of the issues raised by the presentations and discussion.
Representative GovernmentBasic representative democracy might start in the early elementary years through a regular series of classroom elections. The elections could involve weekly choices such as story time material, recess games, bulletin board themes, or class colors. In addition, class elections could be held each month for teacher assistant. Students can help design and build a classroom ballot box. Art projects might include designing and creating ballots, election day banners, and a voting booth. Student committees can tabulate results and post them on a special election bulletin board.
Representative GovernmentThe teacher might organize a class "council" (or "senate") at the beginning of the year. The purpose of the council would be to represent the students in the class in making decisions that would affect the entire class during the school year ahead. The teacher should seek input from the class in establishing the "constitution" for the council. Such matters as term of office, size of constituency, powers of the council, meeting dates, and qualifications for office should be determined through class discussion. Before the elections are held, a brainstorming activity should explore the desired traits for leaders elected to the council.
A parallel biographical reading and research project could be assigned on the topic of "Great Leaders of Democracy, Yesterday and Today". The result would be a bulletin board display showing the final list of traits and several historical examples demonstrating each trait of good leadership. Election speeches for nominees are optional. For more participation and rotation in office, elections could be held once each semester. A recommended web site for stories about community leaders and activism is: The American Promise - http://www.pbs.org/ap/
Representative GovernmentMiddle school American history is a good place to research the policies of Alexander Hamilton and Thomas Jefferson concerning representative government. The class might be divided into "Hamiltonians" and "Jeffersonians". Each group would research the position of their leader on the topic of the powers of the central government vs. the power of the citizens. The activity would fit nicely into a unit on The Constitution and the compromises that resulted in our bicameral system. Closure might include a debate between members of each group on key points, the creation of a comparative chart, and brainstorming how different our system might be today if either Jefferson's or Hamilton's ideas had prevailed. Related activities might include: essays, editorials, news stories, video interviews, and role playing.
Representative GovernmentHigh school students might benefit from a comparative study of several different constitutions from around the world to measure the depth and effectiveness of representative government in The Constitution of The United States. The constitutions of the former U.S.S.R. and The Union of South Africa would be useful. The class might also be divided into study groups to determine the powers allotted to elected representatives in such bodies as the Japanese Diet, the Israeli Knesset, and the British House of Commons.
For background, the class should review Article I of The Constitution and the writings of John Locke and Jean Jacques Rousseau. With the "pure democracy" of the New England town meeting at one end and totalitarian dictatorship on the other, where does the American republic stand in comparison to other countries in empowering its citizens?
Freedom of ReligionA simple but effective activity is the posting of religious holidays from a diversity of world religions on a class calendar. The calendar should be inclusive of all of the major world faiths. As a supplementary activity, a pictorial glossary of key concepts from each religion might be posted. In addition, students might create art depicting the major holidays and high holy days as they study them on the class calendar.
Freedom of ReligionStudents might begin to understand the concept of religious "pluralism" by studying the early colonies and the many religious groups that migrated to America during the colonial period. A map tracing the religious influences in the early colonies might emphasize the point graphically. The project could be expanded to trace the subsequent migration of new religious influences during the nineteenth and twentieth centuries. Consideration of religious pluralism should include an understanding of the principles of religious liberty, freedom of conscience, and separation of church and state found in the First Amendment of the Constitution. The study should also acknowledge those who do not profess a religious belief and their equal protection under the Constitution.
Freedom of ReligionReligious liberty under The Constitution might be presented in a comparative study of how religious persecution has been legalized in other political systems. The German treatment of Jews during the Holocaust, the conflict between Catholics and Protestants in Northern Ireland, and the Spanish Inquisition are opportune historical subjects to develop. More recently, the "ethnic cleansing" in the Balkans and the slaughter in Rwanda address the theme in graphic terms. Student research groups might work on different topics. As a supplement, the teacher might compile document packets with primary sources, pictures, news stories, and eyewitness accounts for class discussion. As a closure discussion, students might address the question of how religious conflicts like those in their study have been avoided in America. How does The Constitution provide equal protection for the beliefs of all of its citizens? An essay assignment on the subject might be part of a language arts activity.
Freedom of ReligionA good debate-discussion topic might be to address the relationship between religion and politics in American life. In what ways has religious belief shaped the political and social views of millions of American citizens? The class might undertake a comparative study of the history of recent American elections (say, going back to the 1960s) to see how religious affiliation has influenced the outcome. Voting statistics indicating party loyalty, religious affiliation, financial contributions, economic status, educational level, and ethnicity could be researched. The teacher might provide a packet with historical perspective from Machiavelli to William Jennings Bryan to Madalyn Murray O'Hair to the South Carolina primary race between John McCain and George W. Bush.
Class activities include a debate, small-group consideration of the document packets, and an essay taking a position on the relationship between politics and religion in America.
FederalismDiscuss the idea of a government and how it begins with people and their needs. Have students draw or create a pictogram of their concept of what government does for its citizens.
FederalismLearn the original 13 states and their relationship to each other in a federal system. Learn about westward expansion and how new land meant the creation of new states. Learn the current 50 states and their relationship to the federal government. Begin to conceptualize how federal and state governments share power and serve different functions for their citizens. Have students create posters indicating the differences between the federal government and their state.
FederalismOrganize student groups to debate and discuss the different roles of federal and state governments. Using the Constitution and a mock Supreme Court, the class could conduct a debate over a selected issue (gun control, civil rights, etc.) and decide whether the issue should fall under state or federal control. Some research required.
FederalismAfter consideration of the documents, the class should be divided into two groups (Federalists vs. Anti-Federalists). Each group will prepare for a symposium-debate on the question of Federalism and the sharing of political power in a democracy. Students will play historical roles based on the major historical figures representing the evolution of their group's position and philosophy. Representatives from both groups will meet with the teacher to determine the 3-5 key questions that will be the focus of the symposium. Each student in both groups must prepare a role and stay in that role for the duration of the debate. The discussion will stay focused on the preselected questions. Each student will submit a position paper (with historical examples) representing his character's hypothetical position on the selected questions. Extensive research required.
Civilian Control of the MilitaryThrough pictures and photographs, the class should observe the differences between the civilian and the military functions of government. A bulletin board display, divided into two large areas, would demonstrate various important jobs and services that the federal government provides to its citizens. The display will be a graphic introduction to the many ways that government serves the people. Also, it will introduce the differences between the military and the civilian roles.
Civilian Control of the MilitaryDisplay an array of pictures of the United States military in action including ships, planes, missiles, and vehicles. Also, include photographs of the President reviewing troops and photographs of other world leaders who wear military uniforms (e.g. General Pinochet of Chile and Fidel Castro of Cuba). Discuss with the class the relationship between the president and the military in our democratic system. Ask them to observe that the President wears civilian clothes and never wears a military uniform, while in other countries there are sometimes generals who take over the civilian government.
Brainstorm a list of reasons why, in our constitution, the military is under the control of the executive branch. Why would a military dictatorship be attractive to some countries? As a project, the class could create a flow chart tracing the military organization of The United States and the relationship of each branch of service to the Department of Defense and the president. Pictures and information for the chart can be obtained from internet sites.
Civilian Control of the MilitaryThe class will read Article II, section 2 of The Constitution. After a discussion of the reasons for making the President, a civilian, the commander in chief of the military, the class will study a case involving a challenge to presidential authority by the military. Cases might include: the firing of General MacArthur by President Truman in 1951 or the fictional situations posed by the films Seven Days in May or Fail Safe (both available for rental). After considering the documents or viewing the films, the teacher should conduct a debriefing on the situation and the constitutional implications. As a closure activity, a mock court-martial of the military figures in the case could be held, with students preparing roles. An essay could also be assigned discussing the merits of civilian control of the military.
Civilian Control of the MilitaryAfter reviewing Article II, section 2 of The Constitution, the class will consider President Eisenhower's 1961 farewell address warning against the "unwarranted influence" of the "military-industrial complex." The teacher should prepare a packet which includes Eisenhower's speech, remarks by military leaders like General Curtis LeMay, and other documents concerning the control and use of nuclear weapons. The focus of these documents will prepare discussion of the issue of "The Constitution in a Nuclear Age". After consideration of the documents, the class will be divided into two groups: one representing support for civilian control, the other representing the military point of view. After preparing several discussion points provided by the teacher, the class will engage in a round-table discussion defending their assigned point of view. As a supplemental case, the teacher could provide a document packet on President Truman's 1945 decision to use atomic weapons, the various options facing him, and the military perspective at the time.
This activity may be an extended term project and could involve additional research and writing. A shorter activity would involve group discussion of the packet and questions.
Students analyze the behavior of a piecewise function
Students learn the concept of piecewise functions
Students learn about conditionals (how to write piecewise functions in code)
Students will understand that functions can perform different computations based on characteristics of their inputs
Students will begin to see how Examples indicate the need for piecewise functions
Students will understand that cond statements capture pairs of questions and answers when coding a piecewise function
Computer for each student (or pair), running WeScheme or DrRacket with the bootstrap-teachpack installed
Student workbooks, and something to write with
Class poster (List of rules, language table, course calendar)
- Luigi’s Pizza
To get started with this lesson, complete Luigi’s Pizza Worksheet.
The code for the cost function is written below:
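The original worksheet code isn't reproduced here, so the version below is a sketch based on the prices this lesson mentions later ("cheese" produces 9.00 and "pepperoni" produces 10.50); the last two toppings and their prices are placeholders rather than the actual worksheet values.

; cost : String -> Number
; Given a topping, produce the price of that pizza.
; (Note: the else clause returns a String, which this lesson revisits below.)
(define (cost topping)
  (cond
    [(string=? topping "cheese")    9.00]
    [(string=? topping "pepperoni") 10.50]
    [(string=? topping "spinach")   10.25]   ; placeholder topping and price
    [(string=? topping "hawaiian")  10.50]   ; placeholder topping and price
    [else "that topping isn't on the menu!"]))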
Up to now, all of the functions you’ve seen have done the same thing to their inputs:
This was evident when going from EXAMPLEs to the function definition: circling what changes essentially gives away the definition, and the number of variables would always match the number of things in the Domain.
green-triangle always made green triangles, no matter what the size was.
safe-left? always compared the input coordinate to -50, no matter what that input was.
update-danger always added or subtracted the same amount
and so on...
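For instance, the green-triangle EXAMPLEs and definition probably looked something like the sketch below (the exact sizes are invented, and this assumes the EXAMPLE form provided by the Bootstrap teachpack); notice that only one thing, the size, ever changes, so one variable is enough.

; green-triangle : Number -> Image
(EXAMPLE (green-triangle 20) (triangle 20 "solid" "green"))
(EXAMPLE (green-triangle 50) (triangle 50 "solid" "green"))

; Circling what changes in the EXAMPLEs (just the 20 and the 50) gives the definition:
(define (green-triangle size)
  (triangle size "solid" "green"))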
Turn to Page 67, fill in the Contract and EXAMPLEs for this function, then circle and label what changes.
It may be worthwhile to have some EXAMPLEs and definitions written on the board, so students can follow along.
The cost function is special, because different toppings can result in totally different expressions being evaluated. If you were to circle everything that changes in the example, you would have the toppings circled and the price circled. That's two changeable things, but the Domain of the function only has one thing in it - therefore, we can't have two variables.
Have students refer back to prior Design Recipe pages - it is essential that they realize that this is a fundamentally new situation, which will require a new technique in the Design Recipe!
Of course, price isn’t really an independent variable, since the price depends entirely on the topping. For example: if the topping is "cheese" the function will simply produce 9.00, if the topping is "pepperoni" the function will simply produce 10.50, and so on. The price is still defined in terms of the topping, and there are four possible toppings - four possible conditions - that the function needs to care about. The cost function makes use of a special language feature called conditionals, which is abbreviated in the code as cond.
- Each conditional has at least one clause. Each clause has a Boolean "question" and a "result". In Luigi’s function, there is a clause for cheese, another for pepperoni, and so on. If the "question" evaluates to true, the "result" expression gets evaluated and returned. If the "question" is false, the computer will skip to the next clause.
Look at the cost function:
How many clauses are there for the cost function?
Identify the question in the first clause.
Identify the question in the second clause.
Square brackets enclose the question and answer for each clause. When students identify the questions, they should find the first expression within the square brackets. There can only be one expression in each answer.
- The last clause in a conditional can be an else clause, which gets evaluated if all the questions are false.
In the cost function, what gets returned if all the questions are false? What would happen if there was no else clause? Try removing that clause from the code and evaluating an unknown topping, and see what happens.
else clauses are best used as a catch-all for cases that you can’t otherwise enumerate. If you can state a precise question for a clause, write the precise question instead of else. For example, if you have a function that does different things depending on whether some variable x is larger than 5, it is better for beginners to write the two questions (> x 5) and (<= x 5) rather than have the second question be else. Explicit questions make it easier to read and maintain programs.
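As a minimal sketch of that advice (the function name and answers here are invented for illustration, not taken from any worksheet), compare two ways of writing the same piecewise function:

; label : Number -> String
; Preferred for beginners: both questions written out explicitly.
(define (label x)
  (cond
    [(> x 5)  "big"]
    [(<= x 5) "small"]))

; The same behavior, but else hides the second condition from the reader.
(define (label-with-else x)
  (cond
    [(> x 5) "big"]
    [else    "small"]))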
Functions that use conditions are called piecewise functions, because each condition defines a separate piece of the function. Why are piecewise functions useful? Think about the player in your game: you’d like the player to move one way if you hit the "up" key, and another way if you hit the "down" key. Moving up and moving down need two different expressions! Without cond, you could only write a function that always moves the player up, or always moves it down, but not both.
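A sketch of what such a key handler might look like is shown below; the function name, the coordinate convention, and the step size of 20 are assumptions for illustration, and an actual game's handler may differ.

; update-player : Number String -> Number
; Produce the player's new y-coordinate based on which key was pressed.
(define (update-player y key)
  (cond
    [(string=? key "up")   (+ y 20)]   ; assumed step size of 20
    [(string=? key "down") (- y 20)]
    [else y]))                         ; any other key: stay put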
- Right now the else clause produces a String, even though the Range of the function is Number. Do you think this is a problem? Why or why not? As human beings, having output that breaks that contract might not be a problem: we know that the function will produce the cost of a pizza or an error message. But what if the output of this code didn't go to humans at all? What if we want to use this function from within some other code? Is it possible that that code might get confused? To find out, uncomment the last line of the program, (start cost), by removing the semicolon. When you click "Run", the simulator will use the cost function to run the cash register. See what happens if you order off the menu!
To fix this, let's change the else clause to return a really high price. After all, if the customer is willing to pay a million bucks, Luigi will make whatever pizza they want!
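Under the same assumptions as the earlier sketch of cost, the revised function would look something like this, so that every clause now produces a Number and the contract holds:

; cost : String -> Number
(define (cost topping)
  (cond
    [(string=? topping "cheese")    9.00]
    [(string=? topping "pepperoni") 10.50]
    [(string=? topping "spinach")   10.25]   ; placeholder topping and price
    [(string=? topping "hawaiian")  10.50]   ; placeholder topping and price
    [else 1000000]))                         ; off-menu orders cost a million bucks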
A galaxy is a gravitationally bound system of stars, stellar remnants, interstellar gas, dust, and dark matter. The word galaxy is derived from the Greek galaxias (γαλαξίας), literally "milky", a reference to the Milky Way. Galaxies range in size from dwarfs with just a few hundred million (10^8) stars to giants with one hundred trillion (10^14) stars, each orbiting its galaxy's center of mass.
Galaxies are categorized according to their visual morphology as elliptical, spiral, or irregular. Many galaxies are thought to have supermassive black holes at their centers. The Milky Way's central black hole, known as Sagittarius A*, has a mass four million times greater than the Sun. As of March 2016, GN-z11 is the oldest and most distant observed galaxy with a comoving distance of 32 billion light-years from Earth, and observed as it existed just 400 million years after the Big Bang.
Recent estimates of the number of galaxies in the observable universe range from 200 billion (2×10^11) to 2 trillion (2×10^12) or more, containing more stars than all the grains of sand on planet Earth. Most of the galaxies are 1,000 to 100,000 parsecs in diameter (approximately 3,000 to 300,000 light years) and separated by distances on the order of millions of parsecs (or megaparsecs). For comparison, the Milky Way has a diameter of at least 30,000 parsecs (100,000 LY) and is separated from the Andromeda Galaxy, its nearest large neighbor, by 780,000 parsecs (2.5 million LY).
The space between galaxies is filled with a tenuous gas (the intergalactic medium) having an average density of less than one atom per cubic meter. The majority of galaxies are gravitationally organized into groups, clusters, and superclusters. The Milky Way is part of the Local Group, which is dominated by it and the Andromeda Galaxy and is part of the Virgo Supercluster. At the largest scale, these associations are generally arranged into sheets and filaments surrounded by immense voids. The largest structure of galaxies yet recognised is a cluster of superclusters that has been named Laniakea, which contains the Virgo supercluster.
- 1 Etymology
- 2 Nomenclature
- 3 Observation history
- 4 Types and morphology
- 5 Other types of galaxies
- 6 Properties
- 7 Formation and evolution
- 8 Larger-scale structures
- 9 Multi-wavelength observation
- 10 See also
- 11 Notes
- 12 References
- 13 Bibliography
- 14 External links
The origin of the word galaxy derives from the Greek term for the Milky Way, galaxias (γαλαξίας, "milky one"), or kyklos galaktikos ("milky circle") due to its appearance as a "milky" band of light in the sky. In Greek mythology, Zeus places his son born by a mortal woman, the infant Heracles, on Hera's breast while she is asleep so that the baby will drink her divine milk and will thus become immortal. Hera wakes up while breastfeeding and then realizes she is nursing an unknown baby: she pushes the baby away, some of her milk spills, and it produces the faint band of light known as the Milky Way.
In the astronomical literature, the capitalized word "Galaxy" is often used to refer to our galaxy, the Milky Way, to distinguish it from the other galaxies in our universe. The English term Milky Way can be traced back to a story by Chaucer c. 1380:
"See yonder, lo, the Galaxyë
Which men clepeth the Milky Wey,
For hit is whyt."
Galaxies were initially discovered telescopically and were known as spiral nebulae. Most 18th- to 19th-century astronomers considered them either unresolved star clusters or anagalactic nebulae, simply part of the Milky Way; their true composition and nature remained a mystery. Observations using larger telescopes of a few nearby bright galaxies, like the Andromeda Galaxy, began resolving them into huge conglomerations of stars, and the faintness and sheer number of those stars implied that the true distances of these objects placed them well beyond the Milky Way. For this reason they were popularly called island universes, but this term quickly fell into disuse, as the word universe implied the entirety of existence. Instead, they became known simply as galaxies.
Tens of thousands of galaxies have been catalogued, but only a few have well-established names, such as the Andromeda Galaxy, the Magellanic Clouds, the Whirlpool Galaxy, and the Sombrero Galaxy. Astronomers work with numbers from certain catalogues, such as the Messier catalogue, the NGC (New General Catalogue), the IC (Index Catalogue), the CGCG (Catalogue of Galaxies and of Clusters of Galaxies), the MCG (Morphological Catalogue of Galaxies) and UGC (Uppsala General Catalogue of Galaxies). All of the well-known galaxies appear in one or more of these catalogues but each time under a different number. For example, Messier 109 is a spiral galaxy having the number 109 in the catalogue of Messier, but also codes NGC3992, UGC6937, CGCG 269-023, MCG +09-20-044, and PGC 37617.
The realization that we live in a galaxy which is one among many galaxies, parallels major discoveries that were made about the Milky Way and other nebulae.
The Greek philosopher Democritus (450–370 BCE) proposed that the bright band on the night sky known as the Milky Way might consist of distant stars. Aristotle (384–322 BCE), however, believed the Milky Way to be caused by "the ignition of the fiery exhalation of some stars that were large, numerous and close together" and that the "ignition takes place in the upper part of the atmosphere, in the region of the World that is continuous with the heavenly motions." The Neoplatonist philosopher Olympiodorus the Younger (c. 495–570 CE) was critical of this view, arguing that if the Milky Way is sublunary (situated between Earth and the Moon) it should appear different at different times and places on Earth, and that it should have parallax, which it does not. In his view, the Milky Way is celestial.
According to Mohani Mohamed, the Arabian astronomer Alhazen (965–1037) made the first attempt at observing and measuring the Milky Way's parallax, and he thus "determined that because the Milky Way had no parallax, it must be remote from the Earth, not belonging to the atmosphere." The Persian astronomer al-Bīrūnī (973–1048) proposed the Milky Way galaxy to be "a collection of countless fragments of the nature of nebulous stars." The Andalusian astronomer Ibn Bâjjah ("Avempace", d. 1138) proposed that the Milky Way is made up of many stars that almost touch one another and appear to be a continuous image due to the effect of refraction from sublunary material, citing his observation of the conjunction of Jupiter and Mars as evidence of this occurring when two objects are near. In the 14th century, the Syrian-born Ibn Qayyim proposed the Milky Way galaxy to be "a myriad of tiny stars packed together in the sphere of the fixed stars."
Actual proof of the Milky Way consisting of many stars came in 1610 when the Italian astronomer Galileo Galilei used a telescope to study the Milky Way and discovered that it is composed of a huge number of faint stars. In 1750 the English astronomer Thomas Wright, in his An original theory or new hypothesis of the Universe, speculated (correctly) that the galaxy might be a rotating body of a huge number of stars held together by gravitational forces, akin to the Solar System but on a much larger scale. The resulting disk of stars can be seen as a band on the sky from our perspective inside the disk. In a treatise in 1755, Immanuel Kant elaborated on Wright's idea about the structure of the Milky Way.
The first project to describe the shape of the Milky Way and the position of the Sun was undertaken by William Herschel in 1785 by counting the number of stars in different regions of the sky. He produced a diagram of the shape of the galaxy with the Solar System close to the center. Using a refined approach, Kapteyn in 1920 arrived at the picture of a small (diameter about 15 kiloparsecs) ellipsoid galaxy with the Sun close to the center. A different method by Harlow Shapley based on the cataloguing of globular clusters led to a radically different picture: a flat disk with diameter approximately 70 kiloparsecs and the Sun far from the center. Both analyses failed to take into account the absorption of light by interstellar dust present in the galactic plane, but after Robert Julius Trumpler quantified this effect in 1930 by studying open clusters, the present picture of our host galaxy, the Milky Way, emerged.
Distinction from other nebulae
A few galaxies outside the Milky Way are visible on a dark night to the unaided eye, including the Andromeda Galaxy, Large Magellanic Cloud and the Small Magellanic Cloud. In the 10th century, the Persian astronomer Al-Sufi made the earliest recorded identification of the Andromeda Galaxy, describing it as a "small cloud". In 964, Al-Sufi probably mentioned the Large Magellanic Cloud in his Book of Fixed Stars (referring to "Al Bakr of the southern Arabs", since at a declination of about 70° south it was not visible where he lived); it was not well known to Europeans until Magellan's voyage in the 16th century. The Andromeda Galaxy was later independently noted by Simon Marius in 1612. In 1734, philosopher Emanuel Swedenborg in his Principia speculated that there may be galaxies outside our own that are formed into galactic clusters that are miniscule parts of the universe which extends far beyond what we can see. These views "are remarkably close to the present-day views of the cosmos." In 1750, Thomas Wright speculated (correctly) that the Milky Way is a flattened disk of stars, and that some of the nebulae visible in the night sky might be separate Milky Ways. In 1755, Immanuel Kant used the term "island Universe" to describe these distant nebulae.
Toward the end of the 18th century, Charles Messier compiled a catalog containing the 109 brightest celestial objects having nebulous appearance. Subsequently, William Herschel assembled a catalog of 5,000 nebulae. In 1845, Lord Rosse constructed a new telescope and was able to distinguish between elliptical and spiral nebulae. He also managed to make out individual point sources in some of these nebulae, lending credence to Kant's earlier conjecture.
In 1912, Vesto Slipher made spectrographic studies of the brightest spiral nebulae to determine their composition. Slipher discovered that the spiral nebulae have high Doppler shifts, indicating that they are moving at a rate exceeding the velocity of the stars he had measured. He found that the majority of these nebulae are moving away from us.
In 1917, Heber Curtis observed nova S Andromedae within the "Great Andromeda Nebula" (as the Andromeda Galaxy, Messier object M31, was then known). Searching the photographic record, he found 11 more novae. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred within our galaxy. As a result, he was able to come up with a distance estimate of 150,000 parsecs. He became a proponent of the so-called "island universes" hypothesis, which holds that spiral nebulae are actually independent galaxies.
In 1920 a debate took place between Harlow Shapley and Heber Curtis (the Great Debate), concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the Universe. To support his claim that the Great Andromeda Nebula is an external galaxy, Curtis noted the appearance of dark lanes resembling the dust clouds in the Milky Way, as well as the significant Doppler shift.
In 1922, the Estonian astronomer Ernst Öpik gave a distance determination that supported the theory that the Andromeda Nebula is indeed a distant extra-galactic object. Using the new 100 inch Mt. Wilson telescope, Edwin Hubble was able to resolve the outer parts of some spiral nebulae as collections of individual stars and identified some Cepheid variables, thus allowing him to estimate the distance to the nebulae: they were far too distant to be part of the Milky Way. In 1936 Hubble produced a classification of galactic morphology that is used to this day.
In 1944, Hendrik van de Hulst predicted that microwave radiation with wavelength of 21 cm would be detectable from interstellar atomic hydrogen gas; and in 1951 it was observed. This radiation is not affected by dust absorption, and so its Doppler shift can be used to map the motion of the gas in our galaxy. These observations led to the hypothesis of a rotating bar structure in the center of our galaxy. With improved radio telescopes, hydrogen gas could also be traced in other galaxies. In the 1970s, Vera Rubin uncovered a discrepancy between observed galactic rotation speed and that predicted by the visible mass of stars and gas. Today, the galaxy rotation problem is thought to be explained by the presence of large quantities of unseen dark matter.
Beginning in the 1990s, the Hubble Space Telescope yielded improved observations. Among other things, Hubble data helped establish that the missing dark matter in our galaxy cannot solely consist of inherently faint and small stars. The Hubble Deep Field, an extremely long exposure of a relatively empty part of the sky, provided evidence that there are about 125 billion (1.25×10^11) galaxies in the observable universe. Improved technology for detecting the spectra invisible to humans (radio telescopes, infrared cameras, and X-ray telescopes) allows detection of other galaxies that are not detected by Hubble. In particular, galaxy surveys in the Zone of Avoidance (the region of the sky blocked at visible-light wavelengths by the Milky Way) have revealed a number of new galaxies.
In 2016, a study published in The Astrophysical Journal and led by Christopher Conselice of the University of Nottingham, using 3D modeling of images collected over 20 years by the Hubble Space Telescope, concluded that there are over 2 trillion (2×10^12) galaxies in the observable universe.
Types and morphology
Galaxies come in three main types: ellipticals, spirals, and irregulars. A slightly more extensive description of galaxy types based on their appearance is given by the Hubble sequence. Since the Hubble sequence is entirely based upon visual morphological type (shape), it may miss certain important characteristics of galaxies such as star formation rate in starburst galaxies and activity in the cores of active galaxies.
The Hubble classification system rates elliptical galaxies on the basis of their ellipticity, ranging from E0, being nearly spherical, up to E7, which is highly elongated. These galaxies have an ellipsoidal profile, giving them an elliptical appearance regardless of the viewing angle. Their appearance shows little structure and they typically have relatively little interstellar matter. Consequently, these galaxies also have a low portion of open clusters and a reduced rate of new star formation. Instead they are dominated by generally older, more evolved stars that are orbiting the common center of gravity in random directions. The stars contain low abundances of heavy elements because star formation ceases after the initial burst. In this sense they have some similarity to the much smaller globular clusters.
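For reference (this relation is standard Hubble notation rather than something stated in the passage above), the integer in the En label is usually taken from the galaxy's apparent axial ratio:

n = 10 (1 - b/a)

where a and b are the apparent semi-major and semi-minor axes; an E0 galaxy therefore appears nearly circular, while an E7 galaxy has b/a of roughly 0.3.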
The largest galaxies are giant ellipticals. Many elliptical galaxies are believed to form due to the interaction of galaxies, resulting in a collision and merger. They can grow to enormous sizes (compared to spiral galaxies, for example), and giant elliptical galaxies are often found near the core of large galaxy clusters.
A shell galaxy is a type of elliptical galaxy where the stars in the galaxy's halo are arranged in concentric shells. About one-tenth of elliptical galaxies have a shell-like structure, which has never been observed in spiral galaxies. The shell-like structures are thought to develop when a larger galaxy absorbs a smaller companion galaxy. As the two galaxy centers approach, they start to oscillate around a center point; the oscillation creates gravitational ripples that form the shells of stars, similar to ripples spreading on water. For example, galaxy NGC 3923 has over twenty shells.
Spiral galaxies resemble spiraling pinwheels. Though the stars and other visible material contained in such a galaxy lie mostly on a plane, the majority of mass in spiral galaxies exists in a roughly spherical halo of dark matter that extends beyond the visible component, as demonstrated by the universal rotation curve concept.
Spiral galaxies consist of a rotating disk of stars and interstellar medium, along with a central bulge of generally older stars. Extending outward from the bulge are relatively bright arms. In the Hubble classification scheme, spiral galaxies are listed as type S, followed by a letter (a, b, or c) that indicates the degree of tightness of the spiral arms and the size of the central bulge. An Sa galaxy has tightly wound, poorly defined arms and possesses a relatively large core region. At the other extreme, an Sc galaxy has open, well-defined arms and a small core region. A galaxy with poorly defined arms is sometimes referred to as a flocculent spiral galaxy, in contrast to the grand design spiral galaxy that has prominent and well-defined spiral arms. The speed at which a galaxy rotates is thought to correlate with the flatness of the disc, as some spiral galaxies have thick bulges while others are thin and dense.
In spiral galaxies, the spiral arms do have the shape of approximate logarithmic spirals, a pattern that can be theoretically shown to result from a disturbance in a uniformly rotating mass of stars. Like the stars, the spiral arms rotate around the center, but they do so with constant angular velocity. The spiral arms are thought to be areas of high-density matter, or "density waves". As stars move through an arm, the space velocity of each stellar system is modified by the gravitational force of the higher density. (The velocity returns to normal after the stars depart on the other side of the arm.) This effect is akin to a "wave" of slowdowns moving along a highway full of moving cars. The arms are visible because the high density facilitates star formation, and therefore they harbor many bright and young stars.
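For illustration (the explicit form is a standard result, not given in the text above), a logarithmic spiral arm can be written in polar coordinates as

r(\theta) = r_0 \, e^{\theta \tan\varphi}

where \varphi is the pitch angle of the arm. Because \varphi is constant, the arm crosses every circle centered on the galaxy at the same angle: tightly wound arms (as in Sa galaxies) correspond to small pitch angles, and open arms (as in Sc galaxies) to larger ones.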
Barred spiral galaxy
A majority of spiral galaxies, including our own Milky Way galaxy, have a linear, bar-shaped band of stars that extends outward to either side of the core, then merges into the spiral arm structure. In the Hubble classification scheme, these are designated by an SB, followed by a lower-case letter (a, b or c) that indicates the form of the spiral arms (in the same manner as the categorization of normal spiral galaxies). Bars are thought to be temporary structures that can occur as a result of a density wave radiating outward from the core, or else due to a tidal interaction with another galaxy. Many barred spiral galaxies are active, possibly as a result of gas being channeled into the core along the arms.
Our own galaxy, the Milky Way, is a large disk-shaped barred-spiral galaxy about 30 kiloparsecs in diameter and a kiloparsec thick. It contains about two hundred billion (2×10^11) stars and has a total mass of about six hundred billion (6×10^11) times the mass of the Sun.
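As a rough consistency check (the conversion factor is standard, not quoted above), 1 parsec is about 3.26 light-years, so

30\ \text{kpc} \times 3.26 \times 10^{3}\ \tfrac{\text{ly}}{\text{kpc}} \approx 9.8 \times 10^{4}\ \text{ly},

which matches the roughly 100,000 light-year diameter quoted for the Milky Way in the next paragraph.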
Recently, researchers described galaxies called super-luminous spirals. They are very large, with diameters of up to 437,000 light-years (compared to the Milky Way's 100,000 light-year diameter). With a mass of 340 billion solar masses, they generate a significant amount of ultraviolet and mid-infrared light. They are thought to have an increased star formation rate around 30 times faster than the Milky Way.
- Peculiar galaxies are galactic formations that develop unusual properties due to tidal interactions with other galaxies.
- A ring galaxy has a ring-like structure of stars and interstellar medium surrounding a bare core. A ring galaxy is thought to occur when a smaller galaxy passes through the core of a spiral galaxy. Such an event may have affected the Andromeda Galaxy, as it displays a multi-ring-like structure when viewed in infrared radiation.
- A lenticular galaxy is an intermediate form that has properties of both elliptical and spiral galaxies. These are categorized as Hubble type S0, and they possess ill-defined spiral arms with an elliptical halo of stars (barred lenticular galaxies receive Hubble classification SB0.)
- Irregular galaxies are galaxies that cannot be readily classified into an elliptical or spiral morphology.
- An ultra diffuse galaxy (UDG) is an extremely-low-density galaxy. The galaxy may be the same size as the Milky Way but has a visible star count of only 1% of the Milky Way. The lack of luminosity is due to a lack of star-forming gas, which results in old stellar populations.
Despite the prominence of large elliptical and spiral galaxies, most galaxies in the Universe are dwarf galaxies. These galaxies are relatively small when compared with other galactic formations, being about one hundredth the size of the Milky Way, containing only a few billion stars. Ultra-compact dwarf galaxies have recently been discovered that are only 100 parsecs across.
Many dwarf galaxies may orbit a single larger galaxy; the Milky Way has at least a dozen such satellites, with an estimated 300–500 yet to be discovered. Dwarf galaxies may also be classified as elliptical, spiral, or irregular. Since small dwarf ellipticals bear little resemblance to large ellipticals, they are often called dwarf spheroidal galaxies instead.
A study of 27 Milky Way neighbors found that in all dwarf galaxies, the central mass is approximately 10 million solar masses, regardless of whether the galaxy has thousands or millions of stars. This has led to the suggestion that galaxies are largely formed by dark matter, and that the minimum size may indicate a form of warm dark matter incapable of gravitational coalescence on a smaller scale.
Other types of galaxies
Interactions between galaxies are relatively frequent, and they can play an important role in galactic evolution. Near misses between galaxies result in warping distortions due to tidal interactions, and may cause some exchange of gas and dust. Collisions occur when two galaxies pass directly through each other and have sufficient relative momentum not to merge. The stars of interacting galaxies will usually not collide, but the gas and dust within the two forms will interact, sometimes triggering star formation. A collision can severely distort the shape of the galaxies, forming bars, rings or tail-like structures.
At the extreme of interactions are galactic mergers. In this case the relative momentum of the two galaxies is insufficient to allow the galaxies to pass through each other. Instead, they gradually merge to form a single, larger galaxy. Mergers can result in significant changes to morphology, as compared to the original galaxies. If one of the merging galaxies is much more massive than the other, the result is known as cannibalism. The more massive galaxy will remain relatively undisturbed by the merger, while the smaller galaxy is torn apart. The Milky Way galaxy is currently in the process of cannibalizing the Sagittarius Dwarf Elliptical Galaxy and the Canis Major Dwarf Galaxy.
Stars are created within galaxies from a reserve of cold gas that forms into giant molecular clouds. Some galaxies have been observed to form stars at an exceptional rate, which is known as a starburst. Were they to continue to do so, they would consume their reserve of gas in a time span shorter than the lifespan of the galaxy. Hence starburst activity usually lasts for only about ten million years, a relatively brief period in the history of a galaxy. Starburst galaxies were more common during the early history of the Universe, and, at present, still contribute an estimated 15% to the total star production rate.
Starburst galaxies are characterized by dusty concentrations of gas and the appearance of newly formed stars, including massive stars that ionize the surrounding clouds to create H II regions. These massive stars produce supernova explosions, resulting in expanding remnants that interact powerfully with the surrounding gas. These outbursts trigger a chain reaction of star building that spreads throughout the gaseous region. Only when the available gas is nearly consumed or dispersed does the starburst activity end.
Starbursts are often associated with merging or interacting galaxies. The prototype example of such a starburst-forming interaction is M82, which experienced a close encounter with the larger M81. Irregular galaxies often exhibit spaced knots of starburst activity.
A galaxy is classified as active if it contains an active galactic nucleus (AGN). A significant portion of the total energy output from such a galaxy is emitted by the active galactic nucleus rather than by the stars, dust and interstellar medium of the galaxy.
The standard model for an active galactic nucleus is based upon an accretion disc that forms around a supermassive black hole (SMBH) at the core region of the galaxy. The radiation from an active galactic nucleus results from the gravitational energy of matter as it falls toward the black hole from the disc. In about 10% of these galaxies, a diametrically opposed pair of energetic jets ejects particles from the galaxy core at velocities close to the speed of light. The mechanism for producing these jets is not well understood.
- Seyfert galaxies and quasars are active galaxies that emit high-energy radiation in the form of X-rays; the two classes are distinguished mainly by their luminosity.
A blazar is believed to be an active galaxy with a relativistic jet that is pointed in the direction of Earth. A radio galaxy emits radio frequencies from relativistic jets. A unified model of these types of active galaxies explains their differences based on the viewing angle of the observer.
Possibly related to active galactic nuclei (as well as starburst regions) are low-ionization nuclear emission-line regions (LINERs). The emission from LINER-type galaxies is dominated by weakly ionized elements. The excitation sources for the weakly ionized lines include post-AGB stars, AGN, and shocks. Approximately one-third of nearby galaxies are classified as containing LINER nuclei.
Seyfert galaxies are one of the two largest groups of active galaxies, along with quasars. They have quasar-like nuclei (very luminous, distant and bright sources of electromagnetic radiation) with very high surface brightnesses but unlike quasars, their host galaxies are clearly detectable. Seyfert galaxies account for about 10% of all galaxies. Seen in visible light, most Seyfert galaxies look like normal spiral galaxies, but when studied under other wavelengths, the luminosity of their cores is equivalent to the luminosity of whole galaxies the size of the Milky Way.
Quasars (/ˈkweɪzɑr/) or quasi-stellar radio sources are the most energetic and distant members of active galactic nuclei. Quasars are extremely luminous and were first identified as being high redshift sources of electromagnetic energy, including radio waves and visible light, that appeared to be similar to stars, rather than extended sources similar to galaxies. Their luminosity can be 100 times greater than that of the Milky Way.
Luminous infrared galaxy
Luminous infrared galaxies (LIRGs) are galaxies with luminosities (the measurement of brightness) above 10^11 L☉. LIRGs are more abundant than starburst galaxies, Seyfert galaxies and quasi-stellar objects of comparable total luminosity. Infrared galaxies emit more energy in the infrared than at all other wavelengths combined. A LIRG's luminosity is 100 billion times that of our Sun.
Galaxies have magnetic fields of their own. They are strong enough to be dynamically important: they drive mass inflow into the centers of galaxies, they modify the formation of spiral arms and they can affect the rotation of gas in the outer regions of galaxies. Magnetic fields provide the transport of angular momentum required for the collapse of gas clouds and hence the formation of new stars.
The typical average equipartition strength for spiral galaxies is about 10 μG (microgauss) or 1 nT (nanotesla). For comparison, the Earth's magnetic field has an average strength of about 0.3 G (gauss), or 30 μT (microtesla). Radio-faint galaxies like M 31 and M 33, our Milky Way's neighbors, have weaker fields (about 5 μG), while gas-rich galaxies with high star-formation rates, like M 51, M 83 and NGC 6946, have 15 μG on average. In prominent spiral arms the field strength can be up to 25 μG, in regions where cold gas and dust are also concentrated. The strongest total equipartition fields (50–100 μG) were found in starburst galaxies, for example in M 82 and the Antennae, and in nuclear starburst regions, for example in the centers of NGC 1097 and of other barred galaxies.
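The SI equivalences quoted above follow directly from the definition 1 G = 10^{-4} T (a standard conversion, not derived in the text):

10\ \mu\text{G} = 10^{-5}\ \text{G} = 10^{-9}\ \text{T} = 1\ \text{nT}, \qquad 0.3\ \text{G} = 3 \times 10^{-5}\ \text{T} = 30\ \mu\text{T}.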
Formation and evolution
Galactic formation and evolution is an active area of research in astrophysics.
Current cosmological models of the early Universe are based on the Big Bang theory. About 300,000 years after this event, atoms of hydrogen and helium began to form, in an event called recombination. Nearly all the hydrogen was neutral (non-ionized) and readily absorbed light, and no stars had yet formed. As a result, this period has been called the "dark ages". It was from density fluctuations (or anisotropic irregularities) in this primordial matter that larger structures began to appear. As a result, masses of baryonic matter started to condense within cold dark matter halos. These primordial structures would eventually become the galaxies we see today.
Evidence for the early appearance of galaxies was found in 2006, when it was discovered that the galaxy IOK-1 has an unusually high redshift of 6.96, corresponding to just 750 million years after the Big Bang and making it the most distant and primordial galaxy yet seen. While some scientists have claimed other objects (such as Abell 1835 IR1916) have higher redshifts (and therefore are seen in an earlier stage of the Universe's evolution), IOK-1's age and composition have been more reliably established. In December 2012, astronomers reported that UDFj-39546284 is the most distant object known and has a redshift value of 11.9. The object, estimated to have existed around "380 million years" after the Big Bang (which was about 13.8 billion years ago), is about 13.42 billion light-years away in light-travel distance. The existence of such early protogalaxies suggests that they must have grown in the so-called "dark ages". As of May 5, 2015, the galaxy EGS-zs8-1 is the most distant and earliest galaxy measured, forming 670 million years after the Big Bang. The light from EGS-zs8-1 has taken 13 billion years to reach Earth, and the galaxy is now 30 billion light-years away because of the expansion of the universe over those 13 billion years.
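For context (the relation itself is standard, not stated above), the quoted redshifts measure how much the wavelength of light has been stretched on its way to us:

1 + z = \frac{\lambda_{\text{observed}}}{\lambda_{\text{emitted}}}

so for IOK-1 at z = 6.96 every wavelength arrives stretched by a factor of about 8; hydrogen's Lyman-α line, emitted at 121.6 nm in the ultraviolet, is observed near 970 nm, in the near-infrared.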
Early galaxy formation
The detailed process by which early galaxies formed is an open question in astrophysics. Theories can be divided into two categories: top-down and bottom-up. In top-down theories (such as the Eggen–Lynden-Bell–Sandage [ELS] model), protogalaxies form in a large-scale simultaneous collapse lasting about one hundred million years. In bottom-up theories (such as the Searle–Zinn [SZ] model), small structures such as globular clusters form first, and then a number of such bodies accrete to form a larger galaxy.
Once protogalaxies began to form and contract, the first halo stars (called Population III stars) appeared within them. These were composed almost entirely of hydrogen and helium, and may have been massive. If so, these huge stars would have quickly consumed their supply of fuel and become supernovae, releasing heavy elements into the interstellar medium. This first generation of stars re-ionized the surrounding neutral hydrogen, creating expanding bubbles of space through which light could readily travel.
In June 2015, astronomers reported evidence for Population III stars in the Cosmos Redshift 7 galaxy at z = 6.60. Such stars are likely to have existed in the very early universe (i.e., at high redshift), and may have started the production of chemical elements heavier than hydrogen that are needed for the later formation of planets and life as we know it.
Within a billion years of a galaxy's formation, key structures begin to appear. Globular clusters, the central supermassive black hole, and a galactic bulge of metal-poor Population II stars form. The creation of a supermassive black hole appears to play a key role in actively regulating the growth of galaxies by limiting the total amount of additional matter added. During this early epoch, galaxies undergo a major burst of star formation.
During the following two billion years, the accumulated matter settles into a galactic disc. A galaxy will continue to absorb infalling material from high-velocity clouds and dwarf galaxies throughout its life. This matter is mostly hydrogen and helium. The cycle of stellar birth and death slowly increases the abundance of heavy elements, eventually allowing the formation of planets.
The evolution of galaxies can be significantly affected by interactions and collisions. Mergers of galaxies were common during the early epoch, and the majority of galaxies were peculiar in morphology. Given the distances between the stars, the great majority of stellar systems in colliding galaxies will be unaffected. However, gravitational stripping of the interstellar gas and dust that makes up the spiral arms produces a long train of stars known as tidal tails. Examples of these formations can be seen in NGC 4676 or the Antennae Galaxies.
The Milky Way galaxy and the nearby Andromeda Galaxy are moving toward each other at about 130 km/s, and—depending upon the lateral movements—the two might collide in about five to six billion years. Although the Milky Way has never collided with a galaxy as large as Andromeda before, evidence of past collisions of the Milky Way with smaller dwarf galaxies is increasing.
Such large-scale interactions are rare. As time passes, mergers of two systems of equal size become less common. Most bright galaxies have remained fundamentally unchanged for the last few billion years, and the net rate of star formation probably also peaked approximately ten billion years ago.
Spiral galaxies, like the Milky Way, produce new generations of stars as long as they have dense molecular clouds of interstellar hydrogen in their spiral arms. Elliptical galaxies are largely devoid of this gas, and so form few new stars. The supply of star-forming material is finite; once stars have converted the available supply of hydrogen into heavier elements, new star formation will come to an end.
The current era of star formation is expected to continue for up to one hundred billion years, and then the "stellar age" will wind down after about ten trillion to one hundred trillion years (10^13–10^14 years), as the smallest, longest-lived stars in our universe, tiny red dwarfs, begin to fade. At the end of the stellar age, galaxies will be composed of compact objects: brown dwarfs, white dwarfs that are cooling or cold ("black dwarfs"), neutron stars, and black holes. Eventually, as a result of gravitational relaxation, all stars will either fall into central supermassive black holes or be flung into intergalactic space as a result of collisions.
Deep sky surveys show that galaxies are often found in groups and clusters. Solitary galaxies that have not significantly interacted with another galaxy of comparable mass during the past billion years are relatively scarce. Only about 5% of the galaxies surveyed have been found to be truly isolated; however, these isolated formations may have interacted and even merged with other galaxies in the past, and may still be orbited by smaller, satellite galaxies. Isolated galaxies[note 2] can produce stars at a higher rate than normal, as their gas is not being stripped by other nearby galaxies.
On the largest scale, the Universe is continually expanding, resulting in an average increase in the separation between individual galaxies (see Hubble's law). Associations of galaxies can overcome this expansion on a local scale through their mutual gravitational attraction. These associations formed early in the Universe, as clumps of dark matter pulled their respective galaxies together. Nearby groups later merged to form larger-scale clusters. This on-going merger process (as well as an influx of infalling gas) heats the inter-galactic gas within a cluster to very high temperatures, reaching 30–100 megakelvins. About 70–80% of the mass in a cluster is in the form of dark matter, with 10–30% consisting of this heated gas and the remaining few percent of the matter in the form of galaxies.
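As an aside (the conversion uses the Boltzmann constant, which is not quoted above), gas at these temperatures radiates in the X-ray band: with k_B ≈ 8.6 × 10^{-5} eV/K,

k_B T \approx 8.6 \times 10^{-5}\ \tfrac{\text{eV}}{\text{K}} \times 10^{8}\ \text{K} \approx 8.6\ \text{keV}

for a 100-megakelvin cluster, which is why the hot intra-cluster gas mentioned here is mapped with the X-ray telescopes discussed later in this section.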
Most galaxies in the Universe are gravitationally bound to a number of other galaxies. These form a fractal-like hierarchical distribution of clustered structures, with the smallest such associations being termed groups. A group of galaxies is the most common type of galactic cluster, and these formations contain a majority of the galaxies (as well as most of the baryonic mass) in the Universe. To remain gravitationally bound to such a group, each member galaxy must have a sufficiently low velocity to prevent it from escaping (see Virial theorem). If there is insufficient kinetic energy, however, the group may evolve into a smaller number of galaxies through mergers.
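A minimal sketch of the bound-group criterion referenced above (the formula is the standard escape-velocity condition, not given in the text): a member galaxy at distance r from a group of total mass M remains bound roughly when its speed satisfies

v < v_{\text{esc}} = \sqrt{\frac{2GM}{r}},

where G is the gravitational constant; a galaxy moving faster than this eventually escapes the group.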
Clusters of galaxies consist of hundreds to thousands of galaxies bound together by gravity. Clusters of galaxies are often dominated by a single giant elliptical galaxy, known as the brightest cluster galaxy, which, over time, tidally destroys its satellite galaxies and adds their mass to its own.
Superclusters contain tens of thousands of galaxies, which are found in clusters, groups and sometimes individually. At the supercluster scale, galaxies are arranged into sheets and filaments surrounding vast empty voids. Above this scale, the Universe appears to be the same in all directions (isotropic and homogeneous).
The Milky Way galaxy is a member of an association named the Local Group, a relatively small group of galaxies that has a diameter of approximately one megaparsec. The Milky Way and the Andromeda Galaxy are the two brightest galaxies within the group; many of the other member galaxies are dwarf companions of these two galaxies. The Local Group itself is a part of a cloud-like structure within the Virgo Supercluster, a large, extended structure of groups and clusters of galaxies centered on the Virgo Cluster. And the Virgo Supercluster itself is a part of the Pisces-Cetus Supercluster Complex, a giant galaxy filament.
The peak radiation of most stars lies in the visible spectrum, so the observation of the stars that form galaxies has been a major component of optical astronomy. It is also a favorable portion of the spectrum for observing ionized H II regions, and for examining the distribution of dusty arms.
The dust present in the interstellar medium is opaque to visual light. It is more transparent to far-infrared, which can be used to observe the interior regions of giant molecular clouds and galactic cores in great detail. Infrared is also used to observe distant, red-shifted galaxies that were formed much earlier in the history of the Universe. Water vapor and carbon dioxide absorb a number of useful portions of the infrared spectrum, so high-altitude or space-based telescopes are used for infrared astronomy.
The first non-visual study of galaxies, particularly active galaxies, was made using radio frequencies. The Earth's atmosphere is nearly transparent to radio between 5 MHz and 30 GHz. (The ionosphere blocks signals below this range.) Large radio interferometers have been used to map the active jets emitted from active nuclei. Radio telescopes can also be used to observe neutral hydrogen (via 21 cm radiation), including, potentially, the non-ionized matter in the early Universe that later collapsed to form galaxies.
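As a quick check (the arithmetic is not in the text above), the 21 cm hydrogen line falls comfortably inside the quoted radio window:

\nu = \frac{c}{\lambda} \approx \frac{3 \times 10^{8}\ \text{m/s}}{0.21\ \text{m}} \approx 1.4\ \text{GHz},

well within the 5 MHz to 30 GHz range to which the atmosphere is transparent.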
Ultraviolet and X-ray telescopes can observe highly energetic galactic phenomena. Ultraviolet flares are sometimes observed when a star in a distant galaxy is torn apart by the tidal forces of a nearby black hole. The distribution of hot gas in galactic clusters can be mapped by X-rays. The existence of supermassive black holes at the cores of galaxies was confirmed through X-ray astronomy.
- Galaxies to the left side of the Hubble classification scheme are sometimes referred to as "early-type", while those to the right are "late-type".
- The term "field galaxy" is sometimes used to mean an isolated galaxy, although the same term is also used to describe galaxies that do not belong to a cluster but may be a member of a group of galaxies.
- Sparke & Gallagher III 2000, p. i
- Hupp, E.; Roy, S.; Watzke, M. (August 12, 2006). "NASA Finds Direct Proof of Dark Matter". NASA. Retrieved April 17, 2007.
- Uson, J. M.; Boughn, S. P.; Kuhn, J. R. (1990). "The central galaxy in Abell 2029 – An old supergiant". Science. 250 (4980): 539–540. Bibcode:1990Sci...250..539U. doi:10.1126/science.250.4980.539. PMID 17751483.
- Hoover, A. (June 16, 2003). "UF Astronomers: Universe Slightly Simpler Than Expected". Hubble News Desk. Archived from the original on July 20, 2011. Retrieved March 4, 2011. Based upon:
- Graham, A. W.; Guzman, R. (2003). "HST Photometry of Dwarf Elliptical Galaxies in Coma, and an Explanation for the Alleged Structural Dichotomy between Dwarf and Bright Elliptical Galaxies". The Astronomical Journal. 125 (6): 2936–2950. arXiv:astro-ph/0303391. Bibcode:2003AJ....125.2936G. doi:10.1086/374992.
- Jarrett, T. H. "Near-Infrared Galaxy Morphology Atlas". California Institute of Technology. Retrieved January 9, 2007.
- Finley, D.; Aguilar, D. (November 2, 2005). "Astronomers Get Closest Look Yet At Milky Way's Mysterious Core". National Radio Astronomy Observatory. Retrieved August 10, 2006.
- Gott III, J. R.; et al. (2005). "A Map of the Universe". The Astrophysical Journal. 624 (2): 463–484. arXiv:astro-ph/0310571. Bibcode:2005ApJ...624..463G. doi:10.1086/428890.
- Christopher J. Conselice; et al. (2016). "The Evolution of Galaxy Number Density at z < 8 and its Implications". The Astrophysical Journal. 830 (2): 83. arXiv:1607.03909v2. Bibcode:2016ApJ...830...83C. doi:10.3847/0004-637X/830/2/83.
- Fountain, Henry (17 October 2016). "Two Trillion Galaxies, at the Very Least". The New York Times. Retrieved 17 October 2016.
- Mackie, Glen (1 February 2002). "To see the Universe in a Grain of Taranaki Sand". Centre for Astrophysics and Supercomputing. Retrieved 28 January 2017.
- "Galaxy Clusters and Large-Scale Structure". University of Cambridge. Retrieved January 15, 2007.
- Gibney, Elizabeth (2014). "Earth's new address: 'Solar System, Milky Way, Laniakea'". Nature. doi:10.1038/nature.2014.15819.
- Harper, D. "galaxy". Online Etymology Dictionary. Retrieved November 11, 2011.
- Waller & Hodge 2003, p. 91
- Konečný, Lubomír. "Emblematics, Agriculture, and Mythography in The Origin of the Milky Way" (PDF). Academy of Sciences of the Czech Republic. Archived from the original (PDF) on July 20, 2006. Retrieved January 5, 2007.
- Rao, J. (September 2, 2005). "Explore the Archer's Realm". Space.com. Retrieved January 3, 2007.
- Plutarch (2006). The Complete Works Volume 3: Essays and Miscellanies. Chapter 3: Echo Library. p. 66. ISBN 978-1-4068-3224-2.
- Montada, J. P. (September 28, 2007). "Ibn Bâjja". Stanford Encyclopedia of Philosophy. Retrieved July 11, 2008.
- Heidarzadeh 2008, pp. 23–25
- Mohamed 2000, pp. 49–50
- Bouali, H.-E.; Zghal, M.; Lakhdar, Z. B. (2005). "Popularisation of Optical Phenomena: Establishing the First Ibn Al-Haytham Workshop on Photography" (PDF). The Education and Training in Optics and Photonics Conference. Retrieved July 8, 2008.
- O'Connor, John J.; Robertson, Edmund F., "Abu Arrayhan Muhammad ibn Ahmad al-Biruni", MacTutor History of Mathematics archive, University of St Andrews.
- Al-Biruni 2004, p. 87
- Heidarzadeh 2008, p. 25, Table 2.1
- Livingston, J. W. (1971). "Ibn Qayyim al-Jawziyyah: A Fourteenth Century Defense against Astrological Divination and Alchemical Transmutation". Journal of the American Oriental Society. 91 (1): 96–103 . doi:10.2307/600445. JSTOR 600445.
- Galileo Galilei, Sidereus Nuncius (Venice, (Italy): Thomas Baglioni, 1610), pages 15 and 16.
English translation: Galileo Galilei with Edward Stafford Carlos, trans., The Sidereal Messenger (London, England: Rivingtons, 1880), pages 42 and 43.
- O'Connor, J. J.; Robertson, E. F. (November 2002). "Galileo Galilei". University of St. Andrews. Retrieved January 8, 2007.
- Thomas Wright, An Original Theory or New Hypothesis of the Universe … (London, England: H. Chapelle, 1750). From p.48: " … the stars are not infinitely dispersed and distributed in a promiscuous manner throughout all the mundane space, without order or design, … this phænomenon [is] no other than a certain effect arising from the observer's situation, … To a spectator placed in an indefinite space, … it [i.e., the Milky Way (Via Lactea)] [is] a vast ring of stars … "
On page 73, Wright called the Milky Way the Vortex Magnus (the great whirlpool) and estimated its diameter at 8.64×10^12 miles (13.9×10^12 km).
- Evans, J. C. (November 24, 1998). "Our Galaxy". George Mason University. Archived from the original on June 30, 2012. Retrieved January 4, 2007.
- Immanuel Kant, Allgemeine Naturgeschichte und Theorie des Himmels … [Universal Natural History and Theory of the Heavens … ], (Königsberg and Leipzig, (Germany): Johann Friederich Petersen, 1755).
Available in English translation by Ian Johnston at: Vancouver Island University, British Columbia, Canada Archived August 29, 2014, at the Wayback Machine.
- William Herschel (1785). "XII. On the construction of the heavens". Giving Some Accounts of the Present Undertakings, Studies, and Labours, of the Ingenious, in Many Considerable Parts of the World. Philosophical Transactions of the Royal Society of London. vol. 75. London. pp. 213–266. doi:10.1098/rstl.1785.0012. ISSN 0261-0523. Herschel's diagram of the galaxy appears immediately after the article's last page.
- Paul 1993, pp. 16–18
- Trimble, V. (1999). "Robert Trumpler and the (Non)transparency of Space". Bulletin of the American Astronomical Society. 31 (31): 1479. Bibcode:1999AAS...195.7409T.
- Kepple & Sanner 1998, p. 18
- "The Large Magellanic Cloud, LMC". Observatoire de Paris. Mar 11, 2004. Archived from the original on June 22, 2017.
- "Abd-al-Rahman Al Sufi (December 7, 903 – May 25, 986 A.D.)". Observatoire de Paris. Retrieved April 19, 2007.
- Gordon, Kurtiss J. "History of our Understanding of a Spiral Galaxy: Messier 33". Caltech.edu. Retrieved 11 June 2018.
- See text quoted from Wright's An original theory or new hypothesis of the Universe in Dyson, F. (1979). Disturbing the Universe. Pan Books. p. 245. ISBN 978-0-330-26324-5.
- "Parsonstown | The genius of the Parsons family | William Rosse". parsonstown.info.
- Slipher, V. M. (1913). "The radial velocity of the Andromeda Nebula". Lowell Observatory Bulletin. 1: 56–57. Bibcode:1913LowOB...2...56S.
- Slipher, V. M. (1915). "Spectrographic Observations of Nebulae". Popular Astronomy. Vol. 23. pp. 21–24. Bibcode:1915PA.....23...21S.
- Curtis, H. D. (1988). "Novae in Spiral Nebulae and the Island Universe Theory". Publications of the Astronomical Society of the Pacific. 100: 6. Bibcode:1988PASP..100....6C. doi:10.1086/132128.
- Weaver, H. F. "Robert Julius Trumpler". US National Academy of Sciences. Retrieved January 5, 2007.
- Öpik, E. (1922). "An estimate of the distance of the Andromeda Nebula". The Astrophysical Journal. 55: 406. Bibcode:1922ApJ....55..406O. doi:10.1086/142680.
- Hubble, E. P. (1929). "A spiral nebula as a stellar system, Messier 31". The Astrophysical Journal. 69: 103–158. Bibcode:1929ApJ....69..103H. doi:10.1086/143167.
- Sandage, A. (1989). "Edwin Hubble, 1889–1953". Journal of the Royal Astronomical Society of Canada. 83 (6): 351–362. Bibcode:1989JRASC..83..351S. Retrieved January 8, 2007.
- Tenn, J. "Hendrik Christoffel van de Hulst". Sonoma State University. Retrieved January 5, 2007.
- López-Corredoira, M.; et al. (2001). "Searching for the in-plane Galactic bar and ring in DENIS". Astronomy and Astrophysics. 373 (1): 139–152. arXiv:astro-ph/0104307. Bibcode:2001A&A...373..139L. doi:10.1051/0004-6361:20010560.
- Rubin, V. C. (1983). "Dark matter in spiral galaxies". Scientific American. Vol. 248 no. 6. pp. 96–106. Bibcode:1983SciAm.248f..96R. doi:10.1038/scientificamerican0683-96.
- Rubin, V. C. (2000). "One Hundred Years of Rotating Galaxies". Publications of the Astronomical Society of the Pacific. 112 (772): 747–750. Bibcode:2000PASP..112..747R. doi:10.1086/316573.
- "Observable Universe contains ten times more galaxies than previously thought". www.spacetelescope.org. Retrieved 17 October 2016.
- "Hubble Rules Out a Leading Explanation for Dark Matter". Hubble News Desk. October 17, 1994. Retrieved January 8, 2007.
- "How many galaxies are there?". NASA. November 27, 2002. Retrieved January 8, 2007.
- Kraan-Korteweg, R. C.; Juraszek, S. (2000). "Mapping the hidden Universe: The galaxy distribution in the Zone of Avoidance". Publications of the Astronomical Society of Australia. 17 (1): 6–12. arXiv:astro-ph/9910572. Bibcode:2000PASA...17....6K. doi:10.1071/AS00006.
- "Universe has two trillion galaxies, astronomers say". The Guardian. 13 October 2016. Retrieved 14 October 2016.
- "The Universe Has 10 Times More Galaxies Than Scientists Thought". space.com. 13 October 2016. Retrieved 14 October 2016.
- Barstow, M. A. (2005). "Elliptical Galaxies". Leicester University Physics Department. Archived from the original on 2012-07-29. Retrieved June 8, 2006.
- "Galaxies". Cornell University. October 20, 2005. Archived from the original on 2014-06-29. Retrieved August 10, 2006.
- "Galactic onion". www.spacetelescope.org. Retrieved 2015-05-11.
- Williams, M. J.; Bureau, M.; Cappellari, M. (2010). "Kinematic constraints on the stellar and dark matter content of spiral and S0 galaxies". Monthly Notices of the Royal Astronomical Society. 400 (4): 1665–1689. arXiv:0909.0680. Bibcode:2009MNRAS.400.1665W. doi:10.1111/j.1365-2966.2009.15582.x.
- Smith, G. (March 6, 2000). "Galaxies — The Spiral Nebulae". University of California, San Diego Center for Astrophysics & Space Sciences. Archived from the original on July 10, 2012. Retrieved November 30, 2006.
- Van den Bergh 1998, p. 17
- "Fat or flat: Getting galaxies into shape". phys.org. February 2014
- Bertin & Lin 1996, pp. 65–85
- Belkora 2003, p. 355
- Eskridge, P. B.; Frogel, J. A. (1999). "What is the True Fraction of Barred Spiral Galaxies?". Astrophysics and Space Science. 269/270: 427–430. Bibcode:1999Ap&SS.269..427E. doi:10.1023/A:1017025820201.
- Bournaud, F.; Combes, F. (2002). "Gas accretion on spiral galaxies: Bar formation and renewal". Astronomy and Astrophysics. 392 (1): 83–102. arXiv:astro-ph/0206273. Bibcode:2002A&A...392...83B. doi:10.1051/0004-6361:20020920.
- Knapen, J. H.; Perez-Ramirez, D.; Laine, S. (2002). "Circumnuclear regions in barred spiral galaxies — II. Relations to host galaxies". Monthly Notices of the Royal Astronomical Society. 337 (3): 808–828. arXiv:astro-ph/0207258. Bibcode:2002MNRAS.337..808K. doi:10.1046/j.1365-8711.2002.05840.x.
- Alard, C. (2001). "Another bar in the Bulge". Astronomy and Astrophysics Letters. 379 (2): L44–L47. arXiv:astro-ph/0110491. Bibcode:2001A&A...379L..44A. doi:10.1051/0004-6361:20011487.
- Sanders, R. (January 9, 2006). "Milky Way galaxy is warped and vibrating like a drum". UCBerkeley News. Retrieved May 24, 2006.
- Bell, G. R.; Levine, S. E. (1997). "Mass of the Milky Way and Dwarf Spheroidal Stream Membership". Bulletin of the American Astronomical Society. 29 (2): 1384. Bibcode:1997AAS...19110806B.
- "We Just Discovered a New Type of Colossal Galaxy". Futurism. 2016-03-21. Retrieved 2016-03-21.
- Ogle, Patrick M.; Lanz, Lauranne; Nader, Cyril; Helou, George (2016-01-01). "Superluminous Spiral Galaxies". The Astrophysical Journal. 817 (2): 109. arXiv:1511.00659. Bibcode:2016ApJ...817..109O. doi:10.3847/0004-637X/817/2/109. ISSN 0004-637X.
- Gerber, R. A.; Lamb, S. A.; Balsara, D. S. (1994). "Ring Galaxy Evolution as a Function of "Intruder" Mass". Bulletin of the American Astronomical Society. 26: 911. Bibcode:1994AAS...184.3204G.
- "ISO unveils the hidden rings of Andromeda" (Press release). European Space Agency. October 14, 1998. Archived from the original on August 28, 1999. Retrieved May 24, 2006.
- "Spitzer Reveals What Edwin Hubble Missed". Harvard-Smithsonian Center for Astrophysics. May 31, 2004. Archived from the original on 2006-09-07. Retrieved December 6, 2006.
- Barstow, M. A. (2005). "Irregular Galaxies". University of Leicester. Archived from the original on 2012-02-27. Retrieved December 5, 2006.
- Phillipps, S.; Drinkwater, M. J.; Gregg, M. D.; Jones, J. B. (2001). "Ultracompact Dwarf Galaxies in the Fornax Cluster". The Astrophysical Journal. 560 (1): 201–206. arXiv:astro-ph/0106377. Bibcode:2001ApJ...560..201P. doi:10.1086/322517.
- Groshong, K. (April 24, 2006). "Strange satellite galaxies revealed around Milky Way". New Scientist. Retrieved January 10, 2007.
- Schirber, M. (August 27, 2008). "No Slimming Down for Dwarf Galaxies". ScienceNOW. Retrieved August 27, 2008.
- "Galaxy Interactions". University of Maryland Department of Astronomy. Archived from the original on May 9, 2006. Retrieved December 19, 2006.
- "Interacting Galaxies". Swinburne University. Retrieved December 19, 2006.
- "Happy Sweet Sixteen, Hubble Telescope!". NASA. April 24, 2006. Retrieved August 10, 2006.
- "Starburst Galaxies". Harvard-Smithsonian Center for Astrophysics. August 29, 2006. Retrieved August 10, 2006.
- Kennicutt Jr., R. C.; et al. (2005). Demographics and Host Galaxies of Starbursts. Starbursts: From 30 Doradus to Lyman Break Galaxies. Springer. p. 187. Bibcode:2005ASSL..329..187K. doi:10.1007/1-4020-3539-X_33.
- Smith, G. (July 13, 2006). "Starbursts & Colliding Galaxies". University of California, San Diego Center for Astrophysics & Space Sciences. Archived from the original on July 7, 2012. Retrieved August 10, 2006.
- Keel, B. (September 2006). "Starburst Galaxies". University of Alabama. Retrieved December 11, 2006.
- Keel, W. C. (2000). "Introducing Active Galactic Nuclei". University of Alabama. Retrieved December 6, 2006.
- Lochner, J.; Gibb, M. "A Monster in the Middle". NASA. Retrieved December 20, 2006.
- Heckman, T. M. (1980). "An optical and radio survey of the nuclei of bright galaxies — Activity in normal galactic nuclei". Astronomy and Astrophysics. 87: 152–164. Bibcode:1980A&A....87..152H.
- Ho, L. C.; Filippenko, A. V.; Sargent, W. L. W. (1997). "A Search for "Dwarf" Seyfert Nuclei. V. Demographics of Nuclear Activity in Nearby Galaxies". The Astrophysical Journal. 487 (2): 568–578. arXiv:astro-ph/9704108. Bibcode:1997ApJ...487..568H. doi:10.1086/304638.
- Beck, Rainer (2007). "Galactic magnetic fields". Scholarpedia. 2. p. 2411. Bibcode:2007SchpJ...2.2411B. doi:10.4249/scholarpedia.2411. Retrieved 2015-11-05.
- "Construction Secrets of a Galactic Metropolis". www.eso.org. ESO Press Release. Retrieved October 15, 2014.
- "Protogalaxies". Harvard-Smithsonian Center for Astrophysics. November 18, 1999. Archived from the original on 2008-03-25. Retrieved January 10, 2007.
- Firmani, C.; Avila-Reese, V. (2003). "Physical processes behind the morphological Hubble sequence". Revista Mexicana de Astronomía y Astrofísica. 17: 107–120. arXiv:astro-ph/0303543. Bibcode:2003RMxAC..17..107F.
- McMahon, R. (2006). "Astronomy: Dawn after the dark age". Nature. 443 (7108): 151–2. Bibcode:2006Natur.443..151M. doi:10.1038/443151a. PMID 16971933.
- Wall, Mike (December 12, 2012). "Ancient Galaxy May Be Most Distant Ever Seen". Space.com. Retrieved December 12, 2012.
- "Cosmic Detectives". The European Space Agency (ESA). April 2, 2013. Retrieved April 15, 2013.
- "HubbleSite – NewsCenter – Astronomers Set a New Galaxy Distance Record (05/05/2015) – Introduction". hubblesite.org. Retrieved 2015-05-07.
- "This Galaxy Far, Far Away Is the Farthest One Yet Found". Retrieved 2015-05-07.
- "Astronomers unveil the farthest galaxy". Retrieved 2015-05-07.
- Overbye, Dennis (2015-05-05). "Astronomers Measure Distance to Farthest Galaxy Yet". The New York Times. ISSN 0362-4331. Retrieved 2015-05-07.
- Oesch, P. A.; van Dokkum, P. G.; Illingworth, G. D.; Bouwens, R. J.; Momcheva, I.; Holden, B.; Roberts-Borsani, G. W.; Smit, R.; Franx, M. (2015-02-18). "A Spectroscopic Redshift Measurement for a Luminous Lyman Break Galaxy at z=7.730 using Keck/MOSFIRE". The Astrophysical Journal. 804 (2): L30. arXiv:1502.05399. Bibcode:2015ApJ...804L..30O. doi:10.1088/2041-8205/804/2/L30.
- "Signatures of the Earliest Galaxies". Retrieved 15 September 2015.
- Eggen, O. J.; Lynden-Bell, D.; Sandage, A. R. (1962). "Evidence from the motions of old stars that the Galaxy collapsed". The Astrophysical Journal. 136: 748. Bibcode:1962ApJ...136..748E. doi:10.1086/147433.
- Searle, L.; Zinn, R. (1978). "Compositions of halo clusters and the formation of the galactic halo". The Astrophysical Journal. 225 (1): 357–379. Bibcode:1978ApJ...225..357S. doi:10.1086/156499.
- Heger, A.; Woosley, S. E. (2002). "The Nucleosynthetic Signature of Population III". The Astrophysical Journal. 567 (1): 532–543. arXiv:astro-ph/0107037. Bibcode:2002ApJ...567..532H. doi:10.1086/338487.
- Barkana, R.; Loeb, A. (2001). "In the beginning: the first sources of light and the reionization of the Universe" (PDF). Physics Reports (Submitted manuscript). 349 (2): 125–238. arXiv:astro-ph/0010468. Bibcode:2001PhR...349..125B. doi:10.1016/S0370-1573(01)00019-9.
- Sobral, David; Matthee, Jorryt; Darvish, Behnam; Schaerer, Daniel; Mobasher, Bahram; Röttgering, Huub J. A.; Santos, Sérgio; Hemmati, Shoubaneh (4 June 2015). "Evidence for POPIII-like Stellar Populations in the Most Luminous LYMAN-α Emitters at the Epoch of Re-ionisation: Spectroscopic Confirmation". The Astrophysical Journal. 808 (2): 139. arXiv:1504.01734. Bibcode:2015ApJ...808..139S. doi:10.1088/0004-637x/808/2/139.
- Overbye, Dennis (17 June 2015). "Traces of Earliest Stars That Enriched Cosmos Are Spied". The New York Times. Retrieved 17 June 2015.
- "Simulations Show How Growing Black Holes Regulate Galaxy Formation". Carnegie Mellon University. February 9, 2005. Archived from the original on June 4, 2012. Retrieved January 7, 2007.
- Massey, R. (April 21, 2007). "Caught in the act; forming galaxies captured in the young Universe". Royal Astronomical Society. Archived from the original on 2013-11-15. Retrieved April 20, 2007.
- Noguchi, M. (1999). "Early Evolution of Disk Galaxies: Formation of Bulges in Clumpy Young Galactic Disks". The Astrophysical Journal. 514 (1): 77–95. arXiv:astro-ph/9806355. Bibcode:1999ApJ...514...77N. doi:10.1086/306932.
- Baugh, C.; Frenk, C. (May 1999). "How are galaxies made?". PhysicsWeb. Archived from the original on 2007-04-26. Retrieved January 16, 2007.
- Gonzalez, G. (1998). The Stellar Metallicity — Planet Connection. Brown dwarfs and extrasolar planets: Proceedings of a workshop ... p. 431. Bibcode:1998ASPC..134..431G.
- Moskowitz, Clara (September 25, 2012). "Hubble Telescope Reveals Farthest View Into Universe Ever". Space.com. Retrieved September 26, 2012.
- Conselice, C. J. (February 2007). "The Universe's Invisible Hand". Scientific American. Vol. 296 no. 2. pp. 35–41. Bibcode:2007SciAm.296b..34C. doi:10.1038/scientificamerican0207-34.
- Ford, H.; et al. (April 30, 2002). "The Mice (NGC 4676): Colliding Galaxies With Tails of Stars and Gas". Hubble News Desk. Retrieved May 8, 2007.
- Struck, C. (1999). "Galaxy Collisions". Physics Reports. 321: 1–137. arXiv:astro-ph/9908269. Bibcode:1999PhR...321....1S. doi:10.1016/S0370-1573(99)00030-7.
- Wong, J. (April 14, 2000). "Astrophysicist maps out our own galaxy's end". University of Toronto. Archived from the original on January 8, 2007. Retrieved January 11, 2007.
- Panter, B.; Jimenez, R.; Heavens, A. F.; Charlot, S. (2007). "The star formation histories of galaxies in the Sloan Digital Sky Survey". Monthly Notices of the Royal Astronomical Society. 378 (4): 1550–1564. arXiv:astro-ph/0608531. Bibcode:2007MNRAS.378.1550P. doi:10.1111/j.1365-2966.2007.11909.x.
- Kennicutt Jr., R. C.; Tamblyn, P.; Congdon, C. E. (1994). "Past and future star formation in disk galaxies". The Astrophysical Journal. 435 (1): 22–36. Bibcode:1994ApJ...435...22K. doi:10.1086/174790.
- Knapp, G. R. (1999). Star Formation in Early Type Galaxies. Star Formation in Early Type Galaxies. 163. Astronomical Society of the Pacific. p. 119. arXiv:astro-ph/9808266. Bibcode:1999ASPC..163..119K. ISBN 978-1-886733-84-8. OCLC 41302839.
- Adams, Fred; Laughlin, Greg (July 13, 2006). "The Great Cosmic Battle". Astronomical Society of the Pacific. Retrieved January 16, 2007.
- "Cosmic 'Murder Mystery' Solved: Galaxies Are 'Strangled to Death'". Retrieved 2015-05-14.
- Pobojewski, S. (January 21, 1997). "Physics offers glimpse into the dark side of the Universe". University of Michigan. Retrieved January 13, 2007.
- McKee, M. (June 7, 2005). "Galactic loners produce more stars". New Scientist. Retrieved January 15, 2007.
- "Groups & Clusters of Galaxies". NASA/Chandra. Retrieved January 15, 2007.
- Ricker, P. "When Galaxy Clusters Collide". San Diego Supercomputer Center. Retrieved August 27, 2008.
- Dahlem, M. (November 24, 2006). "Optical and radio survey of Southern Compact Groups of galaxies". University of Birmingham Astrophysics and Space Research Group. Archived from the original on June 13, 2007. Retrieved January 15, 2007.
- Ponman, T. (February 25, 2005). "Galaxy Systems: Groups". University of Birmingham Astrophysics and Space Research Group. Archived from the original on 2009-02-15. Retrieved January 15, 2007.
- Girardi, M.; Giuricin, G. (2000). "The Observational Mass Function of Loose Galaxy Groups". The Astrophysical Journal. 540 (1): 45–56. arXiv:astro-ph/0004149. Bibcode:2000ApJ...540...45G. doi:10.1086/309314.
- "Hubble Pinpoints Furthest Protocluster of Galaxies Ever Seen". ESA/Hubble Press Release. Retrieved January 22, 2015.
- Dubinski, J. (1998). "The Origin of the Brightest Cluster Galaxies". The Astrophysical Journal. 502 (2): 141–149. arXiv:astro-ph/9709102. Bibcode:1998ApJ...502..141D. doi:10.1086/305901. Archived from the original on May 14, 2011. Retrieved January 16, 2007.
- Bahcall, N. A. (1988). "Large-scale structure in the Universe indicated by galaxy clusters". Annual Review of Astronomy and Astrophysics. 26 (1): 631–686. Bibcode:1988ARA&A..26..631B. doi:10.1146/annurev.aa.26.090188.003215.
- Mandolesi, N.; et al. (1986). "Large-scale homogeneity of the Universe measured by the microwave background". Letters to Nature. 319 (6056): 751–753. Bibcode:1986Natur.319..751M. doi:10.1038/319751a0.
- van den Bergh, S. (2000). "Updated Information on the Local Group". Publications of the Astronomical Society of the Pacific. 112 (770): 529–536. arXiv:astro-ph/0001040. Bibcode:2000PASP..112..529V. doi:10.1086/316548.
- Tully, R. B. (1982). "The Local Supercluster". The Astrophysical Journal. 257: 389–422. Bibcode:1982ApJ...257..389T. doi:10.1086/159999.
- "Near, Mid & Far Infrared". IPAC/NASA. Archived from the original on December 30, 2006. Retrieved January 2, 2007.
- "ATLASGAL Survey of Milky Way Completed". Retrieved 7 March 2016.
- "The Effects of Earth's Upper Atmosphere on Radio Signals". NASA. Retrieved August 10, 2006.
- "Giant Radio Telescope Imaging Could Make Dark Matter Visible". ScienceDaily. December 14, 2006. Retrieved January 2, 2007.
- "NASA Telescope Sees Black Hole Munch on a Star". NASA. December 5, 2006. Retrieved January 2, 2007.
- Dunn, R. "An Introduction to X-ray Astronomy". Institute of Astronomy X-Ray Group. Retrieved January 2, 2007.
- "Unveiling the Secret of a Virgo Dwarf Galaxy". ESO. May 3, 2000. Archived from the original on 2009-01-09. Retrieved January 3, 2007.
- Al-Biruni (2004). The Book of Instruction in the Elements of the Art of Astrology. R. Ramsay Wright (transl.). Kessinger Publishing. ISBN 978-0-7661-9307-9.
- Belkora, L. (2003). Minding the Heavens: the Story of our Discovery of the Milky Way. CRC Press. ISBN 978-0-7503-0730-7.
- Bertin, G.; Lin, C.-C. (1996). Spiral Structure in Galaxies: a Density Wave Theory. MIT Press. ISBN 978-0-262-02396-2.
- Binney, J.; Merrifield, M. (1998). Galactic Astronomy. Princeton University Press. ISBN 978-0-691-00402-0. OCLC 39108765.
- Dickinson, T. (2004). The Universe and Beyond (4th ed.). Firefly Books. ISBN 978-1-55297-901-3. OCLC 55596414.
- Heidarzadeh, T. (2008). A History of Physical Theories of Comets, from Aristotle to Whipple. Springer. ISBN 978-1-4020-8322-8.
- Mo, Houjun; van den Bosch, Frank; White, Simon (2010). Galaxy Formation and Evolution (1 ed.). Cambridge University Press. ISBN 978-0-521-85793-2.
- Kepple, G. R.; Sanner, G. W. (1998). The Night Sky Observer's Guide, Volume 1. Willmann-Bell. ISBN 978-0-943396-58-3.
- Merritt, D. (2013). Dynamics and Evolution of Galactic Nuclei. Princeton University Press. ISBN 978-1-4008-4612-2.
- Mohamed, M. (2000). Great Muslim Mathematicians. Penerbit UTM. ISBN 978-983-52-0157-8. OCLC 48759017.
- Paul, E. R. (1993). The Milky Way Galaxy and Statistical Cosmology, 1890–1924. Cambridge University Press. ISBN 978-0-521-35363-2.
- Sparke, L. S.; Gallagher III, J. S. (2000). Galaxies in the Universe: An Introduction. Cambridge University Press. ISBN 978-0-521-59740-1.
- Van den Bergh, S. (1998). Galaxy Morphology and Classification. Cambridge University Press. ISBN 978-0-521-62335-3.
- Waller, W. H.; Hodge, P. W. (2003). Galaxies and the Cosmic Frontier. Harvard University Press. ISBN 978-0-674-01079-6.
- NASA/IPAC Extragalactic Database (NED) (NED-Distances)
- Galaxies on In Our Time at the BBC
- An Atlas of The Universe
- Galaxies — Information and amateur observations
- The Oldest Galaxy Yet Found
- Galaxy classification project, harnessing the power of the internet and the human brain
- How many galaxies are in our Universe?
- The most beautiful galaxies on Astronoo
- 3-D Video (01:46) – Over a Million Galaxies of Billions of Stars each – BerkeleyLab/animated.