Oct 11, 2005 · The Theory Behind the Equation. ... (Einstein once joked that in relativity theory, he placed a clock at every point in the universe, each one running at a different rate, but in real life he didn ...
Einstein’s equation and the theory of relativity. Quite possibly the most famous equation in science comes from special relativity. The equation, E = mc², means “energy equals mass times the speed of light squared.” It shows that energy (E) and mass (m) are interchangeable; they are different forms of the same thing.
An important fact is that Newtonian gravity theory should be invariant under Euclidean motions. In modern language, it means that the theory is invariant under the group action on R3 preserving the standard Euclidean metric (dx1)² + (dx2)² + (dx3)². 1.1.2. Special Relativity. In the celebrated work of Einstein, he noticed that in order to resolve
David B. Malament, in Philosophy of Physics, 2007 2.4 Matter Fields. In classical relativity theory, one generally takes for granted that all that there is, and all that happens, can be described in terms of various matter fields, e.g. material fluids and electromagnetic fields. Each such field is represented by one or more smooth tensor (or spinor) fields on the spacetime manifold M.
Jul 25, 2014 · 5 General. Speed. More precisely, a very specific speed. Even more precisely, 299,792,458 m/s, the speed of light. This magic, mysterious speed is at the heart of relativity. It was Einstein’s pondering of this speed that led to some of the most amazing physics ideas ever. Relativity comes in two forms, Special and General.
E = mc2. One of the results of the theory of special relativity is Einstein's famous equation E = mc 2. In this formula E is energy, m is mass, and c is the constant speed of light. An interesting result of this equation is that energy and mass are related. Any change in an object's energy is also accompanied by a change in mass.
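As a minimal sketch (not part of the quoted source), the rest energy of a given mass can be computed directly from E = mc², using the exact SI value of c mentioned in this section; the 1 kg mass is just an example value:

```python
# Rest energy E = m * c^2 for an example mass of 1 kg.
c = 299_792_458          # speed of light in m/s (exact SI value)
m = 1.0                  # example mass in kg
E = m * c**2             # energy in joules
print(f"E = {E:.3e} J")  # about 8.988e16 J
```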
A Brief Outline of the Development of the Theory of Relativity. The entire development starts off from, and is dominated by, the idea of Faraday and Maxwell, according to which all physical ...
In his four papers, published in November 1915, Einstein laid the foundation of the theory. In the third in particular he used general relativity to explain the precession of the perihelion of Mercury. The point at which Mercury has its closest approach to the Sun, its perihelion, moves. This movement could not be explained by the gravitational ...
equation, ∇_a T^{ab} = 0: This is the equation of conservation of energy and momentum in the matter sources. In field theory language, coordinate invariance is a gauge group, and the conservation laws arise from the Bianchi identities as Noether identities. Derivatives like ∇ are defined so that in a freely-falling frame they are the derivatives of special ...
This elusive theory would resolve the apparent conflict between general relativity and quantum mechanics at certain scales, and to many it would be the crowning achievement of all of science. The first few chapters go over the history of great contributions to physics over the years; mentioning Pythagoras, Galileo, and of course Newton.
- Michio Kaku
Multiplication Strategies will help students in second and third grade memorize the facts. These strategies will also help students in fourth or fifth grade who are still struggling with this concept.
These multiplication strategies will help students understand the concept of multiplication before they begin memorizing the facts. They are the tools or strategies, students use on their way to memorizing the multiplication facts.
Four multiplication strategies are presented in this packet along with multiplication vocabulary.
Students will have a great time with this amusement park theme of multiplication activities.
Table of Contents:
p. 2 Multiplication Vocabulary, factors, product
p. 3 Multiplication is Groups of numbers
p. 4 Multiplication Commutative Property
p. 5 Draw a Picture sign
p. 6 Repeated Addition sign
p. 7 Skip Counting sign
p. 8 Make an Array sign
p. 9-12 practice page for each strategy
p. 13 Strategy Tickets, choose a strategy
p. 14 Strategy Tickets, blank page
p. 15-21 Multiplication Midway game
Common Core Standards 3.OA.1,2,3,4,5,6,7
Check out these math products from Crockett's Classroom:
Snow Much Fun with Time
Math Strategies for Everyday
Created by Debbie Crockett
Table of Contents
Volume is used to describe the contents of a 3-dimensional space. Volume is expressed in liters (L) in addition to the SI unit m³. The calculation of volume varies depending on the type of body.
In the case of a cylinder, the base is a circle, the area of which is calculated as:
A = π * r²
Based on the formula for the volume (V = A * h), the following equation is used for a cylinder:
V_cylinder = A * h = π * r² * h
V ⇒ Volume [m³]
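A minimal sketch (example values assumed, not from the source) of the cylinder-volume formula above:

```python
import math

def cylinder_volume(r, h):
    """Volume of a cylinder: V = A * h = pi * r^2 * h (r and h in metres, V in m^3)."""
    return math.pi * r**2 * h

# Example: r = 5 cm, h = 20 cm -> roughly 1.57e-3 m^3, i.e. about 1.57 L.
print(cylinder_volume(0.05, 0.2))
```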
Mass is a property of matter:
- Inertial mass is a measure of a body’s resistance to acceleration;
- Gravitational mass is a measure of how heavy a body is.
Mass of a body can be measured with a scale. In some cases, the mass can be calculated based on the principle of conservation of momentum. However, the calculation of mass varies from case to case.
m ⇒ mass [kg]
The particle count (or the number of particles) is the total number of particles within a system. It is directly proportional to the amount of substance. In macroscopic systems, in which the particles cannot be counted directly, the particle count can be calculated from the amount of substance using Avogadro’s constant:
N = n * NA
N ⇒ number of particles, no unit
NA ⇒ Avogadro’s constant [1/mole]
n ⇒ amount of substance [mole]
Amount of substance and molar mass
The amount of substance provides indirect information regarding the particle count of a system. It is stated in the unit ‘mole.’ The molar mass is required to establish the masses of a substance during an experiment. The atomic mass in a reaction can be found in the periodic table of the elements.
The amounts required during a reaction can be calculated with the following formula:
m = n * M
n ⇒ amount of substance [mole]
M ⇒ molar mass [g/mole]
Example: Magnesium oxide is generated by the reaction between magnesium and oxygen (the ‘burning of magnesium’).
2 Mg + O2 → 2 MgO
According to the reaction equation, two moles of magnesium and one mole of oxygen are required to form 2 moles of magnesium oxide. The molar masses of the substances involved are:
Magnesium: 24.31 g/mol; atomic oxygen: 15.9994 g/mol, so molecular oxygen (O2): 31.9988 g/mol.
Incorporating the values of molar mass in the above equation, m = n * M, the following results are obtained:
Magnesium: m = 2 mole * 24.31 g/mol = 48.62 g
Oxygen: m = 1 mole * 31.9988 g/mol = 31.9988 g
Thus, for this reaction, you require 48.62 g of magnesium and 31.9988 g of oxygen (O2).
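A minimal sketch of the calculation above (m = n * M), with the O2 molar mass taken as twice the atomic value; the numbers are the ones quoted in the example:

```python
# Masses needed for 2 Mg + O2 -> 2 MgO, using m = n * M.
M_MG = 24.31        # molar mass of magnesium in g/mol
M_O2 = 2 * 15.9994  # molar mass of molecular oxygen in g/mol (31.9988)

n_mg, n_o2 = 2, 1   # moles required by the reaction equation
print(n_mg * M_MG)  # 48.62 g of magnesium
print(n_o2 * M_O2)  # 31.9988 g of oxygen (O2)
```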
Under constant external conditions, the mass of a body is directly proportional to its volume. At a given temperature and constant pressure, the quotient of mass and volume is characteristic of a specific substance. It is called density:
ρ = m / V
ρ ⇒ density (mass density) [kg/m³]
The density of solid, liquid, and gaseous substances depends on the temperature. The density of gaseous bodies also depends on the pressure.
The particle density is defined as the quotient of particle count and volume. It indicates how many particles of a substance are contained in a given volume:
Ci = Ni/V
Ci ⇒ particle density [particles/cm³]
Ni ⇒ particle count of a specific substance
The specific volume is defined as the reciprocal of the density, i.e., the volume per unit of mass, as shown in the following equation:
v = 1/ρ = V/m
v ⇒ specific volume [m³/kg]
Among other things, it is used to create p-V diagrams in thermodynamics, which describe changes in volume and pressure in a system.
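A minimal sketch (example values assumed) relating mass, volume, density, and specific volume as defined above:

```python
# Density rho = m / V and specific volume v = 1 / rho = V / m.
m = 0.25      # mass in kg (example value)
V = 2.5e-4    # volume in m^3 (example value)
rho = m / V   # density in kg/m^3 -> 1000.0, roughly that of water
v = 1 / rho   # specific volume in m^3/kg -> 0.001
print(rho, v)
```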
Specific heat capacity
The increase in the temperature of a body causes an increase in the kinetic energy of its smallest particles. Heating entails energy input, and cooling involves energy extraction. The heat absorbed by a body is proportional to the mass and to the temperature change of the body. The constant of proportionality is called the specific heat capacity. It is a material constant.
Example: The specific heat capacity of water is cH2O = 4.19 kJ/(kg*K), i.e., an energy of 4.19 kJ is necessary to increase the temperature of 1 kg of water by 1 K.
The heat absorbed or released is calculated as:
ΔQ = c * m * ΔT
c ⇒ specific heat capacity [J/(kg*K)]
ΔQ ⇒ heat output/intake [J]
ΔT ⇒ temperature increase/decrease [K]
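A minimal sketch (example values assumed) of ΔQ = c * m * ΔT, using the water value quoted above:

```python
# Heat needed to warm a given mass of water by a given temperature difference.
c_water = 4190.0   # specific heat capacity of water in J/(kg*K), i.e. 4.19 kJ/(kg*K)
m = 2.0            # mass of water in kg (example value)
dT = 30.0          # temperature rise in K (example value)
dQ = c_water * m * dT
print(dQ)          # 251400.0 J, i.e. about 251 kJ
```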
In nature (including within the human body), matter is rarely encountered as a pure substance. Substances bind together and, depending on the bond, they exhibit different features. Specific combinations of substances, such as cholesterol or calcium phosphate deposits, increase the resistance to blood flow, resulting in arteriosclerosis.
However, substance mixtures may also be useful, for instance, when iron is stored in the liver in the form of ferritin crystals and released as needed.
The mole fraction is the ratio of the amount of substance of one component to the total amount of substance in a gas or liquid mixture.
Example: A mole of air consists of roughly 80% nitrogen and 20% oxygen. Therefore, the mole fractions are 0.2 for oxygen and 0.8 for nitrogen.
A mass fraction is the proportion of a dissolved substance relative to the entire mass of the solution.
States of matter
The states of matter define the physical states of a substance depending on temperature and pressure. The 3 different states include:
- Solid, containing a fixed alignment and bond between the atoms.
- Liquid, comprising mobile and unorganized atoms.
- Gas, with almost no bond between the atoms.
Flow of liquids and gases
Blood pressure is a measure of the dynamic pressure in the blood. If this pressure is too high, it can be reduced by increasing the speed of flow, i.e., either by increasing the volume flow or lowering the flow resistance.
The expiratory air must pass through a narrowed portion, the glottis, between the vocal cords. Rheology is the study of deformation and non-Newtonian flow of liquids as well as the plastic flow of solids. It can be used to describe how high the flow resistance or the volume flow must be to open or close the glottis, and how these quantities are related to one another via the Bernoulli equation.
The flow of ideal liquids and gases is illustrated with streamlines. The streamlines are thicker at higher flow speeds.
- Laminar flow: In laminar flow, the streamlines remain intact and continue onward, even past obstructions or constrictions.
- Turbulent flow: Turbulent flow is characterized by streamlines breaking up into eddies at obstructions or constrictions.
Flow resistance and Hagen-Poiseuille’s law
If a body is placed in a flowing liquid, it is subjected to a force that, in many cases, turns out to be proportional to the density of the fluid, to the square of the flow speed, and to the cross-sectional area.
Newton’s third law states that every force is counteracted by an equal and opposite force (action-reaction principle). This counteracting force is described as the resistance of a body and depends on viscosity, internal friction, and obstructions in the flow.
The flow resistance is calculated using the Hagen-Poiseuille equation:
RS = 8π * η * Δl / A²
η ⇒ viscosity of the fluid [(N*s) / m²]
Δl ⇒ length of the flow / length of the pipe [m]
A ⇒ cross-section [m²]
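A minimal sketch (example values assumed) of the flow-resistance formula above for a cylindrical pipe, where the cross-section is A = π * r²:

```python
import math

# Hagen-Poiseuille flow resistance R_S = 8 * pi * eta * dl / A^2.
eta = 1.0e-3            # viscosity in (N*s)/m^2, roughly that of water
dl = 0.1                # pipe length in m (example value)
r = 1.0e-3              # pipe radius in m (example value)
A = math.pi * r**2      # cross-sectional area in m^2
R_S = 8 * math.pi * eta * dl / A**2
print(R_S)              # flow resistance in Pa*s/m^3, about 2.55e8
```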
Volume flow rate
The volume flow rate states how much volume per unit of time flows through a cross-section. It is defined by the following equation:
V = (A * Δl) / Δt = A * v
V ⇒ volume flow rate [m³/s]
v ⇒ flow speed [m/s]
During laminar flow, the number of streamlines remains constant. If the cross-section of the pipe is smaller, the streamlines are condensed, i.e., the flow speed is increased.
The smaller the cross-sectional area, the greater the flow speed as the volume flow rate remains constant.
V1 = V2, i.e., A1 * v1 = A2 * v2
The flow speeds are thus inversely correlated with the pipe cross-section.
When a liquid with a specific density flows horizontally through a pipe with changing cross-section, and provided that friction is insignificant, the total pressure remains constant in all parts of the pipe:
p1 + 0.5ρ * v1² = p2 + 0.5ρ * v2²
In slanted pipes, the sum of static pressure, hydrostatic pressure, and dynamic pressure is constant at every point on a streamline.
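A minimal sketch (example values assumed) combining the continuity condition and the Bernoulli equation for the horizontal case above:

```python
# A pipe narrows from A1 to A2: the flow speeds up and the static pressure drops.
rho = 1000.0              # density of water in kg/m^3
A1, A2 = 4.0e-4, 1.0e-4   # cross-sections in m^2 (example values)
v1 = 0.5                  # flow speed in the wide section in m/s (example value)
p1 = 101_325.0            # static pressure in the wide section in Pa

v2 = v1 * A1 / A2                      # continuity: A1 * v1 = A2 * v2
p2 = p1 + 0.5 * rho * (v1**2 - v2**2)  # Bernoulli: p1 + 0.5*rho*v1^2 = p2 + 0.5*rho*v2^2
print(v2, p2)                          # 2.0 m/s and about 99450 Pa
```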
Ohm’s law and real fluids
In contrast to ideal fluids, real fluids undergo a loss in pressure through internal friction or viscosity. Friction always induces a loss in kinetic energy, resulting in the fluid adhering to the walls of the pipe.
Therefore, a fluid also flows more slowly at the edges. In comparison, the flow speed in the middle of the flow is greater. The graphic course of such a fluid resembles a parabola:
The zenith of the parabolic course of the flow speed is in the center. Fluids that exhibit such a graphic course during flow in cylindrical pipes are called Newtonian fluids. Ohm’s law applies to such fluids:
V = Δp / RS
A linear relationship exists between the volume flow rate and the pressure difference.
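A minimal sketch (example values assumed) of this linear relationship, reusing a flow resistance of the size computed in the Hagen-Poiseuille sketch above:

```python
# Volume flow rate from the pressure difference: V = dp / R_S.
dp = 1000.0      # pressure difference along the pipe in Pa (example value)
R_S = 2.55e8     # flow resistance in Pa*s/m^3 (example value)
V_flow = dp / R_S
print(V_flow)    # about 3.9e-6 m^3/s, i.e. roughly 3.9 mL per second
```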
Series circuit of flow resistance
As described above, the pressure in real fluids drops along the length of the pipe; the pressure drop also depends on the cross-section of the pipe. The flow resistance behaves like resistance in an electrical circuit and can be combined in the same way.
Pipes through which fluids flow can be connected together:
- Consecutively, i.e., in a row (in series), in which case the individual resistances add together.
- Branching, in which the total flow is conserved at each branch node, i.e., the total incoming liquid is equal to the total outgoing liquid (see the sketch below).
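A minimal sketch of combining flow resistances; the series rule follows the text above, while the parallel (branching) rule of adding reciprocals is the standard electrical analogue and is assumed here rather than stated in the source:

```python
# Combining flow resistances like electrical resistors.
def series(resistances):
    """Pipes connected one after another: the resistances add."""
    return sum(resistances)

def parallel(resistances):
    """Branching pipes (assumed analogue): the reciprocals of the resistances add."""
    return 1.0 / sum(1.0 / R for R in resistances)

print(series([1.0e8, 2.0e8]))    # 3.0e8
print(parallel([1.0e8, 2.0e8]))  # about 6.7e7
```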
Solid Geometry. Student Expectations. 7th Grade: 7.3.6C Use properties to classify three-dimensional figures, including pyramids, cones, prisms, and cylinders. Solid Geometry is the geometry of three-dimensional space
7.3.6C Use properties to classify three-dimensional figures, including pyramids, cones, prisms, and cylinders.
Solid Geometry is the geometry of three-dimensional space
It is called three-dimensional, or 3D because there are three dimensions: width, depth and height.
Polyhedron: A geometric object with flat faces and straight edges. A polyhedron is a three-dimensional figure made up of sides called faces, each face being a polygon.
Base: The lowest part; the surface that a solid object stands on, or the bottom line of a shape such as a triangle or rectangle.
Prism: A solid object that has two identical ends and all flat sides.
The cross section is the same all along its length.
The shape of the ends give the prism a name, such as "triangular prism“
It is a polyhedron.
Pyramid: A solid object where: * The base is a polygon (a straight-sided shape) * The sides are triangles which meet at the top (the apex). It is a polyhedron.
A cylinder is a solid object with:* two identical flat circular or elliptical ends* and one curved side.
Cone: A (3-dimensional) object that has a circular base and one vertex
Jump, Loop and Call Instructions
After you have understood the tutorial on Introduction to assembly language, which covers simple instruction sets like input/output operations, it’s time to learn how to create loops, function calls and jumps while writing code in assembly language.
Let us first discuss an important concept that relates RAM & ROM. You can skip this section if you wish, but it is important for understanding registers, which helps in building up an understanding of the architecture of microcontrollers.
How a program which is burned in ROM gets executed in RAM?
The program you write is burned in ROM and this program is executed in RAM. The data burned is in the form of logics or binary digits (0 & 1) also called an Opcode or machine code. The program written in assembly language is converted to opcode by assembler.
Each line of the assembly code is assigned a unique opcode by the assembler as shown below. These opcodes are then stored in ROM one after another. There is a register called the Program Counter (PC) which always points to the current opcode being executed. When the power is switched on it sets itself to zero, and it keeps on incrementing as opcodes are executed one after another. This information is used by the processor to fetch the correct opcode from ROM, which is then executed. This process goes on line by line till the whole program is executed.
Let’s take an example:-
PC Mnemonic, Operand Opcode (Machine code)
0000 ORG 0H
0000 MOV R0, #0 7800
0002 MOV A, #55H 7455
0004 JZ NEXT 6003
0006 INC R0 08
0007 AGAIN: INC A 04
0008 INC A 04
0009 NEXT: ADD A, #77H 2477
000B JNC OVER 5005
000D CLR A E4
000E MOV R0, A F8
000F MOV R1, A F9
0010 MOV R2, A FA
0011 MOV R3, A FB
0012 OVER: ADD A, R3 2B
0013 JNC AGAIN 50F2
0015 HERE: SJMP HERE 80FE
We can clearly see that the PC increases with execution of program. The PC starts with zero when ‘ORG 00H’ line is executed and then in subsequent lines PC keeps on incrementing as machine codes are executed one after another.
The Program Counter is a 2-byte or 16-bit register. Therefore, we cannot have internal ROM larger than this register can address (i.e. addresses not exceeding the FFFF hex value, or 64 KB).
LOOP AND JUMP INSTRUCTIONS
Let us start with a simple example that will help you to learn how to create loops in assembly. In the following code the instruction DJNZ is used to reduce the counter and is repeated till the counter becomes zero.
MOV A, #0 ; clear A
MOV R1, #10 ; load counter R1 =10
AGAIN: ADD A, # 05 ; add five to register A
DJNZ R1, AGAIN ; repeat until R1=0 (10 times)
MOV R3, A ; save A in R3
In this code R1 acts as a counter. The counter value is initialized, i.e., 10 is loaded into R1. In each iteration, the instruction DJNZ decrements R1 by one until it becomes zero. This loop adds 5 to register A every time it runs, so after ten iterations R1 becomes zero, A holds 50 (32H), and the instructions below the loop are executed.
Note: Some jump instructions test only the special register A or the carry bit CY; for example, JZ and JNZ test whether A is zero, while JC and JNC test the carry flag.
MOV A, #55H ; A= 55 hex
MOV R1, #100 ; the outer counter R1 =100
NEXT: MOV R2, # 20 ; the inner counter
AGAIN: CPL A ; complement (toggle) register A
DJNZ R2, AGAIN ; repeat until R2=0 (20 times, inner loop)
DJNZ R1, NEXT ; repeat until R1=0 (100 times, outer loop)
SJMP refers to short jump and LJMP refers to long jump. All the conditional jumps are short jumps.
SJMP: This instruction is of two bytes, in which the first is the opcode and the second is the address. The relative address of the target should be within -128 to +127 bytes of the current program counter (PC).
LJMP: This instruction is of three bytes, in which the first is the opcode and the second & third hold a 16-bit address. The target of the instruction can be anywhere in the 64 KB ROM address space.
So it is clear from the above examples that we can use different jump instructions together with a condition or a counter; this is called a conditional loop. And when we create a loop inside an existing loop, it is called a nested loop.
LCALL (long call)
BACK : MOV A, #55H ; load A= 55 hex value
MOV P1, A ; issue value of register A to port1
LCALL DELAY ; to call DELAY function created below
MOV A, #0AAH ;load AAH hex value to A
MOV P1,A ;issue value of register A to port 1
LCALL DELAY ; to call DELAY function as created below
SJMP BACK ; keep doing this
; ________ this is the delay subroutine
DELAY: MOV R5, #0FFH ; R5 = 0FFH = 255, the counter
AGAIN: DJNZ R5, AGAIN ; stay here until R5 becomes zero
RET ; return to caller
In this code we keep on toggling the value of the port 1 register between two different hex values, and a DELAY subroutine is used to control how fast the value changes. In the DELAY subroutine the program is kept busy by running an idle loop that counts down 255 times. After the DELAY subroutine returns, the value of port 1 is toggled again, and this process goes on indefinitely.
By using DELAY we can create PWM (Pulse Width Modulation) to control motors or LED blinking for further details view our tutorial on Input/ output instructions in Assembly Language[coming soon].
We can also use ACALL, i.e. absolute call, for calling a subroutine that lies within the same 2K-byte block of program memory as the PC.
Fact 1: You can’t directly see a black hole.
Black holes are black, but not in that they are coloured black, but because no light can escape from them. This means that it is impossible to use any kind of equipment to detect the black hole itself. Instead scientists must observe the effects of the black hole to determine where one is. For example, if a star is too close to a black hole, it will be ripped apart, sending out bright x-ray radiation which scientists will then be able to detect.
Fact 2: Look out! Our Milky Way likely has a black hole.
But don’t worry we are too far away from the centre of the galaxy to be drawn into it. However, the European Space Agency has been able to note its effects, and have estimated that it is 4 Million times more massive than our Sun!
Fact 3: Dying stars create stellar black holes.
Smaller stars like our Sun will turn into white dwarfs when they die, but stars that are around 20 times bigger than the Sun will explode into a supernova. This occurs when the star’s internal pressure can no longer withstand the gravitational pull of the core; the outer layers are flung out and the core collapses in on itself. It then collapses further into a singularity, a point of infinite density, which creates the black hole.
Fact 4: Black holes come in a range of sizes.
Black holes are not a one size fits all kind of deal. There are, in fact, three different sizes, all the way from teeny-weeny to those that fill the centres of galaxies. Primordial black holes are the smallest, these range in size from one atom’s width to the mass of a mountain. Stellar Black holes are the most common and are around 20 times bigger than our Sun. The ones that fill galaxies are called super-massive black holes. They are more than a million times bigger than our sun.
Fact 5: Weird time stuff happens around black holes.
The gravitational pull of the black hole affects time in the area within and surrounding it. This is because of Einstein’s theory of relativity, which states that time is affected by speed and gravity. This means that if you were falling into a black hole, your time would appear to run slower to someone watching from outside, but to you it would feel like normal speed and everything outside would appear to speed up.
Fact 6: The first black hole wasn’t discovered until X-ray astronomy was used.
The first black hole to be discovered was called Cygnus X-1. It was found during a sounding rocket flight in the 1960s, but it was not identified as a black hole until about a decade later!
Fact 7: The nearest black hole is likely not 1,600 light-years away.
A few years ago, a report was published saying that the closest black hole to Earth was only 1,600 light-years away; this error has since been addressed and the distance recalculated, showing that it is actually more like 20,000 light-years away.
Fact 8: We aren’t sure if wormholes exist.
Although they are very popular in science fiction, it is unclear whether wormholes actually exist. This does not mean that they do not exist, or that, one day, light-speed travel will not be possible; just that there is far too much physics that is as yet unexplained.
Fact 9: Black holes are only dangerous if you get too close.
Black holes are really only dangerous if you are inside them. To be precise, if you are beyond the event horizon. This is the point of no return inside a black hole. This means that it is very unlikely for a black hole to swallow up the universe. If you were to start going into a black hole, some scientists believe that you would stretch out into a long thin line in a process called spaghettification.
Fact 10: Black holes are used all the time in science fiction.
It is impossible to list all of the films and tv shows that mention them! Here are a few honourable mentions though: Interstellar, Horizon, Star Trek, Battlestar Galactica, Stargate and so many more!
A telomere is a region of repetitive nucleotide sequences at each end of a chromosome, which protects the end of the chromosome from deterioration or from fusion with neighboring chromosomes. Its name is derived from the Greek nouns telos (τέλος) "end" and merοs (μέρος, root: μερ-) "part". For vertebrates, the sequence of nucleotides in telomeres is TTAGGG, with the complementary DNA strand being AATCCC, with a single-stranded TTAGGG overhang. This sequence of TTAGGG is repeated approximately 2,500 times in humans. In humans, average telomere length declines from about 11 kilobases at birth to less than 4 kilobases in old age, with average rate of decline being greater in men than in women.
During chromosome replication, the enzymes that duplicate DNA cannot continue their duplication all the way to the end of a chromosome, so in each duplication the end of the chromosome is shortened (this is because the synthesis of Okazaki fragments requires RNA primers attaching ahead on the lagging strand). The telomeres are disposable buffers at the ends of chromosomes which are truncated during cell division; their presence protects the genes before them on the chromosome from being truncated instead. The telomeres themselves are protected by a complex of shelterin proteins, as well as by the RNA that telomeric DNA encodes (TERRA).
- 1 Discovery
- 2 Nature and function
- 3 Shortening
- 4 Lengthening
- 5 Sequences
- 6 Cancer
- 7 Measurement
- 8 See also
- 9 References
- 10 Further reading
- 11 External links
It was in 1933 when Barbara McClintock, a distinguished American cytogeneticist and the first woman to receive an unshared Nobel Prize in Physiology or Medicine, observed that the chromosomes lacking end parts became “sticky” and hypothesised the existence of a special structure at the chromosome tip that would maintain chromosome stability. Similar observations were reported by Hermann Muller, who coined the term 'telomere'.
In the early 1970s, Russian theorist Alexei Olovnikov first recognized that chromosomes could not completely replicate their ends. Building on this, and to accommodate Leonard Hayflick's idea of limited somatic cell division, Olovnikov suggested that DNA sequences are lost every time a cell/DNA replicates until the loss reaches a critical level, at which point cell division ends. However, Olovnikov's prediction was not widely known except by a handful of researchers studying cellular aging and immortalization.
In 1975–1977, Elizabeth Blackburn, working as a postdoctoral fellow at Yale University with Joseph Gall, discovered the unusual nature of telomeres, with their simple repeated DNA sequences composing chromosome ends. Blackburn, Carol Greider, and Jack Szostak were awarded the 2009 Nobel Prize in Physiology or Medicine for the discovery of how chromosomes are protected by telomeres and the enzyme telomerase.
Nevertheless, in the 1970s there was no recognition that the telomere-shortening mechanism normally limits cells to a fixed number of divisions, nor was there any animal study suggesting that this could be responsible for aging on the cellular level. There was also no recognition that the mechanism set a limit on lifespans.
It remained for a privately funded collaboration from biotechnology company Geron to isolate the genes for the RNA and protein component of human telomerase in order to establish the role of telomere shortening in cellular aging and telomerase reactivation in cell immortalization.
Nature and function
Structure, function and evolutionary biology
A protein complex known as shelterin serves to protect the ends of telomeres from being recognised as double-strand breaks by inhibiting homologous recombination (HR) and non-homologous end joining (NHEJ).
In most prokaryotes, chromosomes are circular and, thus, do not have ends to suffer premature replication termination. A small fraction of bacterial chromosomes (such as those in Streptomyces, Agrobacterium, and Borrelia) are linear and possess telomeres, which are very different from those of the eukaryotic chromosomes in structure and functions. The known structures of bacterial telomeres take the form of proteins bound to the ends of linear chromosomes, or hairpin loops of single-stranded DNA at the ends of the linear chromosomes.
While replicating DNA, the eukaryotic DNA replication enzymes (the DNA polymerase protein complex) cannot replicate the sequences present at the ends of the chromosomes (or more precisely the chromatid fibres). Hence, these sequences and the information they carry may get lost. This is the reason telomeres are so important in the context of successful cell division: They "cap" the end-sequences and themselves get lost in the process of DNA replication. But the cell has an enzyme called telomerase, which carries out the task of adding repetitive nucleotide sequences to the ends of the DNA. Telomerase, thus, "replenishes" the telomere "cap" of the DNA. In most multicellular eukaryotic organisms, telomerase is active only in germ cells, some types of stem cells such as embryonic stem cells, and certain white blood cells. Telomerase can be reactivated and telomeres reset back to an embryonic state by somatic cell nuclear transfer. There are theories that claim that the steady shortening of telomeres with each replication in somatic (body) cells may have a role in senescence and in the prevention of cancer. This is because the telomeres act as a sort of time-delay "fuse", eventually running out after a certain number of cell divisions and resulting in the eventual loss of vital genetic information from the cell's chromosome with future divisions.
Telomere length varies greatly between species, from approximately 300 base pairs in yeast to many kilobases in humans, and usually is composed of arrays of guanine-rich, six- to eight-base-pair-long repeats. Eukaryotic telomeres normally terminate with 3′ single-stranded-DNA overhang, which is essential for telomere maintenance and capping. Multiple proteins binding single- and double-stranded telomere DNA have been identified. These function in both telomere maintenance and capping. Telomeres form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle, stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA, and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop.
Telomere shortening in humans can induce replicative senescence, which blocks cell division. This mechanism appears to prevent genomic instability and development of cancer in human aged cells by limiting the number of cell divisions. However, shortened telomeres impair immune function that might also increase cancer susceptibility. If telomeres become too short, they have the potential to unfold from their presumed closed structure. The cell may detect this uncapping as DNA damage and then either stop growing, enter cellular old age (senescence), or begin programmed cell self-destruction (apoptosis) depending on the cell's genetic background (p53 status). Uncapped telomeres also result in chromosomal fusions. Since this damage cannot be repaired in normal somatic cells, the cell may even go into apoptosis. Many aging-related diseases are linked to shortened telomeres. Organs deteriorate as more and more of their cells die off or enter cellular senescence.
At the very distal end of the telomere is a 300 bp single-stranded portion, which forms the T-Loop. This loop is analogous to a knot, which stabilizes the telomere, preventing the telomere ends from being recognized as break points by the DNA repair machinery. Should non-homologous end joining occur at the telomeric ends, chromosomal fusion will result. The T-loop is held together by several proteins, the most notable ones being TRF1, TRF2, POT1, TIN1, and TIN2, collectively referred to as the shelterin complex. In humans, the shelterin complex consists of six proteins identified as TRF1, TRF2, TIN2, POT1, TPP1, and RAP1.
Telomeres shorten in part because of the end replication problem that is exhibited during DNA replication in eukaryotes only. Because DNA replication does not begin at either end of the DNA strand, but starts in the center, and considering that all known DNA polymerases move in the 5' to 3' direction, one finds a leading and a lagging strand on the DNA molecule being replicated.
On the leading strand, DNA polymerase can make a complementary DNA strand without any difficulty because it goes from 5' to 3'. However, there is a problem going in the other direction on the lagging strand. To counter this, short sequences of RNA acting as primers attach to the lagging strand a short distance ahead of where the initiation site was. The DNA polymerase can start replication at that point and go to the end of the initiation site. This causes the formation of Okazaki fragments. More RNA primers attach further on the DNA strand and DNA polymerase comes along and continues to make a new DNA strand.
Eventually, the last RNA primer attaches, and DNA polymerase, RNA nuclease, and DNA ligase come along to convert the RNA (of the primers) to DNA and to seal the gaps in between the Okazaki fragments. But, in order to change RNA to DNA, there must be another DNA strand in front of the RNA primer. This happens at all the sites of the lagging strand, but it does not happen at the end where the last RNA primer is attached. Ultimately, that RNA is destroyed by enzymes that degrade any RNA left on the DNA. Thus, a section of the telomere is lost during each cycle of replication at the 5' end of the lagging strand's daughter.
However, test-tube studies have shown that telomeres are highly susceptible to oxidative stress. There is evidence that oxidative stress-mediated DNA damage is an important determinant of telomere shortening. Telomere shortening due to free radicals explains the difference between the estimated loss per division because of the end-replication problem (c. 20 bp) and actual telomere shortening rates (50–100 bp), and has a greater absolute impact on telomere length than shortening caused by the end-replication problem. Population-based studies have also indicated an interaction between anti-oxidant intake and telomere length. In the Long Island Breast Cancer Study Project (LIBCSP), authors found a moderate increase in breast cancer risk among women with the shortest telomeres and lower dietary intake of beta carotene, vitamin C or E. These results suggest that cancer risk due to telomere shortening may interact with other mechanisms of DNA damage, specifically oxidative stress.
Telomere shortening is associated with aging, mortality and aging-related diseases. In 2003, Richard Cawthon discovered that those with longer telomeres lead longer lives than those with short telomeres. However, it is not known whether short telomeres are just a sign of cellular age or actually contribute to the aging process.
The phenomenon of limited cellular division was first observed by Leonard Hayflick, and is now referred to as the Hayflick limit. Significant discoveries were subsequently made by a group of scientists organized at Geron Corporation by Geron's founder Michael D. West that tied telomere shortening with the Hayflick limit. The cloning of the catalytic component of telomerase enabled experiments to test whether the expression of telomerase at levels sufficient to prevent telomere shortening was capable of immortalizing human cells. Telomerase was demonstrated in a 1998 publication in Science to be capable of extending cell lifespan, and now is well-recognized as capable of immortalizing human somatic cells.
It is becoming apparent that reversing shortening of telomeres through temporary activation of telomerase may be a potent means to slow aging. The reason that this would extend human life is because it would extend the Hayflick limit. Three routes have been proposed to reverse telomere shortening: drugs, gene therapy, or metabolic suppression, so-called, torpor/hibernation. So far these ideas have not been proven in humans, but it has been demonstrated that telomere shortening is reversed in hibernation and aging is slowed (Turbill, et al. 2012 & 2013) and that hibernation prolongs life-span (Lyman et al. 1981). It has also been demonstrated that telomere extension has successfully reversed some signs of aging in laboratory mice and the nematode worm species Caenorhabditis elegans. It has been hypothesized that longer telomeres and especially telomerase activation might cause increased cancer (e.g. Weinstein and Ciszek, 2002). However, longer telomeres might also protect against cancer, because short telomeres are associated with cancer. It has also been suggested that longer telomeres might cause increased energy consumption.
Techniques to extend telomeres could be useful for tissue engineering, because they might permit healthy, noncancerous mammalian cells to be cultured in amounts large enough to be engineering materials for biomedical repairs.
Two recent studies on long-lived seabirds demonstrate that the role of telomeres is far from being understood. In 2003, scientists observed that the telomeres of Leach's storm-petrel (Oceanodroma leucorhoa) seem to lengthen with chronological age, the first observed instance of such behaviour of telomeres. In 2006, Juola et al. reported that in another unrelated, long-lived seabird species, the great frigatebird (Fregata minor), telomere length did decrease until at least c. 40 years of age (i.e. probably over the entire lifespan), but the speed of decrease slowed down massively with increasing ages, and that rates of telomere length decrease varied strongly between individual birds. They concluded that in this species (and probably in frigatebirds and their relatives in general), telomere length could not be used to determine a bird's age sufficiently well. Thus, it seems that there is much more variation in the behavior of telomere length than initially believed.
Furthermore, Gomes et al. found, in a study of the comparative biology of mammalian telomeres, that telomere length of different mammalian species correlates inversely, rather than directly, with lifespan, and they concluded that the contribution of telomere length to lifespan remains controversial. Harris et al. found little evidence that, in humans, telomere length is a significant biomarker of normal aging with respect to important cognitive and physical abilities. Gilley and Blackburn tested whether cellular senescence in paramecium is caused by telomere shortening, and found that telomeres were not shortened during senescence.
A 2013 pilot study from UCSF took 35 men with localized early-stage prostate cancer and had 10 of them begin "lifestyle changes that included: a plant-based diet (high in fruits, vegetables and unrefined grains, and low in fat and refined carbohydrates); moderate exercise (walking 30 minutes a day, six days a week); stress reduction (gentle yoga-based stretching, breathing, meditation)" and also "weekly group support". When compared to the other 25 study participants, "The group that made the lifestyle changes experienced a 'significant' increase in telomere length of approximately 10 percent. Further, the more people changed their behavior by adhering to the recommended lifestyle program, the more dramatic their improvements in telomere length." A 2014 study entitled "Stand up for health – avoiding sedentary behaviour might lengthen your telomeres: secondary outcomes from a physical activity RCT in older people" indicated somewhat contradictory results, stating, "In the intervention group, there was a negative correlation between changes in time spent exercising and changes in telomere length (rho=-0.39, p=0.07). On the other hand, in the intervention group, telomere lengthening was significantly associated with reduced sitting time (rho=-0.68, p=0.02).
| Group | Organism | Telomeric repeat (5' to 3' toward the end) |
|---|---|---|
| Vertebrates | Human, mouse, Xenopus | TTAGGG |
| Filamentous fungi | Neurospora crassa | TTAGGG |
| Slime moulds | Physarum, Didymium | TTAGGG |
| Kinetoplastid protozoa | Trypanosoma, Crithidia | TTAGGG |
| Ciliate protozoa | Tetrahymena, Glaucoma | TTGGGG |
| Ciliate protozoa | Oxytricha, Stylonychia, Euplotes | TTTTGGGG |
| Higher plants | Arabidopsis thaliana | TTTAGGG |
| Fission yeasts | Schizosaccharomyces pombe | TTAC(A)(C)G(1-8) |
| Budding yeasts | Saccharomyces cerevisiae | TGTGGGTGTGGTG (from RNA template) or G(2-3)(TG)(1-6)T (consensus) |
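A minimal sketch (hypothetical sequence, not from the source) of counting the vertebrate telomeric repeat listed above in a DNA string:

```python
# Count non-overlapping occurrences of the TTAGGG repeat in a sequence.
def count_telomeric_repeats(seq, repeat="TTAGGG"):
    return seq.upper().count(repeat)

# Hypothetical example: five intact repeats followed by one degenerate copy.
example = "TTAGGG" * 5 + "TTAGGC"
print(count_telomeric_repeats(example))  # 5
```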
Telomeres are critical for maintaining genomic integrity and studies show that telomere dysfunction or shortening is commonly acquired during the process of tumor development. Short telomeres can lead to genomic instability, chromosome loss and the formation of non-reciprocal translocations; and telomeres in tumor cells and their precursor lesions are significantly shorter than surrounding normal tissue.
Observational studies have found shortened telomeres in many cancers: including pancreatic, bone, prostate, bladder, lung, kidney, and head and neck. In addition, people with many types of cancer have been found to possess shorter leukocyte telomeres than healthy controls. Recent meta-analyses suggest 1.4 to 3.0 fold increased risk of cancer for those with the shortest vs. longest telomeres. However the increase in risk varies by age, sex, tumor type and differences in lifestyle factors.
Some of the same lifestyle factors which increase risk of developing cancer have also been associated with shortened telomeres, including stress, smoking, physical inactivity and a diet high in refined sugars. Diet and physical activity influence inflammation and oxidative stress. These factors are thought to influence telomere maintenance. Psychological stress has also been linked to accelerated cell aging, as reflected by decreased telomerase activity and short telomeres. It has been suggested that a combination of lifestyle modifications, including healthy diet, exercise and stress reduction, has the potential to increase telomere length, reverse cellular aging, and reduce the risk for aging-related diseases. In a recent clinical trial for early-stage prostate cancer patients, comprehensive lifestyle changes resulted in a short-term increase in telomerase activity and long-term modification in telomere length. Lifestyle modifications have the potential to naturally regulate telomere maintenance without promoting tumorigenesis, as traditional mechanisms of telomere lengthening involve the use of telomerase activating agents.
Cancer cells require a mechanism to maintain their telomeric DNA in order to continue dividing indefinitely (immortalization). A mechanism for telomere elongation or maintenance is one of the key steps in cellular immortalization and can be used as a diagnostic marker in the clinic. Telomerase, the enzyme complex responsible for elongating telomeres through the addition of telomere repeats to the ends of chromosomes, is activated in approximately 80% of tumors. However, a sizeable fraction of cancerous cells employ alternative lengthening of telomeres (ALT), a non-conservative telomere lengthening pathway involving the transfer of telomere tandem repeats between sister-chromatids.
Telomerase and cancer
Telomerase is the natural enzyme that promotes telomere lengthening. It is active in stem cells, germ cells, hair follicles, and 90 percent of cancer cells, but its expression is low or absent in somatic cells. Telomerase functions by adding bases to the ends of the telomeres. Cells with sufficient telomerase activity are considered immortal in the sense that they can divide past the Hayflick limit without entering senescence or apoptosis. For this reason, telomerase is viewed as a potential target for anti-cancer drugs (such as Geron's Imetelstat currently in human clinical trials and telomestatin).
Studies using knockout mice have demonstrated that the role of telomeres in cancer can both be limiting to tumor growth, as well as promote tumorigenesis, depending on the cell type and genomic context.
Telomerase is a "ribonucleoprotein complex" composed of a protein component and an RNA primer sequence that acts to protect the terminal ends of chromosomes from being broken down by enzymes. The telomeres (and the actions of telomerase) are necessary because, during replication, DNA polymerase can synthesize DNA in only a 5' to 3' direction (each DNA strand having a polarity that is determined by the precise manner in which sugar molecules of the strand's "backbone" are linked together) and can do so only by adding nucleotides to RNA primers (that have already been placed at various points along the length of the DNA). The RNA strands are replaced with newly synthesized DNA, but DNA polymerase can only "backfill" deoxyribonucleotides if there is already DNA "upstream" from (i.e., located 5' to) the RNA primer. At the chromosome terminal, however, there is no nucleotide sequence in the 5' direction (and therefore no upstream RNA primer or DNA), so DNA polymerase cannot function and genetic sequence might be lost through chromosomal fraying. Chromosomal ends might also be processed as breaks in double-strand DNA with chromosome-to-chromosome telomere fusions resulting.
Telomeres at the end of DNA prevent the chromosome from growing shorter during replications (with loss of genetic information) by employing "telomerases" to synthesize DNA at the chromosome terminal. These include a protein subgroup of specialized reverse transcriptase enzymes known as TERT (telomerase reverse transcriptases) and are involved in synthesis of telomeres in humans and many other, but not all, organisms. Because DNA replication mechanisms are affected by oxidative stress and because TERT expression is very low in most types of human cell, telomeres shorten every time a cell divides. Among cell types characterized by extensive cell division (such as stem cells and certain white blood cells), however, TERT is expressed at higher levels and telomere shortening is partially or fully prevented.
In addition to its TERT protein component, telomerase also contains a piece of template RNA known as the TERC (telomerase RNA component) or TR (telomerase RNA). In humans, this TERC telomere sequence is a repeating string of TTAGGG, between 3 and 20 kilobases in length. There are an additional 100-300 kilobases of telomere-associated repeats between the telomere and the rest of the chromosome. Telomere sequences vary from species to species, but, in general, one strand is rich in G with fewer Cs. These G-rich sequences can form four-stranded structures (G-quadruplexes), with sets of four bases held in plane and then stacked on top of each other, with either a sodium or a potassium ion between the planar quadruplexes.
Mammalian (and other) somatic cells without telomerase gradually lose telomeric sequences as a result of incomplete replication (Counter et al., 1992). As mammalian telomeres shorten, eventually cells reach their replicative limit and progress into senescence or old age. Senescence involves p53 and pRb pathways and leads to the halting of cell proliferation (Campisi, 2005). Senescence may play an important role in suppression of cancer emergence, although inheriting shorter telomeres probably does not protect against cancer. With critically shortened telomeres, further cell proliferation can be achieved by inactivation of p53 and pRb pathways. Cells entering proliferation after inactivation of p53 and pRb pathways undergo crisis. Crisis is characterized by gross chromosomal rearrangements and genome instability, and almost all cells die.
ALT (Alternative Lengthening of Telomeres) and cancer
About 5–10% of human cancers activate the alternative lengthening of telomeres (ALT) pathway, which relies on recombination-mediated elongation. Rarely, cells emerge from crisis immortalized through telomere lengthening by either activated telomerase or ALT (Colgina and Reddel, 1999; Reddel and Bryan, 2003). The first description of an ALT cell line demonstrated that their telomeres are highly heterogeneous in length and predicted a mechanism involving recombination (Murnane et al., 1994). Subsequent studies have confirmed a role for recombination in telomere maintenance by ALT (Dunham et al., 2000), however the exact mechanism of this pathway is yet to be determined. ALT cells produce abundant T-circles, possible products of intratelomeric recombination and T-loop resolution (Tomaska et al., 2000; 2009; Cesare and Griffith, 2004; Wang et al., 2004).
Since shorter telomeres are thought by some to be a cause of aging, this raises the question of why longer telomeres are not selected for to ameliorate these effects. A prominent explanation suggests that inheriting longer telomeres would cause increased cancer rates (e.g. Weinstein and Ciszek, 2002). However, a recent literature review and analysis suggests this is unlikely, because shorter telomeres and telomerase inactivation is more often associated with increased cancer rates, and the mortality from cancer occurs late in life when the force of natural selection is very low. An alternative explanation to the hypothesis that long telomeres are selected against due to their cancer promoting effects is the "thrifty telomere" hypothesis, which suggests that the cellular proliferation effects of longer telomeres causes increased energy expenditures. In environments of energetic limitation, shorter telomeres might be an energy sparing mechanism.
Relation to breast cancer
In a healthy female breast, a proportion of cells called luminal progenitors that line the milk ducts have proliferative and differentiation potential and most of them contain critically short telomeres with DNA damage foci. These cells are believed to be the possible common cellular loci where cancers of the breast involving telomere dysregulation may arise. The telomere shortening in these progenitors is not age dependent but is speculated to be basal to luminal epithelial differentiation program-dependent. Also, the telomerase activity is unusually high in these cells when isolated from younger women, but declines with age.
Several techniques are currently employed to assess average telomere length in eukaryotic cells. One method is the Terminal Restriction Fragment (TRF) Southern blot, which involves hybridization of a radioactive 32P-(TTAGGG)n oligonucleotide probe to HinfI/RsaI-digested genomic DNA embedded on a nylon membrane and subsequently exposed to autoradiographic film or a phosphorimager screen. Another histochemical method, termed Q-FISH, involves fluorescent in situ hybridization (FISH). Q-FISH, however, requires significant amounts of genomic DNA (2-20 micrograms) and labor that renders its use limited in large epidemiological studies. Some of these impediments have been overcome with a real-time PCR assay for telomere length and Flow-FISH. The real-time PCR assay involves determining the Telomere-to-Single Copy Gene (T/S) ratio, which is demonstrated to be proportional to the average telomere length in a cell.
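A hedged sketch (not a formula quoted in the text) of how such a qPCR T/S ratio is commonly derived from cycle-threshold (Ct) values, using a delta-delta-Ct style calculation relative to a reference sample:

```python
# Relative telomere-to-single-copy-gene (T/S) ratio from qPCR Ct values.
def t_s_ratio(ct_telomere, ct_single_copy, ct_telomere_ref, ct_single_copy_ref):
    delta_sample = ct_telomere - ct_single_copy              # sample: telomere vs single-copy gene
    delta_reference = ct_telomere_ref - ct_single_copy_ref   # same difference for the reference sample
    return 2 ** -(delta_sample - delta_reference)

# Hypothetical Ct values: the sample's telomere signal crosses threshold one cycle earlier.
print(t_s_ratio(14.0, 20.0, 15.0, 20.0))  # 2.0 -> about twice the reference T/S ratio
```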
Another technique, referred to as single telomere elongation length analysis (STELA), was developed in 2003 by Duncan Baird. This technique allows investigations that can target specific telomere ends, which is not possible with TRF analysis. However, due to this technique's being PCR-based, telomeres larger than 25 kb cannot be amplified and there is a bias towards shorter telomeres.
While multiple companies offer telomere length measurement services, the utility of these measurements for widespread clinical or personal use has been questioned by prominent scientists without financial interests in these companies. Nobel Prize winner Elizabeth Blackburn, who was the co-founder of one of these companies and has prominently promoted the clinical utility of telomere length measures, resigned from the company in June 2013 "owing to an impending change in the control of Telome Health".
- Biological clock
- Epigenetic clock
- DNA damage theory of aging
- Maximum life span
- Rejuvenation (aging)
- Senescence, biological aging
- Witzany, G (2008). "The viral origins of telomeres, telomerases and their important role in eukaryogenesis and genome maintenance". Biosemiotics. 1: 191–206. doi:10.1007/s12304-008-9018-0.
- Sadava, D., Hillis, D., Heller, C., & Berenbaum, M. (2011). Life: The science of biology (9th ed.), Sunderland, MA: Sinauer Associates Inc.
- Okuda K, Bardeguez A, Gardner JP, Rodriguez P, Ganesh V, Kimura M, Skurnick J, Awad G, Aviv A (2002). "Telomere length in the newborn" (PDF). Pediatric Research. 52 (3): 377–81. PMID 12193671. doi:10.1203/00006450-200209000-00012.
- Arai Y, Martin-Ruiz CM, Takayama M, Abe Y, Takebayashi T, Koyasu S, Suematsu M, Hirose N, von Zglinicki T (2015). "Inflammation, But Not Telomere Length, Predicts Successful Ageing at Extreme Old Age: A Longitudinal Study of Semi-supercentenarians". EBioMedicine. 2 (10): 1549–48. PMC . PMID 26629551. doi:10.1016/j.ebiom.2015.07.029.
- Dalgård C, Benetos A, Verhulst S, Labat C, Kark JD, Christensen K, Kimura M, Kyvik KO, Aviv A (2015). "Leukocyte telomere length dynamics in women and men: menopause vs age effects". International Journal of Epidemiology. 44 (5): 1688–95. PMC . PMID 26385867. doi:10.1093/ije/dyv165.
- Talks at Google (20 August 2008). "Dr. Elizabeth Blackburn" – via YouTube.
- Passarge, Eberhard. Color atlas of genetics, 2007.
- Olovnikov, Alexei M. (1971). Принцип маргинотомии в матричном синтезе полинуклеотидов [Principle of marginotomy in template synthesis of polynucleotides]. Doklady Akademii Nauk SSSR (in Russian). 201 (6): 1496–99. PMID 5158754.
- Olovnikov AM (September 1973). "A theory of marginotomy. The incomplete copying of template margin in enzymic synthesis of polynucleotides and biological significance of the phenomenon". J. Theor. Biol. 41 (1): 181–90. PMID 4754905. doi:10.1016/0022-5193(73)90198-7.
- "No Nobel physiology and medicine award for Russian gerontologist Aleksey Olovnikov". Telegraph. October 21, 2009.
- Blackburn AM; Gall, Joseph G. (March 1978). "A tandemly repeated sequence at the termini of the extrachromosomal ribosomal RNA genes in Tetrahymena". J. Mol. Biol. 120 (1): 33–53. PMID 642006. doi:10.1016/0022-2836(78)90294-2.
- "The 2009 Nobel Prize in Physiology or Medicine – Press Release". Nobelprize.org. 2009-10-05. Retrieved 2012-06-12.
- Harrison's Principles of Internal Medicine, Ch. 69, Cancer cell biology and angiogenesis, Robert G. Fenton and Dan L. Longo, p. 454.
- "Unravelling the secret of ageing". COSMOS: The Science of Everything. October 5, 2009. Archived from the original on January 14, 2015.
- Blasco, Maria; Paula Martínez (21 Jun 2010). "Role of shelterin in cancer and aging". Aging Cell. 9 (5): 653–66. PMID 20569239. doi:10.1111/j.1474-9726.2010.00596.x.
- Lundblad, 2000; Ferreira et al., 2004
- Maloy, Stanley (July 12, 2002). "Bacterial Chromosome Structure". Retrieved 2008-06-22.
- Robert P. Lanza, Jose B. Cibelli, Catherine Blackwell, Vincent J. Cristofalo, Mary Kay Francis, Gabriela M. Baerlocher, Jennifer Mak, Michael Schertzer, Elizabeth A. Chavez, Nancy Sawyer, Peter M. Lansdorp, Michael D. West (28 April 2000). "Extension of Cell Life-Span and Telomere Length in Animals Cloned from Senescent Somatic Cells" (PDF). Science.
- Shampay J, Szostak JW, Blackburn EH (1984). "DNA sequences of telomeres maintained in yeast". Nature. 310 (5973): 154–57. PMID 6330571. doi:10.1038/310154a0.
- Williams, TL; Levy, DL; Maki-Yonekura, S; Yonekura, K; Blackburn, EH (2010). "Characterization of the yeast telomere nucleoprotein core: Rap1 binds independently to each recognition site". J. Biol. Chem. 285: 35814–24. PMC . PMID 20826803. doi:10.1074/jbc.M110.170167.
- Griffith J, Comeau L, Rosenfield S, Stansel R, Bianchi A, Moss H, de Lange T; Comeau; Rosenfield; Stansel; Bianchi; Moss; De Lange (1999). "Mammalian telomeres end in a large duplex loop". Cell. 97 (4): 503–14. PMID 10338214. doi:10.1016/S0092-8674(00)80760-6.
- Burge S, Parkinson G, Hazel P, Todd A, Neidle S; Parkinson; Hazel; Todd; Neidle (2006). "Quadruplex DNA: sequence, topology and structure". Nucleic Acids Res. 34 (19): 5402–15. PMC . PMID 17012276. doi:10.1093/nar/gkl655.
- Eisenberg DTA (2011). "An evolutionary review of human telomere biology: The thrifty telomere hypothesis and notes on potential adaptive paternal effects". American Journal of Human Biology. 23 (2): 149–67. PMID 21319244. doi:10.1002/ajhb.21127.
- Richter, T; von Zglinicki, T (2007). "A continuous correlation between oxidative stress and telomere shortening in fibroblasts". Exp Gerontol. 42 (11): 1039–42. PMID 17869047. doi:10.1016/j.exger.2007.08.005.
- Shen, J; Gammon, MD; Terry, MB; Wang, Q; Bradshaw, P; Teitelbaum, SL; Neugut, AI; Santella, RM (Apr 2009). "Telomere length, oxidative damage, antioxidants and breast cancer risk". Int J Cancer. 124 (7): 1637–43. doi:10.1002/ijc.24105.
- Cawthon, RM; Smith, KR; O'Brien, E; Sivatchenko, A; Kerber, RA (2003). "Association between telomere length in blood and mortality in people aged 60 years or older". Lancet. 361 (9355): 393–95. doi:10.1016/s0140-6736(03)12384-7.
- Hayflick L, Moorhead PS; Moorhead (1961). "The serial cultivation of human diploid cell strains". Exp Cell Res. 25 (3): 585–621. PMID 13905658. doi:10.1016/0014-4827(61)90192-6.
- Hayflick L. (1965). "The limited in vitro lifetime of human diploid cell strains". Exp. Cell Res. 37 (3): 614–36. PMID 14315085. doi:10.1016/0014-4827(65)90211-9.
- Feng J, Funk WD, Wang SS, Weinrich SL, Avilion AA, Chiu CP, Adams RR, Chang E, Allsopp RC, Yu J; Funk; Wang; Weinrich; Avilion; Chiu; Adams; Chang; Allsopp; Yu (September 1995). "The RNA component of human telomerase". Science. 269 (5228): 1236–41. PMID 7544491. doi:10.1126/science.7544491.
- Bodnar, A.G.; Ouellette, M.; Frolkis, M.; Holt, S.E.; Chiu, C.P.; Morin, G.B.; Harley, C.B.; Shay, J.W.; Lichtsteiner, S.; Wright, W.E. (1998). "Extension of life-span by introduction of telomerase into normal human cells". Science. 279 (5349): 349–52. PMID 9454332. doi:10.1126/science.279.5349.349.
- Sample, Ian (November 28, 2010). "Harvard scientists reverse the ageing process in mice – now for humans". The Guardian. London.
- Jaskelioff, Mariela; Muller, Florian L.; Paik, Ji-Hye; Thomas, Emily; Jiang, Shan; Adams, Andrew C.; Sahin, Ergun; Kost-Alimova, Maria; Protopopov, Alexei; Cadiñanos, Juan; Horner, James W.; Maratos-Flier, Eleftheria; DePinho, Ronald A. (6 January 2011). "Telomerase reactivation reverses tissue degeneration in aged telomerase-deficient mice". Nature. 469 (7328): 102–06. PMC . PMID 21113150. doi:10.1038/nature09603 – via www.nature.com.
- Joeng KS, Song EJ, Lee KJ, Lee J; Song; Lee; Lee (2004). "Long lifespan in worms with long telomeric DNA". Nature Genetics. 36 (6): 607–11. PMID 15122256. doi:10.1038/ng1356.
- Nakagawa S, Gemmell NJ, Burke T; Gemmell; Burke (September 2004). "Measuring vertebrate telomeres: applications and limitations". Mol. Ecol. 13 (9): 2523–33. PMID 15315667. doi:10.1111/j.1365-294X.2004.02291.x.
- Juola, Frans A; Haussmann, Mark F; Dearborn, Donald C; Vleck, Carol M (2006). "Telomere shortening in a long-lived marine bird: Cross-sectional analysis and test of an aging tool". The Auk. 123 (3): 775. ISSN 0004-8038. doi:10.1642/0004-8038(2006)123[775:TSIALM]2.0.CO;2.
- Gomes, NM; Ryder, OA; Houck, ML; Charter, SJ; Walker, W; Forsyth, NR; Austad, SN; Venditti, C; Pagel, M; Shay, JW; Wright, WE (2011). "Comparative biology of mammalian telomeres: hypotheses on ancestral states and the roles of telomeres in longevity determination". Aging Cell. 10 (5): 761–68. PMC . PMID 21518243. doi:10.1111/j.1474-9726.2011.00718.x.
- Harris, SE; Martin-Ruiz, C; von Zglinicki, T; Starr, JM; Deary, IJ (2010). "Telomere length and aging biomarkers in 70-year-olds: the Lothian Birth Cohort 1936". Neurobiol Aging. 33 (7): 1486.e3–1486.e8. PMID 21194798. doi:10.1016/j.neurobiolaging.2010.11.013.
- Gilley, D; Blackburn, EH (1994). "Lack of telomere shortening during senescence in Paramecium". Proc Natl Acad Sci U S A. 91 (5): 1955–58. PMC . PMID 8127914. doi:10.1073/pnas.91.5.1955.
- Fernandez, Elizabeth (2013-09-16). "Lifestyle Changes May Lengthen Telomeres, A Measure of Cell Aging". http://www.ucsf.edu/. University of California, San Francisco. Retrieved 2015-03-16. External link in
- Sjögren, P; Fisher, R; Kallings, L; Svenson, U; Roos, G; Hellénius, M (2014-09-03). "Stand up for health – avoiding sedentary behaviour might lengthen your telomeres: secondary outcomes from a physical activity RCT in older people.". Br J Sports Med. 48: 1407–09. PMID 25185586. doi:10.1136/bjsports-2013-093342.
- Peška, Vratislav; Fajkus, Petr; Fojtová, Miloslava; Dvořáčková, Martina; Hapala, Jan; Dvořáček, Vojtěch; Polanská, Pavla; Leitch, Andrew R.; Sýkorová, Eva; Fajkus, Jiří (May 2015). "Characterisation of an unusual telomere motif (TTTTTTAGGG) in the plant (Solanaceae), a species with a large genome". The Plant Journal. 82 (4): 644–54. doi:10.1111/tpj.12839.
- Fajkus, Petr; Peška, Vratislav; Sitová, Zdeňka; Fulnečková, Jana; Dvořáčková, Martina; Gogela, Roman; Sýkorová, Eva; Hapala, Jan; Fajkus, Jiří (2016). "Allium telomeres unmasked: the unusual telomeric sequence (CTCGGTTATGGG)n is synthesized by telomerase". The Plant Journal. 85 (3): 337–47. doi:10.1111/tpj.13115.
- Raynaud, CM; Sabatier, L; Philipot, O; Olaussen, KA; Soria, JC (2008). "Telomere length, telomeric proteins and genomic instability during the multistep carcinogenic process". Crit Rev Oncol Hematol. 66: 99–117. doi:10.1016/j.critrevonc.2007.11.006.
- Blasco, MA; Lee, HW; Hande, MP; Samper, E; Lansdorp, PM; et al. (1997). "Telomere shortening and tumor formation by mouse cells lacking telomerase RNA". Cell. 91 (1): 25–34. PMID 9335332. doi:10.1016/s0092-8674(01)80006-4.
- Artandi, SE; Chang, S; Lee, SL; Alson, S; Gottlieb, GJ; et al. (2000). "Telomere dysfunction promotes non-reciprocal translocations and epithelial cancers in mice". Nature. 406: 641–45. PMID 10949306. doi:10.1038/35020592.
- Willeit Peter, Willeit Johann, Mayr Anita, Weger Siegfried, Oberhollenzer Friedrich, Brandstätter Anita, Kronenberg Florian, Kiechl Stefan; Willeit; Mayr; Weger; Oberhollenzer; Brandstätter; Kronenberg; Kiechl (2010). "Telomere length and risk of incident cancer and cancer mortality". JAMA. 304 (1): 69–75. PMID 20606151. doi:10.1001/jama.2010.897.
- Ma, H; Zhou, Z; Wei, S; et al. (2011). "Shortened telomere length is associated with increased risk of cancer: a meta-analysis". PLOS ONE. 6 (6): e20466. doi:10.1371/journal.pone.0020466.
- Wentzensen, IM; Mirabello, L; Pfeiffer, RM; Savage, SA (2011). "The association of telomere length and cancer: a meta-analysis". Cancer Epidemiol Biomarkers Prev. 20 (6): 1238–50. doi:10.1158/1055-9965.epi-11-0005.
- Paul, L (Oct 2011). "Diet, nutrition and telomere length". J Nurt Biochem. 22 (10): 895–901. doi:10.1016/j.jnutbio.2010.12.001.
- Epel, ES; Lin, J; Wilhelm, FH; Wolkowitz, OM; Cawthon, R; Adler, NE; Dolbier, C; Mendes, WB; Blackburn, EH (April 2006). "Cell aging in relation to stress arousal and cardiovascular disease risk factors". Psychoneuroendocrinology. 31 (3): 277–87. PMID 16298085. doi:10.1016/j.psyneuen.2005.08.011.
- Ornish, D; Lin, J; Chan, JM; Epel, E; Kemp, C; Weidner, G; Marlin, R; Frenda, SJ; Magbanua, MJ; Daubenmier, J; Estay, I; Hills, NK; Chainani-Wu, N; Carroll, PR; Blackburn, EH (Oct 2013). "Effect of comprehensive lifestyle changes on telomerase activity and telomerelength in men with biopsy-proven low-risk prostate cancer: 5-year follow-up of a descriptive pilot study". Lancet Oncol. 14 (11): 1112–20. doi:10.1016/S1470-2045(13)70366-8.
- Ornish, D; Lin, J; Daubenmier, J; Weidner, G; Epel, E; Kemp, C; Magbanua, MJ; Marlin, R; Yglecias, L; Carroll, PR; Blackburn, EH (Nov 2008). "Increased telomerase activity and comprehensive lifestyle changes: a pilot study". Lancet Oncol. 9 (11): 1048–57. doi:10.1016/S1470-2045(08)70234-1.
- Aschacher; Wolf; Enzmann; Kienzl (2015). "ALINE-1 induces hTERT and ensures telomere maintenance in tumour cell lines". Oncogene. 35: 94–104. PMID 25798839. doi:10.1038/onc.2015.65.
- Henson JD, Neumann AA, Yeager TR, Reddel RR; Neumann; Yeager; Reddel (2002). "Alternative lengthening of telomeres in mammalian cells". Oncogene. 21 (4): 598–610. PMID 11850785. doi:10.1038/sj.onc.1205058.
- Chris Molenaar; Karien Wiesmeijer; Nico P. Verwoerd; Shadi Khazen; Roland Eils; Hans J. Tanke & Roeland W. Dirks (2003-12-15). "Visualizing telomere dynamics in living mammalian cells using PNA probes". The EMBO Journal. The European Molecular Biology Organization. 22 (24): 6631–41. PMC . PMID 14657034. doi:10.1093/emboj/cdg633.
- Philippi C, Loretz B, Schaefer UF, Lehr CM.; Loretz; Schaefer; Lehr (April 2010). "Telomerase as an emerging target to fight cancer – Opportunities and challenges for nanomedicine". Journal of Controlled Release. 146 (2): 228–40. PMID 20381558. doi:10.1016/j.jconrel.2010.03.025.
- Chin L, Artandi SE, Shen Q, et al. (May 1999). "p53 deficiency rescues the adverse effects of telomere loss and cooperates with telomere dysfunction to accelerate carcinogenesis". Cell. 97 (4): 527–38. PMID 10338216. doi:10.1016/S0092-8674(00)80762-X.
- Greenberg RA, Chin L, Femino A, et al. (May 1999). "Short dysfunctional telomeres impair tumorigenesis in the INK4a(delta2/3) cancer-prone mouse". Cell. 97 (4): 515–25. PMID 10338215. doi:10.1016/S0092-8674(00)80761-8.
- Henson, JD; Neumann, AA; Yeager, TR; Reddel, RR (2002). "Alternative lengthening of telomeres in mammalian cells". Oncogene. 21 (4): 598–610. PMID 11850785. doi:10.1038/sj.onc.1205058.
- BBC, World/Mundo. "Resuelven misterio sobre el origen del cáncer de mama".
- Kannan, Nagarajan; Nazmul Huda, LiRen Tu, Radina Droumeva, Geraldine Aubert, Elizabeth Chavez, Ryan R. Brinkman, Peter Lansdorp, Joanne Emerman, Satoshi Abe, Connie Eaves, David Gilley (4 June 2013). "The Luminal Progenitor Compartment of the Normal Human Mammary Gland Constitutes a Unique Site of Telomere Dysfunction". Stem Cell Reports. 1 (1): 28–31. PMC . PMID 24052939. doi:10.1016/j.stemcr.2013.04.003.
- Allshire RC; et al. (1989). "Human telomeres contain at least three types of G-rich repeat distributed non-randomly". Nucleic Acids Res. 17 (12): 4611–27. PMC . PMID 2664709. doi:10.1093/nar/17.12.4611.
- Rufer N; et al. (1998). "Telomere length dynamics in human lymphocyte subpopulations measured by flow cytometry". Nat Biotechnol. 16 (8): 743–47. PMID 9702772. doi:10.1038/nbt0898-743.
- Cawthon, RM (2002). "Telomere measurement by quantitative PCR". Nucleic Acids Research. 30 (10): e47. PMC . PMID 12000852. doi:10.1093/nar/30.10.e47.
- "Titanovo, Inc". Titanovo.com. Retrieved 2015-04-15.
- "Telome Health, Inc". Telomehealth.com. Retrieved 2013-07-13.
- "TeloMe Home". Telome.com. Retrieved 2013-07-13.
- "A Blood Test Offers Clues to Longevity".
- Zglinicki, T. v. (13 March 2012). "Will your telomeres tell your future?" (PDF). BMJ. 344 (mar13 1): e1727. doi:10.1136/bmj.e1727.
- Jo Marchant. "Spit test offers guide to health : Nature News". Nature.com. Retrieved 2013-07-13.
- "Elizabeth Blackburn calls time on 'fountain of youth' firm Telome Health".
- Aubert G.; Lansdorp P.M. (April 2008). "Telomeres and Aging". Physiological Reviews. 88 (2): 557–79. PMID 18391173. doi:10.1152/physrev.00026.2007.
- Cong YS, Wright WE, Shay JW (September 2002). "Human telomerase and its regulation". Microbiol. Mol. Biol. Rev. 66 (3): 407–25, table of contents. PMC . PMID 12208997. doi:10.1128/MMBR.66.3.407-425.2002.
- Eisenberg DTA (2011). "An evolutionary review of human telomere biology: The thrifty telomere hypothesis and notes on potential adaptive paternal effects". American Journal of Human Biology. 23 (2): 149–67. PMID 21319244. doi:10.1002/ajhb.21127.
- Tomaska L.; Nosek J.; Kramara J.; Griffith J.D. (2009). "Telomeric circles: universal players in telomere maintenance". Nature Structural & Molecular Biology. 16 (10): 1010–15. PMC . PMID 19809492. doi:10.1038/nsmb.1660.
- Weinstein BS, Ciszek D; Lansdorp (May 2002). "The reserve-capacity hypothesis: evolutionary origins and modern implications of the trade-off between tumor-suppression and tissue-repair". Exp. Gerontol. 37 (5): 615–27. PMID 11909679. doi:10.1016/S0531-5565(02)00012-8. – A paper detailing the evolutionary origins and medical implications of the vertebrate telomere system, including the pervasive trade-off between cancer prevention and damage repair. Also addresses the probable danger posed by the elongation of telomeres in lab mice.
|Wikimedia Commons has media related to Telomeres.|
- Elizabeth Blackburn's seminars: "Telomeres and Telomerase"
- Telomeres and Telomerase: The Means to the End Nobel Lecture by Elizabeth Blackburn, which includes a reference to the impact of stress, and pessimism on telomere length
- Telomerase and the Consequences of Telomere Dysfunction Nobel Lecture by Carol Greider
- DNA Ends: Just the Beginning Nobel Lecture by Jack Szostak |
Algebra Glossary: S
scientific notation: A standard way of writing very large and very small numbers as the product of two values — a number from 1 up to, but not including, 10, and a power of 10. Scientific notation follows the form N × 10^a, where 1 ≤ N < 10 and a is an integer (positive or negative).
sign: A symbol indicating whether a value is positive (+) or negative (−).
simple fraction: A fraction in which both the numerator and the denominator are whole numbers.
simplify: To combine all that can be combined in an expression or equation, and put it in its most easily understandable form.
solution of equation: Value(s) of a variable that make an equation a true statement.
solve: Find the answer or the number that a variable stands for.
square; perfect square: 1. The product of a number multiplied by itself. 2. A value raised to an exponent of 2.
square root: A value that, when multiplied by itself, results in the given value.
substitution: A method of replacing a value with its equivalent.
sum: The result of addition.
symmetric property: A characteristic of equations that allows for the exchange of the value(s) on one side of the equal sign with the value(s) on the other side (quantities on the right go to the left; quantities on the left go to the right) without changing the truth of the equation: If x = y, then y = x.
synthetic division: A short-cut division process in which only the coefficients of the terms in an expression are used. The answer is obtained by multiplying and adding.
When two secant lines intersect each other outside a circle, the products of their segments are equal.
(Note: Each segment is measured from the outside point)
However the secant lines are positioned, the two products are always the same.
This theorem works like this: If you have a point outside a circle and draw two secant lines (PAB, PCD) from it, there
is a relationship between the line segments formed. Refer to the figure above. If you multiply the length of PA
by the length of PB, you will get the same result as when you do the same thing to the other secant line.
More formally: When two secant lines AB and CD intersect outside the circle at a point P, then
PA · PB = PC · PD
It is important to get the line segments right. The four segments we are talking about here all start at P, and some overlap each other
along part of their length; PA overlaps PB, and PC overlaps PD.
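For readers who want to check the relationship numerically, the short sketch below (not part of the original page) parametrises a secant through an external point P and multiplies the two distances from P to the circle. The circle, the point P, and the angles are arbitrary example values; the product comes out the same for every secant, equal to |PO|² − r² (the power of the point).

```python
import math

def secant_product(center, radius, p, angle):
    """Return PA * PB for the secant through p in direction `angle` (radians)."""
    dx, dy = math.cos(angle), math.sin(angle)      # unit direction vector
    px, py = p[0] - center[0], p[1] - center[1]    # P relative to the circle's center
    # Points on the line are (px + t*dx, py + t*dy); the intersections with the
    # circle solve t^2 + 2(px*dx + py*dy) t + (px^2 + py^2 - r^2) = 0.
    b = 2 * (px * dx + py * dy)
    c = px * px + py * py - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        raise ValueError("this direction does not cut the circle")
    t1 = (-b + math.sqrt(disc)) / 2
    t2 = (-b - math.sqrt(disc)) / 2
    return abs(t1) * abs(t2)                        # PA * PB

center, radius, p = (0.0, 0.0), 5.0, (9.0, 0.0)     # example circle and external point
for angle in (0.2, 0.45, 0.5):                      # three different secants from P
    print(round(secant_product(center, radius, p, angle), 6))
# Each product equals |PO|^2 - r^2 = 81 - 25 = 56, as the theorem predicts.
```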
Relationship to Tangent-Secant Theorem
In the figure above, if you move point B around the top until it meets point A, the line becomes a tangent to the circle, and PA = PB.
Since PA = PB, the product PA · PB is equal to PA². So:
PA² = PC · PD
This is the Tangent-Secant Theorem.
Relationship to Tangent Theorems
If you move point B around until it coincides with A, the secant PAB becomes a tangent and the product PA · PB becomes PA². Similarly,
if you move D around the bottom to point C, the product PC · PD becomes PC². From this theorem,
PA² = PC²
By taking the square root of each side:
PA = PC
confirming that the two tangents from a point to a circle are always equal.
EL Support Lesson
Students will be able to identify shapes based on their attributes.
Students will be able to describe the differences between shapes using mathematical vocabulary.
- Gather the class together for a read aloud.
- Display the book The Shape of Things and explain that a shape is the outline of something.
- Ask students if they know the names of any shapes. Have them turn and talk to a partner to share any shapes that they already know.
- Have students share out the shapes that they know, and create a visual word wall to illustrate the shapes students share (e.g., square, rectangle, triangle, etc.).
Explicit Instruction/Teacher modeling (10 minutes)
- Read aloud the book, pausing as you read to note the different kinds of shapes. Refer back to the visual word wall.
- Explain that we can tell the difference between shapes by paying attention to their attributes, or characteristics used to describe something.
- Draw a square on the board. Point to the sides and corners and say, "I know this is a square because it has four sides that are the same size and four corners."
- Draw a triangle next to the square. Model a think aloud to compare the shapes, "This shape has three sides and three corners. It is a triangle. It is similar to the square, but I know that it is a triangle because it only has three sides and corners and not four."
Guided practice (5 minutes)
- Model how to draw different 2D shapes (circle, square, rectangle, triangle, hexagon, etc.) using chart paper or the whiteboard.
- Point to each shape and have students practise drawing the shape in the air with fingers.
- Tell students to turn to a partner and choose one of the shapes to draw on their partners back using their fingers. Have the partner try to guess the shape, then have pairs switch roles.
- Point to a circle and a triangle on the board. Ask students, "How are these shapes different?" Have students turn and talk to share with a partner.
- Repeat with remaining shapes.
Group work time (15 minutes)
- Explain that students will now get to create a shape picture using at least one of each of the shapes from the lesson (triangle, square, rectangle, circle, and hexagon). Provide a sample picture to demonstrate one way to do this (e.g., a house scene made using shapes).
- Pass out materials and send students to work independently.
Additional EL adaptations
- Display visual Vocabulary Cards for each target shape.
- Have students practise describing shapes in their home language (L1).
- In a small group, practise identifying shapes using their attributes (three corners, four corners, etc.).
- Provide images of shapes that are different sizes or orientations. Encourage students to identify the same shapes using their attributes. Have students explain how they know what kind of shape it is. Say, "I know this is a square because it has four sides and four corners."
- Collect student work and assess if students were able to represent each of the target shapes.
- Ask students guiding questions to assess their ability to describe shapes by attributes. For example, "Which shape has three sides and three corners? How do you know?"
Review and closing (2 minutes)
- Gather the class back together and ask students to turn and talk to a partner to share their pictures. Have students identify their shapes and explain to their partner the attributes of each shape. For example, "This is a ____. I know it is a ____ because ____."
- Hold up one shape at a time and ask students to shout out the name of the shape.
8-10 yrs old
Math & Economics
Students will observe crafting recipes, write them as fractions, and then use that knowledge to make an escape!
April 30, 2019
Minecraft World File
Download the world and open with Minecraft: Education Edition.
Common Core Standard Link
Engage NY Link
Associated Engage NY module and lesson.
1) What are equivalent fractions?
2) How are equivalent fractions related?
3) Are fractions division problems? Explain why or why not.
Load the supplied world file, and speak to the NPC to teleport to Level 2 (or type /tp @a 154 5 2).
Creating Fractions From Crafting Recipes
Students will enter the world in single player mode and identify fractions from crafting recipes.
Make Your Escape Performance Task
Students will need to use their knowledge of fractions and division to figure out how many logs of wood will be needed to craft 18 ladders (a worked sketch follows the assessment list below). Then students will make their escape over the wall.
1) The student was able to create fractions after observing crafting recipes.
2) The student was able to explain the relationship between division and fractions.
3) The student was able to use crafting recipes to figure out how many logs of wood they will need to create 18 ladders.
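A minimal sketch of the logs-to-ladders arithmetic is given below. The recipe ratios used (1 log -> 4 planks, 2 planks -> 4 sticks, 7 sticks -> 3 ladders) are assumptions based on the standard Minecraft recipes, not values taken from the supplied world file, so adjust them if the lesson world uses different recipes.

```python
import math

def logs_needed(ladders):
    # Assumed recipe ratios: 7 sticks craft 3 ladders, 2 planks craft 4 sticks,
    # and 1 log crafts 4 planks. Each step rounds up to a whole craft.
    ladder_crafts = math.ceil(ladders / 3)   # crafts of ladders needed
    sticks = ladder_crafts * 7               # sticks consumed by those crafts
    stick_crafts = math.ceil(sticks / 4)     # crafts of sticks needed
    planks = stick_crafts * 2                # planks consumed
    logs = math.ceil(planks / 4)             # logs needed to supply the planks
    return logs

print(logs_needed(18))  # 18 ladders -> 6 crafts -> 42 sticks -> 22 planks -> 6 logs
```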
Basic geometric concepts of area and perimeter.
What can we do to slow deforestation?
LEARNING ECONOMICS: PRODUCTIVE RESOURCES
Students learn the use of resources to build.
Ss will relate decimals & fractions w/ a garden
Can you win a bridge building competition?
Students to create a sustainable energy circuit
Solving multiplication problems using arrays
Rube Goldberg Machines
Minecraft and Physics.
Area and Volume
Use the formulas for area and volume.
Building Sustainable Homes
What would happen if we ignored climate change?
Save the Village Using Order of Operations
Goods and Services Simulation
Students take a good from inception to store
Patterns and motifs
Patterns & motifs convey culture and heritage
Loss of Biodiversity
What would happen with the extinction of a species?
LEARNING ECONOMICS WITH MINECRAFT: CHOICES, COSTS AND BENEFITS
SUBMITTED BY: The Council for Economic Education (CEE). For more ...
Pixel Power! Part 1
Using the Cartesian Plane to make Art!
A math area world to learn how to calculate area
Area and Perimeter Tasks
Students will demonstrate knowledge
Pixel Power! Part 2
How to create the equation of a line. y = mx + c
Craft Your Future: Refurbish
In this custom built Minecraft world, students encounter a variet...
This is a yearlong project for Fifth Grade.
Creation of Geometrical Sculptures in Minecraft
Students will create a zoo in Minecraft.
Craft your Future – Renovation
Minecraft. In this custom built Minecraft world, students encount...
Creating equivalent fraction models
In this lesson, you will be challenged to write code to make quad...
Preventing Urban Spread
How do we use land efficiently?
This project shows how you can create a rainbow in Minecraft usin...
Build a Dam
3rd Grade Engineering Challenge
Creeper Tower Test
5th Grade Engineering Challenge
Parkour w/ Code!
Design a Parkour Course w/ Simple Code
In this lesson, students will create shapes using code and then d...
Recreate real-world objects in Minecraft
Angler Arithmetic – Cool Math!
Fishing Competition Teaches Math in Engaging Way
Explore Treasure Island through Minecraft.
Surface Area and Perimeter
6th Grade CCSS Math
What’s the Probability
Help Archimedes discover the Probability of Drops!
Build a Cathedral
Students to design and build a Cathedral
Learning about Ratios via Minecraft
In this mod, students will learn different ratio rules for a spec...
Fractions Pixel Art
Break down pixel art into fractions.
Craft your future – Construction
In this custom-built Minecraft world, students encounter a variet...
American Flag Three-Act Math
Determine the fraction/percent of each color!
Exploring Systems of Measurement
Students will use Minecraft to reimagine system of measurement in...
Craft Your Future – Restoration
Repeated Addition with Parkour
Build a parkour course to show addition.
Craft your Future – Learning Construction in Minecraft – Introduction
Fraction Capture the Flag
Explore fractions through playing capture the flag
Ice Fishing Derby
5th Grade CCSS Math
To calculate formula
Explore fraction models and build a farm.
Crafting Your Review
Students create a review world for others.
Steve’s New Home
A problem solving maths lesson.
Number Line Railroad
CCSS 4th Grade Math
Building Word Problems
Create a scene that depicts a word problem.
Division Into Equal Numbers
Learn division by rearranging arrays of blocks.
Symmetry in Pixel Art
Study and use lines of symmetry in pixel art.
Angles and Building Bridges
Explore lines and angles to build a bridge.
Two Step Word Problems
Build a two step world problem.
Measurement Mini Game
Play, examine, and create plans for a mini-game.
Fractions in Minecraft
Build math models that correspond to fractions.
Fun, Functional Playground
Students redesign their existing school playground
The Decimal Dungeon – Part 3
Observe & build math models to escape the dungeon!
The Decimal Dungeon – Part 1
MAKING HOMES, PART 1
Use math to design homes in Minecraft.
Points, Lines, Rays & Droppers
Learn about 2D figures by creating dropper games.
Survival Olympics Bar Graphs
Create graphs from Minecraft mini-game data.
Fractions and Multiplication
Multiply #s greater than, less than, or equal to 1
The Decimal Dungeon – Part 2
Base Ten Puzzles
Play a mini-game and solve base ten problems.
Part of Whole Art (fraction)
Create pic using fractional parts
Finding the Unknown
Explore finding an unknown variable.
Define, build, and classify quadrilaterals
The Decimal Dungeon – Part 4
Build a Two-Step Word Problem
Design a two-step word problem in Minecraft.
Round Numbers Video
Learn to round by building math models.
Find the size of different landforms in Minecraft.
Make a Regrouping Video
Build with blocks to demonstrate regrouping.
Use equivalent fractions to build a racetrack.
Regrouping Death Run
Solve Base 10 rounding math problems in Minecraft.
Multiplication and Division
Build models to show multiplication and division.
Lines, Angles & Architecture
Study lines and angles and use to design a facade.
Explore the concept of volume in Minecraft.
Subtraction + Regrouping CTF
Build math models and play Capture the Flag
Minecraft and the school space
Study of and intervention in the school environment through Minecraft
Multi Digit Multiplication
Explore multi digit multiplication in Minecraft.
Coordinate Planes in Minecraft
Plot points and draw lines with basic functions.
to complete and analyze fractions
The Tragedy of the Commons
Can you solve the mystery?
City Planning – Survival Roads
Solve equations to plan out roads.
Long Division in Minecraft
Use blocks to show long division.
Commutative Property Bed Wars
Build math models through mini-games
Explore how number patterns are used in building.
Javelin Line Plots
Throw tridents and track distance on a graph.
Arrays and Multiplication
Relate multiplication to array models.
Use the fill command to build an aquarium.
Elytra Flight Rounding
Solve Base 10 rounding problems thru mini-games.
MAKING HOMES, PART 2
Wither Battle Regrouping
Play a mini-game to regroup numbers in Minecraft.
Explore 1st quadrant
Build a Word Problem
Write and build a word problem in Minecraft.
Number Pattern Architecture
Use arithmetic patterns to create structures.
Build rectangles to find the factors for 1-100.
MAKING HOMES, PART 3
Survival City Making Roads
Students will use multiples of ten to make roads.
Survival City – Making Homes
Survival City Part 2
Use math to design a home.
Dividing Fractions CTF
Mini game to divide whole numbers with fractions.
A review of counting by 1's, 5's, 10's, and 25's.
Math Bed Wars!
Build math models in a Minecraft mini-game.
"The World of Geometry" ...
Plot a line graph javelin competition in Minecraft
Fraction Capture the Flag 5NF
Solve problems and build math models to play CTF.
Survival City Part 3
Use math to design a home in Minecraft.
Sample Size/Sound Conclusions
Find a region's composition by taking a sample!
Confronting Scarcity: Choices In Production
3. Applications of the Production Possibilities Model
An increase in the physical quantity or in the quality of factors of production available to an economy or a technological gain will allow the economy to produce more goods and services; it will shift the economy's production possibilities curve outward. The process through which an economy achieves an outward shift in its production possibilities curve is called economic growth. An outward shift in a production possibilities curve is illustrated in Figure 2.10 "Economic Growth and the Production Possibilities Curve". In Panel (a), a point such as N is not attainable; it lies outside the production possibilities curve. Growth shifts the curve outward, as in Panel (b), making previously unattainable levels of production possible.
Figure 2.10 Economic Growth and the Production Possibilities Curve
An economy capable of producing two goods, A and B, is initially operating at point M on production possibilities curve OMR in Panel (a). Given this production possibilities curve, the economy could not produce a combination such as shown by point N, which lies outside the curve. An increase in the factors of production available to the economy would shift the curve outward to SNT, allowing the choice of a point such as N, at which more of both goods will be produced.
The Sources of Economic Growth
Economic growth implies an outward shift in an economy's production possibilities curve. Recall that when we draw such a curve, we assume that the quantity and quality of the economy's factors of production and its technology are unchanged. Changing these will shift the curve. Anything that increases the quantity or quality of the factors of production available to the economy or that improves the technology available to the economy contributes to economic growth.
Consider, for example, the dramatic gains in human capital that have occurred in the United States since the beginning of the past century. In 1900, about 3.5% of U.S. workers had completed a high school education. By 2009, that percentage had risen to almost 92%. Fewer than 1% of the workers in 1900 had graduated from college; as late as 1940, only 3.5% had graduated from college. By 2009, over 32% had graduated from college. In addition to being better educated, today's workers have received more and better training on the job. They bring far more economically useful knowledge and skills to their work than did workers a century ago.
Moreover, the technological changes that have occurred within the past 100 years have greatly reduced the time and effort required to produce most goods and services. Automated production has become commonplace. Innovations in transportation (automobiles, trucks, and airplanes) have made the movement of goods and people cheaper and faster. A dizzying array of new materials is available for manufacturing. And the development of modern information technology - including computers, software, and communications equipment - that has proceeded at a breathtaking pace, especially during the final years of the last century and continuing to the present, has transformed the way we live and work.
Look again at the technological changes of the last few years described in the Case in Point on advances in technology. Those examples of technological progress through applications of computer technology - from new ways of mapping oil deposits to new methods of milking cows - helped propel the United States and other economies to dramatic gains in the ability to produce goods and services. They have helped shift the countries' production possibilities curve outward. They have helped fuel economic growth.
Table 2.1 "Sources of U.S. Economic Growth, 1960–2007" summarizes the factors that have contributed to U.S. economic growth since 1960. When looking at the period of 1960–2007 as a whole we see that about 65% of economic growth stems from increases in quantities of capital and labor and about 35% from increases in qualities of the factors of production and improvements in technology or innovation. Looking at the three shorter subperiods (1960–1995, 1995-2000, and 2000-2007), we see that the share attributed to quantity increases declined (from 68% to 56% and then 50%), while the share attributed to improvement in the qualities of the factors of production and to technological improvement grew (from 32% to 44% and then to 50%).
Table 2.1 Sources of U.S. Economic Growth, 1960–2007
|Source of growth (percentage contribution)|1960–2007|1960–1995|1995–2000|2000–2007|
|---|---|---|---|---|
|Increase in quantity of labor|0.74%|0.80%|1.09%|0.17%|
|Increase in quantity of capital|1.48%|1.55%|1.43%|1.21%|
|Increase in quality of labor|0.23%|0.24%|0.20%|0.22%|
|Increase in quality of capital|0.58%|0.56%|0.89%|0.46%|
Total output for the period shown increased nearly fivefold. The chart shows the percentage of growth accounted for by increases in the quantity of labor and of capital and by increases in the quality of labor and of capital and improvements in technology.
Another way of looking at these data is to note that while the contribution of improved technology has increased over time (from 8% for the 1960–1995 period, to 20% for the 1995–2000 period, and 26% for the 2000–2007 period), most growth comes from more and better-quality factors of production. The study by economists Dale Jorgenson, Mun Ho, and Jon Samuels, from which the data shown in Table 2.1 "Sources of U.S. Economic Growth, 1960–2007" are derived, concludes that "the great preponderance of economic growth in the U.S. involves the replication of existing technologies through investment in equipment and software and expansion of the labour force. Replication generates economic growth with no increase in productivity. Productivity growth is the key economic indicator of innovation…Although innovation contributes only a modest portion of growth, this is vital to long-term gains in the American standard of living".
Waiting for Growth
One key to growth is, in effect, the willingness to wait, to postpone current consumption in order to enhance future productive capability. When Stone Age people fashioned the first tools, they were spending time building capital rather than engaging in consumption. They delayed current consumption to enhance their future consumption; the tools they made would make them more productive in the future.
Resources society could have used to produce consumer goods are being used to produce new capital goods and new knowledge for production instead - all to enhance future production. An even more important source of growth in many nations has been increased human capital. Increases in human capital often require the postponement of consumption. If you are a college student, you are engaged in precisely this effort. You are devoting time to study that could have been spent working, earning income, and thus engaging in a higher level of consumption. If you are like most students, you are making this choice to postpone consumption because you expect it will allow you to earn more income, and thus enjoy greater consumption, in the future.
Think of an economy as being able to produce two goods, capital and consumer goods (those destined for immediate use by consumers). By focusing on the production of consumer goods, the people in the economy will be able to enjoy a higher standard of living today. If they reduce their consumption - and their standard of living - today to enhance their ability to produce goods and services in the future, they will be able to shift their production possibilities curve outward. That may allow them to produce even more consumer goods.
Concept Extension: Suppose you have a toy airplane, and upon takeoff, it rises 5 feet for every 6 feet that it travels along the horizontal. What would be the slope of its ascent? Would it be a positive value or a negative value? Can you graph the path of the plane's ascent and state the slope? In this Concept, you'll learn how to determine the slope of a line by analyzing vertical change and horizontal change so that you can handle problems such as this one.
The pitch of a roof, the slant of a ladder against a wall, the incline of a road, and even your treadmill incline are all examples of slope.
The slope of a line measures its steepness (either negative or positive).
For example, if you have ever driven through a mountain range, you may have seen a sign stating, “10% incline.” The percent tells you how steep the incline is. You have probably seen this on a treadmill too. The incline on a treadmill measures how steep you are walking uphill. Below is a more formal definition of slope.
The slope of a line is the vertical change divided by the horizontal change.
In the figure below, a car is beginning to climb up a hill. The height of the hill is 3 meters and the length of the hill is 4 meters. Using the definition above, the slope of this hill can be written as 3/4. Because 3/4 = 0.75 = 75%, we can say this hill has a 75% positive slope.
Similarly, if the car begins to descend down a hill, you can still determine the slope.
The slope in this instance is negative because the car is traveling downhill.
Another way to think of slope is: slope = rise/run.
When graphing an equation, slope is a very powerful tool. It provides the directions on how to get from one ordered pair to another. To determine slope, it is helpful to draw a slope triangle.
Using the following graph, choose two ordered pairs that have integer values such as (–3, 0) and (0, –2). Now draw in the slope triangle by connecting these two points as shown.
The vertical leg of the triangle represents the rise of the line and the horizontal leg of the triangle represents the run of the line. A third way to represent slope is to count the legs of the slope triangle: slope = (number of vertical units) ÷ (number of horizontal units).
Starting at the left-most coordinate, count the number of vertical units and horizontal units it took to get to the right-most coordinate.
Find the slope of the line graphed below.
Solution: Begin by finding two pairs of ordered pairs with integer values: (1, 1) and (0, –2).
Draw in the slope triangle.
Count the number of vertical units to get from the left ordered pair to the right: 3.
Count the number of horizontal units to get from the left ordered pair to the right: 1. The slope is therefore 3/1 = 3.
A more algebraic way to determine a slope is by using a formula. The formula for slope is:
The slope between any two points (x1, y1) and (x2, y2) is: slope = (y2 − y1) / (x2 − x1).
(x1, y1) represents one of the two ordered pairs and (x2, y2) represents the other. The following example helps show this formula.
Using the slope formula, determine the slope of the equation graphed in Example A.
Solution: Use the integer ordered pairs used to form the slope triangle: (1, 1) and (0, –2). Since (1, 1) is written first, it can be called (x1, y1). That means (x2, y2) = (0, –2).
Use the formula: slope = (y2 − y1) / (x2 − x1) = (−2 − 1) / (0 − 1) = −3 / −1 = 3.
As you can see, the slope is the same regardless of the method you use. If the ordered pairs are fractional or spaced very far apart, it is easier to use the formula than to draw a slope triangle.
Types of Slopes
Slopes come in four different types: negative, zero, positive, and undefined. The first graph of this Concept had a negative slope. The second graph had a positive slope. Lines with zero slope have no steepness at all, and undefined slopes cannot be computed.
Any line with a slope of zero will be a horizontal line with equation y = c, where c is a constant.
Any line with an undefined slope will be a vertical line with equation x = c, where c is a constant.
We will use the next two graphs to illustrate the previous definitions.
To determine the slope of the first line, you need to find two ordered pairs with integer values:
(–4, 3) and (1, 3). Choose one ordered pair to represent (x1, y1) and the other to represent (x2, y2).
Now apply the formula: slope = (3 − 3) / (1 − (−4)) = 0 / 5 = 0.
To determine the slope of the second line, you need to find two ordered pairs on this line with integer values and apply the formula:
(5, 1) and (5, –6), giving slope = (−6 − 1) / (5 − 5) = −7 / 0.
It is impossible to divide by zero, so the slope of this line cannot be determined and is called undefined.
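The slope formula translates directly into a short function. The sketch below is illustrative rather than part of the lesson; it returns None for a vertical line, mirroring the idea that such a slope is undefined.

```python
def slope(p1, p2):
    """Return (y2 - y1) / (x2 - x1), or None when the line is vertical (undefined slope)."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        return None          # vertical line: division by zero, slope is undefined
    return (y2 - y1) / (x2 - x1)

print(slope((1, 1), (0, -2)))   # 3.0  -> matches the slope-triangle count above
print(slope((-4, 3), (1, 3)))   # 0.0  -> horizontal line
print(slope((5, 1), (5, -6)))   # None -> vertical line, undefined slope
```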
Find the slope of each line in the graph below:
For each line, identify two coordinate pairs on the line and use them to calculate the slope.
For the green line, one choice is and . This results in a slope of:
For the blue line, one choice is and . This results in a slope of:
The slopes can be seen in this graph:
Sample explanations for some of the practice exercises below are available by viewing the following video. Note that there is not always a match between the number of the practice exercise in the video and the number of the practice exercise listed in the following exercise set. However, the practice exercise is the same in both. CK-12 Basic Algebra: Slope and Rate of Change (13:42)
- Define slope .
- Describe the two methods used to find slope. Which one do you prefer and why?
- What is the slope of all vertical lines? Why is this true?
- What is the slope of all horizontal lines? Why is this true?
Using the graphed coordinates, find the slope of each line.
In 8 – 20, find the slope between the two given points.
- (–5, 7) and (0, 0)
- (–3, –5) and (3, 11)
- (3, –5) and (–2, 9)
- (–5, 7) and (–5, 11)
- (9, 9) and (–9, –9)
- (3, 5) and (–2, 7)
- and (–2, 6)
- (–2, 3) and (4, 8)
- (–17, 11) and (4, 11)
- (31, 2) and (31, –19)
- (0, –3) and (3, –1)
- (2, 7) and (7, 2)
- (0, 0) and
- Determine the slope of .
Determine the slope of
Determine whether each ordered pair is a solution to the equation
and .. and (3, 4)
24. Solve: .
25. Graph the solutions to the equation .
In mathematics, the logarithm is the inverse function to exponentiation. That means the logarithm of a given number x is the exponent to which another fixed number, the base b, must be raised, to produce that number x. In the simplest case, the logarithm counts the number of occurrences of the same factor in repeated multiplication; e.g. since 1000 = 10 × 10 × 10 = 103, the "logarithm base 10" of 1000 is 3, or log10 (1000) = 3. The logarithm of x to base b is denoted as logb (x), or without parentheses, logb x, or even without the explicit base, log x, when no confusion is possible, or when the base does not matter such as in big O notation.
The logarithm base 10 (that is b = 10) is called the decimal or common logarithm and is commonly used in science and engineering. The natural logarithm has the number e (that is b ≈ 2.718) as its base; its use is widespread in mathematics and physics, because of its simpler integral and derivative. The binary logarithm uses base 2 (that is b = 2) and is frequently used in computer science.
Logarithms were introduced by John Napier in 1614 as a means of simplifying calculations. They were rapidly adopted by navigators, scientists, engineers, surveyors and others to perform high-accuracy computations more easily. Using logarithm tables, tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition. This is possible because of the fact—important in its own right—that the logarithm of a product is the sum of the logarithms of the factors:
logb(xy) = logb(x) + logb(y),
provided that b, x and y are all positive and b ≠ 1. The slide rule, also based on logarithms, allows quick calculations without tables, but at lower precision. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century, and who also introduced the letter e as the base of natural logarithms.
Logarithmic scales reduce wide-ranging quantities to smaller scopes. For example, the decibel (dB) is a unit used to express ratio as logarithms, mostly for signal power and amplitude (of which sound pressure is a common example). In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae, and in measurements of the complexity of algorithms and of geometric objects called fractals. They help to describe frequency ratios of musical intervals, appear in formulas counting prime numbers or approximating factorials, inform some models in psychophysics, and can aid in forensic accounting.
The concept of logarithm as the inverse of exponentiation extends to other mathematical structures as well. However, in general settings, the logarithm tends to be a multi-valued function. For example, the complex logarithm is the multi-valued inverse of the complex exponential function. Similarly, the discrete logarithm is the multi-valued inverse of the exponential function in finite groups; it has uses in public-key cryptography.
Addition, multiplication, and exponentiation are three of the most fundamental arithmetic operations. The inverse of addition is subtraction, and the inverse of multiplication is division. Similarly, a logarithm is the inverse operation of exponentiation. Exponentiation is when a number b, the base, is raised to a certain power y, the exponent, to give a value x; this is denoted b^y = x.
For example, raising 2 to the power of 3 gives 8: 2^3 = 8.
The logarithm of base b is the inverse operation, which provides the output y from the input x. That is, y = logb(x) is equivalent to x = b^y if b is a positive real number. (If b is not a positive real number, both exponentiation and logarithm can be defined but may take several values, which makes definitions much more complicated.)
One of the main historical motivations for introducing logarithms is the formula
logb(xy) = logb(x) + logb(y),
which allowed (before the invention of computers) reducing the computation of multiplications and divisions to additions, subtractions and logarithm table look-ups.
Given a positive real number b such that b ≠ 1, the logarithm of a positive real number x with respect to base b is the exponent by which b must be raised to yield x. In other words, the logarithm of x to base b is the unique real number y such that b^y = x.
The logarithm is denoted "logb x" (pronounced as "the logarithm of x to base b", "the base-b logarithm of x", or most commonly "the log, base b, of x").
An equivalent and more succinct definition is that the function logb is the inverse function of the exponential function x ↦ b^x.
Main article: List of logarithmic identities
Several important formulas, sometimes called logarithmic identities or logarithmic laws, relate logarithms to one another.
The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the p-th power of a number is p times the logarithm of the number itself; the logarithm of a p-th root is the logarithm of the number divided by p. The following table lists these identities with examples. Each of the identities can be derived after substituting x = b^(logb(x)) or y = b^(logb(y)) in the left hand sides.

|Identity|Formula|Example|
|---|---|---|
|product|logb(xy) = logb(x) + logb(y)|log3(243) = log3(9 · 27) = log3(9) + log3(27) = 2 + 3 = 5|
|quotient|logb(x/y) = logb(x) − logb(y)|log2(16) = log2(64/4) = log2(64) − log2(4) = 6 − 2 = 4|
|power|logb(x^p) = p · logb(x)|log2(64) = log2(2^6) = 6 · log2(2) = 6|
|root|logb(x^(1/p)) = logb(x) / p|log10(√1000) = log10(1000) / 2 = 1.5|
The logarithm logb(x) can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula:
logb(x) = logk(x) / logk(b).
Derivation of the conversion factor between logarithms of arbitrary base
Starting from the defining identity x = b^(logb(x)),
we can apply logk to both sides of this equation, to get logk(x) = logb(x) · logk(b).
Solving for logb(x) yields logb(x) = logk(x) / logk(b),
showing the conversion factor from given logk-values to their corresponding logb-values to be 1 / logk(b).
Typical scientific calculators calculate the logarithms to bases 10 and e. Logarithms with respect to any base b can be determined using either of these two logarithms by the previous formula:
logb(x) = log10(x) / log10(b) = ln(x) / ln(b).
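As a quick illustration (not part of the original article), the snippet below evaluates a logarithm to an arbitrary base in three equivalent ways; the values of x and b are arbitrary examples.

```python
import math

x, b = 1430.0, 7.0
via_ln = math.log(x) / math.log(b)          # ln(x) / ln(b)
via_log10 = math.log10(x) / math.log10(b)   # log10(x) / log10(b)
direct = math.log(x, b)                     # math.log accepts an optional base argument
print(via_ln, via_log10, direct)            # all three agree (up to rounding), about 3.734
```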
Given a number x and its logarithm y = logb(x) to an unknown base b, the base is given by:
b = x^(1/y),
which can be seen from taking the defining equation x = b^y to the power of 1/y.
Among all choices for the base, three are particularly common. These are b = 10, b = e (the irrational mathematical constant ≈ 2.71828), and b = 2 (the binary logarithm). In mathematical analysis, the logarithm base e is widespread because of analytical properties explained below. On the other hand, base-10 logarithms are easy to use for manual calculations in the decimal number system:
log10(10x) = log10(10) + log10(x) = 1 + log10(x).
Thus, log10 (x) is related to the number of decimal digits of a positive integer x: the number of digits is the smallest integer strictly bigger than log10 (x). For example, log10(1430) is approximately 3.15. The next integer is 4, which is the number of digits of 1430. Both the natural logarithm and the logarithm to base two are used in information theory, corresponding to the use of nats or bits as the fundamental units of information, respectively. Binary logarithms are also used in computer science, where the binary system is ubiquitous; in music theory, where a pitch ratio of two (the octave) is ubiquitous and the number of cents between any two pitches is the binary logarithm, times 1200, of their ratio (that is, 100 cents per equal-temperament semitone); and in photography to measure exposure values, light levels, exposure times, apertures, and film speeds in "stops".
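The digit-count remark can be checked directly; the following sketch uses a few arbitrarily chosen integers.

```python
import math

# The number of decimal digits of a positive integer x is the smallest integer
# strictly greater than log10(x), i.e. floor(log10(x)) + 1.
for x in (9, 10, 1430, 10**6):
    digits = math.floor(math.log10(x)) + 1
    print(x, round(math.log10(x), 3), digits, len(str(x)))  # the last two columns agree
```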
The following table lists common notations for logarithms to these bases and the fields where they are used. Many disciplines write log x instead of logb x, when the intended base can be determined from the context. The notation blog x also occurs. The "ISO notation" column lists designations suggested by the International Organization for Standardization (ISO 80000-2). Because the notation log x has been used for all three bases (or when the base is indeterminate or immaterial), the intended base must often be inferred based on context or discipline. In computer science, log usually refers to log2, and in mathematics log usually refers to loge. In other contexts, log often means log10.
|Base b|Name for logb x|ISO notation|Other notations|Used in|
|---|---|---|---|---|
|2|binary logarithm|lb x|ld x, log x, lg x, log2 x|computer science, information theory, bioinformatics, music theory, photography|
|e|natural logarithm|ln x|log x (in mathematics and many programming languages), loge x|mathematics, physics, chemistry, statistics, economics, information theory, and engineering|
|10|common logarithm|lg x|log x, log10 x (in engineering, biology, astronomy)|various engineering fields (see decibel and see below), logarithm tables, handheld calculators, spectroscopy|
|b|logarithm to base b|logb x| |mathematics|
Main article: History of logarithms
The history of logarithms in seventeenth-century Europe is the discovery of a new function that extended the realm of analysis beyond the scope of algebraic methods. The method of logarithms was publicly propounded by John Napier in 1614, in a book titled Mirifici Logarithmorum Canonis Descriptio (Description of the Wonderful Rule of Logarithms). Prior to Napier's invention, there had been other techniques of similar scopes, such as the prosthaphaeresis or the use of tables of progressions, extensively developed by Jost Bürgi around 1600. Napier coined the term for logarithm in Middle Latin, “logarithmus,” derived from the Greek, literally meaning, “ratio-number,” from logos “proportion, ratio, word” + arithmos “number”.
The common logarithm of a number is the index of that power of ten which equals the number. Speaking of a number as requiring so many figures is a rough allusion to common logarithm, and was referred to by Archimedes as the “order of a number”. The first real logarithms were heuristic methods to turn multiplication into addition, thus facilitating rapid computation. Some of these methods used tables derived from trigonometric identities. Such methods are called prosthaphaeresis.
Invention of the function now known as the natural logarithm began as an attempt to perform a quadrature of a rectangular hyperbola by Grégoire de Saint-Vincent, a Belgian Jesuit residing in Prague. Archimedes had written The Quadrature of the Parabola in the third century BC, but a quadrature for the hyperbola eluded all efforts until Saint-Vincent published his results in 1647. The relation that the logarithm provides between a geometric progression in its argument and an arithmetic progression of values, prompted A. A. de Sarasa to make the connection of Saint-Vincent's quadrature and the tradition of logarithms in prosthaphaeresis, leading to the term "hyperbolic logarithm", a synonym for natural logarithm. Soon the new function was appreciated by Christiaan Huygens, and James Gregory. The notation Log y was adopted by Leibniz in 1675, and the next year he connected it to the integral ∫ dy/y.
Before Euler developed his modern conception of complex natural logarithms, Roger Cotes had a nearly equivalent result when he showed in 1714 that ix = ln(cos x + i sin x).
By simplifying difficult calculations before calculators and computers became available, logarithms contributed to the advance of science, especially astronomy. They were critical to advances in surveying, celestial navigation, and other domains. Pierre-Simon Laplace called logarithms "an admirable artifice which, by reducing to a few days the labour of many months, doubles the life of the astronomer, and spares him the errors and disgust inseparable from long calculations".
As the function f(x) = bx is the inverse function of logb x, it has been called an antilogarithm. Nowadays, this function is more commonly called an exponential function.
A key tool that enabled the practical use of logarithms was the table of logarithms. The first such table was compiled by Henry Briggs in 1617, immediately after Napier's invention but with the innovation of using 10 as the base. Briggs' first table contained the common logarithms of all integers in the range from 1 to 1000, with a precision of 14 digits. Subsequently, tables with increasing scope were written. These tables listed the values of log10 x for any number x in a certain range, at a certain precision. Base-10 logarithms were universally used for computation, hence the name common logarithm, since numbers that differ by factors of 10 have logarithms that differ by integers. The common logarithm of x can be separated into an integer part and a fractional part, known as the characteristic and mantissa. Tables of logarithms need only include the mantissa, as the characteristic can be easily determined by counting digits from the decimal point. The characteristic of 10 · x is one plus the characteristic of x, and their mantissas are the same. Thus using a three-digit log table, the logarithm of 3542 is approximated by log10(3542) = log10(1000 · 3.542) = 3 + log10(3.542) ≈ 3 + log10(3.54).
Greater accuracy can be obtained by interpolation: log10(3542) ≈ 3 + log10(3.54) + 0.2 · (log10(3.55) − log10(3.54)).
The value of 10^x can be determined by reverse look-up in the same table, since the logarithm is a monotonic function.
The product and quotient of two positive numbers c and d were routinely calculated as the sum and difference of their logarithms. The product cd or quotient c/d came from looking up the antilogarithm of the sum or difference, via the same table:
cd = 10^(log10(c) + log10(d)) and c/d = 10^(log10(c) − log10(d)).
For manual calculations that demand any appreciable precision, performing the lookups of the two logarithms, calculating their sum or difference, and looking up the antilogarithm is much faster than performing the multiplication by earlier methods such as prosthaphaeresis, which relies on trigonometric identities.
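The sketch below imitates the table procedure by rounding each logarithm to four decimal places (as a printed table would give them) before adding, then taking the antilogarithm; the numbers 3542 and 728 are arbitrary examples, not taken from the text.

```python
import math

c, d = 3542.0, 728.0
log_c = round(math.log10(c), 4)            # "table lookup" of log10(c)
log_d = round(math.log10(d), 4)            # "table lookup" of log10(d)
approx_product = 10 ** (log_c + log_d)     # antilogarithm of the sum
print(approx_product, c * d)               # the two values agree to about four significant figures
```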
Calculations of powers and roots are reduced to multiplications or divisions and look-ups by
c^d = 10^(d · log10(c)) and c^(1/d) = 10^(log10(c) / d).
Trigonometric calculations were facilitated by tables that contained the common logarithms of trigonometric functions.
Another critical application was the slide rule, a pair of logarithmically divided scales used for calculation. The non-sliding logarithmic scale, Gunter's rule, was invented shortly after Napier's invention. William Oughtred enhanced it to create the slide rule—a pair of logarithmic scales movable with respect to each other. Numbers are placed on sliding scales at distances proportional to the differences between their logarithms. Sliding the upper scale appropriately amounts to mechanically adding logarithms, as illustrated here:
For example, adding the distance from 1 to 2 on the lower scale to the distance from 1 to 3 on the upper scale yields a product of 6, which is read off at the lower part. The slide rule was an essential calculating tool for engineers and scientists until the 1970s, because it allows, at the expense of precision, much faster computation than techniques based on tables.
A deeper study of logarithms requires the concept of a function. A function is a rule that, given one number, produces another number. An example is the function producing the x-th power of b from any real number x, where the base b is a fixed number. This function is written as f(x) = b^x. When b is positive and unequal to 1, we show below that f is invertible when considered as a function from the reals to the positive reals.
Let b be a positive real number not equal to 1 and let f(x) = b^x.
It is a standard result in real analysis that any continuous strictly monotonic function is bijective between its domain and range. This fact follows from the intermediate value theorem. Now, f is strictly increasing (for b > 1), or strictly decreasing (for 0 < b < 1), is continuous, has domain ℝ, and has range (0, ∞), the positive real numbers. Therefore, f is a bijection from ℝ to (0, ∞). In other words, for each positive real number y, there is exactly one real number x such that b^x = y.
We let logb denote the inverse of f. That is, logb y is the unique real number x such that b^x = y. This function is called the base-b logarithm function or logarithmic function (or just logarithm).
The function logb x can also be essentially characterized by the product formula logb(xy) = logb x + logb y.
More precisely, the logarithm to any base b > 1 is the only increasing function f from the positive reals to the reals satisfying f(b) = 1 and f(xy) = f(x) + f(y).
As discussed above, the function logb is the inverse to the exponential function x ↦ b^x. Therefore, their graphs correspond to each other upon exchanging the x- and the y-coordinates (or upon reflection at the diagonal line x = y), as shown at the right: a point (t, u = b^t) on the graph of f yields a point (u, t = logb u) on the graph of the logarithm and vice versa. As a consequence, logb (x) diverges to infinity (gets bigger than any given number) if x grows to infinity, provided that b is greater than one. In that case, logb(x) is an increasing function. For b < 1, logb (x) tends to minus infinity instead. When x approaches zero, logb x goes to minus infinity for b > 1 (plus infinity for b < 1, respectively).
Analytic properties of functions pass to their inverses. Thus, as f(x) = b^x is a continuous and differentiable function, so is logb y. Roughly, a continuous function is differentiable if its graph has no sharp "corners". Moreover, as the derivative of f(x) evaluates to ln(b) · b^x by the properties of the exponential function, the chain rule implies that the derivative of logb x is given by d/dx logb x = 1/(x ln(b)).
That is, the slope of the tangent touching the graph of the base-b logarithm at the point (x, logb (x)) equals 1/(x ln(b)).
The derivative of ln(x) is 1/x; this implies that ln(x) is the unique antiderivative of 1/x that has the value 0 for x = 1. It is this very simple formula that motivated qualifying the natural logarithm as "natural"; it is also one of the main reasons for the importance of the constant e.
The derivative with a generalized functional argument f(x) is d/dx ln(f(x)) = f'(x)/f(x).
The quotient at the right hand side is called the logarithmic derivative of f. Computing f'(x) by means of the derivative of ln(f(x)) is known as logarithmic differentiation. The antiderivative of the natural logarithm ln(x) is: ∫ ln(x) dx = x ln(x) − x + C.
Related formulas, such as antiderivatives of logarithms to other bases, can be derived from this equation using the change of base formula.
The natural logarithm of t can be defined as the definite integral: ln(t) = ∫_1^t (1/x) dx.
This definition has the advantage that it does not rely on the exponential function or any trigonometric functions; the definition is in terms of an integral of a simple reciprocal. As an integral, ln(t) equals the area between the x-axis and the graph of the function 1/x, ranging from x = 1 to x = t. This is a consequence of the fundamental theorem of calculus and the fact that the derivative of ln(x) is 1/x. Product and power logarithm formulas can be derived from this definition. For example, the product formula ln(tu) = ln(t) + ln(u) is deduced as: ln(tu) = ∫_1^{tu} (1/x) dx = ∫_1^t (1/x) dx + ∫_t^{tu} (1/x) dx (equality (1)) = ln(t) + ∫_1^u (1/w) dw (equality (2), with w = x/t) = ln(t) + ln(u).
The equality (1) splits the integral into two parts, while the equality (2) is a change of variable (w = x/t). In the illustration below, the splitting corresponds to dividing the area into the yellow and blue parts. Rescaling the left hand blue area vertically by the factor t and shrinking it by the same factor horizontally does not change its size. Moving it appropriately, the area fits the graph of the function f(x) = 1/x again. Therefore, the left hand blue area, which is the integral of f(x) from t to tu is the same as the integral from 1 to u. This justifies the equality (2) with a more geometric proof.
The power formula ln(t^r) = r ln(t) may be derived in a similar way: ln(t^r) = ∫_1^{t^r} (1/x) dx = ∫_1^t (1/w^r) (r w^(r−1)) dw = r ∫_1^t (1/w) dw = r ln(t).
The second equality uses a change of variables (integration by substitution), w = x^(1/r).
The sum over the reciprocals of natural numbers, 1 + 1/2 + 1/3 + ⋯ + 1/n,
is called the harmonic series. It is closely tied to the natural logarithm: as n tends to infinity, the difference, (1 + 1/2 + ⋯ + 1/n) − ln(n),
converges (i.e. gets arbitrarily close) to a number known as the Euler–Mascheroni constant γ = 0.5772.... This relation aids in analyzing the performance of algorithms such as quicksort.
Real numbers that are not algebraic are called transcendental; for example, π and e are such numbers, but √2 is not. Almost all real numbers are transcendental. The logarithm is an example of a transcendental function. The Gelfond–Schneider theorem asserts that logarithms usually take transcendental, i.e. "difficult", values.
Logarithms are easy to compute in some cases, such as log10 (1000) = 3. In general, logarithms can be calculated using power series or the arithmetic–geometric mean, or be retrieved from a precalculated logarithm table that provides a fixed precision. Newton's method, an iterative method to solve equations approximately, can also be used to calculate the logarithm, because its inverse function, the exponential function, can be computed efficiently. Using look-up tables, CORDIC-like methods can be used to compute logarithms by using only the operations of addition and bit shifts. Moreover, the binary logarithm algorithm calculates lb(x) recursively, based on repeated squarings of x, taking advantage of the relation log2(x^2) = 2 log2(x).
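As a rough sketch of the repeated-squaring idea (not a production implementation), the following routine extracts one fractional bit of lb(x) per squaring, assuming 1 ≤ x < 2:

    def binary_log(x, bits=20):
        # assumes 1 <= x < 2; squaring x doubles its binary logarithm
        assert 1 <= x < 2
        result, bit_value = 0.0, 0.5
        for _ in range(bits):
            x = x * x
            if x >= 2:          # the doubled logarithm has gained an integer bit
                x /= 2
                result += bit_value
            bit_value /= 2
        return result

    print(binary_log(1.5))      # ~0.585, i.e. lb(1.5)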
For any real number z that satisfies 0 < z ≤ 2, the following formula holds:[nb 4] ln(z) = (z − 1) − (z − 1)^2/2 + (z − 1)^3/3 − (z − 1)^4/4 + ⋯ = Σ_{k=1}^∞ (−1)^(k+1) (z − 1)^k / k.
This is a shorthand for saying that ln(z) can be approximated to a more and more accurate value by the following expressions: (z − 1), then (z − 1) − (z − 1)^2/2, then (z − 1) − (z − 1)^2/2 + (z − 1)^3/3, and so on.
For example, with z = 1.5 the third approximation yields 0.4167, which is about 0.011 greater than ln(1.5) = 0.405465. This series approximates ln(z) with arbitrary precision, provided the number of summands is large enough. In elementary calculus, ln(z) is therefore the limit of this series. It is the Taylor series of the natural logarithm at z = 1. The Taylor series of ln(z) provides a particularly useful approximation to ln(1 + z) when z is small, |z| < 1, since then ln(1 + z) ≈ z.
For example, with z = 0.1 the first-order approximation gives ln(1.1) ≈ 0.1, which is less than 5% off the correct value 0.0953.
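A short sketch of these partial sums (illustrative only; the term counts below are arbitrary choices):

    def ln_taylor(z, terms):
        # Taylor series of ln at 1: sum of (-1)^(k+1) * (z-1)^k / k, valid for 0 < z <= 2
        w = z - 1
        return sum((-1) ** (k + 1) * w ** k / k for k in range(1, terms + 1))

    print(ln_taylor(1.5, 3))   # 0.41666..., the third approximation quoted above
    print(ln_taylor(1.1, 1))   # 0.1, the first-order approximation of ln(1.1)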
Although the series only converges for 0 < z ≤ 2, a neat trick can fix this.
As ln(z) = −ln(1/z) for all z > 0, the logarithm of any z > 2 can be computed by applying the series to 1/z, which lies within the range of convergence.
Another series is based on the inverse hyperbolic tangent function: ln(z) = 2 artanh((z − 1)/(z + 1)) = 2 ((z − 1)/(z + 1) + (1/3)((z − 1)/(z + 1))^3 + (1/5)((z − 1)/(z + 1))^5 + ⋯),
for any real number z > 0.[nb 5] Using sigma notation, this is also written as ln(z) = 2 Σ_{k=0}^∞ (1/(2k + 1)) ((z − 1)/(z + 1))^(2k+1).
This series can be derived from the above Taylor series. It converges quicker than the Taylor series, especially if z is close to 1. For example, for z = 1.5, the first three terms of the second series approximate ln(1.5) with an error of about 3×10^−6. The quick convergence for z close to 1 can be taken advantage of in the following way: given a low-accuracy approximation y ≈ ln(z) and putting A = z / exp(y),
the logarithm of z is: ln(z) = y + ln(A).
The better the initial approximation y is, the closer A is to 1, so its logarithm can be calculated efficiently. A can be calculated using the exponential series, which converges quickly provided y is not too large. Calculating the logarithm of larger z can be reduced to smaller values of z by writing z = a · 10^b, so that ln(z) = ln(a) + b · ln(10).
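A small sketch of this refinement step (math.exp stands in for evaluating the exponential series, and the names and term count below are illustrative):

    import math

    def ln_artanh(a, terms=5):
        # artanh-based series: ln(a) = 2 * sum of u^(2k+1)/(2k+1), with u = (a-1)/(a+1)
        u = (a - 1) / (a + 1)
        return 2 * sum(u ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

    def refine_ln(z, y):
        A = z * math.exp(-y)        # A is close to 1 when y is close to ln(z)
        return y + ln_artanh(A)

    print(refine_ln(1.5, 0.4))      # ~0.405465, the true value of ln(1.5)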
A closely related method can be used to compute the logarithm of integers. Putting z = (n + 1)/n in the above series, it follows that: ln(n + 1) = ln(n) + 2 Σ_{k=0}^∞ (1/(2k + 1)) (1/(2n + 1))^(2k+1).
If the logarithm of a large integer n is known, then this series yields a fast converging series for ln(n + 1), with a rate of convergence of (1/(2n + 1))^2.
The arithmetic–geometric mean yields high precision approximations of the natural logarithm. Sasaki and Kanada showed in 1982 that it was particularly fast for precisions between 400 and 1000 decimal places, while Taylor series methods were typically faster when less precision was needed. In their work ln(x) is approximated to a precision of 2^−p (or p precise bits) by the following formula (due to Carl Friedrich Gauss): ln(x) ≈ π / (2 M(1, 2^(2−m)/x)) − m ln(2).
Here M(x, y) denotes the arithmetic–geometric mean of x and y. It is obtained by repeatedly calculating the average (x + y)/2 (arithmetic mean) and √(x·y) (geometric mean) of x and y, and then letting those two numbers become the next x and y. The two numbers quickly converge to a common limit which is the value of M(x, y). m is chosen such that x · 2^m > 2^(p/2),
to ensure the required precision. A larger m makes the M(x, y) calculation take more steps (the initial x and y are farther apart so it takes more steps to converge) but gives more precision. The constants π and ln(2) can be calculated with quickly converging series.
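A compact sketch of this method in ordinary double precision (the choice of m and the stopping tolerance are illustrative, and math.pi and math.log(2) stand in for precomputed high-precision constants):

    import math

    def agm(a, b, tol=1e-17):
        # arithmetic-geometric mean by repeated averaging
        while abs(a - b) > tol * max(a, b):
            a, b = (a + b) / 2, math.sqrt(a * b)
        return (a + b) / 2

    def ln_agm(x, p=53):
        # choose m so that x * 2^m exceeds 2^(p/2), then apply Gauss's formula
        m = max(0, math.ceil(p / 2 + 1 - math.log2(x)))
        return math.pi / (2 * agm(1.0, 2.0 ** (2 - m) / x)) - m * math.log(2)

    print(ln_agm(2.0))   # ~0.6931..., compare ln(2)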
While at Los Alamos National Laboratory working on the Manhattan Project, Richard Feynman developed a bit-processing algorithm, to compute the logarithm, that is similar to long division and was later used in the Connection Machine. The algorithm uses the fact that every real number 1 < x < 2 is representable as a product of distinct factors of the form 1 + 2^−k. The algorithm sequentially builds that product P, starting with P = 1 and k = 1: if P · (1 + 2^−k) < x, then it changes P to P · (1 + 2^−k). It then increases k by one regardless. The algorithm stops when k is large enough to give the desired accuracy. Because log(x) is the sum of the terms of the form log(1 + 2^−k) corresponding to those k for which the factor 1 + 2^−k was included in the product P, log(x) may be computed by simple addition, using a table of log(1 + 2^−k) for all k. Any base may be used for the logarithm table.
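A sketch of that procedure for 1 < x < 2 (here a small dictionary built with math.log plays the role of the precomputed table, and the cutoff max_k is an arbitrary choice):

    import math

    def feynman_log(x, max_k=40):
        assert 1 < x < 2
        table = {k: math.log(1 + 2.0 ** -k) for k in range(1, max_k + 1)}
        P, result = 1.0, 0.0
        for k in range(1, max_k + 1):
            candidate = P * (1 + 2.0 ** -k)
            if candidate < x:       # keep this factor in the product ...
                P = candidate
                result += table[k]  # ... and add its table entry to the logarithm
        return result

    print(feynman_log(1.5))         # ~0.405465 = ln(1.5)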
Logarithms have many applications inside and outside mathematics. Some of these occurrences are related to the notion of scale invariance. For example, each chamber of the shell of a nautilus is an approximate copy of the next one, scaled by a constant factor. This gives rise to a logarithmic spiral. Benford's law on the distribution of leading digits can also be explained by scale invariance. Logarithms are also linked to self-similarity. For example, logarithms appear in the analysis of algorithms that solve a problem by dividing it into two similar smaller problems and patching their solutions. The dimensions of self-similar geometric shapes, that is, shapes whose parts resemble the overall picture are also based on logarithms. Logarithmic scales are useful for quantifying the relative change of a value as opposed to its absolute difference. Moreover, because the logarithmic function log(x) grows very slowly for large x, logarithmic scales are used to compress large-scale scientific data. Logarithms also occur in numerous scientific formulas, such as the Tsiolkovsky rocket equation, the Fenske equation, or the Nernst equation.
Main article: Logarithmic scale
Scientific quantities are often expressed as logarithms of other quantities, using a logarithmic scale. For example, the decibel is a unit of measurement associated with logarithmic-scale quantities. It is based on the common logarithm of ratios—10 times the common logarithm of a power ratio or 20 times the common logarithm of a voltage ratio. It is used to quantify the loss of voltage levels in transmitting electrical signals, to describe power levels of sounds in acoustics, and the absorbance of light in the fields of spectrometry and optics. The signal-to-noise ratio describing the amount of unwanted noise in relation to a (meaningful) signal is also measured in decibels. In a similar vein, the peak signal-to-noise ratio is commonly used to assess the quality of sound and image compression methods using the logarithm.
The strength of an earthquake is measured by taking the common logarithm of the energy emitted by the quake. This is used in the moment magnitude scale or the Richter magnitude scale. For example, a 5.0 earthquake releases 32 times (10^1.5) and a 6.0 releases 1000 times (10^3) the energy of a 4.0. Apparent magnitude measures the brightness of stars logarithmically. In chemistry the negative of the decimal logarithm, the decimal cologarithm, is indicated by the letter p. For instance, pH is the decimal cologarithm of the activity of hydronium ions (the form hydrogen ions H+ take in water). The activity of hydronium ions in neutral water is 10^−7 mol·L^−1, hence a pH of 7. Vinegar typically has a pH of about 3. The difference of 4 corresponds to a ratio of 10^4 of the activity, that is, vinegar's hydronium ion activity is about 10^−3 mol·L^−1.
Semilog (log–linear) graphs use the logarithmic scale concept for visualization: one axis, typically the vertical one, is scaled logarithmically. For example, the chart at the right compresses the steep increase from 1 million to 1 trillion to the same space (on the vertical axis) as the increase from 1 to 1 million. In such graphs, exponential functions of the form f(x) = a · bx appear as straight lines with slope equal to the logarithm of b. Log-log graphs scale both axes logarithmically, which causes functions of the form f(x) = a · xk to be depicted as straight lines with slope equal to the exponent k. This is applied in visualizing and analyzing power laws.
Logarithms occur in several laws describing human perception: Hick's law proposes a logarithmic relation between the time individuals take to choose an alternative and the number of choices they have. Fitts's law predicts that the time required to rapidly move to a target area is a logarithmic function of the distance to and the size of the target. In psychophysics, the Weber–Fechner law proposes a logarithmic relationship between stimulus and sensation such as the actual vs. the perceived weight of an item a person is carrying. (This "law", however, is less realistic than more recent models, such as Stevens's power law.)
Psychological studies found that individuals with little mathematics education tend to estimate quantities logarithmically, that is, they position a number on an unmarked line according to its logarithm, so that 10 is positioned as close to 100 as 100 is to 1000. Increasing education shifts this to a linear estimate (positioning 1000 10 times as far away) in some circumstances, while logarithms are used when the numbers to be plotted are difficult to plot linearly.
Logarithms arise in probability theory: the law of large numbers dictates that, for a fair coin, as the number of coin-tosses increases to infinity, the observed proportion of heads approaches one-half. The fluctuations of this proportion about one-half are described by the law of the iterated logarithm.
Logarithms also occur in log-normal distributions. When the logarithm of a random variable has a normal distribution, the variable is said to have a log-normal distribution. Log-normal distributions are encountered in many fields, wherever a variable is formed as the product of many independent positive random variables, for example in the study of turbulence.
Logarithms are used for maximum-likelihood estimation of parametric statistical models. For such a model, the likelihood function depends on at least one parameter that must be estimated. A maximum of the likelihood function occurs at the same parameter-value as a maximum of the logarithm of the likelihood (the "log likelihood"), because the logarithm is an increasing function. The log-likelihood is easier to maximize, especially for the multiplied likelihoods for independent random variables.
Benford's law describes the occurrence of digits in many data sets, such as heights of buildings. According to Benford's law, the probability that the first decimal-digit of an item in the data sample is d (from 1 to 9) equals log10 (d + 1) − log10 (d), regardless of the unit of measurement. Thus, about 30% of the data can be expected to have 1 as first digit, 18% start with 2, etc. Auditors examine deviations from Benford's law to detect fraudulent accounting.
Analysis of algorithms is a branch of computer science that studies the performance of algorithms (computer programs solving a certain problem). Logarithms are valuable for describing algorithms that divide a problem into smaller ones, and join the solutions of the subproblems.
For example, to find a number in a sorted list, the binary search algorithm checks the middle entry and proceeds with the half before or after the middle entry if the number is still not found. This algorithm requires, on average, log2 (N) comparisons, where N is the list's length. Similarly, the merge sort algorithm sorts an unsorted list by dividing the list into halves and sorting these first before merging the results. Merge sort algorithms typically require a time approximately proportional to N · log(N). The base of the logarithm is not specified here, because the result only changes by a constant factor when another base is used. A constant factor is usually disregarded in the analysis of algorithms under the standard uniform cost model.
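A quick sketch illustrating the log2(N) behaviour of binary search (the probe counter is added only for demonstration):

    def binary_search(sorted_list, target):
        lo, hi, probes = 0, len(sorted_list) - 1, 0
        while lo <= hi:
            mid = (lo + hi) // 2
            probes += 1                      # one comparison against the middle entry
            if sorted_list[mid] == target:
                return mid, probes
            if sorted_list[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1, probes

    print(binary_search(list(range(1024)), 700))   # found after 10 probes; log2(1024) = 10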
A function f(x) is said to grow logarithmically if f(x) is (exactly or approximately) proportional to the logarithm of x. (Biological descriptions of organism growth, however, use this term for an exponential function.) For example, any natural number N can be represented in binary form in no more than log2 N + 1 bits. In other words, the amount of memory needed to store N grows logarithmically with N.
Entropy is broadly a measure of the disorder of some system. In statistical thermodynamics, the entropy S of some physical system is defined as S = −k Σ_i p_i ln(p_i).
The sum is over all possible states i of the system in question, such as the positions of gas particles in a container. Moreover, pi is the probability that the state i is attained and k is the Boltzmann constant. Similarly, entropy in information theory measures the quantity of information. If a message recipient may expect any one of N possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as log2 N bits.
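The information-theoretic version is easy to sketch; for N equally likely messages it reduces to log2(N) bits:

    import math

    def entropy_bits(probabilities):
        # Shannon entropy H = -sum p_i * log2(p_i), ignoring zero-probability outcomes
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(entropy_bits([0.25] * 4))          # 2.0 bits = log2(4)
    print(entropy_bits([0.5, 0.25, 0.25]))   # 1.5 bits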
Lyapunov exponents use logarithms to gauge the degree of chaoticity of a dynamical system. For example, for a particle moving on an oval billiard table, even small changes of the initial conditions result in very different paths of the particle. Such systems are chaotic in a deterministic way, because small measurement errors of the initial state predictably lead to largely different final states. At least one Lyapunov exponent of a deterministically chaotic system is positive.
Logarithms occur in definitions of the dimension of fractals. Fractals are geometric objects that are self-similar in the sense that small parts reproduce, at least roughly, the entire global structure. The Sierpinski triangle (pictured) can be covered by three copies of itself, each having sides half the original length. This makes the Hausdorff dimension of this structure ln(3)/ln(2) ≈ 1.58. Another logarithm-based notion of dimension is obtained by counting the number of boxes needed to cover the fractal in question.
Logarithms are related to musical tones and intervals. In equal temperament, the frequency ratio depends only on the interval between two tones, not on the specific frequency, or pitch, of the individual tones. For example, the note A has a frequency of 440 Hz and B-flat has a frequency of 466 Hz. The interval between A and B-flat is a semitone, as is the one between B-flat and B (frequency 493 Hz). Accordingly, the frequency ratios agree: 466/440 ≈ 493/466 ≈ 1.059 ≈ 2^(1/12).
Therefore, logarithms can be used to describe the intervals: an interval is measured in semitones by taking the base-2^(1/12) logarithm of the frequency ratio, while the base-2^(1/1200) logarithm of the frequency ratio expresses the interval in cents, hundredths of a semitone. The latter is used for finer encoding, as it is needed for non-equal temperaments.
(the two tones are played at the same time)
|Interval||1/12 tone||Semitone||Just major third||Major third||Tritone||Octave|
|Frequency ratio r||2^(1/72)||2^(1/12)||5/4||2^(4/12)||2^(6/12)||2|
|Corresponding number of semitones||1/6||1||≈ 3.86||4||6||12|
|Corresponding number of cents||16 2/3||100||≈ 386.31||400||600||1200|
Natural logarithms are closely linked to counting prime numbers (2, 3, 5, 7, 11, ...), an important topic in number theory. For any integer x, the quantity of prime numbers less than or equal to x is denoted π(x). The prime number theorem asserts that π(x) is approximately given by x / ln(x),
in the sense that the ratio of π(x) and that fraction approaches 1 when x tends to infinity. As a consequence, the probability that a randomly chosen number between 1 and x is prime is inversely proportional to the number of decimal digits of x. A far better estimate of π(x) is given by the offset logarithmic integral function Li(x), defined by Li(x) = ∫_2^x dt / ln(t).
The Riemann hypothesis, one of the oldest open mathematical conjectures, can be stated in terms of comparing π(x) and Li(x). The Erdős–Kac theorem describing the number of distinct prime factors also involves the natural logarithm.
The logarithm of n factorial, n! = 1 · 2 · ... · n, is given by ln(n!) = ln(1) + ln(2) + ⋯ + ln(n).
This can be used to obtain Stirling's formula, an approximation of n! for large n.
Main article: Complex logarithm
All the complex numbers a that solve the equation e^a = z
are called complex logarithms of z, when z is (considered as) a complex number. A complex number is commonly represented as z = x + iy, where x and y are real numbers and i is an imaginary unit, the square of which is −1. Such a number can be visualized by a point in the complex plane, as shown at the right. The polar form encodes a non-zero complex number z by its absolute value, that is, the (positive, real) distance r to the origin, and an angle between the real (x) axis Re and the line passing through both the origin and z. This angle is called the argument of z.
The absolute value r of z is given by r = √(x^2 + y^2).
Using the geometrical interpretation of sine and cosine and their periodicity in 2π, any complex number z may be denoted as z = r · (cos(φ + 2kπ) + i sin(φ + 2kπ)),
for any integer number k. Evidently the argument of z is not uniquely specified: both φ and φ' = φ + 2kπ are valid arguments of z for all integers k, because adding 2kπ radians or k⋅360°[nb 6] to φ corresponds to "winding" around the origin counter-clock-wise by k turns. The resulting complex number is always z, as illustrated at the right for k = 1. One may select exactly one of the possible arguments of z as the so-called principal argument, denoted Arg(z), with a capital A, by requiring φ to belong to one, conveniently selected turn, e.g. −π < φ ≤ π or 0 ≤ φ < 2π. These regions, where the argument of z is uniquely determined are called branches of the argument function.
Euler's formula connects the trigonometric functions sine and cosine to the complex exponential: e^(iφ) = cos φ + i sin φ.
Using this formula, and again the periodicity, the following identities hold: z = r (cos(φ + 2kπ) + i sin(φ + 2kπ)) = r e^(i(φ + 2kπ)) = e^(ln(r)) e^(i(φ + 2kπ)) = e^(ln(r) + i(φ + 2kπ)) = e^(a_k),
where ln(r) is the unique real natural logarithm, a_k denote the complex logarithms of z, and k is an arbitrary integer. Therefore, the complex logarithms of z, which are all those complex values a_k for which the a_k-th power of e equals z, are the infinitely many values a_k = ln(r) + i(φ + 2kπ), for arbitrary integers k.
Taking k such that φ + 2kπ is within the defined interval for the principal arguments, then ak is called the principal value of the logarithm, denoted Log(z), again with a capital L. The principal argument of any positive real number x is 0; hence Log(x) is a real number and equals the real (natural) logarithm. However, the above formulas for logarithms of products and powers do not generalize to the principal value of the complex logarithm.
The illustration at the right depicts Log(z), confining the arguments of z to the interval (−π, π]. This way the corresponding branch of the complex logarithm has discontinuities all along the negative real x axis, which can be seen in the jump in the hue there. This discontinuity arises from jumping to the other boundary in the same branch, when crossing a boundary, i.e. not changing to the corresponding k-value of the continuously neighboring branch. Such a locus is called a branch cut. Dropping the range restrictions on the argument makes the relations "argument of z", and consequently the "logarithm of z", multi-valued functions.
Exponentiation occurs in many areas of mathematics and its inverse function is often referred to as the logarithm. For example, the logarithm of a matrix is the (multi-valued) inverse function of the matrix exponential. Another example is the p-adic logarithm, the inverse function of the p-adic exponential. Both are defined via Taylor series analogous to the real case. In the context of differential geometry, the exponential map maps the tangent space at a point of a manifold to a neighborhood of that point. Its inverse is also called the logarithmic (or log) map.
In the context of finite groups exponentiation is given by repeatedly multiplying one group element b with itself. The discrete logarithm is the integer n solving the equation b^n = x,
where x is an element of the group. Carrying out the exponentiation can be done efficiently, but the discrete logarithm is believed to be very hard to calculate in some groups. This asymmetry has important applications in public key cryptography, such as for example in the Diffie–Hellman key exchange, a routine that allows secure exchanges of cryptographic keys over unsecured information channels. Zech's logarithm is related to the discrete logarithm in the multiplicative group of non-zero elements of a finite field.
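A toy sketch of the Diffie–Hellman idea with deliberately tiny numbers (real systems use groups whose elements have hundreds of digits); pow(b, n, p) performs fast modular exponentiation, while recovering a secret exponent from the transmitted public value is the discrete-logarithm problem assumed to be hard:

    p, b = 23, 5                       # public prime modulus and generator (toy values)
    alice_secret, bob_secret = 6, 15   # private exponents, never transmitted

    alice_public = pow(b, alice_secret, p)   # 5^6  mod 23 = 8
    bob_public = pow(b, bob_secret, p)       # 5^15 mod 23 = 19

    shared_by_alice = pow(bob_public, alice_secret, p)
    shared_by_bob = pow(alice_public, bob_secret, p)
    print(shared_by_alice == shared_by_bob)  # True: both sides derive the same key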
Further logarithm-like inverse functions include the double logarithm ln(ln(x)), the super- or hyper-4-logarithm (a slight variation of which is called iterated logarithm in computer science), the Lambert W function, and the logit. They are the inverse functions of the double exponential function, tetration, of f(w) = wew, and of the logistic function, respectively.
From the perspective of group theory, the identity log(cd) = log(c) + log(d) expresses a group isomorphism between positive reals under multiplication and reals under addition. Logarithmic functions are the only continuous isomorphisms between these groups. By means of that isomorphism, the Haar measure (Lebesgue measure) dx on the reals corresponds to the Haar measure dx/x on the positive reals. The non-negative reals not only have a multiplication, but also have addition, and form a semiring, called the probability semiring; this is in fact a semifield. The logarithm then takes multiplication to addition (log multiplication), and takes addition to log addition (LogSumExp), giving an isomorphism of semirings between the probability semiring and the log semiring.
Logarithmic one-forms df/f appear in complex analysis and algebraic geometry as differential forms with logarithmic poles.
The polylogarithm is the function defined by Li_s(z) = Σ_{k=1}^∞ z^k / k^s.
It is related to the natural logarithm by Li1 (z) = −ln(1 − z). Moreover, Lis (1) equals the Riemann zeta function ζ(s).
One of the interesting and sometimes even surprising aspects of the analysis of data structures and algorithms is the ubiquitous presence of logarithms ... As is the custom in the computing literature, we omit writing the base b of the logarithm when b = 2.
What is a Coaxial Cable?
• Coaxial cable is a type of cable that has an inner conductor surrounded by a tubular insulating layer, which is in turn surrounded by a tubular conducting shield.
• Coaxial cable (or coax) carries signals of higher frequency ranges than those in twisted-pair cable, because the two media are constructed quite differently: instead of having two parallel wires, coax has a single inner conductor surrounded by a concentric shield.
Construction of Coaxial Cable
• Coaxial cable contains:
Inner Central Core Conductor: a central core conductor of solid or stranded wire, usually copper.
Outer Metallic Conductor (Shield): a metal foil or braid enclosed between the inner and outer insulators.
Inner Insulator: an insulating sheath that encloses the inner core conductor and separates the inner and outer conductors from each other.
Plastic Jacket/Cover: covers the whole cable.
Inner structure of Coaxial Cable
The inner central core, insulator, metallic shield, and plastic jacket.
Working of Coaxial Cable
• Data is transmitted through the central core conductor, which is enclosed in an insulator.
• The outer metallic wrapping serves as:
A second conductor line to ground.
A shield against noise.
• Both of these conductors are parallel and share the same axis, which completes the circuit. This is why the cable is called coaxial.
• This outer conductor is also enclosed in an insulator.
• Whole cable is protected by a plastic cover.
Working of Coaxial Cable
The inner core is connected to the positive terminal and used as a conductor, while the outer metallic foil is connected to ground and used both as the second conductor and as a shield against noise.
Coaxial Cable Standards
• Coaxial cables are categorized by Radio Government (RG) ratings.
• Each RG number denotes a unique set of specifications.
The gauge of the inner conductor.
The thickness and type of the inner insulator.
The construction of the shield.
The size and type of the outer casing.
• Each cable defined by an RG rating is adapted for a specialized function.
Categories of Coaxial Cable
Defines the categories, their impedances, and their uses.
Types Of Coaxial Cable
There are two types of coaxial cable:
THINNET CABLE (10BASE2 ETHERNET):
10 refers to the rate of data transfer: 10 Mbps.
2 refers to the approximate maximum segment length of 200 meters (the actual limit is 185 meters).
Total segment length is 185 meters (the maximum distance between the two ends of a segment).
Total number of nodes (devices) connected: 30 nodes per segment.
Thinnet cable is a flexible coaxial cable about 0.64
centimeters (0.25 inches) thick.
THICKNET CABLE (10BASE5 ETHERNET):
10 refers to the rate of data transfer: it transfers data at the rate of 10 Mbps (megabits per second).
5 refers to the approximate maximum segment length of 500 meters.
A maximum of 100 workstations is allowed per trunk, with attachment points spaced along the cable at multiples of 2.5 meters.
Segment length is 500 metres.
Thicknet cable is a relatively rigid coaxial cable about 1.27
centimeters (0.5 inches) in diameter
Types of Coaxial Cable
• Hard line:
Used in broadcasting and many other forms
of radio communication.
Using round copper, silver or gold tubing or a
combination of such metals as a shield.
• Radiating cable:
Constructed in a similar fashion to hard line, but with tuned slots cut into the shield.
• RG-6:
Available in four different types designed for various applications.
In addition, the core may be copper-clad steel (CCS) or bare solid copper (BC).
• Triaxial cable (triax):
Coaxial cable with a third layer of shielding and insulation.
The outer shield, which is earthed (grounded), protects the inner shield from electromagnetic interference from outside sources.
Types of Coaxial Cable
• Twin-axial cable:
twisted pair within a cylindrical shield.
It allows a nearly perfect differential signal which
is both shielded and balanced to pass through.
• Biaxial cable:
biax is a configuration of two 50 Ω coaxial cables.
Biax is used in some proprietary computer networks.
Others may be familiar with 75Ω biax which at one
time was popular on many cable TV services.
• Semi-rigid cable:
A coaxial form using a solid copper outer sheath.
It offers superior screening compared to cables with a braided outer conductor, especially at higher frequencies.
• Rigid line:
A coaxial line formed by two copper tubes maintained concentric every other meter using PTFE supports.
Rigid lines cannot be bent, so they often need prefabricated elbows to turn corners.
Connectors of Coaxial Cable
Shield (metal envelope) surrounding the cables: protects the data transmitted on the medium from interference (also called noise) that could corrupt the data.
Outer sheath: protects the cable from the external environment. It is usually made of rubber (or sometimes Polyvinyl Chloride (PVC) or Teflon).
Insulator surrounding the central core: made of a dielectric material that prevents any contact with the shield that could cause electrical interactions (short circuit).
Central core: actually transports the data; it generally consists of a single copper strand or of several braided strands.
Advantages of Coaxial Cable
• Coaxial cable is still the most common means of data
transmission over short distances.
• The advantages are:
They are cheap to make
Cheap to install
Easy to modify
Great channel capacity
Good noise immunity, and hence a low error rate
Disadvantages of Coaxial Cable
• Stray signals entering the cable can cause unwanted noise and picture ghosting, which can make the signal unusable.
• A continuous current flow, even if small, along the
imperfect shield of a coaxial cable can cause visible
or audible interference.
• More expensive than twisted pairs and is not supported
for some network standards.
• It also has high attenuation, so repeaters are needed for transmission over long distances.
Uses of Coaxial Cable
Short coaxial cables are commonly used to connect:
home video equipment,
ham radio setups,
measurement electronics.
They used to be common for implementing computer networks, in particular Ethernet.
Long-distance coaxial cable was used in the 20th century to connect radio and television networks and long-distance telephone networks.
Micro coaxial cables are used in a range of consumer devices, military equipment, and ultra-sound scanning equipment.
The most common impedances that are widely used are 50 or 52 ohms, and 75 ohms, although other impedances are available for specific applications.
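For reference, the characteristic impedance of a coaxial cable is set by its geometry and dielectric; the sketch below uses the standard formula Z0 = (138/√εr)·log10(D/d), with purely illustrative dimensions rather than values taken from a specific RG rating:

    import math

    def coax_impedance(D_mm, d_mm, eps_r):
        # D: inner diameter of the shield, d: diameter of the inner conductor,
        # eps_r: relative permittivity of the dielectric between them
        return 138.0 / math.sqrt(eps_r) * math.log10(D_mm / d_mm)

    print(round(coax_impedance(4.7, 1.4, 2.25), 1))   # ~48.4 ohms, in the 50-ohm family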
Historic map of the Swedish Gold Coast
When the first Europeans arrived in the late 15th century, many inhabitants of the Gold Coast area were striving to consolidate their newly acquired territories and to settle into a secure and permanent environment. Initially, the Gold Coast did not participate in the export slave trade; rather, as Ivor Wilks, a leading historian of Ghana, noted, the Akan purchased slaves from Portuguese traders operating from other parts of Africa, including the Congo and Benin, in order to augment the labour needed for the state formation that was characteristic of this period.
The Portuguese were the first Europeans to arrive. By 1471, they had reached the area that was to become known as the Gold Coast. The Gold Coast was so named because it was an important source of gold. The Portuguese interest in trading for gold, ivory, and pepper so increased that in 1482 the Portuguese built their first permanent trading post on the western coast of present-day Ghana. This fortress, a trade castle called São Jorge da Mina (later called Elmina Castle), was constructed to protect Portuguese trade from European competitors, and after frequent rebuildings and modifications, still stands.
The Portuguese position on the Gold Coast remained secure for over a century. During that time, Lisbon sought to monopolize all trade in the region in royal hands, through appointed officials at São Jorge, and used force to prevent English, French, and Flemish efforts to trade on the coast. By 1598, the Dutch had begun trading on the Gold Coast. The Dutch built forts at Komenda and Kormantsi by 1612. In 1637 they captured Elmina Castle from the Portuguese, and Axim in 1642 (Fort St Anthony). Other European traders joined in by the mid-17th century, largely English, Danes, and Swedes. The coastline was dotted by more than 30 forts and castles built by Dutch, British, and Danish merchants primarily to protect their interests from other Europeans and pirates. The Gold Coast came to have the highest concentration of European military architecture outside of Europe. Sometimes the Europeans were also drawn into conflicts with local inhabitants as they developed commercial alliances with local political authorities. These alliances, often complicated, involved both Europeans attempting to enlist or persuade their closest allies to attack rival European ports and their African allies, or conversely, various African powers seeking to recruit Europeans as mercenaries in their inter-state wars, or as diplomats to resolve conflicts.
Forts were built, abandoned, attacked, captured, sold, and exchanged, and many sites were selected at one time or another for fortified positions by contending European nations.
The Dutch West India Company operated throughout most of the 18th century. The British African Company of Merchants, founded in 1750, was the successor to several earlier organizations of this type. These enterprises built and manned new installations as the companies pursued their trading activities and defended their respective jurisdictions with varying degrees of government backing. There were short-lived ventures by the Swedes and the Prussians. The Danes remained until 1850, when they withdrew from the Gold Coast. The British gained possession of all Dutch coastal forts by the last quarter of the 19th century, thus making them the dominant European power on the Gold Coast.
In the late 17th century, social changes within the polities of the Gold Coast led to transformations in warfare, and to the shift from being a gold exporting and slave importing economy to being a minor local slave exporting economy. To be sure, slavery and slave trading were already firmly entrenched in many African societies before their contact with Europe. In most situations, men as well as women captured in local warfare became slaves. In general, however, slaves in African communities were often treated as members of the society with specific rights, and many were ultimately absorbed into their masters' families as full members. Given traditional methods of agricultural production in Africa, slavery in Africa was quite different from that which existed in the commercial plantation environments of the New World.
Some scholars have challenged the premise that rulers on the Gold Coast engaged in wars of expansion for the sole purpose of acquiring slaves for the export market. For example, the Ashanti waged war mainly to pacify territories that were under Ashanti control, to exact tribute payments from subordinate kingdoms, and to secure access to trade routes—particularly those that connected the interior with the coast.
It is important to mention, however, that the supply of slaves to the Gold Coast was entirely in African hands. Most rulers, such as the kings of the various Akan states, engaged in the slave trade, as did individual local merchants. A good number of the slaves were also brought from various countries in the region and sold to middlemen.
The demographic impact of the slave trade on West Africa was probably substantially greater than the number actually enslaved because a significant number of Africans perished during wars and bandit attacks or while in captivity awaiting transshipment. All nations with an interest in West Africa participated in the slave trade. Relations between the Europeans and the local populations were often strained, and distrust led to frequent clashes. Disease caused high losses among the Europeans engaged in the slave trade, but the profits realized from the trade continued to attract them.
The growth of anti-slavery sentiment among Europeans made slow progress against vested African and European interests that were reaping profits from the traffic. Although individual clergymen condemned the slave trade as early as the 17th century, major Christian denominations did little to further early efforts at abolition. The Quakers, however, publicly declared themselves against slavery as early as 1727. Later in the century, the Danes stopped trading in slaves; Sweden and the Netherlands soon followed.
In 1807, Britain used its naval power and its diplomatic muscle to outlaw trade in slaves by its citizens and to begin a campaign to stop the international trade in slaves. The importation of slaves into the United States was outlawed in 1808. These efforts, however, were not successful until the 1860s because of the continued demand for plantation labour in the New World.
Because it took decades to end the trade in slaves, some historians doubt that the humanitarian impulse inspired the abolitionist movement. According to historian Eric Williams, for example, Europe abolished the trans-Atlantic slave trade only because its profitability was undermined by the Industrial Revolution. Williams argued that mass unemployment caused by the new industrial machinery, the need for new raw materials, and European competition for markets for finished goods are the real factors that brought an end to the trade in human cargo and the beginning of competition for colonial territories in Africa. Other scholars, however, disagree with Williams, arguing that humanitarian concerns as well as social and economic factors were instrumental in ending the African slave trade.
What is the radius of the circle where the chord is 2/3 of the radius from the center and has a length of 10 cm?
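A short worked solution (a sketch, reading the problem as saying the chord's distance from the center is (2/3)r): half the chord, that distance, and the radius form a right triangle, so by the Pythagorean theorem

    r^2 = \left(\tfrac{2}{3}r\right)^2 + 5^2
    \;\Longrightarrow\; \tfrac{5}{9}r^2 = 25
    \;\Longrightarrow\; r^2 = 45
    \;\Longrightarrow\; r = 3\sqrt{5} \approx 6.7\ \text{cm}.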
Related math problems and questions:
- Chord distance
The circle k(S, 6 cm) is given; calculate the chord's distance from the center S when the length of the chord is t = 10 cm.
- Circle chord
Calculate the length of the chord of the circle with radius r = 10 cm whose length is equal to its distance from the center of the circle.
- Chord AB
What is the chord AB's length if its distance from the center S of the circle k(S, 92 cm) is 10 cm?
- The chord
Calculate the length of a chord whose distance from the center of the circle (S, 6 cm) equals 3 cm.
- Chord 5
The circle k(S, 5 cm) is given. Its chord MN is 3 cm away from the center of the circle. Calculate its length.
- Concentric circles and chord
In a circle with a diameter d = 10 cm, a chord with a length of 6 cm is constructed. What radius does the concentric circle that touches this chord have?
In a circle with radius r = 60 cm, a chord is 4× longer than its distance from the center. What is the length of the chord?
- Chord 4
I need to calculate the circumference of a circle. I know the chord length c = 22 cm and the distance d = 29 cm from the center of the chord to the circle.
- Circle's chords
In a circle there are two chords of lengths 30 and 34 cm. The shorter one is twice as far from the center as the longer chord. Determine the radius of the circle.
- Two chords
In a circle with radius r = 26 cm two parallel chords are drawn. One chord has a length t1 = 48 cm and the second has a length t2 = 20 cm, with the center lying between them. Calculate the distance of two chords.
- Concentric circles
In a circle with diameter 19 cm, a chord 9 cm long is constructed. Calculate the radius of a concentric circle that touches this chord.
- Two chords
Calculate the lengths of chord AB and the perpendicular chord BC of a circle if AB is 4 cm from the center of the circle and BC is 8 cm from the center of the circle.
- Chord 2
Point A has a distance of 13 cm from the center of the circle with a radius r = 5 cm. Calculate the length of the chord connecting the points T1 and T2 of contact of tangents led from point A to the circle.
- Chords centers
A circle has diameter 17 cm, upper chord |CD| = 10.2 cm and bottom chord |EF| = 7.5 cm. The midpoints of the chords are H and G, so that |EH| = 1/2 |EF| and |CG| = 1/2 |CD|. Determine the distance between G and H if CD ∥ EF (parallel).
- Circle chord
What is the length x of a chord of a circle of diameter 115 m if its distance from the center of the circle is 11 m?
If the endpoints of a diameter of a circle are A(10, -1) and B (3, 10), what is the radius of the circle?
- Quarter circle
What is the radius of a circle inscribed in the quarter circle with a radius of 100 cm? |
Use the elimination method to solve systems of linear equations.
This lesson covers solving a system by adding the two equations together to eliminate a variable (a small worked example appears at the end of this list).
Linear Systems with Addition or Subtraction Interactive
This video demonstrates a sample use of solving linear systems by elimination.
This video provides an explanation of the concept of solving linear systems by elimination.
Quiz for Solving Linear Systems with Addition or Subtraction.
A list of student-submitted discussion questions for Linear Systems with Addition or Subtraction.
To stimulate the critical thinking skills required to assess the validity of statements about the key concept using an Agree/Disagree Table.
To build a step-by-step description of a sequence for analysis, discussion, or communication using a Sequence Diagram.
Come up with questions about a topic and learn new vocabulary to determine answers using the table
To activate prior knowledge, make personal connections, reflect on key concepts, encourage critical thinking, and assess student knowledge on the topic prior to reading using a Quickwrite.
To organize ideas, increase comprehension, synthesize learning, demonstrate understanding of key concepts, and reinforce vocabulary using a Quickwrite.
Students will use their knowledge of solving a system of linear equations using the Elimination Method to compare the price of a cab ride in San Francisco and compare their calculations with a website.
Find out why the pitch of an ambulance siren seems to change as it passes by you.
This study guide looks at solving systems of linear equations by adding/subtracting and by multiplying. It also provides guidelines for solving systems with 3 variables. |
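As a small illustration of the elimination idea these resources cover (the system below is an invented example, not one taken from the lesson), adding the two equations cancels the y-terms:

    \begin{aligned}
    x + y &= 5\\
    x - y &= 1
    \end{aligned}
    \qquad\Longrightarrow\qquad 2x = 6,\quad x = 3,\quad y = 5 - x = 2.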
A switched capacitor (SC) is an electronic circuit element implementing a filter. It works by moving charges into and out of capacitors when switches are opened and closed. Usually, non-overlapping signals are used to control the switches, so that not all switches are closed simultaneously. Filters implemented with these elements are termed "switched-capacitor filters", and depend only on the ratios between capacitances. This makes them much more suitable for use within integrated circuits, where accurately specified resistors and capacitors are not economical to construct.
SC circuits are typically implemented using metal–oxide–semiconductor (MOS) technology, with MOS capacitors and MOS field-effect transistor (MOSFET) switches, and they are commonly fabricated using the complementary MOS (CMOS) process. Common applications of MOS SC circuits include mixed-signal integrated circuits, digital-to-analog converter (DAC) chips, analog-to-digital converter (ADC) chips, pulse-code modulation (PCM) codec-filters, and PCM digital telephony.
The simplest switched-capacitor (SC) circuit is the switched-capacitor resistor, made of one capacitor C and two switches S1 and S2 which connect the capacitor with a given frequency alternately to the input and output of the SC. Each switching cycle transfers a charge q from the input to the output at the switching frequency f. The charge q on a capacitor C with a voltage V between the plates is given by: q = C·V,
where V is the voltage across the capacitor. Therefore, when S1 is closed while S2 is open, the charge stored in the capacitor CS is: q_in = CS·V_in, where V_in is the voltage at the input.
When S2 is closed (S1 is open - they are never both closed at the same time), some of that charge is transferred out of the capacitor, after which the charge that remains in capacitor CS is: q_out = CS·V_out, where V_out is the voltage at the output.
Thus, the charge moved out of the capacitor to the output is: q = q_in − q_out = CS·(V_in − V_out).
Because this charge q is transferred at a rate f, the rate of transfer of charge per unit time is: I = q·f.
(A continuous transfer of charge from one node to another is equivalent to a current, so I (the symbol for electric current) is used.)
Substituting for q in the above, we have: I = CS·(V_in − V_out)·f.
Let V be the voltage across the SC from input to output, V = V_in − V_out. So: I = CS·V·f.
So the equivalent resistance R (i.e., the voltage–current relationship) is: R = V / I = 1 / (CS·f).
Thus, the SC behaves like a resistor whose value depends on capacitance CS and switching frequency f.
The SC resistor is used as a replacement for simple resistors in integrated circuits because it is easier to fabricate reliably with a wide range of values. It also has the benefit that its value can be adjusted by changing the switching frequency (i.e., it is a programmable resistance). See also: operational amplifier applications.
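A tiny numeric sketch of the relation R = 1/(CS·f) derived above (the component values are illustrative only):

    Cs = 1e-12                 # 1 pF sampling capacitor (assumed example value)
    f = 100e3                  # 100 kHz switching frequency (assumed example value)
    R_equivalent = 1.0 / (Cs * f)
    print(R_equivalent)        # 1e7 ohms: the switched 1 pF capacitor emulates 10 Mohm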
This same circuit can be used in discrete time systems (such as analog to digital converters) as a track and hold circuit. During the appropriate clock phase, the capacitor samples the analog voltage through switch one and in the second phase presents this held sampled value to an electronic circuit for processing.
Often switched-capacitor circuits are used to provide accurate voltage gain and integration by switching a sampled capacitor onto an op-amp with a capacitor in feedback. One of the earliest of these circuits is the parasitic-sensitive integrator developed by the Czech engineer Bedrich Hosticka. Here is an analysis. Denote by T the switching period, write Cs for the sampling capacitor charged from the source voltage Vs, and Cfb for the feedback capacitor around the op-amp. For the capacitors, charge = capacitance × voltage.
Then, when S1 opens and S2 closes (they are never both closed at the same time), we have the following:
1) Because Cs has just charged: Qs = Cs·Vs.
2) Because the feedback cap, Cfb, is suddenly charged with that much charge (by the op amp, which seeks a virtual short circuit between its inputs): Cfb·ΔVout = −Qs.
Now dividing 2) by Cfb: ΔVout = −Qs / Cfb.
And inserting 1): ΔVout = −(Cs / Cfb)·Vs.
This last equation represents what is going on in Cfb - it increases (or decreases) its voltage each cycle according to the charge that is being "pumped" from Cs (due to the op-amp).
However, there is a more elegant way to formulate this fact if T is very short. Let us introduce dt = T and dVout = ΔVout and rewrite the last equation divided by dt: dVout/dt = −(Cs / (Cfb·T))·Vs.
Therefore, the op-amp output voltage takes the form: Vout(t) = −(1 / (R_eq·Cfb)) ∫ Vs dt.
This is an inverting integrator with an "equivalent resistance" R_eq = T/Cs = 1/(Cs·f). This allows its on-line or runtime adjustment (if we manage to make the switches oscillate according to some signal given by e.g. a microcontroller).
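A discrete-time sketch of this behaviour (assuming an ideal op-amp, a constant input, and illustrative component values; each cycle Vout steps by −(Cs/Cfb)·Vs, which matches the continuous-time integrator with R_eq = T/Cs):

    Cs, Cfb = 1e-12, 10e-12        # farads (assumed example values)
    T = 1e-6                       # switching period in seconds
    Vs = 0.5                       # constant input voltage in volts
    Vout = 0.0
    for _ in range(1000):          # simulate 1000 switching cycles = 1 ms
        Vout -= (Cs / Cfb) * Vs    # one packet of charge Cs*Vs dumped onto Cfb

    print(Vout)                                    # -50.0 V (ideal model, no clipping)
    print(-(1.0 / ((T / Cs) * Cfb)) * Vs * 1e-3)   # continuous-time check: also -50.0 V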
The delaying parasitic insensitive integrator has a wide use in discrete time electronic circuits such as biquad filters, anti-alias structures, and delta-sigma data converters. This circuit implements the following z-domain function: H(z) = z^(−1) / (1 − z^(−1)).
One useful characteristic of switched-capacitor circuits is that they can be used to perform many circuit tasks at the same time, which is difficult with non-discrete time components. The multiplying digital to analog converter (MDAC) is an example as it can take an analog input, add a digital value to it, and multiply this by some factor based on the capacitor ratios. The output of the MDAC is given by the following:
The MDAC is a common component in modern pipeline analog to digital converters as well as other precision analog electronics and was first created in the form above by Stephen Lewis and others at Bell Laboratories.
Switched-capacitor circuits are analysed by writing down charge conservation equations, as in this article, and solving them with a computer algebra tool. For hand analysis and for getting more insight into the circuits, it is also possible to do a Signal-flow graph analysis, with a method that is very similar for switched-capacitor and continuous-time circuits.
A multivibrator is an electronic circuit used to implement a variety of simple two-state devices such as relaxation oscillators, timers and flip-flops. It consists of two amplifying devices cross-coupled by resistors or capacitors. The first multivibrator circuit, the astable multivibrator oscillator, was invented by Henri Abraham and Eugene Bloch during World War I. They called their circuit a "multivibrator" because its output waveform was rich in harmonics.
Electrical impedance is the measure of the opposition that a circuit presents to a current when a voltage is applied. The term complex impedance may be used interchangeably.
A low-pass filter (LPF) is a filter that passes signals with a frequency lower than a selected cutoff frequency and attenuates signals with frequencies higher than the cutoff frequency. The exact frequency response of the filter depends on the filter design. The filter is sometimes called a high-cut filter, or treble-cut filter in audio applications. A low-pass filter is the complement of a high-pass filter.
A rectifier is an electrical device that converts alternating current (AC), which periodically reverses direction, to direct current (DC), which flows in only one direction.
Johnson–Nyquist noise is the electronic noise generated by the thermal agitation of the charge carriers inside an electrical conductor at equilibrium, which happens regardless of any applied voltage. Thermal noise is present in all electrical circuits, and in sensitive electronic equipment such as radio receivers can drown out weak signals, and can be the limiting factor on sensitivity of an electrical measuring instrument. Thermal noise increases with temperature. Some sensitive electronic equipment such as radio telescope receivers are cooled to cryogenic temperatures to reduce thermal noise in their circuits. The generic, statistical physical derivation of this noise is called the fluctuation-dissipation theorem, where generalized impedance or generalized susceptibility is used to characterize the medium.
In electronics a relaxation oscillator is a nonlinear electronic oscillator circuit that produces a nonsinusoidal repetitive output signal, such as a triangle wave or square wave. The circuit consists of a feedback loop containing a switching device such as a transistor, comparator, relay, op amp, or a negative resistance device like a tunnel diode, that repetitively charges a capacitor or inductor through a resistance until it reaches a threshold level, then discharges it again. The period of the oscillator depends on the time constant of the capacitor or inductor circuit. The active device switches abruptly between charging and discharging modes, and thus produces a discontinuously changing repetitive waveform. This contrasts with the other type of electronic oscillator, the harmonic or linear oscillator, which uses an amplifier with feedback to excite resonant oscillations in a resonator, producing a sine wave. Relaxation oscillators are used to produce low frequency signals for applications such as blinking lights and electronic beepers and in voltage controlled oscillators (VCOs), inverters and switching power supplies, dual-slope analog to digital converters, and function generators.
A resistor–capacitor circuit, or RC filter or RC network, is an electric circuit composed of resistors and capacitors driven by a voltage or current source. A first order RC circuit is composed of one resistor and one capacitor and is the simplest type of RC circuit.
In electronics, a voltage divider is a passive linear circuit that produces an output voltage (Vout) that is a fraction of its input voltage (Vin). Voltage division is the result of distributing the input voltage among the components of the divider. A simple example of a voltage divider is two resistors connected in series, with the input voltage applied across the resistor pair and the output voltage emerging from the connection between them.
The Ćuk converter is a type of DC/DC converter that has an output voltage magnitude that is either greater than or less than the input voltage magnitude. It is essentially a boost converter followed by a buck converter with a capacitor to couple the energy.
The RC time constant, also called tau, the time constant of an RC circuit, is equal to the product of the circuit resistance and the circuit capacitance, i.e. τ = R·C.
Delta-sigma modulation is a method for encoding analog signals into digital signals as found in an analog-to-digital converter (ADC). It is also used to convert high bit-count, low-frequency digital signals into lower bit-count, higher-frequency digital signals as part of the process to convert digital signals into analog as part of a digital-to-analog converter (DAC).
This article illustrates some typical operational amplifier applications. A non-ideal operational amplifier's equivalent circuit has a finite input impedance, a non-zero output impedance, and a finite gain. A real op-amp has a number of non-ideal features as shown in the diagram, but here a simplified schematic notation is used, many details such as device selection and power supply connections are not shown. Operational amplifiers are optimised for use with negative feedback, and this article discusses only negative-feedback applications. When positive feedback is required, a comparator is usually more appropriate. See Comparator applications for further information.
A buck converter is a DC-to-DC power converter which steps down voltage from its input (supply) to its output (load). It is a class of switched-mode power supply (SMPS) typically containing at least two semiconductors and at least one energy storage element, a capacitor, inductor, or the two in combination. To reduce voltage ripple, filters made of capacitors are normally added to such a converter's output and input.
Ripple in electronics is the residual periodic variation of the DC voltage within a power supply which has been derived from an alternating current (AC) source. This ripple is due to incomplete suppression of the alternating waveform after rectification. Ripple voltage originates as the output of a rectifier or from generation and commutation of DC power.
The commutation cell is the basic structure in power electronics. It is composed of an electronic switch and a diode. It was traditionally referred to as a chopper, but since switching power supplies became a major form of power conversion, this new term has become more popular.
A capacitor is a device that stores electrical energy in an electric field. It is a passive electronic component with two terminals.
An integrating ADC is a type of analog-to-digital converter that converts an unknown input voltage into a digital representation through the use of an integrator. In its basic implementation, the dual-slope converter, the unknown input voltage is applied to the input of the integrator and allowed to ramp for a fixed time period. Then a known reference voltage of opposite polarity is applied to the integrator and is allowed to ramp until the integrator output returns to zero. The input voltage is computed as a function of the reference voltage, the constant run-up time period, and the measured run-down time period. The run-down time measurement is usually made in units of the converter's clock, so longer integration times allow for higher resolutions. Likewise, the speed of the converter can be improved by sacrificing resolution.
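The ramp-balance relationship described above reduces, for an ideal integrator, to |Vin| = Vref · t_down / t_up; a brief sketch (names and values illustrative):

```python
def dual_slope_input_voltage(v_ref, run_up_time, run_down_time):
    """Dual-slope ADC: the unknown input magnitude equals the reference
    voltage scaled by the ratio of measured run-down time to fixed run-up time."""
    return v_ref * run_down_time / run_up_time

# 2.5 V reference, 100 ms fixed run-up, 60 ms measured run-down
print(dual_slope_input_voltage(2.5, 0.100, 0.060))  # 1.5 V
```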
An RLC circuit is an electrical circuit consisting of a resistor (R), an inductor (L), and a capacitor (C), connected in series or in parallel. The name of the circuit is derived from the letters that are used to denote the constituent components of this circuit, where the sequence of the components may vary from RLC.
In electronics, a transimpedance amplifier (TIA) is a current-to-voltage converter, almost exclusively implemented with one or more operational amplifiers. The TIA can be used to amplify the current output of Geiger–Müller tubes, photomultiplier tubes, accelerometers, photodetectors and other types of sensors to a usable voltage. Current-to-voltage converters are used with sensors that have a current response that is more linear than the voltage response. This is the case with photodiodes, where it is not uncommon for the current response to have better than 1% nonlinearity over a wide range of light input. The transimpedance amplifier presents a low impedance to the photodiode and isolates it from the output voltage of the operational amplifier. In its simplest form a transimpedance amplifier has just a large-valued feedback resistor, Rf. The gain of the amplifier is set by this resistor and, because the amplifier is in an inverting configuration, has a value of -Rf. There are several different configurations of transimpedance amplifiers, each suited to a particular application. The one factor they all have in common is the requirement to convert the low-level current of a sensor to a voltage. The gain, bandwidth, as well as current and voltage offsets change with different types of sensors, requiring different configurations of transimpedance amplifiers.
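In its simplest inverting form, the transimpedance amplifier's transfer function is just Vout = −Rf·Iin; a one-line sketch assuming an ideal op-amp:

```python
def transimpedance_output(i_sensor, r_feedback):
    """Ideal transimpedance amplifier: sensor current times feedback
    resistance, inverted by the amplifier configuration."""
    return -r_feedback * i_sensor

print(transimpedance_output(2e-6, 1e6))  # -2.0 V for 2 µA into a 1 MΩ feedback resistor
```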
The operational amplifier integrator is an electronic integration circuit. Based on the operational amplifier (op-amp), it performs the mathematical operation of integration with respect to time; that is, its output voltage is proportional to the input voltage integrated over time.
Nuclear weapon design
Nuclear weapon designs are physical, chemical, and engineering arrangements that cause the physics package of a nuclear weapon to detonate. There are three basic design types. In all three, the explosive energy of deployed devices has been derived primarily from nuclear fission, not fusion.
- Pure fission weapons were the first nuclear weapons built and have so far been the only type ever used in warfare. The active material is fissile uranium (U-235) or plutonium (Pu-239), explosively assembled into a chain-reacting critical mass by one of two methods:
- Gun assembly: one piece of fissile uranium is fired at a fissile uranium target at the end of the weapon, similar to firing a bullet down a gun barrel, achieving critical mass when combined.
- Implosion: a fissile mass of either material (U-235, Pu-239, or a combination) is surrounded by high explosives that compress the mass, resulting in criticality.
- The implosion method can use either uranium or plutonium as fuel. The gun method uses only uranium. Plutonium is considered impractical for the gun method because Pu-240 contamination causes early triggering, and because its time constant for prompt-critical fission is much shorter than that of U-235.
- Fusion-boosted fission weapons improve on the implosion design. The high pressure and temperature environment at the center of an exploding fission weapon compresses and heats a mixture of tritium and deuterium gas (heavy isotopes of hydrogen). The hydrogen fuses to form helium and free neutrons. The energy release from this fusion reaction is relatively negligible, but each neutron starts a new fission chain reaction, speeding up the fission and greatly reducing the amount of fissile material that would otherwise be wasted when expansion of the fissile material stops the chain reaction. Boosting can more than double the weapon's fission energy release.
- Two-stage thermonuclear weapons are essentially a chain of fission-boosted fusion weapons (not to be confused with the previously mentioned fusion-boosted fission weapons), usually with only two stages in the chain. The second stage, called the "secondary," is imploded by x-ray energy from the first stage, called the "primary." This radiation implosion is much more effective than the high-explosive implosion of the primary. Consequently, the secondary can be many times more powerful than the primary, without being bigger. The secondary can be designed to maximize fusion energy release, but in most designs fusion is employed only to drive or enhance fission, as it is in the primary. More stages could be added, but the result would be a multi-megaton weapon too powerful to serve any plausible purpose. (The United States briefly deployed a three-stage 25-megaton bomb, the B41, starting in 1961. Also in 1961, the Soviet Union tested, but did not deploy, a three-stage 50–100 megaton device, Tsar Bomba.)
Pure fission weapons historically have been the first type to be built by a nation state. Large industrial states with well-developed nuclear arsenals have two-stage thermonuclear weapons, which are the most compact, scalable, and cost effective option once the necessary industrial infrastructure is built.
Most known innovations in nuclear weapon design originated in the United States, although some were later developed independently by other states; the following descriptions feature U.S. designs.
In early news accounts, pure fission weapons were called atomic bombs or A-bombs, a misnomer since the energy comes only from the nucleus of the atom. Weapons involving fusion were called hydrogen bombs or H-bombs, also a misnomer since their destructive energy comes mostly from fission. Insiders favored the terms nuclear and thermonuclear, respectively.
The term thermonuclear refers to the high temperatures required to initiate fusion. It ignores the equally important factor of pressure, which was considered secret at the time the term became current. Many nuclear weapon terms are similarly inaccurate because of their origin in a classified environment.
Nuclear fission splits heavier atoms to form lighter atoms. Nuclear fusion bonds together lighter atoms to form heavier atoms. Both reactions generate roughly a million times more energy than comparable chemical reactions, making nuclear bombs a million times more powerful than non-nuclear bombs of the same weight, a claim first made in a French patent in May 1939.
In some ways, fission and fusion are opposite and complementary reactions, but the particulars are unique for each. To understand how nuclear weapons are designed, it is useful to know the important similarities and differences between fission and fusion. The following explanation uses rounded numbers and approximations.
When a free neutron hits the nucleus of a fissile atom like uranium-235 ( 235U), the uranium splits into two smaller atoms called fission fragments, plus more neutrons. Fission can be self-sustaining because it produces more neutrons of the speed required to cause new fissions.
The uranium atom can split any one of dozens of different ways, as long as the atomic weights add up to 236 (uranium plus the extra neutron). The following equation shows one possible split, namely into strontium-95 ( 95Sr), xenon-139 (139Xe), and two neutrons (n), plus energy:
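In standard notation, reconstructed from the description above:

$$ {}^{235}_{92}\mathrm{U} + n \;\rightarrow\; {}^{95}_{38}\mathrm{Sr} + {}^{139}_{54}\mathrm{Xe} + 2\,n + \text{energy (about 180 MeV)} $$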
The immediate energy release per atom is about 180 million electron volts (MeV), i.e. 74 TJ/kg. Only 7% of this is gamma radiation and kinetic energy of fission neutrons. The remaining 93% is kinetic energy (or energy of motion) of the charged fission fragments, flying away from each other mutually repelled by the positive charge of their protons (38 for strontium, 54 for xenon). This initial kinetic energy is 67 TJ/kg, imparting an initial speed of about 12,000 kilometers per second. However, the charged fragments' high electric charge causes many inelastic collisions with nearby nuclei, and thus these fragments remain trapped inside the bomb's uranium pit and tamper until their motion is converted into x-ray heat, a process which takes about a millionth of a second (a microsecond). By this time, the material representing the core and tamper of the bomb is several meters in diameter and has been converted to plasma at a temperature of tens of millions of degrees.
This x-ray energy produces the blast and fire which are normally the purpose of a nuclear explosion.
After the fission products slow down, they remain radioactive. Being new elements with too many neutrons, they eventually become stable by means of beta decay, converting neutrons into protons by throwing off electrons and gamma rays. Each fission product nucleus decays between one and six times, average three times, producing a variety of isotopes of different elements, some stable, some highly radioactive, and others radioactive with half-lives up to 200,000 years. In reactors, the radioactive products are the nuclear waste in spent fuel. In bombs, they become radioactive fallout, both local and global.
Meanwhile, inside the exploding bomb, the free neutrons released by fission carry away about 3% of the initial fission energy. Neutron kinetic energy adds to the blast energy of a bomb, but not as effectively as the energy from charged fragments, since neutrons are not slowed as quickly. The main contribution of fission neutrons to the bomb's power is to initiate other fissions. Over half of the neutrons escape the bomb core, but the rest strike nearby U-235 nuclei, causing them to fission in an exponentially growing chain reaction (1, 2, 4, 8, 16, etc.). Starting from one, the number of fissions can theoretically double a hundred times in a microsecond, enough, by the hundredth link in the chain, to consume hundreds of tons of uranium or plutonium if that much were present. In practice, bombs do not contain such amounts of uranium or plutonium, and typically (in a modern weapon) about 2 to 2.5 kilograms of plutonium, representing 40 to 50 kilotons of energy, undergoes fission before the core blows itself apart.
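A back-of-the-envelope sketch of the doubling arithmetic above (illustrative constants: 180 MeV released per fission and 1 kiloton of TNT ≈ 4.184×10¹² J):

```python
import math

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13          # 1 MeV in joules
KILOTON_J = 4.184e12          # 1 kiloton of TNT equivalent
ENERGY_PER_FISSION_MEV = 180  # prompt energy per fission, from the text above

plutonium_kg = 2.5
atoms = plutonium_kg * 1000 / 239.05 * AVOGADRO     # ≈ 6.3e24 Pu-239 nuclei
doublings = math.log2(atoms)                        # generations needed starting from one fission
energy_kt = atoms * ENERGY_PER_FISSION_MEV * MEV_TO_J / KILOTON_J

print(f"doublings needed: {doublings:.0f}")    # ≈ 82, comfortably under a hundred
print(f"energy released:  {energy_kt:.0f} kt") # ≈ 43 kt, consistent with the 40-50 kt above
```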
Holding an exploding bomb together is the greatest challenge of fission weapon design. The heat of fission rapidly expands the uranium pit, spreading apart the target nuclei and making space for the neutrons to escape without being captured. The chain reaction stops.
Materials which can sustain a chain reaction are called fissile. The two fissile materials used in nuclear weapons are: U-235, also known as highly enriched uranium (HEU), oralloy (Oy) meaning Oak Ridge Alloy, or 25 (the last digits of the atomic number, which is 92 for uranium, and the atomic weight, here 235, respectively); and Pu-239, also known as plutonium, or 49 (from 94 and 239).
Uranium's most common isotope, U-238, is fissionable but not fissile (meaning that it cannot sustain a chain reaction by itself but can be made to fission, specifically by fast neutrons from a fusion reaction). Its aliases include natural or unenriched uranium, depleted uranium (DU), tubealloy (Tu), and 28. It cannot sustain a chain reaction, because its own fission neutrons are not powerful enough to cause more U-238 fission. However, the neutrons released by fusion will fission U-238. This U-238 fission reaction produces most of the destructive energy in a typical two-stage thermonuclear weapon.
Fusion produces neutrons which dissipate energy from the reaction. In weapons, the most important fusion reaction is called the D-T reaction. Using the heat and pressure of fission, hydrogen-2, or deuterium ( 2D), fuses with hydrogen-3, or tritium ( 3T), to form helium-4 ( 4He) plus one neutron (n) and energy:
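In standard notation, reconstructed from the description above:

$$ {}^{2}_{1}\mathrm{D} + {}^{3}_{1}\mathrm{T} \;\rightarrow\; {}^{4}_{2}\mathrm{He} + n + 17.6\ \text{MeV} $$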
Notice that the total energy output, 17.6 MeV, is one tenth of that with fission, but the ingredients are only one-fiftieth as massive, so the energy output per unit mass is greater. However, in this fusion reaction 80% of the energy, or 14 MeV, is in the motion of the neutron which, having no electric charge and being almost as massive as the hydrogen nuclei that created it, can escape the scene without leaving its energy behind to help sustain the reaction – or to generate x-rays for blast and fire.
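A quick worked check of those per-mass figures (using approximate mass numbers of 5 u for D plus T and 236 u for uranium plus a neutron):

$$ \frac{17.6\ \text{MeV}}{5\ \text{u}} \approx 3.5\ \text{MeV/u} \qquad \text{vs.} \qquad \frac{180\ \text{MeV}}{236\ \text{u}} \approx 0.76\ \text{MeV/u}, $$

so D-T fusion releases roughly four to five times more energy per unit mass of fuel than fission, even though each individual fusion reaction releases only about one-tenth as much energy as a fission.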
The only practical way to capture most of the fusion energy is to trap the neutrons inside a massive bottle of heavy material such as lead, uranium, or plutonium. If the 14 MeV neutron is captured by uranium (either type: 235 or 238) or plutonium, the result is fission and the release of 180 MeV of fission energy, multiplying the energy output tenfold.
Fission is thus necessary to start fusion, helps to sustain fusion, and captures and multiplies the energy released in fusion neutrons. In the case of a neutron bomb (see below) the last-mentioned does not apply since the escape of neutrons is the objective.
A third important nuclear reaction is the one that creates tritium, essential to the type of fusion used in weapons and, incidentally, the most expensive ingredient in any nuclear weapon. Tritium, or hydrogen-3, is made by bombarding lithium-6 ( 6Li) with a neutron (n) to produce helium-4 ( 4He) plus tritium ( 3T) and energy:
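In standard notation, reconstructed from the description above (the energy release, about 4.8 MeV, is the standard published value):

$$ {}^{6}_{3}\mathrm{Li} + n \;\rightarrow\; {}^{4}_{2}\mathrm{He} + {}^{3}_{1}\mathrm{T} + \text{energy (about 4.8 MeV)} $$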
A nuclear reactor is necessary to provide the neutrons. The industrial-scale conversion of lithium-6 to tritium is very similar to the conversion of uranium-238 into plutonium-239. In both cases the feed material is placed inside a nuclear reactor and removed for processing after a period of time. In the 1950s, when reactor capacity was limited, the production of tritium and plutonium were in direct competition. Every atom of tritium in a weapon replaced an atom of plutonium that could have been produced instead.
The fission of one plutonium atom releases ten times more total energy than the fusion of one tritium atom, and it generates fifty times more blast and fire. For this reason, tritium is included in nuclear weapon components only when it causes more fission than its production sacrifices, namely in the case of fusion-boosted fission.
However, an exploding nuclear bomb is a nuclear reactor. The above reaction can take place simultaneously throughout the secondary of a two-stage thermonuclear weapon, producing tritium in place as the device explodes.
Of the three basic types of nuclear weapon, the first, pure fission, uses the first of the three nuclear reactions above. The second, fusion-boosted fission, uses the first two. The third, two-stage thermonuclear, uses all three.
Pure fission weapons
The first task of a nuclear weapon design is to rapidly assemble a supercritical mass of fissile uranium or plutonium. A supercritical mass is one in which the percentage of fission-produced neutrons captured by another fissile nucleus is large enough that each fission event, on average, causes more than one additional fission event.
Once the critical mass is assembled, at maximum density, a burst of neutrons is supplied to start as many chain reactions as possible. Early weapons used an "urchin" inside the pit containing polonium-210 and beryllium separated by a thin barrier. Implosion of the pit crushed the urchin, mixing the two metals, thereby allowing alpha particles from the polonium to interact with beryllium to produce free neutrons. In modern weapons, the neutron generator is a high-voltage vacuum tube containing a particle accelerator which bombards a deuterium/tritium-metal hydride target with deuterium and tritium ions. The resulting small-scale fusion produces neutrons at a protected location outside the physics package, from which they penetrate the pit. This method allows better control of the timing of chain reaction initiation.
The critical mass of an uncompressed sphere of bare metal is 110 lb (50 kg) for uranium-235 and 35 lb (16 kg) for delta-phase plutonium-239. In practical applications, the amount of material required for criticality is modified by shape, purity, density, and the proximity to neutron-reflecting material, all of which affect the escape or capture of neutrons.
To avoid a chain reaction during handling, the fissile material in the weapon must be sub-critical before detonation. It may consist of one or more components containing less than one uncompressed critical mass each. A thin hollow shell can have more than the bare-sphere critical mass, as can a cylinder, which can be arbitrarily long without ever reaching criticality.
A tamper is an optional layer of dense material surrounding the fissile material. Due to its inertia it delays the expansion of the reacting material, increasing the efficiency of the weapon. Often the same layer serves both as tamper and as neutron reflector.
Gun-type assembly weapon
Little Boy, the Hiroshima bomb, used 141 lb (64 kg) of uranium with an average enrichment of around 80%, or 112 lb (51 kg) of U-235, just about the bare-metal critical mass. (See Little Boy article for a detailed drawing.) When assembled inside its tamper/reflector of tungsten carbide, the 141 lb (64 kg) was more than twice critical mass. Before the detonation, the uranium-235 was formed into two sub-critical pieces, one of which was later fired down a gun barrel to join the other, starting the atomic explosion. About 1% of the uranium underwent fission; the remainder, representing most of the entire wartime output of the giant factories at Oak Ridge, scattered uselessly. The half life of uranium-235 is 704 million years.
The inefficiency was caused by the speed with which the uncompressed fissioning uranium expanded and became sub-critical by virtue of decreased density. Despite its inefficiency, this design, because of its shape, was adapted for use in small-diameter, cylindrical artillery shells (a gun-type warhead fired from the barrel of a much larger gun). Such warheads were deployed by the United States until 1992, accounting for a significant fraction of the U-235 in the arsenal, and were some of the first weapons dismantled to comply with treaties limiting warhead numbers. The rationale for this decision was undoubtedly a combination of the lower yield and grave safety issues associated with the gun-type design.
Fat Man, the Nagasaki bomb, used 13.6 lb (6.2 kg, about 12 fluid ounces or 350 ml in volume) of Pu-239, which is only 41% of bare-sphere critical mass. (See Fat Man article for a detailed drawing.) Surrounded by a U-238 reflector/tamper, the pit was brought close to critical mass by the neutron-reflecting properties of the U-238. During detonation, criticality was achieved by implosion. The plutonium pit was squeezed to increase its density by simultaneous detonation of the conventional explosives placed uniformly around the pit. The explosives were detonated by multiple exploding-bridgewire detonators. It is estimated that only about 20% of the plutonium underwent fission; the rest, about 11 lb (5.0 kg), was scattered.
An implosion shock wave might be of such short duration that only a fraction of the pit is compressed at any instant as the wave passes through it.
A pusher shell made of low-density metal such as aluminum, beryllium, or an alloy of the two (aluminum being easier and safer to shape and about two orders of magnitude cheaper; beryllium prized for its high neutron-reflective capability) may be needed. The pusher is located between the explosive lens and the tamper. It works by reflecting some of the shock wave backwards, thereby lengthening its duration. Fat Man used an aluminum pusher.
The key to Fat Man's greater efficiency was the inward momentum of the massive U-238 tamper (which did not undergo fission). Once the chain reaction started in the plutonium, the momentum of the implosion had to be reversed before expansion could stop the fission. By holding everything together for a few hundred nanoseconds more, the efficiency was increased.
The core of an implosion weapon – the fissile material and any reflector or tamper bonded to it – is known as the pit. Some weapons tested during the 1950s used pits made with U-235 alone, or in composite with plutonium, but all-plutonium pits are the smallest in diameter and have been the standard since the early 1960s.
Casting and then machining plutonium is difficult not only because of its toxicity, but also because plutonium has many different metallic phases, also known as allotropes. As plutonium cools, changes in phase result in distortion and cracking. This distortion is normally overcome by alloying it with 3–3.5 molar% (0.9–1.0% by weight) gallium, forming a plutonium-gallium alloy, which causes it to take up its delta phase over a wide temperature range. When cooling from molten it then suffers only a single phase change, from epsilon to delta, instead of the four changes it would otherwise pass through. Other trivalent metals would also work, but gallium has a small neutron absorption cross section and helps protect the plutonium against corrosion. A drawback is that gallium compounds themselves are corrosive and so if the plutonium is recovered from dismantled weapons for conversion to plutonium dioxide for power reactors, there is the difficulty of removing the gallium.
Because plutonium is chemically reactive it is common to plate the completed pit with a thin layer of inert metal, which also reduces the toxic hazard. The gadget used galvanic silver plating; afterwards, nickel deposited from nickel tetracarbonyl vapors was used, but gold is now preferred.
The first improvement on the Fat Man design was to put an air space between the tamper and the pit to create a hammer-on-nail impact. The pit, supported on a hollow cone inside the tamper cavity, was said to be levitated. The three tests of Operation Sandstone, in 1948, used Fat Man designs with levitated pits. The largest yield was 49 kilotons, more than twice the yield of the unlevitated Fat Man.
It was immediately clear that implosion was the best design for a fission weapon. Its only drawback seemed to be its diameter. Fat Man was 5 feet (1.5 m) wide vs 2 feet (60 cm) for Little Boy.
Eleven years later, implosion designs had advanced sufficiently that the 5-foot (1.5 m)-diameter sphere of Fat Man had been reduced to a 1-foot (0.30 m)-diameter cylinder 2 feet (0.61 m) long, the Swan device.
The Pu-239 pit of Fat Man was only 3.6 inches (9 cm) in diameter, the size of a softball. The bulk of Fat Man's girth was the implosion mechanism, namely concentric layers of U-238, aluminum, and high explosives. The key to reducing that girth was the two-point implosion design.
Two-point linear implosion
A very inefficient implosion design is one that simply reshapes an ovoid into a sphere, with minimal compression. In linear implosion, an untamped, solid, elongated mass of Pu-239, larger than critical mass in a sphere, is embedded inside a cylinder of high explosive with a detonator at each end.
Detonation makes the pit critical by driving the ends inward, creating a spherical shape. The shock may also change plutonium from delta to alpha phase, increasing its density by 23%, but without the inward momentum of a true implosion. The lack of compression makes it inefficient, but the simplicity and small diameter make it suitable for use in artillery shells and atomic demolition munitions – ADMs – also known as backpack or suitcase nukes.
All such low-yield battlefield weapons, whether gun-type U-235 designs or linear implosion Pu-239 designs, pay a high price in fissile material in order to achieve diameters between six and ten inches (150 and 250 mm).
Two-point hollow-pit implosion
A more efficient two-point implosion system uses two high explosive lenses and a hollow pit.
A hollow plutonium pit was the original plan for the 1945 Fat Man bomb, but there was not enough time to develop and test the implosion system for it. A simpler solid-pit design was considered more reliable, given the time constraint, but it required a heavy U-238 tamper, a thick aluminum pusher, and three tons of high explosives.
After the war, interest in the hollow pit design was revived. Its obvious advantage is that a hollow shell of plutonium, shock-deformed and driven inward toward its empty center, would carry momentum into its violent assembly as a solid sphere. It would be self-tamping, requiring a smaller U-238 tamper, no aluminum pusher and less high explosive.
The Fat Man bomb had two concentric, spherical shells of high explosives, each about 10 inches (25 cm) thick. The inner shell drove the implosion. The outer shell consisted of a soccer-ball pattern of 32 high explosive lenses, each of which converted the convex wave from its detonator into a concave wave matching the contour of the outer surface of the inner shell. If these 32 lenses could be replaced with only two, the high explosive sphere could become an ellipsoid (prolate spheroid) with a much smaller diameter.
A good illustration of these two features is a 1956 drawing from the Swedish nuclear weapon program (which was terminated before it produced a test explosion). The drawing shows the essential elements of the two-point hollow-pit design.
There are similar drawings in the open literature that come from the post-war German nuclear bomb program, which was also terminated, and from the French program, which produced an arsenal.
The mechanism of the high explosive lens (diagram item #6) is not shown in the Swedish drawing, but a standard lens made of fast and slow high explosives, as in Fat Man, would be much longer than the shape depicted. For a single high explosive lens to generate a concave wave that envelops an entire hemisphere, it must either be very long or the part of the wave on a direct line from the detonator to the pit must be slowed dramatically.
A slow high explosive is too fast, but the flying plate of an "air lens" is not. A metal plate, shock-deformed, and pushed across an empty space can be designed to move slowly enough. A two-point implosion system using air lens technology can have a length no more than twice its diameter, as in the Swedish diagram above.
Fusion-boosted fission weapons
The next step in miniaturization was to speed up the fissioning of the pit to reduce the minimum inertial confinement time. The hollow pit provided an ideal location to introduce fusion for the boosting of fission. A 50–50 mixture of tritium and deuterium gas, pumped into the pit during arming, will fuse into helium and release free neutrons soon after fission begins. The neutrons will start a large number of new chain reactions while the pit is still critical or nearly critical.
Once the hollow pit is perfected, there is little reason not to boost.
Boosting reduces diameter in three ways, all the result of faster fission:
- Since the compressed pit does not need to be held together as long, the massive U-238 tamper can be replaced by a light-weight beryllium shell (to reflect escaping neutrons back into the pit). The diameter is reduced.
- The mass of the pit can be reduced by half, without reducing yield. Diameter is reduced again.
- Since the mass of the metal being imploded (tamper plus pit) is reduced, a smaller charge of high explosive is needed, reducing diameter even further.
Since boosting is required to attain full design yield, any reduction in boosting reduces yield. Boosted weapons are thus variable-yield weapons. Yield can be reduced any time before detonation, simply by putting less than the full amount of tritium into the pit during the arming procedure.
The first device whose dimensions suggest employment of all these features (two-point, hollow-pit, fusion-boosted implosion) was the Swan device. It had a cylindrical shape with a diameter of 11.6 inches (29.5 cm) and a length of 22.8 inches (58 cm).
It was first tested standalone and then as the primary of a two-stage thermonuclear device during operation Redwing. It was weaponized as the Robin primary and became the first off-the-shelf, multi-use primary, and the prototype for all that followed.
After the success of Swan, 11 or 12 inches (300 mm) seemed to become the standard diameter of boosted single-stage devices tested during the 1950s. Length was usually twice the diameter, but one such device, which became the W54 warhead, was closer to a sphere, only 15 inches (380 mm) long. It was tested two dozen times in the 1957–62 period before being deployed. No other design had such a long string of test failures. Since the longer devices tended to work correctly on the first try, there must have been some difficulty in flattening the two high explosive lenses enough to achieve the desired length-to-width ratio.
One of the applications of the W54 was the Davy Crockett XM-388 recoilless rifle projectile, shown here in comparison to its Fat Man predecessor, dimensions in inches.
Another benefit of boosting, in addition to making weapons smaller, lighter, and with less fissile material for a given yield, is that it renders weapons immune to radiation interference (RI). It was discovered in the mid-1950s that plutonium pits would be particularly susceptible to partial predetonation if exposed to the intense radiation of a nearby nuclear explosion (electronics might also be damaged, but this was a separate issue). RI was a particular problem before effective early warning radar systems because a first strike attack might make retaliatory weapons useless. Boosting reduces the amount of plutonium needed in a weapon to below the quantity which would be vulnerable to this effect.
Two-stage thermonuclear weapons
Pure fission or fusion-boosted fission weapons can be made to yield hundreds of kilotons, at great expense in fissile material and tritium, but by far the most efficient way to increase nuclear weapon yield beyond ten or so kilotons is to tack on a second independent stage, called a secondary.
In the 1940s, bomb designers at Los Alamos thought the secondary would be a canister of deuterium in liquefied or hydride form. The fusion reaction would be D-D, harder to achieve than D-T, but more affordable. A fission bomb at one end would shock-compress and heat the near end, and fusion would propagate through the canister to the far end. Mathematical simulations showed it wouldn't work, even with large amounts of prohibitively expensive tritium added in.
The entire fusion fuel canister would need to be enveloped by fission energy, to both compress and heat it, as with the booster charge in a boosted primary. The design breakthrough came in January 1951, when Edward Teller and Stanisław Ulam invented radiation implosion—for nearly three decades known publicly only as the Teller-Ulam H-bomb secret.
The concept of radiation implosion was first tested on May 9, 1951, in the George shot of Operation Greenhouse, Eniwetok, yield 225 kilotons. The first full test was on November 1, 1952, the Mike shot of Operation Ivy, Eniwetok, yield 10.4 megatons.
In radiation implosion, the burst of X-ray energy coming from an exploding primary is captured and contained within an opaque-walled radiation channel which surrounds the nuclear energy components of the secondary. The radiation quickly turns the plastic foam that had been filling the channel into a plasma which is mostly transparent to X-rays, and the radiation is absorbed in the outermost layers of the pusher/tamper surrounding the secondary, which ablates and applies a massive force (much like an inside-out rocket engine), causing the fusion fuel capsule to implode much like the pit of the primary. As the secondary implodes, a fissile "spark plug" at its center ignites and provides heat which enables the fusion fuel to ignite as well. The fission and fusion chain reactions exchange neutrons with each other and boost the efficiency of both reactions. The greater implosive force, the enhanced efficiency of the fissile "spark plug" due to boosting via fusion neutrons, and the fusion explosion itself provide significantly greater explosive yield from the secondary, even though it is often not much larger than the primary.
For example, for the Redwing Mohawk test on July 3, 1956, a secondary called the Flute was attached to the Swan primary. The Flute was 15 inches (38 cm) in diameter and 23.4 inches (59 cm) long, about the size of the Swan. But it weighed ten times as much and yielded 24 times as much energy (355 kilotons, vs 15 kilotons).
Equally important, the active ingredients in the Flute probably cost no more than those in the Swan. Most of the fission came from cheap U-238, and the tritium was manufactured in place during the explosion. Only the spark plug at the axis of the secondary needed to be fissile.
A spherical secondary can achieve higher implosion densities than a cylindrical secondary, because spherical implosion pushes in from all directions toward the same spot. However, in warheads yielding more than one megaton, the diameter of a spherical secondary would be too large for most applications. A cylindrical secondary is necessary in such cases. The small, cone-shaped re-entry vehicles in multiple-warhead ballistic missiles after 1970 tended to have warheads with spherical secondaries, and yields of a few hundred kilotons.
As with boosting, the advantages of the two-stage thermonuclear design are so great that there is little incentive not to use it, once a nation has mastered the technology.
In engineering terms, radiation implosion allows for the exploitation of several known features of nuclear bomb materials which heretofore had eluded practical application. For example:
- The best way to store deuterium in a reasonably dense state is to chemically bond it with lithium, as lithium deuteride. But the lithium-6 isotope is also the raw material for tritium production, and an exploding bomb is a nuclear reactor. Radiation implosion will hold everything together long enough to permit the complete conversion of lithium-6 into tritium, while the bomb explodes. So the bonding agent for deuterium permits use of the D-T fusion reaction without any pre-manufactured tritium being stored in the secondary. The tritium production constraint disappears.
- For the secondary to be imploded by the hot, radiation-induced plasma surrounding it, it must remain cool for the first microsecond, i.e., it must be encased in a massive radiation (heat) shield. The shield's massiveness allows it to double as a tamper, adding momentum and duration to the implosion. No material is better suited for both of these jobs than ordinary, cheap uranium-238, which also happens to undergo fission when struck by the neutrons produced by D-T fusion. This casing, called the pusher, thus has three jobs: to keep the secondary cool, to hold it, inertially, in a highly compressed state, and, finally, to serve as the chief energy source for the entire bomb. The consumable pusher makes the bomb more a uranium fission bomb than a hydrogen fusion bomb. It is noteworthy that insiders never used the term hydrogen bomb.
- Finally, the heat for fusion ignition comes not from the primary but from a second fission bomb called the spark plug, embedded in the heart of the secondary. The implosion of the secondary implodes this spark plug, detonating it and igniting fusion in the material around it, but the spark plug then continues to fission in the neutron-rich environment until it is fully consumed, adding significantly to the yield.
The initial impetus behind the two-stage weapon was President Truman's 1950 promise to build a 10-megaton hydrogen superbomb as the U.S. response to the 1949 test of the first Soviet fission bomb. But the resulting invention turned out to be the cheapest and most compact way to build small nuclear bombs as well as large ones, erasing any meaningful distinction between A-bombs and H-bombs, and between boosters and supers. All the best techniques for fission and fusion explosions are incorporated into one all-encompassing, fully scalable design principle. Even six-inch (152 mm) diameter nuclear artillery shells can be two-stage thermonuclears.
In the ensuing fifty years, nobody has come up with a better way to build a nuclear bomb. It is the design of choice for the United States, Russia, the United Kingdom, China, and France, the five thermonuclear powers. The other nuclear-armed nations, Israel, India, Pakistan, and North Korea, probably have single-stage weapons, possibly boosted.
In a two-stage thermonuclear weapon the energy from the primary impacts the secondary. An essential energy transfer modulator called the interstage, between the primary and the secondary, protects the secondary's fusion fuel from heating too quickly, which could cause it to explode in a conventional (and small) heat explosion before the fission and fusion reactions get a chance to start.
There is very little information in the open literature about the mechanism of the interstage. Its first mention in a U.S. government document formally released to the public appears to be a caption in a recent graphic promoting the Reliable Replacement Warhead Program. If built, this new design would replace "toxic, brittle material" and "expensive 'special' material" in the interstage. This statement suggests the interstage may contain beryllium to moderate the flux of neutrons from the primary, and perhaps something to absorb and re-radiate the x-rays in a particular manner. There is also some speculation that this interstage material, which may be code-named FOGBANK, might be an aerogel, possibly doped with beryllium and/or other substances.
The interstage and the secondary are encased together inside a stainless steel membrane to form the canned subassembly (CSA), an arrangement which has never been depicted in any open-source drawing. The most detailed illustration of an interstage shows a British thermonuclear weapon with a cluster of items between its primary and a cylindrical secondary. They are labeled "end-cap and neutron focus lens," "reflector/neutron gun carriage," and "reflector wrap." The origin of the drawing, posted on the internet by Greenpeace, is uncertain, and there is no accompanying explanation.
While every nuclear weapon design falls into one of the above categories, specific designs have occasionally become the subject of news accounts and public discussion, often with incorrect descriptions about how they work and what they do. Examples:
All modern nuclear weapons make some use of D-T fusion. Even pure fission weapons include neutron generators which are high-voltage vacuum tubes containing trace amounts of tritium and deuterium.
However, in the public perception, hydrogen bombs, or H-bombs, are multi-megaton devices a thousand times more powerful than Hiroshima's Little Boy. Such high-yield bombs are actually two-stage thermonuclears, scaled up to the desired yield, with uranium fission, as usual, providing most of their energy.
The idea of the hydrogen bomb first came to public attention in 1949, when prominent scientists openly recommended against building nuclear bombs more powerful than the standard pure-fission model, on both moral and practical grounds. Their assumption was that critical mass considerations would limit the potential size of fission explosions, but that a fusion explosion could be as large as its supply of fuel, which has no critical mass limit. In 1949, the Soviets exploded their first fission bomb, and in 1950 President Truman ended the H-bomb debate by ordering the Los Alamos designers to build one.
In 1952, the 10.4-megaton Ivy Mike explosion was announced as the first hydrogen bomb test, reinforcing the idea that hydrogen bombs are a thousand times more powerful than fission bombs.
In 1954, J. Robert Oppenheimer was labeled a hydrogen bomb opponent. The public did not know there were two kinds of hydrogen bomb (neither of which is accurately described as a hydrogen bomb). On May 23, when his security clearance was revoked, item three of the four public findings against him was "his conduct in the hydrogen bomb program." In 1949, Oppenheimer had supported single-stage fusion-boosted fission bombs, to maximize the explosive power of the arsenal given the trade-off between plutonium and tritium production. He opposed two-stage thermonuclear bombs until 1951, when radiation implosion, which he called "technically sweet", first made them practical. The complexity of his position was not revealed to the public until 1976, nine years after his death.
When ballistic missiles replaced bombers in the 1960s, most multi-megaton bombs were replaced by missile warheads (also two-stage thermonuclears) scaled down to one megaton or less.
The first effort to exploit the symbiotic relationship between fission and fusion was a 1940s design that mixed fission and fusion fuel in alternating thin layers. As a single-stage device, it would have been a cumbersome application of boosted fission. It first became practical when incorporated into the secondary of a two-stage thermonuclear weapon.
The U.S. name, Alarm Clock, was a nonsense code name. The Russian name for the same design was more descriptive: Sloika (Russian: Слойка), a layered pastry cake. A single-stage Soviet Sloika was tested on August 12, 1953. No single-stage U.S. version was tested, but the Union shot of Operation Castle, April 26, 1954, was a two-stage thermonuclear code-named Alarm Clock. Its yield, at Bikini, was 6.9 megatons.
Because the Soviet Sloika test used dry lithium-6 deuteride eight months before the first U.S. test to use it (Castle Bravo, March 1, 1954), it was sometimes claimed that the USSR won the H-bomb race. (The 1952 U.S. Ivy Mike test used cryogenically cooled liquid deuterium as the fusion fuel in the secondary, and employed the D-D fusion reaction.) The Sloika was also the first aircraft-deliverable design, even though it was not delivered by aircraft in that test. However, the first Soviet test to use a radiation-imploded secondary, the essential feature of a true H-bomb, was on November 23, 1955, three years after Ivy Mike. In fact, real work on the radiation-implosion scheme in the Soviet Union only commenced in early 1953, several months after the successful test of the Sloika.
On March 1, 1954, the largest-ever U.S. nuclear test explosion, the 15-megaton Bravo shot of Operation Castle at Bikini, delivered a promptly lethal dose of fission-product fallout to more than 6,000 square miles (16,000 km2) of Pacific Ocean surface. Radiation injuries to Marshall Islanders and Japanese fishermen made that fact public and revealed the role of fission in hydrogen bombs.
In response to the public alarm over fallout, an effort was made to design a clean multi-megaton weapon, relying almost entirely on fusion. The energy produced by the fissioning of unenriched natural uranium, when used as the tamper material in the secondary and subsequent stages of the Teller-Ulam design, can greatly exceed the fusion yield, as it did in the Castle Bravo test. A non-fissionable tamper material is therefore an essential requirement for a 'clean' bomb, which means that such a bomb carries a relatively large mass of material that undergoes no mass-to-energy conversion at all. For a given weight, 'dirty' weapons with fissionable tampers are thus much more powerful than 'clean' weapons (or, for an equal yield, they are much lighter). The earliest known test of a three-stage device, in which the third stage, called the tertiary, was ignited by the secondary, was on May 27, 1956, in the Bassoon device, fired as the Zuni shot of Operation Redwing. This shot used non-fissionable tampers made of a relatively inert substitute material such as tungsten or lead; its yield was 3.5 megatons, 85% fusion and only 15% fission. The public records for devices that produced the highest proportion of their yield from fusion are the 57-megaton Tsar Bomba at 97% fusion, the 9.3-megaton Hardtack Poplar test at 95.2%, and the 4.5-megaton Redwing Navajo test at 95% fusion.
On July 19, 1956, AEC Chairman Lewis Strauss said that the Redwing Zuni clean bomb test "produced much of importance ... from a humanitarian aspect." However, less than two days after this announcement the dirty version of Bassoon, called Bassoon Prime, with a uranium-238 tamper in place, was tested on a barge off the coast of Bikini Atoll as the Redwing Tewa shot. Bassoon Prime produced a 5-megaton yield, of which 87% came from fission. Data obtained from this test and others culminated in the eventual deployment of the highest-yielding U.S. nuclear weapon known, and the highest yield-to-weight weapon ever made: a three-stage thermonuclear weapon with a maximum 'dirty' yield of 25 megatons, designated the Mark 41 bomb. It was carried by U.S. Air Force bombers until it was decommissioned, and it was never fully tested.
As such, high-yield clean bombs appear to have been a public relations exercise. The actual deployed weapons were the dirty versions, which maximized yield for the same size device. However, newer fourth- and fifth-generation nuclear weapon designs, including pure fusion weapons and antimatter-catalyzed nuclear-pulse-propulsion-like devices, are reportedly being studied by the five largest nuclear weapon states.
A fictional doomsday bomb, made popular by Nevil Shute's 1957 novel On the Beach and its 1959 film adaptation, the cobalt bomb was a hydrogen bomb with a jacket of cobalt metal. The neutron-activated cobalt would supposedly have maximized the environmental damage from radioactive fallout. Such bombs were further popularized in the 1964 film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, in which the element added to the bombs is referred to as 'cobalt-thorium G'.
Such "salted" weapons were requested by the U.S. Air Force and seriously investigated, possibly built and tested, but not deployed. In the 1964 edition of the DOD/AEC book The Effects of Nuclear Weapons, a new section titled Radiological Warfare clarified the issue. Fission products are as deadly as neutron-activated cobalt. The standard high-fission thermonuclear weapon is automatically a weapon of radiological warfare, as dirty as a cobalt bomb.
Initially, gamma radiation from the fission products of an equivalent-size fission-fusion-fission bomb is much more intense than that from Co-60: 15,000 times more intense at 1 hour; 35 times more intense at 1 week; 5 times more intense at 1 month; and about equal at 6 months. Thereafter fission-product radiation drops off rapidly, so that Co-60 fallout is 8 times more intense than fission products at 1 year and 150 times more intense at 5 years. The very long-lived isotopes produced by fission would overtake the 60Co again after about 75 years.
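As a rough illustration only: taking the quoted 1-hour ratio as given and applying the common Way-Wigner t^−1.2 approximation for mixed fission products, together with simple exponential decay for Co-60 (half-life about 5.27 years), reproduces the early-time figures in the same ballpark, though this crude approximation drifts from the quoted numbers at later times:

```python
import math

CO60_HALF_LIFE_YEARS = 5.27
HOURS_PER_YEAR = 8766

def intensity_ratio(t_hours, ratio_at_1h=15000.0):
    """Fission-product to Co-60 gamma intensity ratio, scaled from the quoted
    1-hour value. Fission products follow the Way-Wigner t**-1.2 rule (a rough
    empirical approximation); Co-60 decays exponentially."""
    fission = ratio_at_1h * t_hours ** -1.2
    co60 = math.exp(-math.log(2) * (t_hours - 1) / (CO60_HALF_LIFE_YEARS * HOURS_PER_YEAR))
    return fission / co60

for label, hours in [("1 week", 168), ("1 month", 730), ("6 months", 4383)]:
    print(label, round(intensity_ratio(hours), 1))  # ≈ 32, ≈ 5.5, ≈ 0.7
```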
In 1954, to explain the surprising amount of fission-product fallout produced by hydrogen bombs, Ralph Lapp coined the term fission-fusion-fission to describe a process inside what he called a three-stage thermonuclear weapon. His process explanation was correct, but his choice of terms caused confusion in the open literature. The stages of a nuclear weapon are not fission, fusion, and fission. They are the primary, the secondary, and, in one exceptionally powerful weapon, the tertiary. Each of these stages employs fission, fusion, and fission.
A neutron bomb, technically referred to as an enhanced radiation weapon (ERW), is a type of tactical nuclear weapon designed specifically to release a large portion of its energy as energetic neutron radiation. This contrasts with standard thermonuclear weapons, which are designed to capture this intense neutron radiation to increase their overall explosive yield. In terms of yield, ERWs typically produce about one-tenth that of a fission-type atomic weapon. Even with their significantly lower explosive power, ERWs are still capable of much greater destruction than any conventional bomb. Meanwhile, relative to other nuclear weapons, damage is more focused on biological material than on material infrastructure (though extreme blast and heat effects are not eliminated).
Officially known as enhanced radiation weapons (ERWs), they are more accurately described as suppressed-yield weapons. When the yield of a nuclear weapon is less than one kiloton, its lethal blast radius, about 700 m (2,300 ft), is smaller than the lethal radius of its neutron radiation. However, the blast is more than potent enough to destroy most structures, which are less resistant to blast effects than even unprotected human beings: blast pressures of upwards of 20 psi are survivable by people, whereas most buildings collapse at a pressure of only 5 psi.
Commonly misconceived as a weapon designed to kill populations and leave infrastructure intact, these bombs (as mentioned above) are still very capable of leveling buildings over a large radius. The intent of their design was to kill tank crews: tanks give excellent protection against blast and heat, allowing crews to survive relatively close to a detonation, and with the Soviets' vast tank formations during the Cold War this was seen as the ideal weapon to counter them. The neutron radiation could instantly incapacitate a tank crew out to roughly the same distance at which heat and blast would incapacitate an unprotected human (depending on design). The tank chassis would also be rendered highly radioactive (temporarily), preventing its re-use by a fresh crew.
Neutron weapons were also intended for use in other applications, however. For example, they are effective in anti-nuclear defenses – the neutron flux being capable of neutralising an incoming warhead at a greater range than heat or blast. Nuclear warheads are very resistant to physical damage, but are very difficult to harden against extreme neutron flux.
| Energy distribution of weapon | Standard | Enhanced |
|---|---|---|
| Blast | 50% | 40% |
| Thermal energy | 35% | 25% |
| Instant radiation | 5% | 30% |
| Residual radiation | 10% | 5% |
ERWs were two-stage thermonuclears with all non-essential uranium removed to minimize fission yield. Fusion provided the neutrons. Developed in the 1950s, they were first deployed in the 1970s, by U.S. forces in Europe. The last ones were retired in the 1990s.
A neutron bomb is only feasible if the yield is sufficiently high that efficient fusion stage ignition is possible, and if the yield is low enough that the case thickness will not absorb too many neutrons. This means that neutron bombs have a yield range of 1–10 kilotons, with fission proportion varying from 50% at 1-kiloton to 25% at 10-kilotons (all of which comes from the primary stage). The neutron output per kiloton is then 10–15 times greater than for a pure fission implosion weapon or for a strategic warhead like a W87 or W88.
Oralloy thermonuclear warheads
In 1999, nuclear weapon design was in the news again, for the first time in decades. In January, the U.S. House of Representatives released the Cox Report (Christopher Cox R-CA) which alleged that China had somehow acquired classified information about the U.S. W88 warhead. Nine months later, Wen Ho Lee, a Taiwanese immigrant working at Los Alamos, was publicly accused of spying, arrested, and served nine months in pre-trial detention, before the case against him was dismissed. It is not clear that there was, in fact, any espionage.
In the course of eighteen months of news coverage, the W88 warhead was described in unusual detail. The New York Times printed a schematic diagram on its front page. The most detailed drawing appeared in A Convenient Spy, the 2001 book on the Wen Ho Lee case by Dan Stober and Ian Hoffman, adapted and shown here with permission.
Designed for use on Trident II (D-5) submarine-launched ballistic missiles, the W88 entered service in 1990 and was the last warhead designed for the U.S. arsenal. It has been described as the most advanced, although open literature accounts do not indicate any major design features that were not available to U.S. designers in 1958.
The above diagram shows all the standard features of ballistic missile warheads since the 1960s, with two exceptions that give it a higher yield for its size.
- The outer layer of the secondary, called the "pusher", which serves three functions: heat shield, tamper, and fission fuel, is made of U-235 instead of U-238, hence the name Oralloy (U-235) Thermonuclear. Being fissile, rather than merely fissionable, allows the pusher to fission faster and more completely, increasing yield. This feature is available only to nations with a great wealth of fissile uranium. The United States is estimated to have 500 tons.
- The secondary is located in the wide end of the re-entry cone, where it can be larger, and thus more powerful. The usual arrangement is to put the heavier, denser secondary in the narrow end for greater aerodynamic stability during re-entry from outer space, and to allow more room for a bulky primary in the wider part of the cone. (The W87 warhead drawing in the previous section shows the usual arrangement.) Because of this new geometry, the W88 primary uses compact conventional high explosives (CHE) to save space, rather than the more usual, and bulky but safer, insensitive high explosives (IHE). The re-entry cone probably has ballast in the nose for aerodynamic stability.
The alternating layers of fission and fusion material in the secondary are an application of the Alarm Clock/Sloika principle.
Reliable replacement warhead
The United States has not produced any nuclear warheads since 1989, when the Rocky Flats pit production plant, near Boulder, Colorado, was shut down for environmental reasons. With the end of the Cold War two years later, the production line was idled except for inspection and maintenance functions.
The National Nuclear Security Administration, the latest successor for nuclear weapons to the Atomic Energy Commission and the Department of Energy, has proposed building a new pit facility and starting the production line for a new warhead called the Reliable Replacement Warhead (RRW). Two advertised safety improvements of the RRW would be a return to the use of "insensitive high explosives which are far less susceptible to accidental detonation", and the elimination of "certain hazardous materials, such as beryllium, that are harmful to people and the environment." Since the new warhead must not require any nuclear testing, it could not use a new design with untested concepts.
Weapon design laboratories
All the nuclear weapon design innovations discussed in this article originated from the following three labs in the manner described. Other nuclear weapon design labs in other countries duplicated those design innovations independently, reverse-engineered them from fallout analysis, or acquired them by espionage.
The first systematic exploration of nuclear weapon design concepts took place in mid-1942 at the University of California, Berkeley. Important early discoveries had been made at the adjacent Lawrence Berkeley Laboratory, such as the 1940 cyclotron-made production and isolation of plutonium. A Berkeley professor, J. Robert Oppenheimer, had just been hired to run the nation's secret bomb design effort. His first act was to convene the 1942 summer conference.
By the time he moved his operation to the new secret town of Los Alamos, New Mexico, in the spring of 1943, the accumulated wisdom on nuclear weapon design consisted of five lectures by Berkeley professor Robert Serber, transcribed and distributed as the Los Alamos Primer. The Primer addressed fission energy, neutron production and capture, nuclear chain reactions, critical mass, tampers, predetonation, and three methods of assembling a bomb: gun assembly, implosion, and "autocatalytic methods," the one approach that turned out to be a dead end.
At Los Alamos, it was found in April 1944 by Emilio G. Segrè that the proposed Thin Man Gun assembly type bomb would not work for plutonium because of predetonation problems caused by Pu-240 impurities. So Fat Man, the implosion-type bomb, was given high priority as the only option for plutonium. The Berkeley discussions had generated theoretical estimates of critical mass, but nothing precise. The main wartime job at Los Alamos was the experimental determination of critical mass, which had to wait until sufficient amounts of fissile material arrived from the production plants: uranium from Oak Ridge, Tennessee, and plutonium from the Hanford site in Washington.
In 1945, using the results of critical mass experiments, Los Alamos technicians fabricated and assembled components for four bombs: the Trinity Gadget, Little Boy, Fat Man, and an unused spare Fat Man. After the war, those who could, including Oppenheimer, returned to university teaching positions. Those who remained worked on levitated and hollow pits and conducted weapon effects tests such as Crossroads Able and Baker at Bikini Atoll in 1946.
All of the essential ideas for incorporating fusion into nuclear weapons originated at Los Alamos between 1946 and 1952. After the Teller-Ulam radiation implosion breakthrough of 1951, the technical implications and possibilities were fully explored, but ideas not directly relevant to making the largest possible bombs for long-range Air Force bombers were shelved.
Because of Oppenheimer's initial position in the H-bomb debate, in opposition to large thermonuclear weapons, and the assumption that he still had influence over Los Alamos despite his departure, political allies of Edward Teller decided he needed his own laboratory in order to pursue H-bombs. By the time it was opened in 1952, in Livermore, California, Los Alamos had finished the job Livermore was designed to do.
With its original mission no longer available, the Livermore lab tried radical new designs that failed. Its first three nuclear tests were fizzles: in 1953, two single-stage fission devices with uranium hydride pits, and in 1954, a two-stage thermonuclear device in which the secondary heated up prematurely, too fast for radiation implosion to work properly.
Shifting gears, Livermore settled for taking ideas Los Alamos had shelved and developing them for the Army and Navy. This led Livermore to specialize in small-diameter tactical weapons, particularly ones using two-point implosion systems, such as the Swan. Small-diameter tactical weapons became primaries for small-diameter secondaries. Around 1960, when the superpower arms race became a ballistic missile race, Livermore warheads were more useful than the large, heavy Los Alamos warheads. Los Alamos warheads were used on the first intermediate-range ballistic missiles, IRBMs, but smaller Livermore warheads were used on the first intercontinental ballistic missiles, ICBMs, and submarine-launched ballistic missiles, SLBMs, as well as on the first multiple warhead systems on such missiles.
In 1957 and 1958 both labs built and tested as many designs as possible, in anticipation that a planned 1958 test ban might become permanent. By the time testing resumed in 1961 the two labs had become duplicates of each other, and design jobs were assigned more on workload considerations than lab specialty. Some designs were horse-traded. For example, the W38 warhead for the Titan I missile started out as a Livermore project, was given to Los Alamos when it became the Atlas missile warhead, and in 1959 was given back to Livermore, in trade for the W54 Davy Crockett warhead, which went from Livermore to Los Alamos.
The period of real innovation was ending by then, anyway. Warhead designs after 1960 took on the character of model changes, with every new missile getting a new warhead for marketing reasons. The chief substantive change involved packing more fissile uranium into the secondary, as it became available with continued uranium enrichment and the dismantlement of the large high-yield bombs.
Nuclear weapons are in large part designed by trial and error. The trial often involves test explosion of a prototype.
In a nuclear explosion, a large number of discrete events, with various probabilities, aggregate into short-lived, chaotic energy flows inside the device casing. Complex mathematical models are required to approximate the processes, and in the 1950s there were no computers powerful enough to run them properly. Even today's computers and simulation software are not adequate.
It was easy enough to design reliable weapons for the stockpile. If the prototype worked, it could be weaponized and mass produced.
It was much more difficult to understand how it worked or why it failed. Designers gathered as much data as possible during the explosion, before the device destroyed itself, and used the data to calibrate their models, often by inserting fudge factors into equations to make the simulations match experimental results. They also analyzed the weapon debris in fallout to see how much of a potential nuclear reaction had taken place.
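The calibration step is, in essence, ordinary parameter fitting: adjust a correction factor until the simulation reproduces the measured result. A minimal illustrative sketch of that idea follows; the function, the single scalar "fudge" factor, and the data are invented for demonstration and have nothing to do with any historical weapons code.

```python
# Illustrative only: a toy calibration loop in the spirit described above.
# simulate_yield() stands in for a hypothetical model; the scalar "fudge"
# factor and the observation data are invented for demonstration.

def simulate_yield(inputs: float, fudge: float) -> float:
    """Hypothetical model: predicted yield as a simple function of inputs."""
    return fudge * inputs ** 1.5

def fit_fudge_factor(observations, lo=0.1, hi=10.0, steps=10000):
    """Grid-search the fudge factor minimising squared error against test data."""
    best, best_err = None, float("inf")
    for i in range(steps + 1):
        f = lo + (hi - lo) * i / steps
        err = sum((simulate_yield(x, f) - y) ** 2 for x, y in observations)
        if err < best_err:
            best, best_err = f, err
    return best

# Invented example data: (model input, measured yield) pairs.
data = [(1.0, 2.1), (2.0, 5.8), (3.0, 10.9)]
print(fit_fudge_factor(data))
```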
An important tool for test analysis was the diagnostic light pipe. A probe inside a test device could transmit information by heating a plate of metal to incandescence, an event that could be recorded at the far end of a long, very straight pipe.
The picture below shows the Shrimp device, detonated on March 1, 1954 at Bikini, as the Castle Bravo test. Its 15-megaton explosion was the largest ever by the United States. The silhouette of a man is shown for scale. The device is supported from below, at the ends. The pipes going into the shot cab ceiling, which appear to be supports, are diagnostic light pipes. The eight pipes at the right end (1) sent information about the detonation of the primary. Two in the middle (2) marked the time when x-radiation from the primary reached the radiation channel around the secondary. The last two pipes (3) noted the time radiation reached the far end of the radiation channel, the difference between (2) and (3) being the radiation transit time for the channel.
From the shot cab, the pipes turned horizontal and traveled 7500 ft (2.3 km), along a causeway built on the Bikini reef, to a remote-controlled data collection bunker on Namu Island.
While x-rays would normally travel at the speed of light through a low density material like the plastic foam channel filler between (2) and (3), the intensity of radiation from the exploding primary created a relatively opaque radiation front in the channel filler which acted like a slow-moving logjam to retard the passage of radiant energy. While the secondary is being compressed via radiation induced ablation, neutrons from the primary catch up with the x-rays, penetrate into the secondary and start breeding tritium with the third reaction noted in the first section above. This Li-6 + n reaction is exothermic, producing 5 MeV per event. The spark plug is not yet compressed and thus is not critical, so there won't be significant fission or fusion. But if enough neutrons arrive before implosion of the secondary is complete, the crucial temperature difference will be degraded. This is the reported cause of failure for Livermore's first thermonuclear design, the Morgenstern device, tested as Castle Koon, April 7, 1954.
These timing issues are measured by light-pipe data. The mathematical simulations which they calibrate are called radiation flow hydrodynamics codes, or channel codes. They are used to predict the effect of future design modifications.
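The arithmetic behind the transit-time measurement is simple differencing of arrival times. A minimal sketch, with invented numbers (actual channel lengths and timings are not public), is:

```python
# Toy illustration of the timing arithmetic described above.
# All values are invented; they are not measurements from any test.

def transit_time(t_channel_entry_s: float, t_channel_far_end_s: float) -> float:
    """Time for the radiation front to traverse the channel: t(3) - t(2)."""
    return t_channel_far_end_s - t_channel_entry_s

def average_front_speed(channel_length_m: float, dt_s: float) -> float:
    """Average speed of the radiation front along the channel."""
    return channel_length_m / dt_s

dt = transit_time(1.00e-7, 1.55e-7)   # hypothetical arrival timestamps, seconds
print(average_front_speed(2.0, dt))   # hypothetical 2 m channel length
```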
It is not clear from the public record how successful the Shrimp light pipes were. The data bunker was far enough back to remain outside the mile-wide crater, but the 15-megaton blast, two and a half times greater than expected, breached the bunker by blowing its 20-ton door off the hinges and across the inside of the bunker. (The nearest people were twenty miles (32 km) farther away, in a bunker that survived intact.)
The most interesting data from Castle Bravo came from radio-chemical analysis of weapon debris in fallout. Because of a shortage of enriched lithium-6, 60% of the lithium in the Shrimp secondary was ordinary lithium-7, which doesn't breed tritium as easily as lithium-6 does. But it does breed lithium-6 as the product of an (n, 2n) reaction (one neutron in, two neutrons out), a known fact, but with unknown probability. The probability turned out to be high.
Fallout analysis revealed to designers that, with the (n, 2n) reaction, the Shrimp secondary effectively had two and a half times as much lithium-6 as expected. The tritium, the fusion yield, the neutrons, and the fission yield were all increased accordingly.
As noted above, Bravo's fallout analysis also told the outside world, for the first time, that thermonuclear bombs are more fission devices than fusion devices. A Japanese fishing boat, the Lucky Dragon, sailed home with enough fallout on its decks to allow scientists in Japan and elsewhere to determine, and announce, that most of the fallout had come from the fission of U-238 by fusion-produced 14 MeV neutrons.
The global alarm over radioactive fallout, which began with the Castle Bravo event, eventually drove nuclear testing underground. The last U.S. above-ground test took place at Johnston Island on November 4, 1962. During the next three decades, until September 23, 1992, the United States conducted an average of 2.4 underground nuclear explosions per month, all but a few at the Nevada Test Site (NTS) northwest of Las Vegas.
The Yucca Flat section of the NTS is covered with subsidence craters resulting from the collapse of terrain over radioactive underground caverns created by nuclear explosions.
After the 1974 Threshold Test Ban Treaty (TTBT), which limited underground explosions to 150 kilotons or less, warheads like the half-megaton W88 had to be tested at less than full yield. Since the primary must be detonated at full yield in order to generate data about the implosion of the secondary, the reduction in yield had to come from the secondary. Replacing much of the lithium-6 deuteride fusion fuel with lithium-7 hydride limited the tritium available for fusion, and thus the overall yield, without changing the dynamics of the implosion. The functioning of the device could be evaluated using light pipes, other sensing devices, and analysis of trapped weapon debris. The full yield of the stockpiled weapon could be calculated by extrapolation.
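The extrapolation lends itself to a small numerical illustration. The linear scaling assumption below (secondary yield proportional to the fraction of standard fusion fuel) and all of the numbers are hypothetical, used only to show what "calculating full yield by extrapolation" means in the simplest possible terms; the actual methods are not described in open sources.

```python
# Purely illustrative: generic linear extrapolation from reduced-fuel tests.
# The proportionality assumption and the data are invented for this sketch.

def extrapolate_full_yield(points):
    """Least-squares fit of yield = a * fuel_fraction through the origin,
    then evaluate at fuel_fraction = 1.0 (full fuel loading)."""
    num = sum(f * y for f, y in points)
    den = sum(f * f for f, _ in points)
    a = num / den
    return a * 1.0

# Invented data: (fraction of standard fusion fuel, measured yield in kilotons).
tests = [(0.2, 95.0), (0.3, 148.0)]
print(extrapolate_full_yield(tests))   # rough full-yield estimate
```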
When two-stage weapons became standard in the early 1950s, weapon design determined the layout of the new, widely dispersed U.S. production facilities, and vice versa.
Because primaries tend to be bulky, especially in diameter, plutonium, which has a smaller critical mass than uranium, is the fissile material of choice for pits, used with beryllium reflectors. The Rocky Flats plant near Boulder, Colorado, was built in 1952 for pit production and consequently became the plutonium and beryllium fabrication facility.
The Y-12 plant in Oak Ridge, Tennessee, where mass spectrometers called Calutrons had enriched uranium for the Manhattan Project, was redesigned to make secondaries. Fissile U-235 makes the best spark plugs because its critical mass is larger, especially in the cylindrical shape of early thermonuclear secondaries. Early experiments used the two fissile materials in combination, as composite Pu-Oy pits and spark plugs, but for mass production, it was easier to let the factories specialize: plutonium pits in primaries, uranium spark plugs and pushers in secondaries.
Y-12 made lithium-6 deuteride fusion fuel and U-238 parts, the other two ingredients of secondaries.
The Savannah River plant in Aiken, South Carolina, also built in 1952, operated nuclear reactors which converted U-238 into Pu-239 for pits, and converted lithium-6 (produced at Y-12) into tritium for booster gas. Since its reactors were moderated with heavy water, deuterium oxide, it also made deuterium for booster gas and for Y-12 to use in making lithium-6 deuteride.
Warhead design safety
Because even low-yield nuclear warheads have astounding destructive power, weapon designers have always recognised the need to incorporate mechanisms and associated procedures intended to prevent accidental detonation.
- Gun-type weapons
It is inherently dangerous to have a weapon containing a quantity and shape of fissile material which can form a critical mass through a relatively simple accident. Because of this danger, the propellant in Little Boy (four bags of cordite) was inserted into the bomb in flight, shortly after takeoff on August 6, 1945. This was the first time a gun-type nuclear weapon had ever been fully assembled.
If the weapon falls into water, the moderating effect of the water can also cause a criticality accident, even without the weapon being physically damaged. Similarly, a fire caused by an aircraft crashing could easily ignite the propellant, with catastrophic results. Gun-type weapons have always been inherently unsafe.
- In-flight pit insertion
Neither of these effects is likely with implosion weapons since there is normally insufficient fissile material to form a critical mass without the correct detonation of the lenses. However, the earliest implosion weapons had pits so close to criticality that accidental detonation with some nuclear yield was a concern.
On August 9, 1945, Fat Man was loaded onto its airplane fully assembled, but later, when levitated pits made a space between the pit and the tamper, it was feasible to use in-flight pit insertion. The bomber would take off with no fissile material in the bomb. Some older implosion-type weapons, such as the US Mark 4 and Mark 5, used this system.
In-flight pit insertion will not work with a hollow pit in contact with its tamper.
- Steel ball safety method
One method used to decrease the likelihood of accidental detonation employed metal balls. The balls were emptied into the pit, filling the hollow core and thereby preventing the symmetrical implosion needed for detonation in the event of an accident. This design was used in the Green Grass weapon, also known as the Interim Megaton Weapon, which was used in the Violet Club and Yellow Sun Mk.1 bombs.
- Chain safety method
Alternatively, the pit can be "safed" by having its normally hollow core filled with an inert material such as a fine metal chain, possibly made of cadmium to absorb neutrons. While the chain is in the center of the pit, the pit can not be compressed into an appropriate shape to fission; when the weapon is to be armed, the chain is removed. Similarly, although a serious fire could detonate the explosives, destroying the pit and spreading plutonium to contaminate the surroundings as has happened in several weapons accidents, it could not cause a nuclear explosion.
- Wire safety method
The US W47 warhead used in Polaris A1 and Polaris A2 had a safety device consisting of a boron-coated wire inserted into the hollow pit at manufacture. The warhead was armed by withdrawing the wire onto a spool driven by an electric motor. Once withdrawn the wire could not be re-inserted.
- One-point safety
While the firing of one detonator out of many will not cause a hollow pit to go critical, especially a low-mass hollow pit that requires boosting, the introduction of two-point implosion systems made that possibility a real concern.
In a two-point system, if one detonator fires, one entire hemisphere of the pit will implode as designed. The high-explosive charge surrounding the other hemisphere will explode progressively, from the equator toward the opposite pole. Ideally, this will pinch the equator and squeeze the second hemisphere away from the first, like toothpaste in a tube. By the time the explosion envelops it, its implosion will be separated both in time and space from the implosion of the first hemisphere. The resulting dumbbell shape, with each end reaching maximum density at a different time, may not become critical.
Unfortunately, it is not possible to tell on the drawing board how this will play out. Nor is it possible using a dummy pit of U-238 and high-speed x-ray cameras, although such tests are helpful. For final determination, a test needs to be made with real fissile material. Consequently, starting in 1957, a year after Swan, both labs began one-point safety tests.
Out of 25 one-point safety tests conducted in 1957 and 1958, seven had zero or slight nuclear yield (success), three had high yields of 300 t to 500 t (severe failure), and the rest had unacceptable yields between those extremes.
Of particular concern was Livermore's W47 warhead for the Polaris submarine missile. The last test before the 1958 moratorium was a one-point test of the W47 primary, which had an unacceptably high nuclear yield of 400 lb (180 kg) of TNT equivalent (Hardtack II Titania). With the test moratorium in force, there was no way to refine the design and make it inherently one-point safe. Los Alamos had a suitable primary that was one-point safe, but rather than share with Los Alamos the credit for designing the first SLBM warhead, Livermore chose to use mechanical safing on its own inherently unsafe primary. The wire safety scheme described above was the result.
When testing resumed in 1961, and continued for three decades, there was sufficient time to make all warhead designs inherently one-point safe, without need for mechanical safing.
A strong link/weak link and exclusion zone nuclear detonation mechanism is a form of automatic safety interlock.
In addition to the above steps to reduce the probability of a nuclear detonation arising from a single fault, locking mechanisms referred to by NATO states as Permissive Action Links are sometimes attached to the control mechanisms for nuclear warheads. Permissive Action Links act solely to prevent the unauthorised use of a nuclear weapon.
- Cohen, Sam, The Truth About the Neutron Bomb: The Inventor of the Bomb Speaks Out, William Morrow & Co., 1983
- Coster-Mullen, John, "Atom Bombs: The Top Secret Inside Story of Little Boy and Fat Man", Self-Published, 2011
- Glasstone, Samuel and Dolan, Philip J., The Effects of Nuclear Weapons (third edition) (hosted at the Trinity Atomic Web Site), U.S. Government Printing Office, 1977. PDF Version
- Grace, S. Charles, Nuclear Weapons: Principles, Effects and Survivability (Land Warfare: Brassey's New Battlefield Weapons Systems and Technology, vol 10)
- Hansen, Chuck, The Swords of Armageddon: U.S. Nuclear Weapons Development since 1945, October 1995, Chucklea Productions, eight volumes (CD-ROM), two thousand pages.
- The Effects of Nuclear War, Office of Technology Assessment (May 1979).
- Rhodes, Richard. The Making of the Atomic Bomb. Simon and Schuster, New York, (1986 ISBN 978-0-684-81378-3)
- Rhodes, Richard. Dark Sun: The Making of the Hydrogen Bomb. Simon and Schuster, New York, (1995 ISBN 978-0-684-82414-7)
- Smyth, Henry DeWolf, Atomic Energy for Military Purposes, Princeton University Press, 1945. (see: Smyth Report)
- ^ The physics package is the nuclear explosive module inside the bomb casing, missile warhead, or artillery shell, etc., which delivers the weapon to its target. While photographs of weapon casings are common, photographs of the physics package are quite rare, even for the oldest and crudest nuclear weapons. For a photograph of a modern physics package see W80.
- ^ Life Editors (1961), "To the Outside World, a Superbomb more Bluff than Bang", Life (New York) (Vol. 51, No. 19, November 10, 1961): 34–37, http://books.google.com/books?id=4VMEAAAAMBAJ&pg=PA34&cad=2#v=onepage&q&f=false, retrieved 2010-06-28 . Article on the Soviet Tsar Bomba test. Because explosions are spherical in shape and targets are spread out on the relatively flat surface of the earth, numerous smaller weapons cause more destruction. From page 35: ". . .five five-megaton weapons would demolish a greater area than a single 50-megatonner."
- ^ The United States and the Soviet Union were the only nations to build large nuclear arsenals with every possible type of nuclear weapon. The U.S. had a four-year head start and was the first to produce fissile material and fission weapons, all in 1945. The only Soviet claim for a design first was the Joe 4 detonation on August 12, 1953, said to be the first deliverable hydrogen bomb. However, as Herbert York first revealed in The Advisors: Oppenheimer, Teller and the Superbomb (W.H. Freeman, 1976), it was not a true hydrogen bomb (it was a boosted fission weapon of the Sloika/Alarm Clock type, not a two-stage thermonuclear). Soviet dates for the essential elements of warhead miniaturization – boosted, hollow-pit, two-point, air lens primaries – are not available in the open literature, but the larger size of Soviet ballistic missiles is often explained as evidence of an initial Soviet difficulty in miniaturizing warheads.
- ^ fr 971324, Caisse Nationale de la Recherche Scientifique (National Fund for Scientific Research), "Perfectionnements aux charges explosives (Improvements to explosive charges)", published 16 January 1951, issued 12 July 1950 .
- ^ The main source for this section is Samuel Glasstone and Philip Dolan, The Effects of Nuclear Weapons, Third Edition, 1977, U.S. Dept of Defense and U.S. Dept of Energy (see links in General References, below), with the same information in more detail in Samuel Glasstone, Sourcebook on Atomic Energy, Third Edition, 1979, U.S. Atomic Energy Commission, Krieger Publishing.
- ^ Glasstone and Dolan, Effects, p. 12.
- ^ Glasstone, Sourcebook, p. 503.
- ^ "neutrons carry off most of the reaction energy," Glasstone and Dolan, Effects, p. 21.
- ^ a b Glasstone and Dolan, Effects, p. 21.
- ^ Glasstone and Dolan, Effects, p. 12–13. When one pound (454 g) of U-235 undergoes complete fission, the yield is 8 kilotons. The 13-to-16-kiloton yield of the Little Boy bomb was therefore produced by the fission of no more than two pounds (907 g) of U-235, out of the 141 pounds (64 kg) in the pit. The remaining 139 pounds (63 kg), 98.5% of the total, contributed nothing to the energy yield.
- ^ Compere, A.L., and Griffith, W.L. 1991. "The U.S. Calutron Program for Uranium Enrichment: History, Technology, Operations, and Production," Report ORNL-5928, as cited in John Coster-Mullen, "Atom Bombs: The Top Secret Inside Story of Little Boy and Fat Man," 2003, footnote 28, p. 18. The total wartime output of Oralloy produced at Oak Ridge by July 28, 1945 was 165 pounds (74.68 kg). Of this amount, 84% was scattered over Hiroshima (see previous footnote).
- ^ "Restricted Data Declassification Decisions from 1945 until Present" – "Fact that plutonium and uranium may be bonded to each other in unspecified pits or weapons."
- ^ "Restricted Data Declassification Decisions from 1946 until Present"
- ^ a b Fissionable Materials section of the Nuclear Weapons FAQ, Carey Sublette, accessed Sept 23, 2006
- ^ All information on nuclear weapon tests comes from Chuck Hansen, The Swords of Armageddon: U.S. Nuclear Weapons Development since 1945, October 1995, Chucklea Productions, Volume VIII, p. 154, Table A-1, "U.S. Nuclear Detonations and Tests, 1945–1962."
- ^ Nuclear Weapons FAQ: Hybrid Assembly Techniques, accessed December 1, 2007. Drawing adapted from the same source.
- ^ Nuclear Weapons FAQ: Cylindrical and Planar Shock Techniques, accessed December 1, 2007.
- ^ "Restricted Data Declassification Decisions from 1946 until Present", Section V.B.2.k "The fact of use in high explosive assembled (HEA) weapons of spherical shells of fissile materials, sealed pits; air and ring HE lenses," declassified November 1972.
- ^ 4.4 Elements of Thermonuclear Weapon Design. Nuclearweaponarchive.org. Retrieved on 2011-05-01.
- ^ Until a reliable design was worked out in the early 1950s, the hydrogen bomb (public name) was called the superbomb by insiders. After that, insiders used a more descriptive name: two-stage thermonuclear. Two examples. From Herb York, The Advisors, 1976, "This book is about ... the development of the H-bomb, or the superbomb as it was then called." p. ix, and "The rapid and successful development of the superbomb (or super as it came to be called) . . ." p. 5. From National Public Radio Talk of the Nation, November 8, 2005, Siegfried Hecker of Los Alamos, "the hydrogen bomb – that is, a two-stage thermonuclear device, as we referred to it – is indeed the principal part of the US arsenal, as it is of the Russian arsenal."
- ^ a b Howard Morland, "Born Secret," Cardozo Law Review, March 2005, pp. 1401–1408.
- ^ "Improved Security, Safety & Manufacturability of the Reliable Replacement Warhead," NNSA March 2007.
- ^ A 1976 drawing which depicts an interstage that absorbs and re-radiates x-rays. From Howard Morland, "The Article," Cardozo Law Review, March 2005, p 1374.
- ^ "ArmsControlWonk: FOGBANK", March 7, 2008. (Accessed 2010-04-06)
- ^ "SAND8.8 – 1151 Nuclear Weapon Data – Sigma I," Sandia Laboratories, September 1988.
- ^ The Greenpeace drawing. From Morland, Cardozo Law Review, March 2005, p 1378.
- ^ Herbert York, The Advisors: Oppenheimer, Teller and the Superbomb (1976).
- ^ "The ‘Alarm Clock' ... became practical only by the inclusion of Li6 (in 1950) and its combination with the radiation implosion." Hans A. Bethe, Memorandum on the History of Thermonuclear Program, May 28, 1952.
- ^ 4.5 Thermonuclear Weapon Designs and Later Subsections. Nuclearweaponarchive.org. Retrieved on 2011-05-01.
- ^ Operation Hardtack I. Nuclearweaponarchive.org. Retrieved on 2011-05-01.
- ^ Operation Redwing. Nuclearweaponarchive.org. Retrieved on 2011-05-01.
- ^ Weapon and Technology: 4th Generation Nuclear Nanotech Weapons. Weapons.technology.youngester.com (2010-04-19). Retrieved on 2011-05-01.
- ^ Fourth Generation Nuclear Weapons. Nuclearweaponarchive.org. Retrieved on 2011-05-01.
- ^ Never say "never". Whyfiles.org. Retrieved on 2011-05-01.
- ^ Samuel Glasstone, The Effects of Nuclear Weapons, 1962, Revised 1964, U.S. Dept of Defense and U.S. Dept of Energy, pp.464–5. This section was removed from later editions, but, according to Glasstone in 1978, not because it was inaccurate or because the weapons had changed.
- ^ "Nuclear Weapons FAQ: 1.6". http://nuclearweaponarchive.org/Nwfaq/Nfaq1.html#nfaq1.6.
- ^ "Neutron bomb: Why 'clean' is deadly". BBC News. July 15, 1999. http://news.bbc.co.uk/1/hi/sci/tech/395689.stm. Retrieved January 6, 2010.
- ^ Broad, William J. (7 September 1999), "Spies versus sweat, the debate over China's nuclear advance," The New York Times, p 1. The front page drawing was similar to one that appeared four months earlier in the San Jose Mercury News.
- ^ Jonathan Medalia, "The Reliable Replacement Warhead Program: Background and Current Developments," CRS Report RL32929, Dec 18, 2007, p CRS-11.
- ^ Richard Garwin, "Why China Won't Build U.S. Warheads", Arms Control Today, April–May 1999.
- ^ Home – NNSA
- ^ DoE Fact Sheet: Reliable Replacement Warhead Program
- ^ William J. Broad, "The Hidden Travels of The Bomb: Atomic insiders say the weapon was invented only once, and its secrets were spread around the globe by spies, scientists and the covert acts of nuclear states," New York Times, December 9, 2008, p D1.
- ^ Sybil Francis, Warhead Politics: Livermore and the Competitive System of Nuclear Warhead Design, UCRL-LR-124754, June 1995, Ph.D. Dissertation, Massachusetts Institute of Technology, available from National Technical Information Service. This 233-page thesis was written by a weapons-lab outsider for public distribution. The author had access to all the classified information at Livermore that was relevant to her research on warhead design; consequently, she was required to use non-descriptive code words for certain innovations.
- ^ Walter Goad, Declaration for the Wen Ho Lee case, May 17, 2000. Goad began thermonuclear weapon design work at Los Alamos in 1950. In his Declaration, he mentions "basic scientific problems of computability which cannot be solved by more computing power alone. These are typified by the problem of long range predictions of weather and climate, and extend to predictions of nuclear weapons behavior. This accounts for the fact that, after the enormous investment of effort over many years, weapons codes can still not be relied on for significantly new designs."
- ^ Chuck Hansen, The Swords of Armageddon, Volume IV, pp. 211–212, 284.
- ^ Dr. John C. Clark, as told to Robert Cahn, "We Were Trapped by Radioactive Fallout," The Saturday Evening Post, July 20, 1957, pp. 17–19, 69–71.
- ^ Richard Rhodes, Dark Sun; the Making of the Hydrogen Bomb, Simon and Schuster, 1995, p. 541.
- ^ Chuck Hansen, The Swords of Armageddon, Volume VII, pp. 396–397.
- ^ a b Sybil Francis, Warhead Politics, pp. 141, 160.
- Carey Sublette's Nuclear Weapon Archive is a reliable source of information and has links to other sources.
- Nuclear Weapons Frequently Asked Questions: Section 4.0 Engineering and Design of Nuclear Weapons
- The Federation of American Scientists provides solid information on weapons of mass destruction, including nuclear weapons and their effects
- Globalsecurity.org provides a well-written primer in nuclear weapons design concepts (site navigation on righthand side).
- More information on the design of two-stage fusion bombs
- Militarily Critical Technologies List (MCTL) from the US Government's Defense Technical Information Center
- "Restricted Data Declassification Decisions from 1946 until Present", Department of Energy report series published from 1994 until January 2001 which lists all known declassification actions and their dates. Hosted by Federation of American Scientists.
- The Holocaust Bomb: A Question of Time is an update of the 1979 court case USA v. The Progressive, with links to supporting documents on nuclear weapon design.
- Annotated bibliography on nuclear weapons design from the Alsos Digital Library for Nuclear Issues
Plant wild bird seed or cover mixture
Overall effectiveness category: Beneficial
Number of studies: 49
Background information and definitions
The loss of food supplies, especially seeds, is thought to be a key driver of farmland bird declines. Plants that provide seed food and cover for wild birds include maize, sunflower and cereals. Wild bird cover crops are often planted in blocks or 6 m-wide strips and left unharvested. These are sometimes called ‘game crops’ or ‘game cover crops’. They may also provide benefits for other farmland wildlife.
Supporting evidence from individual studies
A study of habitat use by yellowhammers Emberiza citrinella in 1993, 1995 and 1997 on a mixed farm in Leicestershire, UK (Stoate & Szczur 1997) found that in summer, yellowhammers used both cropped and uncropped habitats, including wild bird cover, and in winter wild bird cover was used more than all other habitats relative to its availability. In summer, wild bird cover strips (8 m wide) were used significantly more than wheat or field boundaries (2 m-wide), but less than barley. In winter, cereal-based wild bird cover was used significantly more than all other habitats and kale-based Brassica spp. bird cover was used significantly more than cereal and rape crops. A 15% area of the arable land was managed for game birds. Yellowhammer nests were observed for 1.5-2 hours when nestlings were 4-10 days old and 5-15 foraging trips per nest were plotted in May-June 1993 and 1995. A 60 ha area of the farm was also walked seven times in November-December and February-March 1997 and habitat use was recorded.
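The "used more than expected relative to availability" comparisons in studies like this one are typically expressed as a selection ratio: the proportion of observed use divided by the proportion of the habitat available. A minimal sketch follows; the function and the numbers are illustrative, not data from Stoate & Szczur (1997).

```python
# Illustrative habitat selection ratios: use relative to availability.
# A ratio > 1 indicates a habitat used more than expected from its area.
# All numbers are invented for demonstration.

def selection_ratios(use_counts: dict, habitat_areas_ha: dict) -> dict:
    total_use = sum(use_counts.values())
    total_area = sum(habitat_areas_ha.values())
    return {
        h: (use_counts[h] / total_use) / (habitat_areas_ha[h] / total_area)
        for h in use_counts
    }

use = {"wild bird cover": 42, "wheat": 10, "field boundary": 8}
area = {"wild bird cover": 6.0, "wheat": 40.0, "field boundary": 4.0}
print(selection_ratios(use, area))
```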
A replicated trial from 1995 to 1998 in Hampshire, UK (Carreck et al. 1999) recorded fewer flowering plant species, bee (Apidae), fly (Diptera) and butterfly (Lepidoptera) species on a single field margin strip sown with wild bird cover seed mix established for three years compared to three strips sown with a diverse wildflower seed mix. There were 20 flowering plant species, eight bee (Apidae), three fly (Diptera) and three butterfly (Lepidoptera) species on the single field margin strip sown with wild bird cover seed mix established for three years in 1998, and 24, nine, seven and eight plant, bee, butterfly and fly species respectively on three wildflower seed mix strips in the same study. The wild bird mix strip had more plant species but fewer bee, fly and butterfly species than a single naturally regenerated field margin strip (16, nine, four and six plant, bee, butterfly and fly species respectively on the naturally regenerated strip). The field margins were established or sown in 1995. Numbers of inflorescences or flowers and flower-visiting bees, wasps (Hymenoptera), flies and butterflies were counted on a 200 x 2 m transect in each strip, once a month from May to August 1998.
A 2000 literature review from the UK (Aebischer et al. 2000) found that populations of grey partridge Perdix perdix were 600% higher on farms where conservation measures aimed at partridges were in place, compared to farms without these measures (Aebischer 1997). Measures included the provision of conservation headlands, planting cover crops, using set-aside and creating beetle banks.
Aebischer N.J. (1997) Gamebirds: management of the Grey Partridge in Britain. Pages 131-151 in: M. Bolton (ed.) Conservation and the Use of Wildlife Resources. Chapman & Hall, London.
Aebischer N.J., Green R.E. & Evans A.D. (2000) From science to recovery: four case studies of how research has been translated into conservation action in the UK. Pages 140-150 in: J.A. Vickery, P.V. Grice, A.D. Evans & N.J. Aebischer (eds.) The Ecology and Conservation of Lowland Farmland Birds. British Ornithologists' Union, Tring.
A small study of set-aside strips from 1995 to 1999 at Loddington, Leicestershire, UK (Boatman & Bence 2000) found that set-aside sown with wild bird cover was used by nesting Eurasian skylark Alauda arvensis and butterflies (Lepidoptera) significantly more than other habitats. The majority of skylark territories found were within set-aside strips (margins or midfield) sown with wild bird cover (1995: 76%, 1996: 65%, 1997: 71%, 1999: 55%), although the habitat covered only 8-10% of the area. The habitat was also used more for foraging than all habitats, except linseed Linum usitatissimum. Transects along wild bird cover set-aside strips also had more butterfly records than any other habitat in 1997 and 1998 (28-40% vs 1-18%). Wild bird cover was sown with either cereal-based or kale-based Brassica spp. mixtures. Skylark territories were recorded in 1995-1997 and 1999 and nests were located in 1999 and foraging trips observed for two 1.5 hour periods. Two butterfly transects were walked weekly from April-September.
A replicated, randomized study from 1998 to 2000 of annual and biennial crops in Norfolk, Hertfordshire and Leicestershire, UK (Boatman & Stoate 2002) found that bird species tended to use a variety of crops. Yellowhammers Emberiza citrinella used mainly cereals. Greenfinch Carduelis chloris tended to use borage Borago officinalis, sunflowers Helianthus spp. and mustard Brassica juncea. Crops used by several bird species included kale Brassica oleracea, quinoa Chenopodium quinoa, fat hen Chenopodium album and linseed Linum usitatissimum. Buckwheat Fagopyron esculentum was used a small amount and, apart from greenfinch, few others used sunflower or borage. Crops were sown in a randomized block design with three replicates at each of the three farms. Plots were 20 or 50 m x either 12 or 16 m. Numbers of birds feeding in, or flushed from, each plot were recorded before 11:00 at weekly intervals from October-March 1998-2000.
A review (Evans et al. 2002) of two reports (Wilson 2000, ADAS 2001) evaluating the effects of the Pilot Arable Stewardship Scheme in two regions in the UK (East Anglia and the West Midlands) from 1998 to 2003 found that ‘wildlife seed mix’ benefited plants, bumblebees Bombus spp., bugs (Hemiptera) and sawflies (Symphyta), but not ground beetles (Carabidae). The wildlife seed mix option could be wild bird seed mix or nectar and pollen mix for pollinators, and the review does not distinguish between these mixes. The effects of the pilot scheme on plants, invertebrates and birds were monitored over three years, relative to control areas, or control farms. Only plants and invertebrates were measured within individual options. Wildlife seed mix was the least widely implemented option, with total areas of 106 and 152 ha in East Anglia and the West Midlands respectively.
Wilson S., Baylis M., Sherrott A. & Howe, G. (2000) Arable Stewardship Project Officer Review. Farming and Rural Conservation Agency report.
ADAS (2001) Ecological evaluation of the Arable Stewardship Pilot Scheme, 1998-2000. ADAS report.
A replicated study in June 2000 in ten edge habitats on an arable farm in Leicestershire, England (Moreby 2002) found that first-year wild bird cover had the highest density (not significant) of caterpillars (Lepidoptera). Weevil (Curculionidae) densities were similar in first- and second-year wild bird cover but lower than in edges of non-rotational set-aside. Spider (Araneae) and rove beetle (Staphylinidae) densities were lower in wild bird cover than in ungrazed pasture edges. Type of neighbouring crop did not affect invertebrate densities in the different habitats. Apart from the four habitats mentioned above, beetle banks, brood cover, hedge bottoms, sheep-grazed pasture edges, grass/wire fence lines and winter wheat headlands were included in the study. Invertebrates were sampled with a vacuum suction sampler in June 2000. This study was part of the same experimental set-up as Moreby & Southway (2002), Murray et al. (2002).
Moreby S.J. (2002) Permanent and temporary linear habitats as food sources for the young of farmland birds. Pages 327-332 in: D.E. Chamberlain (ed.) Avian Landscape Ecology: Pure and Applied Issues in the Large-Scale Ecology of Birds. International Association for Landscape Ecology (IALE(UK)), Aberdeen.
A replicated study from 1995 to 1999 of arable habitats on a farm in Leicestershire, UK (Moreby & Southway 2002) found that the abundance of some invertebrate groups was higher in non-crop strips (wild bird cover or grass beetle banks), whereas other groups were more abundant in crops. Four invertebrate groups tended to have significantly higher densities in non-crop strips than crops in all years: spiders (Araneae) 7 vs 1-5 individuals/sample, true bugs (Homoptera) 29 vs 1-4, typical bugs (Heteroptera) 10-58 vs 0-9, and key ‘chick food insects’ 65 vs 2-10. In three of the years, true weevils (Curculionidae) were found at significantly higher densities in non-crop strips and beans (0-11) than other crops (0-2). In contrast, in three or four of the years, densities in crops were significantly higher than non-crops for: true flies (Diptera) 20-230 vs 25-100 individuals and aphids (Aphididae). Moth and butterfly larvae (Lepidoptera) and ground beetles (Carabidae) differed significantly in only one or two years, when density was higher in crops than non-crops. Total beetles (Coleoptera) varied between years and habitats. Sawfly larvae (Symphyta), leaf beetles (Chrysomelidae) and soldier beetles (Cantharidae) showed no significant differences. Wild bird cover was sown as 2-5 m-wide strips along field boundaries and re-sown every few years with a cereal or kale-based Brassica spp. mixture. Grass strips (1 m-wide) were sown onto a raised bank along edges or across the centre of fields. Invertebrates were sampled each year in the centre of 5-11 grass/wild bird cover strips and 3 m into 3-4 pasture, 8-12 wheat, 6-8 barley, 3-6 oilseed rape and four field bean fields. Two samples of 0.5 m² were taken in each habitat using a D-Vac suction sampler in June 1995-1999. This study was part of the same experimental set-up as Moreby (2002), Murray et al. (2002).
A study of different set-aside crops on a farm in Leicestershire, UK (Murray et al. 2002), found that Eurasian skylark Alauda arvensis and yellowhammer Emberiza citrinella used wild bird cover set-aside (kale Brassica napus set-aside, cereal set-aside, annual/biennial crop strips) more than expected compared to availability. Skylarks also used wild bird cover more than unmanaged set-aside, broad-leaved crops and other habitats. Yellowhammer used wild bird cover strips more than expected. Cereal set-aside wild bird cover was used significantly more than beetle banks, kale set-aside wild bird cover, unmanaged set-aside and other habitats. Wild bird cover strips were used significantly more than kale set-aside, unmanaged set-aside and other habitats. Field margin and midfield set-aside strips were sown with kale-based and cereal-based mixtures for wild bird cover and beetle banks. Other habitat types were: unmanaged set-aside, cereal (wheat, barley), broad-leaved crop (beans, rape) and other habitats. Thirteen skylark and 15 yellowhammer nests with chicks between 3-10 days old were observed. Foraging habitat used by the adults was recorded for 90 minutes during three periods of the day. This study was part of the same experimental set-up as Moreby (2002), Moreby & Southway (2002).
A small replicated controlled study from May-June 1992-1998 in Leicestershire, UK (Stoate 2002) found that the abundance of nationally declining songbirds and species of conservation concern significantly increased on a 3 km2 site where 20 m-wide mid-field and field-edge strips were planted with game cover crops (alongside several other interventions). However, there was no overall difference in bird abundance, species richness or diversity between the experimental and three control sites. Numbers of nationally declining species rose by 102% (except for Eurasian skylark Alauda arvensis and yellowhammer Emberiza citrinella). Nationally stable species rose (insignificantly) by 47% (eight species increased, four decreased). The other interventions employed at the same site were managing hedges, beetle banks, supplementary feeding, predator control and reducing chemical inputs generally.
A replicated, randomized, controlled study over the winters of 1998-2001 on 161 arable farms across England (Boatman et al. 2003) (same study as (Henderson et al. 2004)) found that, overall, all bird species analysed exhibited higher densities on wild bird cover crops than on conventional crops except Eurasian skylark Alauda arvensis, which preferred cereal stubbles. Although all species showed non-random and different wild bird cover crop preferences, kale Brassica spp. was preferred by the greatest number of species. Additionally, bird abundance was significantly greater on wild bird cover crops located adjacent to hedgerows than those located midfield. Ten annual crops and four biennial crops were planted each year at each of 192 sites with 3 replicates/crop. At 11 and 13 sites in 1999-2000 and 2000-2001 respectively, strips containing the same crop were grown in pairs, one against a hedgerow and one infield, to determine location preference.
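The paired-strip design described above (the same crop grown once against a hedgerow and once infield at each site) is naturally summarised as a within-site difference. A minimal sketch follows; the counts and the simple mean-difference summary are invented for illustration, and the study's actual statistical analysis is not reproduced here.

```python
# Illustrative sketch of a paired-plot comparison: hedgerow strip vs infield
# strip of the same crop at each site. All counts are invented.

def mean_paired_difference(pairs):
    """pairs: list of (birds_adjacent_to_hedgerow, birds_midfield) counts."""
    diffs = [hedge - mid for hedge, mid in pairs]
    return sum(diffs) / len(diffs)

counts = [(14, 6), (9, 2), (11, 7), (5, 1)]   # hypothetical site counts
print(mean_paired_difference(counts))          # positive => hedgerow strips held more birds
```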
A replicated site comparison study of 88 farms in East Anglia and the West Midlands, UK (Browne & Aebischer 2003) found that between 1998 and 2002 there was no difference in the decrease in autumn densities of grey partridge Perdix perdix on farms that planted wild bird cover mixtures and farms that did not. Surveys for grey partridge were made once each autumn in 1998 and 2002 on 88 farms: 38 farms that planted wild bird cover and 50 farms that did not.
A replicated, controlled study over the winters of 1997-1998, 1998-1999 and 2000-2001 on one arable, autumn-sown crop farm in County Durham, England (Stoate et al. 2003) found that farmland bird abundance was significantly higher in wild bird cover crops than commercial crops (420 birds/km2 in wild bird cover vs 30-40/km2 for commercial crops). Of 11 species with sufficient data for analysis, all species-year combinations exhibited significant preferences for wild bird cover crops. Of the wild bird cover crops, kale Brassica napus crops were preferred by nine species and quinoa Chenopodium quinoa crops by six species; cereals and linseed Linum usitatissimum were also used. The wild bird cover crops were planted in c. 20 m-wide strips along one edge of arable wheat, barley or oilseed rape fields. There were approximately 15 experimental and 15 control fields. Bird counts were conducted twice monthly from October-March in 1997-1998 and three times per month from October-December as well as twice monthly from January-March in 1998-1999 and 2000-2001.
A replicated, controlled, before-and-after study from 1998 to 2003 (three years habitat manipulation and three years monitoring) in four cereal farms (12-20 km2) in the Beauce, Grande Beauce and Champagne Berrichonne regions, France (Bro et al. 2004) found that grey partridge Perdix perdix populations were unaffected by cover strips. Neither breeding density nor the reproductive success of breeding pairs increased in managed compared to control areas. The survival rate was significantly lower in managed areas for all winters except for one winter in one site. Observations suggested that cover strips attracted predators, such as foxes Vulpes vulpes and hen harriers Circus cyaneus, causing the managed areas to become ‘ecological traps’. Cover strips (500-1,000 ha/farm) were either set-asides or, typically, a maize-sorghum mixture. Partridges were surveyed in March and mid-December to early-January to assess overwinter mortality, and in August to assess reproductive success.
A 2004 review of experiments on the effects of agri-environment measures on livestock farms in the UK (Buckingham et al. 2004) found that in one experiment in southwest England (the Potential to Enhance Biodiversity in Intensive Livestock farms (PEBIL) project, also reported in (Defra 2007)), birds preferred grass margins sown with plants providing seed food and cover, over plots of grassland subject to various management treatments. The review assessed results from seven experiments (some incomplete at the time of the review) in the UK and Europe.
A replicated study in the summers of 1999-2000 comparing ten different conservation measures on arable farms in the UK (Critchley et al. 2004) found that wildlife seed mixtures (site-specific mixture, but largely planted for birds) appeared to be one of the three best options for the conservation of annual herbaceous plant communities. Uncropped cultivated margins and no-fertilizer conservation headlands were the other two options. The average numbers of plant species in different conservation habitats were: wildlife seed mixtures 6.7, uncropped cultivated margins 6.3, undersown cereals 5.9, naturally regenerated grass margins 5.5, no-fertilizer conservation headlands 4.8, spring fallows 4.5, sown grass margins 4.4, overwinter stubbles 4.2, conservation headlands 3.5, grass leys 3.1. Plant species richness was highest in wildlife seed mixtures due to the range of sown species and a high number of annual arable species. Plants were surveyed on a total of 294 conservation measure sites (each a single field, block of field or field margin strip), on 37 farms in East Anglia (dominated by arable farming) and 38 farms in the West Midlands (dominated by more mixed farming). The ten habitats were created according to agri-environment scheme guidelines. Vegetation was surveyed once in each site in June-August in 1999 or 2000 in thirty 0.25 m2 quadrats randomly placed in 50-100 m randomly located sampling zones in each habitat site. All vascular plant species rooted in each quadrat, bare ground, or litter and plant cover were recorded.
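The per-option species counts above are averages of species richness over quadrats. A short sketch of that bookkeeping is given below; the data structure and the numbers are invented for demonstration, not taken from the study.

```python
# Illustrative only: mean plant species richness per conservation option,
# averaged over 0.25 m2 quadrats. The records below are invented.

from collections import defaultdict

def mean_richness(quadrats):
    """quadrats: list of (option_name, set_of_species) tuples."""
    totals, counts = defaultdict(int), defaultdict(int)
    for option, species in quadrats:
        totals[option] += len(species)
        counts[option] += 1
    return {opt: totals[opt] / counts[opt] for opt in totals}

records = [
    ("wildlife seed mixture", {"a", "b", "c", "d", "e", "f", "g"}),
    ("wildlife seed mixture", {"a", "b", "c", "d", "e", "f"}),
    ("sown grass margin", {"a", "b", "c", "d"}),
]
print(mean_richness(records))
```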
A replicated, randomized study from November 2003 to March 2004 in 205 cereal stubble fields in arable farmland in south Devon, UK (Defra 2004) found no clear changes in habitat use by seed-eating birds after the establishment of wild bird cover crops on some stubble fields. The target species, cirl bunting Emberiza cirlus, made insignificant use of wild bird cover crops (average of 2 individuals/plot). Only two plots contained more than five individuals and use of the habitat dropped drastically in March, which the authors suggest makes the habitat a poor alternative to stubbles. High numbers of other seed-eating species including chaffinch Fringilla coelebs and yellowhammer Emberiza citrinella were recorded on the wild bird cover crops, especially those containing a mixture of rape, millet, linseed Linum usitatissimum, kale Brassica spp. and quinoa Chenopodium quinoa (maximum seed-eating bird count 491 on wild bird cover vs 191 on barley fields). Only song thrush Turdus philomelos abundance was significantly positively related to wild bird cover presence. However, few stubble fields contained wild bird cover crops (13 fields with 24 wild bird cover strips) and the results may have been confounded by low sample size.
A replicated, randomized, controlled study over the winters of 1998-2001 in 192 plots of arable fields in lowland England (Henderson et al. 2004) (same study as (Boatman et al. 2003)) found significantly higher density and diversity of farmland birds on wild bird cover crops than conventional crops. Although there were no significant differences between wild bird covers containing a single plant species and conventional crops, bird density was 50 times higher on ‘preferred’ wild bird covers. Kale Brassica oleracea viridis-dominated wild bird covers supported the widest range of bird species (especially insectivores and seed-eaters), quinoa Chenopodium quinoa-dominated wild bird covers were mainly used by finches and tree sparrows Passer montanus and (unharvested) seeding cereals were mainly used by buntings. Sunflowers Helianthus spp., phacelia Phacelia spp. and buckwheat Fagopyron esculentum were the least preferred wild bird covers. All species, except Eurasian skylark Alauda arvensis, corn bunting Miliaria calandra and rook Corvus frugilegus, were significantly denser on wild bird cover. The differences between wild bird covers were more marked in late-winter as kale and quinoa Chenopodium quinoa retained seeds for longer periods. Within each plot, one wild bird cover and up to four conventional crops were surveyed at least once.
A replicated, randomized, controlled study from November-February in 2000-2001 and 2001-2002 on 20 arable farms in eastern Scotland (Parish & Sotherton 2004a) found that farmland bird abundance and diversity were significantly higher in fields containing wild bird cover crops (0.6-4.2 ha sampled annually) than fields with set-aside, fields with overwinter stubble or fields with conventional crops. Bird density was up to 100 times higher/ha in wild bird cover crops than on control fields. Wild bird cover crops attracted 50% more species than set-aside and stubble fields and 91% more than conventional fields. Of eight species with sufficient data for individual analysis, seven were consistently significantly more abundant in wild bird cover than in control crops. However, Eurasian skylarks Alauda arvensis were significantly more abundant in set-aside and stubble fields. The authors point out that many of the species that favour wild bird cover crops are those currently causing concern because of their declining populations.
A replicated, randomized, controlled study from June-September 2001-2002 of 21 cereal farms in eastern Scotland (Parish & Sotherton 2004b) found that farmland birds were significantly more abundant on fields containing wild bird cover crops than on fields with conventional crops. A total of 25 species were recorded, with up to 80 times more birds seen in wild bird cover than conventional crops. Over all month-crop combinations bird density was significantly higher on wild bird cover crops for all groups except finches in July. Bird density increased steadily over all months of the study on wild bird cover crops, but remained relatively constant on conventional crops. Wild bird cover crops contained up to 90% more weed species, and 280% more important bird-food weeds, than conventional crops. The wild bird cover crops were composed mainly of kale Brassica spp., quinoa Chenopodium quinoa and triticale Triticosecale spp. and were sown in 20 x 650 m strips. A random sample of 4.9 ha of conventional crops was made on each farm.
A review of the results of four projects conducted from 1998 to 2004 on wild bird cover crops planted in arable farms in England (Stoate et al. 2004) found that the density and diversity of bird species increased significantly when wild bird cover crops were included in the farm. Four studies reported greater use of wild bird cover crops than of commercial crops during winter (October-March). One study reported an increase in bird abundance when wild bird cover crops were introduced into areas that previously lacked them. Kale Brassica napus and quinoa Chenopodium quinoa were used by the most species. Buckwheat Fagopyron esculentum was rarely used by species in any of the studies. Millet was used by more species than any other cereal. Three other studies also found that the location of wild bird covers within the whole-farm configuration had an effect on bird densities. Wild bird covers located close to hedges were favoured. Four studies found that a mixture of wild bird cover crops will produce the highest bird density and diversity.
A replicated, controlled, paired sites study over winter 1997-1998 and summer 1999-2000 in arable farmlands in southern England and the Scottish lowlands (Sage et al. 2005) found that songbird density and species richness were higher in wild bird cover crops in both seasons. In total, more species were recorded in wild bird cover winter crops than control plots (26 vs 10 species). Similarly, summer wild bird cover crops contained more species than control plots (14 vs 10 species). Songbird abundance was significantly higher on wild bird cover winter (10-50 individuals/ha vs 1) and summer (3 individuals/ha vs 0.4) crops. There was a significantly higher abundance of declining songbird species in the kale Brassica oleracea and quinoa Chenopodium quinoa, but not cereal wild bird cover crops. Winter wild bird cover plots were sown with kale, quinoa or cereal, while summer wild bird cover plots were predominantly triticale. Thirty experimental and 30 control plots were used in winter, with six experimental and six control plots in summer.
A replicated study in 1999 and 2003 on 256 arable and pastoral fields across 84 farms in East Anglia and the West Midlands, England (Stevens & Bradbury 2006), found that only two of twelve farmland bird species analysed were positively associated with the provision of wildlife seed mixtures, overwinter stubble or set-aside. These were Eurasian skylark Alauda arvensis (a field-nesting species) and Eurasian linnet Carduelis cannabina (a boundary-nesting species). The study did not distinguish between set-aside, wildlife seed mixtures or overwinter stubble, classing all as interventions to provide seeds for farmland birds.
A replicated site comparison study in 1999 and 2003 in the UK (Critchley et al. 2007) found that 33 field margins sown with a locally specific ‘wildlife seed mixture’ had greater numbers of perennial plants and pernicious weeds after four years, but the total number of plant species did not increase (7-8 plant species/margin). This option was not considered the best option for the conservation of arable plants. The most commonly sown plant species were brassicas (sown at 14 sites). Cereals, maize Zea mays, buckwheat Fagopyron esculentum, borage Borago officinalis, grasses, legumes, teasel Dipsacus fullonum and phacelia Phacelia tanacetifolia were also sown at some sites. Plants were surveyed in thirty 0.025 m2 quadrats within a 100 m sampling zone. Percentage cover and plant species were recorded.
A randomized, replicated, controlled trial from 2003 to 2006 in southwest England (Defra 2007) found that plots of permanent pasture sown with a wild bird seed mix attracted more foraging songbirds (dunnock Prunella modularis, wren Troglodytes troglodytes, European robin Erithacus rubecula, seed-eating finches (Fringillidae) and buntings (Emberizidae)) than 12 control plots managed as silage (cut twice in May and July, and grazed in autumn/winter). Dunnocks, but not chaffinches Fringilla coelebs or blackbirds Turdus merula, nested in hedgerows next to the sown plots more than expected, with 2.5 nests/km compared to less than 0.5 nests/km in hedges next to experimental grass plots. Twelve experimental plots (50 x 10 m) were sown on four farms with a mix of crops including linseed Linum usitatissimum and legumes. There were twelve replicates of each management type, monitored over four years. This study was part of the same experimental set-up as (Pilgrim et al. 2007, Potts et al. 2009, Holt et al. 2010).
A 2007 review of published and unpublished literature (Fisher et al. 2007) found experimental evidence of benefits of wild bird seed or cover mix to plants (one study: Critchley et al. 2004) and to invertebrates (true bugs (Hemiptera): Gardner et al. 2001; bumblebees Bombus spp.: Allen et al. 2001).
Allen D.S., Gundrey A.L. & Gardner S.M. (2001) Bumblebees. Technical appendix to ecological evaluation of arable stewardship pilot scheme 1998-2000. ADAS, Wolverhampton, UK.
Gardner S.M., Allen D.S., Woodward J., Mole A.C. & Gundrey A.L. (2001) True bugs. Technical appendix to ecological evaluation of arable stewardship pilot scheme 1998-2000. ADAS, Wolverhampton, UK.
A randomized, replicated, controlled trial from 2003 to 2006 in southwest England (Pilgrim et al. 2007) found that plots of permanent pasture sown with a mix of crops including linseed Linum usitatissimum and legumes attracted more birds, and more bird species, than control treatments, in both summer and winter. Three plots (50 x 10 m) were established on each of four farms in 2002, re-sown in new plots each year, and monitored annually from 2003 to 2006. Legumes sown included white clover Trifolium repens, red clover T. pratense, common vetch Vicia sativa and bird’s-foot trefoil Lotus corniculatus. There were twelve replicates of each treatment. This study was part of the same experimental set-up as (Defra 2007, Potts et al. 2009, Holt et al. 2010).
A replicated controlled trial in 2005-2006 in Warwickshire, UK (Pywell & Nowakowski 2007) found that field corners or margins sown with a wild bird seed mix had more birds and bird species in winter than all other treatments, and more plant species, bumblebees Bombus spp. and butterflies (Lepidoptera) (individuals and species) than control plots sown with winter oats. Fifty-five birds/plot from four species on average were recorded on the wild bird seed plots compared to 0.1-1 bird/plot and 0.1-0.7 species on average on control crop plots, plots sown with wildflower seed mix and plots left to naturally regenerate. There were 11 plant species/m2, 25 bumblebees and four bumblebee species/plot, 25 butterflies and six butterfly species/plot on wild bird seed plots, compared to two plant species/m2, no bumblebees, one butterfly and 0.9 butterfly species/plot in control cereal crop plots. Each treatment was tested in one section of margin and one corner in each of four fields. The wild bird seed mix (five species) was sown in April 2006 and fertilized in late May 2006. The crop (oats) was sown in October 2005. Plants were monitored in three 1 m2 quadrats/plot in July 2006. Butterflies, bumblebees and flowering plants were recorded on a 6 m-wide transect five times between July and September 2006. Farmland birds were counted on each plot on seven counts between December 2006 and March 2007. The second monitoring year of the same study is presented in (Pywell & Nowakowski 2008).
A replicated trial in 2004 and 2005 on four farms in England (Pywell et al. 2007) found that plants, insects, mammals and birds all used sown wild bird seed mix plots more than wheat crop at some times of year. The number of flowers and flowering species, the abundance and number of species of butterflies (Lepidoptera) and the number of bumblebee species Bombus spp., were all higher in the wild bird mix than in the crop. Small mammal activity was higher in the wild bird mix in winter (around 25 mammals/100 trap nights in wild bird mix, compared to around 8 in the crop), and higher in the crop in summer (around 10 mammals caught in the crop, compared to less than one on average in the wild bird mix). The number of birds and bird species were higher in the wild bird mix than the crop in December and January (around 100 birds of over three species per count on average in the wild bird mix, compared to less than 10 birds or <1 species in the crop), but not in February and March. Eurasian linnet Acanthis cannabina (at three sites) and reed bunting Emberiza schoeniclus (at one site) were the most abundant bird species recorded in the wild bird mix. A seed mix containing white millet Echinochloa esculenta, linseed Linum usitatissimum, radish Raphanus sativus and quinoa Chenopodium quinoa was sown in a 150 x 30 m patch in the centre of an arable field (winter wheat) on each of four farms in Cambridgeshire, Bedfordshire, Oxfordshire and Buckinghamshire, in April 2004 and 2005. Plants, bees and butterflies were counted in summer 2005. Small mammals were trapped in November-December 2005 and May-June 2005. Birds were counted once a month between December 2004 and March 2005.
A 2007 systematic review identified five papers investigating the effect of winter bird cover on farmland bird densities in the UK (Roberts & Pullin 2007). There were significantly higher densities of farmland birds in winter on fields with winter bird cover than on adjacent conventionally managed fields. The meta-analysis included experiments conducted between 1998 and 2001 from two controlled trials and one randomized controlled trial.
A replicated, controlled, randomized study on 28 arable farms in East Anglia and southern England (Anon 2008) found that as the area sown with cover crops increased, plant diversity in both regions, the number of butterflies (Lepidoptera) in East Anglia and the number of bees (Apidae) in southern England all increased. Results also suggested that cover crops sown in strips have greater butterfly diversity than those sown in blocks; this did not appear to be the case for bees, although bee numbers recorded were low in the wet, cool summer. One of six treatments was randomly allocated to each farm (two replicates per region): 1.5 ha or 6 ha of project-managed uncropped land in either strips or blocks, or 1.5 ha or 6 ha of farm-managed uncropped land. Two organic farms were also selected per region. Uncropped land was split into four equal areas comprising a floristically-enhanced grass mix, a plant mix to provide summer cover and foraging (e.g. mustard, legume, cereal mixture), a mix to provide winter cover and foraging (e.g. cereal/kale Brassica spp./quinoa Chenopodium quinoa mixture) and annual cultivation to encourage annual arable plants. Plants (April and June) and insects were assessed within and at the edge of three fields (cereal crop, non-cereal crop and uncropped field) in 2006-2009. Butterfly, bee and hoverfly (Syrphidae) diversity and abundance were recorded during transect walks in July.
A replicated, randomized, controlled study in September, November, December and February in 2004-2005 in seven grassland farms (87-96% grass) in western Scotland (Parish & Sotherton 2008) found that songbirds responded significantly more positively to wild bird cover crops in grassland compared to arable regions. Average songbird densities were two orders of magnitude greater in wild bird cover crops than conventional crops (average 51 birds/ha vs 0.2). The average density of songbirds in wild bird cover in the grassland region was more than double that in wild bird cover in the arable region at the same time of year (average 61 and 29 birds/ha respectively). Average bird densities in grassland conventional crops were just 14% of that in the arable region. On each site, an average of 1.2 ha of wild bird cover and 10.3 ha of conventional crops was randomly sampled. Arable farm data from a previous study was used for comparison.
A replicated experiment in northeast Scotland over three winters, 2002-2005 (Perkins et al. 2008), found that unharvested seed-bearing crops were most frequently selected by birds (28% of all birds despite these patches occupying less than 5% of the area surveyed). For nine species, seed-bearing crops were used more than expected (based on available crop area) in at least one winter. Outside agri-environment schemes (the Rural Stewardship Scheme and Farmland Bird Lifeline), cereal stubble was the most selected habitat. In total, 53 lowland farms (23 in Rural Stewardship Scheme, 14 in Farmland Bird Lifeline, and 16 not in a scheme) were assessed. Over 36,000 birds of 10 species were recorded.
A randomized, replicated study in 2006 and 2007 in Warwickshire, UK (Pywell et al. 2008) (same study as (Pywell et al. 2010)) found that butterflies (Lepidoptera) and bumblebees Bombus spp. displayed different preferences for 13 annual and perennial plant species, 10 of which were typical components of wild bird seed mixtures. In 2006, more butterflies were found in plots sown with lucerne Medicago sativa (6.3 butterflies/plot) than plots sown with borage Borago officinalis (0.3), chicory Cichorium intybus (0.8) and sainfoin Onobrychis viciifolia (0.8). More butterfly species were found in lucerne plots (3.5 species/plot) than in borage, chicory, sainfoin and fodder radish Raphanus sativus (0.3-0.5). In 2007, red clover Trifolium pratense plots had the largest number of butterflies, significantly more than chicory (3.3 vs 0.0 butterflies/plot), whilst all other plant species ranged between 0.3 and 2.3 butterflies/plot. In both years, bumblebees were most abundant in phacelia Phacelia tanacetifolia plots (134 and 38.5 bumblebees/plot in 2006 and 2007), followed by borage (100 and 32). Crimson clover T. incarnatum and sunflower Helianthus annuus (37 and 26 respectively) had more bumblebees than other plant species (0-6) in 2006. Red clover plots had more bumblebees (21) than buckwheat Fagopyrum esculentum, chicory, linseed Linum usitatissimum, lucerne, mustard Brassica juncea or sweet clover Melilotus officinalis in 2007. The number of bumblebee species recorded in crimson clover, phacelia, borage and sunflower was significantly higher than all other plant species (2.8-4.0 vs 0-1.3 species/plot) in 2006. In 2007, red clover in addition to the four species from 2006 had significantly more bumblebee species than mustard (3.0-3.3 vs 0.5 species/plot). Short-tongued bees showed a significant preference for phacelia and borage compared with all other treatments in both years. Long-tongued bees showed a significant preference for crimson clover over all other species apart from borage and phacelia in 2006, and red clover in 2007 (although they also showed a strong preference for crimson clover and sainfoin in 2007). Peak flowering of many important bee forage species was in late July, including phacelia, borage, red clover and sweet clover. Thirteen species were sown in single species stands in 6 x 4 m plots with four replicates in May 2006. Annual species were re-established in the same plots in May 2007. Abundance and diversity of butterflies and bumblebees were recorded on transects in each plot six times between July and September 2006 and May and September 2007. On each visit the percentage cover of flowers of all dicot species/plot was estimated.
The second monitoring year of the same study as (Pywell & Nowakowski 2007) in the UK (Pywell & Nowakowski 2008) found that wild bird seed mix plots had more birds in winter (86 birds/plot, of six species on average) than control cereal plots, plots sown with wildflower seed mix or plots left to naturally regenerate (2 birds/plot or less, and 0.4-1.6 species/plot on average). Wild bird seed plots also had more bumblebee Bombus spp. and butterfly (Lepidoptera) individuals and species than naturally regenerated or control cereal plots and more vacuum-sampled invertebrates than control plots. Wild bird seed plots had eight plant species/m2, 40 bumblebees and four bumblebee species/plot, 18 butterflies and six butterfly species/plot, compared to three plant species/m2, no bumblebees and one butterfly/plot on control cereal plots. Control plots had 254 vacuum-sampled canopy-dwelling invertebrates/m2 on average, compared to 840-1,197/m2 on other treatments. Plants were monitored in three 1 m2 quadrats/plot in June 2007. Butterflies, bumblebees and flowering plants were recorded in a 6 m-wide transect six times between July and September 2006 and 2007. Invertebrates in the vegetation were vacuum sampled in early July 2007. Farmland birds were counted on each plot on four counts between December 2007 and March 2008. The crop control in year two was winter wheat.
A 2009 literature review of agri-environment schemes in England (Natural England 2009) found that high densities of seed-eating songbirds and Eurasian skylark Alauda arvensis were found on land planted with wild bird seed or cover mix and on stubble fields. A survey in 2007-2008 found that densities of seed-eating songbirds were highest on wild bird seed or cover mix, compared to other agri-environment scheme options.
A randomized, replicated, controlled trial from 2003 to 2006 in southwest England (Potts et al. 2009) found plots on permanent pasture annually sown with a mix of legumes, or grass and legumes, supported more common bumblebees Bombus spp. (individuals and species) than seven grass management options. In the first two years, the numbers of common butterflies (Lepidoptera) and common butterfly species were higher in plots sown with legumes than in five intensively managed grassland treatments. No more than 2.2 bumblebees/transect were recorded on average on any grass-only plot in any year, compared to over 15 bumblebees/transect in both sown treatments in 2003. The plots sown with legumes generally had fewer butterfly larvae than all grass-only treatments, including conventional silage and six different management treatments. Experimental plots 50 x 10 m were established on permanent pastures (more than five years old) on four farms. There were nine different management types, with three replicates/farm, monitored over four years. Seven management types involved different management options for grass-only plots, including mowing and fertilizer addition. The two legume-sown treatments comprised either a mix of crops sown partly for wild birds, including linseed Linum usitatissimum and legumes, uncut, or spring barley Hordeum vulgare undersown with a grass and legume mix (white clover Trifolium repens, red clover T. pratense, common vetch Vicia sativa, bird’s-foot trefoil Lotus corniculatus and black medick Medicago lupulina) cut once in July. Bumblebees and butterflies were surveyed along a 50 m transect line in the centre of each experimental plot, once a month from June to September annually. Butterfly larvae were sampled on two 10 m transects using a sweep net in April and June-September annually. This study was part of the same experimental set-up as (Defra 2007, Pilgrim et al. 2007, Holt et al. 2010).
A 2009 literature review of European farmland conservation practices (Vickery et al. 2009) found that margins sown with wild bird cover had high numbers of some invertebrates which are important bird food, but lower numbers than on margins sown with a wildflower mix. Cover crops such as quinoa Chenopodium quinoa and kale Brassica oleracea provided more food for seed-eating birds in late winter than other field margin types and supported large numbers of some songbird species.
A controlled study in 2002-2009 on mixed farmland in Hertfordshire, England (Aebischer & Ewald 2010) found that the estimated population density of grey partridges Perdix perdix was significantly higher on land sown with wild bird cover than on conventional arable crops. This study also examined the densities found on land under various agri-environment schemes and set-aside (which were higher than those on wild bird cover) and the impact of predator control and supplementary food provision. Grey partridges were surveyed in March and September using dawn and dusk counts starting in 2001. Land cover within the project area was mapped and categorized as: conventional arable land, arable in agri-environment schemes, non-arable, or set-aside (which was further divided into non-rotational, wild bird cover, other rotational).
A 2010 follow-up review of experiments on the effects of agri-environment measures on livestock farms in the UK (Buckingham et al. 2010) found that, in one experiment in southwest England (the Potential for Enhancing Biodiversity on Intensive Livestock Farms (PEBIL) project BD1444, also reported in (Defra 2007)), small insect-eating birds preferred field margins sown with a diverse mixture of plants that provided seed food over grass margins subject to different management techniques, despite there being no difference in the number of insects between the two sets of treatments. The preference for wild bird cover was attributed to easier accessibility (less dense ground cover). The review assessed results from four experimental projects (one incomplete at the time of the review) in the UK.
A replicated site comparison study in 2005 and 2008 of 2,046 1-km² plots of lowland farmland in England (Davey et al. 2010b) (same study as (Davey et al. 2010a)) found that three years after the 2005 introduction of two agri-environment schemes, Countryside Stewardship Scheme and Entry Level Stewardship, there was no consistent association between the provision of wild bird cover and farmland bird numbers. European greenfinch Carduelis chloris, stock dove Columba oenas, starling Sturnus vulgaris and woodpigeon Columba palumbus showed more positive population change (population increases or smaller decreases relative to other plots) in the 9 km² and 25 km² areas immediately surrounding plots planted with wild bird cover mix than in the area surrounding plots not planted with wildlife seed mixture. Although Eurasian linnet Carduelis cannabina and rook Corvus frugilegus also showed positive associations with wild bird cover mix at the 25 km² scale, plots with wild bird cover were associated with a greater decline in grey partridge Perdix perdix populations at both scales between 2005 and 2008. The 2,046 1-km² lowland plots were surveyed in both 2005 and 2008 and classified as arable, pastoral or mixed farmland. Eighty-four percent of plots included some area managed according to Entry Level Stewardship or the Countryside Stewardship Scheme. In both survey years, two surveys were conducted along a 2 km pre-selected transect route through each 1 km² square.
A replicated site comparison study from 2004 to 2008 in England (Ewald et al. 2010) found that the ratio of young-to-old grey partridges Perdix perdix was higher in 2007 and 2008 on sites with higher proportions of wild bird cover. Brood sizes were also related to wild bird cover in 2008 only. Overwinter survival was positively related to wild bird cover in 2004-2005, but negatively in 2007-2008. There were no relationships between wild bird cover and year-on-year density trends. Spring and autumn counts of grey partridge were made at 1,031 sites across England as part of the Partridge Count Scheme.
A replicated site comparison study between November 2007 and February 2008 of 52 fields in East Anglia and the West Midlands (Field et al. 2010a) (same study as (Field et al. 2010b)) found no difference in the number of seed-eating birds between fields managed under Higher Level Stewardship of the Environmental Stewardship scheme (fields sown with enhanced wild bird seed mix) and fields managed under Entry Level Stewardship of the scheme (fields sown with wild bird cover mix). In East Anglia, but not the West Midlands, there were significantly more seed-eating birds on fields planted with wild bird cover under the Environmental Stewardship scheme (59.3 birds/ha) than non-Environmental Stewardship fields planted with a game cover (2.1 birds/ha). Seed-eating birds were surveyed on two visits to each site between 1 November 2007 and 29 February 2008.
A replicated site comparison study in winter 2007-2008 on farms in East Anglia and the West Midlands, England (Field et al. 2010b) (same study as (Field et al. 2010a)) found that more seed-eating farmland songbirds (including tree sparrow Passer montanus and corn bunting Emberiza calandra) were found on Higher Level Stewardship wild bird seed mix sites (6-11 birds/ha) than on non-stewardship game cover crops (<0.5 birds/ha) in East Anglia, but not in the West Midlands (2-4 birds/ha on both types). The survey was carried out on 27 farms with Higher Level Stewardship, 13 farms with Entry Level Stewardship and 14 with no environmental stewardship.
A replicated study from April-July in 2006 on four livestock farms in southwest England (Holt et al. 2010) found that dunnock Prunella modularis, but not Eurasian blackbird Turdus merula or chaffinch Fringilla coelebs, nested at higher densities in hedges alongside field margins sown with wild bird seed crops, or barley undersown with grass and clover, compared to those next to grassy field edges under various management options (dunnock: approximately 2.5 nests/km for seed crops vs. 0.3/km for grass margins, blackbirds: 1.0 vs. 1.3, chaffinch: 1.5 vs. 1.4). Margins were 10 x 50 m and located adjacent to existing hedgerows. Seed crop margins were sown with barley (undersown with grass/legumes) or a kale Brassica spp./quinoa Chenopodium quinoa mix. There were 12 replicates of each treatment, three replicates on each farm. This study was part of the same experimental set-up as (Defra 2007, Pilgrim et al. 2007, Potts et al. 2009).
A replicated, randomized study in 2006 and 2007 in Warwickshire, UK (Pywell et al. 2010) (same study as (Pywell et al. 2008)) found bee (Apidae) and butterfly (Lepidoptera) abundance and species richness were higher in stands of specific sown plant species. Bumblebee Bombus spp. abundance and species richness were significantly higher on plots sown with phacelia Phacelia tanacetifolia and borage Borago officinalis (32-85 bees/plot) compared to other treatments (1-22 bees/plot). Crimson clover Trifolium incarnatum (10-21 bees/plot), sunflower Helianthus annuus (10-22) and in 2007 red clover Trifolium pratense (20) also tended to have high bee abundances (other plant species: 1-11 bees/plot). Short- and long-tongued bees showed differences in preferences. In 2006, butterfly abundance and species richness were significantly higher in plots with lucerne Medicago sativa compared to borage, chicory Cichorium intybus and sainfoin Onobrychis viciifolia. In 2007 butterfly abundance was higher in red clover compared with chicory, but the number of species did not differ between treatments. Mobile and immobile butterfly species showed differences in preferences. Flowers of buckwheat Fagopyrum esculentum were the most abundant, followed by phacelia, borage and sunflower in 2006. In 2007 fodder radish, red clover and sweet clover Melilotus officinalis also had high flower abundance. Mustard Brassica juncea and linseed Linum usitatissimum had the least abundant flowers in both years, along with other species each year. Thirteen species were sown in single species stands: nine small-seeded crop species typically sown in wild bird seed mixes and four wildflower species typically sown in pollen and nectar seed mixes. The species were sown in May each year in adjacent 6 x 4 m plots in a randomized block experiment with four replicates. Butterflies and bumblebees were sampled by walking transects through each plot on six occasions from May-September. Flower cover was estimated at the same time.
A replicated study on four farms in Gloucestershire and Oxfordshire, England, in 2007 (Rantanen et al. 2010) found that grey partridge Perdix perdix released in coveys in the autumn used cover crops more frequently than birds released in pairs in the spring. Four farms were studied. Birds were radio-tagged and their positions marked on a 1:5000 map.
A replicated controlled study in summer 2008 in northwest Scotland (Redpath et al. 2010) found that croft sections (an agricultural system specific to Scotland, consisting of small agricultural units with rotational cropping regimes and livestock production) sown with a brassica-rich ‘bird and bumblebee’ conservation seed mix had 47 times more foraging bumblebees than sheep-grazed sections and 16 times more bumblebees Bombus spp. than winter-grazed pastures in June. In July the ‘bird and bumblebee’ mix sections had 248 and 65 times more bumblebees than sections grazed by sheep or both sheep and cattle respectively. The number of bumblebees in July was also significantly higher (4-16 times) in ‘bird and bumblebee’ sections than in arable, fallow, silage, and winter-grazed pasture sections. The availability of bumblebee forage plant flowers was lower in ‘bird and bumblebee’ sections than in silage sections in June, but no other significant differences involving the conservation mix were detected. Plant species in the legume (Fabaceae) family were the most frequently visited by foraging bumblebees. Tufted vetch Vicia cracca was one of a few plant species favoured by bumblebees and was predominantly found in ‘bird and bumblebee’ sections in July-August, although it was not part of the seed mixture. Thirty-one crofts located on Lewis, Harris, the Uists and at Durness were included in the study. The ‘bird and bumblebee’ conservation mix was sown for several bird species and foraging bumblebees; species sown included kale Brassica oleracea, mustard Brassica spp., phacelia Phacelia spp., fodder radish Raphanus sativus, linseed Linum usitatissimum and red clover Trifolium pratense. In addition to the seven management types mentioned, unmanaged pastures were also surveyed. Foraging bumblebees and bumblebee forage plants were recorded along zigzag or L-shaped transects in each croft section once in June, July and August 2008. Foraging bumblebees 2 m either side of transects were identified to species and recorded together with the plant species on which they were foraging. Flowers of all plant species were counted in 0.25 m2 quadrats at 20 or 50 m intervals along the transects.
This action forms part of the Action Synopsis: Farmland Conservation (published 2013).
In radio engineering, an antenna is the interface between radio waves propagating through space and electric currents moving in metal conductors, used with a transmitter or receiver. In transmission, a radio transmitter supplies an electric current to the antenna's terminals, and the antenna radiates the energy from the current as electromagnetic waves (radio waves). In reception, an antenna intercepts some of the power of a radio wave in order to produce an electric current at its terminals, that is applied to a receiver to be amplified. Antennas are essential components of all radio equipment, and are used in radio broadcasting, broadcast television, two-way radio, communications receivers, radar, cell phones, satellite communications and other devices.
An antenna is an array of conductors (elements), electrically connected to the receiver or transmitter. During transmission, the oscillating current applied to the antenna by a transmitter creates an oscillating electric field and magnetic field around the antenna elements. These time-varying fields radiate energy away from the antenna into space as a moving transverse electromagnetic field wave, a radio wave. Conversely, during reception, the oscillating electric and magnetic fields of an incoming radio wave exert force on the electrons in the antenna elements, causing them to move back and forth, creating oscillating currents in the antenna.
Antennas can be designed to transmit and receive radio waves in all horizontal directions equally (omnidirectional antennas), or preferentially in a particular direction (directional or high gain antennas). An antenna may include parasitic elements, parabolic reflectors or horns, which serve to direct the radio waves into a beam or other desired radiation pattern.
The first antennas were built in 1888 by German physicist Heinrich Hertz in his pioneering experiments to prove the existence of electromagnetic waves predicted by the theory of James Clerk Maxwell. Hertz placed dipole antennas at the focal point of parabolic reflectors for both transmitting and receiving. He published his work in Annalen der Physik und Chemie (vol. 36, 1889).
- 1 Terminology
- 2 Overview
- 3 Reciprocity
- 4 Characteristics
- 5 Antenna types
- 6 Effect of ground
- 7 Mutual impedance and interaction between antennas
- 8 See also
- 9 Notes
- 10 References
The words antenna and aerial are used interchangeably. Occasionally the term "aerial" is used to mean a wire antenna. The origin of the word antenna relative to wireless apparatus is attributed to Italian radio pioneer Guglielmo Marconi. In the summer of 1895, Marconi began testing his wireless system outdoors on his father's estate near Bologna and soon began to experiment with long wire "aerials" suspended from a pole. In Italian a tent pole is known as l'antenna centrale, and the pole with the wire was simply called l'antenna. Until then wireless radiating transmitting and receiving elements were known simply as terminals. Because of his prominence, Marconi's use of the word antenna spread among wireless researchers, and later to the general public.
Antenna may refer broadly to an entire assembly including support structure, enclosure (if any), etc. in addition to the actual functional components. Especially at microwave frequencies, a receiving antenna may include not only the actual electrical antenna but an integrated preamplifier or mixer.
Antennas are required by any radio receiver or transmitter to couple its electrical connection to the electromagnetic field. Radio waves are electromagnetic waves which carry signals through the air (or through space) at the speed of light with almost no transmission loss. Radio transmitters and receivers are used to convey signals in broadcast (audio) radio, television, mobile telephones, Wi-Fi (WLAN) data networks, and remote control devices among many others. Radio waves are also used directly for measurements in radar, GPS, and radio astronomy. Transmitters and receivers require antennas, although these are sometimes hidden (such as the antenna inside an AM radio or inside a laptop computer equipped with Wi-Fi).
Antennas can be classified as omnidirectional, radiating energy approximately equally in all directions, or directional, where energy radiates more along one direction than others. (Antennas are reciprocal, so the same effect occurs for reception of radio waves.) A completely uniform omnidirectional antenna is not physically possible. Many important antenna types have a uniform radiation pattern in the horizontal plane, but send little energy upward or downward. A "directional" antenna is usually intended to maximize its coupling to the electromagnetic field in the direction of the other station.
One example of omnidirectional antennas is the very common vertical antenna or whip antenna consisting of a metal rod. A dipole antenna is similar but consists of two such conductors extending in opposite directions. Dipoles are typically oriented horizontally in which case they are weakly directional: signals are reasonably well radiated toward or received from all directions with the exception of the direction along the conductor itself; this region is called the antenna blind cone or null.
Both the vertical and dipole antennas are simple in construction and relatively inexpensive. The dipole antenna, which is the basis for most antenna designs, is a balanced component, with equal but opposite voltages and currents applied at its two terminals through a balanced transmission line (or to a coaxial transmission line through a so-called balun). The vertical antenna, on the other hand, is a monopole antenna. It is typically connected to the inner conductor of a coaxial transmission line (or a matching network); the shield of the transmission line is connected to ground. In this way, the ground (or any large conductive surface) plays the role of the second conductor of a dipole, thereby forming a complete circuit. Since monopole antennas rely on a conductive ground, a so-called grounding structure may be employed to provide a better ground contact to the earth or which itself acts as a ground plane to perform that function regardless of (or in absence of) an actual contact with the earth.
Antennas more complex than the dipole or vertical designs are usually intended to increase the directivity and consequently the gain of the antenna. This can be accomplished in many different ways leading to a plethora of antenna designs. The vast majority of designs are fed with a balanced line (unlike a monopole antenna) and are based on the dipole antenna with additional components (or elements) which increase its directionality. Antenna "gain" in this instance describes the concentration of radiated power into a particular solid angle of space, as opposed to the spherically uniform radiation of the ideal radiator. The increased power in the desired direction is at the expense of that in the undesired directions. Power is conserved, and there is no net power increase over that delivered from the power source (the transmitter.)
For instance, a phased array consists of two or more simple antennas which are connected together through an electrical network. This often involves a number of parallel dipole antennas with a certain spacing. Depending on the relative phase introduced by the network, the same combination of dipole antennas can operate as a "broadside array" (directional normal to a line connecting the elements) or as an "end-fire array" (directional along the line connecting the elements). Antenna arrays may employ any basic (omnidirectional or weakly directional) antenna type, such as dipole, loop or slot antennas. These elements are often identical.
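As a rough illustration of how the feed phasing selects the favoured direction, the sketch below evaluates the array factor of a two-element array of idealized isotropic radiators spaced half a wavelength apart. The spacing, element count and framing are illustrative assumptions, not taken from any particular array described here.

```python
import numpy as np

# Minimal sketch (assumed values): array factor of a two-element array of ideal
# isotropic radiators spaced half a wavelength apart, fed with a progressive
# phase shift beta. beta = 0 gives a broadside pattern; beta = -k*d gives an
# end-fire pattern.
wavelength = 1.0
d = wavelength / 2            # element spacing
k = 2 * np.pi / wavelength    # wavenumber

def array_factor(theta_deg, beta, n_elements=2):
    """Magnitude of the array factor at angle theta from the line of elements."""
    theta = np.radians(theta_deg)
    n = np.arange(n_elements)
    return abs(np.exp(1j * n * (k * d * np.cos(theta) + beta)).sum())

for label, beta in [("broadside (beta = 0)", 0.0), ("end-fire (beta = -k*d)", -k * d)]:
    print(label,
          "| along the line:", round(array_factor(0, beta), 2),
          "| perpendicular:", round(array_factor(90, beta), 2))
```

With zero phase shift the two signals add perpendicular to the line of elements and cancel along it; with a half-cycle phase shift the situation reverses.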
However a log-periodic dipole array consists of a number of dipole elements of different lengths in order to obtain a somewhat directional antenna having an extremely wide bandwidth: these are frequently used for television reception in fringe areas. The dipole antennas composing it are all considered "active elements" since they are all electrically connected together (and to the transmission line). On the other hand, a superficially similar dipole array, the Yagi-Uda Antenna (or simply "Yagi"), has only one dipole element with an electrical connection; the other so-called parasitic elements interact with the electromagnetic field in order to realize a fairly directional antenna but one which is limited to a rather narrow bandwidth. The Yagi antenna has similar looking parasitic dipole elements but which act differently due to their somewhat different lengths. There may be a number of so-called "directors" in front of the active element in the direction of propagation, and usually a single (but possibly more) "reflector" on the opposite side of the active element.
Greater directionality can be obtained using beam-forming techniques such as a parabolic reflector or a horn. Since high directivity in an antenna depends on it being large compared to the wavelength, narrow beams of this type are more easily achieved at UHF and microwave frequencies.
At low frequencies (such as AM broadcast), arrays of vertical towers are used to achieve directionality and they will occupy large areas of land. For reception, a long Beverage antenna can have significant directivity. For non directional portable use, a short vertical antenna or small loop antenna works well, with the main design challenge being that of impedance matching. With a vertical antenna a loading coil at the base of the antenna may be employed to cancel the reactive component of impedance; small loop antennas are tuned with parallel capacitors for this purpose.
An antenna lead-in is the transmission line, or feed line, which connects the antenna to a transmitter or receiver. The "antenna feed" may refer to all components connecting the antenna to the transmitter or receiver, such as an impedance matching network in addition to the transmission line. In a so-called aperture antenna, such as a horn or parabolic dish, the "feed" may also refer to a basic antenna inside the entire system (normally at the focus of the parabolic dish or at the throat of a horn) which could be considered the one active element in that antenna system. A microwave antenna may also be fed directly from a waveguide in place of a (conductive) transmission line.
An antenna counterpoise, or ground plane, is a structure of conductive material which improves or substitutes for the ground. It may be connected to or insulated from the natural ground. In a monopole antenna, this aids in the function of the natural ground, particularly where variations (or limitations) of the characteristics of the natural ground interfere with its proper function. Such a structure is normally connected to the return connection of an unbalanced transmission line such as the shield of a coaxial cable.
An electromagnetic wave refractor in some aperture antennas is a component which due to its shape and position functions to selectively delay or advance portions of the electromagnetic wavefront passing through it. The refractor alters the spatial characteristics of the wave on one side relative to the other side. It can, for instance, bring the wave to a focus or alter the wave front in other ways, generally in order to maximize the directivity of the antenna system. This is the radio equivalent of an optical lens.
An antenna coupling network is a passive network (generally a combination of inductive and capacitive circuit elements) used for impedance matching in between the antenna and the transmitter or receiver. This may be used to improve the standing wave ratio in order to minimize losses in the transmission line and to present the transmitter or receiver with a standard resistive impedance that it expects to see for optimum operation.
It is a fundamental property of antennas that the electrical characteristics of an antenna described in the next section, such as gain, radiation pattern, impedance, bandwidth, resonant frequency and polarization, are the same whether the antenna is transmitting or receiving. For example, the "receiving pattern" (sensitivity as a function of direction) of an antenna when used for reception is identical to the radiation pattern of the antenna when it is driven and functions as a radiator. This is a consequence of the reciprocity theorem of electromagnetics. Therefore, in discussions of antenna properties no distinction is usually made between receiving and transmitting terminology, and the antenna can be viewed as either transmitting or receiving, whichever is more convenient.
A necessary condition for the aforementioned reciprocity property is that the materials in the antenna and transmission medium are linear and reciprocal. Reciprocal (or bilateral) means that the material has the same response to an electric current or magnetic field in one direction, as it has to the field or current in the opposite direction. Most materials used in antennas meet these conditions, but some microwave antennas use high-tech components such as isolators and circulators, made of nonreciprocal materials such as ferrite. These can be used to give the antenna a different behavior on receiving than it has on transmitting, which can be useful in applications like radar.
Antennas are characterized by a number of performance measures which a user would be concerned with in selecting or designing an antenna for a particular application. Chief among these relate to the directional characteristics (as depicted in the antenna's radiation pattern) and the resulting gain. Even in omnidirectional (or weakly directional) antennas, the gain can often be increased by concentrating more of its power in the horizontal directions, sacrificing power radiated toward the sky and ground. The antenna's power gain (or simply "gain") also takes into account the antenna's efficiency, and is often the primary figure of merit.
Resonant antennas are expected to be used around a particular resonant frequency; an antenna must therefore be built or ordered to match the frequency range of the intended application. A particular antenna design will present a particular feedpoint impedance. While this may affect the choice of an antenna, an antenna's impedance can also be adapted to the desired impedance level of a system using a matching network while maintaining the other characteristics (except for a possible loss of efficiency).
Although these parameters can be measured in principle, such measurements are difficult and require very specialized equipment. Beyond tuning a transmitting antenna using an SWR meter, the typical user will depend on theoretical predictions based on the antenna design or on claims of a vendor.
An antenna transmits and receives radio waves with a particular polarization which can be reoriented by tilting the axis of the antenna in many (but not all) cases. The physical size of an antenna is often a practical issue, particularly at lower frequencies (longer wavelengths). Highly directional antennas need to be significantly larger than the wavelength. Resonant antennas usually use a linear conductor (or element), or pair of such elements, each of which is about a quarter of the wavelength in length (an odd multiple of quarter wavelengths will also be resonant). Antennas that are required to be small compared to the wavelength sacrifice efficiency and cannot be very directional. At higher frequencies (UHF, microwaves) trading off performance to obtain a smaller physical size is usually not required.
The majority of antenna designs are based on the resonance principle. This relies on the behaviour of moving electrons, which reflect off surfaces where the dielectric constant changes, in a fashion similar to the way light reflects when optical properties change. In these designs, the reflective surface is created by the end of a conductor, normally a thin metal wire or rod, which in the simplest case has a feed point at one end where it is connected to a transmission line. The conductor, or element, is aligned with the electrical field of the desired signal, normally meaning it is perpendicular to the line from the antenna to the source (or receiver in the case of a broadcast antenna).
The radio signal's electrical component induces a voltage in the conductor. This causes an electrical current to begin flowing in the direction of the signal's instantaneous field. When the resulting current reaches the end of the conductor, it reflects, which is equivalent to a 180 degree change in phase. If the conductor is 1⁄4 of a wavelength long, current from the feed point will undergo 90 degree phase change by the time it reaches the end of the conductor, reflect through 180 degrees, and then another 90 degrees as it travels back. That means it has undergone a total 360 degree phase change, returning it to the original signal. The current in the element thus adds to the current being created from the source at that instant. This process creates a standing wave in the conductor, with the maximum current at the feed.
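The phase bookkeeping in the preceding paragraph can be written out directly; the snippet below is just that arithmetic for a hypothetical quarter-wave element, not a model of any specific antenna.

```python
# Minimal sketch of the phase accounting above: 90 degrees of travel to the open
# end, a 180 degree reflection of the current, and 90 degrees of travel back,
# for 360 degrees in total.
element_length_in_wavelengths = 0.25
one_way_travel = 360 * element_length_in_wavelengths   # degrees
reflection = 180                                        # current reversal at the open end
total = one_way_travel + reflection + one_way_travel
print(total, "degrees ->", total % 360, "i.e. back in phase with the source")
```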
The ordinary half-wave dipole is probably the most widely used antenna design. This consists of two 1⁄4-wavelength elements arranged end-to-end, and lying along essentially the same axis (or collinear), each feeding one side of a two-conductor transmission wire. The physical arrangement of the two elements places them 180 degrees out of phase, which means that at any given instant one of the elements is driving current into the transmission line while the other is pulling it out. The monopole antenna is essentially one half of the half-wave dipole, a single 1⁄4-wavelength element with the other side connected to ground or an equivalent ground plane (or counterpoise). Monopoles, which are one-half the size of a dipole, are common for long-wavelength radio signals where a dipole would be impractically large. Another common design is the folded dipole, which is essentially two dipoles placed side-by-side and connected at their ends to make a single one-wavelength antenna.
The standing wave forms with this desired pattern at the design frequency, f0, and antennas are normally designed to be this size. However, feeding that element with 3f0 (whose wavelength is 1⁄3 that of f0) will also lead to a standing wave pattern. Thus, an antenna element is also resonant when its length is 3⁄4 of a wavelength. This is true for all odd multiples of 1⁄4 wavelength. This allows some flexibility of design in terms of antenna lengths and feed points. Antennas used in such a fashion are said to be harmonically operated.
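A quick numeric check of the odd-multiple rule, using an assumed 30 MHz design frequency purely for illustration:

```python
# Sketch (assumed design frequency): an element cut for a quarter wavelength at
# f0 is also an odd multiple of a quarter wavelength at 3*f0 and 5*f0.
c = 299_792_458.0          # speed of light, m/s
f0 = 30e6                  # design frequency, Hz (assumed)
L = c / (4 * f0)           # element cut to a quarter wavelength at f0

for n in (1, 3, 5):
    wavelength = c / (n * f0)
    print(f"{n * f0 / 1e6:.0f} MHz: element is {L / wavelength:.2f} wavelengths long")
# -> 0.25, 0.75 and 1.25 wavelengths: odd multiples of a quarter wave, all resonant
```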
Current and voltage distribution
The quarter-wave elements imitate a series-resonant electrical element due to the standing wave present along the conductor. At the resonant frequency, the standing wave has a current peak and voltage node (minimum) at the feed. In electrical terms, this means the element has minimum reactance, generating the maximum current for minimum voltage. This is the ideal situation, because it produces the maximum output for the minimum input, producing the highest possible efficiency. Contrary to an ideal (lossless) series-resonant circuit, a finite resistance remains (corresponding to the relatively small voltage at the feed-point) due to the antenna's radiation resistance as well as any actual electrical losses.
Recall that a current will reflect when there are changes in the electrical properties of the material. In order to efficiently send the signal into the transmission line, it is important that the transmission line has the same impedance as the elements, otherwise some of the signal will be reflected back into the antenna. This leads to the concept of impedance matching, the design of the overall system of antenna and transmission line so the impedance is as close as possible, thereby reducing these losses. Impedance matching between antennas and transmission lines is commonly handled through the use of a balun, although other solutions are also used in certain roles. An important measure of this basic concept is the standing wave ratio, which measures the magnitude of the reflected signal.
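A minimal sketch of that measure: the magnitude of the reflection coefficient and the resulting voltage standing wave ratio for a feed-point impedance on a line of assumed 50 ohm characteristic impedance. Both load impedances below are illustrative values, not measurements.

```python
# Sketch (assumed impedances): reflection coefficient magnitude and VSWR.
def vswr(z_load, z0=50 + 0j):
    gamma = abs((z_load - z0) / (z_load + z0))   # fraction of the wave reflected
    return (1 + gamma) / (1 - gamma)

print(round(vswr(63 + 0j), 2))      # near-resonant dipole: low SWR, little reflection
print(round(vswr(73 + 300j), 2))    # far off resonance: large reactance, poor SWR
```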
Consider a half-wave dipole designed to work with signals 1 m wavelength, meaning the antenna would be approximately 50 cm across. If the element has a length-to-diameter ratio of 1000, it will have an inherent resistance of about 63 ohms. Using the appropriate transmission wire or balun, we match that resistance to ensure minimum signal loss. Feeding that antenna with a current of 1 ampere will require 63 volts of RF, and the antenna will radiate 63 watts (ignoring losses) of radio frequency power. Now consider the case when the antenna is fed a signal with a wavelength of 1.25 m; in this case the reflected current would arrive at the feed out-of-phase with the signal, causing the net current to drop while the voltage remains the same. Electrically this appears to be a very high impedance. The antenna and transmission line no longer have the same impedance, and the signal will be reflected back into the antenna, reducing output. This could be addressed by changing the matching system between the antenna and transmission line, but that solution only works well at the new design frequency.
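The matched-case numbers in this example follow from Ohm's law and P = I²R; a short sketch of that arithmetic:

```python
# Worked numbers from the example above (losses ignored).
I = 1.0     # amperes of RF drive current
R = 63.0    # ohms, radiation resistance of the example dipole
print("feed voltage:", I * R, "V")        # 63.0
print("radiated power:", I**2 * R, "W")   # 63.0
```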
The end result is that the resonant antenna will efficiently feed a signal into the transmission line only when the source signal's frequency is close to that of the design frequency of the antenna, or one of the resonant multiples. This makes resonant antenna designs inherently narrowband, and they are most commonly used with a single target signal. They are particularly common on radar systems, where the same antenna is used for both broadcast and reception, or for radio and television broadcasts, where the antenna is working with a single frequency. They are less commonly used for reception where multiple channels are present, in which case additional modifications are used to increase the bandwidth, or entirely different antenna designs are used.
Electrically short antennas
It is possible to use simple impedance matching techniques to allow the use of monopole or dipole antennas substantially shorter than the ¼ or ½ wavelength, respectively, at which they are resonant. As these antennas are made shorter (for a given frequency) their impedance becomes dominated by a series capacitive (negative) reactance; by adding a series inductance with the opposite (positive) reactance – a so-called loading coil – the antenna's reactance may be cancelled leaving only a pure resistance. Sometimes the resulting (lower) electrical resonant frequency of such a system (antenna plus matching network) is described using the concept of electrical length, so an antenna used at a lower frequency than its resonant frequency is called an electrically short antenna.
For example, at 30 MHz (10 m wavelength) a true resonant ¼ wavelength monopole would be almost 2.5 meters long, and using an antenna only 1.5 meters tall would require the addition of a loading coil. Then it may be said that the coil has lengthened the antenna to achieve an electrical length of 2.5 meters. However, the resulting resistive impedance achieved will be quite a bit lower than that of a true ¼ wave (resonant) monopole, often requiring further impedance matching (a transformer) to the desired transmission line. For ever shorter antennas (requiring greater "electrical lengthening") the radiation resistance plummets (approximately according to the square of the antenna length), so that the mismatch due to a net reactance away from the electrical resonance worsens. Or one could as well say that the equivalent resonant circuit of the antenna system has a higher Q factor and thus a reduced bandwidth, which can even become inadequate for the transmitted signal's spectrum. Resistive losses due to the loading coil, relative to the decreased radiation resistance, entail a reduced electrical efficiency, which can be of great concern for a transmitting antenna, but bandwidth is the major factor that sets the size of antennas at 1 MHz and lower frequencies.
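A sketch of sizing the loading coil described above: the inductor is chosen so that its reactance cancels the short monopole's capacitive reactance at the operating frequency. The reactance figure below is an assumed, illustrative value; in practice it would be measured or modelled for the actual antenna.

```python
import math

# Sketch (assumed reactance): base loading coil that cancels a short monopole's
# capacitive feed-point reactance at the operating frequency.
f = 30e6        # operating frequency, Hz (from the example above)
X_c = -300.0    # assumed feed-point reactance of the shortened monopole, ohms
L = abs(X_c) / (2 * math.pi * f)
print(f"loading coil of roughly {L * 1e6:.2f} microhenries")
```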
Arrays and reflectors
The amount of signal received from a distant transmission source is essentially geometric in nature due to the inverse-square law, and this leads to the concept of effective area. This measures the performance of an antenna by comparing the amount of power it generates to the amount of power in the original signal, measured in terms of the signal's power density in watts per square metre. A half-wave dipole has an effective area of about 0.13 λ². If more performance is needed, one cannot simply make the antenna larger. Although this would intercept more energy from the signal, due to the considerations above, it would decrease the output significantly due to it moving away from the resonant length. In roles where higher performance is needed, designers often use multiple elements combined together.
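The 0.13 λ² figure follows from the standard relation between gain and effective area, A_eff = G λ²/(4π), using the half-wave dipole's gain of roughly 1.64 (about 2.15 dBi):

```python
import math

# Effective area of a half-wave dipole from its gain, in units of wavelength squared.
G_dipole = 1.64
A_eff = G_dipole / (4 * math.pi)
print(round(A_eff, 2))   # ~0.13
```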
Returning to the basic concept of current flows in a conductor, consider what happens if a half-wave dipole is not connected to a feed point, but instead shorted out. Electrically this forms a single 1⁄2 wavelength element. But the overall current pattern is the same; the current will be zero at the two ends, and reach a maximum in the center. Thus signals near the design frequency will continue to create a standing wave pattern. Any varying electrical current, like the standing wave in the element, will radiate a signal. In this case, aside from resistive losses in the element, the rebroadcast signal will be significantly similar to the original signal in both magnitude and shape. If this element is placed so its signal reaches the main dipole in-phase, it will reinforce the original signal, and increase the current in the dipole. Elements used in this way are known as passive elements.
A Yagi-Uda array uses passive elements to greatly increase gain. It is built along a support boom that is pointed toward the signal, and thus sees no induced signal and does not contribute to the antenna's operation. The end closer to the source is referred to as the front. Near the rear is a single active element, typically a half-wave dipole or folded dipole. Passive elements are arranged in front (directors) and behind (reflectors) the active element along the boom. The Yagi has the inherent quality that it becomes increasingly directional, and thus has higher gain, as the number of elements increases. However, this also makes it increasingly sensitive to changes in frequency; if the signal frequency changes, not only does the active element receive less energy directly, but all of the passive elements adding to that signal also decrease their output as well and their signals no longer reach the active element in-phase.
It is also possible to use multiple active elements and combine them together with transmission lines to produce a similar system where the phases add up to reinforce the output. The antenna array and very similar reflective array antenna consist of multiple elements, often half-wave dipoles, spaced out on a plane and wired together with transmission lines with specific phase lengths to produce a single in-phase signal at the output. The log-periodic antenna is a more complex design that uses multiple in-line elements similar in appearance to the Yagi-Uda but using transmission lines between the elements to produce the output.
Reflection of the original signal also occurs when it hits an extended conductive surface, in a fashion similar to a mirror. This effect can also be used to increase signal through the use of a reflector, normally placed behind the active element and spaced so the reflected signal reaches the element in-phase. Generally the reflector will remain highly reflective even if it is not solid; gaps less than 1⁄10 of a wavelength generally have little effect on the outcome. For this reason, reflectors often take the form of wire meshes or rows of passive elements, which makes them lighter and less subject to wind-load effects, of particular importance when mounted at higher elevations with respect to the surrounding structures. The parabolic reflector is perhaps the best known example of a reflector-based antenna, which has an effective area far greater than the active element alone.
Although a resonant antenna has a purely resistive feed-point impedance at a particular frequency, many (if not most) applications require using an antenna over a range of frequencies. The frequency range or bandwidth over which an antenna functions well can be very wide (as in a log-periodic antenna) or narrow (as in a small loop antenna); outside this range the antenna impedance becomes a poor match to the transmission line and transmitter (or receiver).
In the case of the Yagi-Uda and other director / reflector arrays, use of the antenna well away from its design frequency affects its radiation pattern, reducing its directive gain, so the usable bandwidth is then limited regardless of impedance matching.
Aside from the problem of the changed directional pattern, the feed impedance of an antenna system can always be accommodated at any frequency by using a suitable matching network. This is most efficiently accomplished using a matching network at the feedpoint on the antenna, in effect changing the resonant frequency of the antenna; however, simply adjusting a remote matching network at the transmitter (or receiver) will leave the transmission line with a poor standing wave ratio. With low-impedance lines, such as the now-popular coaxial cable, the consequently high currents will result in high loss in the cable and low overall efficiency.[a]
Instead, it is often desired to have an antenna whose impedance does not vary so greatly over a certain bandwidth. It turns out that the amount of reactance seen at the terminals of a resonant antenna when the frequency is shifted, say, by 5%, depends on the diameter of the conductor used. A long thin wire used as a half-wave dipole (or quarter wave monopole) will have a reactance significantly greater than the resistive impedance it has at resonance, leading to a poor match and generally unacceptable performance. Making the element using a tube of a diameter perhaps 1⁄50 of its length, however, results in a reactance at this altered frequency which is not so great, and a much less serious mismatch and effect on the antenna's net performance. Thus rather thick tubes are often used for the elements; these also have reduced parasitic resistance (loss).
Rather than just using a thick tube, there are similar techniques used to the same effect such as replacing thin wire elements with cages to simulate a thicker element. This widens the bandwidth of the resonance. On the other hand, it is desired for amateur radio antennas to operate at several bands which are widely separated from each other (but not in between). This can often be accomplished by simply connecting elements resonant at those different frequencies in parallel. Most of the transmitter's power will flow into the resonant element while the others present a high (reactive) impedance, thus drawing little current from the same voltage. Another popular solution uses so-called traps consisting of parallel resonant circuits which are strategically placed in breaks along each antenna element. When used at one particular frequency band the trap presents a very high impedance (parallel resonance) effectively truncating the element at that length, making it a proper resonant antenna. At a lower frequency the trap allows the full length of the element to be employed, albeit with a shifted resonant frequency due to the inclusion of the trap's net reactance at that lower frequency.
The bandwidth characteristics of a resonant antenna element can be characterized according to its Q, just as one uses to characterize the sharpness of an L-C resonant circuit. A common mistake is to assume that there is an advantage in an antenna having a high Q (the so-called "quality factor"). In the context of electronic circuitry a low Q generally signifies greater loss (due to unwanted resistance) in a resonant L-C circuit, and poorer receiver selectivity. However this understanding does not apply to resonant antennas where the resistance involved is the radiation resistance, a desired quantity which removes energy from the resonant element in order to radiate it (the purpose of an antenna, after all!). The Q of an L-C-R circuit is defined as the ratio of the inductor's (or capacitor's) reactance to the resistance, so for a certain radiation resistance (the radiation resistance at resonance does not vary greatly with diameter) the greater reactance off-resonance causes the poorer bandwidth of an antenna employing a very thin conductor.
The Q of such a narrowband antenna can be as high as 15. On the other hand, the reactance at the same off-resonant frequency of one using thick elements is much less, consequently resulting in a Q as low as 5. These two antennas may perform equivalently at the resonant frequency, but the second antenna will perform over a bandwidth 3 times as wide as the antenna consisting of a thin conductor.
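As a rough numerical illustration of this Q-bandwidth trade-off, the following sketch (illustrative only; it assumes the usual L-C resonator relation BW ≈ f0/Q carries over to the antenna, and the 14.2 MHz design frequency is an arbitrary example) compares the half-power bandwidths implied by the two Q values just quoted.

```python
# Rough comparison of usable bandwidth for the two antenna Q values quoted above.
# Assumption: the familiar resonator relation BW ~ f0 / Q approximates the antenna.

def half_power_bandwidth(f0_hz, q):
    """Approximate half-power (3 dB) bandwidth of a resonator with quality factor q."""
    return f0_hz / q

f0 = 14.2e6                      # example design frequency (14.2 MHz), an assumption
for q in (15, 5):                # thin-wire element vs. thick-tube element
    bw = half_power_bandwidth(f0, q)
    print(f"Q = {q:2d}: bandwidth ~ {bw / 1e3:.0f} kHz")
# The Q = 5 antenna covers about 3 times the bandwidth of the Q = 15 antenna,
# matching the factor of three stated above.
```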
Antennas for use over much broader frequency ranges are achieved using further techniques. Adjustment of a matching network can, in principle, allow for any antenna to be matched at any frequency. Thus the small loop antenna built into most AM broadcast (medium wave) receivers has a very narrow bandwidth, but is tuned using a parallel capacitance which is adjusted according to the receiver tuning. On the other hand, log-periodic antennas are not resonant at any frequency but can be built to attain similar characteristics (including feedpoint impedance) over any frequency range. These are therefore commonly used (in the form of directional log-periodic dipole arrays) as television antennas.
Gain is a parameter which measures the degree of directivity of the antenna's radiation pattern. A high-gain antenna will radiate most of its power in a particular direction, while a low-gain antenna will radiate over a wider angle. The antenna gain, or power gain of an antenna is defined as the ratio of the intensity (power per unit surface area) radiated by the antenna in the direction of its maximum output, at an arbitrary distance, divided by the intensity radiated at the same distance by a hypothetical isotropic antenna which radiates equal power in all directions. This dimensionless ratio is usually expressed logarithmically in decibels; these units are called "decibels-isotropic" (dBi).
A second unit used to measure gain is the ratio of the power radiated by the antenna to the power radiated by a half-wave dipole antenna; these units are called "decibels-dipole" (dBd).
Since the gain of a half-wave dipole is 2.15 dBi and the logarithm of a product is additive, the gain in dBi is just 2.15 decibels greater than the gain in dBd.
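Because the two units differ only by the fixed 2.15 dB gain of the half-wave dipole reference, conversion between them is a constant offset. A minimal sketch (the sample gain values are arbitrary):

```python
DIPOLE_GAIN_DBI = 2.15  # gain of a half-wave dipole relative to an isotropic radiator

def dbd_to_dbi(gain_dbd: float) -> float:
    """Convert a gain referenced to a half-wave dipole (dBd) to an isotropic reference (dBi)."""
    return gain_dbd + DIPOLE_GAIN_DBI

def dbi_to_dbd(gain_dbi: float) -> float:
    return gain_dbi - DIPOLE_GAIN_DBI

print(dbd_to_dbi(10.0))  # a "10 dBd" antenna is 12.15 dBi
print(dbi_to_dbd(2.15))  # the half-wave dipole itself is 0 dBd by definition
```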
High-gain antennas have the advantage of longer range and better signal quality, but must be aimed carefully at the other antenna. An example of a high-gain antenna is a parabolic dish such as a satellite television antenna. Low-gain antennas have shorter range, but the orientation of the antenna is relatively unimportant. An example of a low-gain antenna is the whip antenna found on portable radios and cordless phones. Antenna gain should not be confused with amplifier gain, a separate parameter measuring the increase in signal power due to an amplifying device placed at the front-end of the system, such as a low-noise amplifier.
Effective area or aperture
The effective area or effective aperture of a receiving antenna expresses the portion of the power of a passing electromagnetic wave which it delivers to its terminals, expressed in terms of an equivalent area. For instance, if a radio wave passing a given location has a flux of 1 pW / m2 (10−12 watts per square meter) and an antenna has an effective area of 12 m2, then the antenna would deliver 12 pW of RF power to the receiver (30 microvolts rms at 75 ohms). Since the receiving antenna is not equally sensitive to signals received from all directions, the effective area is a function of the direction to the source.
Due to reciprocity (discussed above) the gain of an antenna used for transmitting must be proportional to its effective area when used for receiving. Consider an antenna with no loss, that is, one whose electrical efficiency is 100%. It can be shown that its effective area averaged over all directions must be equal to λ²/4π, the wavelength squared divided by 4π. Gain is defined such that the average gain over all directions for an antenna with 100% electrical efficiency is equal to 1. Therefore, the effective area Aeff in terms of the gain G in a given direction is given by:

    Aeff = (λ² / 4π) G
For an antenna with an efficiency of less than 100%, both the effective area and gain are reduced by that same amount. Therefore, the above relationship between gain and effective area still holds. These are thus two different ways of expressing the same quantity. Aeff is especially convenient when computing the power that would be received by an antenna of a specified gain, as illustrated by the above example.
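The relation Aeff = (λ²/4π)G and the flux example above can be checked numerically. The following sketch is illustrative only; the 75 Ω receiver impedance and 1 pW/m² flux repeat the example above, while the 300 MHz frequency used for the gain conversion is an added assumption.

```python
import math

def effective_area(gain_linear, wavelength_m):
    """Effective aperture of a lossless antenna, given its linear (not dB) gain."""
    return gain_linear * wavelength_m ** 2 / (4 * math.pi)

# Reproduce the example above: a 12 m^2 aperture immersed in a 1 pW/m^2 wave.
flux = 1e-12                       # W/m^2
area = 12.0                        # m^2
p_rx = flux * area                 # received power in watts -> 12 pW
v_rms = math.sqrt(p_rx * 75.0)     # rms voltage into a matched 75 ohm receiver -> 30 uV
print(f"received power = {p_rx * 1e12:.0f} pW, terminal voltage = {v_rms * 1e6:.0f} uV rms")

# Conversely, the gain needed for a 12 m^2 aperture at an assumed 300 MHz (wavelength 1 m):
g = area / effective_area(1.0, 1.0)   # area divided by the aperture of a unity-gain antenna
print(f"required gain = {g:.0f} ({10 * math.log10(g):.1f} dBi)")
```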
The radiation pattern of an antenna is a plot of the relative field strength of the radio waves emitted by the antenna at different angles in the far-field. It is typically represented by a three-dimensional graph, or polar plots of the horizontal and vertical cross sections. The pattern of an ideal isotropic antenna, which radiates equally in all directions, would look like a sphere. Many nondirectional antennas, such as monopoles and dipoles, emit equal power in all horizontal directions, with the power dropping off at higher and lower angles; this is called an omnidirectional pattern and when plotted looks like a torus or donut.
The radiation of many antennas shows a pattern of maxima or "lobes" at various angles, separated by "nulls", angles where the radiation falls to zero. This is because the radio waves emitted by different parts of the antenna typically interfere, causing maxima at angles where the radio waves arrive at distant points in phase, and zero radiation at other angles where the radio waves arrive out of phase. In a directional antenna designed to project radio waves in a particular direction, the lobe in that direction is designed larger than the others and is called the "main lobe". The other lobes usually represent unwanted radiation and are called "sidelobes". The axis through the main lobe is called the "principal axis" or "boresight axis".
The space surrounding an antenna can be divided into three concentric regions: the reactive near-field (also called the inductive near-field), the radiating near-field (Fresnel region) and the far-field (Fraunhofer) regions. These regions are useful to identify the field structure in each, although there are no precise boundaries.
The far-field region is far enough from the antenna to ignore its size and shape: It can be assumed that the electromagnetic wave is purely a radiating plane wave (electric and magnetic fields are in phase and perpendicular to each other and to the direction of propagation). This simplifies the mathematical analysis of the radiated field.
As an electro-magnetic wave travels through the different parts of the antenna system (radio, feed line, antenna, free space) it may encounter differences in impedance (E⁄H, V⁄I, etc.). At each interface, depending on the impedance match, some fraction of the wave's energy will reflect back to the source,[b] forming a standing wave in the feed line. The ratio of maximum power to minimum power in the wave can be measured and is called the standing wave ratio (SWR). A SWR of 1:1 is ideal. A SWR of 1.5:1 is considered to be marginally acceptable in low power applications where power loss is more critical, although an SWR as high as 6:1 may still be usable with the right equipment. Minimizing impedance differences at each interface (impedance matching) will reduce SWR and maximize power transfer through each part of the antenna system.
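In terms of the complex reflection coefficient at an interface, Γ = (Z_load − Z₀)/(Z_load + Z₀), the standing wave ratio is SWR = (1 + |Γ|)/(1 − |Γ|). A minimal sketch of this standard relation (the 50 Ω line and the example load impedances are assumptions for illustration):

```python
def swr(z_load, z0=50.0):
    """Standing wave ratio of a load z_load (possibly complex) on a line of impedance z0."""
    gamma = abs((z_load - z0) / (z_load + z0))  # magnitude of the reflection coefficient
    return (1 + gamma) / (1 - gamma)

print(f"{swr(50 + 0j):.2f}:1")   # perfect match              -> 1.00:1
print(f"{swr(75 + 0j):.2f}:1")   # 75 ohm load on 50 ohm coax -> 1.50:1
print(f"{swr(20 - 30j):.2f}:1")  # reactive mismatch          -> about 3.5:1
```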
Complex impedance of an antenna is related to the electrical length of the antenna at the wavelength in use. The impedance of an antenna can be matched to the feed line and radio by adjusting the impedance of the feed line, using the feed line as an impedance transformer. More commonly, the impedance is adjusted at the load (see below) with an antenna tuner, a balun, a matching transformer, matching networks composed of inductors and capacitors, or matching sections such as the gamma match.
Efficiency of a transmitting antenna is the ratio of power actually radiated (in all directions) to the power absorbed by the antenna terminals. The power supplied to the antenna terminals which is not radiated is converted into heat. This is usually through loss resistance in the antenna's conductors, but can also be due to dielectric or magnetic core losses in antennas (or antenna systems) using such components. Such loss effectively robs power from the transmitter, requiring a stronger transmitter in order to transmit a signal of a given strength.
For instance, if a transmitter delivers 100 W into an antenna having an efficiency of 80%, then the antenna will radiate 80 W as radio waves and produce 20 W of heat. In order to radiate 100 W of power, one would need to use a transmitter capable of supplying 125 W to the antenna. Antenna efficiency is separate from impedance matching, which may also reduce the amount of power radiated using a given transmitter. If an SWR meter reads 150 W of incident power and 50 W of reflected power, that means 100 W have actually been absorbed by the antenna (ignoring transmission line losses). How much of that power has actually been radiated cannot be directly determined through electrical measurements at (or before) the antenna terminals, but would require (for instance) careful measurement of field strength. The loss resistance and efficiency of an antenna can be calculated.
The loss resistance will generally affect the feedpoint impedance, adding to its resistive component. That resistance will consist of the sum of the radiation resistance Rr and the loss resistance Rloss. If a current I is delivered to the terminals of an antenna, then a power of I2Rr will be radiated and a power of I2Rloss will be lost as heat. Therefore, the efficiency of an antenna is equal to Rr / (Rr + Rloss). Only the total resistance Rr + Rloss can be directly measured.
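The 80% example above follows directly from these resistances. A small sketch (the particular resistance values are illustrative, not measured data):

```python
def antenna_efficiency(r_radiation, r_loss):
    """Fraction of the power accepted at the terminals that is actually radiated."""
    return r_radiation / (r_radiation + r_loss)

# Example: 20 ohms of radiation resistance plus 5 ohms of loss resistance
# gives 80% efficiency, matching the 100 W / 80 W / 20 W figures above.
eff = antenna_efficiency(20.0, 5.0)
p_in = 100.0                                   # watts accepted by the antenna
print(f"efficiency                    = {eff:.0%}")
print(f"radiated power                = {p_in * eff:.0f} W")
print(f"power lost as heat            = {p_in * (1 - eff):.0f} W")
print(f"input needed to radiate 100 W = {100.0 / eff:.0f} W")
```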
According to reciprocity, the efficiency of an antenna used as a receiving antenna is identical to the efficiency as defined above. The power that an antenna will deliver to a receiver (with a proper impedance match) is reduced by the same amount. In some receiving applications, very inefficient antennas may have little impact on performance. At low frequencies, for example, atmospheric or man-made noise can mask antenna inefficiency. For example, CCIR Rep. 258-3 indicates man-made noise in a residential setting at 40 MHz is about 28 dB above the thermal noise floor. Consequently, an antenna with a 20 dB loss (due to inefficiency) would have little impact on system noise performance. The loss within the antenna will affect the intended signal and the noise/interference identically, leading to no reduction in signal to noise ratio (SNR).
Antennas which are not a significant fraction of a wavelength in size are inevitably inefficient due to their small radiation resistance. AM broadcast radios include a small loop antenna for reception which has an extremely poor efficiency. This has little effect on the receiver's performance, but simply requires greater amplification by the receiver's electronics. Contrast this tiny component to the massive and very tall towers used at AM broadcast stations for transmitting at the very same frequency, where every percentage point of reduced antenna efficiency entails a substantial cost.
The definition of antenna gain or power gain already includes the effect of the antenna's efficiency. Therefore, if one is trying to radiate a signal toward a receiver using a transmitter of a given power, one need only compare the gain of various antennas rather than considering the efficiency as well. This is likewise true for a receiving antenna at very high (especially microwave) frequencies, where the point is to receive a signal which is strong compared to the receiver's noise temperature. However, in the case of a directional antenna used for receiving signals with the intention of rejecting interference from different directions, one is no longer concerned with the antenna efficiency, as discussed above. In this case, rather than quoting the antenna gain, one would be more concerned with the directive gain, or simply directivity which does not include the effect of antenna (in)efficiency. The directive gain of an antenna can be computed from the published gain divided by the antenna's efficiency. In equation form, gain = directivity × efficiency.
The polarization of an antenna refers to the orientation of the electric field (E-plane) of the radio wave with respect to the Earth's surface and is determined by the physical structure of the antenna and by its orientation. This is distinct from the antenna's directionality. Thus, a simple straight wire antenna will have one polarization when mounted vertically, and a different polarization when mounted horizontally. As a transverse wave, the magnetic field of a radio wave is at right angles to that of the electric field, but by convention, talk of an antenna's "polarization" is understood to refer to the direction of the electric field.
Reflections generally affect polarization. For radio waves, one important reflector is the ionosphere which can change the wave's polarization. Thus for signals received following reflection by the ionosphere (a skywave), a consistent polarization cannot be expected. For line-of-sight communications or ground wave propagation, horizontally or vertically polarized transmissions generally remain in about the same polarization state at the receiving location. Matching the receiving antenna's polarization to that of the transmitter can make a very substantial difference in received signal strength.
Polarization is predictable from an antenna's geometry, although in some cases it is not at all obvious (such as for the quad antenna). An antenna's linear polarization is generally along the direction (as viewed from the receiving location) of the antenna's currents when such a direction can be defined. For instance, a vertical whip antenna or Wi-Fi antenna vertically oriented will transmit and receive in the vertical polarization. Antennas with horizontal elements, such as most rooftop TV antennas in the United States, are horizontally polarized (broadcast TV in the U.S. usually uses horizontal polarization). Even when the antenna system has a vertical orientation, such as an array of horizontal dipole antennas, the polarization is in the horizontal direction corresponding to the current flow. The polarization of a commercial antenna is an essential specification.
Polarization is the sum of the E-plane orientations over time projected onto an imaginary plane perpendicular to the direction of motion of the radio wave. In the most general case, polarization is elliptical, meaning that the polarization of the radio waves varies over time. Two special cases are linear polarization (the ellipse collapses into a line) as discussed above, and circular polarization (in which the two axes of the ellipse are equal). In linear polarization the electric field of the radio wave oscillates back and forth along one direction; this can be affected by the mounting of the antenna but usually the desired direction is either horizontal or vertical polarization. In circular polarization, the electric field (and magnetic field) of the radio wave rotates at the radio frequency circularly around the axis of propagation. Circular or elliptically polarized radio waves are designated as right-handed or left-handed using the "thumb in the direction of the propagation" rule. For circular polarization, optical researchers use the opposite right hand rule from the one used by radio engineers.
For optimum reception, the receiving antenna should match the polarization of the transmitted wave. Intermediate matchings will lose some signal strength, but not as much as a complete mismatch. A circularly polarized antenna can match either vertical or horizontal linear polarization equally well. Transmission from a circularly polarized antenna received by a linearly polarized antenna (or vice versa) entails a 3 dB reduction in signal-to-noise ratio, as the received power has thereby been cut in half.
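For two linearly polarized antennas misaligned by an angle ψ, the received power is reduced by the factor cos²ψ, and a circular-to-linear combination loses a fixed 3 dB, as noted above. A minimal sketch of these standard relations:

```python
import math

def linear_mismatch_loss_db(angle_deg):
    """Polarization loss between two linearly polarized antennas misaligned by angle_deg."""
    plf = math.cos(math.radians(angle_deg)) ** 2   # polarization loss factor
    if plf < 1e-12:                                # numerically zero: fully cross-polarized
        return float("inf")
    return -10 * math.log10(plf)

print(f"{linear_mismatch_loss_db(0):.1f} dB")    # aligned: no loss
print(f"{linear_mismatch_loss_db(45):.1f} dB")   # 45 degrees off: 3 dB
print(f"{linear_mismatch_loss_db(90):.1f} dB")   # cross-polarized: complete loss
print(f"{-10 * math.log10(0.5):.1f} dB")         # circular antenna receiving a linear wave: 3 dB
```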
Maximum power transfer requires matching the impedance of an antenna system (as seen looking into the transmission line) to the complex conjugate of the impedance of the receiver or transmitter. In the case of a transmitter, however, the desired matching impedance might not correspond to the dynamic output impedance of the transmitter as analyzed as a source impedance but rather the design value (typically 50 ohms) required for efficient and safe operation of the transmitting circuitry. The intended impedance is normally resistive but a transmitter (and some receivers) may have additional adjustments to cancel a certain amount of reactance in order to "tweak" the match. When a transmission line is used in between the antenna and the transmitter (or receiver) one generally would like an antenna system whose impedance is resistive and near the characteristic impedance of that transmission line in order to minimize the standing wave ratio (SWR) and the increase in transmission line losses it entails, in addition to supplying a good match at the transmitter or receiver itself.
Antenna tuning generally refers to cancellation of any reactance seen at the antenna terminals, leaving only a resistive impedance which might or might not be exactly the desired impedance (that of the transmission line). Although an antenna may be designed to have a purely resistive feedpoint impedance (such as a dipole 97% of a half wavelength long) this might not be exactly true at the frequency that it is eventually used at. In some cases the physical length of the antenna can be "trimmed" to obtain a pure resistance. On the other hand, the addition of a series inductance or parallel capacitance can be used to cancel a residual capacitative or inductive reactance, respectively.
In some cases this is done in a more extreme manner, not simply to cancel a small amount of residual reactance, but to resonate an antenna whose resonance frequency is quite different from the intended frequency of operation. For instance, a "whip antenna" can be made significantly shorter than 1/4 wavelength long, for practical reasons, and then resonated using a so-called loading coil. This physically large inductor at the base of the antenna has an inductive reactance which is the opposite of the capacitative reactance that such a vertical antenna has at the desired operating frequency. The result is a pure resistance seen at the feedpoint of the loading coil; that resistance is somewhat lower than would be desired to match commercial coax.
So an additional problem beyond canceling the unwanted reactance is of matching the remaining resistive impedance to the characteristic impedance of the transmission line. In principle this can be done with a transformer, however the turns ratio of a transformer is not adjustable. A general matching network with at least two adjustments can be made to correct both components of impedance. Matching networks using discrete inductors and capacitors will have losses associated with those components, and will have power restrictions when used for transmitting. Avoiding these difficulties, commercial antennas are generally designed with fixed matching elements or feeding strategies to get an approximate match to standard coax, such as 50 or 75 ohms. Antennas based on the dipole (rather than vertical antennas) may include a balun between the transmission line and antenna element, which may be integrated into any such matching network.
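For purely resistive impedances, the two-adjustment network mentioned above can be the classic two-element "L" network, designed from the textbook relations Q = sqrt(R_high/R_low − 1), with a series reactance Q·R_low on the low-resistance side and a shunt reactance R_high/Q across the high-resistance side (one element inductive, the other capacitive). The sketch below applies these relations; the 20 Ω feedpoint resistance and 7 MHz operating frequency are illustrative assumptions, not values from the text.

```python
import math

def l_network(r_low, r_high):
    """Reactance magnitudes (ohms) of an L-network matching two resistive impedances.

    The series element goes in line with the low-resistance side and the shunt
    element across the high-resistance side; one must be an inductor and the
    other a capacitor (either assignment works, giving a low-pass or high-pass form).
    """
    q = math.sqrt(r_high / r_low - 1)
    return q * r_low, r_high / q          # (series reactance, shunt reactance)

f = 7.0e6                                  # assumed operating frequency, 7 MHz
xs, xp = l_network(r_low=20.0, r_high=50.0)
l_uH = xs / (2 * math.pi * f) * 1e6        # realize the series element as an inductor
c_pF = 1 / (2 * math.pi * f * xp) * 1e12   # and the shunt element as a capacitor
print(f"series X = {xs:.1f} ohm ({l_uH:.2f} uH), shunt X = {xp:.1f} ohm ({c_pF:.0f} pF)")
```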
Another extreme case of impedance matching occurs when using a small loop antenna (usually, but not always, for receiving) at a relatively low frequency where it appears almost as a pure inductor. Resonating such an inductor with a capacitor at the frequency of operation not only cancels the reactance but greatly magnifies the very small radiation resistance of such a loop. This is implemented in most AM broadcast receivers, with a small ferrite loop antenna resonated by a capacitor which is varied along with the receiver tuning in order to maintain resonance over the AM broadcast band.
Antennas can be classified in various ways. The list below groups together antennas under common operating principles, following the way antennas are classified in many engineering textbooks.
Isotropic: An isotropic antenna (isotropic radiator) is a hypothetical antenna that radiates equal signal power in all directions. It is a mathematical model that is used as the base of comparison to calculate the gain of real antennas. No real antenna can have an isotropic radiation pattern. However approximately isotropic antennas, constructed with multiple elements, are used in antenna testing.
The first four groups below are usually resonant antennas; when driven at their resonant frequency[c] their elements act as resonators. Waves of current and voltage bounce back and forth between the ends, creating standing waves along the elements.
The dipole is the prototypical antenna on which a large class of antennas are based. A basic dipole antenna consists of two conductors (usually metal rods or wires) arranged symmetrically, with one side of the balanced feedline from the transmitter or receiver attached to each. The most common type, the half-wave dipole, consists of two resonant elements just under a quarter wavelength long. This antenna radiates maximally in directions perpendicular to the antenna's axis, giving it a small directive gain of 2.15 dBi. Although half-wave dipoles are used alone as omnidirectional antennas, they are also a building block of many other more complicated directional antennas.
- Turnstile – Two dipole antennas mounted at right angles, fed with a phase difference of 90°. This antenna is unusual in that it radiates in all directions (no nulls in the radiation pattern), with horizontal polarization in directions coplanar with the elements, circular polarization normal to that plane, and elliptical polarization in other directions. Used for receiving signals from satellites, as circular polarization is transmitted by many satellites.
- Corner reflector – A directive antenna with moderate gain of about 8 dBi often used at UHF frequencies. Consists of a dipole mounted in front of two reflective metal screens joined at an angle, usually 90°. Used as a rooftop UHF television antenna and for point-to-point data links.
- Patch (microstrip) – A type of antenna with elements consisting of metal sheets mounted over a ground plane. Similar to a dipole, with a gain of 6–9 dBi. Integrated into surfaces such as aircraft bodies. Their easy fabrication using PCB techniques has made them popular in modern wireless devices. Often combined into arrays.
A monopole antenna consists of a single conductor such as a metal rod, usually mounted over the ground or an artificial conducting surface (a so-called ground plane). One side of the feedline from the receiver or transmitter is connected to the conductor, and the other side to ground or the artificial ground plane. The radio waves reflected from the ground plane seem to come from an image antenna below the ground, with the monopole and its image forming a dipole, so the monopole antenna has a radiation pattern identical to the top half of the pattern of a similar dipole antenna. Since all of the equivalent dipole's radiation is concentrated in a half-space, the antenna has twice the gain (a 3 dB increase) of a similar dipole, not considering losses in the ground plane.
The most common form is the quarter-wave monopole which is one-quarter of a wavelength long and has a gain of 5.12 dBi when mounted over a ground plane. Monopoles have an omnidirectional radiation pattern, so they are used for broad coverage of an area, and have vertical polarization. The ground waves used for broadcasting at low frequencies must be vertically polarized, so large vertical monopole antennas are used for broadcasting in the MF, LF, and VLF bands. Small monopoles are used as nondirectional antennas on portable radios in the HF, VHF, and UHF bands.
- Whip – Type of antenna used on mobile and portable radios in the VHF and UHF bands such as boom boxes, consists of a flexible rod, often made of telescoping segments.
- Rubber Ducky – Most common antenna used on portable two way radios and cordless phones due to its compactness, consists of an electrically short wire helix. The helix adds inductance to cancel the capacitive reactance of the short radiator, making it resonant. Very low gain.
- Ground plane – a whip antenna with several rods extending horizontally from base of whip attached to the ground side of the feedline. Since whips are mounted above ground, the horizontal rods form an artificial ground plane under the antenna to increase its gain. Used as base station antennas for land mobile radio systems such as police, ambulance and taxi dispatchers.
- Mast radiator – A radio tower in which the tower structure itself serves as the antenna. Common form of transmitting antenna for AM radio stations and other MF and LF transmitters. At its base the tower is usually, but not necessarily, mounted on a ceramic insulator to isolate it from the ground.
- T and inverted L – Consist of a long horizontal wire suspended between two towers with insulators, with a vertical wire hanging down from it, attached to a feedline to the receiver or transmitter. Used on LF and VLF bands. The vertical wire serves as the radiator. Since at these frequencies the vertical wire is electrically short, much shorter than a quarter wavelength, the horizontal wire(s) serve as a capacitive "hat" to increase the current in the vertical radiator, increasing the gain. Very narrow bandwidth, requires loading coil to tune out any remaining capacitive reactance. Requires low resistance ground.
- Inverted F – Combines the advantages of the compactness of inverted-L antenna, and the good matching of the F-type antenna. The antenna is grounded at the base and fed at some intermediate point. The position of the feed point determines the antenna impedance. Thus, matching can be achieved without the need for an extraneous matching network.
- Umbrella – Very large wire transmitting antennas used on VLF bands. Consists of a central mast radiator tower attached at the top to multiple wires extending out radially from the mast to ground, like a tent or umbrella, insulated at the ends. Extremely narrow bandwidth, requires large loading coil and low resistance counterpoise ground. Used for long range military communications.
Array antennas consist of multiple antennas working as a single antenna. Typically they consist of arrays of identical driven elements, usually dipoles fed in phase, giving increased gain over that of a single dipole.
- Collinear - Consists of a number of dipoles in a vertical line. It is a high gain omnidirectional antenna, meaning more of the power is radiated in horizontal directions and less is wasted into the sky or ground. Gain of 8–10 dBi. Used as base station antennas for land mobile radio systems such as police, fire, ambulance, and taxi dispatchers, and sector antennas for cellular base stations.
- Yagi-Uda – One of the most common directional antennas at HF, VHF, and UHF frequencies. Consists of multiple half wave dipole elements in a line, with a single driven element and multiple parasitic elements which serve to create a uni-directional or beam antenna. These typically have gains between 10–20 dBi depending on the number of elements used, and are very narrowband (with a usable bandwidth of only a few percent) though there are derivative designs which relax this limitation. Used for rooftop television antennas, point-to-point communication links, and long distance shortwave communication using skywave ("skip") reflection from the ionosphere.
- Log-periodic dipole array – Often confused with the Yagi-Uda, this consists of many dipole elements along a boom with gradually increasing lengths, all connected to the transmission line with alternating polarity. It is a directional antenna with a wide bandwidth. This makes it ideal for use as a rooftop television antenna, although its gain is much less than a Yagi of comparable size.
- Reflective array - multiple dipoles in a two-dimensional array mounted in front of a flat reflecting screen. Used for radar and UHF television transmitting and receiving antennas.
- Phased array - A high gain antenna used at UHF and microwave frequencies which is electronically steerable. It consists of multiple dipoles in a two-dimensional array, each fed through an electronic phase shifter, with the phase shifters controlled by a computer control system. The beam can be instantly pointed in any direction over a wide angle in front of the antenna. Used for military radar and jamming systems.
- Curtain array - Large directional wire transmitting antenna used at HF by shortwave broadcasting stations. It consists of a vertical rectangular array of wire dipoles suspended in front of a flat reflector screen consisting of a vertical "curtain" of parallel wires, all supported between two metal towers. It radiates a horizontal beam of radio waves into the sky above the horizon, which is reflected by the ionosphere to Earth beyond the horizon.
- Batwing or superturnstile - A specialized antenna used in television broadcasting consisting of perpendicular pairs of dipoles with radiators resembling bat wings. Multiple batwing antennas are stacked vertically on a mast to make VHF television broadcast antennas. Omnidirectional radiation pattern with high gain in horizontal directions. The batwing shape gives them wide bandwidth.
- Microstrip - an array of patch antennas on a substrate fed by microstrip feedlines. Microwave antenna that can achieve large gains in compact space. Ease of fabrication by PCB techniques has made them popular in modern wireless devices. Beamwidth and polarization can be actively reconfigured.
Loop antennas consist of a loop (or coil) of wire. Loop antennas interact directly with the magnetic field of the radio wave, rather than its electric field, making them relatively insensitive to electrical noise within about a quarter-wavelength of the antenna.
There are essentially two broad categories of loop antennas: large loops (or full-wave loops) and small loops. Loops with circumference of a full wavelength, or an integer multiple of a full-wavelength, are naturally resonant and act somewhat similarly to the full-wave or multi-wave dipole. When it is necessary to distinguish them from small loops, they are called “full-wave” loops.[d]
Full-wave loops have the highest radiation resistance, and hence the highest efficiency, of all antennas: their radiation resistances are several hundreds of Ohms, whereas those of dipoles and monopoles are tens of Ohms, and those of small loops are a few Ohms, or even fractions of an Ohm.
Loops that are a half-wavelength in circumference – with a small gap cut in the loop – are called “halo antennas” and are intermediate in form and function between small and large loops.
If the loop circumference is smaller than a half-wavelength, the loop must be modified in some way to make it resonant (if that is necessary). Small loops are called “magnetic loops”, and if modified for resonance they are also called “tuned loops”. Their directionality and radiation is drastically different from full-wave loops. The great disadvantage of small loops, or any small antenna, is a very small radiation resistance – typically much smaller than the loss resistance, making small loops very inefficient for transmitting; however, small loops are very effective receiving antennas, especially at low frequencies, where all feasible antennas are “small” compared to a wavelength. Small loops are also widely used as compact direction finding antennas.
- Quad – Although “quad” can refer to a single quadrilateral-shaped loop, the term usually refers to two or more such loops stacked side by side; at first glance, quads resemble a box kite frame. Quad antennas are highly directional, made of multiple full-size loops lined up in a row with their planes in parallel, like sheets hanging side-by-side on multiple parallel clothes lines. One loop in the quad is connected to the feedline and functions as the driver for the antenna and is the main signal radiator.
- Quad antennas are the exact analogue, for loops, to a Yagi-Uda antenna made out of dipoles; in fact, a “Yagi” can be built using a mixture of loops and dipoles. Similar to Yagis, quads are used as directional antennas on the HF bands for shortwave communication, and are sometimes preferred for longer wavelengths because (if square) they are half as wide as a Yagi.
In Quad antennas, the loops which are not connected to the feedline act as ‘helpers’ for the driven loop that is connected to the feedline; they are spaced and tuned so that they absorb and re-radiate signal power from the main loop beneficially for the directivity of the antenna – exactly like mirrors and lenses in a flashlight. The single ‘helper’ loop behind the driven loop intercepts and reflects back forward the rearward-traveling signal from the driven loop, and is called the “reflector”. The loops in front of the driven loop focus the forward-traveling signal into a narrower beam, and are called “directors”. Because the ‘helper’ loops draw power from the field created by the driven element they are called in general parasitic elements.
- Ferrite (loopstick) – Loopstick antennas are omnidirectional around their axes, and are premier examples of small loop antennas. They are the magnetic analogue of the short dipole antenna. These are used as the receiving antenna in most consumer AM radios operating in the medium wave broadcast band (and lower frequencies).[e] Wire is coiled around a ferrite core which greatly increases the coil's inductance and its effective signal-capturing area. The radiation pattern is maximum in directions perpendicular to the ferrite rod. The nulls of ferrite core antennas are bi-directional and much sharper than the maxima. This often makes the direction of a null more useful for locating a signal source than the direction of the strongest signal. The null direction of small loops can also be exploited to reject unwanted signals from an interfering station or noise source.
Aperture antennas are the main type of directional antennas used at microwave frequencies and above. They consist of a small dipole or loop feed antenna inside a three-dimensional guiding structure large compared to a wavelength, with an aperture to emit the radio waves. Since the antenna structure itself is nonresonant they can be used over a wide frequency range by replacing or tuning the feed antenna.
- Parabolic - The most widely used high gain antenna at microwave frequencies and above. Consists of a dish-shaped metal parabolic reflector with a feed antenna at the focus. It can have some of the highest gains of any antenna type, up to 60 dBi, but the dish must be large compared to a wavelength. Used for radar antennas, point-to-point data links, satellite communication, and radio telescopes
- Horn - a simple antenna with moderate gain of 15 to 25 dBi that consists of a flaring metal horn attached to a waveguide. Used for applications such as radar guns, radiometers and as feed antennas for parabolic dishes.
- Slot - consists of a waveguide with one or more slots cut in it to emit the microwaves. Linear slot antennas emit narrow fan-shaped beams. Used as UHF broadcast antennas and marine radar antennas.
- Lens - a lens antenna consists of a layer of dielectric material, a metal screen, or a multiple-waveguide structure of varying thickness placed in front of a feed antenna; it acts as a lens which refracts the radio waves, focusing them on the feed antenna.
- Dielectric resonator - consists of a small ball- or puck-shaped piece of dielectric material excited by an aperture in a waveguide. Used at millimeter wave frequencies.
Unlike the above antennas, traveling wave antennas are nonresonant so they have inherently broad bandwidth. They are typically wire antennas multiple wavelengths long, through which the voltage and current waves travel in one direction, instead of bouncing back and forth to form standing waves as in resonant antennas. They have linear polarization (except for the helical antenna). Unidirectional traveling wave antennas are terminated by a resistor at one end equal to the antenna's characteristic resistance, to absorb the waves from one direction. This makes them inefficient as transmitting antennas.
- Beverage - Simplest unidirectional traveling wave antenna. Consists of a straight wire one to several wavelengths long, suspended near the ground, connected to the receiver at one end and terminated by a resistor equal to its characteristic impedance, 400–800 Ω at the other end. Its radiation pattern has a main lobe at a shallow angle in the sky off the terminated end. It is used for reception of skywaves reflected off the ionosphere in long distance "skip" shortwave communication.
- Rhombic - Consists of four equal wire sections shaped like a rhombus. It is fed by a balanced feedline at one of the acute corners, and the two sides are connected to a resistor equal to the characteristic resistance of the antenna at the other. It has a main lobe in a horizontal direction off the terminated end of the rhombus. Used for skywave communication on shortwave bands.
- Leaky wave - Microwave antennas consisting of a waveguide or coaxial cable with a slot or apertures cut in it so it radiates continuously along its length.
Other antenna types
The following two antenna types do not quite properly fit in any of the sections above, or fit into several, depending on the wavelength to antenna-length ratio.
- Random wire – This describes the typical antenna used to receive shortwave radio, consisting of a random length of wire either strung outdoors between supports or indoors in a zigzag pattern along walls, connected to the receiver at one end. The antenna typically will have complex radiation patterns with several lobes at angles to the wire.
- Random wire antennas typically are categorized as folded monopole antennas, when their lengths are two wavelengths or less, but become similar to Beverage antennas when they are several wavelengths long.
- Helical (axial mode) – Consists of a wire in the shape of a helix mounted above a reflecting screen. It radiates circularly polarized waves in a beam off the end, with a typical gain of 15 dBi. It is used at VHF and UHF frequencies. Often used for satellite communication, which uses circular polarization because it is insensitive to the relative rotation on the beam axis.
- When a helical antenna has about 10 turns or more, each turn a full wavelength, it is a form of traveling wave antenna. If it has only one or a few turns and their total circumference is one or a few wavelengths, then it is some variety of large loop antenna. When the total circumference of all the turns taken together is less than one wavelength, the helix is a loaded monopole, called a "rubber ducky antenna".
Effect of ground
The radiation pattern and even the driving point impedance of an antenna can be influenced by the dielectric constant and especially conductivity of nearby objects. For a terrestrial antenna, the ground is usually one such object of importance. The antenna's height above the ground, as well as the electrical properties (permittivity and conductivity) of the ground, can then be important. Also, in the particular case of a monopole antenna, the ground (or an artificial ground plane) serves as the return connection for the antenna current thus having an additional effect, particularly on the impedance seen by the feed line.
When an electromagnetic wave strikes a plane surface such as the ground, part of the wave is transmitted into the ground and part of it is reflected, according to the Fresnel coefficients. If the ground is a very good conductor then almost all of the wave is reflected (180° out of phase), whereas a ground modeled as a (lossy) dielectric can absorb a large amount of the wave's power. The power remaining in the reflected wave, and the phase shift upon reflection, strongly depend on the wave's angle of incidence and polarization. The dielectric constant and conductivity (or simply the complex dielectric constant) is dependent on the soil type and is a function of frequency.
From very low frequencies up to high frequencies (<30 MHz), the ground behaves as a lossy dielectric, thus the ground is characterized both by a conductivity and permittivity (dielectric constant) which can be measured for a given soil (but is influenced by fluctuating moisture levels) or can be estimated from certain maps. At the lower frequencies the ground acts mainly as a good conductor, which AM medium wave broadcast (0.5–1.6 MHz) antennas depend on.
At frequencies between 3 and 30 MHz, a large portion of the energy from a horizontally polarized antenna reflects off the ground, with almost total reflection at the grazing angles important for ground wave propagation. That reflected wave, with its phase reversed, can either cancel or reinforce the direct wave, depending on the antenna height in wavelengths and elevation angle (for a sky wave).
On the other hand, vertically polarized radiation is not well reflected by the ground except at grazing incidence or over very highly conducting surfaces such as sea water. However, the grazing angle reflection important for ground wave propagation, using vertical polarization, is in phase with the direct wave, providing a boost of up to 6 dB, as is detailed below.
At VHF and above (>30 MHz) the ground becomes a poorer reflector. However it remains a good reflector especially for horizontal polarization and grazing angles of incidence. That is important as these higher frequencies usually depend on horizontal line-of-sight propagation (except for satellite communications), the ground then behaving almost as a mirror.
The net quality of a ground reflection depends on the topography of the surface. When the irregularities of the surface are much smaller than the wavelength, the dominant regime is that of specular reflection, and the receiver sees both the real antenna and an image of the antenna under the ground due to reflection. But if the ground has irregularities not small compared to the wavelength, reflections will not be coherent but shifted by random phases. With shorter wavelengths (higher frequencies), this is generally the case.
Whenever both the receiving or transmitting antenna are placed at significant heights above the ground (relative to the wavelength), waves specularly reflected by the ground will travel a longer distance than direct waves, inducing a phase shift which can sometimes be significant. When a sky wave is launched by such an antenna, that phase shift is always significant unless the antenna is very close to the ground (compared to the wavelength).
The phase of reflection of electromagnetic waves depends on the polarization of the incident wave. Given the larger refractive index of the ground (typically n = 2) compared to air (n = 1), the phase of horizontally polarized radiation is reversed upon reflection (a phase shift of π radians or 180°). On the other hand, the vertical component of the wave's electric field is reflected at grazing angles of incidence approximately in phase. These phase shifts apply as well to a ground modelled as a good electrical conductor.
This means that a receiving antenna "sees" an image of the antenna but with reversed currents. That current is in the same absolute direction as the actual antenna if the antenna is vertically oriented (and thus vertically polarized) but opposite the actual antenna if the antenna current is horizontal.
The actual antenna which is transmitting the original wave then also may receive a strong signal from its own image from the ground. This will induce an additional current in the antenna element, changing the current at the feedpoint for a given feedpoint voltage. Thus the antenna's impedance, given by the ratio of feedpoint voltage to current, is altered due to the antenna's proximity to the ground. This can be quite a significant effect when the antenna is within a wavelength or two of the ground. But as the antenna height is increased, the reduced power of the reflected wave (due to the inverse square law) allows the antenna to approach its asymptotic feedpoint impedance given by theory. At lower heights, the effect on the antenna's impedance is very sensitive to the exact distance from the ground, as this affects the phase of the reflected wave relative to the currents in the antenna. Changing the antenna's height by a quarter wavelength, then changes the phase of the reflection by 180°, with a completely different effect on the antenna's impedance.
The ground reflection has an important effect on the net far field radiation pattern in the vertical plane, that is, as a function of elevation angle, which is thus different between a vertically and horizontally polarized antenna. Consider an antenna at a height h above the ground, transmitting a wave considered at the elevation angle θ. For a vertically polarized transmission the magnitude of the electric field of the electromagnetic wave produced by the direct ray plus the reflected ray is:

    |EV| = 2 |E0| |cos( (2π h / λ) sin θ )|

Thus the power received can be as high as 4 times that due to the direct wave alone (such as when θ = 0), following the square of the cosine. The sign inversion for the reflection of horizontally polarized emission instead results in:

    |EH| = 2 |E0| |sin( (2π h / λ) sin θ )|

where:
- E0 is the electrical field that would be received by the direct wave if there were no ground.
- θ is the elevation angle of the wave being considered.
- λ is the wavelength.
- h is the height of the antenna (half the distance between the antenna and its image).
For horizontal propagation between transmitting and receiving antennas situated near the ground reasonably far from each other, the distances traveled by the direct and reflected rays are nearly the same. There is almost no relative phase shift. If the emission is polarized vertically, the two fields (direct and reflected) add and there is maximum of received signal. If the signal is polarized horizontally, the two signals subtract and the received signal is largely cancelled. The vertical plane radiation patterns are shown in the image at right. With vertical polarization there is always a maximum for θ=0, horizontal propagation (left pattern). For horizontal polarization, there is cancellation at that angle. Note that the above formulae and these plots assume the ground as a perfect conductor. These plots of the radiation pattern correspond to a distance between the antenna and its image of 2.5λ. As the antenna height is increased, the number of lobes increases as well.
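The two pattern factors above are simple to evaluate. The following sketch computes the field magnitude relative to the direct wave for both polarizations, using the same 2.5 λ antenna-to-image spacing as the patterns described above and assuming, as in the text, a perfectly conducting ground.

```python
import math

def ground_reflection_factor(theta_deg, h_over_lambda, vertical=True):
    """Field magnitude, relative to the direct wave alone, for an antenna a height
    h above a perfectly conducting ground, observed at elevation angle theta."""
    phase = 2 * math.pi * h_over_lambda * math.sin(math.radians(theta_deg))
    return 2 * abs(math.cos(phase)) if vertical else 2 * abs(math.sin(phase))

h = 1.25   # height in wavelengths, i.e. an antenna-to-image distance of 2.5 lambda
for theta in (0, 5, 10, 20, 30):
    v = ground_reflection_factor(theta, h, vertical=True)
    hz = ground_reflection_factor(theta, h, vertical=False)
    print(f"theta = {theta:2d} deg: vertical = {v:.2f}, horizontal = {hz:.2f}")
# At theta = 0 the vertical factor is 2 (four times the power of the direct wave),
# while the horizontal factor is 0 (complete cancellation), as discussed above.
```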
The difference in the above factors for the case of θ=0 is the reason that most broadcasting (transmissions intended for the public) uses vertical polarization. For receivers near the ground, horizontally polarized transmissions suffer cancellation. For best reception the receiving antennas for these signals are likewise vertically polarized. In some applications where the receiving antenna must work in any position, as in mobile phones, the base station antennas use mixed polarization, such as linear polarization at an angle (with both vertical and horizontal components) or circular polarization.
On the other hand, analog television transmissions are usually horizontally polarized, because in urban areas buildings can reflect the electromagnetic waves and create ghost images due to multipath propagation. Using horizontal polarization, ghosting is reduced because the amount of reflection off the side of a building is generally less for horizontal polarization than for vertical. Vertically polarized analog television has been used in some rural areas. In digital terrestrial television such reflections are less problematic, due to the robustness of binary transmissions and error correction.
Mutual impedance and interaction between antennas
Current circulating in one antenna generally induces a voltage across the feedpoint of nearby antennas or antenna elements. The mathematics presented below are useful in analyzing the electrical behaviour of antenna arrays, where the properties of the individual array elements (such as half wave dipoles) are already known. If those elements were widely separated and driven in a certain amplitude and phase, then each would act independently as that element is known to. However, because of the mutual interaction between their electric and magnetic fields due to proximity, the currents in each element are not simply a function of the applied voltage (according to its driving point impedance), but depend on the currents in the other nearby elements. This now is a near field phenomenon which could not be properly accounted for using the Friis transmission formula for instance. This near field effect creates a different set of currents at the antenna terminals resulting in distortions in the far field radiation patterns; however, the distortions may be removed using a simple set of network equations.
The elements' feedpoint currents and voltages can be related to each other using the concept of mutual impedance between every pair of antennas, just as the mutual impedance describes the voltage induced in one inductor by a current through a nearby coil coupled to it through a mutual inductance M. The mutual impedance Zji between two antennas is defined as:

    Zji = Vj / Ii

where Ii is the current flowing in antenna i and Vj is the voltage induced at the open-circuited feedpoint of antenna j due to Ii when all other currents are zero. The mutual impedances can be viewed as the elements of a symmetric square impedance matrix Z. Note that the diagonal elements, Zjj, are simply the driving point impedances of each element.
Using this definition, the voltages present at the feedpoints of a set of coupled antennas can be expressed as the multiplication of the impedance matrix times the vector of currents. Written out as discrete equations, that means:

    Vj = Σi Zji Ii     (one equation for each antenna j)

where:
- Vj is the voltage at the terminals of antenna j
- Ii is the current flowing into the terminals of antenna i
- Zjj is the driving point impedance of antenna j
- Zji is the mutual impedance between antennas j and i.
As is the case for mutual inductances,

    Zji = Zij

This is a consequence of Lorentz reciprocity. For an antenna element not connected to anything (open circuited) one can write Ij = 0. But for an element which is short circuited, a current is generated across that short but no voltage is allowed, so the corresponding Vj = 0. This is the case, for instance, with the so-called parasitic elements of a Yagi-Uda antenna where the solid rod can be viewed as a dipole antenna shorted across its feedpoint. Parasitic elements are unpowered elements that absorb and reradiate RF energy according to the induced current calculated using such a system of equations.
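Given the mutual impedance matrix, the coupled equations above form a small linear system: known feedpoint voltages (with zero entries for shorted parasitic elements) determine the element currents. The sketch below solves such a system with NumPy; the impedance values are placeholders for illustration, not computed from any real element geometry.

```python
import numpy as np

# Hypothetical two-element array: a driven dipole (element 0) and a shorted
# parasitic element (element 1). All impedance values are placeholders.
Z = np.array([[73 + 43j, 40 - 30j],
              [40 - 30j, 60 + 20j]])    # ohms; symmetric, since Zij = Zji

V = np.array([1.0 + 0j, 0.0 + 0j])      # 1 V at the driven feedpoint, 0 V across the short

I = np.linalg.solve(Z, V)               # element currents from V = Z I
print("element currents (A):", I)

# Driving point impedance actually presented by the fed element, including coupling:
print("feedpoint impedance (ohm):", V[0] / I[0])
```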
With a particular geometry, it is possible for the mutual impedance between nearby antennas to be zero. This is the case, for instance, between the crossed dipoles used in the turnstile antenna.
- Old-fashioned, high impedance lines, such as twin-lead or ribbon cable, windowed line, or ladder line, will have very little loss, since signal power is carried as high voltages rather than high currents.
- Impedance is caused by the same physics as refractive index in optics, although impedance effects are typically one-dimensional, whereas the effects of refractive index are three-dimensional.
- The practical consequence of “resonance” – however it might be achieved – is that the voltage and current are “in phase” (rise and fall together) with no delay between the two. At its feedpoint a “resonant” antenna appears as if it were a simple resistor. Whether the number of Ohms of resistance at the feedpoint is a good match for the ratio of voltage to current delivered by the feedline is a different, but related issue.
- The popular “quad” antenna design is necessarily a full-wave loop, so no other distinction is needed.
- A notable exception are car radios, which require an antenna outside the metal car chassis for AM and longer-wave reception.
Galaxies Still Misbehaving
The latest attempts to "weigh" galaxies are still coming up a bit short. The spiral galaxy, NGC 3521 has a mass equal to the mass of 80 billion Suns. The spiral galaxy NGC 972 has a mass equal to 12 billion Suns. However, when we compare the amount of light coming from those galaxies, it doesn't match the amount of light we would expect from that much matter. It's a puzzle.
The amount of starlight coming from the two galaxies is based on careful measurement of each galaxy's total luminosity (brightness), as recorded on photographic plates. It also takes into account the way the stars in each galaxy spread out from the center out to their edges. This distribution varies from galaxy to galaxy.
The mass measurement is based on the same idea that long ago allowed astronomers to calculate the mass of the Sun. Basically, if you have a relatively small mass object orbiting a very large mass object at a known speed, you can work out the large object's mass mathematically. The same physics applies to stars orbiting a galaxy's center of gravity.
To get the speeds of stars the astronomers did not clock them with a stopwatch. Their actual motions are too small to measure with a telescope. Instead, the researchers sampled small areas of starlight from different parts of the galaxies and split the light into spectra – or their rainbows of color. These spectra contain lines that shift in proportion to the speeds the stars are moving. The speed of the stars can then be used to determine the mass of the galaxy the stars are in.
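A modern minimal sketch of the last step described above follows: given an orbital speed and radius, it infers the mass enclosed by the orbit from v² = GM/r (a circular-orbit approximation). The speed and radius below are purely illustrative, not the measurements reported in this article.

```python
# Infer the mass enclosed within a star's orbit from its orbital speed,
# using v^2 = G*M/r (circular-orbit approximation). Illustrative numbers.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg

v = 200e3                # orbital speed of the stars, m/s (illustrative)
r = 3.086e20             # orbital radius, about 10 kiloparsecs in meters

M_enclosed = v**2 * r / G
print(f"Enclosed mass ≈ {M_enclosed / M_SUN:.2e} solar masses")
```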
To make comparisons easier, astronomers blend the luminosity and mass measurements into a single number called a mass-to-light ratio, which is based on the mass and luminosity of our Sun. Our Sun has a mass of "one solar mass" and a luminosity of "one solar luminosity," so the Sun's mass-to-light ratio is equal to one. A ratio greater than 1 implies more mass than luminosity – which means an object (a galaxy) has more mass than expected.
Image of NGC 3521 by Las Campanas Observatory's duPont telescope. Inset: image of NGC 972 by Palomar Observatory's Hale Telescope.
Images from Carnegie Atlas of Galaxies (Sandage & Bedke 1994)
The NGC 3521 galaxy was studied with the 82 inch telescope at McDonald Observatory at the University of Texas. The results of the study give it a mass-to-light ratio of 4 or greater. This means this galaxy has four times more mass than its light indicates it should have. The mass of the NGC 972 galaxy is closer to what was expected, with a mass-to-light ratio of 1.2. These results were reported in recent issues of the Astrophysical Journal by teams of researchers led by Margaret Burbidge at the University of California at San Diego.
These two galaxies are not unique in their "misbehavior." Other researchers are finding galaxies everywhere that have mismatched mass-to-light ratios. So far, no one has offered a clear explanation for this.
About the only consolation these scientists may have is that their missing matter problem is far less extreme than that of astronomer Fritz Zwicky at Caltech. In 1933 he measured the amount of light from the entire Coma cluster of galaxies. Then he measured the speeds of the galaxies as they orbited the cluster to determine their mass. Zwicky came up with a mass-to-light ratio of about 500. That means more than 99 percent of the matter there is hidden (not giving off light). At the moment, most astronomers seem content to ignore such extreme numbers, being, as they probably are, astronomical flukes.
The process of extracting liquid crude oil from the ground is comparatively simple compared with extracting oil from oil shale. Pressure from gases trapped in the chamber where the oil is present forces the crude oil to the surface. After this pressure is exhausted, the more difficult secondary and tertiary phases of oil drilling begin. In some cases, water may be pumped in to loosen compressed oil. Sometimes gases are introduced to repressurize the oil chamber. And in many cases, the remaining oil is simply left for future drilling with more advanced equipment.
Getting crude oil from rock represents perhaps the most difficult process of extraction. Oil shale must be mined using either underground- or surface-mining methods. After excavation, the oil shale must undergo retorting. This is when the mined rock is exposed to the process of pyrolysis -- applying extreme heat without the presence of oxygen to a substance, and producing a chemical change. Between 650 and 700 degrees Fahrenheit, the kerogen -- the fossil fuel trapped within -- begins to liquefy and separate from the rock [source: Argonne National Laboratory]. The oil-like substance that emerges can be further refined into a synthetic crude oil. When oil shale is mined and retorted above ground, the process is called surface retorting.
Source: Oil shale
In classical mechanics, Newton's theorem of revolving orbits identifies the type of central force needed to multiply the angular speed of a particle by a factor k without affecting its radial motion (Figures 1 and 2). Newton applied his theorem to understanding the overall rotation of orbits (apsidal precession, Figure 3) that is observed for the Moon and planets. The term "radial motion" signifies the motion towards or away from the center of force, whereas the angular motion is perpendicular to the radial motion.
Isaac Newton derived this theorem in Propositions 43–45 of Book I of his Philosophiæ Naturalis Principia Mathematica, first published in 1687. In Proposition 43, he showed that the added force must be a central force, one whose magnitude depends only upon the distance r between the particle and a point fixed in space (the center). In Proposition 44, he derived a formula for the force, showing that it was an inverse-cube force, one that varies as the inverse cube of r. In Proposition 45 Newton extended his theorem to arbitrary central forces by assuming that the particle moved in nearly circular orbit.
As noted by astrophysicist Subrahmanyan Chandrasekhar in his 1995 commentary on Newton's Principia, this theorem remained largely unknown and undeveloped for over three centuries. Since 1997, the theorem has been studied by Donald Lynden-Bell and collaborators. Its first exact extension came in 2000 with the work of Mahomed and Vawda.
The motion of astronomical bodies has been studied systematically for thousands of years. The stars were observed to rotate uniformly, always maintaining the same relative positions to one another. However, other bodies were observed to wander against the background of the fixed stars; most such bodies were called planets after the Greek word "πλανήτοι" (planētoi) for "wanderers". Although they generally move in the same direction along a path across the sky (the ecliptic), individual planets sometimes reverse their direction briefly, exhibiting retrograde motion.
To describe this forward-and-backward motion, Apollonius of Perga (c. 262 – c. 190 BC) developed the concept of deferents and epicycles, according to which the planets are carried on rotating circles that are themselves carried on other rotating circles, and so on. Any orbit can be described with a sufficient number of judiciously chosen epicycles, since this approach corresponds to a modern Fourier transform. Roughly 350 years later, Claudius Ptolemaeus published his Almagest, in which he developed this system to match the best astronomical observations of his era. To explain the epicycles, Ptolemy adopted the geocentric cosmology of Aristotle, according to which planets were confined to concentric rotating spheres. This model of the universe was authoritative for nearly 1500 years.
The modern understanding of planetary motion arose from the combined efforts of astronomer Tycho Brahe and physicist Johannes Kepler in the 16th century. Tycho is credited with extremely accurate measurements of planetary motions, from which Kepler was able to derive his laws of planetary motion. According to these laws, planets move on ellipses (not epicycles) about the Sun (not the Earth). Kepler's second and third laws make specific quantitative predictions: planets sweep out equal areas in equal time, and the square of their orbital periods equals a fixed constant times the cube of their semi-major axis. Subsequent observations of the planetary orbits showed that the long axis of the ellipse (the so-called line of apsides) rotates gradually with time; this rotation is known as apsidal precession. The apses of an orbit are the points at which the orbiting body is closest or furthest away from the attracting center; for planets orbiting the Sun, the apses correspond to the perihelion (closest) and aphelion (furthest).
With the publication of his Principia roughly eighty years later (1687), Isaac Newton provided a physical theory that accounted for all three of Kepler's laws, a theory based on Newton's laws of motion and his law of universal gravitation. In particular, Newton proposed that the gravitational force between any two bodies was a central force F(r) that varied as the inverse square of the distance r between them. Arguing from his laws of motion, Newton showed that the orbit of any particle acted upon by one such force is always a conic section, specifically an ellipse if it does not go to infinity. However, this conclusion holds only when two bodies are present (the two-body problem); the motion of three bodies or more acting under their mutual gravitation (the n-body problem) remained unsolved for centuries after Newton, although solutions to a few special cases were discovered. Newton proposed that the orbits of planets about the Sun are largely elliptical because the Sun's gravitation is dominant; to first approximation, the presence of the other planets can be ignored. By analogy, the elliptical orbit of the Moon about the Earth was dominated by the Earth's gravity; to first approximation, the Sun's gravity and those of other bodies of the Solar System can be neglected. However, Newton stated that the gradual apsidal precession of the planetary and lunar orbits was due to the effects of these neglected interactions; in particular, he stated that the precession of the Moon's orbit was due to the perturbing effects of gravitational interactions with the Sun.
Newton's theorem of revolving orbits was his first attempt to understand apsidal precession quantitatively. According to this theorem, the addition of a particular type of central force—the inverse-cube force—can produce a rotating orbit; the angular speed is multiplied by a factor k, whereas the radial motion is left unchanged. However, this theorem is restricted to a specific type of force that may not be relevant; several perturbing inverse-square interactions (such as those of other planets) seem unlikely to sum exactly to an inverse-cube force. To make his theorem applicable to other types of forces, Newton found the best approximation of an arbitrary central force F(r) to an inverse-cube potential in the limit of nearly circular orbits, that is, elliptical orbits of low eccentricity, as is indeed true for most orbits in the Solar System. To find this approximation, Newton developed an infinite series that can be viewed as the forerunner of the Taylor expansion. This approximation allowed Newton to estimate the rate of precession for arbitrary central forces. Newton applied this approximation to test models of the force causing the apsidal precession of the Moon's orbit. However, the problem of the Moon's motion is dauntingly complex, and Newton never published an accurate gravitational model of the Moon's apsidal precession. After a more accurate model by Clairaut in 1747, analytical models of the Moon's motion were developed in the late 19th century by Hill, Brown, and Delaunay.
However, Newton's theorem is more general than merely explaining apsidal precession. It describes the effects of adding an inverse-cube force to any central force F(r), not only to inverse-square forces such as Newton's law of universal gravitation and Coulomb's law. Newton's theorem simplifies orbital problems in classical mechanics by eliminating inverse-cube forces from consideration. The radial and angular motions, r(t) and θ1(t), can be calculated without the inverse-cube force; afterwards, its effect can be calculated by multiplying the angular speed of the particle by the constant factor k: ω2 = kω1.
Consider a particle moving under an arbitrary central force F1(r) whose magnitude depends only on the distance r between the particle and a fixed center. Since the motion of a particle under a central force always lies in a plane, the position of the particle can be described by polar coordinates (r, θ1), the radius and angle of the particle relative to the center of force (Figure 1). Both of these coordinates, r(t) and θ1(t), change with time t as the particle moves.
Imagine a second particle with the same mass m and with the same radial motion r(t), but one whose angular speed is k times faster than that of the first particle. In other words, the azimuthal angles of the two particles are related by the equation θ2(t) = k θ1(t). Newton showed that the motion of the second particle can be produced by adding an inverse-cube central force to whatever force F1(r) acts on the first particle:
F2(r) = F1(r) + (1 − k²) L1²/(mr³)
where L1 is the angular momentum of the first particle, a constant of the motion because the force is central.
If k2 is greater than one, F2 − F1 is a negative number; thus, the added inverse-cube force is attractive, as observed in the green planet of Figures 1–4 and 9. By contrast, if k2 is less than one, F2−F1 is a positive number; the added inverse-cube force is repulsive, as observed in the green planet of Figures 5 and 10, and in the red planet of Figures 4 and 5.
The addition of such an inverse-cube force also changes the path followed by the particle. The path of the particle ignores the time dependencies of the radial and angular motions, such as r(t) and θ1(t); rather, it relates the radius and angle variables to one another. For this purpose, the angle variable is unrestricted and can increase indefinitely as the particle revolves around the central point multiple times. For example, if the particle revolves twice about the central point and returns to its starting position, its final angle is not the same as its initial angle; rather, it has increased by 2×360° = 720°. Formally, the angle variable is defined as the integral of the angular speed:
θ1 = ∫ ω1(t) dt
A similar definition holds for θ2, the angle of the second particle.
If the path of the first particle is described in the form r = g(θ1), the path of the second particle is given by the function r = g(θ2/k), since θ2 = k θ1. For example, let the path of the first particle be an ellipse:
1/r = A + B cos θ1
where A and B are constants; then, the path of the second particle is given by
1/r = A + B cos(θ2/k)
If k is close, but not equal, to one, the second orbit resembles the first, but revolves gradually about the center of force; this is known as orbital precession (Figure 3). If k is greater than one, the orbit precesses in the same direction as the orbit (Figure 3); if k is less than one, the orbit precesses in the opposite direction.
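A short numerical sketch of this relationship follows. It uses the conic-section path 1/r = A + B cos θ quoted above with illustrative values of A, B, and k (not taken from any physical orbit) and simply evaluates r = g(θ2/k) to trace the revolving orbit and its apsidal advance.

```python
import numpy as np

# Illustrative constants for the conic path 1/r = A + B*cos(theta)
A, B = 1.0, 0.5          # B < A gives a bound, ellipse-like path
k = 1.1                  # angular scaling factor (k > 1: prograde precession)

theta2 = np.linspace(0.0, 6 * np.pi, 2000)   # let the angle wind up freely

def g(theta):
    """Radius of the first (static) orbit as a function of its own angle."""
    return 1.0 / (A + B * np.cos(theta))

r_static = g(theta2)          # first particle: r = g(theta1)
r_revolving = g(theta2 / k)   # second particle: r = g(theta2 / k)

print(f"radial range of revolving orbit: {r_revolving.min():.3f} to {r_revolving.max():.3f}")
# The apsides of the revolving orbit advance by (k - 1)*360 degrees per turn.
print(f"apsidal advance per revolution: {(k - 1) * 360:.1f} degrees")
```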
Although the orbit in Figure 3 may seem to rotate uniformly, i.e., at a constant angular speed, this is true only for circular orbits. If the orbit rotates at an angular speed Ω, the angular speed of the second particle is faster or slower than that of the first particle by Ω; in other words, the angular speeds would satisfy the equation ω2 = ω1 + Ω. However, Newton's theorem of revolving orbits states that the angular speeds are related by multiplication: ω2 = kω1, where k is a constant. Combining these two equations shows that the angular speed of the precession equals Ω = (k − 1)ω1. Hence, Ω is constant only if ω1 is constant. According to the conservation of angular momentum, ω1 changes with the radius r:
ω1 = L1/(mr²)
where m and L1 are the first particle's mass and angular momentum, respectively, both of which are constant. Hence, ω1 is constant only if the radius r is constant, i.e., when the orbit is a circle. However, in that case, the orbit does not change as it precesses.
The simplest illustration of Newton's theorem occurs when there is no initial force, i.e., F1(r) = 0. In this case, the first particle is stationary or travels in a straight line. If it travels in a straight line that does not pass through the origin (yellow line in Figure 6) the equation for such a line may be written in the polar coordinates (r, θ1) as
1/r = (1/b) cos(θ1 − θ0)
where θ0 is the angle at which the distance is minimized (Figure 6). The distance r begins at infinity (when θ1 – θ0 = −90°), and decreases gradually until θ1 – θ0 = 0°, when the distance reaches a minimum, then gradually increases again to infinity at θ1 – θ0 = 90°. The minimum distance b is the impact parameter, which is defined as the length of the perpendicular from the fixed center to the line of motion. The same radial motion is possible when an inverse-cube central force is added.
An inverse-cube central force F2(r) has the form
F2(r) = μ/r³
where the numerator μ may be positive (repulsive) or negative (attractive). If such an inverse-cube force is introduced, Newton's theorem says that the corresponding solutions have a shape called Cotes's spirals. These are curves defined by the equation
1/r = (1/b) cos[(θ1 − θ0)/k]
where the constant k satisfies
k² = 1 − mμ/L1²
When the right-hand side of the equation is a positive real number, the solution corresponds to an epispiral. When the argument θ1 – θ0 equals ±90°×k, the cosine goes to zero and the radius goes to infinity. Thus, when k is less than one, the range of allowed angles becomes small and the force is repulsive (red curve on right in Figure 7). On the other hand, when k is greater than one, the range of allowed angles increases, corresponding to an attractive force (green, cyan and blue curves on left in Figure 7); the orbit of the particle can even wrap around the center several times. The possible values of the parameter k may range from zero to infinity, which corresponds to values of μ ranging from negative infinity up to the positive upper limit, L12/m. Thus, for all attractive inverse-cube forces (negative μ) there is a corresponding epispiral orbit, as for some repulsive ones (μ < L12/m), as illustrated in Figure 7. Stronger repulsive forces correspond to a faster linear motion.
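The sketch below is a minimal illustration of this relationship, using made-up values of μ, L1, and m; it evaluates the scaling factor k = √(1 − mμ/L1²) quoted above and the corresponding angular half-range over which the radius stays finite.

```python
import math

# Illustrative parameters (arbitrary units, not from any real system)
m = 1.0        # particle mass
L1 = 2.0       # angular momentum of the corresponding force-free motion
mu_values = [-8.0, -2.0, 0.0, 2.0, 3.5]   # inverse-cube strengths, mu < L1**2/m

for mu in mu_values:
    k = math.sqrt(1.0 - m * mu / L1**2)   # k > 1 attractive, k < 1 repulsive
    half_range_deg = 90.0 * k             # radius diverges at (theta - theta0) = ±90°·k
    kind = "attractive" if mu < 0 else ("force-free" if mu == 0 else "repulsive")
    print(f"mu = {mu:+5.1f} ({kind:10s})  k = {k:.3f}  angular half-range = {half_range_deg:6.1f}°")
```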
One of the other solution types is given in terms of the hyperbolic cosine:
1/r = (1/b) cosh[(θ1 − θ0)/λ]
where the constant λ satisfies
λ² = mμ/L1² − 1
This form of Cotes's spirals corresponds to one of the two Poinsot's spirals (Figure 8). The possible values of λ range from zero to infinity, which corresponds to values of μ greater than the positive number L12/m. Thus, Poinsot spiral motion only occurs for repulsive inverse-cube central forces, and applies in the case that L is not too large for the given μ.
The third form of Cotes's spiral is the reciprocal spiral,
1/r = A θ1 + ε
where A and ε are arbitrary constants. Such curves result when the strength μ of the repulsive force exactly balances the angular momentum–mass term, i.e., μ = L1²/m.
Two types of central forces—those that increase linearly with distance, F = Cr, such as Hooke's law, and inverse-square forces, F = C/r2, such as Newton's law of universal gravitation and Coulomb's law—have a very unusual property. A particle moving under either type of force always returns to its starting place with its initial velocity, provided that it lacks sufficient energy to move out to infinity. In other words, the path of a bound particle is always closed and its motion repeats indefinitely, no matter what its initial position or velocity. As shown by Bertrand's theorem, this property is not true for other types of forces; in general, a particle will not return to its starting point with the same velocity.
However, Newton's theorem shows that an inverse-cubic force may be applied to a particle moving under a linear or inverse-square force such that its orbit remains closed, provided that k equals a rational number. (A number is called "rational" if it can be written as a fraction m/n, where m and n are integers.) In such cases, the addition of the inverse-cubic force causes the particle to complete m rotations about the center of force in the same time that the original particle completes n rotations. This method for producing closed orbits does not violate Bertrand's theorem, because the added inverse-cubic force depends on the initial velocity of the particle.
Harmonic and subharmonic orbits are special types of such closed orbits. A closed trajectory is called a harmonic orbit if k is an integer, i.e., if n = 1 in the formula k = m/n. For example, if k = 3 (green planet in Figures 1 and 4, green orbit in Figure 9), the resulting orbit is the third harmonic of the original orbit. Conversely, the closed trajectory is called a subharmonic orbit if k is the inverse of an integer, i.e., if m = 1 in the formula k = m/n. For example, if k = 1/3 (green planet in Figure 5, green orbit in Figure 10), the resulting orbit is called the third subharmonic of the original orbit. Although such orbits are unlikely to occur in nature, they are helpful for illustrating Newton's theorem.
In Proposition 45 of his Principia, Newton applies his theorem of revolving orbits to develop a method for finding the force laws that govern the motions of planets. Johannes Kepler had noted that the orbits of most planets and the Moon seemed to be ellipses, and the long axis of those ellipses can be determined accurately from astronomical measurements. The long axis is defined as the line connecting the positions of minimum and maximum distances to the central point, i.e., the line connecting the two apses. For illustration, the long axis of the planet Mercury is defined as the line through its successive positions of perihelion and aphelion. Over time, the long axis of most orbiting bodies rotates gradually, generally no more than a few degrees per complete revolution, because of gravitational perturbations from other bodies, oblateness in the attracting body, general relativistic effects, and other effects. Newton's method uses this apsidal precession as a sensitive probe of the type of force being applied to the planets.
Newton's theorem describes only the effects of adding an inverse-cube central force. However, Newton extends his theorem to an arbitrary central force F(r) by restricting his attention to orbits that are nearly circular, such as ellipses with low orbital eccentricity (ε ≤ 0.1), which is true of seven of the eight planetary orbits in the solar system. Newton also applied his theorem to the planet Mercury, which has an eccentricity ε of roughly 0.21, and suggested that it may pertain to Halley's comet, whose orbit has an eccentricity of roughly 0.97.
A qualitative justification for this extrapolation of his method has been suggested by Valluri, Wilson and Harper. According to their argument, Newton considered the apsidal precession angle α (the angle between the vectors of successive minimum and maximum distance from the center) to be a smooth, continuous function of the orbital eccentricity ε. For the inverse-square force, α equals 180°; the vectors to the positions of minimum and maximum distances lie on the same line. If α is initially not 180° at low ε (quasi-circular orbits) then, in general, α will equal 180° only for isolated values of ε; a randomly chosen value of ε would be very unlikely to give α = 180°. Therefore, the observed slow rotation of the apsides of planetary orbits suggests that the force of gravity is an inverse-square law.
To simplify the equations, Newton writes F(r) in terms of a new function C(r)
where R is the average radius of the nearly circular orbit. Newton expands C(r) in a series—now known as a Taylor expansion—in powers of the distance r, one of the first appearances of such a series. By equating the resulting inverse-cube force term with the inverse-cube force for revolving orbits, Newton derives an equivalent angular scaling factor k for nearly circular orbits:
In other words, the application of an arbitrary central force F(r) to a nearly circular elliptical orbit can accelerate the angular motion by the factor k without affecting the radial motion significantly. If an elliptical orbit is stationary, the particle rotates about the center of force by 180° as it moves from one end of the long axis to the other (the two apses). Thus, the corresponding apsidal angle α for a general central force equals k×180°, using the general law θ2 = k θ1.
Newton illustrates his formula with three examples. In the first two, the central force is a power law, F(r) = rn−3, so C(r) is proportional to rn. The formula above indicates that the angular motion is multiplied by a factor k = 1/√n, so that the apsidal angle α equals 180°/√n.
This angular scaling can be seen in the apsidal precession, i.e., in the gradual rotation of the long axis of the ellipse (Figure 3). As noted above, the orbit as a whole rotates with a mean angular speed Ω=(k−1)ω, where ω equals the mean angular speed of the particle about the stationary ellipse. If the particle requires a time T to move from one apse to the other, this implies that, in the same time, the long axis will rotate by an angle β = ΩT = (k − 1)ωT = (k − 1)×180°. For an inverse-square law such as Newton's law of universal gravitation, where n equals 1, there is no angular scaling (k = 1), the apsidal angle α is 180°, and the elliptical orbit is stationary (Ω = β = 0).
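As a small numerical check of the power-law case, assuming the relation k = 1/√n quoted above, the following sketch computes the apsidal angle and the apsidal advance per revolution for a few exponents n (n = 1 being the inverse-square law).

```python
import math

# For a power-law force F(r) = r**(n-3), the text gives k = 1/sqrt(n),
# an apsidal angle of 180°/sqrt(n), and an advance of (k-1)*360° per revolution.
for n in (0.5, 1.0, 2.0, 3.0):
    k = 1.0 / math.sqrt(n)
    apsidal_angle = 180.0 * k            # degrees between successive apses
    advance_per_rev = (k - 1.0) * 360.0  # degrees of apsidal precession per orbit
    print(f"n = {n:4.1f}: k = {k:.3f}, apsidal angle = {apsidal_angle:6.1f}°, "
          f"advance per revolution = {advance_per_rev:+7.1f}°")
```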
As a final illustration, Newton considers a sum of two power laws
which multiplies the angular speed by a factor
Newton applies both of these formulae (the power law and sum of two power laws) to examine the apsidal precession of the Moon's orbit.
The motion of the Moon can be measured accurately, and is noticeably more complex than that of the planets. The ancient Greek astronomers, Hipparchus and Ptolemy, had noted several periodic variations in the Moon's orbit, such as small oscillations in its orbital eccentricity and the inclination of its orbit to the plane of the ecliptic. These oscillations generally occur on a once-monthly or twice-monthly time-scale. The line of its apses precesses gradually with a period of roughly 8.85 years, while its line of nodes turns a full circle in roughly double that time, 18.6 years. This accounts for the roughly 18-year periodicity of eclipses, the so-called Saros cycle. However, both lines experience small fluctuations in their motion, again on the monthly time-scale.
In 1673, Jeremiah Horrocks published a reasonably accurate model of the Moon's motion in which the Moon was assumed to follow a precessing elliptical orbit. A sufficiently accurate and simple method for predicting the Moon's motion would have solved the navigational problem of determining a ship's longitude; in Newton's time, the goal was to predict the Moon's position to 2' (two arc-minutes), which would correspond to a 1° error in terrestrial longitude. Horrocks' model predicted the lunar position with errors no more than 10 arc-minutes; for comparison, the diameter of the Moon is roughly 30 arc-minutes.
Newton used his theorem of revolving orbits in two ways to account for the apsidal precession of the Moon. First, he showed that the Moon's observed apsidal precession could be accounted for by changing the force law of gravity from an inverse-square law to a power law in which the exponent was 2 + 4/243 (roughly 2.0165)
In 1894, Asaph Hall adopted this approach of modifying the exponent in the inverse-square law slightly to explain an anomalous orbital precession of the planet Mercury, which had been observed in 1859 by Urbain Le Verrier. Ironically, Hall's theory was ruled out by careful astronomical observations of the Moon. The currently accepted explanation for this precession involves the theory of general relativity, which (to first approximation) adds an inverse-quartic force, i.e., one that varies as the inverse fourth power of distance.
As a second approach to explaining the Moon's precession, Newton suggested that the perturbing influence of the Sun on the Moon's motion might be approximately equivalent to an additional linear force:
F(r) = A/r² + Br
The first term corresponds to the gravitational attraction between the Moon and the Earth, where r is the Moon's distance from the Earth. The second term, so Newton reasoned, might represent the average perturbing force of the Sun's gravity on the Earth–Moon system. Such a force law could also result if the Earth were surrounded by a spherical dust cloud of uniform density. Using the formula for k for nearly circular orbits, and estimates of A and B, Newton showed that this force law could not account for the Moon's precession, since the predicted apsidal angle α (≈ 180.76°) fell short of the observed α (≈ 181.525°). For every revolution, the long axis would rotate 1.5°, roughly half of the observed 3.0°.
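A brief arithmetic sketch, using only the two apsidal angles quoted above, shows where the 1.5° and 3.0° per-revolution figures come from: each apse-to-apse passage advances the long axis by α − 180°, so a full revolution advances it by twice that amount.

```python
# Apsidal angles quoted above (degrees)
alpha_predicted = 180.76    # from Newton's A/r^2 + B*r force model
alpha_observed = 181.525    # from lunar observations

# The long axis advances by (alpha - 180°) per apse-to-apse passage,
# i.e. twice that per full revolution.
for label, alpha in [("predicted", alpha_predicted), ("observed", alpha_observed)]:
    per_revolution = 2 * (alpha - 180.0)
    print(f"{label:9s}: apsidal angle {alpha:7.3f}° -> {per_revolution:.2f}° of precession per revolution")
```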
Isaac Newton first published his theorem in 1687, as Propositions 43–45 of Book I of his Philosophiæ Naturalis Principia Mathematica. However, as astrophysicist Subrahmanyan Chandrasekhar noted in his 1995 commentary on Newton's Principia, the theorem remained largely unknown and undeveloped for over three centuries.
The first generalization of Newton's theorem was discovered by Mahomed and Vawda in 2000. As Newton did, they assumed that the angular motion of the second particle was k times faster than that of the first particle, θ2 = k θ1. In contrast to Newton, however, Mahomed and Vawda did not require that the radial motion of the two particles be the same, r1 = r2. Rather, they required that the inverse radii be related by a linear equation:
1/r2(t) = a/r1(t) + b
This transformation of the variables changes the path of the particle. If the path of the first particle is written r1 = g(θ1), the second particle's path can be written as
r2(θ2) = g(θ2/k) / [a + b g(θ2/k)]
If the motion of the first particle is produced by a central force F1(r), Mahomed and Vawda showed that the motion of the second particle can be produced by the following force
According to this equation, the second force F2(r) is obtained by scaling the first force and changing its argument, as well as by adding inverse-square and inverse-cube central forces.
For comparison, Newton's theorem of revolving orbits corresponds to the case a = 1 and b = 0, so that r1 = r2. In this case, the original force is not scaled, and its argument is unchanged; the inverse-cube force is added, but the inverse-square term is not. Also, the path of the second particle is r2 = g(θ2/k), consistent with the formula given above.
Newton's derivation of Proposition 43 depends on his Proposition 2, derived earlier in the Principia. Proposition 2 provides a geometrical test for whether the net force acting on a point mass (a particle) is a central force. Newton showed that a force is central if and only if the particle sweeps out equal areas in equal times as measured from the center.
Newton's derivation begins with a particle moving under an arbitrary central force F1(r); the motion of this particle under this force is described by its radius r(t) from the center as a function of time, and also its angle θ1(t). In an infinitesimal time dt, the particle sweeps out an approximate right triangle whose area is
dA1 = ½ r² dθ1 = ½ r² ω1 dt
Since the force acting on the particle is assumed to be a central force, the particle sweeps out equal areas in equal times, by Newton's Proposition 2. Expressed another way, the rate of sweeping out area is constant:
dA1/dt = ½ r² ω1 = constant
This constant areal velocity can be calculated as follows. At the apoapsis and periapsis, the positions of furthest and closest distance from the attracting center, the velocity and radius vectors are perpendicular; therefore, the angular momentum L1 per mass m of the particle (written as h1) can be related to the rate of sweeping out areas:
h1 = L1/m = r² ω1 = 2 dA1/dt
Now consider a second particle whose orbit is identical in its radius, but whose angular variation is multiplied by a constant factor k:
θ2(t) = k θ1(t)
The areal velocity of the second particle equals that of the first particle multiplied by the same factor k:
dA2/dt = ½ r² ω2 = ½ r² k ω1 = k dA1/dt
Since k is a constant, the second particle also sweeps out equal areas in equal times. Therefore, by Proposition 2, the second particle is also acted upon by a central force F2(r). This is the conclusion of Proposition 43.
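The following minimal sketch, using an arbitrary made-up radial history r(t) and angular history θ1(t) (not central-force motion), simply checks numerically that scaling the angle by k scales the areal velocity ½ r² dθ/dt by the same factor, which is the step used in Proposition 43 above.

```python
import numpy as np

k = 2.5
t = np.linspace(0.0, 10.0, 1001)

# Arbitrary smooth radial and angular histories (illustrative only)
r = 1.0 + 0.3 * np.sin(0.7 * t)
theta1 = 0.9 * t + 0.2 * np.sin(t)
theta2 = k * theta1                            # second particle's angle

areal1 = 0.5 * r**2 * np.gradient(theta1, t)   # (1/2) r^2 dθ1/dt
areal2 = 0.5 * r**2 * np.gradient(theta2, t)   # (1/2) r^2 dθ2/dt

print("max |areal2 / areal1 - k| =", np.max(np.abs(areal2 / areal1 - k)))
```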
To find the magnitude of F2(r) from the original central force F1(r), Newton calculated their difference F2(r) − F1(r) using geometry and the definition of centripetal acceleration. In Proposition 44 of his Principia, he showed that the difference is proportional to the inverse cube of the radius, specifically by the formula given above, which Newton writes in terms of the two constant areal velocities, h1 and h2:
F2(r) − F1(r) = m (h1² − h2²)/r³
In this Proposition, Newton derives the consequences of his theorem of revolving orbits in the limit of nearly circular orbits. This approximation is generally valid for planetary orbits and the orbit of the Moon about the Earth. This approximation also allows Newton to consider a great variety of central force laws, not merely inverse-square and inverse-cube force laws.
Since the two radii have the same behavior with time, r(t), the conserved angular momenta are related by the same factor k:
L2 = k L1
Applying the general formula to the two orbits yields the equation
which can be re-arranged to the form
This equation relating the two radial forces can be understood qualitatively as follows. The difference in angular speeds (or equivalently, in angular momenta) causes a difference in the centripetal force requirement; to offset this, the radial force must be altered with an inverse-cube force.
Newton's theorem can be expressed equivalently in terms of potential energy, which is defined for central forces:
V(r) = −∫ F(r) dr
The radial force equation can be written in terms of the two potential energies
Integrating with respect to the distance r, Newton's theorem states that a k-fold change in angular speed results from adding an inverse-square potential energy to any given potential energy V1(r):
V2(r) = V1(r) + (1 − k²) L1²/(2mr²)
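A short numerical check of this potential-energy form is sketched below, with arbitrary illustrative values of L1, m, and k; a numerical derivative stands in for −dV/dr and is compared with the inverse-cube force difference quoted earlier.

```python
import numpy as np

# Illustrative constants
m, L1, k = 1.0, 2.0, 1.3

r = np.linspace(0.5, 5.0, 2001)

# Added potential energy and the force it should correspond to
dV = (1.0 - k**2) * L1**2 / (2.0 * m * r**2)        # V2 - V1
F_from_V = -np.gradient(dV, r)                      # F = -dV/dr, numerically
F_expected = (1.0 - k**2) * L1**2 / (m * r**3)      # added inverse-cube force

print("max relative error:",
      np.max(np.abs(F_from_V - F_expected) / np.abs(F_expected)))
```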
Although Newton states that the problem was to be solved by Proposition 6, he does not use it explicitly. In the following simplified proof, Proposition 6 is used to show how the result is derived.
Newton's detailed proof follows that one, and finally Proposition 6 itself is appended, since it is not well known.
Proposition 44 uses Proposition 6 to prove a result about revolving orbits. In the propositions following Proposition 6 in Section 2 of the Principia, he applies it to specific curves, for example, conic sections. In the case of Proposition 44, it is applied to any orbit, under the action of an arbitrary force directed towards a fixed point, to produce a corresponding revolving orbit.
In Fig. 1, MN is part of that orbit. At point P, the body moves to Q under the action of a force directed towards S, as before. The force, F(SP) is defined at each point P on the curve.
In Fig. 2, the corresponding part of the revolving orbit is mn with s as its centre of force. Assume that initially, the body in the static orbit starts out at right angles to the radius with speed V. The body in the revolving orbit must also start at right angles and assume its speed is v. In the case shown in Fig. 1, and the force is directed towards S. The argument applies equally if . Also, the force may be directed away from the centre.
Let SA be the initial direction of the static orbit, and sa, that of the revolving orbit. If after a certain time the bodies in the respective orbits are at P and p, then the ratios of the angles ; the ratios of the areas ; and the radii , , .
The figure pryx and the arc py in Fig. 2 are the figure PRQT and the arc PQ in Fig. 1, expanded linearly in the horizontal direction in the ratio , so that , , and . The straight lines qt and QT should really be circular arcs with centres s and S and radii sq and SQ respectively. In the limit, their ratio becomes , whether they are straight lines or arcs.
Since in the limit the forces are parallel to SP and sp, if the same force acted on the body in Fig. 2 as in Fig. 1, the body would arrive at y, since ry = RQ. The difference in horizontal speed does not affect the vertical distances. Newton refers to Corollary 2 of the Laws of Motion, where the motion of the bodies is resolved into a component in the radial direction acted on by the whole force, and the other component transverse to it, acted on by no force.
However, the distance from y to the centre, s, is now greater than SQ, so an additional force is required to move the body to q such that sq = SQ. The extra force is represented by yq, and f is proportional to ry + yq, just as F is to RQ.
The difference, , can be found as follows:
, , so .
And in the limit, as QT and qt approach zero, becomes equal to or 2SP so
Since from Proposition 6 (Fig.1 and see below), the force is . Divide by , where k is constant, to obtain the forces .
In Fig. 3, at the initial point A of the static curve, draw the tangent AR, which is perpendicular to SA, and the circle AQD, which just touches the curve at A. Let ρ be the radius of that circle. Since angle SAR is a right angle, the centre of the circle lies on SA. From the property of a circle: , and in the limit as Q approaches A, this becomes .
And since F(SA) is given, this determines the constant k. However, Newton wants the force at A to be of the form , where c is a constant, so that , where .
The expression for f(sp) above is the same as Newton's in Corollary 4 of Proposition 44, except that he uses different letters. He writes (where G and F are not necessarily equal to v and V respectively), and uses the letter “V” for the constant corresponding to “c”, and the letter “X” for the function F(sp).
The above geometric proof shows very clearly where the additional force arises from to make the orbit revolve with respect to the static orbit.
Newton's proof is complicated, in view of the simplicity of the above proof. As an example, his proof requires some deciphering, as the following sentence shows:
“And therefore, if with centre C and any radius CP or Cp a circular sector is described equal to the total area VPC which the body P revolving in an immobile orbit has described in any time by a radius drawn to the centre, the difference between the forces by which the body P in an immobile orbit and body p in a mobile orbit revolve will be to the centripetal force by which some body, by a radius drawn to the centre, would have been able to describe that sector uniformly in the same time in which the area VPC was described as G2 - F2 to F2.”
He initially regards the infinitesimal as fixed, then the areas SPQ and spq are proportional to V and v, respectively; therefore, and at each of the points P and p, and so the additional force varies inversely as the cube of the radius.
In Fig.1, XQ is a circular arc, with centre S and radius SQ, meeting SP at X. The perpendicular XY meets RQ at Y, and .
Let be the force required to make a body move in a circle of radius SQ, if it has the same speed as the transverse speed of the body in the static orbit at Q.
at every point, P and in particular at the apside, A:
But at A, in Fig. 3., the ratio of the force that makes the body follow the static curve, AE, to that required to make it follow the circle, AB, with radius SA, is inversely as the ratio of their radii of curvature, since they are both moving at the same speed, V, perpendicular to SA: . From the first part of the proof, .
Substituting Newton's expression for F(SA), gives the result obtained previously.
“To find the motion of the apsides in orbits approaching circles.”
Proposition 44 was devised expressly to prove this Proposition. Newton wants to investigate the motion of a body in a nearly circular orbit attracted by a force of the form .
He approximates the static curve by an ellipse with an inverse square force, F(SP), directed to one of the foci, made to revolve by the addition of an inverse cube force, according to Proposition 44.
For the static ellipse, with the force varying inversely as SP squared, , since c is defined above so that .
With the body in the static orbit starting from the upper apside at A, it will reach the lower apside, the point closest to S, after moving through an angle of 180 degrees. Newton wants a corresponding revolving orbit starting from apside, a, about a point s, with the lower apside shifted by an angle, α, where .
The initial speed, V, at A must be just less than that required to make the body move in a circle. Then ρ can be taken as equal to SA or sa. The problem is to determine v from the value of n, so that α can be found, or given α, to find n.
Then “by our method of converging series”: plus terms in X2 and above which can be ignored because the orbit is almost circular, so X is small compared to sa.
Comparing the 2 expressions for f(sp), it follows that .
The ratio of the initial forces at a is given by .
In Fig. 1, a body is moving along a specific curve MN acted on by a (centripetal) force towards the fixed point S. The force depends only on the distance of the point from S. The aim of this proposition is to determine how the force varies with the radius, SP. The method applies equally to the case where the force is centrifugal.
In a small time, , the body moves from P to the nearby point Q. Draw QR parallel to SP meeting the tangent at R, and QT perpendicular to SP meeting it at T.
If there was no force present it would have moved along the tangent at P with the speed that it had at P, arriving at the point, R. If the force on the body moving from P to Q was constant in magnitude and parallel to the direction SP, the arc PQ would be parabolic with PR as its tangent and QR would be proportional to that constant force and the square of the time, .
Conversely, if instead of arriving at R, the body was deflected to Q, then a constant force parallel to SP, with magnitude: would have caused it to reach Q instead of R.
However, since the direction of the radius from S to points on the arc PQ and also the magnitude of the force towards S will change along PQ, the above relation will not give the exact force at P. If Q is sufficiently close to P, the direction of force will be almost parallel to SP all along PQ and if the force changes little, PQ can be assumed to be approximated by a parabolic arc with the force given as above in terms of QR and .
The time, is proportional to the area of the sector SPQ. This is Kepler's Second Law. A proof is demonstrated in Proposition 1, Book 1, in the Principia. Since the arc PQ can be approximated by a straight line, the area of the sector SPQ and the area of the triangle SPQ can be taken as equal, so
, where k is constant.
Again, this is not exact for finite lengths PQ. The force law is obtained if the limit of the above expression exists as a function of SP, as PQ approaches zero.
In fact, in time , the body with no force would have reached a point, W, further from P than R. However, in the limit QW becomes parallel to SP. The point W is ignored in Newton's proof.
Also, Newton describes QR as the versed sine of the arc with P at its centre and length twice QP. Although this is not strictly the same as the QR that he has in the diagram (Fig.1), in the limit, they become equal.
This proposition is based on Galileo's analysis of a body following a parabolic trajectory under the action of a constant acceleration. In Proposition 10, he describes it as Galileo's Theorem, and mentions Galileo several other times in relation to it in the Principia. Combining it with Kepler's Second Law gives the simple and elegant method.
In the historically very important case where MN in Fig. 1 was part of an ellipse and S was one of its foci, Newton showed in Proposition 11 that the limit was constant at each point on the curve, so that the force on the body directed towards the fixed point S varied inversely as the square of the distance SP.
Besides the ellipse with the centre at the focus, Newton also applied Proposition 6 to the hyperbola (Proposition 12), the parabola (Proposition 13), the ellipse with the centre of force at the centre of the ellipse (Proposition 10), the equiangular spiral (Proposition 9), and the circle with the centre of force not coinciding with the centre, and even on the circumference (Proposition 7).
Number and Number Relations: Lesson 4
Fourth graders investigate the use of addition and subtraction facts in different situations. They examine the relationship between addition and multiplication and determine which operation is needed to solve various problematic situations.
Division Practice Sheets: Grade 4
Expand on young mathematicians' prior knowledge of multiplication with this series of worksheets introducing the concept of division. Covering a wide range of topics from basic fact families and single-digit division to long division...
3rd - 5th Math CCSS: Adaptable
Math Stars: A Problem-Solving Newsletter Grade 6
Think, question, brainstorm, and make your way through a newsletter full of puzzles and word problems. The resource includes 10 different newsletters, all with interesting problems, to give class members an out-of-the-box math experience.
4th - 7th Math CCSS: Adaptable
Collecting and Working with Data
Add to your collection of math resources with this extensive series of data analysis worksheets. Whether you're teaching how to use frequency tables and tally charts to collect and organize data, or introducing young mathematicians to pie...
3rd - 6th Math CCSS: Adaptable
Fraction Equivalence, Ordering, and Operations
Need a unit to teach fractions to fourth graders? Look no further than this well-developed and thorough set of lessons that takes teachers through all steps of planning, implementing, and assessing their lessons. Divided into eight...
3rd - 5th Math CCSS: Designed
Different Types of Angles
An angle is formed when two rays or lines intersect at the same point. Based on their measures, angles are classified as zero, acute, right, obtuse, straight, reflex, and full rotation angles. We use angles to construct buildings, roads, dams, cars, etc.
From a slice of pizza to woodworking sketches and fashion designs, you can find angles everywhere. Angles are also used to measure changes in the trajectory of ships, airplanes, planetary objects, etc. Let's look at the seven types of angles, their properties, and how to measure them.
What are the seven types of angles based on measurements?
The space created when two rays or lines meet at the same point is called an angle. Angles can be classified according to their measurements and how they are rotated. Based on the measurement, there are seven types of angles.
7 Types of angles
Any angle less than 90° is an acute angle. An acute angle is formed when two rays intersect at a vertex to form an angle of less than 90°. Some examples of acute angles are 20°, 30°, 45°, and 60°. Note the figure, which shows that the angle ∠ABC is acute.
If the angle between the two rays or lines is exactly 90°, it is said to be a right angle or 90°. From the figure, you can see that ∠ABC is a right angle or 90°.
Any angle greater than 90° and less than 180° is an obtuse angle. In the figure, the angle between the lines XY and YZ is obtuse. Some examples of obtuse angles are 110°, 130°, 145°, and 165°.
As the name suggests, a straight angle is a straight line, and the angle between the two rays is exactly 180°. In a straight angle, the two rays point in opposite directions. A straight angle can be made by joining two adjacent right angles; that is, two right angles make up a straight angle. In the figure, ∠ABC stands for 180°, a straight angle.
Angles greater than 180° and less than 360° are called reflex angles. In the figure, ∠ABC is a reflex angle. Examples of reflex angles: 210°, 250°, 310°.
Full Rotation Angle
A full rotation angle is formed when one of the arms rotates completely, sweeping out 360°. In the figure, the angle is called the full rotation angle.
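A small sketch of this measurement-based classification follows; it assumes the angle is given in degrees and lies between 0° and 360°.

```python
def classify_angle(degrees: float) -> str:
    """Classify an angle by its measure in degrees (0 to 360 assumed)."""
    if not 0 <= degrees <= 360:
        raise ValueError("expected a value between 0 and 360 degrees")
    if degrees == 0:
        return "zero angle"
    if degrees < 90:
        return "acute angle"
    if degrees == 90:
        return "right angle"
    if degrees < 180:
        return "obtuse angle"
    if degrees == 180:
        return "straight angle"
    if degrees < 360:
        return "reflex angle"
    return "full rotation angle"

for value in (0, 45, 90, 120, 180, 250, 360):
    print(value, "->", classify_angle(value))
```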
Different Types of angles based on Rotation:
The following angles are based on the direction of rotation of one arm of an angle. An angle is formed when two lines intersect and meet at the same point. Let's discuss the types of angles based on rotation.
A positive angle is an angle that rotates counterclockwise or anti-clockwise from the reference. In the figure below, turning side 1 (AB) counterclockwise through angle θ produces a positive angle.
A negative angle is one in which the angle rotates clockwise from the reference. In the figure below, turning the side clockwise through angle θ produces a negative angle.
Types Of Angles Pairs
An angle pair represents two angles. Let's read about the various pairs of angles in geometry.
For two angles to be adjacent angles, the following conditions must be met:
- Both angles share a common vertex.
- Both angles share a common arm (side).
- The angles must not overlap.
Here angle a and angle b are adjacent angles.
Two angles are said to be complementary when the sum of the two angles is 90°. The two angles can have any measures as long as they add up to 90°. For example, the two angles could be 30° and 60°. Here, one angle is the complement of the other.
When the sum of two angles is 180°, the two angles are said to be supplementary. The two angles add up to 180°. For example, 110° and 70° add up to 180°; therefore, these two angles are called supplementary angles. Here, one angle is the supplement of the other. For example, the supplement of 60° is (180° − 60°), which is 120°.
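Both relationships can be expressed directly; the two illustrative helper functions below simply subtract from 90° and 180°.

```python
def complement(angle_deg: float) -> float:
    """Return the complement of an angle (the two sum to 90 degrees)."""
    return 90.0 - angle_deg

def supplement(angle_deg: float) -> float:
    """Return the supplement of an angle (the two sum to 180 degrees)."""
    return 180.0 - angle_deg

print(complement(30.0))   # 60.0  (30° + 60° = 90°)
print(supplement(60.0))   # 120.0 (60° + 120° = 180°)
```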
Alternate interior angles
When a line or secant passes through two parallel lines, the angles formed on opposite sides of the secant, between the two parallel lines, are called alternate interior angles, and they are equal.
Alternate exterior angles
When a line or secant passes through two parallel lines, the angles formed on opposite sides of the secant, outside the two parallel lines, are called alternate exterior angles, and they are equal.
When a straight line or a secant passes through two parallel lines, the angles formed at the same relative position at each intersection, on the same side of the secant, are corresponding angles, and they are equal.
When two lines intersect each other, the angles facing each other are equal; they are called vertical, or vertically opposite, angles.
Aptitude tests constitute one of the most widely used types of psychological tests. The term “aptitude” is often used interchangeably with the term “ability.”
The concept of ability. An ability refers to a general trait of an individual that may facilitate the learning of a variety of specific skills. For example, the level of performance that a man attains in operating a turret lathe may depend on the level of his abilities of manual dexterity and motor coordination, but these abilities may be important to proficiency in other tasks as well. Thus, manual dexterity also is needed in assembling electrical components, and motor coordination is needed to fly an airplane. In our culture, verbal abilities are important in a very wide variety of tasks. The individual who has a great many highly developed abilities can become proficient at a great number of different tasks. The concept of “intelligence” really refers to a combination of certain abilities that contribute to achievement in a wide range of specific activities. The trend in aptitude testing is to provide measures of separate abilities. The identification of these separate abilities has been one of the main areas of psychological research, and it is this research that provides the basis of many aptitude tests.
Psychological tests are essentially standardized measures of a sample of an individual’s behavior. Any one test samples only a limited aspect of behavior. By analogy, the chemist, by testing only a few cubic centimeters of a liquid, can infer the characteristics of the compound; the quality control engineer does not test every finished product but only a sample of them. Similarly, the psychologist may diagnose an individual’s “vocabulary” from a measure based on a small number of words to which he responds, or he may infer the level of a person’s “multilimb coordination” by having him make certain movements. The most important feature of this sample of behavior is that it is taken under certain controlled conditions. Performance on just any sample of words, for example, is not diagnostic of “vocabulary.” For a behavior sample to qualify as a psychological test, its adequacy must be demonstrated quantitatively. (Some typical indexes for doing this will be described below.)
How abilities are identified. Some individuals who perform well on verbal tasks (for example, those tasks requiring a large vocabulary) may do poorly on tasks requiring spatial orientation (for example, flying an airplane). Or an individual who performs well on verbal items may do poorly on numerical items. Consequently, it is obvious that there are a number of different abilities that distinguish people. But how are the great variety of abilities identified? How does the psychologist know what abilities are to be usefully considered separate from one another? The basic research technique that has been used is called factor analysis. A large number of tests, selected with certain hypotheses in mind, are administered to a large number of experimental subjects. Correlation coefficients among all these test performances are then computed. From these correlations, inferences are made about the common abilities needed to perform the tests. The assumption is that tests that correlate with each other measure the same ability factor, and tests that are uncorrelated measure different factors. The problem of extracting and naming these factors is somewhat complex. Examples of separate abilities that have been identified are verbal comprehension, spatial orientation, perceptual speed, and manual dexterity. Of course, this basic research also allows assessment of the kinds of tests that provide the best measures of the different ability factors.
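As a rough illustration of the correlational logic described above (not of any specific published test battery), the sketch below generates made-up scores for six hypothetical tests, computes their correlation matrix, and inspects the leading components of that matrix; tests that load heavily on the same component are the ones a factor analyst would group under a common ability.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 500

# Made-up latent abilities and test scores: tests 0-2 draw mostly on a
# "verbal" ability, tests 3-5 mostly on a "spatial" ability.
verbal = rng.normal(size=n_people)
spatial = rng.normal(size=n_people)
noise = rng.normal(size=(n_people, 6)) * 0.5
scores = np.column_stack([
    verbal, verbal, verbal,      # verbal-loaded tests
    spatial, spatial, spatial,   # spatial-loaded tests
]) + noise

corr = np.corrcoef(scores, rowvar=False)   # correlations among the 6 tests
print("Correlation matrix:\n", np.round(corr, 2))

# Principal components of the correlation matrix: a simple stand-in for
# the factor-extraction step described in the text.
eigvals, eigvecs = np.linalg.eigh(corr)
print("Largest eigenvalues:", np.round(eigvals[::-1][:3], 2))
print("Loadings of the top component:", np.round(eigvecs[:, -1], 2))
```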
Aptitudes and abilities. Ability tests are usually given with the objective of making some prediction about a person’s future success in some occupational activity or group of activities. The term aptitude, used in place of the term ability, has more of a predictive connotation. We could, of course, use such tests solely to attain a picture of a person’s strong and weak ability traits, with no specific predictive objective. We could use such measures as variables in psychological research, for example, studies of psychological development or the relation of ability to learning. Or we may be interested in the discovery of the relation between the ability of spatial relations and the speed of learning a perceptual-motor skill. But most often these tests are used in personnel selection, vocational guidance, or for some other applied predictive purpose such as using a spatial relations test to select turret lathe operators.
Sometimes aptitude tests designed to predict success in some specific job or occupation, as would be true of a test of “clerical aptitude,” actually measure combinations of different abilities (e.g., perceptual speed, numerical facility) found to be important in clerical jobs.
Achievement tests. Aptitude tests are distinguished from achievement (or proficiency) tests, which are designed to measure degree of mastery of an area of knowledge, of a specific skill, or of a job. Thus, a final examination in a course is an achievement test used to assess student status in the course. If used to predict future performance in graduate work or in some other area, it would be called an aptitude test. The distinction between aptitude and achievement tests is often in terms of their use.
Ways of describing aptitude tests. Tests may be classified in terms of the mode in which they are presented, whether they are group or individual tests, whether they are speeded, and in terms of their content. Any complete description of an aptitude test should include reference to each of these characteristics.
Mode of presenting tests. Most tests are of the paper and pencil variety, in which the stimulus materials are presented on a printed page and the responses are made by marking a paper with a pencil. The administrative advantages of such a medium are obvious, in that many individuals can be tested at once, fewer examiners are needed, and scoring of the tests is relatively straightforward. Nonprinted tests, such as those involving apparatus, often present problems of maintenance and calibration. However, it may not be possible to assess the desired behavior by means of purely printed media. Tests of manual dexterity or multiple-limb coordination are examples of aptitude tests requiring apparatus, varying from a simple pegboard to mechanical-electronic devices. Tests for children and for illiterates frequently employ blocks and other objects, which are manipulated by the examinee.
Auditory and motion picture media have also been used in aptitude testing. Tests of musical aptitude, for example, are auditory, as are certain tests designed to select radiotelegraphers; the test material is presented by means of a phonograph or tape recorder. One motion picture test was designed to measure how well individuals could estimate the relative velocity of moving objects, a function that evidently could not be measured by a purely printed test. In both the auditory and the motion picture tests, however, the responses are nonetheless recorded by pencil on paper.
Group versus individual tests. Some tests can be administered to examinees in a group; others can be administered to only one person at a time. The individual test is naturally more expensive to use in a testing program. Tests for very young children or tests requiring oral responses must be individual tests. Such tests are also used when an individual’s performance must be timed accurately. Devices used to test motor abilities constitute additional examples of individual tests, although sometimes it is possible to give these in small groups.
Speeded versus nonspeeded tests. Tests differ in the emphasis placed on speed. In many functions, such as vocabulary, there is little interest in speed. Such tests are called power tests and have no time limits. For other functions, such as perceptual speed or finger dexterity, speed becomes an important factor in the measured behavior. Speeded tests may be administered by allowing all examinees a specific length of time to finish (time-limit tests), in which case the score is represented by the number of items correctly completed. Alternatively, a speeded test may require the examinee to finish a task as rapidly as possible (work-limit tests), and his score may then be expressed as the time taken to complete the test. For example, a finger dexterity test may be scored in terms of the number of seconds taken to complete a series of small screw-washer-nut assemblies.
What the tests measure. Most frequently, aptitude tests are classified in terms of what they attempt to measure. Thus, there are vocabulary tests, motor ability tests, etc. Figure 1 provides some examples of test items.
Tests containing items such as those illustrated are often grouped into standard “multiple aptitude test batteries,” which provide profiles of certain separate ability test scores. Examples are the Differential Aptitude Tests (DAT), published by the Psychological Corporation, the General Aptitude Test Battery (GATB) of the U.S. Employment Service, and the Aircrew Classification Battery of the U.S. Air Force.
Characteristics of useful tests. Now that we have looked briefly at the different forms of tests, let us examine some of the basic concepts of testing. How can the usefulness of a test be evaluated?
Test construction. The process of constructing aptitude tests involves a rather technical sequence combining the ingenuity of the psychologist, experimentation and data collection with suitable samples of individuals, the calculation of quantitative indexes for items and total test scores, and the application of appropriate statistical tests at various stages of test development. Some of the indexes applied in the construction phase are difficulty levels, the proportions of responses actually made to the various alternatives provided in multiple-choice items, and the correlation of item scores with total test scores or with an independent criterion. A well-developed aptitude test goes through several cycles of these evaluations before it is even tried out as a test. The more evidence the test manual provides of such rigorous procedure, the more confidence we can have in the test.
There are other problems that generally must be considered in evaluating test scores. Before a test is actually used, a number of conditions have to be met. There is a period of “testing the tests” to determine their applicability in particular situations. A test manual should be devised to provide information on this. Furthermore, there is the question of interpreting a test score.
Standardization. The concept of standardization refers to the establishment of uniform conditions under which the test is administered, ensuring that the particular ability of the examinee is the sole variable being measured. A great deal of care is taken to ensure proper standardization of testing conditions. Thus, the examiner’s manual for a particular test specifies the uniform directions to be read to everyone, the exact demonstration, the practice examples to be used, and so on. The examiner tries to keep motivation high and to minimize fatigue and distractions. If motivation is high for one group of job applicants and not for another, the test scores may reflect motivational differences in addition to the ability differences that the test is intended to measure.
Norms. A test score has no meaning by itself. The fact that Joe answered 35 words correctly on a vocabulary test or that he was able to place 40 pegs in a pegboard in two minutes gives very little information about Joe’s verbal ability or finger dexterity. These scores are known as raw scores. In order to interpret Joe’s raw score it is necessary to compare it with a distribution of scores made by a large number of other individuals, of known categories, who have taken the same test. Such distributions are called norms. There may be several sets of norms for a particular test, applicable to different groups of examinees. Thus, getting 75 per cent of the vocabulary items correct may turn out to be excellent when compared to norms based on high school students, but only average when compared to norms based on college graduates. If one is using a test to select engine mechanics, it is best to compare an applicant’s score with norms obtained from previous applicants for this job, as well as with norms of actual mechanics.
The mental age norm is one in which an individual’s score on an intelligence test is compared to the average score obtained by people of different ages. This, of course, is applicable mainly to children. For adults, the percentile norm is most frequently used. A large number of people (at least several hundred) are tested, the scores ranked, and the percentage of people falling below each score is determined. Let us suppose that an individual who gets a raw score of 35 on a test turns out to be at the 65th percentile. This tells us immediately that the person scored better than 65 per cent of the individuals in the group for which test norms were determined. A score at the 50th percentile is, by definition, the median of the distribution. The scores made by future applicants for a job may subsequently be evaluated by comparing them with the percentiles of the norm group.
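By way of illustration, a minimal Python sketch of the percentile computation just described; the norm-group scores and the raw score of 35 are hypothetical values used only for the example.

```python
def percentile_rank(raw_score, norm_scores):
    """Percentage of the norm group scoring below the given raw score."""
    below = sum(1 for s in norm_scores if s < raw_score)
    return 100.0 * below / len(norm_scores)

# Hypothetical norm group of 400 examinees (raw scores on the same test).
norm_group = [28, 31, 35, 22, 40, 33, 37, 29] * 50  # illustrative data only

print(percentile_rank(35, norm_group))  # percentage of the norm group below a raw score of 35
```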
Another type of norm is the standard score. Each individual’s score can be expressed as a discrepancy from the average score of the entire group. When we divide this deviation by the standard deviation (SD) of the scores of the entire group, we have a standard score, or a score expressed in SD units. Typically, a test manual will include these standard-score equivalents as well as percentile equivalents for each raw score.
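A similar sketch for the standard score just described, again with hypothetical data; the score is expressed as a deviation from the group mean in SD units.

```python
import statistics

def standard_score(raw_score, norm_scores):
    """Express a raw score as a deviation from the group mean in SD units."""
    mean = statistics.mean(norm_scores)
    sd = statistics.pstdev(norm_scores)  # SD of the entire norm group
    return (raw_score - mean) / sd

norm_group = [28, 31, 35, 22, 40, 33, 37, 29] * 50  # illustrative data only
print(round(standard_score(35, norm_group), 2))
```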
From this discussion, it is evident that a psychological test usually has no arbitrary pass-fail score.
Reliability. One of the most important characteristics of a test is its reliability. This refers to the degree to which the test measures something consistently. If a test yielded a score of 135 for an individual one day and 85 the next, we would term the test unreliable. Before psychological tests are used they are first evaluated for reliability. This is often done by the test-retest method, which involves giving the same test to the same individuals at two different times in an attempt to find out whether the test generally ranks individuals in about the same way each time. The statistical correlation technique is used, and the resulting correlation is called the reliability coefficient. Test designers try to achieve test reliabilities above .90, but often reliabilities of .80 or .70 are useful for predicting job success. Sometimes two equivalent forms of a test are developed; both are then given to the same individuals and the correlation determined. Sometimes a split-half method is used; scores on half the items are correlated with scores on the remaining half. Tests that are short often are unreliable, as are many tests that do not use objectively determined scores.
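Both the test-retest and split-half procedures described above reduce to correlating two sets of scores. A minimal sketch, assuming hypothetical score lists:

```python
import statistics

def pearson_r(x, y):
    """Correlation between two score lists (e.g., test and retest)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

test   = [35, 28, 42, 30, 38, 25, 40, 33]   # hypothetical first administration
retest = [34, 30, 41, 29, 39, 26, 38, 35]   # hypothetical second administration
print(round(pearson_r(test, retest), 2))     # reliability coefficient

# For a split-half estimate, correlate scores on the two halves and step the
# value up with the Spearman-Brown formula: r_full = 2 * r_half / (1 + r_half).
```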
Validity. An essential characteristic of aptitude tests is their validity. Whereas reliability refers to consistency of measurement, validity generally means the degree to which the test measures what it was designed to measure. A test may be highly reliable but still not valid. A thermometer, for example, may give consistent readings but it is certainly not a valid instrument for measuring specific gravity. Similarly, a test designed to select supervisors may be found to be highly reliable; but it will not be a valid test if scores made by new supervisors do not correlate with their later proficiency on the job.
When used for personnel selection purposes, the validity of aptitude tests is evaluated by finding the degree to which they correlate with some measure of performance on the job. The question to be answered is, Does the test given to a job applicant predict some aspect of his later job performance? The correlation obtained in such a determination is known as the validity coefficient. This is found by administering the test to unselected job applicants and later obtaining some independent measure of their performance on the job. If the validity coefficient is a substantial one, the test may be used to predict the job success of new applicants, just as it has demonstrated it can do with the original group. If the validity coefficient is low, the test is discarded as a selection instrument for this job, since it has failed to make the desired prediction of job performance.
Validity coefficients need not be very high in absolute value to make useful predictions in matching men to job requirements. A test was given to 1,000 applicants for pilot training in the Air Force. These applicants were allowed to go through training; six months later their proficiency was evaluated. It was found that scores on this ten-minute test correlated .45 with the performance of these individuals as pilots six months later. Very few of those scoring high on the test subsequently failed training, while over half of those scoring low on the test eventually failed.
Why are some tests valid and others not? The reason must be that valid tests are those that measure the kinds of abilities and skills actually needed on the job. It should be noted that tests often do not directly resemble tasks of the job, even when they are highly valid. For example, the Rotary Pursuit Test was found to have considerable validity in predicting success in pilot and bombardier training for the Army Air Force during World War II. This test requires the examinee to keep a metal stylus in contact with a target spot set toward the edge of a rotating disc. Often the examinees may have thought, “Where does the pilot (or bombardier) do anything like this?” But the reason this test is valid is not because of its resemblance to any task of these jobs, but because it samples control precision ability, which facilitates the learning of the jobs. (This ability factor was identified by factor analysis research.) Sometimes, in contrast, tests that appear superficially to resemble actual tasks of the job turn out to be of low validity because they fail to sample relevant abilities.
Predictive validity of the kind described above is not the only kind of validity. We may also be interested in the extent to which the test actually measures the trait we assume it measures, a somewhat different concern from the criterion it is designed to predict. This is called construct validity. Thus, a test assumed to be a spatial test may turn out to tap mainly the ability to understand the verbal instructions. Construct validity can be determined only experimentally, through correlation with other measures.
The selection ratio. Another important factor affecting the success of aptitude tests in personnel selection procedures is the selection ratio. This is the ratio of those selected to those available for placement. If there are only a few openings and many applicants, the selection ratio is low; and this is the condition under which a selection program works best. For example, if only a few pilots are needed relative to the number of applicants available, one can establish a high qualifying score on the aptitude test, and there will be very few subsequent failures among those accepted. On the other hand, if practically all applicants have to be accepted to fill the vacancies, the test is not useful, regardless of its validity, since this amounts to virtual abandonment of the selection principle. If the selection ratio is kept low, validity coefficients even as low as .20 can still identify useful tests. If the selection ratio is high, higher validity is necessary.
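To illustrate why even a modest validity coefficient is useful when the selection ratio is low, one can simulate applicants whose test scores correlate with later performance and compare failure rates at different cutoffs. The validity of .45 echoes the pilot-training example above; the pass-fail criterion and all other numbers here are assumptions chosen only for illustration.

```python
import random

random.seed(1)
r = 0.45  # assumed validity coefficient, as in the pilot-training example

# Simulate correlated (test score, job performance) pairs as standard normals.
applicants = []
for _ in range(10000):
    test = random.gauss(0, 1)
    perf = r * test + (1 - r ** 2) ** 0.5 * random.gauss(0, 1)
    applicants.append((test, perf))

def failure_rate(selection_ratio):
    """Failure rate (performance below 0) among the top fraction selected on the test."""
    ranked = sorted(applicants, key=lambda a: a[0], reverse=True)
    selected = ranked[: int(selection_ratio * len(ranked))]
    failures = sum(1 for _, perf in selected if perf < 0)  # arbitrary pass/fail line
    return failures / len(selected)

for ratio in (0.1, 0.5, 0.9):
    print(ratio, round(failure_rate(ratio), 2))  # failures drop as the selection ratio falls
```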
Combining tests into a battery. Aptitude tests given in combination as multiple aptitude batteries would seem most appropriate where decisions have to be made regarding assignment of applicants to one out of several possible jobs. This kind of classification requires maximum utilization of an available manpower pool, where the same battery of tests, weighted in different combinations, provides predictive indexes for each applicant for each of several jobs. Since the validity of these tests has been separately determined for each job, it may be found, for example, that tests A, D, and E predict success in job Y, while tests B, C, and D predict success in job X. By the appropriate combinations of test scores, it is then possible to find each applicant’s aptitude index for job X as well as for job Y. The most efficient batteries are those in which the tests have a low correlation with each other (hence, there is less duplication of abilities measured) and where the individual tests have high validity for some jobs but not for others. Thus, if a test score predicts success on job Y but not job X, a high score on this test would point to an assignment on job Y. A test that is valid for all jobs is not very useful in helping us decide the particular job for which an individual is best suited.
There are two main methods of combining scores from a test battery to make predictions of later job performance. One method is called the successive hurdle or multiple-cutoff method. With this approach, applicants are accepted or rejected on the basis of one test score at a time. In order to be selected, an applicant must score above a critical score on each test; he is disqualified by a low score on any one test.
The second approach uses multiple correlation. From the validity of the tests and their correlations with each other, a determination can be made of a proper weight for each test score. Using these weights as multipliers for test scores, a value of a total aptitude index can be computed for each individual. This method, then, produces a combined weighted score, which reflects the individual’s performance on all the tests in a battery. The particular method chosen for combining scores depends on a number of factors in the selection situation, but both methods, which are based on aptitude information from a number of different tests, accomplish the purpose of making predictions of job success.
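A rough sketch of the two combining methods follows; the test names, cutoffs, and weights are hypothetical, and in practice the regression weights would come from a multiple-regression analysis of the validity data.

```python
# Multiple-cutoff (successive hurdles): reject on any single low score.
cutoffs = {"verbal": 40, "spatial": 35, "perceptual_speed": 30}   # hypothetical critical scores

def passes_hurdles(scores):
    return all(scores[test] >= cut for test, cut in cutoffs.items())

# Multiple-correlation method: weighted composite of all scores.
weights = {"verbal": 0.40, "spatial": 0.35, "perceptual_speed": 0.15}  # hypothetical weights

def aptitude_index(scores, constant=5.0):
    return constant + sum(weights[test] * scores[test] for test in weights)

applicant = {"verbal": 52, "spatial": 38, "perceptual_speed": 29}
print(passes_hurdles(applicant))          # False: fails the perceptual-speed hurdle
print(round(aptitude_index(applicant), 1))  # composite index for the regression method
```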
Edwin A. Fleishman
[Other relevant material may be found in Achievement testing; Factor analysis; Intelligence and intelligence testing; Multivariate analysis; Psychometrics; Vocational interest testing.]
Adkins, Dorothy C. 1947 Construction and Analysis of Achievement Tests. Washington: Government Printing Office.
Anastasi, Anne (1954) 1961 Psychological Testing. 2d ed. New York: Macmillan.
Buros, Oscar K. (editor) 1959 The Fifth Mental Measurements Yearbook. Highland Park, N.J.: Gryphon. → See especially pages 667–721 on multiaptitude batteries.
Cronbach, Lee J. (1949) 1960 Essentials of Psychological Testing. 2d ed. New York: Harper.
Cronbach, Lee J.; and Gleser, Goldine C. (1957) 1965 Psychological Tests and Personnel Decisions. Urbana: Univ. of Illinois Press.
Cureton, Edward E.; and Cureton, Louise W. 1955 The Multi-aptitude Test. New York: Psychological Corp.
Dvorak, Beatrice J. 1956 The General Aptitude Test Battery. Personnel and Guidance Journal 35:145–154.
Fleishman, Edwin A. 1956 Psychomotor Selection Tests: Research and Application in the United States Air Force. Personnel Psychology 9:449–467.
Fleishman, Edwin A. (editor) 1961 Studies in Personnel and Industrial Psychology. Homewood, Ill.: Dorsey.
Fleishman, Edwin A. 1964 The Structure and Measurement of Physical Fitness. Englewood Cliffs, N.J.: Prentice-Hall.
French, John W. 1951 The Description of Aptitude and Achievement Tests in Terms of Rotated Factors. Psychometric Monographs No. 5.
Gagné, Robert M.; and Fleishman, Edwin A. 1959 Psychology and Human Performance. New York: Holt.
Ghiselli, Edwin E. 1955 The Measurement of Occupational Aptitude. University of California Publications in Psychology 8:101–216.
Ghiselli, Edwin E.; and Brown, Clarence W. (1948) 1955 Personnel and Industrial Psychology. 2d ed. New York: McGraw-Hill.
Guilford, Joy P. (editor) 1947 Printed Classification Tests. U.S. Army Air Force, Aviation Psychology Program, Research Report No. 5. Washington: Government Printing Office.
Guilford, J. P. 1959 Three Faces of Intellect. American Psychologist 14:469–479.
Gulliksen, Harold 1950 Theory of Mental Tests. New York: Wiley.
Loevinger, Jane 1957 Objective Tests as Instruments of Psychological Theory. Psychological Reports 3:635–694.
Melton, Arthur W. (editor) 1947 Apparatus Tests. U.S. Army Air Force, Aviation Psychology Program, Research Report No. 4. Washington: Government Printing Office.
Super, Donald E.; and Crites, J. O. (1949) 1962 Appraising Vocational Fitness by Means of Psychological Tests. Rev. ed. New York: Harper.
U.S. Employment Service 1946–1958 General Aptitude Test Battery. Washington: Government Printing Office.
Vernon, Philip E. (1950) 1961 The Structure of Human Abilities. 2d ed. London: Methuen. |
This computer-generated image shows Chandrayaan-1 at the time it was detected. The purple circle represents the Goldstone radar beam and the white box shows the strength of the echo.
Using optical telescopes to find small spacecraft or space debris around the Moon can prove quite difficult due to the glare of the Moon, but a new technology application may have made the process much easier.
Scientists at NASA’s Jet Propulsion Lab developed a new ground-based interplanetary radar and located two spacecraft orbiting the Moon: NASA’s Lunar Reconnaissance Orbiter, which is still active, and the dormant Chandrayaan-1 spacecraft from the Indian Space Research Organization.
“Finding LRO was relatively easy, as we were working with the mission’s navigators and had precise orbit data where it was located,” Marina Brozovic, a radar scientist at JPL and principal investigator for the test project, said in a press release. “Finding India’s Chandrayaan-1 required a bit more detective work because the last contact with the spacecraft was in August of 2009.”
Chandrayaan-1 had been lost for quite some time thanks to the Moon’s mascons affecting the spacecraft’s orbit. Mascons, which are concentrated areas with a stronger gravitational pull than the rest of a moon or planet’s surface, can not only change orbits, but can also cause a craft to crash. Though calculations at JPL showed Chandrayaan-1 was still on the move, they couldn’t find the craft and decided to consider it lost.
In addition to the potential orbit change, Chandrayaan-1 was a bit more difficult to find due to its small size of about five feet (1.5 meters). The team was not positive such a small object orbiting the Moon would have been successfully detected, but finding Chandrayaan-1 proved just how powerful this new radar can really be.
Knowing that Chandrayaan-1 was in a polar orbit and completed an orbit every two hours and eight minutes, the team pointed NASA’s 230-foot (70-meter) antenna at the Goldstone Deep Space Communications Complex and the 330-foot (100-meter) Green Bank Telescope in West Virginia at a spot above the Moon’s north pole. Sure enough, the radar signature showed a tiny spacecraft crossing the beam about every two hours. They continued studying the radar echoes for another three months to confirm the new orbital predictions.
Besides the change in orbit due to the mascons, the team says Chandrayaan-1 has otherwise remained unchanged.
“It turns out that we needed to shift the location of Chandrayaan-1 by about 180 degrees, or half a cycle from the old orbital estimates from 2009,” said Ryan Park, the manager of JPL’s Solar System Dynamics group, who delivered the new orbit back to the radar team. “But otherwise, Chandrayaan-1’s orbit still had the shape and alignment that we expected.”
The detection of such a small craft from so far away proves just how strong large radars can be, especially when working together. |
Lesson Sequence – Introductory (This will be the students’ first experience with multiplication this school year.)
-Pre: At the beginning of the year, students were given a second grade skills test as well as the ThinkLink test of second grade skills. I have used both of these test data to obtain information about the level of student understanding of multiplication.
-Post: Students will be assessed informally during the lesson. Later in the week they will receive a formal assessment to determine their level of skill in multiplication concepts.
TLW understand the concept that multiplication is repeated addition.
TLW use concrete patterns to predict a product.
SPI: 3.1.spi.16 Use the multiplication facts 0, 1, 2, 5, and 10 efficiently & accurately.
Procedure - Anticipatory Set
Two volunteers will be asked to come to the front and hold out their hands, palms up. Next, I will count out 2 counters into each student’s hand. Then, I will ask the students how they can find the total number of counters in the four hands. This discussion will lead into the actual lesson.
Procedure – Multiplication Introduction
For the discussion, some ideas they may come up with are: counting the counters by ones (1+1+1+1+1+1+1+1=8), counting the counters by twos (2 + 2 + 2 + 2 = 8), etc. Counting by 2’s is the goal for them to recognize.
Next, I will show students how we can write the repeated addition sentence as a multiplication sentence: 2 + 2 + 2 + 2 = 8 is the same as 4 x 2 = 8 (read as 4 times 2 equals 8, or 4 groups of 2 equals 8 counters total). We will then discuss which way is easier to write, repeated addition or multiplication, so that students can better understand why we use multiplication.
Finally, I will ask students how they could record this example on graph paper. I will use a transparency of graph paper to test out their ideas.
Procedure – Supervised Practice
First, I will pass out packs of 1-8 index cards, counters, and graph paper to each group of students. Next, we will do an example together. A student will pick a number between 1 and 10. For example, if the number is 3 then each group will place 3 counters on each of their cards, write a repeated addition problem, a multiplication problem, and show it on the graph paper. So, depending on how many index cards they have, each group will arrive at a different multiplication problem. I will write their multiplication sentences on the board so that we can keep a chart of what we have found.
We will share after each example and observe what patterns we notice. We will do this example several times using different numbers with students gaining independence as we go along. If there is time, groups may swap the number of index cards they have to try out another group of numbers.
To wrap up the lesson we will discuss the patterns that students found. Also, I will pose the question of what would the product be if we had zero index cards because 0’s will be the first group of multiplication facts that we will learn.
Meeting Individual Students’ Needs
* I will observe students’ level of understanding through questioning & monitoring of progress during the activity:
-Reteach/Remediation: I will use guided questions to help students discover how repeated addition is the same as multiplication. “How many groups do you have? How many are in each group? What is the total? What addition and multiplication sentences could we use to show this?” Also, I will use consistent language: “3 groups of 2 added together equals 6, just as 3 groups times 2 equals 6.”
-Enrichment: I will give higher-level groups of students a larger number of index cards or an amount that they cannot easily count by (4 & 6) compared to groups that may have more difficulty who will receive the number of cards that are easier to count by (1, 2, 3, & 5).
-Independent Practice: Students will independently complete examples on their own using the index cards, counters, and graph paper. Sheets will be collected at the end of the lesson to monitor their level of understanding.
Materials:
- Overhead & graph paper transparency: to show students how to draw examples
- Index cards: to show groups
- Counters: to show individual parts of the group
- Graph paper: to show pictures & number sentences
Posted by jackie on Wednesday, April 4, 2007 at 11:36am.
Business and finance. The cost of producing a number of items x is given by
C = mx + b, in which b is the fixed cost and m is the variable cost (the cost of producing one more item).
(a) If the fixed cost is $40 and the variable cost is $10, write the cost equation.
would the equation be C = 10x + 40?
(b) Graph the cost equation.
I am not sure what I am supposed to graph here
(c) The revenue generated from the sale of x items is given by R=50x. Graph the
revenue equation on the same set of axes as the cost equation.
I am also stuck here. I feel like I have missed something while reading this question over and over. Please help.
(d) How many items must be produced for the revenue to equal the cost (the break-even point)?
b) find any two points with your equation
e.g. let x = 0, then C=40
let x=10, then C = 140
Now plot (0,40) and (10,140) by using a suitable scale for your x and C axes.
c) since both C and R are in dollars you can use the same vertical axis
do the same as in b)
d) you want R = C, but you have expressions for both
so 50x = 10x + 40
solve, you should be able to do this in your head by just looking at that equation.
So on d I need to solve that equation, and then I need to put the result in my graph?
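For part d, solving 50x = 10x + 40 gives 40x = 40, so x = 1, and that is the point where the two lines cross on the graph. In case a coded version helps, here is a small sketch using only the numbers from this problem (C = 10x + 40, R = 50x):

```python
# Cost and revenue for this problem: C = 10x + 40, R = 50x.
def cost(x):
    return 10 * x + 40

def revenue(x):
    return 50 * x

# Break-even: 50x = 10x + 40  ->  40x = 40  ->  x = 1 item.
break_even = 40 / (50 - 10)
print(break_even)  # 1.0

# Two points for each line, enough to graph both on the same axes.
for x in (0, 10):
    print(x, cost(x), revenue(x))  # (0, 40, 0) and (10, 140, 500)
```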
This is all of the information you need to know about mass flow meters.
You will learn:
- What is a Mass Flow Meter?
- How Does a Mass Flow Meter Work?
- Types of Mass Flow Meters
- Mass Flow Meter Types of Readings
- And much more…
Chapter One – What is a Mass Flow Meter?
A mass flow meter is a way of measuring the volume or mass of a gas or liquid passing through a system at a specific point in the flow system. They are used to measure linear, nonlinear, mass, and volumetric flow rates. The names given to mass flow meters depend on the industry that uses them and include flow gauge, flow indicator, liquid meter, or flow rate sensor. Mass flow meters have replaced other forms of flow rate measurement because of their accuracy, precision, and resolution of flow measurement.
The two main categories of mass flow meters are volumetric and mass, which differ in how they measure flow and show their readings. Volumetric flow meters measure the volume of a liquid, while mass flow meters measure its mass.
Mass flow meters are further categorized as Coriolis, or inertial, and thermal. Coriolis flow meters use the Coriolis effect, which states that a mass moving in a rotating system experiences a force perpendicular to both its direction of motion and the axis of rotation. A Coriolis meter measures the inertia caused by a fluid or gas flowing through oscillating tubes and uses sensors to record the amplitude, frequency, and phase shift of the oscillations to determine mass flow.
Thermal mass flow meters use the principles of heat transfer, with a heating element and temperature sensors. As fluid passes the heated element, it carries heat away, and the resulting temperature change measured by the sensors can be used to determine the flow rate.
The image above is a generalized view of a mass flow meter inserted into a pipe to measure the flow rate.
Chapter Two – How Does a Mass Flow Meter Work?
Though all mass flow meters measure flow rates, each type takes its measurements in a different way. There is no single standardized method for checking flow rates; the method varies according to the material being measured, the conditions, and the required accuracy.
Flow meters are a necessity in production facilities to give precise and accurate readings regarding fluid flow to ensure maximum operational efficiency. Flow measurements provide indicators of the overall performance of the system.
The main function of mass flow meters is to account for variations in the flow caused by viscosity and density, which affect the accuracy of flow measurements. The effect of temperature on the density of a fluid varies widely from fluid to fluid. Mass flow meters are used for fuel monitoring and fuel balancing, which require an accuracy within ± 1%.
Below is a brief description of how a few flow meters work.
The Coriolis principle describes the apparent force that acts on a mass moving within a rotating system. This force, called the Coriolis force, deflects the mass from its straight path; it arises from the body’s motion relative to the rotating frame rather than from a direct push on the body. This is the principle used in Coriolis flow meters.
The video below, from YouTube, offers a brief explanation of the Coriolis principle.
Explanation of the Coriolis Principle
Indirect Mass Flow Measurement
Magnetic, ultrasonic, differential pressure, positive displacement, variable area, non-compensated vortex, and turbine meters are volumetric. For increased accuracy, these meters can be combined with pressure and temperature sensors and a flow computer to produce mass flow readings, which is an indirect method of measuring mass flow. Indirect mass flow measurements are used when direct flow measurements are not sufficient.
Direct Mass Flow Measurement
Direct mass flow measurement eliminates inaccuracies caused by the physical properties of fluids. Mass measurement is not sensitive to changes in pressure, temperature, viscosity, and density. Coriolis meters are direct flow meters using the Coriolis effect. The flow direction is straight through the meter, allowing for higher flow rates and less pressure loss.
Pressure Differential Methods
Pressure differential meters have four matched orifice plates in a Wheatstone bridge arrangement. A pump transfers fluid at a known rate from one branch of the bridge into another to create a reference flow. The differential pressure measured across the bridge is proportional to the mass flow rate.
Thermal Mass Flow Meter
Thermal mass flow meters have two temperature sensors, which measure heat transfer as a fluid passes over a heated surface. The molecules of the flowing material carry heat away from that surface; the more molecules in contact with the heated surface, the greater the transfer.
The temperature sensor in a thermal mass flow meter is a reference and provides a measurement of temperature. The flow sensor is heated slightly above the temperature sensor. As material flows past the heated flow sensor, heat transfer occurs. The meter measures the amount of power required to maintain the temperature differential between the flow sensor and temperature sensor to supply the flow rate reading.
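As a rough illustration of the heat-balance idea behind this reading (real thermal meters rely on empirical calibration curves rather than this idealized relation), the mass flow can be estimated from the heating power needed to hold the temperature differential:

```python
def mass_flow_from_power(power_w, cp_j_per_kg_k, delta_t_k):
    """Idealized heat balance: power = mdot * cp * deltaT, solved for mdot (kg/s)."""
    return power_w / (cp_j_per_kg_k * delta_t_k)

# Hypothetical numbers: 5 W of heating power, air (cp ~ 1005 J/kg.K), 10 K differential.
print(mass_flow_from_power(5.0, 1005.0, 10.0))  # roughly 0.0005 kg/s
```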
The diagram below shows the flow sensor on the top and the temperature sensor on the bottom.
Chapter Three – Types of Mass Flow Meters
Flow can be either open channel or closed conduit, where an open channel is open to the atmosphere and closed conduit is enclosed. With open channel flow, the force of gravity causes the flow. Closed conduit flow is caused by pressure differences in the conduit.
The list of the types and kinds of mass flow meters is very long and involved and changes with their industrial use. This discussion will examine Coriolis, ultrasonic, thermal, turbine, differential, positive displacement, vortex, and gyroscopic.
An obstruction in the fluid flow creates vortices downstream. Above a critical flow speed, vortex shedding occurs, and alternating low-pressure zones are generated behind the obstruction.
These alternating low-pressure zones flex a sensing element behind the obstruction from side to side, and sensors gauge the vortices to measure the strength of the flow.
With Coriolis mass flow meters, the fluid runs through U-shaped tubes vibrating in an angular harmonic oscillation. The flow deforms the tubes and adds a vibration component to the oscillation, causing a measurable phase shift between the tubes. Coriolis flow meters are very accurate, better than ± 0.1%, with a turndown ratio of more than 100:1, and can also be used to measure a fluid's density.
The frequency of a reflected signal is modified by the velocity and direction of the fluid flow. If the fluid is moving towards the transducer, the frequency of the signal will increase. As it moves away, the frequency of the returning signal decreases. The frequency difference is equal to the reflected frequency minus the originating frequency and can be used to calculate the speed of fluid flow.
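A minimal sketch of how the frequency difference translates into a flow velocity, using the commonly quoted Doppler relation delta_f = 2 * f0 * v * cos(theta) / c; the transducer frequency, beam angle, and frequency shift below are hypothetical values for illustration:

```python
import math

def flow_velocity(freq_shift_hz, transmit_freq_hz, sound_speed_m_s, beam_angle_deg):
    """Doppler relation: delta_f = 2 * f0 * v * cos(theta) / c, solved for v (m/s)."""
    return (freq_shift_hz * sound_speed_m_s) / (
        2 * transmit_freq_hz * math.cos(math.radians(beam_angle_deg))
    )

# Hypothetical: 1 MHz transducer, 1200 Hz shift, water (~1480 m/s), 45 degree beam.
print(round(flow_velocity(1200, 1e6, 1480, 45), 3))  # flow velocity in m/s
```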
Thermal meters have two sensors in the fluid flow path, one of which is heated. The flowing stream carries heat away from the heated sensor at a rate proportional to the mass flow rate, so the temperature difference between the sensors indicates the mass flow rate. The accuracy of a thermal mass flow meter depends on the reliability of its calibration and on variations in temperature, pressure, heat capacity, and the viscosity of the fluid.
There are different designs of turbine flow meters. In all versions, fluid moving through a pipe turns the vanes of a turbine, and the rate of spin indicates the flow rate, with an accuracy better than ± 0.1%.
Flow is calculated by measuring the pressure drop caused by an obstruction in the flow. The process is based on the Bernoulli equation, in which the pressure drop, and hence the measured signal, is a function of the square of the flow speed.
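A sketch of this square-root relationship in the common orifice-plate form Q = Cd * A * sqrt(2 * dP / rho); the discharge coefficient and the other values are assumptions chosen only for illustration:

```python
import math

def volumetric_flow(delta_p_pa, density_kg_m3, orifice_area_m2, discharge_coeff=0.6):
    """Orifice-plate form of Bernoulli: Q = Cd * A * sqrt(2 * dP / rho), in m^3/s."""
    return discharge_coeff * orifice_area_m2 * math.sqrt(2 * delta_p_pa / density_kg_m3)

# Hypothetical: 5 kPa drop across a 0.002 m^2 orifice in water (1000 kg/m^3).
print(round(volumetric_flow(5000, 1000, 0.002), 4))  # m^3/s
```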
Positive displacement flow meters measure flow rate through the continuous filling and emptying of a chamber of known volume. They are driven by the flowing fluid, are the most accurate flowmeters available with measurement values within 0.1%, and directly measure volumetric flow rate. No power is required to run a positive displacement meter. They can handle conditions of high pressures, entrained gases, and suspended solids.
Positive displacement meters are used for nonabrasive fluids like heating oil, lubricants, additives for polymers, and vegetable oil.
The YouTube video below offers a brief explanation of positive displacement flow meters.
Positive Displacement Flow Meters
Gyroscopic Mass Flow Meter
A gyroscopic flow meter has a tube in a circular or square shape with oscillating vibration at a constant angular velocity on the A axis. As the fluid passes through the loop, precession occurs on the B axis, which is measured by the deviation of the sensor element. The torque on the rotating pipe is measured to determine the mass flow.
Chapter Four – Mass Flow Meter Types of Readings
Mass flow measurement is either mass or volumetric, where mass flow measures the number of molecules in a gas, while volumetric measures the space between molecules. Measurements are influenced by pressure and temperature.
Volumetric flow rate measures the three dimensional space a gas occupies as it flows through the instrument under measured pressure and temperature, which is the actual flow rate.
Mass flow meters measure the number of molecules that flow through the instrument as expressed as a volumetric flow rate, which is the space molecules occupy when measured under standard temperature and pressure.
Mass flow meters provide data using a variety of measurements and depend on the force produced by the flowing stream as it strikes an obstruction in the stream, which can also provide a velocity measurement.
Units of Measurement
Gas and liquid flow is measured in units such as liters per second or kilograms per second. In the case of liquids, density is largely unaffected by the surrounding conditions; this is not the case with gases, whose density is influenced by pressure and temperature.
When liquids or gases are pumped for energy use, the rate of flow is measured in gigajoules per hour or BTUs per day. A flow computer uses the mass and volumetric flow rate to determine the energy flow rate.
Gases are difficult to measure since their volume changes when heated, cooled, or placed under pressure. When reading the gas flow rate on a mass flow meter, it may be expressed as actual or standard as acm/h (actual cubic meters per hour), sm3/sec (standard cubic meters per second), kscm/h (thousand standard cubic meters per hour), LFM (linear feet per minute), or MMSCFD (million standard cubic feet per day).
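A sketch of the actual-to-standard conversion implied here, using the ideal gas law; note that the reference conditions vary by convention, and 0 °C and 101.325 kPa are assumed below purely for illustration:

```python
def actual_to_standard(q_actual, p_actual_kpa, t_actual_k,
                       p_std_kpa=101.325, t_std_k=273.15):
    """Ideal-gas conversion of actual to standard volumetric flow (same units as q_actual)."""
    return q_actual * (p_actual_kpa / p_std_kpa) * (t_std_k / t_actual_k)

# Hypothetical: 10 acm/h of gas at 250 kPa (absolute) and 40 C (313.15 K).
print(round(actual_to_standard(10.0, 250.0, 313.15), 2))  # standard m^3/h
```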
The best meters for measuring gas flow rate are thermal meters, Coriolis meters, or mass flow controllers.
The units used to measure liquids depends on the application and industry but can be in gallons per minute, liters per second, bushels per minute, or cubic meters per second.
The venturi effect is the reduction in fluid pressure that occurs when a fluid flows through a constricted section of pipe. The velocity of the fluid increases while its pressure decreases; the increase in velocity is balanced by the drop in pressure.
A venturi meter measures the velocity of a fluid in a pipe using Bernoulli's equation, which states that the velocity of a liquid increases as its pressure decreases. The flow rate, in gallons per minute, liters per second, or cubic meters per second, is then obtained from the flow rate formula Q (liquid flow rate) = A (pipe area in square meters) multiplied by v (velocity of the liquid in meters per second).
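A direct transcription of the quoted formula Q = A x v; the pipe diameter and velocity are hypothetical:

```python
import math

def flow_rate(pipe_diameter_m, velocity_m_s):
    """Q = A * v, with A the pipe cross-section in square meters."""
    area = math.pi * (pipe_diameter_m / 2) ** 2
    return area * velocity_m_s  # m^3/s

# Hypothetical: 0.1 m diameter pipe, liquid moving at 2 m/s.
q = flow_rate(0.1, 2.0)
print(round(q, 4), "m^3/s", "=", round(q * 1000 * 60, 1), "L/min")
```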
A flow meter's performance is measured by its amount of error and how precise its measurements are. The accuracy of a flow meter is expressed in percentages of:
- Flow Rate - %R
- Full Scale - %FS
- Calibrated Span - %CS
- Upper Range Limit - %URL
When discussing flow rate accuracy, calculations should be expressed in percentages of the actual rate, which can be minimum, normal, or maximum. These determinations can help in the selection of the proper mass flow meter for an operation.
Chapter Five – Flow Meter Accuracy Concerns
Flow of liquids and gases requires constant and vigilant monitoring with precise and accurate measurements and readings. Errors in readings, calculations, and adjustments cause a decrease in efficiency and potential damage to equipment. Understanding the causes of the problems with meter readings can prevent potential repairs and stoppage of production. Below are some examples of conditions that can cause difficulties with mass flow meter readings or damage to the meter.
Slurry contains minute particles of less than 60 to 100 microns and can be settling or non-settling. The particles in slurry can be abrasive and wear down a flow meter or coagulate and clog the line.
In open systems exposed to the air, impurities and air can blend with a fluid to form bubbles. In vortex flow meters, air bubbles prevent the creation of vortices; in ultrasonic flow meters, they interfere with the ultrasonic waves, resulting in malfunctions and inaccurate readings.
Deviations in the Flow:
When a fluid flows through a straight pipe, the flow velocity is uniform and stable. Bends or angles in a pipe cause the flow velocity to change and become irregular, drifting away from the center of the pipe or swirling. The amount of measurement error depends on the amount of irregularity.
Pulsations are caused by the acceleration and deceleration of the fluid flow, which may exceed the range of the mass flow meter. The meter reading will be smaller than the actual flow rate. Reciprocating pumps are known to cause this problem. Pulsations can be reduced by a damper, such as an accumulator. Increasing the flow meter’s time of response is another measure.
Pipes can be made to vibrate in many ways, including by the operation of machinery near the pipe or the opening and closing of valves. In some instances, the introduction of a fluid into a pipe can itself cause vibration. Coriolis and vortex meters will not provide proper measurements in those conditions. This is not true of ultrasonic flow meters, which are not influenced by vibrations.
Scaling occurs when minerals dissolved in groundwater crystallize and become attached to the walls of pipes. As scaling builds up, the flow path narrows, obstructing liquid flow. Scaling can also attach to the flow meter itself. Flow meters with paddle wheels or floating elements will have errors in their readings caused by scaling.
Slime consists of living organisms, such as algae, bacteria, and other microorganisms, and can be sticky or muddy. Much like scaling, rust, sludge, and slurry, slime can impair a mass flow meter by clogging it or obstructing the flow of fluids. Slime also has electrical conductivity, which may cause inaccurate readings.
- Mass flow meters measure the volume or mass of a gas or liquid passing through a system at a fixed point.
- Mass flow meters measure linear, nonlinear, mass, and volumetric flow rates and have different names depending on the industry and their use.
- Liquid flow can be either open channel or closed conduit, where open channel is open to the atmosphere and closed conduit is enclosed.
- Flow of liquids and gases requires constant and vigilant monitoring with precise and accurate measurements and readings.
- Mass flow measurement is either mass or volumetric, where mass flow measures the number of molecules in a gas, while volumetric measures the space between molecules. |
On March 11, 2011, a large earthquake occurred, causing a tsunami which struck the Pacific coast of northeast Japan. We investigated the ecological and genetic effects of the large tsunami on the threespine stickleback (genus Gasterosteus) populations in Otsuchi Town, which was one of the most severely damaged areas after the tsunami. Our environmental surveys showed that spring water may have contributed to the habitat recovery. Morphological analysis of the stickleback before and after the tsunami showed morphological shifts in the gill raker number, which is a foraging trait. Genetic analyses revealed that the allelic richness of one population was maintained after the tsunami, whereas that of another decreased in 2012 and then started to recover in 2013. Additionally, we found that the large tsunami and ground subsidence created new spring water-fed pools with sticklebacks, suggesting that the tsunami brought sticklebacks into these pools. Genetic analysis of this population showed that this population might be derived from hybridization between freshwater Gasterosteus aculeatus and anadromous G. nipponicus. Overall, our data indicate that tsunamis can influence morphologies and genetic structures of freshwater fishes. Furthermore, spring water may play important roles in the maintenance and creation of fish habitats, faced with environmental disturbance.
Natural disasters, such as typhoons, volcanic eruptions, earthquakes, and tsunamis, can cause catastrophic damage, not only to human livelihood but also to natural animal and plant populations. Such catastrophic events may lead to drastic changes in the habitat qualities of various organisms, resulting in a decrease or possibly an increase in the numbers of particular species, and also affect dispersal of organisms1,2,3. New habitats can also be created by catastrophic events4, where founder organisms can invade and may rapidly change their phenotypes to adapt to the new environments5,6,7. As catastrophic disasters are usually unpredictable, few opportunities are available to investigate their before and after effects of the catastrophic events on natural populations.
Huge tsunamis, triggered by immense earthquakes, are one of the largest catastrophic events and can disturb aquatic and terrestrial ecosystems along coastal areas. In the 21st century, several huge tsunamis have occurred: the 2004 Indian Ocean tsunami, the 2010 Chilean tsunami, and the 2011 Tohoku-oki tsunami. Previous studies have focused on the effects of tsunamis on aquatic ecosystems8,9,10,11,12,13,14,15,16,17,18,19. In these cases, the tsunamis disturbed the habitats of wild aquatic animals by changing landforms, deteriorating water and soil qualities, and bringing in sediments and debris. Tsunami waves also flushed several marine and freshwater organisms and displaced them from their native habitats to non-native places12,13. In addition, tsunami waves transported marine materials, such as seawater and sea bottom slime, into freshwater habitats13,14,15, and thereby, freshwater-adapted and stenohaline organisms, including several freshwater fishes, may become extinct in these salinized environments20. These effects can lead to biodiversity loss in tsunami-inundated areas.
Although the effects of tsunamis on ecosystems have been investigated, we know little about how they affect the phenotypes and genotypes of organisms. Catastrophic events can reduce the genetic diversity and effective population sizes, thereby, increasing the risk of extinction, as a result of multiple factors, such as inbreeding depression, reduction of the standing genetic variation, and the accumulation of deleterious mutations21,22. A tsunami may also induce gene flow between populations. Gene flow can homogenize genetic differentiation between locally-adapted populations and may reduce the mean fitness of each population, although gene flow from different populations can occasionally increase genetic variation and increase the ability of adaptive evolution23. Therefore, it is crucial to investigate the changes in genetic structures for a better assessment of the extinction risk of natural populations affected by a tsunami.
On March 11, 2011, the Tohoku Earthquake occurred off of the Pacific coast, registering 9.0 on the Moment Magnitude Scale (epicenter was 38°6′N and 142°51′E; Fig. 1a), and the following tsunami struck the Pacific coast of Honshu and Hokkaido, Japan, especially, in coastal areas of the Tohoku region (Iwate, Miyagi, and Fukushima Prefectures)24. Otsuchi Town, which is located at a Pacific coastline of the Iwate Prefecture, Japan, was one of the most severely damaged areas of the 2011 tsunami (Fig. 1 and Supplementary Fig. S1). The urban area of this town is located at an alluvial plain between the Otsuchi River (approximately 27.6 km long) and the Kozuchi River (approximately 26.4 km long). Since a river levee of the Otsuchi River was broken by the tsunami, overflowing water flooded the urban area (Supplementary Fig. S1b).
The present study aims to elucidate the ecological and genetic effects of a large tsunami on the threespine stickleback (genus Gasterosteus) in Otsuchi Town. The threespine stickleback is a small cold-water fish, which is widely distributed in the coastal regions of the Northern Hemisphere25. The threespine stickleback has both marine/anadromous and freshwater resident-life histories; the ancestral marine/anadromous sticklebacks have colonized freshwater repeatedly in widespread regions, resulting in phenotypic diversification and the evolution of diverse ecotypes26,27. In addition, this fish has many types of species pairs27,28,29. Thus, the threespine stickleback is considered a model for evolution, ecological, and genetic research. In Japan, there is a unique species pair composed of G. aculeatus and G. nipponicus (corresponding to the Pacific Ocean and Japan Sea sticklebacks in several previous reports, respectively)30. These two species are thought to have diverged about 1.0 million years ago31,32. These sticklebacks are sympatric in several habitats and reproductively isolated with low levels of on-going hybridization. They have different life history characteristics; some G. aculeatus are freshwater residents and others are anadromous, while G. nipponicus is exclusively marine or anadromous31,32,33,34.
A coastal area of Otsuchi Town is the southernmost sympatric area of the Japanese threespine stickleback pair reported thus far35. In this town, there are two main habitats of the freshwater threespine stickleback, which are the spring water-fed tributaries of the Otsuchi and Kozuchi Rivers (Fig. 1). Only freshwater populations of G. aculeatus live in the tributary of the Otsuchi River, while both freshwater G. aculeatus and anadromous G. nipponicus populations co-occur in the tributary of the Kozuchi River. Freshwater stickleback populations were listed as endangered species in the Iwate Prefecture36. One population in a tributary of the Otsuchi River, the Gensui population, has been named a natural monument of Otsuchi Town since 200737, so the local citizens have conserved the stickleback as a symbol of biodiversity35. When the 2011 tsunami occurred, all of the stickleback habitats in this area were damaged. The tsunami waves broke the river levee of the lower reach of the Otsuchi River (arrow in Supplementary Fig. S1b) and flowed into landside urban areas. All of the stickleback habitats in this area were also severely damaged. For example, huge amounts of debris accumulated on these watersheds, and oil spilled into the main rivers and their tributaries (middle panels in Fig. 1c and d). Within three months, however, the majority of debris and oil were removed by humans35,38. Furthermore, spring water supplied clean water and removed residual oil and debris (right panels in Fig. 1c and d); as a result, surviving sticklebacks were found. Another habitat change produced by the tsunami was the appearance of new spring water-fed pools in a coastal area that was previously a downtown area (Fig. 1b and e); these pools were formed by the big tsunami and ground subsidence. In 2012, we found sticklebacks in these newly-formed pools (Fig. 1f), suggesting that the tsunami brought sticklebacks into these pools.
Here, we first surveyed the recovery of the water quality of stickleback habitats after the 2011 tsunami. Second, we compared stickleback morphological traits before and after the tsunami, taking advantage of the fact that we sampled these populations in the year before the tsunami. Third, we investigated changes in the genetic diversity of these populations before and after the tsunami. Finally, we analyzed the genetic structure of the population in the newly formed pools.
Recovery of stickleback habitats after the tsunami
In the Gensui River (a tributary of the Otsuchi River; Fig. 1), natural springs from unconfined groundwater degraded in quantity and quality after the 2011 tsunami. Immediately after removal of the sediment of seabed materials and debris brought in by the tsunami (Fig. 1c), the flux quantity recovered from 0.016 m3/s on May 1, 2011, to 0.069 m3/s on August 16, 2012, at Station G2 (see Fig. 1b). We measured the electrical conductivity (EC) to investigate the invasion of seawater and the recovery of the freshwater. EC is generally positively correlated with ion concentrations (i.e., NO3−, Na+, and Cl−) and typically ranges from 9.0 to 11.0 mS/m in this river (Fig. 2 and Supplementary Table S1). EC values increased soon after the tsunami. After 2.5 years, the EC values gradually decreased and recovered to the level prior to the tsunami (Fig. 2), suggesting that underground spring waters steadily flushed the seawater brought in by the tsunami. After late-May 2011, the sediment of seabed material and debris was removed by volunteers35,38, and thereby, we could find adult sticklebacks in the Gensui River during this time period (T. Sumi, pers. obs.); at the present time, large numbers of mature adult sticklebacks and their offspring are found every year.
Patterns of the temporal change in the EC values of the Namaisawa River (a tributary of the Kozuchi River; Fig. 1) showed a similar tendency with that of the Gensui River (Fig. 2). In this river, spilled oil was one of the critical causes of the deterioration of water and riverbed quality (Fig. 1d). Soon after the 2011 tsunami, an oil fence was constructed above a survey point of N3, which prevented the oil from floating to the surface and downstream; this was followed by oil removal by humans. At the same time, deposited debris was removed by humans, resulting in the gradual recovery of the EC values, where the EC values may represent both the oil quantity and salinity level. The water quality of the Namaisawa River also recovered to the level prior to the tsunami 2.5 years later. However, it should be noted that oil still remains in the riverbed of the lower reach (N1) even today (M. Kume & S. Nishida, pers. obs.). At the middle reaches of this river (N3 and N4), we could find sticklebacks, some of which were nesting after July 2011 (S. Mori & M. Kume, pers. obs.).
Principal component (PC) analysis of the morphological traits of the adult sticklebacks was conducted, following body size correction of the external morphological traits using the standard length (Table 1 and Fig. 3). The first three PCs explained 68.8% of the total variance (Table 1): a higher PC1 (27.75%) reflects a deeper body and a shorter dorsal spine, a higher PC2 (24.45%) indicates a smaller head and a shorter pelvic spine, and a higher PC3 (16.57%) indicates a more slender body and a larger eye.
Scatter plots with 95% confidence ellipses showed that both the Gensui and Namaisawa populations changed morphology in all three of the PC axes across years (Fig. 3). The year effect was significant for each population (ANOVA, P < 0.0001). Interestingly, all PC scores showed similar temporal changes in both the Gensui and Namaisawa populations (comparison between Fig. 3a and b, Fig. 3d and e, and Fig. 3g and h). A general linear model, including both populations, showed that interactions between the year and population were not significant for PC2 (P = 0.6398) and PC3 (P = 0.1736), whereas the effect of the year was significant in both PCs (Supplementary Table S3), suggesting that the yearly change patterns were similar between populations. Although PC1 showed a significant interaction between the year and population (P = 0.00256; Supplementary Table S3), both populations showed a relatively similar trend for the change in PC1; PC1 increased in 2012 and decreased in 2013. After the tsunami, PC1 and PC2 increased in both populations, indicating that the post-tsunami fish have deeper bodies, shorter dorsal and pelvic spines, and smaller heads than the pre-tsunami fish. PC3 decreased, indicating that the post-tsunami fish have smaller eyes and deeper bodies than the pre-tsunami fish.
Next, we analyzed the morphology of the newly formed pool population (Fig. 3c,f, and i). PC1 and PC3 differed between the sampling years (ANOVA, P < 0.0001 for PC1 and P = 0.0001 for PC3), while PC2 did not differ between 2012 and 2013 (P = 0.265). A scatterplot of PC1 and PC3 (Fig. 3 f) showed that the new pool population was similar to the anadromous population in 2012, but shifted toward the Namaisawa population in 2013. Generally, the morphology of the new population was included within the total variations of the anadromous and freshwater populations (Fig. 3c,f, and i). This new pool population may be derived from hybridization between the anadromous G. nipponicus and freshwater G. aculeatus populations (see below). Since hybridization can increase the phenotypic variance and change the phenotypic variance-covariance matrix39,40, we calculated the ratio between the length of the major and minor axis of the ellipse (eccentricity) and the size of the ellipse (Supplementary Fig. S3). We did not observe any clear increase in the size or decrease in the eccentricity in the new pool population, compared to the native populations; the values were within the range of the other native populations (Supplementary Fig. S3).
The gill raker number is one of the important foraging traits25,26,27,28,29, and our previous studies showed significant variations in the gill raker number among Japanese populations32, so we further analyzed this trait (Fig. 4). The gill raker numbers were significantly different among populations across years (Kruskal-Wallis test, χ2 = 98.45, P < 0.0001); both the Gensui and Namaisawa populations in 2012 and 2013 had smaller gill raker numbers than those populations before 2011 and than the 2012 and 2013 new-pool populations (post hoc test, P = 0.0027–0.1511 for the 2013 Namaisawa population vs. the other Namaisawa and pool populations, P = 0.0009 for the 2011 vs. 2012 Namaisawa populations, P < 0.0009 for others). However, there was no significant difference between 2012 and 2013 in the new-pool populations (P = 0.2807; Fig. 4 and Supplementary Table S2).
Changes in the allelic richness and analysis of genetic structures
Overall, the Gensui population has a low allelic richness, but no substantial reduction of the allelic richness occurred after the tsunami (Friedman test, χ2 = 0.680, P = 0.8779; Fig. 5 and Supplementary Table S4). The allelic richness of the Namaisawa population (χ2 = 30.67, P < 0.0001) showed a substantial reduction in 2012 (post hoc test, P = 0.0007) and recovered slightly in 2013 (P = 0.0231; Fig. 5 and Supplementary Table S4). The allelic richness of the new pool population showed a similar trend of an increase in the allelic richness in 2013 (Wilcoxon signed ranks test, z = −1.915, P = 0.055; Fig. 5 and Supplementary Table S4).
STRUCTURE analysis revealed that there are three genetically distinct clusters among these populations from Otsuchi Town; both Evanno’s ΔK and L(K) indicated that the most likely cluster number was three (K = 3) (Supplementary Fig. S4). The Gensui population is predominantly composed of cluster 1, whereas the Namaisawa population is mainly composed of cluster 2 (Fig. 6). Cluster 3 likely represents G. nipponicus, because genotypes for the species-diagnostic microsatellite markers showed that the fish belonging to cluster 3 possessed G. nipponicus-specific lengths of microsatellite markers33. In 2010 and 2011, we found several fishes (four in 2010 and five in 2011) of G. nipponicus in the Namaisawa River (nearly 100% cluster 3 assignment), but no G. nipponicus was observed in 2012 and 2013 (Fig. 6), which may be the reason why the allelic richness decreased in 2012 and 2013. The newly formed pool population is composed of cluster 1, cluster 3, and their mixture (Fig. 6). In 2013, one fish of the new pool population belonged to cluster 3, namely G. nipponicus.
Immediately after the 2011 tsunami, the water quality of the main rivers and their tributaries in Otsuchi Town deteriorated because of several factors, including sea water and marine slime brought in from the sea and chemicals released from debris and deposits. Similar cases occurred along the Japanese coastline in areas struck by the 2011 tsunami13,14,41. In the Gensui River, however, debris and slime were removed by humans35,38, which may have helped the discharge of spring water recover and the sticklebacks survive. In another stickleback habitat, the Namaisawa River, the river water was polluted by inflowing oil, but the oil was removed both by humans and by clean water supplied by the natural flow of river and spring waters, which may also have helped the survival of G. aculeatus. We confirmed that the stickleback was breeding in the mid-upper reaches (N3 and N4) in July of 2011 (S. Mori & M. Kume, pers. obs.). In both places, therefore, immediate removal of debris by humans and a continuous supply of spring water potentially played a role in the recovery of the stickleback habitats. This idea is supported by our data on the temporal change in EC (Fig. 2 and Supplementary Table S1). However, further study is needed to understand the tsunami's long-term effects on the riverine ecosystem, because large earthquakes and tsunamis may continuously influence the quality of the groundwater, not only in the short term but also over medium and longer terms42,43.
We found morphological shifts in both the Gensui and Namaisawa populations after the tsunami (Table 1 and Fig. 3a). One of the remarkable changes was found in a feeding trait, the gill raker number, which was significantly reduced in both populations from 2012 onward (Fig. 4 and Supplementary Table S2). Fish collected in 2012 are likely the generations that were born after the 2011 tsunami. In general, fish with a greater gill raker number are planktivorous, whereas fish with a lower gill raker number are benthivorous27. The observed difference in the gill raker number was 2–3, and this amount of difference can be found between stickleback ecotypes44,45. Therefore, it is possible that these morphological changes occurred as an adaptation to the tsunami-induced changes in the available food items. However, sympatric ecotypes usually differ by 3–6 gill rakers, probably because of character displacement46. In addition, we could not directly compare the food items before and after the tsunami, because no data on the sticklebacks' prey items were available from before the tsunami. Therefore, further detailed, long-term analysis of temporal changes in the available food items, which we initiated just after the tsunami, should be continued to understand the ecological correlates of this phenotypic shift.
Our genetic analysis revealed that the genetic diversity of the Gensui population did not reduce after the tsunami (Fig. 5). There may be two reasons why this freshwater population could maintain its genetic diversity after the tsunami. One is that the quality and quantity of the river and spring waters recovered fairly quickly after the tsunami disaster, as described above. Another reason may be that the Gensui population of G. aculeatus may have a relatively high salinity tolerance. Although a previous experimental study showed that all individuals of a freshwater population of a Canadian river died within 12 h of exposure to sea water47, our preliminary seawater challenge experiment, using the Gensui population, indicated that there was no significant difference in the survival rate over 24 h between fish exposed to seawater (mean 33.75 psu) and to freshwater (M. Kume, unpubl. data).
In contrast, the allelic richness of the Namaisawa population reduced in 2012 (Fig. 5). This reduction might be due to the reduction in spawning migration of G. nipponicus (cluster 3). In fact, our genetic results revealed that G. nipponicus migrated into the Namaisawa River before the 2011 tsunami, but not after the tsunami (Fig. 6). A reduction in the spawning migration of G. nipponicus may be due to the deterioration of the spawning habitats. Alternatively, the number of migratory sticklebacks may be simply fluctuating from year-to-year, as observed in the Bekanbeushi River in Hokkaido48. As we released the majority of the caught fish, we believe that our sampling in 2011 did not affect the allelic decline observed in the next year (2012). In 2013, fish belonging to cluster 1 increased, although the reason for this increase is currently unclear. Further long-term monitoring of the number of anadromous migrants, the allelic richness, and the genetic structures will help to identify the factors that affect the genetic structure in this population.
We found that the tsunami created new stickleback habitats (Fig. 1e). New pools were created by underground spring water that appeared in a coastal urban area (Supplementary Fig. S2); furthermore, this area showed ground subsidence of 300–600 mm from the 2011 earthquake, which allowed the inundation of seawater (T. Sumi, unpubl. data). Our genetic analysis suggested that the new-pool population mainly consisted of a cluster similar to the Gensui population (cluster 1), a cluster corresponding to G. nipponicus (cluster 3), and their hybrids (Fig. 6). Our hypothesis for the formation of new threespine stickleback populations in the tsunami-formed pools is as follows. First, new pools were formed by the ground subsidence and inundation of seawater. Some freshwater G. aculeatus were brought in from the Gensui River by the backwash of the tsunami waves. G. nipponicus may have been brought in from the sea by the tsunami, or they may have migrated to these pools. The P2 pool is always connected to the Kozuchi River through a flood gate. P1 is usually isolated from the sea, but it is sometimes connected with P2 through a narrow channel (approximately 1 m wide) when the water levels of the rivers and pools rise due to spring tides and rainfall, which enables anadromous sticklebacks to migrate into these pools. These new pools have been maintained to the present day by spring water discharging from flowing wells.
Our genetic data indicate that the freshwater population of G. aculeatus and anadromous G. nipponicus seem to be hybridizing in the new tsunami-formed pools (hybrids between cluster 1 and cluster 3). Hybridization can often increase phenotypic diversity and change the phenotypic variance-covariance matrix, which may help the hybrid populations adapt to new environments39,40. Our analysis of the new pool population did not show any apparent changes in the variances and the phenotypic matrix. However, further analysis of the morphological changes would be necessary to make any conclusions about the roles of hybridization in adaptation to the new pool habitats. Recently, stickleback adaptations to newly formed environments, following large earthquakes and tsunamis in the wild, were investigated in a different region, but the formation processes of the new habitats and populations were different. A previous study showed that new habitats were formed because of an island uplift, which was caused by the 1964 Great Alaskan Earthquake, and then anadromous G. aculeatus stickleback invaded the new habitats5,7. In our case, new habitats were formed by ground subsidence and spring water, and both anadromous G. nipponicus and freshwater G. aculeatus sticklebacks invaded. Thus, the new pool population in Otsuchi provides an opportunity to investigate mechanisms by which freshwater and coastal fishes disperse, hybridize, and adapt to new environments.
There is a possibility that the immediate removal of debris and oil by humans might help maintain freshwater fish habitats. In addition, spring waters might also play important roles in the maintenance of animal populations, faced with environmental disturbances from natural disasters. If so, conservation of spring water would be key for recovery of the aquatic ecosystem. Finally, it should be noted that local citizens used groundwater for human livelihood, when they could not use the public water system, suggesting that ground/spring waters are important for human beings, as an alternative water resource during emergencies38,49. However, it is worth noting that recovery projects, including natural disaster contingency planning, are now being performed in Otsuchi Town50. These activities may ironically threaten various ecosystems. For example, construction of temporary houses, after landfills of spring water (Supplementary Fig. S2), may lead to habitat destruction. Thus, discussion between biologists and policy makers should be continued during long-term town recovery planning, because it will be essential for conserving aquatic biodiversity and sustaining the ecosystem functions of spring/ground waters.
All fieldwork was performed in accordance with local ethical regulations and agreements. All procedures were approved by the institutional animal care and use committee of the National Institute of Genetics (23-15, 24-15, 25-18). All experiments were performed in accordance with relevant guidelines and regulations.
We conducted field surveys at the Gensui River (approximately 450 m long, mean 3.73 m width, mean 33 cm depth), which is a tributary of the Otsuchi River, and the Namaisawa River (approximately 2280 m long, mean 4.71 m width, mean 76 cm depth), which is a tributary of the Kozuchi River; both rivers run through a lowland area of Otsuchi Town (39°21′N, 141°54′E), Iwate Prefecture, Japan (Fig. 1). We set two and four survey points in the Gensui River (G1 and G2) and the Namaisawa River (N1 to N4), respectively (Fig. 1b). The downstream reaches of the Namaisawa River (N1 and N2) were tidal, while the other survey points were always freshwater (Supplementary Table S1). According to our previous studies35, only the freshwater-resident G. aculeatus inhabits the Gensui River, while both freshwater-resident G. aculeatus and anadromous G. nipponicus were found in the Namaisawa River.
The urban area of Otsuchi Town is a lowland area with abundant groundwater, which people have used for their daily life (Supplementary Fig. S2)38,51. During our study, on July 22, 2012, we found threespine sticklebacks in new spring water pools in the urban area for the first time, so we have no samples before that time. We have sampled sticklebacks in these two pools (P1 and P2; Fig. 1b) since then.
We measured five parameters of water quality at all of the sites of the Gensui and Namaisawa Rivers after the 2011 tsunami; water temperature (WT, °C), electrical conductivity (EC, mS/m) and pH were recorded using a multi-parameter water quality probe (WM-22EP, DKK-TOA Co. Ltd., Tokyo, Japan). These measurements were conducted nearly every month from April 2011 to August 2014. Before the 2011 earthquake, we had measured some of these parameters in both rivers. Also, we started these environmental measurements in the tsunami-formed pools (P1 and P2) after July 2012.
Samples were collected using minnow traps for adults and dip-nets for juveniles. Adult fish used for morphological analyses were sampled from the Gensui River at survey site G2 on December 22, 1998, December 24, 2011, July 1, 2012, and May 19, 2013; from the Namaisawa River at survey site N3 on June 26, 2010, December 24, 2011, July 29, 2012, and May 20, 2013; and from the new-pool at survey site P1 on July 29, 2012 and May 19, 2013 (Table 2). Fish were preserved in ethanol after euthanasia. Fish collected from G2 on December 24, 2011, and from N3 on December 24, 2011, were adults born after the tsunami.
Adult threespine stickleback used for genetic analysis were collected from the Gensui (G2) and Namaisawa Rivers (N3) on June 26, 2010, May 26, 2011, August 21, 2012, and June 29–30, 2013, (Table 2) using minnow traps. Adults collected from G2 and N3 on May 26, 2011 were born in 2010. Juvenile fish (sample size = 47) were also sampled from the Namaisawa River (N3) on July 9, 2011 (Table 2). Fish of the new pool (P1) were sampled on August 21, 2012, and Feb 2, 2013 (Table 2). After euthanasia, pectoral or caudal fins were cut and saved in ethanol, except for the 2010 Gensui population, for which small tips of fins were cut and stored in ethanol after anesthesia, and the fish were released to the river after they recovered from the anesthesia.
We used 233 adult sticklebacks (range: 18–41 for each group) for the morphological analyses (Table 2), and their standard lengths ranged from 45.49–67.12 mm in 1998, 40.14–64.48 mm in 2011, 38.42–70.78 mm in 2012, and 49.35–66.75 mm in 2013 for the Gensui population; 41.92–79.69 mm in 2010, 42.95–51.62 mm in 2011, 47.57–60.57 mm in 2012, and 46.52–68.30 mm in 2013 for the Namaisawa population; and 34.57–44.10 mm in 2012 and 46.90–59.49 mm in 2013 for the new-pool population. An anadromous population of G. nipponicus was collected from the Bekanbeushi River, Akkeshi, Hokkaido, Japan in 200332,48 and preserved in 10% formalin until the morphological analysis. We measured the standard length, head length, body depth, caudal depth, second dorsal spine length, left and right pelvic spine length using a digital caliper (nearest 0.01 mm), and counted the gill raker number using a dissecting microscope (Supplementary Table S2). In this study, plate morph was not analyzed, because all sticklebacks analyzed here were completely plated. We analyzed morphological variations, using a principal component (PC) analysis with a correlation matrix. Standard length is commonly correlated with other size traits. We therefore used the residuals of the linear regressions of the ln-transformed morphological trait values, except the gill raker number, against the ln-transformed standard length. For regression, all fish were pooled. The gill raker number is independent of the body size32, so it was just ln-transformed before PC analysis. We first performed PC analysis excluding the anadromous population. Next, the PC scores of the anadromous population were calculated using the predict function in prcomp. We used a general linear model for the statistical analysis of the PC scores. The 95% confidence ellipse was drawn with the stat_ellipse function in ggplot. The eccentricity and size of the phenotypic matrix were calculated using the R package “car”. In order to investigate divergence in a foraging trait, the gill raker number, we conducted the Kruskal-Wallis test (α = 0.05), followed by pairwise post-hoc Mann-Whitney U-tests with Bonferroni corrections (α = 0.0009). These analyses were performed in R version 3.0.252.
We used 360 sticklebacks (range: 16–102 for each habitat) for the genetic analyses (Table 2). Genomic DNA was isolated with the Qiagen DNeasy Blood & Tissue Kit (Qiagen, Valencia, CA, USA). For genetic analysis, fourteen microsatellite markers, which are located on different linkage groups and are not linked to sex, were used, as described previously (Supplementary Table S4)53: Stn90, Stn64, Stn159, Stn46, Stn120, Stn384, Stn332, Stn278, Stn76, Stn170, Stn175, Stn301, Stn389, Stn25, and Stn35. Forward primers were labeled with fluorescence (HEX, NED, or FAM), and the 5′-end of the reverse primers were tailed with GTTTCTT to increase the accuracy of the fragment length analysis54. Microsatellites were amplified with three combinations of primer sets, with three different dye colors, using the KAPA2G Fast Multiplex PCR Kit (KAPA Biosystems, Woburn, MA, USA). After 3 min at 95 °C, 30 cycles of 95 °C for 15 sec, 60 °C for 30 s, and 72 °C for 30 s were performed, followed by 10 min at 72 °C. Amplified fragments were analyzed by BEX Co. Ltd. (Tokyo, Japan). Allele lengths were then determined using Peak Scanner Software (Life Technologies, Grand Island, NY, USA). Micro-Checker was used to confirm the accuracy of genotyping55.
Data were first analyzed using STRUCTURE, which uses Markov chain Monte Carlo simulations to identify groupings that minimize Hardy-Weinberg and linkage disequilibrium within cluster groups56. Five simulations were run for each cluster number (K) from K = 1 through K = 10. We estimated the probable number of clusters by finding the K value with the highest log likelihood Ln(K) and the K value with the highest ΔK, which is the rate of change of Ln(K) between successive K values57. This analysis was performed using Structure Harvester58. Parameters were estimated after 500,000 iterations, following a burn-in of 50,000 iterations. The allelic richness (number of alleles per locus, corrected for sample size) was calculated using FSTAT 2.9.3 software59. In order to confirm yearly changes in the allelic richness within each population, we conducted the Friedman test (α = 0.05), followed by pairwise post hoc Wilcoxon signed ranks tests with Bonferroni corrections (α = 0.0083), and Wilcoxon signed ranks test (α = 0.05). These analyses were performed in R version 3.0.2.
Sousa, W. P. The role of disturbance in natural communities. Ann. Rev. Ecol. Syst. 15, 353–391 (1984).
Scheffer, M., Carpenter, S., Foley, J. A., Folke, C. & Walker, B. Catastrophic shifts in ecosystems. Nature 413, 591–596 (2001).
Platt, W. J. & Connell, J. H. Natural disturbances and directional replacement of species. Ecol. Monogr. 73, 507–522 (2003).
Wang, C.-Y. & Manga, M. New streams and springs after the 2014 Mw6.0 South Napa earthquake. Nat. Comm. 6, 7597 (2015).
Gelmond, O., Von Hippel, F. A. & Christy, M. S. Rapid ecological speciation in three-spined stickleback Gasterosteus aculeatus from Middleton Island, Alaska: the roles of selection and geographic isolation. J. Fish Biol. 75, 2037–2051 (2009).
Craw, D. et al. Rapid biological speciation driven by tectonic evolution in New Zealand. Nat. Geosci. 9, 140–144 (2016).
Lescak, E. A. et al. Evolution of stickleback in 50 years on earthquake-uplifted islands. Proc. Natl. Acad. Sci., USA 112, E7204–E7212 (2015).
Whanpetch, N. et al. Temporal changes in benthic communities of seagrass beds impacted by a tsunami in the Andaman Sea, Thailand. Estuar. Coast. Shelf Sci. 87, 246–252 (2010).
Lomovasky, B. J., Firstater, F. N., Salazar, A. G., Mendo, J. & Iribarne, O. O. Macro benthic community assemblage before and after the 2007 tsunami and earthquake at Paracas Bay, Peru. J. Sea Res. 65, 205–212 (2011).
Jaramillo, E. et al. Ecological implications of extreme events: Footprints of the 2010 Earthquake along the Chilean coast. PLoS ONE 7, e35348 (2012).
Sites, R. W. & Vitheepradit, A. Recovery of the freshwater lentic insect fauna in Thailand following the tsunami of 2004. Raffles Bull. Zool. 58, 329–348 (2010).
Kanaya, G. et al. Effects of the 2011 tsunami on the topography, vegetation, and macrobenthic fauna in Gamo Lagoon, Japan. Japan. J. Benthol. 67, 20–32 (2012). in Japanese with English abstract.
Watanabe, K., Yaegashi, S., Tomozawa, H., Koshimura, S. & Omura, T. Effects on river macroinvertebrate communities of tsunami propagation after the 2011 Great East Japan Earthquake. Freshw. Biol. 59, 1474–1483 (2014).
Mukai, Y. et al. Ecological impacts of the 2011 Tohoku Earthquake Tsunami on aquatic animals in rice paddies. Limnology 15, 201–211 (2014).
Tolkova, E., Tanaka, H. & Roh, M. Tsunami observations in rivers from a perspective of tsunami interaction with tide and riverine flow. Pure Appl. Geophys. 172, 953–968 (2015).
Toyofuku, T. et al. Unexpected biotic resilience on the Japanese seafloor caused by the 2011 Tōhoku-Oki tsunami. Sci. Rep. 4, 7517 (2014).
Masuda, R., Hatakeyama, M., Yokoyama, K. & Tanaka, M. Recovery of coastal fauna after the 2011 tsunami in Japan as determined by bimonthly underwater visual censuses conducted over five years. PLoS ONE 11, e0168261 (2016).
Urabe, J. & Nakashizuka, T. Ecological Impacts of Tsunamis on Coastal Ecosystems: Lessons from the Great East Japan Earthquake. (Springer Japan, Tokyo, 2016).
Hata, M., Kawakami, T. & Otake, T. Immediate impact of the tsunami associated with the 2011 Great East Japan Earthquake on the Plecoglossus altivelis altivelis population from the Sanriku coast of northern Japan. Environ. Biol. Fishes 99, 527–538 (2016).
Kefford, B. J., Papas, P. J., Metzeling, L. & Nugegoda, D. Do laboratory salinity tolerances of freshwater animals correspond with their field salinity? Environ. Pollut. 129, 355–362 (2004).
Frankham, R., Ballou, J. D. & Briscoe, D. A. Introduction to Conservation Genetics. (Cambridge University Press, Cambridge, 2002).
Ferrière, R., Dieckmann, U. & Couvet, D. Evolutionary Conservation Biology. (Cambridge University Press, Cambridge, 2009).
Garant, D., Forde, S. E. & Hendry, A. P. The multifarious effects of dispersal and gene flow on contemporary adaptation. Funct. Ecol. 21, 434–443 (2007).
Suppasri, A. et al. Lessons learned from the 2011 Great East Japan Tsunami: Performance of tsunami countermeasures, coastal buildings, and tsunami evacuation in Japan. Pure Appl. Geophys. 170, 993–1018 (2013).
Wootton, R. J. A functional biology of sticklebacks. (Croom Helm, London, 1984).
Bell, M. A. & Foster, S. A. The evolutionary biology of the threespine stickleback. (Oxford Univ. Press, Oxford, 1994).
Schluter, D. The ecology of adaptive radiation (Oxford Univ. Press, Oxford, 2000).
McKinnon, J. S. & Rundle, H. D. Speciation in nature: the threespine stickleback model systems. Trends Ecol. Evol. 17, 480–481 (2002).
Hendry, A. P., Bolnick, D. I., Berner, D. & Peichel, C. L. Along the speciation continuum in stickleback. J. Fish Biol. 75, 2000–2036 (2009).
Higuchi, M., Sakai, H. & Goto, A. A new threespine stickleback, Gasterosteus nipponicus sp. nov. (Teleostei: Gasterosteidae), from the Japan Sea region. Ichthyol. Res. 61, 341–351 (2014).
Higuchi, M. & Goto, A. Genetic evidence supporting the existence of two distinct species in the genus Gasterosteus around Japan. Environ. Biol. Fishes 47, 1–16 (1996).
Kitano, J., Mori, S. & Peichel, C. L. Phenotypic divergence and reproductive isolation between sympatric forms of Japanese threespine sticklebacks. Biol. J. Linn. Soc. 91, 671–685 (2007).
Ravinet, M., Takeuchi, N., Kume, M., Mori, S. & Kitano, J. Comparative analysis of Japanese three-spined stickleback clades reveals the Pacific Ocean lineage has adapted to freshwater environments while the Japan Sea has not. PLoS ONE 9, e112404 (2014).
Ishikawa, A., Kusakabe, M., Kume, M. & Kitano, J. Comparison of freshwater tolerance during spawning migration between two sympatric Japanese marine threespine stickleback species. Ecol. Evol. Res. 17, 525–534 (2016).
Mori, S. The Otsuchi three-spined stickleback: status following the 2011 East Japan Tsunami. Japan. J. Ichthyol 60, 177–180 (2013). in Japanese.
Iwate Prefecture. Red data book in Iwate (web ed.). http://www2.pref.iwate.jp/~hp0316/rdb/index.html. (2014), in Japanese. Accessed 13 January 2016.
Otsuchi T. Directory of Otsuchi Town. http://www.town.otsuchi.iwate.jp/shoukai/youran2008.html (2008), in Japanese. Accessed 13 January 2016.
Sumi, T. Flowing wells of Otsuchi, Iwate Prefecture -environment and reconstruction-. Biotope 34, 1–2 (2014). in Japanese.
Selz, O. M., Lucek, K., Young, K. A. & Seehausen, O. Relaxed trait covariance in interspecific cichlid hybrids predicts morphological diversity in adaptive radiations. J. Evol. Biol. 27, 11–24 (2014).
Lucek, K., Greuter, L., Selz, O. M. & Seehausen, O. Effects of interspecific gene flow on the phenotypic variance–covariance matrix in Lake Victoria Cichlids. Hydrobiologia 791, 145–154 (2017).
Nakamura, K., Kuwatani, T., Kawabe, Y. & Komai, T. Extraction of heavy metals characteristics of the 2011 Tohoku tsunami deposits using multiple classification analysis. Chemosphere 144, 1241–1248 (2016).
Illangasekare et al. Impacts of the 2004 tsunami on groundwater resources in Sri Lanka. Water Resour. Res. 42, W05201 (2006).
Robinson, B. W. Trade offs in habitat-specific foraging efficiency and the nascent adaptive divergence of sticklebacks in lakes. Behaviour 137, 865–888 (2000).
Berner, D., Roesti, M., Hendry, A. P. & Salzburger, W. Constraints on speciation suggested by comparing lake-stream stickleback divergence across two continents. Mol. Ecol. 19, 4963–4978 (2010).
Vithanage, M., Engesgaard, P., Villholth, K. G. & Jensen, K. H. The effects of the 2004 tsunami on a coastal aquifer in Sri Lanka. Ground Water. 50, 704–714 (2011).
McPhail, J. D. In The evolutionary biology of the threespine stickleback (eds Bell, M. A. & Foster, S. A.). Speciation and the evolution of reproductive isolation in the sticklebacks (Gasterosteus) of south-western British Columbia, 399-437 (Oxford Univ. Press, Oxford, 1994).
Kusakabe, M., Ishikawa, A. & Kitano, J. Relaxin-related gene expression differs between anadromous and stream-resident stickleback (Gasterosteus aculeatus) following seawater transfer. Gene. Comp. Endcrinol. 205, 197–206 (2014).
Kume, M., Kitamura, T., Takahashi, H. & Goto, A. Distinct spawning migration patterns in sympatric Japan Sea and Pacific Ocean forms of threespine stickleback Gasterosteus aculeatus. Ichthyol. Res. 52, 189–193 (2005).
Taniguchi, M. Importance of groundwater as security. J. ground. Hydrol. 55, 5–11 (2013). in Japanese with English abstract.
Otsuchi T. Basic plan: The 2011 Great East Japan Earthquake Tsunami recovery plan of Otsuchi Town (March 2014 revision). http://www.town.otsuchi.iwate.jp/docs/2014041400062/ (2014), in Japanese. Accessed 15 January 2016.
Goto, T. & Ise, K. The quality of the underground water in Ozuchi, Iwate Prefecture. Ann. Rep. Fac. Edu., Univ. Iwate 25, 5–40 (1965), in Japanese.
R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. (Vienna, Austria, 2011).
Adachi, T. et al. Shifts in morphology and diet of non-native sticklebacks introduced into Japanese crater lakes. Ecol. Evol. 2, 1083–1098 (2012).
Ballard et al. Strategies for genotyping: effectiveness of tailing primers to increase accuracy in short tandem repeat determinations. J. Biomol. Tech. 13, 20–29 (2002).
Van Oosterhout, C., Hutchison, W. F., Shipley, P. & Wills, D. P. M. Micro-Checker: Software for identifying and correcting genotyping errors in microsatellite data. Mol. Ecol. 4, 535–538 (2004).
Pritchard, J. K., Stephens, M. & Donnelly, P. Inference of population structure using multilocus genotype data. Genetics 155, 945–959 (2000).
Evanno, G., Regnaut, S. & Goudet, J. Detecting the number of clusters of individuals using the software STRUCTURE: a simulation study. Mol. Ecol. 14, 2611–2620 (2005).
Earl, D. A. & von Holdt, B. M. STRUCTURE HARVESTER: a website and program for visualizing STRUCTURE output and implementing the Evanno method. Conserv. Gene. Resour. 4, 359–361 (2012).
Goudet, J. Fstat version 1.2: a computer program to calculate F-statistics. J. Hered. 86, 485–486 (1995).
Conrad, O. et al. System for Automated Geoscientific Analyses (SAGA) v. 2.1.4. Geosci. Model Dev. 8, 1991–2007, https://doi.org/10.5194/gmd-8-1991-2015 (2015).
We thank the staff of the Otsuchi Town Office, especially Mr. Ken Sasaki, for consecutive support in the field. We also thank Dr. Takanori Nakano, Dr. Katsutoshi Watanabe, and members of Kitano Lab for discussions during this study, Mr. Yasuyuki Hata for providing the stickleback photographs, and two anonymous reviewers for helpful comments on the manuscript. This work was supported by the Ministry of Environment Japan (ZD-1203) to S.M., T.S., and J.K., the Otsuchi Town and Gifu-keizai University for funding the research to S.M., the NIG Collaborative Research Grant A to S.M. (2014–30), the MEXT Grant-in-Aid for Scientific Research on Innovative Areas (23113007 and 23113001) and Future Investment Project 2017 of ROIS to J.K., and the Sasakawa Grants for Science Fellows from the Japan Science Society (F15–225) to M.K.
The authors declare that they have no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Properties of Algorithm
Describe an Algorithm
"A set of rules to be followed in calculations or other problem-solving operations" or "A procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations" are two definitions of the term algorithm.
Thus, an algorithm is a finite sequence of well-defined steps used to solve a particular problem. Depending on the task, algorithms can range from basic to sophisticated.
There are several different types of algorithms. Some significant ones include (a short code sketch follows the list):
- Brute Force Algorithm: A brute force algorithm is the first, most direct attempt to solve a problem; when we see a problem, the straightforward approach that springs to mind first is usually a brute force algorithm.
- Recursive Algorithm: A recursive algorithm is based on recursion. Here, a problem is divided into smaller instances of the same problem, and the same function is called repeatedly to solve them.
- Backtracking Algorithm: The backtracking method builds the answer incrementally by searching over all potential solutions. Using this approach, we keep extending the answer in accordance with the criteria; every time a partial solution fails, we go back to the point of failure, try the next candidate, and repeat this process until we either find the answer or have examined all possible solutions.
- Searching Algorithm: Searching algorithms are used to find individual elements or collections of elements inside a given data structure. They come in several forms, depending on the approach they take and the data structure being searched.
- Sorting Algorithm: Sorting is the process of arranging a set of data in a particular order according to the requirement. Sorting algorithms carry out this task, typically arranging groups of data in increasing or decreasing order.
- Hashing Algorithm: A hashing algorithm works much like a search algorithm, but it uses an index built from a key: in hashing, a key is assigned to a particular piece of data, and that key is used to locate the data directly.
- Divide and Conquer method: This method divides a problem into smaller subproblems, solves each of those subproblems separately, and then combines the results to produce the overall answer. The procedure entails three steps: divide the problem into subproblems, conquer (solve) each subproblem, and combine the sub-solutions into the final solution.
- Greedy Algorithm: This kind of algorithm builds the solution piece by piece. At each step it chooses the piece that offers the greatest immediate benefit, without reconsidering earlier choices.
- Dynamic Programming technique: To avoid repeatedly recalculating the same part of a problem, this technique reuses answers that have already been computed. It separates the problem into smaller overlapping subproblems and solves each one once.
- Randomized Algorithm: A randomized algorithm uses random numbers to gain a speed advantage; the expected result depends in part on the random choices made.
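To make two of these categories concrete, here is a minimal, hypothetical C++ sketch that contrasts a brute-force (linear) scan with a recursive binary search over a sorted array; the function names and sample data are invented purely for illustration.

```cpp
#include <iostream>
#include <vector>

// Brute force: check every element until the target is found.
int linearSearch(const std::vector<int>& a, int target) {
    for (int i = 0; i < static_cast<int>(a.size()); ++i) {
        if (a[i] == target) return i;
    }
    return -1;  // not found
}

// Recursive: repeatedly halve a sorted range (also a divide-and-conquer example).
int binarySearch(const std::vector<int>& a, int target, int lo, int hi) {
    if (lo > hi) return -1;            // empty range: not found
    int mid = lo + (hi - lo) / 2;
    if (a[mid] == target) return mid;
    if (a[mid] < target) return binarySearch(a, target, mid + 1, hi);  // right half
    return binarySearch(a, target, lo, mid - 1);                       // left half
}

int main() {
    std::vector<int> data = {2, 5, 8, 12, 16, 23, 38};
    std::cout << linearSearch(data, 16) << "\n";                                        // prints 4
    std::cout << binarySearch(data, 23, 0, static_cast<int>(data.size()) - 1) << "\n";  // prints 5
    return 0;
}
```

Both functions return the index of the target, but the brute-force version may inspect every element, while the recursive version discards half of the remaining range at each call.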
Utilization of Algorithms:
In many domains, algorithms are essential and have several uses. Algorithms are widely utilized in a variety of fields, such as:
- Computer Programming: Algorithms are the building blocks of computer programming and are used to tackle problems ranging from straightforward sorting and searching to more complicated ones like artificial intelligence and machine learning.
- Mathematics: Algorithms are used in mathematics to solve problems like determining the shortest path in a graph or the best answer to a set of linear equations.
- Operations Research: Algorithms are used to decide and optimize in areas like resource allocation, logistics, and transportation.
- Artificial Intelligence: Artificial intelligence and machine learning are built on algorithms, which are used to create intelligent systems that are capable of performing tasks like image recognition, natural language processing, and decision-making.
- Data science: Algorithms are used in industries like marketing, banking, and healthcare to analyze, process, and glean insights from massive volumes of data.
These are only a handful of the numerous uses for algorithms. Algorithms are becoming an increasingly important part of modern life as new technologies and areas are developed.
Depending on what you want to do, algorithms can range from basic to sophisticated. This can be understood through the example of following a new recipe: one must read the directions and carry out each step in the correct order, and the result is a well-cooked meal. You employ algorithms every time you use a phone, computer, laptop, or calculator. Similarly, algorithms help programmers carry out tasks to produce the desired results.
The developed algorithm is language-independent: it consists of plain instructions that can be implemented in any programming language, and it will still produce the expected results.
Algorithm Properties Include:
- It should terminate after a finite number of steps.
- It should produce at least one output.
- It should take zero or more inputs.
- It should be deterministic, meaning that it produces the same output for the same input.
- Every step of the algorithm must be effective, i.e., every step must contribute to the result.
Benefits of algorithms
- It is simple to comprehend.
- A solution to a problem is represented step-by-step in an algorithm.
- Since the problem is divided into smaller components or steps when using an algorithm, it is simpler for the programmer to turn the algorithm into a working program.
Drawbacks of Algorithm
- Writing an algorithm requires a lot of time.
- It can be quite challenging to comprehend complicated reasoning using algorithms.
- It is difficult to represent branching and looping statements in an algorithm.
How Do You Create an Algorithm?
- As a prerequisite, the following things are required in order to build an algorithm:
- A clear problem definition: the problem that the algorithm is meant to solve.
- The constraints of the problem that must be taken into account while solving it.
- The input needed to solve the problem.
- The output expected once the problem has been solved.
- The solution to the problem, within the given constraints.
The algorithm is then created using the aforementioned inputs in such a way that it resolves the issue.
Consider the following example of adding three integers and printing the result.
Step 1: Meet the prerequisites.
- As was previously said, the requirements for an algorithm must be met before it can be written.
- This algorithm's task is to add three integers and report the result of that addition.
- The constraints of the problem that must be considered before addressing it: the inputs must be numbers; no other characters are allowed.
- The input needed to solve the problem: the three numbers to be added.
- The output expected once the problem is solved: a single integer value that is the sum of the three input values.
- The solution to the problem, within the given constraints: the three numbers must be added. This can be done with the '+' operator, with bit-wise operations, or by any other method.
Step 2: Creating the algorithm
Let's create the algorithm now using the prerequisites listed above:
An algorithm to add three integers and display their total
- Declare the following three integer variables: num1, num2, and num3.
- Take the three integers to be added and enter them into the corresponding variables num1, num2, and num3.
- Declare an integer variable sum to hold the three values' combined sum.
- The three integers are added, and the result is saved in the variable sum.
- Print the value of the variable sum, then END.
Step 3: Put the algorithm to the test by using it.
Let's put the algorithm into C++ language implementation to test it.
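The program listing itself is not reproduced here, so below is a minimal C++ sketch that is consistent with the step-by-step description that follows; the variable names num1, num2, num3, and sum are taken from that walkthrough.

```cpp
#include <iostream>
using namespace std;

int main() {
    // Variables to hold the three integers to be added.
    int num1, num2, num3;
    // Variable to hold the sum of the three numbers.
    int sum;

    cout << "Enter the 1st number: ";
    cin >> num1;
    cout << "Enter the 2nd number: ";
    cin >> num2;
    cout << "Enter the 3rd number: ";
    cin >> num3;

    // Add the three integers and store the result in sum.
    sum = num1 + num2 + num3;

    // Print the total of the three numbers.
    cout << "Sum of the 3 numbers is: " << sum << endl;

    return 0;  // indicates the program ran successfully
}
```

A sample run of such a program is shown below.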
Enter the 1st number: 0
Enter the 2nd number: 0
Enter the 3rd number: -1577141152
Sum of the 3 numbers is: -1577141152
Process executed in 1.11 seconds
Press any key to continue.
- To hold the three integers to be added, create the three variables num1, num2, and num3.
- Declare a variable called sum to hold the three numbers' total.
- To ask the user to enter the initial number, use the cout statement.
- To read the first number and save it in num1, use the cin statement.
- To ask the user to enter the second number, use the cout command.
- To read the second number and save it in num2, use the cin statement.
- To ask the user to enter the third number, use the cout command.
- To read the third number and save it in num3, use the cin command.
- Use the + operator to add the three integers together, then enter the result in the sum variable.
- To print the total of the three digits, use the cout command.
- The main function returns 0, indicating that the programme has run successfully.
Time complexity: O(1)
Auxiliary Space: O(1)
One problem, several solutions: there may be more than one solution to an algorithm, which means it can be implemented in more than one way. For instance, in the problem above of adding three integers, the sum can be calculated in several ways (a small sketch follows this list):
- + operator
- Bit-wise operators
- . . etc
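As a rough illustration of two of these approaches, the C++ sketch below computes the same sum once with the ordinary '+' operator and once with a bitwise-addition helper; the helper function bitwiseAdd and the sample values are assumptions made only for this example.

```cpp
#include <iostream>

// Add two non-negative integers using only bitwise operations.
unsigned bitwiseAdd(unsigned a, unsigned b) {
    while (b != 0) {
        unsigned carry = a & b;   // bit positions that generate a carry
        a = a ^ b;                // sum of the bits without the carries
        b = carry << 1;           // carries shifted into the next position
    }
    return a;
}

int main() {
    unsigned x = 7, y = 11, z = 20;

    // Method 1: the '+' operator.
    std::cout << (x + y + z) << "\n";                      // prints 38

    // Method 2: the bitwise-addition helper.
    std::cout << bitwiseAdd(bitwiseAdd(x, y), z) << "\n";  // also prints 38
    return 0;
}
```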
How Can an Algorithm be Analyzed?
For a standard algorithm to be good, it must be efficient; hence, an algorithm's efficiency has to be evaluated. This can happen at two stages:
- Priori Analysis: "Priori" is Latin for "before". Priori analysis therefore refers to checking the algorithm before it is implemented: the algorithm, written as theoretical steps, is analyzed under the assumption that all other factors, such as processor speed, remain constant and have no impact on the implementation. This is typically done by the algorithm designer. The kind of hardware and compiler language has no bearing on this analysis, and it provides approximate estimates of the algorithm's complexity.
- Posterior Analysis: "Posterior" is Latin for "after". Checking the algorithm after it has been implemented is therefore known as posterior analysis. Here the algorithm is tested by implementing it in a programming language and running it. This analysis yields an actual, measured report about correctness (whether it returns the proper result for every potential input), the space needed, the time used, and so on. In other words, it depends on the hardware and the language of the compiler.
How Can We Determine Algorithm Complexity?
An algorithm's complexity is measured in terms of how much time and space it uses. Thus, the complexity of an algorithm is a measure of the time required for it to run and produce the desired result, together with the amount of storage space required to hold all the data (input, temporary data, and output). These two factors determine how efficient an algorithm is.
The two elements that make an algorithm complex are:
- Time Factor: The number of crucial actions, such as comparisons in the sorting algorithm, is counted to determine how much time has passed.
- Space Factor: The amount of space is calculated by adding up the whole amount of memory space needed for the algorithm to operate.
As a result, there are two categories of algorithmic complexity:
Space Complexity: The amount of memory needed by an algorithm to store the variables and produce the output is referred to as its space complexity. This may apply to temporary activities, inputs, or outputs.
How is Space Complexity Determined?
The following 2 elements are calculated to determine an algorithm's space complexity:
- Fixed Part: This describes the area that the algorithm unquestionably needs. For instance, program size, output variables, and input variables.
- Part that is Variable: This describes a space that can change depending on how the algorithm is applied. For instance, dynamic memory allocation, recursion stack space, temporary variables, etc.
Consequently, the space complexity S(P) of any algorithm P is defined as S(P) = C + Sp(I), where C denotes the fixed part of the algorithm's space requirement and Sp(I) denotes the variable part, which depends on the instance characteristic I.
Example: the time complexity of the Linear Search method can be determined as follows (a sketch is shown below):
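The linear-search listing itself is not shown above, so here is a minimal C++ sketch with its time and space costs noted in the comments; the array and key are assumed to be supplied by the caller.

```cpp
#include <vector>

// Linear search: scan the array from left to right until the key is found.
// Time complexity:  O(n) in the worst case, because the loop may inspect
//                   every one of the n elements (the key is last or absent).
// Space complexity: O(1) auxiliary space -- only the loop index is stored,
//                   regardless of the size of the input.
int linearSearch(const std::vector<int>& arr, int key) {
    for (int i = 0; i < static_cast<int>(arr.size()); ++i) {
        if (arr[i] == key) {
            return i;   // found: return the index of the key
        }
    }
    return -1;          // the key is not present in the array
}
```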
How Do We Formulate an Algorithm?
- Natural Language: Here we express the algorithm in everyday English. Such a description is easy to write, but it can be too ambiguous and verbose to grasp the algorithm precisely.
- Flow Chart: Here we depict the algorithm graphically or visually. This is easier to grasp than a natural-language description.
- Pseudo Code: Here we represent the algorithm as instructive text and comments written in plain English. This is quite close to real code, but because it does not follow the syntax of any programming language, it cannot be compiled or interpreted by a machine. It is the best way to convey an algorithm, since even a person with little programming expertise can understand it.
Any spot on a graph is defined by a coordinate with a value on the x axis (horizontal) and the y axis (vertical). An equation like the one you gave can be used to find each point on a given line. Therefore, this equation is the equation for a line (in slope intercept form).
y is the value of y for a coordinate that is given in (x,y) form.
m is the slope of the line.
x is the value of x
n is the y-intercept. It is the value for y when x = 0 and therefore that is the point where the line crosses the y axis.
y=mx+n represents the equation of the line, written in the slope intercept form.
m represents the slope of the line. The slope is the tangent of the angle that the line makes with the x axis.
n represents the y intercept of the line.
The points where the line crosses the axes are found in this way:
x = 0 => the y-intercept => the point on the line (0, n).
y = 0 => the x-intercept => the point on the line (-n/m, 0), provided m is not zero.
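As a concrete illustration using a made-up line (not one from the question), take y = 2x + 3, so m = 2 and n = 3:

\[
x = 0 \;\Rightarrow\; y = 2(0) + 3 = 3 \quad\Rightarrow\quad \text{y-intercept } (0,\,3)
\]
\[
y = 0 \;\Rightarrow\; 2x + 3 = 0 \;\Rightarrow\; x = -\tfrac{3}{2} \quad\Rightarrow\quad \text{x-intercept } \left(-\tfrac{3}{2},\,0\right)
\]

The slope m = 2 means the line rises 2 units for every 1 unit it moves to the right.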
To start this assignment, download this zip file.
Lab 6a — Grouping
Reminder, you should work in teams of 2 or 3 when solving the lab problems. Learning to code together is an important part of this class.
Download the zip file for this lab, located above. This zip file has code that
you will use for this assignment. Extract the files and put them in your
directory in a folder called
We have given you code in
practice.py that provides some simple practice with grids.
There are instructions in the code asking you to do the following:
Copy and paste a function for creating an empty grid
Copy and paste a function for printing a grid
Create an empty grid, 5 rows and 8 columns; use a space as the value for the grid
Put a ’🦋’ in (1, 1) — row 1 — column 1
Put a ’🦖’ in (0, 7)
Put a ’🐳’ in (4, 2)
Print the grid
You can see the guide on grids for help.
After you finish and run the program, it should print:
🦖 🦋 🐳
Discuss with the TA
- How did you write this code? Show a solution and discuss.
- Is there anything you don’t understand about grids?
We have a partially-written program in
drawing.py that will use a grid to
create a simple drawing program. Following are the functions for you to write.
Printing the grid
First write the print_grid() function:
def print_grid(grid, between=' '):
    """
    Print all the items in the grid, so that it looks like a grid.

    <between> is the character between columns, by default a space
    """
    # Write code here
    pass
Notice that there is an important difference between this function and the one
included in the guide on grids. This function takes a new parameter,
between. This character should be used as the character to
print in between columns.
You can copy and paste the function from the guide and then modify it to have this new functionality, which will let you specify the character to use in between columns when printing a grid.
Drawing in one row and column
Next write the draw() function:
def draw(grid, row, column, character):
    """
    <grid> - a grid
    <row> - a row number (integer)
    <column> - a column number (integer)
    <character> - a character

    Modifies the grid so that <row>,<column> contains <character>.

    Does not modify the grid if <row> is too small or too large.
    Also does not modify the grid if <column> is too small or too large.
    """
    # Write code here
    pass
This is similar to the practice problem, except you need to be sure the row and
column you are given are actually in the grid. We did something like this for
the has_value() function in the guide.
Drawing on the grid
Finally, write the draw_on_grid() function:
def draw_on_grid(grid):
    """
    This function allows a user to draw on a grid. It does the following:

    - prints the grid, using '' as the character between columns
    - lets the user enter coordinates in "row, column" format, e.g.: 2, 3
    - draws a 🟦 at each coordinate they enter
    - stops when they enter nothing
    """
    # Write code here
    pass
This function needs to loop, take user input, and then modify the grid.
Discuss with the TA
- How did you implement these functions? Show a solution and discuss.
- Make sure everyone understands every line of code we supplied.
To finish this lab and receive a grade, take the canvas quiz.
We are providing a solution so you can check your work. Please look at this after you complete the assignment. :-) |
While the vast majority of my assessment and curriculum experience is tied to English/Language Arts, I have had to tackle Mathematics when selecting items and editing assessments. As part of my review process I need to ensure that the pool of items includes a good distribution of assessment questions that address different levels of Depth of Knowledge (DoK). Let’s explore the hallmarks of Math DoK.
*Please also see my earlier post: “Depth of Knowledge (DoK) for Reading.”
This is the level of recall and reproduction. For students taking on Level 1, work centers on math facts, definitions, terms, simple procedures, as well as performing a simple algorithm or applying a familiar formula.
Common verbs found in item stems will be along the lines of “identify,” “recall,” “recognize,” “use,” “compute,” and “measure.” The items and expectations will require students to compute a sum, difference, product, or quotients.
Even simple word problems that can be directly translated into a number sentence and solved by computation are Level 1 items. Verbs such as "describe" and "explain" could be classified at different levels depending on what is to be described and explained.
Some examples of Level 1 performance include:
- Recall or recognize a fact, term, or property.
- Compute a sum, difference, product, or quotient.
- Represent in words, symbols, or pictures a mathematical object or relation.
- Provide or recognize a standard mathematical representation for a given situation.
- Provide or recognize equivalent representations.
- Perform a simple procedure such as measuring the length of an object.
At Level 2 students are engaging in some mental processing that goes beyond Level 1’s recalling or reproducing. A Level 2 item requires students to make some decisions as to how to approach the problem, while Level 1 requires students to demonstrate a rote response, perform a well‐known algorithm, follow a set list of directions (e.g., a recipe), or perform a clearly defined series of steps.
Common Level 2 verbs are: "classify," "organize," "estimate," "make observations," "collect and display," and "compare." Level 2 items require more than one step. For example, comparing data can require identifying, grouping, or ordering the objects.
Other Level 2 activities include making observations and collecting data; classifying, organizing, and comparing data; and organizing and entering data in tables, charts, and graphs.
Some examples of Level 2 performance include:
- Specify and explain the relationship between facts, terms, properties, or operations.
- Coordinate different representations depending on situation and purpose.
- Select a procedure according to specified criteria and perform it.
- Formulate a routine problem given data and conditions.
- Compare given strategies or procedures.
- Solve a routine problem that requires some interpretation with multiple steps/parts.
- Provide justification of one or more steps in a routine procedure.
Level 3 items touch on strategic thinking. Level 3 items require reasoning, planning, and using evidence, and we see more critical thinking on display than we do in Level 1 and 2 items. Many items will require students to explain their thinking.
The cognitive demands at Level 3 can be complex and abstract. Items and tasks will ask students to draw conclusions from observations, cite evidence and develop a logical argument for concepts, and use concepts to solve non-routine problems.
Some examples of Level 3 performance include:
- Analyze similarities and differences between problem‐solving strategies.
- Formulate an original problem given a situation.
- Provide justification for the steps in a solution process.
- Solve non‐routine problems.
- Formulate a mathematical model for a complex situation.
Connecting Webb’s Depth of Knowledge to the Common Core and Technology-Enhanced Items
Webb is part of the Common Core State Standards Validation Committee and it’s helpful to think about how his DoK model can be used with the CCSS and Technology-Enhanced Items.
The Common Core State Standards for Mathematics call for key shifts that complement Math DoK by revolving around focus, coherence, and rigor. And each level I’ve described above (1-3) can be traversed with higher-order questions built by TEIs.
Perhaps one of the best ways to give students practice with real-world interactive assessment items—mastering each DoK Level and the Common Core—relies on the leveraging of TEIs. This also allows teachers to quickly measure the efficacy of their assessments. But I’ll let the question types speak for themselves.
Dive Deeper into DOK
- Read our article, Integrating Cognitive Rigor with Webb’s Depth of Knowledge which contains additional information and associated links.
- Learn about DOK for writing.
- Log into Edulastic to begin applying DOK in your assessments. |
Volume Of A Cone Formula
The amount of space a cone occupies in three dimensions is known as its volume. A cone's base is round, so it has a radius and a diameter. The apex of the cone (which is naturally at the bottom in the case of an ice-cream cone) lies opposite the base, and the height of the cone is measured from the centre of the base to this apex.
What is Volume of Cone?
The amount of room or capacity a cone takes up is known as its volume. The volume of a cone is expressed in cubic units such as cm³, m³, in³, etc. A cone can be created by rotating a right triangle about one of its legs. A cone is a solid, round, three-dimensional shape with a curved surface. The perpendicular height is the distance from the base to the vertex. A cone can be classified as either an oblique cone or a right circular cone. In contrast to an oblique cone, whose vertex is not vertically above the centre of the base, a right circular cone has its vertex vertically above the centre of the base.
Volume of Cone Formula
The volume of a cone is one-third of the area of the circular base multiplied by the height of the cone. In geometric and mathematical terms, a cone can be thought of as a pyramid with a circular cross-section. Students can quickly determine a cone’s volume by measuring its height and radius. The volume of the cone is given as V = (1/3)πr²h if the radius of the cone’s base is “r” and its height is “h.”
Volume of Cone With Height and Radius
Given a cone’s height and base radius, the volume of the cone may be calculated using the Volume Of A Cone Formula V = (1/3)πr²h cubic units.
Volume of Cone With Height and Diameter
Given a cone’s height and base diameter, the volume of the cone may be calculated using the Volume Of A Cone Formula V = (1/12)πd²h cubic units.
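Since the radius is half the diameter, r = d/2, the diameter form follows from the radius form in one substitution (shown here for clarity):

```latex
V = \frac{1}{3}\pi r^{2} h
  = \frac{1}{3}\pi \left(\frac{d}{2}\right)^{2} h
  = \frac{\pi d^{2} h}{12}
```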
Volume of Cone With Slant Height
The Pythagorean theorem can be applied to the cone to determine the relationship between its volume and slant height.
It is known from the Pythagorean theorem that h² + r² = L², and therefore h = √(L² − r²).
L is the cone’s slant height, r is the base’s radius, and h is the cone’s height.
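Combining this relation with the basic formula expresses the volume directly in terms of the slant height, which is the form used again in the steps below:

```latex
V = \frac{1}{3}\pi r^{2} h = \frac{1}{3}\pi r^{2}\sqrt{L^{2} - r^{2}}
```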
Derivation of Volume of Cone Formula
The following activity demonstrates how the volume of a cylinder may be used to obtain the volume of a cone. Take three cones, each with a height of “h” and a base radius of “r,” and a cylinder with the same height and base radius. Fill the cones with water and empty them into the cylinder one at a time.
One-third of the cylinder is filled by each cone. These three cones will therefore fill the cylinder. The volume of a cone is therefore equal to one-third that of a cylinder.
Volume of cone = (1/3) × volume of cylinder = (1/3) × πr²h = (1/3)πr²h
How to Find Volume of Cone?
Applying the Volume Of A Cone Formula, one can determine a cone’s volume from the known measurements. Once the base radius or base diameter, the height, and (if needed) the slant height of the cone are known, the following steps can be carried out.
Step 1: Write down the known parameters, “r” denoting the radius of the cone’s base, “d” denoting its diameter, “L” denoting its slant height, and “h” denoting its height.
Step 2: Use the Volume Of A Cone Formula to get the cone’s volume,
Cone volume using the base radius: V = (1/3)πr²h or (1/3)πr²√(L² − r²)
Cone volume using the base diameter: V = (1/12)πd²h or (1/12)πd²√(L² − r²)
Step 3: Express the result in cubic units.
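The three steps above can be collected into a short program. The sketch below (in Python; the function name and argument handling are illustrative and not part of the original text) accepts whichever measurements are known and applies the same formulas:

```python
import math

def cone_volume(height=None, radius=None, diameter=None, slant_height=None):
    """Volume of a right circular cone from whichever measurements are known."""
    # Step 1: recover the base radius from the diameter if needed (r = d / 2).
    if radius is None and diameter is not None:
        radius = diameter / 2
    if radius is None:
        raise ValueError("need a radius or a diameter")
    # Step 1 (continued): recover the height from the slant height if needed,
    # using h = sqrt(L^2 - r^2) from the Pythagorean theorem.
    if height is None and slant_height is not None:
        height = math.sqrt(slant_height**2 - radius**2)
    if height is None:
        raise ValueError("need a height or a slant height")
    # Step 2: apply V = (1/3) * pi * r^2 * h.
    return (math.pi * radius**2 * height) / 3
```

For example, cone_volume(radius=3, height=6) returns 18π ≈ 56.55 cubic units, and cone_volume(diameter=6, slant_height=6.708) gives roughly the same value, since a slant height of √45 ≈ 6.708 corresponds to a height of 6 when the base radius is 3.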
Volume of Cone Examples
A cone with a radius of “r” and a height of “2r” has a volume equal to that of a hemisphere with radius “r,” since (1/3)πr²(2r) = (2/3)πr³.
When the diameter is given, the radius is obtained by dividing the diameter by two; substituting this radius into the Volume Of A Cone Formula (1/3)πr²h then gives the volume of the cone.
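As a quick numerical check of the first example, added here as an illustration: taking r = 2 gives the same value for the cone of height 2r and the hemisphere of radius r.

```python
import math

r = 2.0
cone = math.pi * r**2 * (2 * r) / 3    # cone with radius r and height 2r
hemisphere = (2 / 3) * math.pi * r**3  # hemisphere with radius r
print(cone, hemisphere)                # both equal 16*pi/3, about 16.755
```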
Extramarks is an online learning platform that focuses on K–12, higher education, and exam preparation so that students can study whenever they want and from any location. Most of the time, it is challenging for students to comprehend all the concepts contained in the Volume Of A Cone Formula. Students that struggle to grasp the topics can use the Extramarks website. On the Extramarks website, interactive video modules are used to make sure that topics are learned. When preparing for examinations, these lessons offer in-depth explanations of each subject and enable immersive online learning to improve understanding and recall. The Volume of Cone Examples were created by a group of talented experts who may be found on the Extramarks website.
The Assessment Center, Smart Class Solutions, and Live Class Platform are just a few examples of the in-school technology available to assist students in reaching their full potential through engaging instruction and customised curriculum-based learning. The Volume Of A Cone Formula solved examples on the Extramarks website are available for use by students. Students can obtain study materials and Volume Of A Cone Formula practice questions on the Extramarks website to ensure they fully understand the concept. The specialists at Extramarks produced these Volume Of A Cone Formula examples with solutions. To make it easy and quick for students to answer the practice questions, experts created the solved examples.
In an effort to cover the full chapter’s curriculum, each example from Volume Of A Cone Formula receives a thorough professional explanation and was written in accordance with CBSE regulations. For acing the Mathematics exam, using the Volume Of A Cone Formula solved examples is helpful. Using the examples from the Extramarks website is believed to be the best option for CBSE students studying for exams. Students can obtain the PDF version of the Volume Of A Cone Formula solved examples from the Extramarks website. The examples are available for instant study on the website or mobile app, or they can be downloaded as needed. The examples given by the professionals are easily understood by students because they are fully discussed in a step-by-step manner.
The solved examples help students prepare for the exam and perform effectively. Using the study materials offered by the Extramarks website is believed to be the best option for CBSE students getting ready for exams. The Volume Of A Cone Formula that might be asked on the exam is easily understood by students. Students can get Volume Of A Cone Formula solved examples to aid them in their academic endeavours if they have enrolled on the Extramarks website. The practice questions were developed by the Extramarks experts to help students thoroughly understand every question that might come up on their examination.
Practice Questions on Volume of Cone
Extramarks’ website has practice questions on the topic Volume Of A Cone Formula to help students fully prepare for and excel in the chapter. The practice questions on Extramarks are the best resource for fully comprehending the concept. Students can efficiently prepare for the exam by studying the subjects included in the chapters’ explanations. Students have access to the entire set of Volume Of A Cone Formula practice questions, solved examples, sample papers, past years’ papers, etc. on the Extramarks website. Students can visit the Extramarks website if they need assistance with the concepts in order to comprehend the Volume Of A Cone Formula.
The Volume Of A Cone Formula practice questions will be helpful to students in helping them remember the fundamental concept. Students can find additional study materials and answers to the Volume Of A Cone Formula practice questions on the Extramarks website. If they wish to perform well on their exam, they could consult the examples that have been solved to solve the practice questions. Students can assess their development using the data that AI provides. A complete comprehension of all topics and concepts is advantageous to students. The Extramarks website offers a never-ending supply of practice questions along with interactive games, worksheets based on chapters, and more. Students can get the Volume Of A Cone Formula practice questions and their solved solutions on the Extramarks website to help them understand all the concepts more simply and efficiently.
Students can also contact Extramarks professionals if they have any questions about the Volume Of A Cone Formula practice questions. Extramarks provides students with worksheets to help them identify their areas of weakness so that they can work on them and perform well on the exam. Extramarks experts help students who are reluctant to approach their teachers with questions about the Volume Of A Cone Formula by elaborating on the principles. Students can pick up new concepts with the help of the Extramarks curriculum’s design, which also serves as a conceptual foundation for later, more difficult material. By using the practice questions on the Extramarks website, students can better understand the subject matter, since they get a sense of the format in which the questions might appear on the examination.
Students may quickly learn the concepts in this chapter by practising the Volume Of A Cone Formula practice problems. All of the solutions were well organised and written with extensive understanding, fulfilling the concept’s goals in the process. The practice questions are offered to students as extra reading and study materials. Studying the Volume Of A Cone Formula practice questions will help students get ready for their exams. The goal of the practice questions and solved examples is to help students prepare for their exams. On the Extramarks website, students can find all of the solutions as well as the instructional video modules. To aid students in understanding the topics, each practice question and example is presented in a thorough and helpful manner.
FAQs (Frequently Asked Questions)
1. What function does volume serve?
It is sometimes referred to as the object’s capacity. Finding an object’s volume can help calculate the quantity needed to fill it, such as the volume of water needed to fill a bottle, aquarium, or water tank.
2. Is surface area always higher than volume?
As an object grows, its volume increases faster than its surface area, because volume scales with the cube of the linear size while surface area scales only with the square. This holds true whether the object is a cube, a sphere, or any other solid shape.
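A small illustration of that scaling, added here as an example: for a cube of side s, the surface area is 6s² and the volume is s³, so the volume-to-surface ratio grows in proportion to s.

```python
for s in (1, 2, 5, 10, 100):
    surface = 6 * s**2           # surface area of a cube with side s
    volume = s**3                # volume of the same cube
    print(s, surface, volume, volume / surface)   # the ratio grows as s/6
```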
A set of crucial immunity genes do not turn on in a simulated microgravity environment, suggest the results of a new study. The findings may help explain why astronauts get sick so easily.
The changes affect the activation of T-cells, a type of white blood cell that helps defend the body against disease. Other than weightlessness, the only other situation that severely diminishes T-cell function is HIV infection.
“I think this substantiates that there is a reason to determine how much of a risk infection might prove to space flight,” says Janet Butel, a virologist at Baylor College of Medicine in Houston, Texas, US. Butel was not involved in this study.
Millie Hughes-Fulford, a medical professor at the University of California, San Francisco, US, and her team subjected human immune cells to a device that simulates microgravity. Normally, when the body detects an outside invader such as a virus, a signalling system known as the PKA pathway responds by turning on 99 genes that then activate T-cells to destroy the invader. But the team found that in simulated microgravity, 91 of the genes did not turn on.
“There is a specific signal pathway that is not working in the absence of gravity,” says Hughes-Fulford. “You’re short-circuiting a whole lot of the immune response – namely, the ability to proliferate T-cells – which shouldn’t be a surprise, because life evolved in Earth’s gravity field.”
Other pathways appear to function properly in microgravity. Butel says that there are probably more immune mechanisms that are affected by microgravity than just the PKA pathway.
“This is a little piece of data that says yes, it does look like spaceflight conditions alter immune function,” Butel told New Scientist.
It has been clear since human spaceflight began that astronauts are susceptible to illness. Fifteen of the 29 Apollo astronauts contracted bacterial or viral infections either during their missions or within a week of returning to Earth. On Apollo 13, the aborted mission to the Moon, astronaut Fred Haise became feverish through infection by the bacterium Pseudomonas aeruginosa, which rarely poses a problem on Earth unless the host has a suppressed immune system.
After that mission, NASA imposed a strict pre-flight quarantine during which astronauts were to avoid crowds, small children or any sick people for seven to 10 days. Only one Apollo astronaut got sick after the quarantine was instituted.
And during the Skylab missions, which began in 1973, scientists saw decreased white-blood-cell responses in returning astronauts.
Another problem is that all people normally carry latent viruses, which a healthy, functioning immune system usually keeps in check. But in space, some of these viruses, such as herpes and Epstein-Barr, can reactivate and cause disease.
Making matters worse, bacteria that have some form of antibiotic resistance seem to thrive in closed space environments like the space shuttle. “If the organism is becoming resistant against antimicrobial agents and any of the body’s defences, that’s a bad thing, because there’s a better chance then that the organism could cause disease,” Butel says. The extra radiation in space also weakens the immune system, and it is possible that microgravity and radiation cause more damage to the immune system together than they do separately.
There may also be long-term health effects. A NASA study suggests that the skin cancer rate in astronauts might be three times as high as that of NASA employees who have not flown in space, though this is not necessarily solely due to the effects on the immune system.
No one is yet sure whether an astronaut with a suppressed immune system would be able to complete a three-year space mission – the estimated time for a round-trip to Mars. Astronauts and cosmonauts live aboard the International Space Station in six-month shifts.
Journal reference: The Federation of American Societies for Experimental Biology Journal (DOI: 10.1096/fj.05-3778fje)
Low energy nuclear reactions have the potential to provide distributed power generation with zero carbon emissions at a cost below that of coal
Lewis Larsen
Today, in a world with little or no imposition of carbon emission taxes by major governments, coal remains the least expensive, most abundant primary source of energy. It is also perhaps the dirtiest energy source from an environmental perspective, which is why carbon capture and storage technology has been much touted to make coal ‘clean’ (see Carbon Capture and Storage A False Solution, SiS 39). Natural gas, though much cleaner than coal, costs substantially more.
Proponents claim that nuclear power is only ~10 percent more expensive than coal, though that is disputed by critics who point out that the ‘true’ cost of nuclear power is actually much higher when proper cost accounting is done, which includes both upstream (mining, extraction and enrichment of uranium fuel) and downstream (waste disposal, cleanup and decommissioning) processes. Nevertheless, everyone agrees that nuclear power is more expensive than coal; the only question is by exactly how much.
In fortunate areas where the wind blows with enough force and regularity, wind power is presently almost cost-competitive with nuclear and coal power generation, however the accounting is done.
At the moment, solar photovoltaic (PV) technology is a long way from being cost-competitive with any of the other alternatives. That having been said, a combination of technological improvements and mass production of solar panels will probably drastically reduce the cost of solar PV power generation in the near future (see Solar Power to the Masses, SiS 39).
Like wind power, energy from the sun intrinsically fluctuates; the sun does not shine with the same ground-level intensity every day, and not at all at night. Furthermore, current electrical energy storage technologies are too expensive and too limited in capacity to provide rapid response to changes in grid electricity demand when the sun is not shining, or the wind is not blowing, in the absence of other ‘dispatchable’ sources of grid-connected power generation such as coal, nuclear, or natural gas power plants.
Modern electricity grids require a substantial percentage of online power generation that is dispatchable at very short notice. As economically feasible grid-capacity electricity storage technologies do not exist (nor are they anywhere on the horizon), today’s grids cannot possibly operate at accustomed levels of greater than 99 percent availability with only wind and solar energy sources. Therefore, grid-connected sources of readily dispatchable power generation will still be needed for the foreseeable future.
In (Safe, Less Costly Nuclear Decommissioning and More, SiS 41) I suggested that dispatchable Gen-4 Liquid-Fluoride Thorium Reactor and LENR-based subcritical reactors would be considerably less expensive than today’s Gen-2 Light Water Reactors. Perhaps more importantly, LENR-based fission or green non-fission reactors could someday provide significantly cheaper electricity than coal-fired power generation plants.
As green non-fission LENR reactors could generate electricity more cheaply than LENR-based subcritical fission reactors, the former, if successfully developed, would most likely be able to compete directly against coal-fired power generation on market forces alone, with or without carbon taxes being imposed by the government.
Another factor favouring LENR-based power generation is that the cost of coal-fired power generation is likely to rise significantly, due to efforts devoted to reducing carbon emissions from burning coal.
For many years, large R&D efforts have been dedicated to ‘advanced clean coal’ technologies, with some success. Current-generation coal-fired power plants being built today are much cleaner than those built 20 years ago. However, today’s environmentally friendlier coal plants are also much more expensive to license and build because of legally mandated installation of anti-pollution technologies. In addition, there have been recent accelerated R&D efforts to integrate ‘advanced clean coal’ technologies with even more costly CO2 capture and sequestration capabilities.
As a result of incorporating new, progressively more expensive improvements to further ‘clean up’ coal plant emissions, future construction and operating costs of purportedly ‘greener’ coal-fired power generation plants are likely to increase substantially in many countries. If economically significant carbon emissions taxes are also imposed to further ‘level the playing field’, there may be a historic opportunity for alternative carbon-free energy technologies like LENRs, wind, and solar PVs to compete very effectively with coal as low-cost primary energy sources.
Green LENRs have intrinsic energy densities thousands of times larger than any chemical power source such as coal, natural gas, gasoline, or diesel fuel. But even with the gigantic energy density advantages, LENR technologies will probably not be able to immediately compete with coal-fired grid power generation systems that have been optimized for decades.
In fact, LENRs will probably first enter the commercial market as small-scale, integrated battery-like portable power sources and small backup power generation systems for residential homes or remote facilities, with electrical outputs ranging from under 100 W to 1-5 kW. Those market entry points are more advantageous for LENRs because the market price for electricity in portable and small backup power systems ranges from tens to hundreds of dollars per kWh, compared to $0.05 to $0.10/kWh for grid electrical power coming from a wall socket.
Small-scale LENR systems might seem to be light years away from competing with 500-1,000 MW coal-fired behemoths. But please recall the history of personal computers versus mainframes. When PCs were first introduced 30 years ago, mainframe computer manufacturers regarded them as toys; information processing ‘jokes’ of little consequence. Less than 10 years later, mainframe companies weren’t laughing any more. Today, except for a handful of survivors like IBM, most mainframe and minicomputer ‘dinosaurs’ have disappeared. In fact, most of today’s ‘mainframes’ actually contain internal arrays of commodity PC microprocessors.
Google, arguably one of the largest consumers of computational power on the planet today, does not even use mainframes; it processes vast amounts of information with thousands upon thousands of low-cost PCs ‘lashed together’ by special software.
PCs and microprocessors won their long market battle with mainframes using a strategy of ultra high-volume manufacturing that drastically decreased the cost of distributed (as opposed to centralized) computation. PCs democratized human access to distributed computational power; LENRs can potentially do the same for energy.
Using a similar business strategy that combines high-volume manufacturing, aggressive pricing and distributed generation, the economic costs of electric power generation with coal and with LENRs could potentially converge in the very near future. LENR technologies would then begin competing directly with ‘king coal’ as a primary energy source.
Similar to advanced lithium batteries, ‘green’ portable LENR heat sources that use non-fissile/fertile target fuels (such as lithium, or low cost metals like nickel and titanium) could be fabricated in very high volumes using advanced nanotech manufacturing processes. Importantly, such high volume production would enable LENR power generation technologies to leverage the ‘experience curve effect’ to dramatically reduce costs over time, as proven so successfully in the cases of personal computers, microprocessors, memory chips, cellphones, and small electronic devices like iPods.
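The ‘experience curve effect’ mentioned above has a standard textbook form: unit cost falls by a roughly constant percentage each time cumulative production doubles. A minimal sketch of that relationship follows (the 20 percent learning rate and the starting cost are illustrative assumptions, not figures from the article):

```python
import math

def unit_cost(cumulative_units, first_unit_cost, learning_rate=0.20):
    """Experience-curve estimate: cost falls by learning_rate per doubling of output."""
    b = math.log(1 - learning_rate, 2)   # elasticity exponent (negative)
    return first_unit_cost * cumulative_units ** b

# Illustrative only: a $1,000 first unit with an assumed 20% learning rate
for n in (1, 10, 100, 1_000_000):
    print(n, round(unit_cost(n, 1000.0), 2))
```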
As pointed out in Portable and Distributed Power Generation from LENRs (SiS 41), LENR heat sources are intrinsically upwardly scalable via straightforward increases in working area and/or volume, choice of target fuel(s), and selected integrated energy conversion subsystem. This implies that almost all of the many cost and technological improvements that might be developed for portable and small backup power generation applications could readily be scaled-up and rapidly applied to the development of much more powerful LENR-based heat sources and power generation systems based on different types of target fuels (including fissile isotopes) and energy conversion technologies.
If LENRs can successfully compete against chemical battery power generation technologies and deeply penetrate high volume markets for portable power sources and small stationary systems, green LENR-based systems with much larger power outputs could follow rapidly, further lowering costs. Multi-megawatt LENR heat sources with lithium target fuel could be used with large boilers for many applications.
While entirely new types of large, totally green (no fissile or fertile target fuels) LENR-based power plants could be designed and built from scratch, it would make greater economic sense and be much more capital-efficient to leverage the global power industry’s huge, growing investment in coal-fired power generation infrastructure as much as possible.
Not surprisingly, the energy heart of a coal-fired power generation system is its boilers, where coal is burned to create heat that makes hot steam that is in turn used to spin a steam turbine that makes electricity. Analogous to retrofitting new LENR-based cores in existing fission power plants, boilers in coal-fired power plants could simply be retrofitted with green LENR-based boilers with lithium as target fuel, for example. This could eliminate carbon emissions from retrofitted plants while continuing to supply low-cost electricity to regional grids all over the world.
This objective could be accomplished at reasonable economic cost either by adapting existing proven designs for coal-fired plants and then constructing brand new ‘ground up’ plants based on such altered designs; or by retrofitting LENR-based boilers to pre-existing coal power generation facilities. The second alternative may be more financially attractive and capital-efficient for the power generation industry. It would permit the bulk of fixed capital investment in infrastructure surrounding coal-fired power generation (land, licensing, buildings, steam turbine electrical generators, monitoring and control systems, etc.) to be financially protected and fully utilized with minimal economic and technological disruption. Similar to heat sources in nuclear power plants, boilers alone comprise a small percentage of the total economic cost of coal-fired power generation.
At system power outputs of just 5 - 10 kW, green LENR-based distributed power generation systems could potentially satisfy the requirements of most urban and rural households and smaller businesses worldwide.
If such a future scenario is realised, nowhere near as many new, large fossil-fired and/or non-LENR fission generation systems would have to be built to supply low-cost electricity to regional grids serving urban and many rural areas. In that case, grid-based centralized power generation could be displaced by large numbers of much smaller, distributed systems. A bold vision of the future of distributed power generation, ‘Micropower: the Next Electrical Era,’ was published by the Worldwatch Institute eight years ago. A similar vision was proposed more recently in Which Energy? (ISIS Report), and in Perfect Power: How the Microgrid Revolution will Unleash Cleaner, Greener, and More Abundant Energy.
At electrical outputs of just 50 - 200 kW, LENR-based systems could begin to power vehicles, breaking the stranglehold of oil on transportation, and giving new-found ‘energy sovereignty’ to many countries.
Although they could very likely be designed and built, megawatt LENR systems are not needed to change the world for the better. High-volume manufacturing of 5 kW - 200 kW LENR-based distributed stationary and mobile systems could potentially do an even better job by democratizing access to low-cost green energy for consumers worldwide.
Today, there are an estimated 1.6 billion people living in mostly rural areas of the world that have no access to electricity via grids or other means. With LENRs, this situation could potentially be rectified in less than 20 years.
Deployment of low-cost, LENR-based distributed power generation systems in rural areas currently without electricity would eliminate the massive capital investments needed for expanding existing power grids to serve such areas. It would free up scarce global financial resources for better use in improving rural citizens’ quality of life, healthcare, and educational opportunities.
As Thomas Friedman writes in his new book, Hot, Flat, and Crowded:
“… we have not found that magic bullet – that form of energy production that will give us abundant, clean, reliable cheap electrons. All the advances we have made so far in wind, solar, geothermal, solar thermal, hydrogen, and cellulosic ethanol are incremental, and there has been no breakthrough in any other energy source. Incremental breakthroughs are all we’ve had, but exponential is what we desperately need.
“No single solution will defuse more of the Energy-Climate Era’s problems at once than the invention of a source of abundant, clean, reliable, and cheap electrons. Give me abundant clean, reliable, and cheap electrons, and I will give you a world that can continue to grow without triggering unmanageable climate change ... I will eliminate any reason to drill in Mother Nature’s environmental cathedrals … and I will enable millions of the earth’s poor to get connected, to refrigerate their medicines, to educate their women, and to light up their nights.”
The author declares his commercial interest as President and CEO of Lattice Energy LLC.
Article first published 27/01/09
Gordon Docherty Comment left 9th February 2017 14:02:55
This is a question for Lewis Larsen.
Instead of proton + electron + energy, given one accepts the existence of fractional states of hydrogen, would it be possible to use a hydrino + a bit less energy instead? Now, it may not be, as in the hydrino the electron is bound very close to the proton, in which case this would point to the need for protons to be in an energized electron cloud, so that the energy can first be added to the electron before it merges with the proton. If, on the other hand, a cloud (or stream) of hydrinos plus a super-strong local electric field does produce ultra-low momentum neutrons, this would make for a very interesting energy source:
p + e + vampiric catalyst (for example, the H-O-H of the SunCell, when mixed in with the energized molten Silver stream to form a dusty plasma) -> hydrino
hydrino vented from SunCell + energy + W-L reaction site -> ULMN
Why would this be important? Well, if it turns out that "Dark Matter" clouds are really hydrino clouds, scooping up the hydrinos and adding energy would provide for a very useful power source for star ships and, in the meantime, future iterations of the SunCell could possibly be fitted out with "Widom-Larsen reaction chambers" (that is, vessels containing W-L reaction sites) to further process the produced Hydrinos to "squeeze the last drop out of the tank"...
Of course, it may be that to make ULMNs from hydrinos, it would first be necessary to reanimate the hydrinos back to "ordinary" hydrogen, so not so useful for enhancing the SunCell, but would still potentially be useful for starships.
Aggregated listings of music lesson plans and worksheets cover teaching the instruments of the orchestra and their four families (strings, woodwinds, brass, and percussion) across elementary and secondary year levels, aligned with the National Curriculum in the Arts (Music, Dance, Drama and Media) and English. Typical activities include instrument-identification games, Orff-instrument arrangements for glockenspiels, marimbas, and xylophones, rhythm and composition exercises, and listening lessons built around composers and works such as Duke Ellington's jazz, Vivaldi's The Four Seasons, Haydn, Dvorak's Slavonic Dances, Schoenberg's Three Pieces for Chamber Orchestra, Kodaly's "Viennese Musical Clock," and Tchaikovsky's Nutcracker. Resources referenced include the New York Philharmonic Young People's Concerts, BBC orchestra fact files, the San Francisco Symphony Kids' website, the Cleveland Orchestra's online lesson-plan database, and year-long elementary music lesson plan bundles.
Chapter 21 The Birth of Stars and the Discovery of Planets outside the Solar System
21.6 New Perspectives on Planet Formation
By the end of this section, you will be able to:
- Explain how exoplanet discoveries have revised our understanding of planet formation
- Discuss how planetary systems quite different from our solar system might have come about
Traditionally, astronomers have assumed that the planets in our solar system formed at about their current distances from the Sun and have remained there ever since. The first step in the formation of a giant planet is to build up a solid core, which happens when planetesimals collide and stick. Eventually, this core becomes massive enough to begin sweeping up gaseous material in the disk, thereby building the gas giants Jupiter and Saturn.
How to Make a Hot Jupiter
The traditional model for the formation of planets works only if the giant planets are formed far from the central star (about 5–10 AU), where the disk is cold enough to have a fairly high density of solid matter. It cannot explain the hot Jupiters, which are located very close to their stars where any rocky raw material would be completely vaporized. It also cannot explain the elliptical orbits we observe for some exoplanets because the orbit of a protoplanet, whatever its initial shape, will quickly become circular through interactions with the surrounding disk of material and will remain that way as the planet grows by sweeping up additional matter.
So we have two options: either we find a new model for forming planets close to the searing heat of the parent star, or we find a way to change the orbits of planets so that cold Jupiters can travel inward after they form. Most research now supports the latter explanation.
Calculations show that if a planet forms while a substantial amount of gas remains in the disk, then some of the planet’s orbital angular momentum can be transferred to the disk. As it loses momentum (through a process that reminds us of the effects of friction), the planet will spiral inward. This process can transport giant planets, initially formed in cold regions of the disk, closer to the central star—thereby producing hot Jupiters. Gravitational interactions between planets in the chaotic early solar system can also cause planets to slingshot inward from large distances. But for this to work, the other planet has to carry away the angular momentum and move to a more distant orbit.
In some cases, we can use the combination of transit plus Doppler measurements to determine whether the planets orbit in the same plane and in the same direction as the star. For the first few cases, things seemed to work just as we anticipated: like the solar system, the gas giant planets orbited in their star’s equatorial plane and in the same direction as the spinning star.
Then, some startling discoveries were made of gas giant planets that orbited at right angles to, or even in the opposite sense from, the spin of their star. How could this happen? Again, there must have been interactions between planets. It’s possible that before the system settled down, two planets came close together, so that one was kicked into an unusual orbit. Or perhaps a passing star perturbed the system after the planets were newly formed.
Forming Planetary Systems
When the Milky Way Galaxy was young, the stars that formed did not contain many heavy elements like iron. Several generations of star formation and star death were required to enrich the interstellar medium for subsequent generations of stars. Since planets seem to form “inside out,” starting with the accretion of the materials that can make the rocky cores with which planets start, astronomers wondered when, in the history of the Galaxy, planet formation would turn on.
The star Kepler-444 has shed some light on this question. This is a tightly packed system of five planets—the smallest comparable in size to Mercury and the largest similar in size to Venus. All five planets were detected with the Kepler spacecraft as they transited their parent star. All five planets orbit their host star in less than the time it takes Mercury to complete one orbit about the Sun. Remarkably, the host star Kepler-444 is more than 11 billion years old and formed when the Milky Way was only 2 billion years old. So the heavier elements needed to make rocky planets must have already been available then. This ancient planetary system sets the clock on the beginning of rocky planet formation to be relatively soon after the formation of our Galaxy.
Kepler data demonstrate that while rocky planets inside Mercury’s orbit are missing from our solar system, they are common around other stars, like Kepler-444. When the first systems packed with close-in rocky planets were discovered, we wondered why they were so different from our solar system. When many such systems were discovered, we began to wonder if it was our solar system that was different. This led to speculation that additional rocky planets might once have existed close to the Sun in our solar system.
There is some evidence from the motions in the outer solar system that Jupiter may have migrated inward long ago. If correct, then gravitational perturbations from Jupiter could have destabilized the orbits of close-in rocky planets, causing them to fall into the Sun. Consistent with this picture, astronomers now think that Uranus and Neptune probably did not form at their present distances from the Sun but rather closer to where Jupiter and Saturn are now. The reason for this idea is that the density in the disk of matter surrounding the Sun at the time the planets formed was so low outside the orbit of Saturn that it would take several billion years to build up Uranus and Neptune. Yet we saw earlier in the chapter that the disks around protostars survive only a few million years.
Therefore, scientists have developed computer models demonstrating that Uranus and Neptune could have formed near the current locations of Jupiter and Saturn, and then been kicked out to larger distances through gravitational interactions with their neighbors. All these wonderful new observations illustrate how dangerous it can be to draw conclusions about a phenomenon in science (in this case, how planetary systems form and arrange themselves) when you are only working with a single example.
Exoplanets have given rise to a new picture of planetary system formation—one that is much more chaotic than we originally thought. If we think of the planets as being like skaters in a rink, our original model (with only our own solar system as a guide) assumed that the planets behaved like polite skaters, all obeying the rules of the rink and all moving in nearly the same direction, following roughly circular paths. The new picture corresponds more to a roller derby, where the skaters crash into one another, change directions, and sometimes are thrown entirely out of the rink.
While thousands of exoplanets have been discovered in the past two decades, every observational technique has fallen short of finding more than a few candidates that resemble Earth (Figure 21.27). Astronomers are not sure exactly what properties would define another Earth. Do we need to find a planet that is exactly the same size and mass as Earth? That may be difficult and may not be important from the perspective of habitability. After all, we have no reason to think that life could not have arisen on Earth if our planet had been a little bit smaller or larger. And, remember that how habitable a planet is depends on both its distance from its star and the nature of its atmosphere. The greenhouse effect can make some planets warmer (as it did for Venus and is doing more and more for Earth).
We can ask other questions to which we don’t yet know the answers. Does this “twin” of Earth need to orbit a solar-type star, or can we consider as candidates the numerous exoplanets orbiting K- and M-class stars? (In the summer of 2016, astronomers reported the discovery of a planet with at least 1.3 times the mass of Earth around the nearest star, Proxima Centauri, which is spectral type M and located 4.2 light years from us.) We have a special interest in finding planets that could support life like ours, in which case, we need to find exoplanets within their star’s habitable zone, where surface temperatures are consistent with liquid water on the surface. This is probably the most important characteristic defining an Earth-analog exoplanet.
The search for potentially habitable worlds is one of the prime drivers for exoplanet research in the next decade. Astronomers are beginning to develop realistic plans for new instruments that can even look for signs of life on distant worlds (examining their atmospheres for gases associated with life, for example). If we require telescopes in space to find such worlds, we need to recognize that years are required to plan, build, and launch such space observatories. The discovery of exoplanets and the knowledge that most stars have planetary systems are transforming our thinking about life beyond Earth. We are closer than ever to knowing whether habitable (and inhabited) planets are common. This work lends a new spirit of optimism to the search for life elsewhere, a subject to which we will return in Life in the Universe.
Key Concepts and Summary
The ensemble of exoplanets is incredibly diverse and has led to a revision in our understanding of planet formation that includes the possibility of vigorous, chaotic interactions, with planet migration and scattering. It is possible that the solar system is unusual (and not representative) in how its planets are arranged. Many systems seem to have rocky planets farther inward than we do, for example, and some even have “hot Jupiters” very close to their star. Ambitious space experiments should make it possible to image earthlike planets outside the solar system and even to obtain information about their habitability as we search for life elsewhere. |
Here are the notes for CBSE Class 10 Maths Chapter 2 Polynomial. With several examples, we will cover everything from what is a polynomial and its kinds to algebraic expressions, degree of a polynomial expression, graphical representation of polynomial equations, factorization, and the link between zeroes and the coefficient of a polynomial.
Variables and constants, as well as mathematical operators, make up an algebraic expression.
An algebraic expression is a collection of concepts that serve as expression building blocks.
Variables and constants are combined to form a term. In some cases, a term can be an algebraic expression in itself.
Examples of a term:
– 5, which is just a constant.
– 7x, which is the product of the constant ‘7’ and the variable ‘x’.
– 2xy, which is the product of the constant ‘2’ and the variables ‘x’ and ‘y’.
– 8xy², which is the product of 8, x, y and y.
The constant factor in each term is referred to as the coefficient.
Example of an algebraic expression: 2x²y + 6xy + 3x + 9, which is the sum of four terms: 2x²y, 6xy, 3x and 9.
Any number of terms can be used in an algebraic expression. Each term's coefficient can be any real number. Any number of variables can be found in an algebraic expression. The variables' exponents, on the other hand, must be rational values.
Exponents of rational numbers can be found in algebraic expressions. A polynomial, on the other hand, is an algebraic expression with a whole number as its exponent on any variable.
8x³ + 2x + 5 is an example of a polynomial as well as an algebraic expression.
3x + 5√x is not a polynomial, as the exponent on x is 1/2, which is not a whole number, but it is an example of an algebraic expression.
Degree of a Polynomial
The degree of a polynomial in one variable is equal to the largest exponent on the variable in the polynomial.
Example: The degree of the polynomial 3x² + x + 5 is 2, as the highest power of x in the given expression is x².
Types Of Polynomials
Polynomials can be categorised based upon:
a) Number of terms
b) Degree of the polynomial.
Different types of polynomials based upon the number of terms in them: a monomial has one term, a binomial has two terms, and a trinomial has three terms.
Types of Polynomials based upon Degree
A linear polynomial is a polynomial of degree one.
For example, 3x + 5 is a linear polynomial.
A quadratic polynomial is a polynomial of degree two.
For example, 5x² + 3x + 6 is a quadratic polynomial.
A cubic polynomial is a polynomial of degree three.
For example, 2x³ + 5x² + 9x + 15 is a cubic polynomial.
Zeroes of a polynomial
A value of x for which p(x) equals 0 is called a zero of the polynomial p(x). If k is a zero of p(x), then p(k) = 0.
Number of Zeros
Generally, a polynomial of degree n can have at most n zeroes.
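As a quick illustration of the definition, here is a minimal Python check using a made-up quadratic whose zeroes are easy to verify by substitution (the polynomial itself is not from these notes):

```python
# p(x) = x^2 - 5x + 6 is an illustrative polynomial with zeroes 2 and 3.
def p(x):
    return x**2 - 5*x + 6

print(p(2), p(3))   # 0 0, so both 2 and 3 satisfy p(k) = 0 and are zeroes of p(x)
print(p(4))         # 2, so 4 is not a zero
```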
Factorization of Polynomials
By separating the middle term, quadratic polynomials can be factorized.
For example, consider the polynomial 6x² + 17x + 5.
Splitting the middle term:
As we can see, 17x is the middle term in the polynomial 6x² + 17x + 5. 17x needs to be expressed as a sum of two terms such that the product of their coefficients is equal to the product of 6 and 5 (the coefficient of x² and the constant term).
17x can be expressed as 2x + 15x, since 2 × 15 = 30.
Now, we will identify the common factors in the individual groups: 6x² + 2x + 15x + 5 = 2x(3x + 1) + 5(3x + 1).
Now we can express it by taking (3x + 1) as the common factor: 6x² + 17x + 5 = (3x + 1)(2x + 5).
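The regrouping above can be checked with a short script. This is only a sketch and assumes the SymPy library is available; any computer algebra tool would do.

```python
from sympy import symbols, expand, factor

x = symbols('x')
poly = 6*x**2 + 17*x + 5

# Split the middle term (17x = 2x + 15x) and group, as done above.
regrouped = 2*x*(3*x + 1) + 5*(3*x + 1)

print(expand(regrouped) - expand(poly))  # 0, so the regrouping leaves the polynomial unchanged
print(factor(poly))                      # (2*x + 5)*(3*x + 1)
```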
For Quadratic Polynomial:
If α and β are the roots of a quadratic polynomial ax² + bx + c, then
α + β = -b/a
Sum of zeroes = -(coefficient of x) / (coefficient of x²)
αβ = c/a
Product of zeroes = constant term / coefficient of x²
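A quick numerical check of these two relations, using the quadratic 5x² + 3x + 6 quoted earlier as an example; this sketch assumes NumPy is available.

```python
import numpy as np

a, b, c = 5, 3, 6
alpha, beta = np.roots([a, b, c])        # the two zeroes (complex here, which is fine)

print(np.isclose(alpha + beta, -b / a))  # True: sum of zeroes equals -b/a
print(np.isclose(alpha * beta, c / a))   # True: product of zeroes equals c/a
```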
For Cubic Polynomial
If α, β and γ are the roots of a cubic polynomial ax³ + bx² + cx + d, then
α+β+γ = -b/a
αβ +βγ +γα = c/a
αβγ = -d/a
The following steps should be followed if we want to divide one polynomial by another.
Step 1: In decreasing order of their degrees, arrange the terms of the dividend and the divisor.
Step 2: Divide the highest degree term of the dividend by the highest degree term of the divisor to get the first term of the quotient. After that, complete the division procedure.
Step 3: The remainder from the previous division becomes the dividend for the next step. Repeat this method until the degree of the remainder is less than the degree of the divisor.
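The procedure can be mirrored numerically with coefficient lists. The dividend and divisor below are arbitrary illustrative polynomials, not examples from these notes, and the sketch assumes NumPy is available.

```python
import numpy as np

dividend = [3, 1, 2, 5]    # 3x^3 + x^2 + 2x + 5, coefficients in decreasing order of degree
divisor  = [1, 2, 1]       # x^2 + 2x + 1

quotient, remainder = np.polydiv(dividend, divisor)
print(quotient)    # [ 3. -5.]  -> quotient 3x - 5
print(remainder)   # [ 9. 10.]  -> remainder 9x + 10, whose degree (1) is less than the divisor's (2)
```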
Speed of sound
| Quantity | Symbol(s) |
|---|---|
| Sound pressure | p, SPL |
| Particle velocity | v, SVL |
| Sound intensity | I, SIL |
| Sound power | P, SWL |
| Sound energy density | w |
| Sound exposure | E, SEL |
| Speed of sound | c |
The speed of sound is the distance travelled per unit time by a sound wave as it propagates through an elastic medium. In dry air at 20 °C (68 °F), the speed of sound is 343.2 metres per second (1,126 ft/s; 1,236 km/h; 768 mph; 667 kn), or a kilometre in 2.914 s or a mile in 4.689 s.
The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior.
In common everyday speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: sound travels most slowly in gases; it travels faster in liquids; and faster still in solids. For example (as noted above), sound travels at 343.2 m/s in air; it travels at 1,484 m/s in water (4.3 times as fast as in air); and at 5,120 m/s in iron. In an exceptionally stiff material such as diamond, sound travels at 12,000 m/s, which is around the maximum speed that sound will travel under normal conditions.
Sound waves in solids are composed of compression waves (just as in gases and liquids), but there is also a different type of sound wave called a shear wave, which occurs only in solids. These different types of waves in solids usually travel at different speeds, as exhibited in seismology. The speed of a compression sound wave in solids is determined by the medium's compressibility, shear modulus and density. The speed of shear waves is determined only by the solid material's shear modulus and density.
In fluid dynamics, the speed of sound in a fluid medium (gas or liquid) is used as a relative measure for the speed of an object moving through the medium. The ratio of the speed of an object to the speed of sound in the fluid is called the object's Mach number. Objects moving at speeds greater than Mach 1 are said to be traveling at supersonic speeds.
- 1 History
- 2 Basic concept
- 3 Equations
- 4 Dependence on the properties of the medium
- 5 Altitude variation and implications for atmospheric acoustics
- 6 Practical formula for dry air
- 7 Details
- 8 Effect of frequency and gas composition
- 9 Mach number
- 10 Experimental methods
- 11 Non-gaseous media
- 12 Gradients
- 13 See also
- 14 References
- 15 External links
Sir Isaac Newton computed the speed of sound in air as 979 feet per second (298 m/s), which is too low by about 15%; he had neglected the effect of the temperature fluctuations in a sound wave, an omission later rectified by Laplace.
During the 17th century, there were several attempts to measure the speed of sound accurately, including attempts by Marin Mersenne in 1630 (1,380 Parisian feet per second), Pierre Gassendi in 1635 (1,473 Parisian feet per second) and Robert Boyle (1,125 Parisian feet per second).
In 1709, the Reverend William Derham, Rector of Upminster, published a more accurate measure of the speed of sound, at 1,072 Parisian feet per second. Derham used a telescope from the tower of the church of St Laurence, Upminster to observe the flash of a distant shotgun being fired, and then measured the time until he heard the gunshot with a half second pendulum. Measurements were made of gunshots from a number of local landmarks, including North Ockendon church. The distance was known by triangulation, and thus the speed that the sound had travelled was calculated.
The transmission of sound can be illustrated by using a model consisting of an array of balls interconnected by springs. In a real material the balls represent molecules and the springs represent the bonds between them. Sound passes through the model by compressing and expanding the springs, transmitting energy to neighbouring balls, which transmit energy to their springs, and so on. The speed of sound through the model depends on the stiffness of the springs, and the mass of the balls. As long as the spacing of the balls remains constant, stiffer springs transmit energy more quickly, and more massive balls transmit energy more slowly. Effects like dispersion and reflection can also be understood using this model.
In a real material, the stiffness of the springs is called the elastic modulus, and the mass corresponds to the density. All other things being equal (ceteris paribus), sound will travel more slowly in spongy materials, and faster in stiffer ones. For instance, sound will travel 1.59 times faster in nickel than in bronze, due to the greater stiffness of nickel at about the same density. Similarly, sound travels about 1.41 times faster in light hydrogen (protium) gas than in heavy hydrogen (deuterium) gas, since deuterium has similar properties but twice the density. At the same time, "compression-type" sound will travel faster in solids than in liquids, and faster in liquids than in gases, because the solids are more difficult to compress than liquids, while liquids in turn are more difficult to compress than gases.
Some textbooks mistakenly state that the speed of sound increases with increasing density. This is usually illustrated by presenting data for three materials, such as air, water and steel, which also have vastly different compressibilities which more than make up for the density differences. An illustrative example of the two effects is that sound travels only 4.3 times faster in water than air, despite enormous differences in compressibility of the two media. The reason is that the larger density of water, which works to slow sound in water relative to air, nearly makes up for the compressibility differences in the two media.
Compression and shear waves
In a gas or liquid, sound consists of compression waves. In solids, waves propagate as two different types. A longitudinal wave is associated with compression and decompression in the direction of travel, and is the same process in gases and liquids, with an analogous compression-type wave in solids. Only compression waves are supported in gases and liquids. An additional type of wave, the transverse wave, also called a shear wave, occurs only in solids because only solids support elastic deformations. It is due to elastic deformation of the medium perpendicular to the direction of wave travel; the direction of shear-deformation is called the "polarization" of this type of wave. In general, transverse waves occur as a pair of orthogonal polarizations.
These different waves (compression waves and the different polarizations of shear waves) may have different speeds at the same frequency. Therefore, they arrive at an observer at different times, an extreme example being an earthquake, where sharp compression waves arrive first, and rocking transverse waves seconds later.
The speed of a compression wave in fluid is determined by the medium's compressibility and density. In solids, the compression waves are analogous to those in fluids, depending on compressibility, density, and the additional factor of shear modulus. The speed of shear waves, which can occur only in solids, is determined simply by the solid material's shear modulus and density.
The speed of sound in mathematical notation is conventionally represented by c, from the Latin celeritas meaning "velocity".
In general, the speed of sound c is given by the Newton–Laplace equation:

c = √(Ks / ρ)

where:
- Ks is a coefficient of stiffness, the isentropic bulk modulus (or the modulus of bulk elasticity for gases);
- ρ is the density.
Thus the speed of sound increases with the stiffness (the resistance of an elastic body to deformation by an applied force) of the material, and decreases with the density. For ideal gases the bulk modulus K is simply the gas pressure multiplied by the dimensionless adiabatic index, which is about 1.4 for air under normal conditions of pressure and temperature. More generally, for a fluid with an equation of state p(ρ, s), the speed of sound can be written as

c² = (∂p/∂ρ)s

where:
- p is the pressure;
- ρ is the density and the derivative is taken isentropically, that is, at constant entropy s.
In a non-dispersive medium, the speed of sound is independent of sound frequency, so the speeds of energy transport and sound propagation are the same for all frequencies. Air, a mixture of oxygen and nitrogen, constitutes a non-dispersive medium. However, air does contain a small amount of CO2, which is a dispersive medium and causes dispersion in air at ultrasonic frequencies (> 28 kHz).
In a dispersive medium, the speed of sound is a function of sound frequency, through the dispersion relation. Each frequency component propagates at its own speed, called the phase velocity, while the energy of the disturbance propagates at the group velocity. The same phenomenon occurs with light waves; see optical dispersion for a description.
Dependence on the properties of the medium
The speed of sound is variable and depends on the properties of the substance through which the wave is travelling. In solids, the speed of transverse (or shear) waves depends on the shear deformation under shear stress (called the shear modulus), and the density of the medium. Longitudinal (or compression) waves in solids depend on the same two factors with the addition of a dependence on compressibility.
In fluids, only the medium's compressibility and density are the important factors, since fluids do not transmit shear stresses. In heterogeneous fluids, such as a liquid filled with gas bubbles, the density of the liquid and the compressibility of the gas affect the speed of sound in an additive manner, as demonstrated in the hot chocolate effect.
In gases, adiabatic compressibility is directly related to pressure through the heat capacity ratio (adiabatic index), while pressure and density are inversely related to the temperature and molecular weight, thus making only the completely independent properties of temperature and molecular structure important (heat capacity ratio may be determined by temperature and molecular structure, but simple molecular weight is not sufficient to determine it).
In low molecular weight gases such as helium, sound propagates faster as compared to heavier gases such as xenon. For monatomic gases, the speed of sound is about 75% of the mean speed that the atoms move in that gas.
For a given ideal gas the molecular composition is fixed, and thus the speed of sound depends only on its temperature. At a constant temperature, the gas pressure has no effect on the speed of sound, since the density will increase, and since pressure and density (also proportional to pressure) have equal but opposite effects on the speed of sound, and the two contributions cancel out exactly. In a similar way, compression waves in solids depend both on compressibility and density—just as in liquids—but in gases the density contributes to the compressibility in such a way that some part of each attribute factors out, leaving only a dependence on temperature, molecular weight, and heat capacity ratio which can be independently derived from temperature and molecular composition (see derivations below). Thus, for a single given gas (assuming the molecular weight does not change) and over a small temperature range (for which the heat capacity is relatively constant), the speed of sound becomes dependent on only the temperature of the gas.
In the regime of non-ideal gas behavior, for which the van der Waals gas equation would be used, the proportionality is not exact, and there is a slight dependence of sound velocity on the gas pressure.
Humidity has a small but measurable effect on the speed of sound (causing it to increase by about 0.1%–0.6%), because oxygen and nitrogen molecules of the air are replaced by lighter molecules of water. This is a simple mixing effect.
Altitude variation and implications for atmospheric acoustics
In the Earth's atmosphere, the chief factor affecting the speed of sound is the temperature. For a given ideal gas with constant heat capacity and composition, the speed of sound is dependent solely upon temperature; see Details below. In such an ideal case, the effects of decreased density and decreased pressure of altitude cancel each other out, save for the residual effect of temperature.
Since temperature (and thus the speed of sound) decreases with increasing altitude up to 11 km, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. The decrease of the speed of sound with height is referred to as a negative sound speed gradient.
However, there are variations in this trend above 11 km. In particular, in the stratosphere above about 20 km, the speed of sound increases with height, due to an increase in temperature from heating within the ozone layer. This produces a positive speed of sound gradient in this region. Still another region of positive gradient occurs at very high altitudes, in the aptly-named thermosphere above 90 km.
Practical formula for dry air
The approximate speed of sound in dry (0% humidity) air, in meters per second, at temperatures near 0 °C, can be calculated from

c_air = (331.3 + 0.606 · θ) m/s

where θ is the temperature in degrees Celsius (°C).

This equation is derived from the first two terms of the Taylor expansion of the following more accurate equation:

c_air = 331.3 · √(1 + θ/273.15) m/s

Dividing the first part, and multiplying the second part, on the right hand side, by √273.15 gives the exactly equivalent form

c_air = 20.05 · √(θ + 273.15) m/s
The value of 331.3 m/s, which represents the speed at 0 °C (or 273.15 K), is based on theoretical (and some measured) values of the heat capacity ratio, γ, as well as on the fact that at 1 atm real air is very well described by the ideal gas approximation. Commonly found values for the speed of sound at 0 °C may vary from 331.2 to 331.6 due to the assumptions made when it is calculated. If ideal gas γ is assumed to be 7/5 = 1.4 exactly, the 0 °C speed is calculated (see section below) to be 331.3 m/s, the coefficient used above.
This equation is correct to a much wider temperature range, but still depends on the approximation of heat capacity ratio being independent of temperature, and for this reason will fail, particularly at higher temperatures. It gives good predictions in relatively dry, cold, low pressure conditions, such as the Earth's stratosphere. The equation fails at extremely low pressures and short wavelengths, due to dependence on the assumption that the wavelength of the sound in the gas is much longer than the average mean free path between gas molecule collisions. A derivation of these equations will be given in the following section.
A graph comparing results of the two equations is at right, using the slightly different value of 331.5 m/s for the speed of sound at 0 °C.
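A short Python sketch comparing the linear approximation with the square-root form it is derived from, as reconstructed above; the coefficients used are the commonly quoted 331.3 m/s at 0 °C and 0.606 m/s per °C.

```python
import math

def c_linear(theta):               # approximate form, valid for temperatures near 0 °C
    return 331.3 + 0.606 * theta

def c_accurate(theta):             # square-root form
    return 331.3 * math.sqrt(1 + theta / 273.15)

for t in (-20, 0, 20, 35):
    print(t, round(c_linear(t), 1), round(c_accurate(t), 1))
# The two formulas agree closely over everyday temperatures.
```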
Speed of sound in ideal gases and air
For an ideal gas, K (the bulk modulus in equations above, equivalent to C, the coefficient of stiffness in solids) is given by

K = γ · p

thus, from the Newton–Laplace equation above, the speed of sound in an ideal gas is given by

c = √(γ · p / ρ)
- γ is the adiabatic index also known as the isentropic expansion factor. It is the ratio of the specific heat of a gas at constant pressure to that of a gas at constant volume (Cp/Cv), and arises because a classical sound wave induces an adiabatic compression, in which the heat of the compression does not have enough time to escape the pressure pulse, and thus contributes to the pressure induced by the compression;
- p is the pressure;
- ρ is the density.
Using the ideal gas law to replace p with nRT/V, and replacing ρ with nM/V, the equation for an ideal gas becomes

cideal = √(γ · R · T / M) = √(γ · k · T / m)
- cideal is the speed of sound in an ideal gas;
- R (approximately 8.3145 J · mol−1 · K−1) is the molar gas constant (universal gas constant);
- k is the Boltzmann constant;
- γ (gamma) is the adiabatic index. At room temperature, where thermal energy is fully partitioned into rotation (rotations are fully excited) but quantum effects prevent excitation of vibrational modes, the value is 7/5 = 1.400 for diatomic molecules, according to kinetic theory. Gamma is actually experimentally measured over a range from 1.3991 to 1.403 at 0 °C, for air. Gamma is exactly 5/3 (approximately 1.667) for monatomic gases such as noble gases;
- T is the absolute temperature;
- M is the molar mass of the gas. The mean molar mass for dry air is about 0.0289645 kg/mol;
- n is the number of moles;
- m is the mass of a single molecule.
This equation applies only when the sound wave is a small perturbation on the ambient condition, and the certain other noted conditions are fulfilled, as noted below. Calculated values for cair have been found to vary slightly from experimentally determined values.
Newton famously considered the speed of sound before most of the development of thermodynamics and so incorrectly used isothermal calculations instead of adiabatic. His result was missing the factor of γ but was otherwise correct.
Numerical substitution of the above values gives the ideal gas approximation of sound velocity for gases, which is accurate at relatively low gas pressures and densities (for air, this includes standard Earth sea-level conditions). Also, for diatomic gases the use of γ = 1.4000 requires that the gas exists in a temperature range high enough that rotational heat capacity is fully excited (i.e., molecular rotation is fully used as a heat energy "partition" or reservoir); but at the same time the temperature must be low enough that molecular vibrational modes contribute no heat capacity (i.e., insignificant heat goes into vibration, as all vibrational quantum modes above the minimum-energy mode have energies too high to be populated by a significant number of molecules at this temperature). For air, these conditions are fulfilled at room temperature, and also temperatures considerably below room temperature (see tables below). See the section on gases in specific heat capacity for a more complete discussion of this phenomenon.
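As a sanity check of the ideal-gas expression and the constants quoted above, a minimal sketch:

```python
import math

gamma = 1.4000        # adiabatic index for diatomic air
R = 8.3145            # molar gas constant, J/(mol*K)
M = 0.0289645         # mean molar mass of dry air, kg/mol

def c_ideal(T):       # T is the absolute temperature in kelvin
    return math.sqrt(gamma * R * T / M)

print(round(c_ideal(273.15), 1))   # ~331.3 m/s at 0 °C
print(round(c_ideal(293.15), 1))   # ~343.2 m/s at 20 °C
```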
For air, we use a simplified symbol R* = R/M_air for the specific gas constant, so that cideal = √(γ · R* · T).

Additionally, if temperatures in degrees Celsius (°C) are to be used to calculate air speed in the region near 273 kelvin, then the Celsius temperature θ = T − 273.15 may be used. Then

c_air = √(γ · R* · (θ + 273.15)) = √(γ · R* · 273.15) · √(1 + θ/273.15)

For dry air, where θ (theta) is the temperature in degrees Celsius (°C).

Making the following numerical substitutions: R = 8.3145 is the molar gas constant in J/mole/Kelvin, M = 0.0289645 is the mean molar mass of air, in kg; and using the ideal diatomic gas value of γ = 1.4000, this gives

c_air = 331.3 · √(1 + θ/273.15) m/s

Using the first two terms of the Taylor expansion:

c_air ≈ (331.3 + 0.606 · θ) m/s

The derivation includes the first two equations given in the Practical formula for dry air section above.
Effects due to wind shear
The speed of sound varies with temperature. Since temperature and sound velocity normally decrease with increasing altitude, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. Wind shear of 4 m/(s · km) can produce refraction equal to a typical temperature lapse rate of 7.5 °C/km. Higher values of wind gradient will refract sound downward toward the surface in the downwind direction, eliminating the acoustic shadow on the downwind side. This will increase the audibility of sounds downwind. This downwind refraction effect occurs because there is a wind gradient; the sound is not being carried along by the wind.
For sound propagation, the exponential variation of wind speed with height can be defined as follows:
- U(h) is the speed of the wind at height h;
- ζ is the exponential coefficient based on ground surface roughness, typically between 0.08 and 0.52;
- dU/dH(h) is the expected wind gradient at height h.
In the 1862 American Civil War Battle of Iuka, an acoustic shadow, believed to have been enhanced by a northeast wind, kept two divisions of Union soldiers out of the battle, because they could not hear the sounds of battle only 10 km (six miles) downwind.
In the standard atmosphere:
- T0 is 273.15 K (= 0 °C = 32 °F), giving a theoretical value of 331.3 m/s (= 1086.9 ft/s = 1193 km/h = 741.1 mph = 644.0 kn). Values ranging from 331.3 to 331.6 m/s may be found in the reference literature, however;
- T20 is 293.15 K (= 20 °C = 68 °F), giving a value of 343.2 m/s (= 1126.0 ft/s = 1236 km/h = 767.8 mph = 667.2 kn);
- T25 is 298.15 K (= 25 °C = 77 °F), giving a value of 346.1 m/s (= 1135.6 ft/s = 1246 km/h = 774.3 mph = 672.8 kn).
In fact, assuming an ideal gas, the speed of sound c depends on temperature only, not on the pressure or density (since these change in lockstep for a given temperature and cancel out). Air is almost an ideal gas. The temperature of the air varies with altitude, giving the following variations in the speed of sound using the standard atmosphere—actual conditions may vary.
Given normal atmospheric conditions, the temperature, and thus speed of sound, varies with altitude:

| Altitude | Temperature | m/s | km/h | mph | kn |
|---|---|---|---|---|---|
| Sea level | 15 °C (59 °F) | 340 | 1,225 | 761 | 661 |
| 11,000 m−20,000 m (cruising altitude of commercial jets, and first supersonic flight) | −57 °C (−70 °F) | 295 | 1,062 | 660 | 573 |
| 29,000 m (flight of X-43A) | −48 °C (−53 °F) | 301 | 1,083 | 673 | 585 |
Effect of frequency and gas composition
General physical considerations
The medium in which a sound wave is travelling does not always respond adiabatically, and as a result the speed of sound can vary with frequency.
The limitations of the concept of speed of sound due to extreme attenuation are also of concern. The attenuation which exists at sea level for high frequencies applies to successively lower frequencies as atmospheric pressure decreases, or as the mean free path increases. For this reason, the concept of speed of sound (except for frequencies approaching zero) progressively loses its range of applicability at high altitudes. The standard equations for the speed of sound apply with reasonable accuracy only to situations in which the wavelength of the soundwave is considerably longer than the mean free path of molecules in a gas.
The molecular composition of the gas contributes both as the mass (M) of the molecules, and their heat capacities, and so both have an influence on speed of sound. In general, at the same molecular mass, monatomic gases have slightly higher speed of sound (over 9% higher) because they have a higher γ (5/3 = 1.66…) than diatomics do (7/5 = 1.4). Thus, at the same molecular mass, the speed of sound of a monatomic gas goes up by a factor of √((5/3)/(7/5)) = √(25/21) ≈ 1.091.
This gives the 9% difference, and would be a typical ratio for speeds of sound at room temperature in helium vs. deuterium, each with a molecular weight of 4. Sound travels faster in helium than deuterium because adiabatic compression heats helium more, since the helium molecules can store heat energy from compression only in translation, but not rotation. Thus helium molecules (monatomic molecules) travel faster in a sound wave and transmit sound faster. (Sound travels at about 70% of the mean molecular speed in gases; the figure is 75% in monatomic gases and 68% in diatomic gases).
Note that in this example we have assumed that temperature is low enough that heat capacities are not influenced by molecular vibration (see heat capacity). However, vibrational modes simply cause gammas which decrease toward 1, since vibration modes in a polyatomic gas give the gas additional ways to store heat which do not affect temperature, and thus do not affect molecular velocity and sound velocity. Thus, the effect of higher temperatures and vibrational heat capacity acts to increase the difference between the speed of sound in monatomic vs. polyatomic molecules, with the speed remaining greater in monatomics.
Practical application to air
By far the most important factor influencing the speed of sound in air is temperature. The speed is proportional to the square root of the absolute temperature, giving an increase of about 0.6 m/s per degree Celsius. For this reason, the pitch of a musical wind instrument increases as its temperature increases.
The speed of sound is raised by humidity but decreased by carbon dioxide. The difference between 0% and 100% humidity is about 1.5 m/s at standard pressure and temperature, but the size of the humidity effect increases dramatically with temperature. The carbon dioxide content of air is not fixed, due to both carbon pollution and human breath (e.g., in the air blown through wind instruments).
The dependence on frequency and pressure are normally insignificant in practical applications. In dry air, the speed of sound increases by about 0.1 m/s as the frequency rises from 10 Hz to 100 Hz. For audible frequencies above 100 Hz it is relatively constant. Standard values of the speed of sound are quoted in the limit of low frequencies, where the wavelength is large compared to the mean free path.
Mach number, a useful quantity in aerodynamics, is the ratio of air speed to the local speed of sound. At altitude, for reasons explained, Mach number is a function of temperature. Aircraft flight instruments, however, operate using pressure differential to compute Mach number, not temperature. The assumption is that a particular pressure represents a particular altitude and, therefore, a standard temperature. Aircraft flight instruments need to operate this way because the stagnation pressure sensed by a Pitot tube is dependent on altitude as well as speed.
A range of different methods exist for measuring the speed of sound in air.
The earliest reasonably accurate estimate of the speed of sound in air was made by William Derham, and acknowledged by Isaac Newton. Derham had a telescope at the top of the tower of the Church of St Laurence in Upminster, England. On a calm day, a synchronized pocket watch would be given to an assistant who would fire a shotgun at a pre-determined time from a conspicuous point some miles away, across the countryside. This could be confirmed by telescope. He then measured the interval between seeing gunsmoke and arrival of the sound using a half-second pendulum. The distance from where the gun was fired was found by triangulation, and simple division (distance/time) provided velocity. Lastly, by making many observations, using a range of different distances, the inaccuracy of the half-second pendulum could be averaged out, giving his final estimate of the speed of sound. Modern stopwatches enable this method to be used today over distances as short as 200–400 meters, and not needing something as loud as a shotgun.
Single-shot timing methods
If a sound source and two microphones are arranged in a straight line, with the sound source at one end, then the following can be measured:
- The distance between the microphones (x), called microphone basis.
- The time of arrival between the signals (delay) reaching the different microphones (t).
Then v = x/t.
Kundt's tube is an example of an experiment which can be used to measure the speed of sound in a small volume. It has the advantage of being able to measure the speed of sound in any gas. This method uses a powder to make the nodes and antinodes visible to the human eye. This is an example of a compact experimental setup.
A tuning fork can be held near the mouth of a long pipe which is dipping into a barrel of water. In this system it is the case that the pipe can be brought to resonance if the length of the air column in the pipe is equal to (1 + 2n)λ/4 where n is an integer. As the antinodal point for the pipe at the open end is slightly outside the mouth of the pipe it is best to find two or more points of resonance and then measure half a wavelength between these.
Here it is the case that v = fλ.
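A tiny sketch of this estimate; the tuning-fork frequency and resonance spacing below are illustrative values, not measurements from the text.

```python
f = 512                    # tuning-fork frequency, Hz (illustrative)
half_wavelength = 0.335    # spacing between successive resonance points, m (illustrative)

v = f * (2 * half_wavelength)   # v = f * lambda, with lambda twice the resonance spacing
print(round(v, 1))              # ~343.0 m/s
```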
High-precision measurements in air
The effect from impurities can be significant when making high-precision measurements. Chemical desiccants can be used to dry the air, but will in turn contaminate the sample. The air can be dried cryogenically, but this has the effect of removing the carbon dioxide as well; therefore many high-precision measurements are performed with air free of carbon dioxide rather than with natural air. A 2002 review found that a 1963 measurement by Smith and Harlow using a cylindrical resonator gave "the most probable value of the standard speed of sound to date." The experiment was done with air from which the carbon dioxide had been removed, but the result was then corrected for this effect so as to be applicable to real air. The experiments were done at 30 °C but corrected for temperature in order to report them at 0 °C. The result was 331.45 ± 0.01 m/s for dry air at STP, for frequencies from 93 Hz to 1,500 Hz.
Speed of sound in solids
In a solid, there is a non-zero stiffness both for volumetric deformations and shear deformations. Hence, it is possible to generate sound waves with different velocities dependent on the deformation mode. Sound waves generating volumetric deformations (compression) and shear deformations (shearing) are called pressure waves (longitudinal waves) and shear waves (transverse waves), respectively. In earthquakes, the corresponding seismic waves are called P-waves (primary waves) and S-waves (secondary waves), respectively. The sound velocities of these two types of waves propagating in a homogeneous 3-dimensional solid are respectively given by

csolid,p = √((K + (4/3)·G) / ρ) = √(E·(1 − ν) / (ρ·(1 + ν)·(1 − 2ν)))

csolid,s = √(G / ρ)

where:
- K is the bulk modulus of the elastic materials;
- G is the shear modulus of the elastic materials;
- E is the Young's modulus;
- ρ is the density;
- ν is Poisson's ratio.
The last quantity is not an independent one, as E = 3K(1 − 2ν). Note that the speed of pressure waves depends both on the pressure and shear resistance properties of the material, while the speed of shear waves depends on the shear properties only.
Typically, pressure waves travel faster in materials than do shear waves, and in earthquakes this is the reason that the onset of an earthquake is often preceded by a quick upward-downward shock, before arrival of waves that produce a side-to-side motion. For example, for a typical steel alloy, K = 170 GPa, G = 80 GPa and ρ = 7,700 kg/m3, yielding a compressional speed csolid,p of 6,000 m/s. This is in reasonable agreement with csolid,p measured experimentally at 5,930 m/s for a (possibly different) type of steel. The shear speed csolid,s is estimated at 3,200 m/s using the same numbers.
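The steel numbers above can be reproduced directly from the homogeneous-solid formulas quoted in this section; a minimal sketch:

```python
import math

K, G, rho = 170e9, 80e9, 7700    # bulk modulus (Pa), shear modulus (Pa), density (kg/m^3)

c_p = math.sqrt((K + 4 * G / 3) / rho)   # compressional (pressure) wave speed
c_s = math.sqrt(G / rho)                 # shear wave speed

print(round(c_p))   # ~5994 m/s, close to the 6,000 m/s quoted above
print(round(c_s))   # ~3223 m/s, close to the 3,200 m/s quoted above
```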
The speed of sound for pressure waves in stiff materials such as metals is sometimes given for "long rods" of the material in question, in which the speed is easier to measure. In rods where their diameter is shorter than a wavelength, the speed of pure pressure waves may be simplified and is given by:

c = √(E / ρ)
where E is the Young's modulus. This is similar to the expression for shear waves, save that Young's modulus replaces the shear modulus. This speed of sound for pressure waves in long rods will always be slightly less than the same speed in homogeneous 3-dimensional solids, and the ratio of the speeds in the two different types of objects depends on Poisson's ratio for the material.
Speed of sound in liquids
In a fluid the only non-zero stiffness is to volumetric deformation (a fluid does not sustain shear forces).
Hence the speed of sound in a fluid is given by

cfluid = √(K / ρ)
where K is the bulk modulus of the fluid.
In fresh water, sound travels at about 1481 m/s at 20 °C. See Technical Guides - Speed of Sound in Pure Water for an online calculator. Applications of underwater sound can be found in sonar, acoustic communication and acoustical oceanography. See Discovery of Sound in the Sea for other examples of the uses of sound in the ocean (by both man and other animals).
In salt water that is free of air bubbles or suspended sediment, sound travels at about 1500 m/s.[clarification needed] The speed of sound in seawater depends on pressure (hence depth), temperature (a change of 1 °C ~ 4 m/s), and salinity (a change of 1‰ ~ 1 m/s), and empirical equations have been derived to accurately calculate the speed of sound from these variables. Other factors affecting the speed of sound are minor. Since temperature decreases with depth while pressure and generally salinity increase, the profile of the speed of sound with depth generally shows a characteristic curve which decreases to a minimum at a depth of several hundred meters, then increases again with increasing depth (right). For more information see Dushaw et al.
A simple empirical equation for the speed of sound in sea water with reasonable accuracy for the world's oceans is due to Mackenzie:

c(T, S, z) = a1 + a2·T + a3·T² + a4·T³ + a5·(S − 35) + a6·z + a7·z² + a8·T·(S − 35) + a9·T·z³

where:
- T is the temperature in degrees Celsius;
- S is the salinity in parts per thousand;
- z is the depth in meters.
The constants a1, a2, …, a9 are

a1 = 1448.96, a2 = 4.591, a3 = −5.304 × 10⁻², a4 = 2.374 × 10⁻⁴, a5 = 1.340, a6 = 1.630 × 10⁻², a7 = 1.675 × 10⁻⁷, a8 = −1.025 × 10⁻², a9 = −7.139 × 10⁻¹³
with check value 1550.744 m/s for T = 25 °C, S = 35 parts per thousand, z = 1,000 m. This equation has a standard error of 0.070 m/s for salinity between 25 and 40 ppt. See Technical Guides. Speed of Sound in Sea-Water for an online calculator.
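The Mackenzie formula, as reconstructed above, can be checked against the stated reference value; this is a sketch only, and the coefficients should be verified against the original paper before serious use.

```python
def mackenzie_speed(T, S, z):
    """Speed of sound in sea water; T in degrees C, S in parts per thousand, z in metres."""
    return (1448.96 + 4.591*T - 5.304e-2*T**2 + 2.374e-4*T**3
            + 1.340*(S - 35) + 1.630e-2*z + 1.675e-7*z**2
            - 1.025e-2*T*(S - 35) - 7.139e-13*T*z**3)

print(round(mackenzie_speed(25, 35, 1000), 3))   # 1550.744 m/s, the check value quoted above
```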
Other equations for the speed of sound in sea water are accurate over a wide range of conditions, but are far more complicated, e.g., that by V. A. Del Grosso and the Chen-Millero-Li Equation.
Speed of sound in plasma

The speed of sound in a plasma, for the common case in which the electrons are much hotter than the ions, is given approximately by the ion acoustic speed

cs = √(γ · Z · k · Te / mi) ≈ 9.79 × 10³ · √(γ · Z · Te / μ) m/s (with Te expressed in electronvolts)

where:
- mi is the ion mass;
- μ is the ratio of ion mass to proton mass μ = mi/mp;
- Te is the electron temperature;
- Z is the charge state;
- k is Boltzmann constant;
- γ is the adiabatic index.
In contrast to a gas, the pressure and the density are provided by separate species, the pressure by the electrons and the density by the ions. The two are coupled through a fluctuating electric field.
When sound spreads out evenly in all directions in three dimensions, the intensity drops in proportion to the inverse square of the distance. However, in the ocean there is a layer called the 'deep sound channel' or SOFAR channel which can confine sound waves at a particular depth.
In the SOFAR channel, the speed of sound is lower than that in the layers above and below. Just as light waves will refract towards a region of higher index, sound waves will refract towards a region where their speed is reduced. The result is that sound gets confined in the layer, much the way light can be confined in a sheet of glass or optical fiber. Thus, the sound is confined in essentially two dimensions. In two dimensions the intensity drops in proportion to only the inverse of the distance. This allows waves to travel much further before being undetectably faint.
- Acoustoelastic effect
- Elastic wave
- Second sound
- Sonic boom
- Sound barrier
- Underwater acoustics
- Speed of Sound
- "The Speed of Sound". mathpages.com. Retrieved 3 May 2015.
- Bannon, Mike; Kaputa, Frank. "The Newton–Laplace Equation and Speed of Sound". Thermal Jackets. Retrieved 3 May 2015.
- Murdin, Paul (Dec 25, 2008). Full Meridian of Glory: Perilous Adventures in the Competition to Measure the Earth. Springer Science & Business Media. pp. 35–36. ISBN 9780387755342.
- Fox, Tony (2003). Essex Journal. Essex Arch & Hist Soc. pp. 12–16.
- Dean, E. A. (August 1979). Atmospheric Effects on the Speed of Sound, Technical report of Defense Technical Information Center
- Everest, F. (2001). The Master Handbook of Acoustics. New York: McGraw-Hill. pp. 262–263. ISBN 0-07-136097-2.
- "CODATA Value: molar gas constant". Physics.nist.gov. Retrieved 24 October 2010.
- U.S. Standard Atmosphere, 1976, U.S. Government Printing Office, Washington, D.C., 1976.
- Uman, Martin (1984). Lightning. New York: Dover Publications. ISBN 0-486-64575-4.
- Volland, Hans (1995). Handbook of Atmospheric Electrodynamics. Boca Raton: CRC Press. p. 22. ISBN 0-8493-8647-0.
- Singal, S. (2005). Noise Pollution and Control Strategy. Oxford: Alpha Science International. p. 7. ISBN 1-84265-237-0.
It may be seen that refraction effects occur only because there is a wind gradient and it is not due to the result of sound being convected along by the wind.
- Bies, David (2004). Engineering Noise Control, Theory and Practice. London: Spon Press. p. 235. ISBN 0-415-26713-7.
As wind speed generally increases with altitude, wind blowing towards the listener from the source will refract sound waves downwards, resulting in increased noise levels.
- Cornwall, Sir (1996). Grant as Military Commander. New York: Barnes & Noble. p. 92. ISBN 1-56619-913-1.
- Cozens, Peter (2006). The Darkest Days of the War: the Battles of Iuka and Corinth. Chapel Hill: The University of North Carolina Press. ISBN 0-8078-5783-1.
- A B Wood, A Textbook of Sound (Bell, London, 1946)
- "Speed of Sound in Air". Phy.mtu.edu. Retrieved 13 June 2014.
- Nemiroff, R.; Bonnell, J., eds. (19 August 2007). "A Sonic Boom". Astronomy Picture of the Day. NASA. Retrieved 24 October 2010.
- Zuckerwar, Handbook of the speed of sound in real gases, p. 52
- L. E. Kinsler et al. (2000), Fundamentals of acoustics, 4th Ed., John Wiley and sons Inc., New York, USA
- J. Krautkrämer and H. Krautkrämer (1990), Ultrasonic testing of materials, 4th fully revised edition, Springer-Verlag, Berlin, Germany, p. 497
- "Speed of Sound in Water at Temperatures between 32–212 oF (0–100 oC) — imperial and SI units". The Engineering Toolbox.
- APL-UW TR 9407 High-Frequency Ocean Environmental Acoustic Models Handbook, pp. I1-I2.
- "How Fast Does Sound Travel?". Discovery of Sound in the Sea. University of Rhode Island. Retrieved 30 November 2010.
- Dushaw, Brian D.; Worcester, P. F.; Cornuelle, B. D.; Howe, B. M. (1993). "On Equations for the Speed of Sound in Seawater". Journal of the Acoustical Society of America. 93 (1): 255–275. Bibcode:1993ASAJ...93..255D. doi:10.1121/1.405660.
- Kenneth V., Mackenzie (1981). "Discussion of sea-water sound-speed determinations". Journal of the Acoustical Society of America. 70 (3): 801–806. Bibcode:1981ASAJ...70..801M. doi:10.1121/1.386919.
- Del Grosso, V. A. (1974). "New equation for speed of sound in natural waters (with comparisons to other equations)". Journal of the Acoustical Society of America. 56 (4): 1084–1091. Bibcode:1974ASAJ...56.1084D. doi:10.1121/1.1903388.
- Meinen, Christopher S.; Watts, D. Randolph (1997). "Further Evidence that the Sound-Speed Algorithm of Del Grosso Is More Accurate Than that of Chen and Millero". Journal of the Acoustical Society of America. 102 (4): 2058–2062. Bibcode:1997ASAJ..102.2058M. doi:10.1121/1.419655.
- Calculation: Speed of Sound in Air and the Temperature
- Speed of sound: Temperature Matters, Not Air Pressure
- Properties of the U.S. Standard Atmosphere 1976
- The Speed of Sound
- How to Measure the Speed of Sound in a Laboratory
- Teaching Resource for 14-16 Years on Sound Including Speed of Sound
- Technical Guides. Speed of Sound in Pure Water
- Technical Guides. Speed of Sound in Sea-Water
- Did Sound Once Travel at Light Speed?
- Acoustic Properties of Various Materials Including the Speed of Sound |
Have you ever encountered a number given as the superscript of another number? At first, it can be bewildering for the students to comprehend what exactly it is and what it signifies. Well, the concept of exponents is what is being employed in this case.
When there is a situation where one needs to present a large number, it is compacted and written in the form of exponents making it easier to write and remember. In many countries, exponents are called indices, but the good news is that the concept is the same no matter what term is used.
Nevertheless, introducing exponents can be daunting because students fall into the rookie pattern of multiplying the base by the exponent. Fortunately, we can apply certain rules to simplify those expressions for a more readable appearance and more straightforward calculation.
But, while talking about exponents, the problem is not just that they are difficult to follow; many kids also feel it is just another subject whose relevance stays inside the classroom. However, did you know that concepts like exponents and powers are used more in fields such as finance and science than in any other? Here is a quick preview of how they influence multiple professions and how you use them in daily life.
Exponents: A tough nut to crack?
“New” numbers in the extension of a number system often have rules and definitions that differ from previously recognized numbers. Natural numbers, for example, are most familiar to kids in their early years of school. The addition of zero, on the other hand, expands the number system from natural numbers to whole numbers and forces students to adapt or update the preceding concepts and representations in their brains, which they cannot always do.
Sixth-grade kids, for example, have trouble determining whether zero is an even number. In this regard, the process of expanding the number system is difficult for both teachers and students. Although most students think of exponents as a separate set of numbers, they simply allow us to shorten repeated multiplications of the same number.
Exponents are difficult to grasp because they demand consideration of the relationship between symbols, meanings, and the algorithmic features of exponentiation. In this process, procedural knowledge is insufficient to do the necessary calculations to determine the value of exponential expressions without comprehending the reasoning behind algorithms and the number system hierarchy.
For example, when calculating the numerical value of an exponential expression, students frequently multiply the base by the exponent. Exponentiation, on the other hand, follows laws concerning the base and the power. This arrangement causes problems because the kids become confused and fail to recall the rules. In support of this notion, studies have demonstrated that students see exponents as hard and challenging concepts that are disconnected from ordinary life.
Uses of exponents in real life
In the previous section, we mentioned that kids don’t find hard math concepts such as exponents worth their while, primarily because they think the concepts won’t help them anywhere in real life. However, to break that myth, we are here to help kids see the use of exponents in day-to-day life.
1. Scientific Scales
Any time a scientific field uses a scale, like the pH scale or the Richter scale, you can bet you will find exponents. This is because the pH scale and the Richter scale are logarithmic relationships, with each whole number representing a ten-fold increase from the number before. Even in chemistry, exponents are widely used, for instance to express the masses of protons and electrons.
For example, when chemists say a substance has a pH of 7, they know its hydrogen-ion concentration is 10⁻⁷ mol/L, while a substance with a pH of 8 has a concentration of 10⁻⁸ mol/L. This means that the substance with a pH of 8 is 10 times more basic than the substance with a pH of 7.
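To see the arithmetic concretely, here is a minimal Python sketch (the function name is ours, chosen just for this example); it uses the standard definition of pH as the negative base-10 exponent of the hydrogen-ion concentration:

```python
def hydrogen_ion_concentration(ph):
    """Hydrogen-ion concentration in mol/L for a given pH (pH = -log10 of concentration)."""
    return 10 ** (-ph)

# Each whole pH unit is a factor of ten in concentration:
ratio = hydrogen_ion_concentration(7) / hydrogen_ion_concentration(8)
print(ratio)  # 10.0 -> the pH 8 substance is ten times more basic than the pH 7 one
```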
2. Taking Measurements
Taking measurements and calculating multi-dimensional quantities can be another real-world application of exponents. Because the area is a two-dimensional measure of space (length x breadth), it is usually measured in square units such as square feet or square meters. When calculating the area of a garden bed in feet, for example, you should supply the solution in square feet or ft² using an exponent.
Similarly, volume is a three-dimensional measure of space (length x breadth x height); hence it is always measured in cubic units such as cubic feet or cubic meters. So, for example, if you wanted to compute the volume of a greenhouse, you would use an exponent to provide the answer in cubic feet or ft³.
Science fields such as biology and physics work with such small distances that additional units are required. A micrometer is 1×10⁻⁶ of a meter. It is often used in biology to quantify bacteria and infrared radiation wavelengths. It is also known as a micron and is denoted by the symbol µm. There are also nanometers (1×10⁻⁹ of a meter), picometers (1×10⁻¹² of a meter), femtometers (1×10⁻¹⁵ of a meter), and attometers (1×10⁻¹⁸ of a meter).
3. Computer Memory
Another valid use of exponents is when speaking about computers. For example, a computer’s memory involves numbers with many significant digits; with the help of exponents, you can describe the memory far more compactly.
Other uses of exponents in computers are data entry, programming, calculation programs, and much more. Can you imagine a programming application without exponents? Indeed, the usage is undeniable. The prime example of exponents in computers is in measuring memory, e.g. 1 GB = 10⁹ bytes.
4. Earthquake Intensity
The Richter scale was for years used to describe the energy released by earthquakes. Currently, the most common measure is the Moment Magnitude Scale, which follows the same mathematical idea. To obtain a rating, record the amplitude of the vibration in millimetres as ten raised to an exponent x, then add 3 to that exponent. For example, if the amplitude is 100 mm, rewrite it as 10². Adding 3 to the exponent gives a Richter-scale rating of 2 + 3 = 5.
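As a rough sketch of the rule just described (the function name is ours, and this is the simplified textbook version rather than a full seismological formula):

```python
import math

def simple_magnitude(amplitude_mm):
    """Base-10 exponent of the recorded amplitude in millimetres, plus 3."""
    return math.log10(amplitude_mm) + 3

print(simple_magnitude(100))   # 100 mm = 10^2, so 2 + 3 = 5.0
print(simple_magnitude(1000))  # 1000 mm = 10^3, so 3 + 3 = 6.0
```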
5. Finance: Compound Interest
Compound interest is calculated with the help of exponential functions. Whenever money is invested (or lent out), a specific sum is added to the account balance at regular intervals. Interest is the amount of money that is added to the balance. Once that interest has been added, it too accrues interest during the subsequent compounding period.
Formula for compound interest: A = P(1 + r/n)^(nt)
Where, A = amount || P = principal || r = rate of interest || n = number of times interest is compounded per year || t = time (in years)
Compound interest refers to the concept of earning interest on interest. There are numerous ways to pay interest. The first method, as previously mentioned, is compounding yearly, in which the interest is paid once a year. However, interest can compound more frequently: compounding semi-annually (twice a year), quarterly (four times a year), monthly (12 times a year), weekly (52 times a year), or even daily (365 times a year) is widespread practice.
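Here is a small Python sketch of the formula above; the figures are illustrative, not taken from any real account:

```python
def compound_amount(principal, rate, times_per_year, years):
    """A = P(1 + r/n)^(nt)"""
    return principal * (1 + rate / times_per_year) ** (times_per_year * years)

# 1,000 invested at 5% for 10 years, compounded monthly vs. yearly:
print(round(compound_amount(1000, 0.05, 12, 10), 2))  # about 1647.01
print(round(compound_amount(1000, 0.05, 1, 10), 2))   # about 1628.89
```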
Tips to simplify exponents
Exponents are a difficult concept; teaching them in old, traditional ways can confuse kids. Therefore, check below some creative ways to simplify exponents and make them easy and fun to learn.
1. Introduce learning activities
Learning a concept while performing activities can be an ideal way to teach kids the concept of exponents. You can engage them in a little folding origami game or deck up the cards to allow them to learn how to power up a number in a rather funny manner. If you are looking for some fun activities to teach kids exponents, check here.
2. Take help from online games.
Exponents are tricky, but playing games to learn exponents can provide plenty of practice! Teachers frequently struggle to explain to students the difference between, say, A times n and A to the power n (that is, between multiplying by n and raising to the power n). Math beginners can learn to understand and apply the concept of exponents confidently and fluently with learning tools like online games.
3. Let them know the relevance of the entire concept
When you have a picky learner, you need to make them understand what the concept means and how it will affect them in the future. Allow them to figure out its essence using multiple examples. Check the real-life applications discussed above if you are having difficulty picturing real-time examples.
Students frequently wonder if they will ever need to use their math skills in real life. They presumably grasp the importance of elementary arithmetic concepts like addition and multiplication, but by middle school, some kids may be wondering why they should even learn subjects like square roots or integers.
However, exponents are not just a mathematical concept that kids need to learn to pass a test. As you might have seen, in most fields, be it science, mathematics, or finance, exponents are widely used. Thus, incorporate fun activities and make them understand the subject’s relevance before applying the theoretical approach.
- Iymen, Esra & Duatepe-Paksu, Asuman (2015). Analysis of 8th Grade Students’ Number Sense Related to the Exponents in Terms of Number Sense Components. TED EĞİTİM VE BİLİM, 40. doi:10.15390/EB.2015.2710. |
Early in 2014 NASA scientists took a significant step toward answering a question that people had wondered about for centuries: Are there worlds out there in space that can harbour life as we know it? On April 17 NASA officially announced the discovery of Kepler-186f, the first Earth-sized extrasolar planet (or exoplanet) to be found within its star’s habitable zone—the orbital region where an Earth-like planet could possess liquid water on its surface and thus possibly support life similar to that found on Earth. The planet, which was discovered in data taken by the Kepler satellite before its original mission ended in 2013, has a radius 1.11 times that of Earth. The mass of Kepler-186f is unknown; however, if it has an Earth-like composition, its mass would be 1.44 times that of Earth. It was the fifth planet discovered around its star, a dim red dwarf 500 light-years from Earth with a mass 0.48 times that of the Sun. Kepler-186f orbits its star every 129.9 days at a distance of 53.9 million km (33.5 million mi). It receives only 32% of the amount of light that Earth receives from the Sun, but water could exist in a liquid state if its atmosphere has sufficient amounts of carbon dioxide. (The other four planets in the system are Earth-sized; however, they orbit much closer to the star and thus are not within the habitable zone.)
The Kepler Mission
The discovery of Kepler-186f was the latest triumph of NASA’s Kepler satellite, which was launched in 2009. Because planets appear much fainter than the stars that they orbit, extrasolar planets are extremely difficult to detect directly. During its initial four-year mission, Kepler stared at the same patch of sky with a 95-cm (37-in) telescope until the stabilizing systems that kept the satellite pointed correctly failed. In its field of view, Kepler monitored more than 150,000 stars, seeking to detect the slight dimming during transits, as planets passed in front of their stars. Making such a detection is extremely challenging. For example, the diameter of Earth is only 1/109th that of the Sun, so for an outside observer of the solar system, the passage of Earth would dim the Sun by only 0.008%. Also, a planetary system must have the correct alignment so that a planet’s orbital plane will pass in front of its star. Nevertheless, Kepler’s instruments were sufficiently precise that it could detect the dimming caused by an Earth-sized planet.
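The quoted transit depth can be checked with a couple of lines of Python; the radii below are round published values, used here only for illustration:

```python
earth_radius_km = 6_371   # mean radius of Earth (approximate)
sun_radius_km = 696_000   # radius of the Sun (approximate)

# The fractional dimming during a transit is the square of the radius ratio.
depth = (earth_radius_km / sun_radius_km) ** 2
print(f"{depth:.3%}")  # about 0.008%
```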
By mid-2014 the Kepler mission had discovered 989 planets. One of those, Kepler-22b, has a radius 2.4 times that of Earth and was the first planet found within the habitable zone of a star similar to the Sun. Kepler-20e and Kepler-20f were the first Earth-sized planets to be found (their radii are 0.87 and 1.03 times the radius of Earth, respectively). Kepler-9b and Kepler-9c were the first two planets observed transiting the same star. NASA announced that Kepler observations had yielded 4,234 planetary candidates that needed to be confirmed with subsequent observations. More than 40% of the candidate planets were found in systems with other candidates. About 90% of those candidate planets are smaller than Neptune—the smallest of the solar system’s gas giants, with a radius 3.8 times that of Earth. Some of those candidate planets were found within the habitable zones of their stars, and some are smaller than two Earth radii. Thus, more planets like Kepler-186f are likely to be discovered.
Despite Kepler-186f’s size and location in the habitable zone, it is more an “Earth cousin” than an “Earth twin.” It orbits a small red dwarf star that emits almost all of its luminosity at infrared wavelengths, which may be difficult for life to harness. Such stars also typically display larger luminosity variations than do Sun-type stars. In addition, in order for a planet to remain within the habitable zone of a faint star, it would have to orbit so close that tidal forces raised on the planet would cause the same hemisphere always to face the star (just as the Moon’s near side always faces Earth). As a result, there would be no day-night cycle, and the planet’s atmosphere—unless it was sufficiently thick with greenhouse gases such as carbon dioxide—would freeze onto the surface of the cold, perpetually dark hemisphere. (If the planet had a sufficiently massive atmosphere, winds would redistribute heat and the atmosphere would not freeze.) However, Kepler-186f is far enough away from its star that it may not be tidally locked.
Even if liquid water does not exist on Kepler-186f’s surface, it still could sustain some life. Liquid water is essential to all life on Earth, so the definition of a habitable zone is based on the hypothesis that extraterrestrial life would share that requirement. That is a very conservative (but observationally useful) definition, as a planet’s surface temperature depends not only on its proximity to its star but also on such previously mentioned factors as its atmospheric greenhouse gases, its reflectivity, and its atmospheric or oceanic circulation. Moreover, internal energy sources such as radioactive decay and tidal heating can warm a planet’s surface to the melting point of water. Such energy sources can also maintain subsurface reservoirs of liquid water, so that a planet could contain life without being within its star’s habitable zone. Earth, for instance, has a thriving subsurface biosphere, albeit one that is composed almost exclusively of simple organisms that can survive in oxygen-poor environments. Jupiter’s moon Europa has a liquid-water ocean tens of kilometres below its surface that may well be habitable for some organisms.
The Search Continues
What would it take to find a true Earth twin, a world 12,900 km (8,000 mi) across, with a mass of 6 × 10²⁴ kg, orbiting its yellow main-sequence star every year at a distance of 150 million km (93 million mi), and with liquid water on its surface? Kepler scientists confirm that they have much more data yet to analyze and that an Earth twin still could be identified. Furthermore, Kepler got a reprieve: although the satellite can no longer observe the same spot of sky for 365 days a year, it retains the ability to stay pointed at the same spot of sky for 75 days before being adjusted to look at a new target area for another 75 days. An extended mission, K2, was approved in 2014.
A bigger search mission will become the job of the Transiting Exoplanet Survey Satellite (TESS), scheduled for launch in 2017. TESS, which will survey the entire sky with an emphasis on the nearest and brightest stars similar to the Sun, is expected to see not just a few Earth-like planets but thousands. Meanwhile, the James Webb Space Telescope (JWST), a satellite scheduled for a 2018 launch, may actually see a habitable planet. The JWST, the successor to the Hubble Space Telescope, will have the ability to block the light from a planet’s star and take a spectrum to see the composition of the planet. It could even discern seasonal changes by detecting the difference between summer and winter. It is still not known if humanity is alone in the universe, but knowing where life could be is only years away. |
In geometry, the statement that the angles opposite the equal sides of an isosceles triangle are themselves equal is known as the pons asinorum (Latin: [ˈpõːs asɪˈnoːrũː]; English: PONZ ass-i-NOR-əm), typically translated as "bridge of asses". This statement is Proposition 5 of Book 1 in Euclid's Elements, and is also known as the isosceles triangle theorem. Its converse is also true: if two angles of a triangle are equal, then the sides opposite them are also equal. The term is also applied to the Pythagorean Theorem.
The name of this statement is also used metaphorically for a problem or challenge which will separate the sure of mind from the simple, the fleet thinker from the slow, the determined from the dallier, to represent a critical test of ability or understanding. Its first known use was in 1645.
Euclid and Proclus
Euclid's statement of the pons asinorum includes a second conclusion that if the equal sides of the triangle are extended below the base, then the angles between the extensions and the base are also equal. Euclid's proof involves drawing auxiliary lines to these extensions. But, as Euclid's commentator Proclus points out, Euclid never uses the second conclusion and his proof can be simplified somewhat by drawing the auxiliary lines to the sides of the triangle instead, the rest of the proof proceeding in more or less the same way.
There has been much speculation and debate as to why Euclid added the second conclusion to the theorem, given that it makes the proof more complicated. One plausible explanation, given by Proclus, is that the second conclusion can be used in possible objections to the proofs of later propositions where Euclid does not cover every case. The proof relies heavily on what is today called side-angle-side, the previous proposition in the Elements.
Proclus' variation of Euclid's proof proceeds as follows:
Let ABC be an isosceles triangle with AB and AC being the equal sides. Pick an arbitrary point D on side AB and construct E on AC so that AD=AE. Draw the lines BE, DC and DE.
Consider the triangles BAE and CAD; BA=CA, AE=AD, and the angle A is equal to itself, so by side-angle-side, the triangles are congruent and corresponding sides and angles are equal.
Therefore angle ABE = angle ACD and angle AEB = angle ADC, and BE=CD.
Since AB=AC and AD=AE, BD=CE by subtraction of equal parts.
Now consider the triangles DBE and ECD; BD=CE, BE=CD, and angle DBE = angle ECD (these are the angles ABE and ACD just shown to be equal), so applying side-angle-side again, the triangles are congruent.
Therefore angle BDE = angle CED and angle BED = angle CDE.
Since angle BDE = angle CED and angle CDE = angle BED, angle BDC = angle CEB by subtraction of equal parts.
Consider a third pair of triangles, BDC and CEB; DB=EC, DC=EB, and angle BDC = angle CEB, so applying side-angle-side a third time, the triangles are congruent.
In particular, angle CBD = angle BCE, which was to be proved.
Proclus gives a much shorter proof attributed to Pappus of Alexandria. This is not only simpler but it requires no additional construction at all. The method of proof is to apply side-angle-side to the triangle and its mirror image. More modern authors, in imitation of the method of proof given for the previous proposition have described this as picking up the triangle, turning it over and laying it down upon itself. This method is lampooned by Charles Lutwidge Dodgson in Euclid and his Modern Rivals, calling it an "Irish bull" because it apparently requires the triangle to be in two places at once.
The proof is as follows:
Let ABC be an isosceles triangle with AB and AC being the equal sides.
Consider the triangles ABC and ACB, where ACB is considered a second triangle with vertices A, C and B corresponding respectively to A, B and C in the original triangle.
The angle A is equal to itself, AB=AC and AC=AB, so by side-angle-side, triangles ABC and ACB are congruent.
In particular, angle ABC = angle ACB.
A standard textbook method is to construct the bisector of the angle at A. This is simpler than Euclid's proof, but Euclid does not present the construction of an angle bisector until Proposition 9. So the order of presentation of Euclid's propositions would have to be changed to avoid the possibility of circular reasoning.
The proof proceeds as follows:
As before, let the triangle be ABC with AB = AC.
Construct the angle bisector of the angle at A and extend it to meet BC at X.
AB = AC and AX is equal to itself.
Furthermore, angle BAX = angle CAX, so, applying side-angle-side, triangle BAX and triangle CAX are congruent.
It follows that the angles at B and C are equal.
Legendre uses a similar construction in Éléments de géométrie, but taking X to be the midpoint of BC. The proof is similar but side-side-side must be used instead of side-angle-side, and side-side-side is not given by Euclid until later in the Elements.
In inner product spaces
The isosceles triangle theorem also holds in inner product spaces. Given vectors x, y and z with x + y + z = 0, it takes the form: if ‖x‖ = ‖y‖, then ‖x − z‖ = ‖y − z‖. Since ‖x − z‖² = ‖x‖² − 2⟨x, z⟩ + ‖z‖² and ⟨x, z⟩ = ‖x‖ ‖z‖ cos θ, where θ is the angle between the two vectors, the conclusion of this inner product space form of the theorem is equivalent to the statement about equality of angles.
Another medieval term for the pons asinorum was Elefuga which, according to Roger Bacon, comes from Greek elegia "misery", and Latin fuga "flight", that is "flight of the wretches". Though this etymology is dubious, it is echoed in Chaucer's use of the term "flemyng of wreches" for the theorem.
There are two possible explanations for the name pons asinorum, the simplest being that the diagram used resembles an actual bridge. But the more popular explanation is that it is the first real test in the Elements of the intelligence of the reader and functions as a "bridge" to the harder propositions that follow. Gauss supposedly once espoused a similar belief in the necessity of immediately understanding Euler's identity as a benchmark pursuant to becoming a first-class mathematician.
Similarly, the name Dulcarnon was given to the 47th proposition of Book I of Euclid, better known as the Pythagorean theorem, after the Arabic Dhū 'l qarnain ذُو ٱلْقَرْنَيْن, meaning "the owner of the two horns", because diagrams of the theorem showed two smaller squares like horns at the top of the figure. The term is also used as a metaphor for a dilemma. The theorem was also sometimes called "the Windmill" for similar reasons.
Uses of the pons asinorum as a metaphor include:
- Richard Aungerville's Philobiblon contains the passage "Quot Euclidis discipulos retrojecit Elefuga quasi scopulos eminens et abruptus, qui nullo scalarum suffragio scandi posset! Durus, inquiunt, est his sermo; quis potest eum audire?", which compares the theorem to a steep cliff that no ladder may help scale and asks how many would-be geometers have been turned away.
- The term pons asinorum, in both its meanings as a bridge and as a test, is used as a metaphor for finding the middle term of a syllogism.
- The 18th-century poet Thomas Campbell wrote a humorous poem called "Pons asinorum" where a geometry class assails the theorem as a company of soldiers might charge a fortress; the battle was not without casualties.
- Economist John Stuart Mill called Ricardo's Law of Rent the pons asinorum of economics.
- Pons Asinorum is the name given to a particular configuration of a Rubik's Cube.
- Eric Raymond referred to the issue of syntactically-significant whitespace in the Python programming language as its pons asinorum.
- The Finnish aasinsilta and Swedish åsnebrygga is a literary technique where a tenuous, even contrived connection between two arguments or topics, which is almost but not quite a non sequitur, is used as an awkward transition between them. In serious text, it is considered a stylistic error, since it belongs properly to the stream of consciousness- or causerie-style writing. Typical examples are ending a section by telling what the next section is about, without bothering to explain why the topics are related, expanding a casual mention into a detailed treatment, or finding a contrived connection between the topics (e.g. "We bought some red wine; speaking of red liquids, tomorrow is the World Blood Donor Day").
- In Dutch, ezelsbruggetje ('little bridge of asses') is the word for a mnemonic. The same is true for the German Eselsbrücke.
- In Czech, oslí můstek has two meanings – it can describe either a contrived connection between two topics or a mnemonic.
- Smith, David Eugene (1925). History Of Mathematics. II. Ginn And Company. pp. 284.
It formed a bridge across which fools could not hope to pass, and was therefore known as the pons asinorum, or bridge of fools.¹
1. The term is sometimes applied to the Pythagorean Theorem.
- Pons asinorum — Definition and More from the Free Merriam
- Heath pp. 251–255
- Following Proclus p. 53
- For example F. Cuthbertson Primer of geometry (1876 Oxford) p. 7
- Charles Lutwidge Dodgson, Euclid and his Modern Rivals Act I Scene II §6
- Following Proclus p. 54
- Heath p. 254 for section
- For example J.M. Wilson Elementary geometry (1878 Oxford) p. 20
- Following Wilson
- A. M. Legendre Éléments de géométrie (1876 Libr. de Firmin-Didot et Cie) p. 14
- J. R. Retherford, Hilbert Space, Cambridge University Press, 1993, page 27.
- A. F. West & H. D. Thompson "On Dulcarnon, Elefuga And Pons Asinorum as Fanciful Names For Geometrical Propositions" The Princeton University bulletin Vol. 3 No. 4 (1891) p. 84
- D.E. Smith History of Mathematics (1958 Dover) p. 284
- Derbyshire, John (2003). Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics. Washington, D.C.: Joseph Henry Press. p. 202. ISBN 0-309-08549-7.
- Charles Lutwidge Dodgson, Euclid and his Modern Rivals Act I Scene II §1
- W.E. Aytoun (Ed.) The poetical works of Thomas Campbell (1864, Little, Brown) p. 385 Google Books
- John Stuart Mill Principles of Political Economy (1866: Longmans, Green, Reader, and Dyer) Book 2, Chapter 16, p. 261
- Reid, Michael (28 October 2006). "Rubik's Cube patterns". www.cflmath.com. Archived from the original on 12 December 2012. Retrieved 22 September 2019.
- Eric S. Raymond, "Why Python?", Linux Journal, April 30, 2000
- Aasinsilta on laiskurin apuneuvo | Yle Uutiset | yle.fi
GDP, i.e. gross domestic product, refers to the aggregate market value of all the finished goods and services produced by a country. On the other hand, GNI stands for gross national income, which takes into account the country's GDP plus the net income earned abroad.
National income refers to the ultimate outcome of all economic activities of the country during a period of one year, measured in monetary terms. It is an important macroeconomic concept that indicates the level of business activity and the economic status of the nation. There are a number of measures of the national income of a country, including GDP, GNP, GNI, NDP and NNP. Of these, GDP and GNI are the most widely used measures.
To most people, these two measures are the same, but in fact there is a difference between GDP and GNI.
Content: GDP Vs GNI
| Basis for Comparison | GDP | GNI |
|---|---|---|
| Meaning | GDP refers to the official monetary measure of the aggregate output of products and services produced by the country over the course of one year. | GNI is the sum of the country's gross domestic product and the net income earned abroad during a particular accounting year. |
| Measures | Total output produced | Total income received |
| Represents | Strength of the country's economy | Economic strength of the country's nationals |
| Focuses on | Domestic production | Income generated by citizens |
Definition of GDP
The term ‘GDP’ is an abbreviation of Gross Domestic Product, which implies the market value of all finished goods and services that are produced within the domestic territory of the nation during a period of one year. Domestic territory has a broader meaning in national income accounting, and it includes:
- The territory that is lying within the nation’s political boundaries, which encompasses the territorial waters of the country.
- Ships and aircraft, run by country’s nationals between two or more countries.
- Floating platforms, fishing vessels and oil and natural gas rigs that are operated in internal waters by the country’s nationals, or that are involved in extraction in areas where the country possesses official rights of exploitation.
- Consulates, Embassies and military establishments of the country, situated in another country.
Further, income earned domestically by foreigners is added to it, while income earned overseas by the country’s nationals is deducted. GDP takes into account consumer spending, government spending, investment and net exports (i.e. exports less imports).
Definition of GNI
GNI is an acronym for gross national income, which refers to the aggregate domestic and foreign output claimed by the country’s residents during a particular fiscal year. It equals gross domestic product plus factor incomes earned abroad by the country’s residents, less income earned domestically by foreign residents. Factor income refers to the income received from supplying the factors of production, i.e. land, labour, capital and entrepreneurship.
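In code form, the relationship reads as below (a minimal sketch with made-up figures; the function name is ours):

```python
def gross_national_income(gdp, factor_income_from_abroad, factor_income_to_foreigners):
    """GNI = GDP + factor income earned abroad by residents
             - factor income earned domestically by foreign residents."""
    return gdp + factor_income_from_abroad - factor_income_to_foreigners

print(gross_national_income(gdp=2_000, factor_income_from_abroad=150, factor_income_to_foreigners=100))
# 2050, in whatever currency units the inputs use
```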
GNI is often contrasted with GNP (Gross National Product), but there exists a fine line of difference between the two: the former is estimated on the basis of income flows, while the latter is calculated on the basis of product flows.
Key Differences Between GDP and GNI
The significant differences between GDP and GNI are given below:
- The official quantitative measure of the aggregate output of products and services produced by a country over the course of one year is called Gross Domestic Product or GDP. The sum of a country’s gross domestic product and the net income earned abroad, during a particular accounting year, is called GNI.
- While gross domestic product is based on location, i.e. products produced within the country’s geographical limits, gross national income denotes the aggregate value produced by enterprises owned by the country’s nationals, irrespective of their location.
- GDP is nothing but the total output produced by the country during an accounting year. GNI is the total income received by the country, during an accounting year.
- GDP is used as an indicator of country’s economic strength. On the contrary, GNI is used to indicate the economic strength of the residents of the country.
- GDP stresses over domestic production whereas GNI lays emphasis on the income generated by the country’s citizens.
Both of these measures reflect how effectively the country is performing economically, year after year, and both can be used as a yardstick to compare one country’s economy with another’s. GDP is a tool used to estimate a country’s standard of living: if GDP is high, so is the standard of living of the country’s residents, and vice versa. GNI, on the other hand, measures the total income generated by the residents of the country. |
Pages similar to: Tangent and normal lines
- The idea of the derivative of a function
The derivative of a function as the slope of the tangent line.
- Developing intuition about the derivative
An intuitive exploration into the properties of the derivative, illustrated by interactive graphics.
- Calculating the derivative of a linear function using the derivative formula
Exploring how the limit definition of the derivative gives the slope of a linear function.
- Introduction to differentiability in higher dimensions
An introduction to the basic concept of the differentiability of a function of multiple variables. Discussion centers around the existence of a tangent plane to a function of two variables.
- The multivariable linear approximation
Introduction to the linear approximation in multivariable calculus and why it might be useful.
- Derivatives of polynomials
How to compute the derivative of a polynomial.
- Derivatives of more general power functions
How to compute the derivative of power functions.
- A refresher on the quotient rule
How to compute the derivative of a quotient.
- A refresher on the product rule
How to compute the derivative of a product.
- A refresher on the chain rule
How to compute the derivative of a composition of functions.
- Linear approximations: approximation by differentials
Approximating the value of a function near a point by its tangent line formula.
- Implicit differentiation
Differentiating a function that is defined implicitly in terms of a relation between two variables.
- Related rates
Calculating one derivative in terms of another derivative.
- Intermediate Value Theorem, location of roots
Using the Intermediate Value Theorem to find small intervals where a function must have a root.
- Derivatives of transcendental functions
A list of formulas for taking derivatives of exponential, logarithm, trigonometric, and inverse trigonometric functions.
- L'Hospital's rule
A way to simplify evaluation of limits when the limit is an indeterminate form.
- The second and higher derivatives
Taking derivatives multiple times to calculate the second or higher derivative.
- Inflection points, concavity upward and downward
Finding points where the second derivative changes sign.
- The idea of the chain rule
An illustration of the basic concept of the chain rule using interactive graphics to diagram the relevant points on the graphs and the corresponding slopes.
- Simple examples of using the chain rule
Basic examples that show how to use the chain rule to calculate the derivative of the composition of functions.
In mathematics, the binary logarithm (log2 n) is the power to which the number 2 must be raised to obtain the value n. That is, for any real number x, x = log2 n if and only if 2ˣ = n.
For example, the binary logarithm of 1 is 0, the binary logarithm of 2 is 1, the binary logarithm of 4 is 2, and the binary logarithm of 32 is 5.
The binary logarithm is the logarithm to the base 2. The binary logarithm function is the inverse function of the power of two function. As well as log2, alternative notations for the binary logarithm include lg, ld, lb, and (with a prior statement that the default base is 2) log.
Historically, the first application of binary logarithms was in music theory, by Leonhard Euler: the binary logarithm of a frequency ratio of two musical tones gives the number of octaves by which the tones differ. Binary logarithms can be used to calculate the length of the representation of a number in the binary numeral system, or the number of bits needed to encode a message in information theory. In computer science, they count the number of steps needed for binary search and related algorithms. Other areas in which the binary logarithm is frequently used include combinatorics, bioinformatics, the design of sports tournaments, and photography.
Binary logarithms are included in the standard C mathematical functions and other mathematical software packages. The integer part of a binary logarithm can be found using the find first set operation on an integer value, or by looking up the exponent of a floating point value. The fractional part of the logarithm can be calculated efficiently.
The powers of two have been known since antiquity; for instance they appear in Euclid's Elements, Props. IX.32 (on the factorization of powers of two) and IX.36 (half of the Euclid–Euler theorem, on the structure of even perfect numbers). And the binary logarithm of a power of two is just its position in the ordered sequence of powers of two. On this basis, Michael Stifel has been credited with publishing the first known table of binary logarithms in 1544. His book Arithmetica Integra contains several tables that show the integers with their corresponding powers of two. Reversing the rows of these tables allows them to be interpreted as tables of binary logarithms.
Earlier than Stifel, the 8th century Jain mathematician Virasena is credited with a precursor to the binary logarithm. Virasena's concept of ardhacheda has been defined as the number of times a given number can be divided evenly by two. This definition gives rise to a function that coincides with the binary logarithm on the powers of two, but it is different for other integers, giving the 2-adic order rather than the logarithm.
The modern form of a binary logarithm, applying to any number (not just powers of two) was considered explicitly by Leonhard Euler in 1739. Euler established the application of binary logarithms to music theory, long before their more significant applications in information theory and computer science became known. As part of his work in this area, Euler published a table of binary logarithms of the integers from 1 to 8, to seven decimal digits of accuracy.
Definition and properties
The binary logarithm function may be defined as the inverse function to the power of two function, which is a strictly increasing function over the positive real numbers and therefore has a unique inverse. Alternatively, it may be defined as ln n/ln 2, where ln is the natural logarithm, defined in any of its standard ways. Using the complex logarithm in this definition allows the binary logarithm to be extended to the complex numbers.
As with other logarithms, the binary logarithm obeys the following equations, which can be used to simplify formulas that combine binary logarithms with multiplication or exponentiation:

log2(xy) = log2 x + log2 y
log2(x/y) = log2 x − log2 y
log2(xʸ) = y log2 x
For more, see list of logarithmic identities.
In mathematics, the binary logarithm of a number n is often written as log2 n. However, several other notations for this function have been used or proposed, especially in application areas.
Some authors write the binary logarithm as lg n, the notation listed in The Chicago Manual of Style. Donald Knuth credits this notation to a suggestion of Edward Reingold, but its use in both information theory and computer science dates to before Reingold was active. The binary logarithm has also been written as log n with a prior statement that the default base for the logarithm is 2. Another notation that is sometimes used for the same function (especially in the German scientific literature) is ld n, from Latin logarithmus dualis. The DIN 1302, ISO 31-11 and ISO 80000-2 standards recommend yet another notation, lb n. According to these standards, lg n should not be used for the binary logarithm, as it is instead reserved for log10 n.
In information theory, the definition of the amount of self-information and information entropy is often expressed with the binary logarithm, corresponding to making the bit the fundamental unit of information. However, the natural logarithm and the nat are also used in alternative notations for these definitions.
Although the natural logarithm is more important than the binary logarithm in many areas of pure mathematics such as number theory and mathematical analysis, the binary logarithm has several applications in combinatorics:
- Every binary tree with n leaves has height at least log2 n, with equality when n is a power of two and the tree is a complete binary tree. Relatedly, the Strahler number of a river system with n tributary streams is at most log2 n + 1.
- Every family of sets with n different sets has at least log2 n elements in its union, with equality when the family is a power set.
- Every partial cube with n vertices has isometric dimension at least log2 n, and has at most 1/2 n log2 n edges, with equality when the partial cube is a hypercube graph.
- According to Ramsey's theorem, every n-vertex undirected graph has either a clique or an independent set of size logarithmic in n. The precise size that can be guaranteed is not known, but the best bounds known on its size involve binary logarithms. In particular, all graphs have a clique or independent set of size at least 1/2 log2 n (1 − o(1)) and almost all graphs do not have a clique or independent set of size larger than 2 log2 n (1 + o(1)).
- From a mathematical analysis of the Gilbert–Shannon–Reeds model of random shuffles, one can show that the number of times one needs to shuffle an n-card deck of cards, using riffle shuffles, to get a distribution on permutations that is close to uniformly random, is approximately 3/2 log2 n. This calculation forms the basis for a recommendation that 52-card decks should be shuffled seven times.
The binary logarithm also frequently appears in the analysis of algorithms, not only because of the frequent use of binary number arithmetic in algorithms, but also because binary logarithms occur in the analysis of algorithms based on two-way branching. If a problem initially has n choices for its solution, and each iteration of the algorithm reduces the number of choices by a factor of two, then the number of iterations needed to select a single choice is again the integral part of log2 n. This idea is used in the analysis of several algorithms and data structures. For example, in binary search, the size of the problem to be solved is halved with each iteration, and therefore roughly log2 n iterations are needed to obtain a problem of size 1, which is solved easily in constant time. Similarly, a perfectly balanced binary search tree containing n elements has height log2(n + 1) − 1.
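A quick way to see the halving argument is to count the halvings directly; the snippet below only illustrates the growth rate and is not an implementation of binary search:

```python
import math

def halvings_to_one(n):
    """Number of times n can be halved (integer division) before reaching 1."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

for n in (8, 1000, 1_000_000):
    print(n, halvings_to_one(n), round(math.log2(n), 2))
# 8 -> 3 halvings (log2 8 = 3.0)
# 1000 -> 9 halvings (log2 1000 ≈ 9.97)
# 1000000 -> 19 halvings (log2 1000000 ≈ 19.93)
```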
The running time of an algorithm is usually expressed in big O notation, which is used to simplify expressions by omitting their constant factors and lower-order terms. Because logarithms in different bases differ from each other only by a constant factor, algorithms that run in O(log2 n) time can also be said to run in, say, O(log13 n) time. The base of the logarithm in expressions such as O(log n) or O(n log n) is therefore not important and can be omitted. However, for logarithms that appear in the exponent of a time bound, the base of the logarithm cannot be omitted. For example, O(2^(log2 n)) is not the same as O(2^(ln n)) because the former is equal to O(n) and the latter to O(n^0.6931...).
Binary logarithms also appear in the time bounds of many other algorithms and operations, for example:
- The average time of quicksort and other comparison sort algorithms
- Searching in balanced binary search trees
- Exponentiation by squaring
- Longest increasing subsequence
Binary logarithms also occur in the exponents of the time bounds for some divide and conquer algorithms, such as the Karatsuba algorithm for multiplying n-bit numbers in time O(n^(log2 3)), and the Strassen algorithm for multiplying n × n matrices in time O(n^(log2 7)). The occurrence of binary logarithms in these running times can be explained by reference to the master theorem.
In bioinformatics, microarrays are used to measure how strongly different genes are expressed in a sample of biological material. Different rates of expression of a gene are often compared by using the binary logarithm of the ratio of expression rates: the log ratio of two expression rates is defined as the binary logarithm of the ratio of the two rates. Binary logarithms allow for a convenient comparison of expression rates: a doubled expression rate can be described by a log ratio of 1, a halved expression rate can be described by a log ratio of −1, and an unchanged expression rate can be described by a log ratio of zero, for instance.
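The convention is easy to state in code; the expression values below are invented purely to illustrate the three cases:

```python
import math

def log_ratio(rate_a, rate_b):
    """Binary logarithm of the ratio of two expression rates."""
    return math.log2(rate_a / rate_b)

print(log_ratio(200, 100))  # 1.0  (doubled expression)
print(log_ratio(50, 100))   # -1.0 (halved expression)
print(log_ratio(100, 100))  # 0.0  (unchanged expression)
```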
Data points obtained in this way are often visualized as a scatterplot in which one or both of the coordinate axes are binary logarithms of intensity ratios, or in visualizations such as the MA plot and RA plot that rotate and scale these log ratio scatterplots.
In music theory, the interval or perceptual difference between two tones is determined by the ratio of their frequencies. Intervals coming from rational number ratios with small numerators and denominators are perceived as particularly euphonious. The simplest and most important of these intervals is the octave, a frequency ratio of 2:1. The number of octaves by which two tones differ is the binary logarithm of their frequency ratio.
To study tuning systems and other aspects of music theory that require finer distinctions between tones, it is helpful to have a measure of the size of an interval that is finer than an octave and is additive (as logarithms are) rather than multiplicative (as frequency ratios are). That is, if tones x, y, and z form a rising sequence of tones, then the measure of the interval from x to y plus the measure of the interval from y to z should equal the measure of the interval from x to z. Such a measure is given by the cent, which divides the octave into 1200 equal intervals (12 semitones of 100 cents each). Mathematically, given tones with frequencies f1 and f2, the number of cents in the interval from f1 to f2 is

1200 log2(f2/f1).
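For instance, applying the cents formula in Python (frequencies in hertz; the function name is ours):

```python
import math

def cents(f1, f2):
    """Size of the interval from frequency f1 to frequency f2, in cents."""
    return 1200 * math.log2(f2 / f1)

print(cents(220, 440))  # 1200.0 cents: one octave
print(cents(440, 660))  # about 702 cents: a just perfect fifth (3:2 ratio)
```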
In competitive games and sports involving two players or teams in each game or match, the binary logarithm indicates the number of rounds necessary in a single-elimination tournament required to determine a winner. For example, a tournament of 4 players requires log2 4 = 2 rounds to determine the winner, a tournament of 32 teams requires log2 32 = 5 rounds, etc. In this case, for n players/teams where n is not a power of 2, log2 n is rounded up since it is necessary to have at least one round in which not all remaining competitors play. For example, log2 6 is approximately 2.585, which rounds up to 3, indicating that a tournament of 6 teams requires 3 rounds (either two teams sit out the first round, or one team sits out the second round). The same number of rounds is also necessary to determine a clear winner in a Swiss-system tournament.
In photography, exposure values are measured in terms of the binary logarithm of the amount of light reaching the film or sensor, in accordance with the Weber–Fechner law describing a logarithmic response of the human visual system to light. A single stop of exposure is one unit on a base-2 logarithmic scale. More precisely, the exposure value of a photograph is defined as

EV = log2(N²/t),

where N is the f-number of the lens aperture and t is the exposure time in seconds.
Conversion from other bases
An easy way to calculate log2 n on calculators that do not have a log2 function is to use the natural logarithm (ln) or the common logarithm (log or log10) functions, which are found on most scientific calculators. The specific change of logarithm base formulae for this are:

log2 n = ln n / ln 2 ≈ 1.442695 ln n
log2 n = log10 n / log10 2 ≈ 3.321928 log10 n
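For example, in Python the same value can be obtained three ways (math.log2 is the dedicated function; the other two lines use the change-of-base identity):

```python
import math

n = 100
print(math.log(n) / math.log(2))      # natural logarithms: 6.643856...
print(math.log10(n) / math.log10(2))  # common logarithms:  6.643856...
print(math.log2(n))                   # direct binary logarithm, same value
```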
The binary logarithm can be made into a function from integers to integers by rounding it up or down. These two forms of integer binary logarithm are related by the formula

⌊log2 n⌋ = ⌈log2(n + 1)⌉ − 1, for integer n ≥ 1.
The definition can be extended by defining ⌊log2 0⌋ = −1. Extended in this way, this function is related to the number of leading zeros of the 32-bit unsigned binary representation of x, nlz(x), by ⌊log2 x⌋ = 31 − nlz(x).
The integer binary logarithm can be interpreted as the zero-based index of the most significant 1 bit in the input. In this sense it is the complement of the find first set operation, which finds the index of the least significant 1 bit. Many hardware platforms include support for finding the number of leading zeros, or equivalent operations, which can be used to quickly find the binary logarithm. The fls and flsl functions in the Linux kernel and in some versions of the libc software library also compute the binary logarithm (rounded up to an integer, plus one).
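In Python, for example, the integer part of the binary logarithm of a positive integer can be read off from the bit length, which plays the same role as the count-leading-zeros operations mentioned above (the function name is ours):

```python
def integer_log2(n):
    """Floor of log2(n) for a positive integer n, via the index of its most significant 1 bit."""
    if n <= 0:
        raise ValueError("only defined for positive integers")
    return n.bit_length() - 1

for n in (1, 2, 5, 1024, 1025):
    print(n, integer_log2(n))
# 1 -> 0, 2 -> 1, 5 -> 2, 1024 -> 10, 1025 -> 10
```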
For a general positive real number, the binary logarithm may be computed in two parts. First, one computes the integer part, ⌊log2 x⌋ (called the characteristic of the logarithm). This reduces the problem to one where the argument of the logarithm is in a restricted range, the interval [1,2), simplifying the second step of computing the fractional part (the mantissa of the logarithm). For any x > 0, there exists a unique integer n such that 2ⁿ ≤ x < 2ⁿ⁺¹, or equivalently 1 ≤ 2⁻ⁿx < 2. Now the integer part of the logarithm is simply n, and the fractional part is log2(2⁻ⁿx). In other words:

log2 x = n + log2 y, where y = 2⁻ⁿx and y ∈ [1, 2).
The fractional part of the result is log2 y, and can be computed iteratively, using only elementary multiplication and division. The algorithm for computing the fractional part can be described in pseudocode as follows:
- Start with a real number y in the half-open interval [1,2). If y = 1, then the algorithm is done and the fractional part is zero.
- Otherwise, square y repeatedly until the result z lies in the interval [2,4). Let m be the number of squarings needed. That is, z = y^(2^m) with m chosen such that z is in [2,4).
- Taking the logarithm of both sides and doing some algebra:

log2 y = (log2 z) / 2^m = (1 + log2(z/2)) / 2^m.
- Once again z/2 is a real number in the interval [1,2). Return to step 1, and compute the binary logarithm of z/2 using the same method.
The result of this is expressed by the following recursive formula, in which mᵢ is the number of squarings required in the i-th iteration of the algorithm:

log2 x = n + 2^(−m₁) (1 + 2^(−m₂) (1 + 2^(−m₃) (1 + ⋯)))
       = n + 2^(−m₁) + 2^(−m₁−m₂) + 2^(−m₁−m₂−m₃) + ⋯
In the special case where the fractional part in step 1 is found to be zero, this is a finite sequence terminating at some point. Otherwise, it is an infinite series that converges according to the ratio test, since each term is strictly less than the previous one (because every mᵢ > 0). For practical use, this infinite series must be truncated to reach an approximate result. If the series is truncated after the i-th term, then the error in the result is less than 2^(−(m₁ + m₂ + ⋯ + mᵢ)).
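The algorithm above translates almost line for line into Python; the sketch below is a deliberately naive transcription (names are ours), whereas a production routine would read the exponent field of the floating-point representation instead:

```python
import math

def log2_fractional(y, iterations=30):
    """Approximate log2(y) for y in [1, 2) by the repeated-squaring method."""
    result = 0.0
    weight = 1.0
    for _ in range(iterations):
        if y == 1.0:
            break                 # fractional part is exact; stop early
        m = 0
        while y < 2.0:            # square until the value lands in [2, 4)
            y *= y
            m += 1
        weight /= 2.0 ** m        # weight is now 2^-(m1 + ... + mi)
        result += weight          # add this term of the series
        y /= 2.0                  # z/2 is back in [1, 2); repeat
    return result

def log2_approx(x):
    """Approximate log2(x) for x > 0: integer characteristic plus fractional mantissa."""
    n = 0
    while x >= 2.0:
        x /= 2.0
        n += 1
    while x < 1.0:
        x *= 2.0
        n -= 1
    return n + log2_fractional(x)

print(log2_approx(10.0), math.log2(10.0))  # both close to 3.321928...
```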
Software library support
The log2 function is included in the standard C mathematical functions. The default version of this function takes double-precision arguments, but variants of it allow the argument to be single-precision or long double. In MATLAB, the argument to the log2 function is allowed to be a negative number, and in this case the result will be a complex number.
- Groza, Vivian Shaw; Shelley, Susanne M. (1972), Precalculus mathematics, New York: Holt, Rinehart and Winston, p. 182, ISBN 978-0-03-077670-0.
- Stifel, Michael (1544), Arithmetica integra (in Latin), p. 31. A copy of the same table with two more entries appears on p. 237, and another copy extended to negative powers appears on p. 249b.
- Joseph, G. G. (2011), The Crest of the Peacock (3rd ed.), Princeton University Press, p. 352.
- See, e.g., Shparlinski, Igor (2013), Cryptographic Applications of Analytic Number Theory: Complexity Lower Bounds and Pseudorandomness, Progress in Computer Science and Applied Logic, 22, Birkhäuser, p. 35, ISBN 978-3-0348-8037-4.
- Euler, Leonhard (1739), "Chapter VII. De Variorum Intervallorum Receptis Appelationibus", Tentamen novae theoriae musicae ex certissismis harmoniae principiis dilucide expositae (in Latin), Saint Petersburg Academy, pp. 102–112.
- Tegg, Thomas (1829), "Binary logarithms", London encyclopaedia; or, Universal dictionary of science, art, literature and practical mechanics: comprising a popular view of the present state of knowledge, Volume 4, pp. 142–143.
- Batschelet, E. (2012), Introduction to Mathematics for Life Scientists, Springer, p. 128, ISBN 978-3-642-96080-2.
- For instance, Microsoft Excel provides the IMLOG2 function for complex binary logarithms: see Bourg, David M. (2006), Excel Scientific and Engineering Cookbook, O'Reilly Media, p. 232, ISBN 978-0-596-55317-3.
- Kolman, Bernard; Shapiro, Arnold (1982), "11.4 Properties of Logarithms", Algebra for College Students, Academic Press, pp. 334–335, ISBN 978-1-4832-7121-7.
- For instance, this is the notation used in the Encyclopedia of Mathematics and The Princeton Companion to Mathematics.
- Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001), Introduction to Algorithms (2nd ed.), MIT Press and McGraw-Hill, pp. 34, 53–54, ISBN 0-262-03293-7.
- Sedgewick, Robert; Wayne, Kevin Daniel (2011), Algorithms, Addison-Wesley Professional, p. 185, ISBN 978-0-321-57351-3.
- The Chicago Manual of Style (25th ed.), University of Chicago Press, 2003, p. 530.
- Knuth, Donald E. (1997), The Art of Computer Programming, Volume 1: Fundamental Algorithms (3rd ed.), Addison-Wesley Professional, ISBN 978-0-321-63574-7, p. 11. The same notation was in the 1973 2nd edition of the same book (p. 23) but without the credit to Reingold.
- Trucco, Ernesto (1956), "A note on the information content of graphs", Bull. Math. Biophys., 18: 129–135, doi:10.1007/BF02477836, MR 0077919.
- Mitchell, John N. (1962), "Computer multiplication and division using binary logarithms", IRE Transactions on Electronic Computers, EC-11 (4): 512–517, doi:10.1109/TEC.1962.5219391.
- Fiche, Georges; Hebuterne, Gerard (2013), Mathematics for Engineers, John Wiley & Sons, p. 152, ISBN 978-1-118-62333-6,
In the following, and unless otherwise stated, the notation log x always stands for the logarithm to the base 2 of x.
- Cover, Thomas M.; Thomas, Joy A. (2012), Elements of Information Theory (2nd ed.), John Wiley & Sons, p. 33, ISBN 978-1-118-58577-1,
Unless otherwise specified, we will take all logarithms to base 2.
- Goodrich, Michael T.; Tamassia, Roberto (2002), Algorithm Design: Foundations, Analysis, and Internet Examples, John Wiley & Sons, p. 23,
One of the interesting and sometimes even surprising aspects of the analysis of data structures and algorithms is the ubiquitous presence of logarithms ... As is the custom in the computing literature, we omit writing the base b of the logarithm when b = 2.
- For instance, see Bauer, Friedrich L. (2009), Origins and Foundations of Computing: In Cooperation with Heinz Nixdorf MuseumsForum, Springer Science & Business Media, p. 54, ISBN 978-3-642-02992-9.
- For DIN 1302 see Brockhaus Enzyklopädie in zwanzig Bänden, 11, Wiesbaden: F.A. Brockhaus, 1970, p. 554, ISBN 978-3-7653-0000-4.
- For ISO 31-11 see Thompson, Ambler; Taylor, Barry M (March 2008), Guide for the Use of the International System of Units (SI) — NIST Special Publication 811, 2008 Edition — Second Printing (PDF), NIST, p. 33.
- For ISO 80000-2 see "Quantities and units – Part 2: Mathematical signs and symbols to be used in the natural sciences and technology" (PDF), International Standard ISO 80000-2 (1st ed.), December 1, 2009, Section 12, Exponential and logarithmic functions, p. 18.
- Van der Lubbe, Jan C. A. (1997), Information Theory, Cambridge University Press, p. 3, ISBN 978-0-521-46760-5.
- Stewart, Ian (2015), Taming the Infinite, Quercus, p. 120, ISBN 9781623654733,
in advanced mathematics and science the only logarithm of importance is the natural logarithm.
- Leiss, Ernst L. (2006), A Programmer's Companion to Algorithm Analysis, CRC Press, p. 28, ISBN 978-1-4200-1170-8.
- Devroye, L.; Kruszewski, P. (1996), "On the Horton–Strahler number for random tries", RAIRO Informatique Théorique et Applications, 30 (5): 443–456, MR 1435732.
- Equivalently, a family with k distinct elements has at most 2k distinct sets, with equality when it is a power set.
- Eppstein, David (2005), "The lattice dimension of a graph", European Journal of Combinatorics, 26 (5): 585–592, arXiv:, doi:10.1016/j.ejc.2004.05.001, MR 2127682.
- Graham, Ronald L.; Rothschild, Bruce L.; Spencer, Joel H. (1980), Ramsey Theory, Wiley-Interscience, p. 78.
- Bayer, Dave; Diaconis, Persi (1992), "Trailing the dovetail shuffle to its lair", The Annals of Applied Probability, 2 (2): 294–313, doi:10.1214/aoap/1177005705, JSTOR 2959752, MR 1161056.
- Mehlhorn, Kurt; Sanders, Peter (2008), "2.5 An example – binary search", Algorithms and Data Structures: The Basic Toolbox (PDF), Springer, pp. 34–36, ISBN 978-3-540-77977-3.
- Roberts, Fred; Tesman, Barry (2009), Applied Combinatorics (2nd ed.), CRC Press, p. 206, ISBN 978-1-4200-9983-6.
- Sipser, Michael (2012), "Example 7.4", Introduction to the Theory of Computation (3rd ed.), Cengage Learning, pp. 277–278, ISBN 9781133187790.
- Sedgewick & Wayne (2011), p. 186.
- Cormen et al., p. 156; Goodrich & Tamassia, p. 238.
- Cormen et al., p. 276; Goodrich & Tamassia, p. 159.
- Cormen et al., pp. 879–880; Goodrich & Tamassia, p. 464.
- Edmonds, Jeff (2008), How to Think About Algorithms, Cambridge University Press, p. 302, ISBN 978-1-139-47175-6.
- Cormen et al., p. 844; Goodrich & Tamassia, p. 279.
- Cormen et al., section 28.2.
- Causton, Helen; Quackenbush, John; Brazma, Alvis (2009), Microarray Gene Expression Data Analysis: A Beginner's Guide, John Wiley & Sons, pp. 49–50, ISBN 978-1-4443-1156-3.
- Eidhammer, Ingvar; Barsnes, Harald; Eide, Geir Egil; Martens, Lennart (2012), Computational and Statistical Methods for Protein Quantification by Mass Spectrometry, John Wiley & Sons, p. 105, ISBN 978-1-118-49378-6.
- Campbell, Murray; Greated, Clive (1994), The Musician's Guide to Acoustics, Oxford University Press, p. 78, ISBN 978-0-19-159167-9.
- Randel, Don Michael, ed. (2003), The Harvard Dictionary of Music (4th ed.), The Belknap Press of Harvard University Press, p. 416, ISBN 978-0-674-01163-2.
- France, Robert (2008), Introduction to Physical Education and Sport Science, Cengage Learning, p. 282, ISBN 978-1-4180-5529-5.
- Allen, Elizabeth; Triantaphillidou, Sophie (2011), The Manual of Photography, Taylor & Francis, p. 228, ISBN 978-0-240-52037-7.
- Davis, Phil (1998), Beyond the Zone System, CRC Press, p. 17, ISBN 978-1-136-09294-7.
- Allen & Triantaphillidou (2011), p. 235.
- Zwerman, Susan; Okun, Jeffrey A. (2012), Visual Effects Society Handbook: Workflow and Techniques, CRC Press, p. 205, ISBN 978-1-136-13614-6.
- Bauer, Craig P. (2013), Secret History: The Story of Cryptology, CRC Press, p. 332, ISBN 978-1-4665-6186-1.
- Warren Jr., Henry S. (2002), Hacker's Delight (1st ed.), Addison Wesley, p. 215, ISBN 978-0-201-91465-8
- fls, Linux kernel API, kernel.org, retrieved 2010-10-17.
- Majithia, J. C.; Levan, D. (1973), "A note on base-2 logarithm computations", Proceedings of the IEEE, 61 (10): 1519–1520, doi:10.1109/PROC.1973.9318.
- Stephenson, Ian (2005), "9.6 Fast Power, Log2, and Exp2 Functions", Production Rendering: Design and Implementation, Springer-Verlag, pp. 270–273, ISBN 978-1-84628-085-6.
- Warren Jr., Henry S. (2013) , "11-4: Integer Logarithm", Hacker's Delight (2nd ed.), Addison Wesley – Pearson Education, Inc., p. 291, ISBN 978-0-321-84268-8, 0-321-84268-5.
- "126.96.36.199 The log2 functions", ISO/IEC 9899:1999 specification (PDF), p. 226.
- Redfern, Darren; Campbell, Colin (1998), The Matlab® 5 Handbook, Springer-Verlag, p. 141, ISBN 978-1-4612-2170-8.
- Anderson, Sean Eron (December 12, 2003), "Find the log base 2 of an N-bit integer in O(lg(N)) operations", Bit Twiddling Hacks, Stanford University, retrieved 2015-11-25
- Feynman and the Connection Machine |
The formation of the Solar System began about 4.6 billion years ago with the gravitational collapse of a small part of a giant molecular cloud. Most of the collapsing mass collected in the center, forming the Sun, while the rest flattened into a protoplanetary disk out of which the planets, moons, asteroids, and other small Solar System bodies formed.
This model, known as the nebular hypothesis, was first developed in the 18th century by Emanuel Swedenborg, Immanuel Kant, and Pierre-Simon Laplace. Its subsequent development has interwoven a variety of scientific disciplines including astronomy, chemistry, geology, physics, and planetary science. Since the dawn of the Space Age in the 1950s and the discovery of exoplanets in the 1990s, the model has been both challenged and refined to account for new observations.
The Solar System has evolved considerably since its initial formation. Many moons have formed from circling discs of gas and dust around their parent planets, while other moons are thought to have formed independently and later to have been captured by their planets. Still others, such as Earth's Moon, may be the result of giant collisions. Collisions between bodies have occurred continually up to the present day and have been central to the evolution of the Solar System. Beyond Neptune, many sub-planet-sized objects formed. Several thousand trans-Neptunian objects have been observed. Unlike the planets, these trans-Neptunian objects mostly move on eccentric orbits, inclined to the plane of the planets. The positions of the planets might have shifted due to gravitational interactions, and planetary migration may have been responsible for much of the Solar System's early evolution.
In roughly 5 billion years, the Sun will cool and expand outward to many times its current diameter (becoming a red giant), before casting off its outer layers as a planetary nebula and leaving behind a stellar remnant known as a white dwarf. In the far distant future, the gravity of passing stars will gradually reduce the Sun's retinue of planets. Some planets will be destroyed, and others ejected into interstellar space. Ultimately, over the course of tens of billions of years, it is likely that the Sun will be left with none of the original bodies in orbit around it.
Ideas concerning the origin and fate of the world date from the earliest known writings; however, for almost all of that time, there was no attempt to link such theories to the existence of a "Solar System", simply because it was not generally thought that the Solar System, in the sense we now understand it, existed. The first step toward a theory of Solar System formation and evolution was the general acceptance of heliocentrism, which placed the Sun at the centre of the system and the Earth in orbit around it. This concept had developed for millennia (Aristarchus of Samos had suggested it as early as 250 BC), but was not widely accepted until the end of the 17th century. The first recorded use of the term "Solar System" dates from 1704.
The current standard theory for Solar System formation, the nebular hypothesis, has fallen into and out of favour since its formulation by Emanuel Swedenborg, Immanuel Kant, and Pierre-Simon Laplace in the 18th century. The most significant criticism of the hypothesis was its apparent inability to explain the Sun's relative lack of angular momentum when compared to the planets. However, since the early 1980s studies of young stars have shown them to be surrounded by cool discs of dust and gas, exactly as the nebular hypothesis predicts, which has led to its re-acceptance.
Understanding of how the Sun is expected to continue to evolve required an understanding of the source of its power. Arthur Stanley Eddington's confirmation of Albert Einstein's theory of relativity led to his realisation that the Sun's energy comes from nuclear fusion reactions in its core, fusing hydrogen into helium. In 1935, Eddington went further and suggested that other elements also might form within stars. Fred Hoyle elaborated on this premise by arguing that evolved stars called red giants created many elements heavier than hydrogen and helium in their cores. When a red giant finally casts off its outer layers, these elements would then be recycled to form other star systems.
See also: Nebular hypothesis
The nebular hypothesis says that the Solar System formed from the gravitational collapse of a fragment of a giant molecular cloud, most likely at the edge of a Wolf-Rayet bubble. The cloud was about 20 parsecs (65 light years) across, while the fragments were roughly 1 parsec (three and a quarter light-years) across. The further collapse of the fragments led to the formation of dense cores 0.01–0.1 parsec (2,000–20,000 AU) in size.[a] One of these collapsing fragments (known as the presolar nebula) formed what became the Solar System. The composition of this region with a mass just over that of the Sun (M☉) was about the same as that of the Sun today, with hydrogen, along with helium and trace amounts of lithium produced by Big Bang nucleosynthesis, forming about 98% of its mass. The remaining 2% of the mass consisted of heavier elements that were created by nucleosynthesis in earlier generations of stars. Late in the life of these stars, they ejected heavier elements into the interstellar medium. Some scientists have given the name Coatlicue to a hypothetical star that went supernova and created the presolar nebula.
The oldest inclusions found in meteorites, thought to trace the first solid material to form in the presolar nebula, are 4,568.2 million years old, which is one definition of the age of the Solar System. Studies of ancient meteorites reveal traces of stable daughter nuclei of short-lived isotopes, such as iron-60, that only form in exploding, short-lived stars. This indicates that one or more supernovae occurred nearby. A shock wave from a supernova may have triggered the formation of the Sun by creating relatively dense regions within the cloud, causing these regions to collapse. The highly homogeneous distribution of iron-60 in the Solar System points to the occurrence of this supernova and its injection of iron-60 being well before the accretion of nebular dust into planetary bodies. Because only massive, short-lived stars produce supernovae, the Sun must have formed in a large star-forming region that produced massive stars, possibly similar to the Orion Nebula. Studies of the structure of the Kuiper belt and of anomalous materials within it suggest that the Sun formed within a cluster of between 1,000 and 10,000 stars with a diameter of between 6.5 and 19.5 light years and a collective mass of 3,000 M☉. This cluster began to break apart between 135 million and 535 million years after formation. Several simulations of our young Sun interacting with close-passing stars over the first 100 million years of its life produce anomalous orbits observed in the outer Solar System, such as detached objects.
Because of the conservation of angular momentum, the nebula spun faster as it collapsed. As the material within the nebula condensed, the atoms within it began to collide with increasing frequency, converting their kinetic energy into heat. The center, where most of the mass collected, became increasingly hotter than the surrounding disc. Over about 100,000 years, the competing forces of gravity, gas pressure, magnetic fields, and rotation caused the contracting nebula to flatten into a spinning protoplanetary disc with a diameter of about 200 AU and form a hot, dense protostar (a star in which hydrogen fusion has not yet begun) at the centre.
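The spin-up described here is easy to quantify. The following Python sketch is a rough illustration, not from the source: it treats the collapsing core as a uniform sphere of fixed mass (so its moment of inertia scales as MR² and its angular velocity as 1/R²), and it uses the ~0.05 pc core size and ~100 AU disc radius suggested by the figures above purely as illustrative inputs.

```python
import math

PC_IN_AU = 206_265          # astronomical units per parsec

def spinup_factor(r_initial_au, r_final_au):
    """Angular-velocity ratio for a uniform sphere collapsing at fixed mass:
    L = I * omega with I proportional to M * R**2, so omega scales as 1/R**2."""
    return (r_initial_au / r_final_au) ** 2

# Illustrative numbers: a ~0.05 pc dense core collapsing to a ~100 AU disc radius.
r_core = 0.05 * PC_IN_AU     # about 10,300 AU
r_disc = 100                 # AU (half of the ~200 AU disc diameter quoted above)

print(f"Spin rate increases by a factor of ~{spinup_factor(r_core, r_disc):,.0f}")
# ~10,600 — even a very slowly rotating core ends up spinning rapidly.
```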
At this point in its evolution, the Sun is thought to have been a T Tauri star. Studies of T Tauri stars show that they are often accompanied by discs of pre-planetary matter with masses of 0.001–0.1 M☉. These discs extend to several hundred AU—the Hubble Space Telescope has observed protoplanetary discs of up to 1000 AU in diameter in star-forming regions such as the Orion Nebula—and are rather cool, reaching a surface temperature of only about 1,000 K (730 °C; 1,340 °F) at their hottest. Within 50 million years, the temperature and pressure at the core of the Sun became so great that its hydrogen began to fuse, creating an internal source of energy that countered gravitational contraction until hydrostatic equilibrium was achieved. This marked the Sun's entry into the prime phase of its life, known as the main sequence. Main-sequence stars derive energy from the fusion of hydrogen into helium in their cores. The Sun remains a main-sequence star today.
As the early Solar System continued to evolve, it eventually drifted away from its siblings in the stellar nursery, and continued orbiting the Milky Way's center on its own. The Sun likely drifted from its original orbital distance from the center of the galaxy. The chemical history of the Sun suggests it may have formed as much as 3 kpc closer to the galaxy core.
Like most stars, the Sun likely formed not in isolation but as part of a young star cluster. Several observations hint that this cluster environment influenced the young, still-forming Solar System. For example, the decline in mass beyond Neptune and the extremely eccentric orbit of Sedna have been interpreted as signatures of the Solar System having been shaped by its birth environment. Whether the presence of the isotopes iron-60 and aluminium-26 can be interpreted as a sign of a birth cluster containing massive stars is still under debate. If the Sun was part of a star cluster, it might have been influenced by close flybys of other stars, the strong radiation of nearby massive stars, and ejecta from nearby supernovae.
The various planets are thought to have formed from the solar nebula, the disc-shaped cloud of gas and dust left over from the Sun's formation. The currently accepted method by which the planets formed is accretion, in which the planets began as dust grains in orbit around the central protostar. Through direct contact and self-organization, these grains formed into clumps up to 200 m (660 ft) in diameter, which in turn collided to form larger bodies (planetesimals) of ~10 km (6.2 mi) in size. These gradually increased through further collisions, growing at the rate of centimetres per year over the course of the next few million years.
The inner Solar System, the region of the Solar System inside 4 AU, was too warm for volatile molecules like water and methane to condense, so the planetesimals that formed there could only form from compounds with high melting points, such as metals (like iron, nickel, and aluminium) and rocky silicates. These rocky bodies would become the terrestrial planets (Mercury, Venus, Earth, and Mars). These compounds are quite rare in the Universe, comprising only 0.6% of the mass of the nebula, so the terrestrial planets could not grow very large. The terrestrial embryos grew to about 0.05 Earth masses (MEarth) and ceased accumulating matter about 100,000 years after the formation of the Sun; subsequent collisions and mergers between these planet-sized bodies allowed terrestrial planets to grow to their present sizes.
When the terrestrial planets were forming, they remained immersed in a disk of gas and dust. The gas was partially supported by pressure and so did not orbit the Sun as rapidly as the planets. The resulting drag and, more importantly, gravitational interactions with the surrounding material caused a transfer of angular momentum, and as a result the planets gradually migrated to new orbits. Models show that density and temperature variations in the disk governed this rate of migration, but the net trend was for the inner planets to migrate inward as the disk dissipated, leaving the planets in their current orbits.
The giant planets (Jupiter, Saturn, Uranus, and Neptune) formed further out, beyond the frost line, which is the point between the orbits of Mars and Jupiter where the material is cool enough for volatile icy compounds to remain solid. The ices that formed the Jovian planets were more abundant than the metals and silicates that formed the terrestrial planets, allowing the giant planets to grow massive enough to capture hydrogen and helium, the lightest and most abundant elements. Planetesimals beyond the frost line accumulated up to 4 MEarth within about 3 million years. Today, the four giant planets comprise just under 99% of all the mass orbiting the Sun.[b] Theorists believe it is no accident that Jupiter lies just beyond the frost line. Because the frost line accumulated large amounts of water via evaporation from infalling icy material, it created a region of lower pressure that increased the speed of orbiting dust particles and halted their motion toward the Sun. In effect, the frost line acted as a barrier that caused the material to accumulate rapidly at ~5 AU from the Sun. This excess material coalesced into a large embryo (or core) on the order of 10 MEarth, which began to accumulate an envelope via accretion of gas from the surrounding disc at an ever-increasing rate. Once the envelope mass became about equal to the solid core mass, growth proceeded very rapidly, reaching about 150 Earth masses roughly 10⁵ years thereafter and finally topping out at 318 MEarth. Saturn may owe its substantially lower mass simply to having formed a few million years after Jupiter, when there was less gas available to consume.
T Tauri stars like the young Sun have far stronger stellar winds than more stable, older stars. Uranus and Neptune are thought to have formed after Jupiter and Saturn did, when the strong solar wind had blown away much of the disc material. As a result, those planets accumulated little hydrogen and helium—not more than 1 MEarth each. Uranus and Neptune are sometimes referred to as failed cores. The main problem with formation theories for these planets is the timescale of their formation. At the current locations it would have taken millions of years for their cores to accrete. This means that Uranus and Neptune may have formed closer to the Sun—near or even between Jupiter and Saturn—and later migrated or were ejected outward (see Planetary migration below). Motion in the planetesimal era was not all inward toward the Sun; the Stardust sample return from Comet Wild 2 has suggested that materials from the early formation of the Solar System migrated from the warmer inner Solar System to the region of the Kuiper belt.
After between three and ten million years, the young Sun's solar wind would have cleared away all the gas and dust in the protoplanetary disc, blowing it into interstellar space, thus ending the growth of the planets.
The planets were originally thought to have formed in or near their current orbits. This has been questioned during the last 20 years. Currently, many planetary scientists think that the Solar System might have looked very different after its initial formation: several objects at least as massive as Mercury may have been present in the inner Solar System, the outer Solar System may have been much more compact than it is now, and the Kuiper belt may have been much closer to the Sun.
At the end of the planetary formation epoch the inner Solar System was populated by 50–100 Moon- to Mars-sized protoplanets. Further growth was possible only because these bodies collided and merged, which took less than 100 million years. These objects would have gravitationally interacted with one another, tugging at each other's orbits until they collided, growing larger until the four terrestrial planets we know today took shape. One such giant collision is thought to have formed the Moon (see Moons below), while another removed the outer envelope of the young Mercury.
One unresolved issue with this model is that it cannot explain how the initial orbits of the proto-terrestrial planets, which would have needed to be highly eccentric to collide, produced the remarkably stable and nearly circular orbits they have today. One hypothesis for this "eccentricity damping" is that the terrestrial planets formed in a disc of gas not yet expelled by the Sun. The "gravitational drag" of this residual gas would have eventually lowered the planets' energy, smoothing out their orbits. However, such gas, if it existed, would have prevented the terrestrial planets' orbits from becoming so eccentric in the first place. Another hypothesis is that gravitational drag occurred not between the planets and residual gas but between the planets and the remaining small bodies. As the large bodies moved through the crowd of smaller objects, the smaller objects, attracted by the larger planets' gravity, formed a region of higher density, a "gravitational wake", in the larger objects' path. The increased gravity of this wake slowed the larger objects down into more regular orbits.
The outer edge of the terrestrial region, between 2 and 4 AU from the Sun, is called the asteroid belt. The asteroid belt initially contained more than enough matter to form 2–3 Earth-like planets, and, indeed, a large number of planetesimals formed there. As with the terrestrials, planetesimals in this region later coalesced and formed 20–30 Moon- to Mars-sized planetary embryos; however, the proximity of Jupiter meant that after this planet formed, 3 million years after the Sun, the region's history changed dramatically. Orbital resonances with Jupiter and Saturn are particularly strong in the asteroid belt, and gravitational interactions with more massive embryos scattered many planetesimals into those resonances. Jupiter's gravity increased the velocity of objects within these resonances, causing them to shatter upon collision with other bodies, rather than accrete.
As Jupiter migrated inward following its formation (see Planetary migration below), resonances would have swept across the asteroid belt, dynamically exciting the region's population and increasing their velocities relative to each other. The cumulative action of the resonances and the embryos either scattered the planetesimals away from the asteroid belt or excited their orbital inclinations and eccentricities. Some of those massive embryos too were ejected by Jupiter, while others may have migrated to the inner Solar System and played a role in the final accretion of the terrestrial planets. During this primary depletion period, the effects of the giant planets and planetary embryos left the asteroid belt with a total mass equivalent to less than 1% that of the Earth, composed mainly of small planetesimals. This is still 10–20 times more than the current mass in the main belt, which is now about 0.0005 MEarth. A secondary depletion period that brought the asteroid belt down close to its present mass is thought to have followed when Jupiter and Saturn entered a temporary 2:1 orbital resonance (see below).
The inner Solar System's period of giant impacts probably played a role in the Earth acquiring its current water content (~6×10²¹ kg) from the early asteroid belt. Water is too volatile to have been present at Earth's formation and must have been subsequently delivered from outer, colder parts of the Solar System. The water was probably delivered by planetary embryos and small planetesimals thrown out of the asteroid belt by Jupiter. A population of main-belt comets discovered in 2006 has also been suggested as a possible source for Earth's water. In contrast, comets from the Kuiper belt or farther regions delivered not more than about 6% of Earth's water. The panspermia hypothesis holds that life itself may have been deposited on Earth in this way, although this idea is not widely accepted.
According to the nebular hypothesis, the outer two planets may be in the "wrong place". Uranus and Neptune (known as the "ice giants") exist in a region where the reduced density of the solar nebula and longer orbital times render their formation there highly implausible. The two are instead thought to have formed in orbits near Jupiter and Saturn (known as the "gas giants"), where more material was available, and to have migrated outward to their current positions over hundreds of millions of years.
The migration of the outer planets is also necessary to account for the existence and properties of the Solar System's outermost regions. Beyond Neptune, the Solar System continues into the Kuiper belt, the scattered disc, and the Oort cloud, three sparse populations of small icy bodies thought to be the points of origin for most observed comets. At their distance from the Sun, accretion was too slow to allow planets to form before the solar nebula dispersed, and thus the initial disc lacked enough mass density to consolidate into a planet. The Kuiper belt lies between 30 and 55 AU from the Sun, while the farther scattered disc extends to over 100 AU, and the distant Oort cloud begins at about 50,000 AU. Originally, however, the Kuiper belt was much denser and closer to the Sun, with an outer edge at approximately 30 AU. Its inner edge would have been just beyond the orbits of Uranus and Neptune, which were in turn far closer to the Sun when they formed (most likely in the range of 15–20 AU), and in 50% of simulations ended up in opposite locations, with Uranus farther from the Sun than Neptune.
According to the Nice model, after the formation of the Solar System, the orbits of all the giant planets continued to change slowly, influenced by their interaction with the large number of remaining planetesimals. After 500–600 million years (about 4 billion years ago) Jupiter and Saturn fell into a 2:1 resonance: Saturn orbited the Sun once for every two Jupiter orbits. This resonance created a gravitational push against the outer planets, possibly causing Neptune to surge past Uranus and plough into the ancient Kuiper belt. The planets scattered the majority of the small icy bodies inwards, while themselves moving outwards. These planetesimals then scattered off the next planet they encountered in a similar manner, moving the planets' orbits outwards while they moved inwards. This process continued until the planetesimals interacted with Jupiter, whose immense gravity sent them into highly elliptical orbits or even ejected them outright from the Solar System. This caused Jupiter to move slightly inward.[c] Those objects scattered by Jupiter into highly elliptical orbits formed the Oort cloud; those objects scattered to a lesser degree by the migrating Neptune formed the current Kuiper belt and scattered disc. This scenario explains the Kuiper belt's and scattered disc's present low mass. Some of the scattered objects, including Pluto, became gravitationally tied to Neptune's orbit, forcing them into mean-motion resonances. Eventually, friction within the planetesimal disc made the orbits of Uranus and Neptune near-circular again.
In contrast to the outer planets, the inner planets are not thought to have migrated significantly over the age of the Solar System, because their orbits have remained stable following the period of giant impacts.
Another question is why Mars came out so small compared with Earth. A study by Southwest Research Institute, San Antonio, Texas, published June 6, 2011 (called the Grand tack hypothesis), proposes that Jupiter had migrated inward to 1.5 AU. After Saturn formed, migrated inward, and established the 2:3 mean motion resonance with Jupiter, the study assumes that both planets migrated back to their present positions. Jupiter thus would have consumed much of the material that would have created a bigger Mars. The same simulations also reproduce the characteristics of the modern asteroid belt, with dry asteroids and water-rich objects similar to comets. However, it is unclear whether conditions in the solar nebula would have allowed Jupiter and Saturn to move back to their current positions, and according to current estimates this possibility appears unlikely. Moreover, alternative explanations for the small mass of Mars exist.
Main article: Late Heavy Bombardment
Gravitational disruption from the outer planets' migration would have sent large numbers of asteroids into the inner Solar System, severely depleting the original belt until it reached today's extremely low mass. This event may have triggered the Late Heavy Bombardment that is hypothesised to have occurred approximately 4 billion years ago, 500–600 million years after the formation of the Solar System. However, recent reappraisal of the cosmochemical constraints indicates that there was likely no late spike ("terminal cataclysm") in the bombardment rate.
If it occurred, this period of heavy bombardment lasted several hundred million years and is evident in the cratering still visible on geologically dead bodies of the inner Solar System such as the Moon and Mercury. The oldest known evidence for life on Earth dates to 3.8 billion years ago—almost immediately after the end of the Late Heavy Bombardment.
Impacts are thought to be a regular (if currently infrequent) part of the evolution of the Solar System. That they continue to happen is evidenced by the collision of Comet Shoemaker–Levy 9 with Jupiter in 1994, the 2009 Jupiter impact event, the Tunguska event, the Chelyabinsk meteor and the impact that created Meteor Crater in Arizona. The process of accretion, therefore, is not complete, and may still pose a threat to life on Earth.
Over the course of the Solar System's evolution, comets were ejected out of the inner Solar System by the gravity of the giant planets and sent thousands of AU outward to form the Oort cloud, a spherical outer swarm of cometary nuclei at the farthest extent of the Sun's gravitational pull. Eventually, after about 800 million years, the gravitational disruption caused by galactic tides, passing stars and giant molecular clouds began to deplete the cloud, sending comets into the inner Solar System. The evolution of the outer Solar System also appears to have been influenced by space weathering from the solar wind, micrometeorites, and the neutral components of the interstellar medium.
The evolution of the asteroid belt after Late Heavy Bombardment was mainly governed by collisions. Objects with large mass have enough gravity to retain any material ejected by a violent collision. In the asteroid belt this usually is not the case. As a result, many larger objects have been broken apart, and sometimes newer objects have been forged from the remnants in less violent collisions. Moons around some asteroids currently can only be explained as consolidations of material flung away from the parent object without enough energy to entirely escape its gravity.
See also: Giant-impact hypothesis
Moons have come to exist around most planets and many other Solar System bodies. These natural satellites are thought to have originated by one of three mechanisms: co-formation from a disc of gas and dust around the planet, capture of a passing body, or accretion of debris from a giant impact.
Jupiter and Saturn have several large moons, such as Io, Europa, Ganymede and Titan, which may have originated from discs around each giant planet in much the same way that the planets formed from the disc around the Sun. This origin is indicated by the large sizes of the moons and their proximity to the planet. These attributes are impossible to achieve via capture, while the gaseous nature of the primaries also makes formation from collision debris unlikely. The outer moons of the giant planets tend to be small and have eccentric orbits with arbitrary inclinations. These are the characteristics expected of captured bodies. Most such moons orbit in the direction opposite the rotation of their primary. The largest irregular moon is Neptune's moon Triton, which is thought to be a captured Kuiper belt object.
Moons of solid Solar System bodies have been created by both collisions and capture. Mars's two small moons, Deimos and Phobos, are thought to be captured asteroids. The Earth's Moon is thought to have formed as a result of a single, large head-on collision. The impacting object probably had a mass comparable to that of Mars, and the impact probably occurred near the end of the period of giant impacts. The collision kicked into orbit some of the impactor's mantle, which then coalesced into the Moon. The impact was probably the last in the series of mergers that formed the Earth. It has been further hypothesized that the Mars-sized object may have formed at one of the stable Earth–Sun Lagrangian points (either L4 or L5) and drifted from its position. The moons of trans-Neptunian objects Pluto (Charon) and Orcus (Vanth) may also have formed by means of a large collision: the Pluto–Charon, Orcus–Vanth and Earth–Moon systems are unusual in the Solar System in that the satellite's mass is at least 1% that of the larger body.
Astronomers estimate that the current state of the Solar System will not change drastically until the Sun has fused almost all the hydrogen fuel in its core into helium, beginning its evolution from the main sequence of the Hertzsprung–Russell diagram and into its red-giant phase. The Solar System will continue to evolve until then. Eventually, the Sun will likely expand sufficiently to overwhelm the inner planets (Mercury, Venus, and possibly Earth) but not the outer planets, including Jupiter and Saturn. Afterward, the Sun would be reduced to the size of a white dwarf, and the outer planets and their moons would continue orbiting this diminutive solar remnant. This future development may be similar to the observed detection of MOA-2010-BLG-477L b, a Jupiter-sized exoplanet orbiting its host white dwarf star MOA-2010-BLG-477L.
Main article: Stability of the Solar System
The Solar System is chaotic over million- and billion-year timescales, with the orbits of the planets open to long-term variations. One notable example of this chaos is the Neptune–Pluto system, which lies in a 3:2 orbital resonance. Although the resonance itself will remain stable, it becomes impossible to predict the position of Pluto with any degree of accuracy more than 10–20 million years (the Lyapunov time) into the future. Another example is Earth's axial tilt, which, because of friction raised within Earth's mantle by tidal interactions with the Moon (see below), will become unpredictable at some point between 1.5 and 4.5 billion years from now.
The outer planets' orbits are chaotic over longer timescales, with a Lyapunov time in the range of 2–230 million years. In all cases, this means that the position of a planet along its orbit ultimately becomes impossible to predict with any certainty (so, for example, the timing of winter and summer becomes uncertain). Still, in some cases, the orbits themselves may change dramatically. Such chaos manifests most strongly as changes in eccentricity, with some planets' orbits becoming significantly more—or less—elliptical.
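As a rough illustration of what a Lyapunov time means in practice, here is a small Python sketch. It is a back-of-envelope assumption, not a dynamical simulation: it simply grows an initial positional uncertainty exponentially with an e-folding time equal to the Lyapunov time, using a 15-million-year value picked from the middle of the 10–20 million year range quoted above.

```python
import math

def uncertainty_km(initial_km, elapsed_myr, lyapunov_myr):
    """Exponential divergence of nearby trajectories:
    delta(t) = delta_0 * exp(t / T_lyapunov)."""
    return initial_km * math.exp(elapsed_myr / lyapunov_myr)

# A 1 km uncertainty in Pluto's position, with an assumed 15 Myr Lyapunov time:
for t in (15, 50, 100, 200):
    print(f"after {t:3d} Myr: ~{uncertainty_km(1.0, t, 15):.3g} km")
# The error grows by a factor of e every Lyapunov time, so long-range
# predictions of a body's position along its orbit quickly become meaningless.
```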
Ultimately, the Solar System is stable in that none of the planets are likely to collide with each other or be ejected from the system in the next few billion years. Beyond this, within five billion years or so, Mars's eccentricity may grow to around 0.2, such that it lies on an Earth-crossing orbit, leading to a potential collision. In the same timescale, Mercury's eccentricity may grow even further, and a close encounter with Venus could theoretically eject it from the Solar System altogether or send it on a collision course with Venus or Earth. This could happen within a billion years, according to numerical simulations in which Mercury's orbit is perturbed.
The evolution of moon systems is driven by tidal forces. A moon raises a tidal bulge in the object it orbits (the primary) because of the differential gravitational force across the primary's diameter. If a moon revolves in the same direction as the planet's rotation and the planet rotates faster than the moon orbits, the bulge is constantly pulled ahead of the moon. In this situation, angular momentum is transferred from the rotation of the primary to the revolution of the satellite. The moon gains energy and gradually spirals outward, while the primary rotates more slowly over time.
The Earth and its Moon are one example of this configuration. Today, the Moon is tidally locked to the Earth: one of its revolutions around the Earth (currently about 27 days) takes the same time as one of its rotations about its axis, so it always shows one face to the Earth. The Moon will continue to recede from Earth, and Earth's spin will continue to slow gradually. Other examples are the Galilean moons of Jupiter (as well as many of Jupiter's smaller moons) and most of the larger moons of Saturn.
A different scenario occurs when the moon is either revolving around the primary faster than the primary rotates or is revolving in the direction opposite the planet's rotation. In these cases, the tidal bulge lags behind the moon in its orbit. In the former case, the direction of angular momentum transfer is reversed, so the rotation of the primary speeds up while the satellite's orbit shrinks. In the latter case, the angular momentum of the rotation and revolution have opposite signs, so transfer leads to decreases in the magnitude of each (that cancel each other out).[d] In both cases, tidal deceleration causes the moon to spiral in towards the primary until it either is torn apart by tidal stresses, potentially creating a planetary ring system, or crashes into the planet's surface or atmosphere. Such a fate awaits the moons Phobos of Mars (within 30 to 50 million years), Triton of Neptune (in 3.6 billion years), and at least 16 small satellites of Uranus and Neptune. Uranus's Desdemona may even collide with one of its neighboring moons.
A third possibility is where the primary and moon are tidally locked to each other. In that case, the tidal bulge stays directly under the moon, there is no angular momentum transfer, and the orbital period will not change. Pluto and Charon are an example of this type of configuration.
There is no consensus on the mechanism of the formation of the rings of Saturn. Although theoretical models indicated that the rings were likely to have formed early in the Solar System's history, data from the Cassini–Huygens spacecraft suggests they formed relatively late.
In the long term, the greatest changes in the Solar System will come from changes in the Sun itself as it ages. As the Sun burns through its hydrogen fuel supply, it gets hotter and burns the remaining fuel even faster. As a result, the Sun is growing brighter at a rate of ten percent every 1.1 billion years. In about 600 million years, the Sun's brightness will have disrupted the Earth's carbon cycle to the point where trees and forests (C3 photosynthetic plant life) will no longer be able to survive; and in around 800 million years, the Sun will have killed all complex life on the Earth's surface and in the oceans. In 1.1 billion years, the Sun's increased radiation output will cause its circumstellar habitable zone to move outwards, making the Earth's surface too hot for liquid water to exist there naturally. At this point, all life will be reduced to single-celled organisms. Evaporation of water, a potent greenhouse gas, from the oceans' surface could accelerate temperature increase, potentially ending all life on Earth even sooner. During this time, it is possible that as Mars's surface temperature gradually rises, carbon dioxide and water currently frozen under the surface regolith will release into the atmosphere, creating a greenhouse effect that will heat the planet until it achieves conditions parallel to Earth today, providing a potential future abode for life. By 3.5 billion years from now, Earth's surface conditions will be similar to those of Venus today.
Around 5.4 billion years from now, the core of the Sun will become hot enough to trigger hydrogen fusion in its surrounding shell. This will cause the outer layers of the star to expand greatly, and the star will enter a phase of its life in which it is called a red giant. Within 7.5 billion years, the Sun will have expanded to a radius of 1.2 AU (180×106 km; 110×106 mi)—256 times its current size. At the tip of the red-giant branch, as a result of the vastly increased surface area, the Sun's surface will be much cooler (about 2,600 K (2,330 °C; 4,220 °F)) than now, and its luminosity much higher—up to 2,700 current solar luminosities. For part of its red-giant life, the Sun will have a strong stellar wind that will carry away around 33% of its mass. During these times, it is possible that Saturn's moon Titan could achieve surface temperatures necessary to support life.
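The radius, temperature, and luminosity quoted above are mutually consistent, which can be checked with the Stefan-Boltzmann law, L = 4πR²σT⁴. The following Python sketch is only a consistency check using the round numbers from the text (1.2 AU, 2,600 K), not a stellar-evolution calculation; the constants are standard values.

```python
import math

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.496e11           # metres
L_SUN = 3.828e26        # present solar luminosity, watts

def blackbody_luminosity(radius_m, temp_k):
    """L = 4 * pi * R**2 * sigma * T**4."""
    return 4 * math.pi * radius_m**2 * SIGMA * temp_k**4

L_red_giant = blackbody_luminosity(1.2 * AU, 2600)
print(f"~{L_red_giant / L_SUN:,.0f} solar luminosities")
# ~2,700 with these rounded inputs, matching the figure quoted above.
```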
As the Sun expands, it will swallow the planets Mercury and Venus. Earth's fate is less clear; although the Sun will envelop Earth's current orbit, the star's loss of mass (and thus weaker gravity) will cause the planets' orbits to move farther out. If it were only for this, Venus and Earth would probably escape incineration, but a 2008 study suggests that Earth will likely be swallowed up as a result of tidal interactions with the Sun's weakly-bound outer envelope.
Additionally, the Sun's habitable zone will move into the outer Solar System and eventually beyond the Kuiper belt at the end of the red-giant phase, causing icy bodies such as Enceladus and Pluto to thaw. During this time, these worlds could support a water-based hydrologic cycle, but as they are too small to hold a dense atmosphere like Earth, they would experience extreme day–night temperature differences. When the Sun leaves the red-giant branch and enters the asymptotic giant branch, the habitable zone will abruptly shrink to roughly the space between Jupiter and Saturn's present-day orbits, but toward the end of the 200 million-year duration of the asymptotic giant phase, it will expand outward to about the same distance as before.
Gradually, the hydrogen burning in the shell around the solar core will increase the mass of the core until it reaches about 45% of the present solar mass. At this point, the density and temperature will become so high that the fusion of helium into carbon will begin, leading to a helium flash; the Sun will shrink from around 250 to 11 times its present (main-sequence) radius. Consequently, its luminosity will decrease from around 3,000 to 54 times its current level, and its surface temperature will increase to about 4,770 K (4,500 °C; 8,130 °F). The Sun will become a horizontal-branch star, burning helium in its core in a stable fashion, much as it burns hydrogen today. The helium-fusing stage will last only 100 million years. Eventually, the Sun will again have to draw on the reserves of hydrogen and helium in its outer layers. It will expand a second time, becoming what is known as an asymptotic giant. Here the luminosity of the Sun will increase again, reaching about 2,090 present luminosities, and it will cool to about 3,500 K (3,230 °C; 5,840 °F). This phase lasts about 30 million years, after which, over the course of a further 100,000 years, the Sun's remaining outer layers will fall away, ejecting a vast stream of matter into space and forming a halo known (misleadingly) as a planetary nebula. The ejected material will contain the helium and carbon produced by the Sun's nuclear reactions, continuing the enrichment of the interstellar medium with heavy elements for future generations of stars and planets.
This is a relatively peaceful event, nothing akin to a supernova, which the Sun is too small to undergo as part of its evolution. Any observer present to witness this occurrence would see a massive increase in the speed of the solar wind, but not enough to destroy a planet completely. However, the star's loss of mass could send the orbits of the surviving planets into chaos, causing some to collide, others to be ejected from the Solar System, and others to be torn apart by tidal interactions. Afterwards, all that will remain of the Sun is a white dwarf, an extraordinarily dense object, 54% its original mass but only the size of the Earth. Initially, this white dwarf may be 100 times as luminous as the Sun is now. It will consist entirely of degenerate carbon and oxygen but will never reach temperatures hot enough to fuse these elements. Thus, the white dwarf Sun will gradually cool, growing dimmer and dimmer.
As the Sun dies, its gravitational pull on the orbiting bodies, such as planets, comets, and asteroids, will weaken due to its mass loss. All remaining planets' orbits will expand; if Venus, Earth, and Mars still exist, their orbits will lie roughly at 1.4 AU (210 million km; 130 million mi), 1.9 AU (280 million km; 180 million mi), and 2.8 AU (420 million km; 260 million mi), respectively. They and the other remaining planets will become dark, frigid hulks, completely devoid of life. They will continue to orbit their star, their speed slowed due to their increased distance from the Sun and the Sun's reduced gravity. Two billion years later, when the Sun has cooled to the 6,000–8,000 K (5,730–7,730 °C; 10,340–13,940 °F) range, the carbon and oxygen in the Sun's core will freeze, with over 90% of its remaining mass assuming a crystalline structure. Eventually, after roughly one quadrillion years, the Sun will finally cease to shine altogether, becoming a black dwarf.
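The expanded orbits quoted above follow from a simple rule: for slow (adiabatic) mass loss, a planet's semi-major axis grows in inverse proportion to the star's remaining mass. The Python sketch below applies that scaling with the 54% remnant mass given above; it is a hedged approximation (the exact outcome also depends on the mass-loss history), and with these round inputs Venus comes out slightly lower than the 1.4 AU quoted.

```python
def expanded_orbit_au(a_now_au, remaining_mass_fraction):
    """Adiabatic mass loss: a_final = a_initial * (M_initial / M_final)."""
    return a_now_au / remaining_mass_fraction

for name, a_now in (("Venus", 0.72), ("Earth", 1.00), ("Mars", 1.52)):
    print(f"{name}: {expanded_orbit_au(a_now, 0.54):.1f} AU")
# Venus: 1.3 AU, Earth: 1.9 AU, Mars: 2.8 AU — close to the values quoted above.
```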
The Solar System travels alone through the Milky Way in a circular orbit approximately 30,000 light years from the Galactic Center. Its speed is about 220 km/s. The period required for the Solar System to complete one revolution around the Galactic Center, the galactic year, is in the range of 220–250 million years. Since its formation, the Solar System has completed at least 20 such revolutions.
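The galactic-year figure can be sanity-checked from the orbital radius and speed quoted above, assuming a roughly circular orbit (T = 2πr/v). This Python sketch uses the round values of 30,000 light years and 220 km/s and lands just above the 220–250 million year range; published estimates use slightly different values for the Sun's distance and speed, so treat this as an order-of-magnitude check only.

```python
import math

LIGHT_YEAR = 9.461e15    # metres
YEAR = 3.156e7           # seconds

r = 30_000 * LIGHT_YEAR  # assumed orbital radius, m
v = 220e3                # assumed orbital speed, m/s

period_years = 2 * math.pi * r / v / YEAR
print(f"Galactic year ~ {period_years / 1e6:.0f} million years")   # ~257
```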
Various scientists have speculated that the Solar System's path through the galaxy is a factor in the periodicity of mass extinctions observed in the Earth's fossil record. One hypothesis supposes that vertical oscillations made by the Sun as it orbits the Galactic Centre cause it to regularly pass through the galactic plane. When the Sun's orbit takes it outside the galactic disc, the influence of the galactic tide is weaker; as it re-enters the galactic disc, as it does every 20–25 million years, it comes under the influence of the far stronger "disc tides", which, according to mathematical models, increase the flux of Oort cloud comets into the Solar System by a factor of 4, leading to a massive increase in the likelihood of a devastating impact.
However, others argue that the Sun is currently close to the galactic plane, and yet the last great extinction event was 15 million years ago. They conclude that the Sun's vertical position cannot alone explain such periodic extinctions and propose instead that extinctions occur when the Sun passes through the galaxy's spiral arms. Spiral arms are home not only to larger numbers of molecular clouds, whose gravity may distort the Oort cloud, but also to higher concentrations of bright blue giants, which live for relatively short periods and then explode violently as supernovae.
Main article: Andromeda–Milky Way collision
Although the vast majority of galaxies in the Universe are moving away from the Milky Way, the Andromeda Galaxy, the largest member of the Local Group of galaxies, is heading toward it at about 120 km/s. In 4 billion years, Andromeda and the Milky Way will collide, causing both to deform as tidal forces distort their outer arms into vast tidal tails. If this initial disruption occurs, astronomers calculate a 12% chance that the Solar System will be pulled outward into the Milky Way's tidal tail and a 3% chance that it will become gravitationally bound to Andromeda and thus a part of that galaxy. After a further series of glancing blows, during which the likelihood of the Solar System's ejection rises to 30%, the galaxies' supermassive black holes will merge. Eventually, in roughly 6 billion years, the Milky Way and Andromeda will complete their merger into a giant elliptical galaxy. During the merger, if there is enough gas, the increased gravity will force the gas to the centre of the forming elliptical galaxy. This may lead to a short period of intensive star formation called a starburst. In addition, the infalling gas will feed the newly formed black hole, transforming it into an active galactic nucleus. The force of these interactions will likely push the Solar System into the new galaxy's outer halo, leaving it relatively unscathed by the radiation from these collisions.
It is a common misconception that this collision will disrupt the orbits of the planets in the Solar System. Although it is true that the gravity of passing stars can detach planets into interstellar space, distances between stars are so great that the likelihood of the Milky Way–Andromeda collision causing such disruption to any individual star system is negligible. Although the Solar System as a whole could be affected by these events, the Sun and planets are not expected to be disturbed.
However, over time, the cumulative probability of a chance encounter with a star increases, and disruption of the planets becomes all but inevitable. Assuming that the Big Crunch or Big Rip scenarios for the end of the Universe do not occur, calculations suggest that the gravity of passing stars will have completely stripped the dead Sun of its remaining planets within 1 quadrillion (10¹⁵) years. This point marks the end of the Solar System. Although the Sun and planets may survive, the Solar System, in any meaningful sense, will cease to exist.
The time frame of the Solar System's formation has been determined using radiometric dating. Scientists estimate that the Solar System is 4.6 billion years old. The oldest known mineral grains on Earth are approximately 4.4 billion years old. Rocks this old are rare, as Earth's surface is constantly being reshaped by erosion, volcanism, and plate tectonics. To estimate the age of the Solar System, scientists use meteorites, which were formed during the early condensation of the solar nebula. Almost all meteorites (see the Canyon Diablo meteorite) are found to have an age of 4.6 billion years, suggesting that the Solar System must be at least this old.
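Radiometric dating rests on the exponential decay law. As a hedged illustration only (meteorite ages are actually derived from isochron methods using systems such as lead-lead), this Python sketch shows how an age follows from the fraction of a parent isotope that remains, using uranium-238's roughly 4.47-billion-year half-life as an example; the 49% remaining fraction is chosen purely to illustrate a ~4.6-billion-year result.

```python
import math

def age_from_remaining_fraction(remaining_fraction, half_life_gyr):
    """Invert N(t) = N0 * (1/2)**(t / t_half) to solve for t."""
    return half_life_gyr * math.log(1 / remaining_fraction, 2)

# If about 49% of the original U-238 remained, the sample would be ~4.6 billion years old.
print(f"{age_from_remaining_fraction(0.49, 4.468):.1f} Gyr")
```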
Studies of discs around other stars have also done much to establish a time frame for Solar System formation. Stars between one and three million years old have discs rich in gas, whereas discs around stars more than 10 million years old have little to no gas, suggesting that giant planets within them have ceased forming.
Note: All dates and times in this chronology are approximate and should be taken as an order of magnitude indicator only.
| Phase | Time since formation of the Sun | Time from present (approximate) | Event |
|---|---|---|---|
| Pre-Solar System | Billions of years before the formation of the Solar System | Over 4.6 billion years ago (bya) | Previous generations of stars live and die, injecting heavy elements into the interstellar medium out of which the Solar System formed. |
| Pre-Solar System | ~50 million years before formation of the Solar System | 4.6 bya | If the Solar System formed in an Orion Nebula-like star-forming region, the most massive stars form, live their lives, die, and explode in supernovae. One particular supernova, called the primal supernova, possibly triggers the formation of the Solar System. |
| Formation of Sun | 0–100,000 years | 4.6 bya | Pre-solar nebula forms and begins to collapse. Sun begins to form. |
| Formation of Sun | 100,000 – 50 million years | 4.6 bya | Sun is a T Tauri protostar. |
| Formation of Sun | 100,000 – 10 million years | 4.6 bya | By 10 million years, gas in the protoplanetary disc has been blown away, and outer planet formation is likely complete. |
| Formation of Sun | 10 million – 100 million years | 4.5–4.6 bya | Terrestrial planets and the Moon form. Giant impacts occur. Water is delivered to Earth. |
| Main sequence | 50 million years | 4.5 bya | Sun becomes a main-sequence star. |
| Main sequence | 200 million years | 4.4 bya | Oldest known rocks on Earth form. |
| Main sequence | 500–600 million years | 4.0–4.1 bya | Resonance in Jupiter's and Saturn's orbits moves Neptune out into the Kuiper belt. Late Heavy Bombardment occurs in the inner Solar System. |
| Main sequence | 800 million years | 3.8 bya | Oldest known life on Earth. Oort cloud reaches maximum mass. |
| Main sequence | 4.6 billion years | Today | Sun remains a main-sequence star. |
| Main sequence | 6 billion years | 1.4 billion years in the future | Sun's habitable zone moves outside of Earth's orbit, possibly shifting onto Mars's orbit. |
| Main sequence | 7 billion years | 2.4 billion years in the future | The Milky Way and Andromeda Galaxy begin to collide. Slight chance the Solar System could be captured by Andromeda before the two galaxies fuse completely. |
| Post–main sequence | 10–12 billion years | 5–7 billion years in the future | Sun has fused all of the hydrogen in its core and starts to burn hydrogen in a shell surrounding it, ending its main-sequence life. The Sun begins to ascend the red-giant branch of the Hertzsprung–Russell diagram, growing dramatically more luminous (by a factor of up to 2,700), larger (by a factor of up to 250 in radius), and cooler (down to about 2,600 K): the Sun is now a red giant. Mercury, Venus, and possibly Earth are swallowed. During this time Saturn's moon Titan may become habitable. |
| Post–main sequence | ~12 billion years | ~7 billion years in the future | Sun passes through helium-burning horizontal-branch and asymptotic-giant-branch phases, losing a total of ~30% of its mass in all post-main-sequence phases. The asymptotic-giant-branch phase ends with the ejection of its outer layers as a planetary nebula, leaving the dense core of the Sun behind as a white dwarf. |
| Remnant Sun | ~1 quadrillion years (10¹⁵ years) | ~1 quadrillion years in the future | Sun cools to 5 K. Gravity of passing stars detaches planets from their orbits. Solar System ceases to exist. |
The Historical Journey of Berlin
Berlin, the vibrant and culturally rich German capital, has an intriguing history that stretches back centuries. It has evolved through various political, social, and economic transformations, solidifying its position as a city of international significance. One of the pivotal milestones in Berlin’s history was when it became the capital of Germany. Let’s delve into this captivating journey and discover the factors that led to Berlin’s rise as the nation’s capital.
The Founding Years
The origins of Berlin can be traced back to the 13th century when it was established as a small town in the Margraviate of Brandenburg. Over time, it gradually grew in size and importance, becoming the capital of the Kingdom of Prussia in 1701. The kingdom, led by the ambitious Prussian monarchy, played a crucial role in shaping Berlin’s destiny and laying the foundation for its future as the capital of Germany.
The Unification of Germany
The unification of Germany in the late 19th century was a significant turning point in Berlin’s history. The vision of a united Germany was realized under the leadership of Chancellor Otto von Bismarck, who skillfully navigated the complex political landscape of German states. On January 18, 1871, the German Empire was proclaimed in the grand Hall of Mirrors at the Palace of Versailles, with Berlin designated as its capital.
Berlin’s selection as the capital of the newly united Germany was not solely based on its geographical location. It also held significant symbolic meaning. The city’s historical, cultural, and intellectual contributions to German society made it a natural choice for the seat of power. Berlin’s renowned universities, such as Humboldt University, its thriving artistic scene, and its role as a center for trade and commerce, all contributed to its status as a hub of innovation and influence.
Located in the heart of Europe, Berlin’s central position contributed to its suitability as a capital city. Its proximity to major trading routes allowed for easy access and connectivity between different regions of the country. Furthermore, as the long-standing capital of Prussia, the state that had led unification, Berlin was the natural seat of government for the new empire.
Challenges and Transformation
While Berlin’s ascendancy as the capital of Germany brought prosperity and prestige, it also faced numerous challenges throughout its journey. These difficulties ultimately shaped the city into the dynamic metropolis it is today.
World Wars and Division
The 20th century brought immense hardship for Berlin. During World War I, the city experienced economic and social upheaval like the rest of Germany. However, it was during World War II that Berlin endured its most devastating period. The city was heavily bombed, resulting in the destruction of many iconic landmarks and the loss of countless lives.
Following the war, Germany itself was divided into East and West, and in 1961 the construction of the Berlin Wall physically separated East and West Berlin, symbolizing the ideological divide between the communist and capitalist worlds. The fall of the Berlin Wall in 1989 marked a significant milestone in German history, paving the way for reunification and reestablishing Berlin as the capital of a unified Germany.
Rebuilding and Rejuvenation
After the reunification, Berlin embarked on a remarkable journey of rebuilding and renewal. The city underwent extensive reconstruction efforts, restoring historical landmarks such as the Brandenburg Gate and revitalizing neighborhoods that had been neglected for decades. New modern architectural marvels emerged alongside the preserved historic structures, creating a captivating blend of past and present.
Cultural Diversity and Modernity
Today, Berlin stands as a city at the forefront of cultural diversity and modernity. It is not only the political center of Germany but also a hub for arts, music, fashion, and innovation. Its thriving creative scene and vibrant nightlife attract people from all over the world, cementing its position as a global metropolis.
Berlin’s journey to becoming the capital of Germany is a testament to its resilience and adaptability. From humble beginnings to the challenges of war and division, the city has emerged as a symbol of unity, progress, and creativity. As you explore Berlin, take a moment to appreciate the rich history and diverse culture that have shaped this remarkable capital.
Every computing device has storage devices installed to save instructions, data, and files. These storage devices are categorised into two types — primary and secondary storage devices. Primary storage devices are a necessity for the system to run and function, while secondary storage devices are a permanent storage solution that can be present as an internal or external device.
Random Access Memory (RAM) and Read-Only Memory (ROM) are two types of primary storage devices present on different computing devices. RAM provides temporary storage for data and instructions and is also known as the main memory of the system. How much RAM a system can use depends on its architecture and OS: a 32-bit OS can address up to 4 GB of RAM, while a 64-bit system can in theory address up to 16 exabytes, although practical limits set by the OS and hardware are far lower.
ROM, or Read-Only Memory, is a permanent primary storage device that typically holds only a small amount of data. The contents of ROM are mostly written by the manufacturer, which protects vital information or instructions, such as firmware, from alteration by users. Even though RAM and ROM are both primary storage devices, they differ from each other in many ways.
Although these are basic terms, they can still confuse many PC users. Below, we compare RAM and ROM on seven different parameters.
RAM vs ROM
- Volatility: RAM is a volatile storage device, while ROM is non-volatile. A volatile storage device works only when power is provided and loses the contents stored on it after power is turned off. ROM keeps the data saved even after power is switched off.
- Use: RAM temporarily holds the data the CPU needs for processing, along with intermediate and output files, before they are sent to secondary storage. ROM usually stores the system's BIOS or firmware modules, which contain the basic instructions the computer needs to start up.
- Speed: Both are high-speed storage devices, but RAM is considerably faster than ROM. For example, DDR4 SDRAM runs at clock frequencies between 800 and 1600 MHz and operates at 1.2 V.
- Read/Write Access: The data in RAM can be read, altered, and written by the user. ROM, on the other hand, only allows the user to read the data; in traditional ROM, data is written just once, typically at manufacture.
- Accessibility: Data stored on RAM is easily accessible by the CPU and the user, while the data stored on the ROM needs to be loaded onto RAM. A processor or user can only access the data after it’s loaded on RAM.
- Capacity: RAM and ROM come in different sizes according to the needs of the user. RAM modules can range from 16 MB to 256 GB, while ROM chips usually provide a capacity of only 4 to 8 MB, so RAM has a much larger storage capacity (a small sketch for checking installed RAM follows this list).
- Cost: RAM is a significantly costlier storage device than ROM.
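On POSIX systems you can query how much physical RAM is installed from Python; a minimal sketch is below. It assumes a Linux-like OS that exposes the SC_PAGE_SIZE and SC_PHYS_PAGES sysconf values, and it reports installed RAM only; it says nothing about ROM, whose contents (such as firmware) are normally read through dedicated tools rather than a general-purpose API.

```python
import os

# Total physical RAM = page size (bytes) * number of physical pages.
page_size = os.sysconf("SC_PAGE_SIZE")
phys_pages = os.sysconf("SC_PHYS_PAGES")

total_gib = page_size * phys_pages / 2**30
print(f"Installed RAM: {total_gib:.1f} GiB")
```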
Anna wants to redecorate an old table. She wants to put decorative tape around the outside of the table. The problem is, she only knows the area of the table and one side length. She needs to know the other side length in order to get enough tape to cover all four sides. The area of the table is 32 square feet and the length of the table is 8 feet. Anna needs to solve for the missing dimension to determine if a roll of tape that is 25 feet long will be enough tape.
In this concept, you will learn how to figure out unknown dimensions of length or width when given the area or perimeter of a figure.
Dimensions are measurements needed in order to find the area or perimeter of a square or rectangle. The dimensions that you are familiar with are length and width (or side length in a square). Sometimes, a math problem gives you the area or perimeter and asks you to solve for a missing dimension.
For example, there is a square with a perimeter of 12 inches. Find the side length of the square.
First, write the formula for the perimeter of a square: P = 4s.
Then, substitute the known perimeter to get 4s = 12. To solve the equation, you can either rewrite it as a division problem (s = 12 ÷ 4) or ask yourself what number multiplied by 4 gives you a total of 12. Either strategy will get you the answer of 3 inches.
Sometimes a figure is presented and the area of that figure is provided.
For example, find the side length for a square with an area of 36 sq. in.
First, use the formula for finding the area of a square: A = s × s.
Then, think, “What number times itself will give me 36?” The answer is 6.
Double check your work by making sure the answer multiplied by itself gives the area. In this case, the answer checks out as 6 times 6 does equal 36.
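These two "undo the formula" steps translate directly into code. The following is a minimal Python sketch (the function names are illustrative, not part of the lesson) that recovers a square's side length from either its perimeter or its area.

```python
import math

def side_from_perimeter(perimeter):
    """P = 4 * s, so s = P / 4."""
    return perimeter / 4

def side_from_area(area):
    """A = s * s, so s is the square root of A."""
    return math.sqrt(area)

print(side_from_perimeter(12))  # 3.0 (a 12-inch perimeter gives a 3-inch side)
print(side_from_area(36))       # 6.0 (a 36 sq. in. area gives a 6-inch side)
```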
This same concept works for rectangles; you just use the formulas that are appropriate for a rectangle.
When the perimeter or area is given, you plug in the known information and solve for the missing dimension. For area, the concept works the same as for a square, except that length and width are used instead of "s" for side length. Solving for a missing dimension from the perimeter of a rectangle looks a little different than it does for a square.
For example, find the width of a rectangle whose perimeter is 18 inches and the length is 6 inches.
First, write the formula and substitute the given information: P = 2l + 2w, so 18 = 2(6) + 2w.
This equation shows that the only variable present is the width, the missing dimension. Solve what can be solved first in the problem before isolating the variable "w": 2(6) is 12, so 18 = 12 + 2w.
Then, to isolate the 2w, subtract 12 from both sides of the equation. That leaves the equation looking like this: 6 = 2w.
Now either rewrite the equation as division (w = 6 ÷ 2) or ask yourself what number times 2 gives you 6. Either way, the width is 3 inches.
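The rectangle version of the same idea can be sketched in Python as well; the helper name below is illustrative rather than standard.

```python
def width_from_perimeter(perimeter, length):
    """P = 2*l + 2*w  ->  w = (P - 2*l) / 2."""
    return (perimeter - 2 * length) / 2

print(width_from_perimeter(18, 6))  # 3.0 inches, matching the worked example
```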
Earlier, you were given a problem about Anna and her table.
She knows the area of the table is 32 sq. ft. and the length is 8 feet. She needs to find the width of the table to then determine if 25 ft. of tape is enough.
First, Anna writes out the formula to solve for the missing dimension: A = l × w.
Next, Anna plugs in the information she already has. She knows the area and the length: 32 = 8 × w.
Then, she solves for the missing dimension (width) by dividing 32 by 8.
The missing width is 4 feet. But now, Anna needs to figure out if she has enough tape. Now that Anna knows both dimensions of the table, she can find the perimeter of the outside of the table using the formula P = 2l + 2w = 2(8) + 2(4) = 24.
The total perimeter of Anna's table is 24 feet, which means 25 feet of tape is enough.
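Anna's whole check can be scripted the same way. This is a small illustrative sketch, not part of the original lesson, combining the missing-width calculation with the tape comparison.

```python
def width_from_area(area, length):
    """A = l * w  ->  w = A / l."""
    return area / length

table_length = 8     # feet
table_area = 32      # square feet
tape_available = 25  # feet

table_width = width_from_area(table_area, table_length)  # 4.0 feet
perimeter = 2 * table_length + 2 * table_width           # 24.0 feet

print(perimeter <= tape_available)  # True: 25 feet of tape is enough
```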
A square garden has an area of 144 square meters. What is the side length of the plot?
First, write the formula for the area of a square: A = s × s.
Next, plug in the information given in the problem. The area is given, so it substitutes for A: 144 = s × s.
Then, figure out which number times itself gives you 144: 12 × 12 = 144.
The answer is 12 meters.
Find the length of a rectangle that has a perimeter of 48 feet and a width of 9 feet.
First, write out the formula for the perimeter of a rectangle: P = 2l + 2w.
Next, plug in the given information and solve the parts of the equation that can be solved already: 48 = 2l + 2(9), so 48 = 2l + 18.
Then, isolate the missing dimension variable and solve for the final answer: 2l = 30, so l = 15.
The answer is 15 feet.
Find the side length of a square that has a perimeter of 56 feet.
First, write the formula for the perimeter of a square: P = 4s.
Next, plug in the given information: 56 = 4s.
Then, solve for the side length: s = 56 ÷ 4 = 14.
The answer is 14 feet.
Find the width of a rectangle that has an area of 120 sq. miles and a length of 12 miles.
First, write the formula for the area of a rectangle: A = l × w.
Next, plug in the given information: 120 = 12 × w.
Then, solve for the missing dimension: w = 120 ÷ 12 = 10.
The answer is 10 miles.
Find the side length of each square given its perimeter.
- P = 24 inches
- P = 36 inches
- P = 50 inches
- P = 88 centimeters
- P = 90 meters
- P = 20 feet
- P = 32 meters
- P = 48 feet
Find the side length of each square given its area.
- A = 64 sq. inches
- A = 49 sq. inches
- A = 121 sq. feet
- A = 144 sq. meters
- A = 169 sq. miles
- A = 25 sq. meters
- A = 81 sq. feet
- A = 100 sq. miles |
In maths, a ratio represents a comparison between two or more quantities, indicating how their sizes relate to one another. For example, in a basket with one apple and two oranges, the ratio of apples to oranges is 1 to 2 (written 1:2), which means that for every apple in the basket there are two oranges. Expressed as a decimal, the ratio of apples to oranges is 0.5, meaning that for every orange in the basket there is half an apple.
A ratio can compare quantities of the same category: for example, the ratio of boys to girls in a classroom is 2:3, and both boys and girls belong to the same category, "people". A ratio can also relate two or more quantities of different categories or units: for example, the ratio of distance, measured in km, to time, measured in hours, gives speed, the quantity that measures distance covered per unit of time and whose units are km/hour.
In a ratio such as 3:1, the first term (3) is called the antecedent and the second term (1) is called the consequent. Ratios should normally be presented in their simplified form: 12:4 simplifies to 3:1 when both sides are divided by 4. Equivalent ratios are produced by multiplying or dividing both sides by the same number, so 12:4 is equivalent to 3:1. Ratios tell you the proportion of each quantity in comparison to the other, and when expressing a ratio you need to make sure the antecedent and the consequent are in the same units, whether that is cm, mm, or km. Ratios are also used in drawings, such as architectural designs, and in models to show perspective and relative size at a smaller scale.
To simplify a ratio, divide both the antecedent and the consequent by their highest common factor; for 8:12, the highest number that divides into both 8 and 12 is 4, giving 2:3. To work easily with ratios, whole numbers are needed, and units must match before you simplify. Equivalent ratios can also be used to scale quantities up or down, for example the quantities of ingredients when baking a cake, or a bag of blue and pink sweets mixed in the ratio 4:6, which scales up to 8 blue sweets and 12 pink sweets. In most cases it is preferable to give a simplified ratio as an answer, and you are likely to use ratios throughout your life and to be tested on skills like these when applying for jobs in technical industries.
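Simplifying a ratio by the highest common factor, and scaling an equivalent ratio, can be sketched in a few lines of Python; math.gcd returns the highest common factor of two whole numbers, and the function names here are illustrative.

```python
from math import gcd

def simplify(antecedent, consequent):
    """Divide both sides of a ratio by their highest common factor."""
    factor = gcd(antecedent, consequent)
    return antecedent // factor, consequent // factor

def scale(antecedent, consequent, multiplier):
    """Multiply both sides by the same number to get an equivalent ratio."""
    return antecedent * multiplier, consequent * multiplier

print(simplify(12, 4))  # (3, 1)
print(simplify(8, 12))  # (2, 3)
print(scale(2, 3, 4))   # (8, 12) -- e.g. 8 blue sweets and 12 pink sweets
```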
Ratio questions are common in maths and numeracy tests, and especially in numerical reasoning tests used by employers. If you have been asked to take a numerical reasoning test, chances are you will need to know how to work out ratios, which behave much like fractions. A typical question might state that the ratio of boys to girls in a college is 7:8, or that sales revenue in 2011 was split between online and offline sales in the ratio 7:2 and ask what the revenue from offline sales was. Practising this style of question, and learning to spot the common pitfalls, is the quickest way to answer ratio questions accurately under time pressure. |
Optical Instruments Teacher Resources
Find Optical Instruments educational ideas and activities
Students identify the parts of a human eye. In this eye lesson students compare a human eye to the lenses of a camera and explain what a hologram is.
A few definitions related to waves open this slide show. Note that the information only covers light waves even though the title mentions sound; correct the title before using this resource. Also included is a set of photos of a class project, which you can delete. Making these alterations will leave you with a very colorful and impactful lesson on the electromagnetic spectrum, reflection, refraction, color, uses of light, and more!
All aspects of the path of light are included in a great summary. Internal reflection and the angles of paths in different materials are explained and the behavior of visible light through lenses and the effect on focal points are detailed. Your class will enjoy the diagram giving parallels between the eyeball and a camera.
Third graders utilize the scientific method to explain light and optics in this five lessons unit. Through experimentation and discussion, 3rd graders canvass the concepts of light traveling, reflection and refraction.
When young physicists study light, they will need to explore refraction, fiber optics, and birefringence. Within this resource are the background information and activity instructions for exploring all of these phenomena. Use all of the included material for a well-rounded unit on the behavior of light, or choose one of the many activities to support your own curriculum.
For this Sombrero Galaxy worksheet, students observe infrared images taken by the Spitzer Infrared Telescope and the Hubble Space Telescope. They answer 9 questions about the details of the images such as the radius of the stellar component, the thickness of the dust disk and the diameter of the bright nuclear core.
Eighth graders are introduced to concepts related to the Solar System. In groups, they participate in an experiment in which they must describe a ray of light and how it travels. They draw a diagram of the electromagnetic spectrum and describe the wavelengths associated with each type of light. They end the lesson by showing how light is reflected by mirrors.
You could call this five presentations rolled into one resource! The first topic of concern is the characteristics of electromagnetic waves.The electromagnetic spectrum is examined next, followed by the behavior of light. Several slides are dedicated to color, and even optical illusions are introduced. This may be one of the most comprehensive collections of slides on the topic of electromagnetic waves that you will come across! Use it when introducing high school physical science starters to this brilliant topic!
Students use a software program to create artwork and to manipulate images to study mirror and rotational symmetry. They take pictures of items in their environment in which they identify symmetry.
In this letter o words worksheet, students match the letter o words to their definitions. Students complete 10 matches for letter o words to definitions.
Pupils research space flight exploration and technology. In small groups, they research a significant event from early time until the start of the space age. A class time line is created from the research groups.
Fourth graders research a person who made a difference in New York's history, write short biographies, and then become that person during The Living History Museum. They can choose a person from any time period.
Students work together to test how the color of a material affects how much heat it absorbs. They make predictions and take notes on their observations. They discover how engineers use this type of information.
Students explore photographs from the Civil War Era. In this Civil War lesson, students consider how photography impacted public opinion of the war as they analyze the provided photographs and discuss the evolution of early photography.
Students investigate with sea urchins. In this ocean habitat lesson, students observe sea urchins and other ocean grazers. Students work with lab equipment to examine the anatomy of these creatures.
Students draw a diagram that shows the law of reflection. In this physics activity, students investigate the relationship between the angle of incidence and angle of reflection. They explain how light travels as it reflects on a surface.
Students investigate the integrity and strength of different types of food wraps. They test the wraps and create a graphic organizer for the data. Once it is organized then a lab report can be written. The lesson plan contains background information for the teacher who may not understand chemistry.
Students explain how telescopes work and how they can contribute to our knowledge of the universe.
Fourth graders use circles to "home in" on particular spots, showing the ability of scientists to locate unseen objects in space. This activity shows how scientists know certain objects exist in space due to the forces exerted by adjacent bodies.
Learners work in teams to research common categories of inventions and their development over time. They access primary and secondary sources; create timelines, glossaries and oral presentations and include a developed bibliography. |
Welcome to your ultimate guide on unlocking the secrets of computer hardware! Computer hardware may seem complex and intimidating at first, but with this guide, we aim to simplify it for you. In this article, we will take an in-depth look at the various components of computer hardware, including the CPU, motherboard, RAM, graphics card, storage devices, and more. We will also explore how each component works together to create a complete and functional computer system. Whether you are a seasoned computer enthusiast or a curious newcomer to the world of hardware, this guide will provide you with a comprehensive understanding of computer hardware and its inner workings. So sit back, relax, and let us guide you through the exciting world of computer hardware!
-Understanding the Basics of Computer Hardware
Components of a computer that can be seen and touched are referred to as hardware. Computer hardware is made up of different interconnected parts that work together to allow the computer to function properly. Understanding the basics of computer hardware is crucial for anyone who uses a computer.
The Central Processing Unit (CPU) is the brain of the computer. It is responsible for handling all the computations in the computer. The Random Access Memory (RAM) is a short-term memory that the CPU uses to store data that is currently in use. The Hard Disk Drive (HDD) is a long-term memory storage device where all information is stored, including the operating system, software, and data. Other important hardware components include the motherboard, power supply unit, and peripherals like the keyboard, mouse, and monitor.
Maintaining computer hardware is essential for ensuring proper functionality and longevity. Regular cleaning of the internal components and peripherals, periodic software updates, and proper handling of the equipment can help to extend the lifespan of computer hardware. By understanding how to maintain and troubleshoot computer hardware, users can reduce the risk of hardware failure and ensure a smooth and efficient computing experience.
-The Inner Workings of Processing Units: CPUs and GPUs
CPUs and GPUs are two different types of processing units, each with its own strengths and weaknesses. CPUs are responsible for general-purpose computing tasks such as running applications, managing operating systems, and processing inputs from various devices. They have a small number of cores but each core operates at a higher frequency allowing it to execute instructions quickly. GPUs, on the other hand, are optimized for parallel operations, making them ideal for tasks that require intensive mathematical calculations such as gaming and video rendering.
Both CPUs and GPUs have a similar design in terms of how they process data, but they differ in their approach to executing instructions. CPUs utilize a pipeline architecture to process instructions in a linear fashion, while GPUs employ a massively parallel architecture with thousands of computing cores executing instructions simultaneously.
Another key difference between CPUs and GPUs is the way they are optimized for power consumption. CPUs are designed to conserve power by reducing their frequency or shutting down cores when they are not in use. GPUs, on the other hand, consume a lot of power because of their massive computing capability, making them less efficient when performing non-intensive tasks.
In summary, CPUs and GPUs play important roles in computing, with each having its own strengths and weaknesses. While CPUs are ideal for general computing tasks, GPUs are optimized for parallel operations and heavy-duty computational tasks. Understanding the differences between CPUs and GPUs is essential for optimizing computer performance and building a computer geared towards specific use cases.
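The serial-versus-parallel contrast can be illustrated loosely in Python: a plain loop handles one element at a time, much as a single CPU core steps through instructions, while a NumPy vectorised expression states the same work as one data-parallel operation, which is the style of computation GPUs are built for. This is only an analogy sketch (NumPy itself runs on the CPU), not a benchmark of real hardware.

```python
import numpy as np

data = np.arange(100_000, dtype=np.float64)

# CPU-style: one element at a time, in sequence.
serial_result = [x * x + 1.0 for x in data]

# Data-parallel style: the whole array in a single vectorised operation,
# analogous to a GPU applying the same instruction across many cores at once.
parallel_result = data * data + 1.0

print(np.allclose(serial_result, parallel_result))  # True -- same answer, different execution style
```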
-The Role of Memory and Storage Devices in Your Computer
Understanding the role of memory and storage devices in your computer is crucial for maintaining optimal performance. Memory is the short-term storage space that holds information temporarily while your computer is in use. It allows your computer to access data quickly and efficiently, but it has limited capacity. When the memory is full, your computer can slow down or even crash.
On the other hand, storage devices, such as hard drives or solid-state drives (SSDs), provide long-term storage for your data, applications, and operating system. They are essential for running your computer and housing your files, documents, photos, and videos. Spinning hard drives are slower, but they offer more storage capacity, whereas SSDs are faster, but they typically have less storage capacity.
To ensure your computer runs smoothly, it’s important to strike a balance between memory and storage. Upgrading your computer’s memory or storage can help increase its performance and enhance your computing experience. However, keep in mind that adding more memory or storage may not solve every issue, and it’s important to maintain your computer regularly to ensure its longevity.
-Exploring Input and Output Devices: Keyboards, Mice, and Monitors
Keyboards are essential input devices that allow users to interact with their computers. They come in different sizes and layouts, including QWERTY, AZERTY, and DVORAK. Some keyboards have additional features like multimedia keys, gaming keys, and backlit keys. The most common type of keyboard is the wired keyboard, which connects to the computer via a USB cable. However, wireless options like Bluetooth and RF are becoming more popular.
A computer mouse is a handheld device that allows users to move the cursor and interact with the graphical user interface (GUI) of the computer. There are different types of mice, including the traditional wired mouse and the wireless mouse. The latter uses Bluetooth or RF technology to connect to the computer. Mice come in various shapes and sizes, including the standard design and ergonomic design. A gaming mouse has additional features like programmable buttons, adjustable sensitivity, and customizable lighting.
The monitor is the output device that displays the content generated by the computer. The most common type of monitor is the LCD (Liquid Crystal Display) monitor, which is light and energy-efficient. The display size of monitors ranges from 15 inches to more than 30 inches, with a resolution of 1920×1080 (Full HD) to 3840×2160 (4K Ultra HD). Some monitors have additional features like built-in webcams, speakers, and USB hubs. Monitors can be connected to the computer using VGA, DVI, HDMI, or DisplayPort.
-Hardware Maintenance: Tips and Tricks for Keeping Your System Running Smoothly
Prevent dust buildup inside your hardware by cleaning it regularly. You can use a soft-bristled brush or a can of compressed air to remove dirt and dust from the fans, heatsinks, and other components. Make sure to turn off your system before cleaning and avoid using water or abrasive cleaners as they can damage your hardware.
Check your hardware for signs of wear and tear such as loose connections, bent pins, or cracks. If you notice any issues, fix them immediately before they cause permanent damage. You can also run diagnostic tools to detect hardware problems such as hard drive errors or malfunctioning RAM.
Make sure to update your hardware drivers and firmware regularly to ensure optimal performance and compatibility. Check the manufacturer’s website for updates and follow the instructions carefully to avoid any issues during the update process. By following these tips and tricks, you can keep your system running smoothly for years to come.
Questions People Also Ask:
1. What is computer hardware?
Computer hardware consists of physical components that make up the computer, such as the processor, motherboard, hard drive, graphics card, and memory. These components work together to allow the computer to perform various functions and tasks.
2. What are the different types of computer hardware?
There are several different types of computer hardware, including input devices (e.g., keyboard, mouse), output devices (e.g., monitor, printer), storage devices (e.g., hard drive, SSD), processing devices (e.g., CPU, GPU), and memory (e.g., RAM, ROM).
3. What is the importance of computer hardware?
Computer hardware is essential for the functioning of a computer. Without hardware components, a computer would not be able to perform any tasks or functions. Hardware components also play a critical role in determining the speed and overall performance of a computer.
4. What are the various functions of computer hardware components?
Different hardware components perform different functions in a computer. For example, the CPU processes data and instructions, the GPU renders graphics, the memory stores temporary data and instructions, and the hard drive stores long-term data and applications.
5. How can computer hardware be upgraded?
Computer hardware can be upgraded by replacing existing components with newer or more powerful ones. This could involve adding more memory, upgrading the CPU, or installing a faster hard drive. It is important to ensure compatibility between the new hardware and the existing computer components.
6. Can computer hardware components be repaired?
In some cases, computer hardware components can be repaired rather than replaced. For example, a broken motherboard may be repairable by a knowledgeable technician. However, in many cases, it is more cost-effective to replace the faulty component rather than repair it.
7. What is the future of computer hardware?
The future of computer hardware looks promising, with advancements in technology such as quantum computing and artificial intelligence. These advancements will likely lead to faster and more powerful computer hardware, better energy efficiency, and increased connectivity between devices.
- Clean your hardware regularly: Regular cleaning of hardware components will maintain optimum performance and longevity of your system by removing accumulated dust and dirt.
- Upgrade and replace hardware components when necessary: Check if your hardware meets the recommended system requirements for software programs and games and upgrade components accordingly.
- Back up your system regularly: Create a backup of your data in a separate storage device or on the cloud to protect against unexpected hardware failures.
- Use surge protectors and uninterruptible power supply: Use these devices to prevent your system from power surges and fluctuations that can cause damage to internal components.
- Monitor system temperatures: Check the temperature of your system components and ensure they are within the recommended ranges to prevent damage (a small monitoring sketch follows this list).
- Update drivers and firmware: Keep your hardware drivers and firmware up to date to ensure the best performance and security.
- Use anti-virus and anti-malware software: Protect your system from malicious software infections that could cause hardware damage or data loss.
- Be cautious when handling hardware components: Follow proper handling procedures when installing or removing hardware components to avoid damage.
- Check for errors: Regularly check system logs for errors and address them promptly to prevent hardware damage or system failures.
- Seek professional help when needed: If you are unsure of how to perform hardware maintenance tasks or suspect hardware damage, seek professional help.
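As an illustration of the temperature-monitoring tip above, the sketch below uses the third-party psutil package (assumed to be installed; its temperature sensors are only exposed on some platforms, notably Linux). Treat it as a starting point rather than a universal tool.

```python
import psutil

def report_temperatures(warn_at_celsius=80.0):
    """Print each available temperature sensor and flag readings above a threshold."""
    sensors = getattr(psutil, "sensors_temperatures", None)
    if sensors is None:
        print("Temperature sensors are not supported on this platform.")
        return
    readings = sensors()
    if not readings:
        print("No temperature sensors were found.")
        return
    for chip, entries in readings.items():
        for entry in entries:
            label = entry.label or chip
            warning = "  <-- above threshold" if entry.current >= warn_at_celsius else ""
            print(f"{label}: {entry.current:.1f} C{warning}")

report_temperatures()
```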
|
Transcription is the first step of gene expression, in which a particular segment of DNA is copied into RNA (especially mRNA) by the enzyme RNA polymerase. Both DNA and RNA are nucleic acids, which use base pairs of nucleotides as a complementary language. During transcription, a DNA sequence is read by an RNA polymerase, which produces a complementary, antiparallel RNA strand called a primary transcript.
Transcription proceeds in the following general steps:
- RNA polymerase, together with one or more general transcription factors, binds to promoter DNA.
- RNA polymerase creates a transcription bubble, which separates the two strands of the DNA helix. This is done by breaking the hydrogen bonds between complementary DNA nucleotides.
- RNA polymerase adds RNA nucleotides (which are complementary to the nucleotides of one DNA strand).
- RNA sugar-phosphate backbone forms with assistance from RNA polymerase to form an RNA strand.
- Hydrogen bonds of the RNA–DNA helix break, freeing the newly synthesized RNA strand.
- If the cell has a nucleus, the RNA may be further processed. This may include polyadenylation, capping, and splicing.
- The RNA may remain in the nucleus or exit to the cytoplasm through the nuclear pore complex.
The stretch of DNA transcribed into an RNA molecule is called a transcription unit and encodes at least one gene. If the gene encodes a protein, the transcription produces messenger RNA (mRNA); the mRNA, in turn, serves as a template for the protein's synthesis through translation. Alternatively, the transcribed gene may encode for either non-coding RNA (such as microRNA), ribosomal RNA (rRNA), transfer RNA (tRNA), or other enzymatic RNA molecules called ribozymes. Overall, RNA helps synthesize, regulate, and process proteins; it therefore plays a fundamental role in performing functions within a cell.
In virology, the term may also be used when referring to mRNA synthesis from an RNA molecule (i.e., RNA replication). For instance, the genome of a negative-sense single-stranded RNA (ssRNA −) virus may serve as the template for a positive-sense single-stranded RNA (ssRNA +). This is because the positive-sense strand contains the information needed to translate the viral proteins for viral replication afterwards. This process is catalyzed by a viral RNA replicase.
A DNA transcription unit encoding for a protein may contain both a coding sequence, which will be translated into the protein, and regulatory sequences, which direct and regulate the synthesis of that protein. The regulatory sequence before ("upstream" from) the coding sequence is called the five prime untranslated region (5'UTR); the sequence after ("downstream" from) the coding sequence is called the three prime untranslated region (3'UTR).
Only one of the two DNA strands serves as a template for transcription. The antisense strand of DNA is read by RNA polymerase from the 3' end to the 5' end during transcription (3' → 5'). The complementary RNA is created in the opposite direction, in the 5' → 3' direction, matching the sequence of the sense strand except that uracil is substituted for thymine. This directionality is because RNA polymerase can only add nucleotides to the 3' end of the growing mRNA chain. This use of only the 3' → 5' DNA strand eliminates the need for the Okazaki fragments that are seen in DNA replication. It also removes the need for an RNA primer to initiate RNA synthesis, as is the case in DNA replication.
The non-template (sense) strand of DNA is called the coding strand, because its sequence is the same as the newly created RNA transcript (except for the substitution of uracil for thymine). This is the strand that is used by convention when presenting a DNA sequence.
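The base-pairing rule just described is simple enough to express in code. The sketch below (the function name is mine, not standard nomenclature) transcribes a template strand given 3' → 5' into the mRNA written 5' → 3', which matches the coding strand except that uracil replaces thymine.

```python
# How each template-strand DNA base pairs with the RNA base added by RNA polymerase.
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5):
    """Return the mRNA (written 5' -> 3') for a template strand given 3' -> 5'."""
    return "".join(DNA_TO_RNA[base] for base in template_3_to_5)

template = "TACGGATTC"       # template strand, read 3' -> 5'
print(transcribe(template))  # AUGCCUAAG -- the coding strand with U in place of T
```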
Transcription has some proofreading mechanisms, but they are fewer and less effective than the controls for copying DNA; therefore, transcription has a lower copying fidelity than DNA replication.
Transcription is divided into initiation, promoter escape, elongation, and termination.
Transcription begins with the binding of RNA polymerase, together with one or more general transcription factor, to a specific DNA sequence referred to as a "promoter" to form an RNA polymerase-promoter "closed complex" (called a "closed complex" because the promoter DNA is fully double-stranded).
RNA polymerase, assisted by one or more general transcription factors, then unwinds approximately 14 base pairs of DNA to form an RNA polymerase-promoter "open complex" (called an "open complex" because the promoter DNA is partly unwound and single-stranded) that contains an unwound, single-stranded DNA region of approximately 14 base pairs referred to as the "transcription bubble."
RNA polymerase, assisted by one or more general transcription factors, then selects a transcription start site in the transcription bubble, binds to an initiating NTP and an extending NTP (or a short RNA primer and an extending NTP) complementary to the transcription start site sequence, and catalyzes bond formation to yield an initial RNA product.
In bacteria, RNA polymerase core enzyme consists of five subunits: 2 α subunits, 1 β subunit, 1 β' subunit, and 1 ω subunit. In bacteria, there is one general RNA transcription factor: sigma. RNA polymerase core enzyme binds to the bacterial general transcription factor sigma to form RNA polymerase holoenzyme and then binds to a promoter.
In archaea and eukaryotes, RNA polymerase contains subunits homologous to each of the five RNA polymerase subunits in bacteria and also contains additional subunits. In archaea and eukaryotes, the functions of the bacterial general transcription factor sigma are performed by multiple general transcription factors that work together. In archaea, there are three general transcription factors: TBP, TFB, and TFE. In eukaryotes, in RNA polymerase II-dependent transcription, there are six general transcription factors: TFIIA, TFIIB (an ortholog of archaeal TFB), TFIID (a multisubunit factor in which the key subunit, TBP, is an ortholog of archaeal TBP), TFIIE (an ortholog of archaeal TFE), TFIIF, and TFIIH. In archaea and eukaryotes, the RNA polymerase-promoter closed complex is usually referred to as the "preinitiation complex."
Transcription initiation is regulated by additional proteins, known as activators and repressors, and, in some cases, associated coactivators or corepressors, which modulate formation and function of the transcription initiation complex.
After the first bond is synthesized, the RNA polymerase must escape the promoter. During this time there is a tendency to release the RNA transcript and produce truncated transcripts. This is called abortive initiation, and is common for both eukaryotes and prokaryotes. Abortive initiation continues to occur until an RNA product of a threshold length of approximately 10 nucleotides is synthesized, at which point promoter escape occurs and a transcription elongation complex is formed.
In bacteria, upon and following promoter clearance, the σ factor is released according to a stochastic model.
In eukaryotes, at an RNA polymerase II-dependent promoter, upon promoter clearance, TFIIH phosphorylates serine 5 on the carboxy terminal domain of RNA polymerase II, leading to the recruitment of capping enzyme (CE). The exact mechanism of how CE induces promoter clearance in eukaryotes is not yet known.
One strand of the DNA, the template strand (or noncoding strand), is used as a template for RNA synthesis. As transcription proceeds, RNA polymerase traverses the template strand and uses base pairing complementarity with the DNA template to create an RNA copy. Although RNA polymerase traverses the template strand from 3' → 5', the coding (non-template) strand and newly formed RNA can also be used as reference points, so transcription can be described as occurring 5' → 3'. This produces an RNA molecule from 5' → 3', an exact copy of the coding strand (except that thymines are replaced with uracils, and the nucleotides are composed of a ribose (5-carbon) sugar where DNA has deoxyribose (one fewer oxygen atom) in its sugar-phosphate backbone).
mRNA transcription can involve multiple RNA polymerases on a single DNA template and multiple rounds of transcription (amplification of a particular mRNA), so many mRNA molecules can be rapidly produced from a single copy of a gene. The characteristic elongation rates in prokaryotes and eukaryotes are about 10–100 nucleotides per second.
Elongation also involves a proofreading mechanism that can replace incorrectly incorporated bases. In eukaryotes, this may correspond with short pauses during transcription that allow appropriate RNA editing factors to bind. These pauses may be intrinsic to the RNA polymerase or due to chromatin structure.
Bacteria use two different strategies for transcription termination – Rho-independent termination and Rho-dependent termination. In Rho-independent transcription termination, RNA transcription stops when the newly synthesized RNA molecule forms a G-C-rich hairpin loop followed by a run of Us. When the hairpin forms, the mechanical stress breaks the weak rU–dA bonds that, at that point, fill the DNA–RNA hybrid. This pulls the poly-U transcript out of the active site of the RNA polymerase, terminating transcription. In the "Rho-dependent" type of termination, a protein factor called "Rho" destabilizes the interaction between the template and the mRNA, thus releasing the newly synthesized mRNA from the elongation complex.
Transcription termination in eukaryotes is less well understood than in bacteria, but involves cleavage of the new transcript followed by template-independent addition of adenines at its new 3' end, in a process called polyadenylation.
Transcription inhibitors can be used as antibiotics against, for example, pathogenic bacteria (antibacterials) and fungi (antifungals). An example of such an antibacterial is rifampicin, which inhibits bacterial transcription of DNA into mRNA by inhibiting DNA-dependent RNA polymerase through binding to its beta subunit. 8-Hydroxyquinoline is an antifungal transcription inhibitor. The effects of histone methylation may also work to inhibit the action of transcription.
In vertebrates, the majority of gene promoters contain a CpG island with numerous CpG sites. When many of a gene's promoter CpG sites are methylated the gene becomes inhibited (silenced). Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. However, transcriptional inhibition (silencing) may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally inhibited by CpG island methylation (see regulation of transcription in cancer). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs. In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-expressed microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers).
Active transcription units are clustered in the nucleus, in discrete sites called transcription factories or euchromatin. Such sites can be visualized by allowing engaged polymerases to extend their transcripts in tagged precursors (Br-UTP or Br-U) and immuno-labeling the tagged nascent RNA. Transcription factories can also be localized using fluorescence in situ hybridization or marked by antibodies directed against polymerases. There are ~10,000 factories in the nucleoplasm of a HeLa cell, among which are ~8,000 polymerase II factories and ~2,000 polymerase III factories. Each polymerase II factory contains ~8 polymerases. As most active transcription units are associated with only one polymerase, each factory usually contains ~8 different transcription units. These units might be associated through promoters and/or enhancers, with loops forming a "cloud" around the factory.
A molecule that allows the genetic material to be realized as a protein was first hypothesized by François Jacob and Jacques Monod. Severo Ochoa won a Nobel Prize in Physiology or Medicine in 1959 for developing a process for synthesizing RNA in vitro with polynucleotide phosphorylase, which was useful for cracking the genetic code. RNA synthesis by RNA polymerase was established in vitro by several laboratories by 1965; however, the RNA synthesized by these enzymes had properties that suggested the existence of an additional factor needed to terminate transcription correctly.
In 1972, Walter Fiers became the first person to actually prove the existence of the terminating enzyme.
Measuring and detecting
Transcription can be measured and detected in a variety of ways:
- G-Less Cassette transcription assay: measures promoter strength
- Run-off transcription assay: identifies transcription start sites (TSS)
- Nuclear run-on assay: measures the relative abundance of newly formed transcripts
- RNase protection assay and ChIP-Chip of RNAP: detect active transcription sites
- RT-PCR: measures the absolute abundance of total or nuclear RNA levels, which may however differ from transcription rates
- DNA microarrays: measures the relative abundance of the global total or nuclear RNA levels; however, these may differ from transcription rates
- In situ hybridization: detects the presence of a transcript
- MS2 tagging: by incorporating RNA stem loops, such as MS2, into a gene, these become incorporated into newly synthesized RNA. The stem loops can then be detected using a fusion of GFP and the MS2 coat protein, which has a high affinity, sequence-specific interaction with the MS2 stem loops. The recruitment of GFP to the site of transcription is visualized as a single fluorescent spot. This new approach has revealed that transcription occurs in discontinuous bursts, or pulses (see Transcriptional bursting). With the notable exception of in situ techniques, most other methods provide cell population averages, and are not capable of detecting this fundamental property of genes.
- Northern blot: the traditional method, and until the advent of RNA-Seq, the most quantitative
- RNA-Seq: applies next-generation sequencing techniques to sequence whole transcriptomes, which allows the measurement of relative abundance of RNA, as well as the detection of additional variations such as fusion genes, post-transcriptional edits and novel splice sites
Some viruses (such as HIV, the cause of AIDS), have the ability to transcribe RNA into DNA. HIV has an RNA genome that is reverse transcribed into DNA. The resulting DNA can be merged with the DNA genome of the host cell. The main enzyme responsible for synthesis of DNA from an RNA template is called reverse transcriptase.
In the case of HIV, reverse transcriptase is responsible for synthesizing a complementary DNA strand (cDNA) to the viral RNA genome. The enzyme ribonuclease H then digests the RNA strand, and reverse transcriptase synthesises a complementary strand of DNA to form a double helix DNA structure ("cDNA"). The cDNA is integrated into the host cell's genome by the enzyme integrase, which causes the host cell to generate viral proteins that reassemble into new viral particles. In HIV, subsequent to this, the host cell undergoes programmed cell death, or apoptosis of T cells. However, in other retroviruses, the host cell remains intact as the virus buds out of the cell.
Some eukaryotic cells contain an enzyme with reverse transcription activity called telomerase. Telomerase is a reverse transcriptase that lengthens the ends of linear chromosomes. Telomerase carries an RNA template from which it synthesizes a repeating sequence of DNA, or "junk" DNA. This repeated sequence of DNA is called a telomere and can be thought of as a "cap" for a chromosome. It is important because every time a linear chromosome is duplicated, it is shortened. With this "junk" DNA or "cap" at the ends of chromosomes, the shortening eliminates some of the non-essential, repeated sequence rather than the protein-encoding DNA sequence, which lies farther from the chromosome end.
Telomerase is often activated in cancer cells to enable cancer cells to duplicate their genomes indefinitely without losing important protein-coding DNA sequence. Activation of telomerase could be part of the process that allows cancer cells to become immortal. The immortalizing factor of cancer via telomere lengthening due to telomerase has been proven to occur in 90% of all carcinogenic tumors in vivo with the remaining 10% using an alternative telomere maintenance route called ALT or Alternative Lengthening of Telomeres.
- Crick's central dogma, in which the product of transcription, mRNA, is translated to form polypeptides, and where it is asserted that the reverse processes never occur
- Gene regulation
- Splicing - process of removing introns from precursor messenger RNA (pre-mRNA) to make messenger RNA (mRNA)
- Eldra P. Solomon, Linda R. Berg, Diana W. Martin. Biology, 8th Edition, International Student Edition. Thomson Brooks/Cole. ISBN 978-0495317142
- Koonin EV, Gorbalenya AE, Chumakov KM (July 1989). "Tentative identification of RNA-dependent RNA polymerases of dsRNA viruses and their relationship to positive strand RNA viral polymerases". FEBS Letters. 252 (1-2): 42–6. PMID 2759231. doi:10.1016/0014-5793(89)80886-5.
- Berg J, Tymoczko JL, Stryer L (2006). Biochemistry (6th ed.). San Francisco: W. H. Freeman. ISBN 0-7167-8724-5.
- Watson JD, Baker TA, Bell SP, Gann AA, Levine M, Losick RM (2013). Molecular Biology of the Gene (7th ed.). Pearson.
- Goldman SR, Ebright RH, Nickels BE (May 2009). "Direct detection of abortive RNA transcripts in vivo". Science. 324 (5929): 927–8. PMC . PMID 19443781. doi:10.1126/science.1169237.
- Revyakin A, Liu C, Ebright RH, Strick TR (November 2006). "Abortive initiation and productive initiation by RNA polymerase involve DNA scrunching". Science. 314 (5802): 1139–43. PMC . PMID 17110577. doi:10.1126/science.1131398.
- Raffaelle M, Kanin EI, Vogt J, Burgess RR, Ansari AZ (November 2005). "Holoenzyme switching and stochastic release of sigma factors from RNA polymerase in vivo". Molecular Cell. 20 (3): 357–66. PMID 16285918. doi:10.1016/j.molcel.2005.10.011.
- Mandal SS, Chu C, Wada T, Handa H, Shatkin AJ, Reinberg D (May 2004). "Functional interactions of RNA-capping enzyme with factors that positively and negatively regulate promoter escape by RNA polymerase II". Proceedings of the National Academy of Sciences of the United States of America. 101 (20): 7572–7. PMC . PMID 15136722. doi:10.1073/pnas.0401493101.
- Goodrich JA, Tjian R (April 1994). "Transcription factors IIE and IIH and ATP hydrolysis direct promoter clearance by RNA polymerase II". Cell. 77 (1): 145–56. PMID 8156590. doi:10.1016/0092-8674(94)90242-9.
- Milo, Ron; Philips, Rob. "Cell Biology by the Numbers: What is faster, transcription or translation?". book.bionumbers.org. Retrieved 8 March 2017.
- Richardson JP (September 2002). "Rho-dependent termination and ATPases in transcript termination". Biochimica et Biophysica Acta. 1577 (2): 251–260. PMID 12213656. doi:10.1016/S0167-4781(02)00456-6.
- Lykke-Andersen S, Jensen TH (October 2007). "Overlapping pathways dictate termination of RNA polymerase II transcription". Biochimie. 89 (10): 1177–82. PMID 17629387. doi:10.1016/j.biochi.2007.05.007.
- 8-Hydroxyquinoline info from SIGMA-ALDRICH. Retrieved Feb 2012
- Saxonov S, Berg P, Brutlag DL (January 2006). "A genome-wide analysis of CpG dinucleotides in the human genome distinguishes two distinct classes of promoters". Proceedings of the National Academy of Sciences of the United States of America. 103 (5): 1412–7. PMC . PMID 16432200. doi:10.1073/pnas.0510310103.
- Bird A (January 2002). "DNA methylation patterns and epigenetic memory". Genes & Development. 16 (1): 6–21. PMID 11782440. doi:10.1101/gad.947102.
- Vogelstein B, Papadopoulos N, Velculescu VE, Zhou S, Diaz LA, Kinzler KW (March 2013). "Cancer genome landscapes". Science. 339 (6127): 1546–58. PMC . PMID 23539594. doi:10.1126/science.1235122.
- Tessitore A, Cicciarelli G, Del Vecchio F, Gaggiano A, Verzella D, Fischietti M, Vecchiotti D, Capece D, Zazzeroni F, Alesse E (2014). "MicroRNAs in the DNA Damage/Repair Network and Cancer". International Journal of Genomics. 2014: 820248. PMC . PMID 24616890. doi:10.1155/2014/820248.
- Papantonis, A (2012-10-26). "TNFα signals through specialized factories where responsive coding and miRNA genes are transcribed". EMBO J. CiteSeerX . PMC . PMID 23103767. doi:10.1038/emboj.2012.288.
- "Chemistry 2006". Nobel Foundation. Retrieved March 29, 2007.
- Raj A, van Oudenaarden A (October 2008). "Nature, nurture, or chance: stochastic gene expression and its consequences". Cell. 135 (2): 216–26. PMC . PMID 18957198. doi:10.1016/j.cell.2008.09.050.
- Kolesnikova IN (2000). "Some patterns of apoptosis mechanism during HIV-infection". Dissertation (in Russian). Retrieved February 20, 2011.
- Cesare AJ, Reddel RR (2010). "Alternative lengthening of telomeres: models, mechanisms and implications". Nature Reviews. Genetics. 11 (5): 319–30. PMID 20351727. doi:10.1038/nrg2763.
- Interactive Java simulation of transcription initiation. From Center for Models of Life at the Niels Bohr Institute.
- Interactive Java simulation of transcription interference--a game of promoter dominance in bacterial virus. From Center for Models of Life at the Niels Bohr Institute.
- Biology animations about this topic under Chapter 15 and Chapter 18
- Virtual Cell Animation Collection, Introducing Transcription
- Easy to use DNA transcription site |
A thought experiment (German: Gedankenexperiment, Gedanken-Experiment, or Gedankenerfahrung) considers a hypothesis, theory, or principle for the purpose of thinking through its consequences.
Johann Witt-Hansen established that Hans Christian Ørsted was the first to use the German term Gedankenexperiment (lit. thought experiment) circa 1812. Ørsted was also the first to use the equivalent term Gedankenversuch in 1820.
Much later, Ernst Mach used the term Gedankenexperiment in a different way, to denote exclusively the imaginary conduct of a real experiment that would be subsequently performed as a real physical experiment by his students. Physical and mental experimentation could then be contrasted: Mach asked his students to provide him with explanations whenever the results from their subsequent, real, physical experiment differed from those of their prior, imaginary experiment.
The English term thought experiment was coined (as a calque) from Mach’s Gedankenexperiment, and it first appeared in the 1897 English translation of one of Mach’s papers. Prior to its emergence, the activity of posing hypothetical questions that employed subjunctive reasoning had existed for a very long time (for both scientists and philosophers). However, people had no way of categorizing it or speaking about it. This helps to explain the extremely wide and diverse range of the application of the term “thought experiment” once it had been introduced into English.
The common goal of a thought experiment is to explore the potential consequences of the principle in question:
“A thought experiment is a device with which one performs an intentional, structured process of intellectual deliberation in order to speculate, within a specifiable problem domain, about potential consequents (or antecedents) for a designated antecedent (or consequent)” (Yeates, 2004, p. 150).
Given the structure of the experiment, it may not be possible to perform it, and even if it could be performed, there need not be an intention to perform it.
Examples of thought experiments include Schrödinger’s cat, illustrating quantum indeterminacy through the manipulation of a perfectly sealed environment and a tiny bit of radioactive substance, and Maxwell’s demon, which attempts to demonstrate the ability of a hypothetical finite being to violate the 2nd law of thermodynamics.
The ancient Greek deiknymi (δείκνυμι), or thought experiment, “was the most ancient pattern of mathematical proof”, and existed before Euclidean mathematics, where the emphasis was on the conceptual, rather than on the experimental part of a thought-experiment.
Perhaps the key experiment in the history of modern science is Galileo’s demonstration that falling objects must fall at the same rate regardless of their masses. This is widely thought to have been a straightforward physical demonstration, involving climbing up the Leaning Tower of Pisa and dropping two heavy weights off it, whereas in fact, it was a logical demonstration, using the ‘thought experiment’ technique. The ‘experiment’ is described by Galileo in Discorsi e dimostrazioni matematiche (1638) (literally, ‘Discourses and Mathematical Demonstrations’) thus:
Salviati. If then we take two bodies whose natural speeds are different, it is clear that on uniting the two, the more rapid one will be partly retarded by the slower, and the slower will be somewhat hastened by the swifter. Do you not agree with me in this opinion?
Simplicio. You are unquestionably right.
Salviati. But if this is true, and if a large stone moves with a speed of, say, eight while a smaller moves with a speed of four, then when they are united, the system will move with a speed less than eight; but the two stones when tied together make a stone larger than that which before moved with a speed of eight. Hence the heavier body moves with less speed than the lighter; an effect which is contrary to your supposition. Thus you see how, from your assumption that the heavier body moves more rapidly than the lighter one, I infer that the heavier body moves more slowly.
Although the extract does not convey the elegance and power of the ‘demonstration’ terribly well, it is clear that it is a ‘thought’ experiment, rather than a practical one. Strange then, as Cohen says, that philosophers and scientists alike refuse to acknowledge either Galileo in particular, or the thought experiment technique in general for its pivotal role in both science and philosophy. (The exception proves the rule — the iconoclastic philosopher of science, Paul Feyerabend, has also observed this methodological prejudice.) Instead, many philosophers prefer to consider ‘Thought Experiments’ to be merely the use of a hypothetical scenario to help understand the way things are.
Thought experiments, which are well-structured, well-defined hypothetical questions that employ subjunctive reasoning (irrealis moods) – “What might happen (or, what might have happened) if . . . ” – have been used to pose questions in philosophy at least since Greek antiquity, some pre-dating Socrates. In physics and other sciences many thought experiments date from the 19th and especially the 20th Century, but examples can be found at least as early as Galileo.
In thought experiments we gain new information by rearranging or reorganizing already known empirical data in a new way and drawing new (a priori) inferences from them or by looking at these data from a different and unusual perspective. In Galileo’s thought experiment, for example, the rearrangement of empirical experience consists in the original idea of combining bodies of different weight.
Thought experiments have been used in philosophy (especially ethics), physics, and other fields (such as cognitive psychology, history, political science, economics, social psychology, law, organizational studies, marketing, and epidemiology). In law, the synonym “hypothetical” is frequently used for such experiments.
Regardless of their intended goal, all thought experiments display a patterned way of thinking that is designed to allow us to explain, predict and control events in a better and more productive way.
In terms of their theoretical consequences, thought experiments generally:
- challenge (or even refute) a prevailing theory, often involving the device known as reductio ad absurdum, (as in Galileo’s original argument, a proof by contradiction),
- confirm a prevailing theory,
- establish a new theory, or
- simultaneously refute a prevailing theory and establish a new theory through a process of mutual exclusion
Thought experiments can produce some very important and different outlooks on previously unknown or unaccepted theories. However, they may make those theories themselves irrelevant, and could possibly create new problems that are just as difficult, or possibly more difficult to resolve.
In terms of their practical application, thought experiments are generally created to:
- challenge the prevailing status quo (which includes activities such as correcting misinformation or misapprehension, identifying flaws in the argument(s) presented, preserving for the long term objectively established fact, and refuting specific assertions that some particular thing is permissible, forbidden, known, believed, possible, or necessary);
- extrapolate beyond (or interpolate within) the boundaries of already established fact;
- predict and forecast the (otherwise) indefinite and unknowable future;
- explain the past;
- the retrodiction, postdiction and hindcasting of the (otherwise) indefinite and unknowable past;
- facilitate decision making, choice and strategy selection;
- solve problems, and generate ideas;
- move current (often insoluble) problems into another, more helpful and more productive problem space (e.g.: functional fixedness);
- attribute causation, preventability, blame and responsibility for specific outcomes;
- assess culpability and compensatory damages in social and legal contexts;
- examine the extent to which past events might have occurred differently;
- ensure the repetition of past success; or
- ensure the (future) avoidance of past failures.
Generally speaking, there are seven types of thought experiments in which one reasons from causes to effects, or effects to causes:
Prefactual (before the fact) thought experiments — the term prefactual was coined by Lawrence J. Sanna in 1998 — speculate on possible future outcomes, given the present, and ask “What will be the outcome if event E occurs?”
Counterfactual (contrary to established fact) thought experiments — the term counterfactual was coined by Nelson Goodman in 1947, extending Roderick Chisholm’s (1946) notion of a “contrary-to-fact conditional” — speculate on the possible outcomes of a different past; and ask “What might have happened if A had happened instead of B?” (e.g., “If Isaac Newton and Gottfried Leibniz had cooperated with each other, what would mathematics look like today?”).
The study of counterfactual speculation has increasingly engaged the interest of scholars in a wide range of domains such as philosophy, psychology, cognitive psychology, history, political science, economics, social psychology, law, organizational theory, marketing, and epidemiology.
Semifactual thought experiments — the term semifactual was coined by Nelson Goodman in 1947 — speculate on the extent to which things might have remained the same, despite there being a different past; and ask the question “Even though X happened instead of E, would Y have still occurred?” (e.g., “Even if the goalie had moved left, rather than right, could he have intercepted a ball that was traveling at such a speed?”).
Semifactual speculations are an important part of clinical medicine.
The activity of prediction attempts to project the circumstances of the present into the future. According to David Sarewitz and Roger Pielke (1999, p.123), scientific prediction takes two forms:
(1) “The elucidation of invariant — and therefore predictive — principles of nature”; and
(2) “[Using] suites of observational data and sophisticated numerical models in an effort to foretell the behavior or evolution of complex phenomena”.
Although they perform different social and scientific functions, the only difference between the qualitatively identical activities of predicting, forecasting, and nowcasting is the distance of the speculated future from the present moment occupied by the user. Whilst the activity of nowcasting, defined as “a detailed description of the current weather along with forecasts obtained by extrapolation up to 2 hours ahead”, is essentially concerned with describing the current state of affairs, it is common practice to extend the term “to cover very-short-range forecasting up to 12 hours ahead” (Browning, 1982, p.ix).
The activity of hindcasting involves running a forecast model after an event has happened in order to test whether the model’s simulation is valid.
In 2003, Dake Chen and his colleagues “trained” a computer using the data of the surface temperature of the oceans from the last 20 years. Then, using data that had been collected on the surface temperature of the oceans for the period 1857 to 2003, they went through a hindcasting exercise and discovered that their simulation not only accurately predicted every El Niño event for the last 148 years, it also identified the (up to 2 years) looming foreshadow of every single one of those El Niño events.
The activity of retrodiction (or postdiction) involves moving backwards in time, step-by-step, in as many stages as are considered necessary, from the present into the speculated past to establish the ultimate cause of a specific event (e.g., reverse engineering and forensics).
Given that retrodiction is a process in which “past observations, events and data are used as evidence to infer the process(es) that produced them” and that diagnosis “involve[s] going from visible effects such as symptoms, signs and the like to their prior causes”, the essential balance between prediction and retrodiction could be characterized as:
retrodiction : diagnosis :: prediction : prognosis
regardless of whether the prognosis is of the course of the disease in the absence of treatment, or of the application of a specific treatment regimen to a specific disorder in a particular patient.
The activity of backcasting — the term backcasting was coined by John Robinson in 1982 — involves establishing the description of a very definite and very specific future situation. It then involves an imaginary moving backwards in time, step-by-step, in as many stages as are considered necessary, from the future to the present to reveal the mechanism through which that particular specified future could be attained from the present.
Backcasting is not concerned with predicting the future:
The major distinguishing characteristic of backcasting analyses is the concern, not with likely energy futures, but with how desirable futures can be attained. It is thus explicitly normative, involving ‘working backwards’ from a particular future end-point to the present to determine what policy measures would be required to reach that future.
According to Jansen (1994, p. 503):
Within the framework of technological development, “forecasting” concerns the extrapolation of developments towards the future and the exploration of achievements that can be realized through technology in the long term. Conversely, the reasoning behind “backcasting” is: on the basis of an interconnecting picture of demands technology must meet in the future — “sustainability criteria” — to direct and determine the process that technology development must take and possibly also the pace at which this development process must take effect.
Backcasting [is] both an important aid in determining the direction technology development must take and in specifying the targets to be set for this purpose. As such, backcasting is an ideal search toward determining the nature and scope of the technological challenge posed by sustainable development, and it can thus serve to direct the search process toward new — sustainable — technology.
In philosophy, a thought experiment typically presents an imagined scenario with the intention of eliciting an intuitive or reasoned response about the way things are in the thought experiment. (Philosophers might also supplement their thought experiments with theoretical reasoning designed to support the desired intuitive response.) The scenario will typically be designed to target a particular philosophical notion, such as morality, or the nature of the mind or linguistic reference. The response to the imagined scenario is supposed to tell us about the nature of that notion in any scenario, real or imagined.
For example, a thought experiment might present a situation in which an agent intentionally kills an innocent for the benefit of others. Here, the relevant question is not whether the action is moral or not, but more broadly whether a moral theory is correct that says morality is determined solely by an action’s consequences (See Consequentialism). John Searle imagines a man in a locked room who receives written sentences in Chinese, and returns written sentences in Chinese, according to a sophisticated instruction manual. Here, the relevant question is not whether or not the man understands Chinese, but more broadly, whether a functionalist theory of mind is correct.
It is generally hoped that there is universal agreement about the intuitions that a thought experiment elicits. (Hence, in assessing their own thought experiments, philosophers may appeal to “what we should say,” or some such locution.) A successful thought experiment will be one in which intuitions about it are widely shared. But often, philosophers differ in their intuitions about the scenario.
Other philosophical uses of imagined scenarios arguably are thought experiments also. In one use of scenarios, philosophers might imagine persons in a particular situation (maybe ourselves), and ask what they would do.
For example, in the veil of ignorance, John Rawls asks us to imagine a group of persons in a situation where they know nothing about themselves, and are charged with devising a social or political organization. The use of the state of nature to imagine the origins of government, as by Thomas Hobbes and John Locke, may also be considered a thought experiment. Søren Kierkegaard explored the possible ethical and religious implications of Abraham’s binding of Isaac in Fear and Trembling. Similarly, Friedrich Nietzsche, in On the Genealogy of Morals, speculated about the historical development of Judeo-Christian morality, with the intent of questioning its legitimacy.
An early written thought experiment was Plato’s allegory of the cave. Another historic thought experiment was Avicenna’s “Floating Man” thought experiment in the 11th century. He asked his readers to imagine themselves suspended in the air isolated from all sensations in order to demonstrate human self-awareness and self-consciousness, and the substantiality of the soul.
In many thought experiments, the scenario would be nomologically possible, or possible according to the laws of nature. John Searle’s Chinese room is nomologically possible.
Some thought experiments present scenarios that are not nomologically possible. In his Twin Earth thought experiment, Hilary Putnam asks us to imagine a scenario in which there is a substance with all of the observable properties of water (e.g., taste, color, boiling point), but is chemically different from water. It has been argued that this thought experiment is not nomologically possible, although it may be possible in some other sense, such as metaphysical possibility. It is debatable whether the nomological impossibility of a thought experiment renders intuitions about it moot.
In some cases, the hypothetical scenario might be considered metaphysically impossible, or impossible in any sense at all. David Chalmers says that we can imagine that there are zombies, or persons who are physically identical to us in every way but who lack consciousness. This is supposed to show that physicalism is false. However, some argue that zombies are inconceivable: we can no more imagine a zombie than we can imagine that 1+1=3. Others have claimed that the conceivability of a scenario may not entail its possibility.
Interactive thought experiments in digital environments
The philosophical work of Stefano Gualeni focuses on the use of virtual worlds to materialize thought experiments and to playfully negotiate philosophical ideas. His arguments were originally presented in his 2015 book Virtual Worlds as Philosophical Tools.
Gualeni’s argument is that the history of philosophy has, until recently, merely been the history of written thought, and digital media can complement and enrich the limited and almost exclusively linguistic approach to philosophical thought. He considers virtual worlds to be philosophically viable and advantageous in contexts like those of thought experiments, when the recipients of a certain philosophical notion or perspective are expected to objectively test and evaluate different possible courses of action, or in cases where they are confronted with interrogatives concerning non-actual or non-human phenomenologies.
Among the most visible thought experiments designed by Stefano Gualeni:
- Something Something Soup Something (2017)
- Necessary Evil (2013)
Other examples of playful, interactive thought experiments:
- The Evolution of Trust (Nicky Case, 2017)
- We Become What We Behold (Nicky Case, 2016)
- To Build a Better Mouse Trap (La Molleindustria, 2014)
- Parable of the Polygons (Vi Hart & Nicky Case, 2014)
Scientists tend to use thought experiments as imaginary, “proxy” experiments prior to a real, “physical” experiment (Ernst Mach always argued that these gedankenexperiments were “a necessary precondition for physical experiment”). In these cases, the result of the “proxy” experiment will often be so clear that there will be no need to conduct a physical experiment at all.
Scientists also use thought experiments when particular physical experiments are impossible to conduct (Carl Gustav Hempel labeled these sorts of experiment “theoretical experiments-in-imagination”), such as Einstein’s thought experiment of chasing a light beam, which led to special relativity. This is a distinctive use of a scientific thought experiment, in that it was never carried out physically, yet it led to a successful theory that was later confirmed by other empirical means.
The first characteristic pattern that thought experiments display is their orientation in time. They are either:
- Antefactual speculations: experiments that speculate about what might have happened prior to a specific, designated event, or
- Postfactual speculations: experiments that speculate about what may happen subsequent to (or consequent upon) a specific, designated event.
The second characteristic pattern is their movement in time in relation to “the present moment standpoint” of the individual performing the experiment; namely, in terms of:
- Their temporal direction: are they past-oriented or future-oriented?
- Their temporal sense:
(a) in the case of past-oriented thought experiments, are they examining the consequences of temporal “movement” from the present to the past, or from the past to the present? or,
(b) in the case of future-oriented thought experiments, are they examining the consequences of temporal “movement” from the present to the future, or from the future to the present?
Thought experiments have been used in a variety of fields, including philosophy, law, physics, and mathematics. In philosophy they have been used at least since classical antiquity, some pre-dating Socrates. In law, they were well-known to Roman lawyers quoted in the Digest. In physics and other sciences, notable thought experiments date from the 19th and especially the 20th century, but examples can be found at least as early as Galileo.
Relation to real experiments
The relation to real experiments can be quite complex, as can be seen again from an example going back to Albert Einstein. In 1935, with two coworkers, he published a paper on what was later called the EPR effect (EPR paradox). In this paper, starting from certain philosophical assumptions and a rigorous analysis of a complicated but (it was asserted) realizable model, he came to the conclusion that quantum mechanics should be described as “incomplete”. Niels Bohr immediately asserted a refutation of Einstein’s analysis, and his view prevailed. Decades later, it became possible to carry out experiments that could test the assertions of the EPR paper. These experiments tested the Bell inequalities, published in 1964 in a purely theoretical paper. The above-mentioned EPR philosophical starting assumptions were considered to be falsified by empirical fact (e.g. by the optical real experiments of Alain Aspect).
Thus thought experiments belong to a theoretical discipline, usually to theoretical physics, but often to theoretical philosophy. In any case, it must be distinguished from a real experiment, which belongs naturally to the experimental discipline and has “the final decision on true or not true“, at least in physics.
- ^Perkowitz, Sidney (February 12, 2010). “Gedankenexperiment”. Encyclopædia Britannica Online. Retrieved March 27, 2017.
- ^See the German edition of The Logic of Scientific Discovery(Logik der Forschung, 1935) by Karl Popper.
- ^See occurrences on Google Books.
- ^Brown, James Robert (August 12, 2014). “Thought Experiments”. Stanford Encyclopedia of Philosophy. Retrieved March 27, 2017.
- ^“[C]onjectures or hypotheses … are really to be regarded as thought “experiments” through which we wish to discover whether something can be explained by a specific assumption in connection with other natural laws.” —Hans Christian Ørsted(“First Introduction to General Physics” ¶16-¶18, part of a series of public lectures at the University of Copenhagen. Copenhagen 1811, in Danish, printed by Johan Frederik Schulz. In Kirstine Meyer’s 1920 edition of Ørsted’s works, vol.III 151-190. ) “First Introduction to Physics: the Spirit, Meaning, and Goal of Natural Science”. Reprinted in German in 1822, Schweigger’s Journal für Chemie und Physik 36, pp. 458–488, as translated in Ørsted 1997, pp. 296–298
- ^Witt-Hansen (1976). Although Experiment is a German word, it is derived from Latin. The synonym Versuch has purely Germanic roots.
- ^Mach, Ernst (1883), The Science of Mechanics (6th edition, translated by Thomas J. McCormack), LaSalle, Illinois: Open Court, 1960. pp. 32-41, 159-62.
- ^Mach, Ernst (1897), “On Thought Experiments”, in Knowledge and Error (translated by Thomas J. McCormack and Paul Foulkes), Dordrecht Holland: Reidel, 1976, pp. 134-47.
- ^Szábo, Árpád. (1958) ” ‘Deiknymi’ als Mathematischer Terminus fur ‘Beweisen’ “, MaiaS. 10 pp. 1–26 as cited by Imre Lakatos(1976) in Proofs and Refutations p.9. (John Worrall and Elie Zahar, eds.) Cambridge University Press ISBN 0-521-21078-X. The English translation of the title of Szábo’s article is “‘Deiknymi’ as a mathematical expression for ‘to prove'”, as translated by András Máté Archived 2012-04-25 at the Wayback Machine, p.285
- ^Cohen, Martin, “Wittgenstein’s Beetle and Other Classic Thought Experiments”, Blackwell, (Oxford), 2005, pp. 55–56.
- ^“Galileo on Aristotle and Acceleration”. Retrieved 2008-05-24.
- ^See, for example, Paul Feyerabend, ‘Against Method’, Verso (1993)
- ^Rescher, N. (1991), “Thought Experiment in Pre-Socratic Philosophy”, in Horowitz, T.; Massey, G.J. (eds.), Thought Experiments in Science and Philosophy, Rowman & Littlefield, (Savage), pp. 31–41.
- ^Brendal, Elke, “Intuition Pumps and the Proper Use of Thought Experiments”. Dialectica. V.58, Issue 1, p 89–108, March 2004
- ^Taken from Yeates, 2004, p.143.
- ^See Yeates, 2004, pp.138-159.
- ^Sanna, L.J., “Defensive Pessimism and Optimism: The Bitter-Sweet Influence of Mood on Performance and Prefactual and Counterfactual Thinking”, Cognition and Emotion, Vol.12, No.5, (September 1998), pp.635-665. (Sanna used the term prefactual to distinguish these sorts of thought experiment from both semifactuals and counterfactuals.)
- ^Taken from Yeates, 2004, p.144.
- ^Goodman, N., “The Problem of Counterfactual Conditionals”, The Journal of Philosophy, Vol.44, No.5, (27 February 1947), pp.113-128.
- ^Chisholm, R.M., “The Contrary-to-Fact Conditional”, Mind, Vol.55, No.220, (October 1946), pp.289-307.
- ^Roger Penrose (Shadows of the Mind: A Search for the Missing Science of Consciousness, Oxford University Press, (Oxford),1994, p.240) considers counterfactuals to be “things that might have happened, although they did not in fact happen”.
- ^In 1748, when defining causation, David Hume referred to a counterfactual case: “…we may define a cause to be an object, followed by another, and where all objects, similar to the first, are followed by objects similar to the second. Or in other words, where, if the first object had not been, the second never had existed …” (Hume, D. (Beauchamp, T.L., ed.), An Enquiry Concerning Human Understanding, Oxford University Press, (Oxford), 1999, (7), p.146.)
- ^Goodman, N., “The Problem of Counterfactual Conditionals”, The Journal of Philosophy, Vol.44, No.5, (27 February 1947), pp.113-128; Brown, R, & Watling, J., “Counterfactual Conditionals”, Mind, Vol.61, No.242, (April 1952), pp.222-233; Parry, W.T., “Reëxamination of the Problem of Counterfactual Conditionals”, The Journal of Philosophy, Vol.54, No.4, (14 February 1957), pp.85-94; Cooley, J.C., “Professor Goodman’s Fact, Fiction, & Forecast“, The Journal of Philosophy, Vol.54, No.10, (9 May 1957), pp.293-311; Goodman, N., “Parry on Counterfactuals”, The Journal of Philosophy, Vol.54, No.14, (4 July 1957), pp.442-445; Goodman, N., “Reply to an Adverse Ally”, The Journal of Philosophy, Vol.54, No.17, (15 August 1957), pp.531-535; Lewis, D., Counterfactuals, Basil Blackwell, (Oxford), 1973, etc.
- ^Fillenbaum, S., “Information Amplified: Memory for Counterfactual Conditionals”, Journal of Experimental Psychology, Vol.102, No.1, (January 1974), pp.44-49; Crawford, M.T. & McCrea, S.M., “When Mutations meet Motivations: Attitude Biases in Counterfactual Thought”, Journal of Experimental Social Psychology, Vol.40, No.1, (January 2004), pp.65-74, etc.
- ^Kahneman, D. & Tversky, A., “The Simulation Heuristic”, pp.201-208 in Kahneman, D., Slovic, P. & Tversky, A. (eds), Judgement Under Uncertainty: Heuristics and Biases, Cambridge University Press, (Cambridge), 1982; Sherman, S.J. & McConnell, A.R., “Dysfunctional Implications of Counterfactual Thinking: When Alternatives to reality Fail Us”, pp.199-231 in Roese, N.J. & Olson, J.M. (eds.), What Might Have Been: The Social Psychology of Counterfactual Thinking, Lawrence Erlbaum Associates, (Mahwah), 1995;Nasco, S.A. & Marsh, K.L., “Gaining Control Through Counterfactual Thinking”, Personality and Social Psychology Bulletin, Vol.25, No.5, (May 1999), pp.556-568; McCloy, R. & Byrne, R.M.J., “Counterfactual Thinking About Controllable Events”, Memory and Cognition, Vol.28, No.6, (September 2000), pp.1071-1078; Byrne, R.M.J., “Mental Models and Counterfactual Thoughts About What Might Have Been”, Trends in Cognitive Sciences, Vol.6, No.10, (October 2002), pp.426-431; Thompson, V.A. & Byrne, R.M.J., “Reasoning Counterfactually: Making Inferences About Things That Didn’t Happen”, Journal of Experimental Psychology: Learning, Memory, and Cognition, Vol.28, No.6, (November 2002), pp.1154-1170, etc.
- ^Greenberg, M. (ed.), The Way It Wasn’t: Great Science Fiction Stories of Alternate History, Citadel Twilight, (New York), 1996; Dozois, G. & Schmidt, W. (eds.), Roads Not Taken: Tales of Alternative History, The Ballantine Publishing Group, (New York), 1998; Sylvan, D. & Majeski, S., “A Methodology for the Study of Historical Counterfactuals”, International Studies Quarterly, Vol.42, No.1, (March 1998), pp.79-108; Ferguson, N., (ed.), Virtual History: Alternatives and Counterfactuals, Basic Books, (New York), 1999; Cowley, R. (ed.), What If?: The World’s Foremost Military Historians Imagine What Might have Been, Berkley Books, (New York), 2000; Cowley, R. (ed.), What If? 2: Eminent Historians Imagine What Might have Been, G.P. Putnam’s Sons, (New York), 2001, etc.
- ^Fearon, J.D., “Counterfactuals and Hypothesis Testing in Political Science”, World Politics, Vol.43, No.2, (January 1991), pp.169-195; Tetlock, P.E. & Belkin, A. (eds.), Counterfactual Thought Experiments in World Politics, Princeton University Press, (Princeton), 1996; Lebow, R.N., “What’s so Different about a Counterfactual?”, World Politics, Vol.52, No.4, (July 2000), pp.550-585; Chwieroth, J.M., “Counterfactuals and the Study of the American Presidency”, Presidential Studies Quarterly, Vol.32, No.2, (June 2002), pp.293-327, etc.
- ^Cowan, R. & Foray, R., “Evolutionary Economics and the Counterfactual Threat: On the Nature and Role of Counterfactual History as an Empirical Tool in Economics”, Journal of Evolutionary Economics, Vol.12, No.5, (December 2002), pp.539-562, etc.
- ^Roese, N.J. & Olson, J.M. (eds.), What Might Have Been: The Social Psychology of Counterfactual Thinking, Lawrence Erlbaum Associates, (Mahwah), 1995; Sanna, L.J., “Defensive Pessimism, Optimism, and Simulating Alternatives: Some Ups and Downs of Prefactual and Counterfactual Thinking”, Journal of Personality and Social Psychology, Vol.71, No.5, (November 1996), pp1020-1036; Roese, N.J., “Counterfactual Thinking”, Psychological Bulletin, Vol.121, No.1, (January 1997), pp.133-148; Sanna, L.J., “Defensive Pessimism and Optimism: The Bitter-Sweet Influence of Mood on Performance and Prefactual and Counterfactual Thinking”, Cognition and Emotion, Vol.12, No.5, (September 1998), pp.635-665; Sanna, L.J. & Turley-Ames, K.J., “Counterfactual Intensity”, European Journal of Social Psychology, Vol.30, No.2, (March/April 2000), pp.273-296; Sanna, L.J., Parks, C.D., Meier, S., Chang, E.C., Kassin, B.R., Lechter, J.L., Turley-Ames, K.J. & Miyake, T.M., “A Game of Inches: Spontaneous Use of Counterfactuals by Broadcasters During Major League Baseball Playoffs”, Journal of Applied Social Psychology, Vol.33, No.3, (March 2003), pp.455-475, etc.
- ^Strassfeld, R.N., “If…: Counterfactuals in the Law”, George Washington Law Review, Volume 60, No.2, (January 1992), pp.339-416; Spellman, B.A. & Kincannon, A., “The Relation between Counterfactual (“but for”) and Causal reasoning: Experimental Findings and Implications for Juror’s Decisions”, Law and Contemporary Problems, Vol.64, No.4, (Autumn 2001), pp.241-264; Prentice, R.A. & Koehler, J.J., “A Normality Bias in Legal Decision Making”, Cornell Law Review, Vol.88, No.3, (March 2003), pp.583-650, etc.
- ^Creyer, E.H. & Gürhan, Z., “Who’s to Blame? Counterfactual Reasoning and the Assignment of Blame”, Psychology and Marketing, Vol.14, No.3, (May 1997), pp.209-307; Zeelenberg, M., van Dijk, W.W., van der Plight, J., Manstead, A.S.R., van Empelen, P., & Reinderman, D., “Emotional Reactions to the Outcomes of Decisions: The Role of Counterfactual Thought in the Experience of Regret and Disappointment”, Organizational Behavior and Human Decision Processes, Vol.75, No.2, (August 1998), pp.117-141; Naquin, C.E. & Tynan, R.O., “The Team Halo Effect: Why Teams Are Not Blamed for Their Failures”, Journal of Applied Psychology, Vol.88, No.2, (April 2003), pp.332-340; Naquin, C.E., “The Agony of Opportunity in Negotiation: Number of Negotiable Issues, Counterfactual Thinking, and Feelings of Satisfaction”, Organizational Behavior and Human Decision Processes, Vol.91, No.1, (May 2003), pp.97-107, etc.
- ^Hetts, J.J., Boninger, D.S., Armor, D.A., Gleicher, F. & Nathanson, A., “The Influence of Anticipated Counterfactual Regret on Behavior”, Psychology & Marketing, Vol.17, No.4, (April 2000), pp.345-368; Landman, J. & Petty, R., ““It Could Have Been You”: How States Exploit Counterfactual Thought to Market Lotteries”, Psychology & Marketing, Vol.17, No.4, (April 2000), pp.299-321; McGill, A.L., “Counterfactual Reasoning in Causal Judgements: Implications for Marketing”, Psychology & Marketing, Vol.17, No.4, (April 2000), pp.323-343; Roese, N.J., “Counterfactual Thinking and Marketing: Introduction to the Special Issue”, Psychology & Marketing’, Vol.17, No.4, (April 2000), pp.277-280; Walchli, S.B. & Landman, J., “Effects of Counterfactual Thought on Postpurchase Consumer Affect”, Psychology & Marketing, Vol.20, No.1, (January 2003), pp.23-46, etc.
- ^Randerson, J., “Fast action would have saved millions”, New Scientist, Vol.176, No.2372, (7 December 2002), p.19; Haydon, D.T., Chase-Topping, M., Shaw, D.J., Matthews, L., Friar, J.K., Wilesmith, J. & Woolhouse, M.E.J., “The Construction and Analysis of Epidemic Trees With Reference to the 2001 UK Foot-and-Mouth Outbreak”, Proceedings of the Royal Society of London Series B: Biological Sciences, Vol.270, No.1511, (22 January 2003), pp.121-127, etc.
- ^Goodman’s original concept has been subsequently developed and expanded by (a) Daniel Cohen (Cohen, D., “Semifactuals, Even-Ifs, and Sufficiency”, International Logic Review, Vol.16, (1985), pp.102-111), (b) Stephen Barker (Barker, S., “Even, Still and Counterfactuals”, Linguistics and Philosophy, Vol.14, No.1, (February 1991), pp.1-38; Barker, S., “Counterfactuals, Probabilistic Counterfactuals and Causation”, Mind, Vol.108, No.431, (July 1999), pp.427-469), and (c) Rachel McCloy and Ruth Byrne (McCloy, R. & Byrne, R.M.J., “Semifactual ‘Even If’ Thinking”, Thinking and Reasoning, Vol.8, No.1, (February 2002), pp.41-67).
- ^Taken from Yeates, 2004, p.145.
- ^Sarewitz, D. & Pielke, R., “Prediction in Science and Policy”, Technology in Society, Vol.21, No.2, (April 1999), pp.121-133.
- ^Nowcasting (obviously modeled on forecasting) is also known as very-short-term forecasting; the term thus implies a contrast with mid-range and long-range forecasting.
- ^Browning, K.A. (ed.), Nowcasting, Academic Press, (London), 1982.
- ^Murphy and Brown — Murphy, A.H. & Brown, B.G., “Similarity and Analogical Reasoning: A Synthesis”, pp.3-15 in Browning, K.A. (ed.), Nowcasting, Academic Press, (London), 1982 — describe a large range of specific applications for meteorological nowcasting over a wide range of user demands:
(1) Agriculture: (a) wind and precipitation forecasts for effective seeding and spraying from aircraft; (b) precipitation forecasts to minimize damage to seedlings; (c) minimum temperature, dewpoint, cloud cover, and wind speed forecasts to protect crops from frost; (d) maximum temperature forecasts to reduce adverse effects of high temperatures on crops and livestock; (e) humidity and cloud cover forecasts to prevent fungal disease crop losses ; (f) hail forecasts to minimize damage to livestock and greenhouses; (g) precipitation, temperature, and dewpoint forecasts to avoid during- and after-harvest losses due to crops rotting in the field; (h) precipitation forecasts to minimize losses in drying raisins; and (i) humidity forecasts to reduce costs and losses resulting from poor conditions for drying tobacco.
(2) Construction: (a) precipitation and wind speed forecasts to avoid damage to finished work (e.g. concrete) and minimize costs of protecting exposed surfaces, structures, and work sites; and (b) precipitation, wind speed, and high/low temperature forecasts to schedule work in an efficient manner.
(3) Energy: (a) temperature, humidity, wind, cloud, etc. forecasts to optimize procedures related to generation and distribution of electricity and gas; (b) forecasts of thunderstorms, strong winds, low temperatures, and freezing precipitation to minimize damage to lines and equipment and to schedule repairs.
(4) Transportation: (a) ceiling height and visibility, winds and turbulence, and surface ice and snow forecasts to minimize risk and maximize efficiency in pre-flight and in-flight decisions and other adjustments to weather-related fluctuations in traffic; (b) forecasts of wind speed and direction, as well as severe weather and icing conditions along flight paths, to facilitate optimal airline route planning; (c) forecasts of snowfall, precipitation, and other storm-related events to allow truckers, motorists, and public transportation systems to avoid damage to weather-sensitive goods, select optimum routes, prevent accidents, minimize delays, and maximize revenues under conditions of adverse weather.
(5) Public Safety & General Public: (a) rain, snow, wind, and temperature forecasts to assist the general public in planning activities such as commuting, recreation, and shopping; (b) forecasts of temperature/humidity extremes (or significant changes) to alert hospitals, clinics, and the public to weather conditions that may seriously aggravate certain health-related illnesses; (c) forecasts related to potentially dangerous or damaging natural events (e.g., tornados, severe thunderstorms, severe winds, storm surges, avalanches, precipitation, floods) to minimize loss of life and property damage; and (d) forecasts of snowstorms, surface icing, visibility, and other events (e.g. floods) to enable highway maintenance and traffic control organizations to take appropriate actions to reduce risks of traffic accidents and protect roads from damage.
- ^Chen, D., Cane, M.A., Kaplan, A., Zebiak, S.E. & Huang, D., “Predictability of El Niño Over the Past 148 Years”, Nature, Vol.428, No.6984, (15 April 2004), pp.733-736; Anderson, D., “Testing Time for El Niño”, Nature, Vol.428, No.6984, (15 April 2004), pp.709, 711.
- ^Not only did their hindcasting demonstrate that the computerized simulation models could predict the onset of El Niño climatic events from changes in the temperature of the ocean’s surface temperature that occur up to two years earlier — meaning that there was now, potentially, at least 2 years’ lead time — but the results also implied that El Niño events seemed to be the effects of some causal regularity; and, therefore, were not due to simple chance.
- ^Taken from Yeates, 2004, p.146.
- ^p.24, Einhorn, H.J. & Hogarth, R.M., “Prediction, Diagnosis, and Causal Thinking in Forecasting”, Journal of Forecasting, (January–March 1982), Vol.1, No.1, pp.23-36.
- ^“…We consider diagnostic inference to be based on causal thinking, although in doing diagnosis one has to mentally reverse the time order in which events were thought to have occurred (hence the term “backward inference”). On the other hand, predictions involve forward inference; i.e., one goes forward in time from present causes to future effects. However, it is important to recognize the dependence of forward inference/prediction on backward inference/diagnosis. In particular, it seems likely that success in predicting the future depends to a considerable degree on making sense of the past. Therefore, people are continually engaged in shifting between forward and backward inference in both making and evaluating forecasts. Indeed, this can be eloquently summarized by Kierkegaard’s observation that, ‘Life can only be understood backwards; but it must be lived forwards’ …”(Einhorn & Hogarth, 1982, p.24).
- ^Taken from Yeates, 2004, p.147.
- ^See Robinson, J.B., “Energy Backcasting: A Proposed Method of Policy Analysis”, Energy Policy, Vol.10, No.4 (December 1982), pp.337-345; Robinson, J.B., “Unlearning and Backcasting: Rethinking Some of the Questions We Ask About the Future”, Technological Forecasting and Social Change, Vol.33, No.4, (July 1988), pp.325-338; Robinson, J., “Future Subjunctive: Backcasting as Social Learning”, Futures, Vol.35, No.8, (October 2003), pp.839-856.
- ^Robinson’s backcasting approach is very similar to the anticipatory scenarios of Ducot and Lubben (Ducot, C. & Lubben, G.J., “A Typology for Scenarios”, Futures, Vol.11, No.1, (February 1980), pp.51-57), and Bunn and Salo (Bunn, D.W. & Salo, A.A., “Forecasting with scenarios”, European Journal of Operational Research, Vol.68, No.3, (13 August 1993), pp.291-303).
- ^p.814, Dreborg, K.H., “Essence of Backcasting”, Futures, Vol.28, No.9, (November 1996), pp.813-828.
- ^Jansen, L., “Towards a Sustainable Future, en route with Technology”, pp.496-525 in Dutch Committee for Long-Term Environmental Policy (ed.), The Environment: Towards a Sustainable Future (Environment & Policy, Volume 1), Kluwer Academic Publishers, (Dortrecht), 1994.
- ^Rep. vii, I–III, 514–518B.
- ^Seyyed Hossein Nasr and Oliver Leaman (1996), History of Islamic Philosophy, p. 315, Routledge, ISBN 0-415-13159-6.
- ^Gualeni, Stefano (2015). Virtual Worlds as Philosophical Tools: How to Philosophize with a Digital Hammer. Basingstoke (UK): Palgrave MacMillan. ISBN 978-1-137-52178-1.
- ^Gualeni, Stefano (2016). “Self-reflexive videogames: observations and corollaries on virtual worlds as philosophical artifacts”. G a M E, the Italian Journal of Game Studies. 1, 5.
- ^Yeates, 2004, pp.138-143.
- ^Catholic Encyclopedia (1913)/Pandects “every logical rule of law is capable of illumination from the law of the Pandects.”
- ^Jaynes, E.T. (1989).Clearing up the Mysteries, opening talk at the 8th International MAXENT Workshop, St John’s College, Cambridge UK.
- ^French, A.P., Taylor, E.F. (1979/1989). An Introduction to Quantum Physics, Van Nostrand Reinhold (International), London, ISBN 0-442-30770-5.
- ^Wheeler, J.A, Zurek, W.H., editors (1983). Quantum Theory and Measurement, Princeton University Press, Princeton.
- ^d’Espagnat, B. (2006). On Physics and Philosophy, Princeton University Press, Princeton, ISBN 978-0-691-11964-9
- ^While the problem presented in this short story’s scenario is not unique, it is extremely unusual. Most thought experiments are intentionally (or, even, sometimes unintentionally) skewed towards the inevitable production of a particular solution to the problem posed; and this happens because of the way that the problem and the scenario are framed in the first place. In the case of The Lady, or the Tiger?, the way that the story unfolds is so “end-neutral” that, at the finish, there is no “correct” solution to the problem. Therefore, all that one can do is to offer one’s own innermost thoughts on how the account of human nature that has been presented might unfold, according to one’s own experience of human nature, which is, obviously, the purpose of the entire exercise. The extent to which the story can provoke such an extremely wide range of (otherwise equipollent) predictions of the participants’ subsequent behaviour is one of the reasons the story has been so popular over time.
This section concentrates on the module random from the Python standard library. It contains tools for generating random numbers and other randomized functionality.
The sections in this part of the material contain many links to the documentation of the Python standard library. We recommend following the links to familiarize yourself with how the documentation works.
Generating a random number
The function randint(a, b) returns a random integer between a and b, inclusive. For example, the following program works like a generic die:
from random import randint

print("The result of the throw:", randint(1, 6))
Executing this could print out:
The result of the throw: 4
The following program throws the die ten times:
from random import randint

for i in range(10):
    print("The result of the throw:", randint(1, 6))
Running the above could print out
The result of the throw: 5
The result of the throw: 4
The result of the throw: 3
The result of the throw: 2
The result of the throw: 3
The result of the throw: 4
The result of the throw: 6
The result of the throw: 4
The result of the throw: 4
The result of the throw: 3
NB: it is worth remembering that the function randint works a bit differently when compared to, for example, slices, or the function range, which we've come across previously. The function call randint(1, 6) results in a number between 1 and 6 inclusive, but the function call range(1, 6) results in a range of numbers from 1 to 5.
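The difference is easy to verify empirically. The following snippet is our own illustration (not part of the course material): it prints the values produced by range(1, 6) and the set of values observed over many calls to randint(1, 6).

from random import randint

# range(1, 6) excludes the upper bound
print(list(range(1, 6)))        # [1, 2, 3, 4, 5]

# randint(1, 6) includes the upper bound: over many draws, 6 shows up too
observed = {randint(1, 6) for _ in range(10_000)}
print(sorted(observed))         # almost certainly [1, 2, 3, 4, 5, 6]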
More randomizing functions
The function shuffle will shuffle any data structure passed as an argument, in place. For example, the following program shuffles a list of words:
from random import shuffle

words = ["atlas", "banana", "carrot"]
shuffle(words)
print(words)
['banana', 'atlas', 'carrot']
The function choice returns a randomly picked item from a data structure:
from random import choice

words = ["atlas", "banana", "carrot"]
print(choice(words))
A common example for studying randomness is the case of lottery numbers. Let's try and draw some lottery numbers. In Finland the national lottery consists of a pool of 40 numbers, 7 of which are chosen for each week's draw.
A first attempt at drawing a set of numbers could look like this:
from random import randint

for i in range(7):
    print(randint(1, 40))
This would not work in the long run, however, as the same number may appear twice in a single weekly draw of seven numbers. We need a way to make sure the numbers drawn are all unique.
One possibility is to store the drawn numbers in a list, and only add a number if it is not already on the list. This can be repeated until the length of the list is seven:
from random import randint

weekly_draw = []
while len(weekly_draw) < 7:
    new_rnd = randint(1, 40)
    if new_rnd not in weekly_draw:
        weekly_draw.append(new_rnd)

print(weekly_draw)
A more compact approach would be to use the shuffle function:
from random import shuffle

number_pool = list(range(1, 41))
shuffle(number_pool)
weekly_draw = number_pool[0:7]
print(weekly_draw)
Here the idea is that we first create a list containing the available numbers 1 to 40, rather like the balls in a lottery machine. The pool of numbers is then shuffled, and the first seven numbers chosen for the weekly draw. This saves us the trouble of writing a loop.
In fact, the random module contains an even easier way to select lottery numbers: the sample function. It returns a random selection of a specified size from a given data structure:
from random import sample

number_pool = list(range(1, 41))
weekly_draw = sample(number_pool, 7)
print(weekly_draw)
Where do these random numbers come from?
The features of the module random are based on an algorithm which produces random numbers based on a specific initialization value and some arithmetic operations. The initialization value is often called a seed value.
The seed value can be supplied by the user with the seed function:
from random import randint, seed

seed(1337)
# this will always produce the same "random" number
print(randint(1, 100))
If we have functions which rely on randomization and we set a seed value, they will produce the same result each time they are executed. The result may be different with different Python versions, but in essence randomness is lost by setting a seed value. This can be a useful feature when testing a program, for example.
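As a small illustration of this (our own sketch, not part of the course material), resetting the seed restarts the pseudo-random sequence, so two runs started from the same seed produce identical results, even though the specific numbers printed may vary between Python versions:

from random import randint, seed

def draw(n):
    # draw n pseudo-random integers between 1 and 100
    return [randint(1, 100) for _ in range(n)]

seed(1337)
first = draw(5)

seed(1337)   # resetting the seed restarts the sequence
second = draw(5)

print(first)
print(second)
print(first == second)   # True: same seed, same sequence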
Global Positioning System
What Is GPS?
The Global Positioning System (GPS) is a navigation and precise-positioning tool. Developed by the U.S. Department of Defense in 1973, GPS was originally designed to assist soldiers and military vehicles, planes, and ships in accurately determining their locations world-wide. Today, the uses of GPS have extended to include both the commercial and scientific worlds. Commercially, GPS is used as a navigation and positioning tool in airplanes, boats, and cars, and for almost all outdoor recreational activities such as hiking, fishing, and kayaking. In the scientific community, GPS plays an important role in the earth sciences. Meteorologists use it for weather forecasting and global climate studies, and geologists use it as a highly accurate method of surveying and, in earthquake studies, to measure tectonic motions during and between earthquakes.
How Does It Work?
Three distinct parts make up the Global Positioning System. The first segment of the system consists of 24 satellites, orbiting 20,000 km above the Earth in 12-hour circular orbits. This means that it takes each satellite 12 hours to make a complete circle around the Earth. In order to make sure that they can be detected from anywhere on the Earth's surface, the satellites are divided into six groups of four. Each group is assigned a different path to follow. This creates six orbital planes which completely surround the Earth.
These satellites send radio signals to Earth that contain information about the satellite. Using GPS ground-based receivers, these signals can be detected and used to determine the receivers' positions (latitude, longitude, height). The radio signals are sent at two different L-band frequencies. L-band refers to a range of frequencies between 390 and 1550 MHz. Within each signal, a coded sequence is sent. By comparing the received sequence with the original sequence, scientists can determine how long it takes for the signal to reach the Earth from the satellite. The signal delay is useful in learning about the ionosphere and the troposphere, two atmospheric layers that surround Earth's surface. A third signal is also sent to the receivers from the satellite. This signal contains data about the health and position of the satellite.
The second part of the GPS system is the ground station, consisting of a receiver and antenna, as well as communication tools to transmit data to the data center. The omni-directional antenna at each site, acting much like a car radio antenna, picks up the satellite signals and transmits them to the site receiver as electric currents. The receiver then separates the signals into different channels designated for a particular satellite and frequency at a particular time. Once the signals have been isolated, the receiver can decode them and split them into individual frequencies. With this information the receiver produces a general position (latitude, longitude, and height) for the antenna. Later, the data collected by the receiver can be processed again by scientists to determine different things, including another set of position coordinates for the same antenna, this time with millimeter accuracy.
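As a highly simplified illustration of the underlying geometry, the sketch below (our own example, with invented satellite coordinates and ignoring clock bias and atmospheric delays) converts signal delays into ranges (range = speed of light × delay) and then solves for the receiver position by least squares. It is only a toy model of the real processing chain described above, not the actual algorithm used at GPS data centers.

import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light in m/s

# Hypothetical satellite positions in metres (Earth-centred coordinates)
satellites = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,  6_100e3, 18_390e3],
])

# A made-up "true" receiver position, used only to simulate the measurements
true_position = np.array([1_113e3, 6_001e3, 3_000e3])

# Simulated signal delays (seconds) and the ranges they imply
delays = np.linalg.norm(satellites - true_position, axis=1) / C
ranges = C * delays

def residuals(position):
    # difference between modelled distances and the measured ranges
    return np.linalg.norm(satellites - position, axis=1) - ranges

# Start the search from a rough guess near the Earth's surface;
# with noise-free data the solver recovers the simulated position
solution = least_squares(residuals, x0=np.array([0.0, 0.0, 6_400e3]))
print(solution.x)        # approximately equal to true_position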
The third part of the system is the data center. The role of the data center is twofold. It both monitors and controls the global GPS stations, and it uses automated computer systems to retrieve and analyze data from the receivers at those stations. Once processed, the data, along with the original raw data, is made available to scientists around the world for use in a variety of applications. Since global GPS sites are constructed and monitored by different institutions all over the world, there are many different data center locations.
Question 1. What Is Marginal Cost And Marginal Costing?
Marginal Cost is the amount at any given volume of output by which aggregate costs are changed if the volume of output is increased or decreased by one unit. The aggregate costs consist of both fixed cost and variable cost. In simple words, marginal cost indicates the per unit variable cost.
Marginal Costing, on the other hand, is the ascertainment, by differentiating between fixed costs and variable costs, of marginal costs and of the effect on profit of changes in the volume and type of output.
Question 2. What Is Sunk Cost?
Sunk cost indicates a historical cost which has been incurred in the past. This type of cost is not relevant in the decision making process. For example, while deciding about the replacement of a machine, the depreciated book value of the machine is not relevant, as it is a sunk cost.
Question 3. What Do You Understand By Cost Accountancy? What Are The Objectives Of Cost Accountancy?
Cost accountancy is the application of Costing and Cost accounting principles, methods, and techniques to the science, art and practice of cost control and the ascertainment of profitability as well as the presentation of information for the purpose of managerial decision making.
Following are the objectives of cost accountancy:
- Ascertainment of cost and profitability with the help of various principles, methods and techniques.
- Cost control
- Presentation of information to enable managerial decision making.
Question 4. What Do You Understand By Cost Center? What Are The Types Of Cost Centers?
Cost center is defined as a location, person, or item of equipment in relation to which costs may be ascertained and used for the purpose of cost control. Identification of a cost center is a prerequisite for the successful implementation of the cost accounting process, as costs are ascertained and controlled with respect to the cost centers. In many cases cost centers are termed as Responsibility Centers.
Types of cost centers:
1. Impersonal cost center – Consists of location or item of equipment.
Example – department, branch etc.
2. Personal cost center – Consists of a person or a group of persons.
Example – finance manager, sales manager etc.
3. Production cost center – Is the one where the production activity is carried on.
For example – paint shop, a machine shop, etc.
4. Service cost centers – Is the one which assists the production activity.
For example – store department, internal transport department, labour office, accounts department, etc.
Question 5. What Are The Different Types Of Cost?
Cost indicates the amount of expenditure incurred on a given thing.
Following are the different types of cost:
Direct Cost – also termed as Prime cost. It indicates that cost which can be identified with the individual cost center. It consists of direct material cost, direct labour cost and direct expenses.
Indirect Cost – also termed as Overhead. It indicates that cost which cannot be identified with the individual cost center. It consists of indirect material cost, indirect labour cost and indirect expenses.
Fixed Cost – indicates that portion of total cost which remains constant at all the levels of production. As the volume of production increases, per unit fixed cost may reduce, but not the total fixed cost.
Variable Cost – indicates that portion of the total cost which varies directly with the level of production. The higher the volume of production, the higher the variable cost and vice versa, though per unit variable cost remains constant at all the levels of production. (A short numerical sketch of this behaviour follows this list.)
Semi-variable cost – indicates that portion of the total cost which is partly fixed and partly variable in relation to the volume of production.
Controllable cost – indicates that cost which can be controlled by a specific number of persons in the organization
Uncontrollable cost – indicates that cost which cannot be controlled by a specific number of persons in the organization.
Normal cost – indicates that cost which is normally incurred at a certain level of output under normal circumstances.
Abnormal cost – indicates that cost which is not normally incurred at a certain level of output under normal circumstances.
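The behaviour of fixed and variable costs per unit is easiest to see numerically. The short sketch below uses invented figures purely for illustration:

fixed_cost = 100_000          # total fixed cost: constant at every volume
variable_cost_per_unit = 40   # constant per unit of output

for units in (1_000, 2_000, 5_000):
    total_cost = fixed_cost + variable_cost_per_unit * units
    per_unit_fixed = fixed_cost / units
    print(units, "units:", "fixed/unit =", per_unit_fixed,
          "variable/unit =", variable_cost_per_unit, "total =", total_cost)

As output rises from 1,000 to 5,000 units, the fixed cost per unit falls from 100 to 20 while the per-unit variable cost stays at 40, exactly as the definitions above describe.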
Question 6. Which Factors Should Be Considered Before Installing A Costing System?
- Nature of the Product
- Nature of the Organization
- Manufacturing Process
- Simplicity and Cost
- Reporting Systems
Question 7. What Are The Elements Of Costs?
Elements of costs
- Material Cost – is the cost of commodities and material used by the organization. It can be direct and indirect material. Direct material indicates that material which can be identified with the individual cost center and which becomes an integral part of the finished goods. Indirect material indicates that material which cannot be identified with the individual cost center. This material assists the manufacturing process and does not become an integral part of finished goods.
- Labour Cost – is the cost of remuneration paid to the employees of the organization. It can be direct or indirect. Direct labour cost indicates that labour cost which can be identified with the individual cost center and is incurred for those employees who are engaged in the manufacturing process. Indirect labour cost indicates that labour cost which cannot be identified with the individual cost center and is incurred for those employees who are not engaged in the manufacturing process but only assist in the same.
- Expenses – is the cost of services provided to the organization. It can be direct or indirect. Direct expenses are those expenses which can be identified with the individual cost centers. Indirect expenses are those expenses which cannot be identified with that individual cost centers.
Question 8. What Items Are Included In Prime Cost?
Prime Cost is an aggregate of direct material cost, direct labour cost and direct expenses.
Question 9. What Is Overhead? What Items Are Included In Overhead?
Overhead is an aggregate of indirect material cost, indirect labour cost and indirect expenses.
Overheads are further classified as:
- Factory Overheads – Consists of all overhead costs incurred from the stage of procurement of material till the stage of production of finished goods
- Office and Administration Overheads – Consists of all overhead costs incurred for the overall administration of the organization.
- Selling and Distribution Overheads – Consists of all overhead costs incurred from the stage of final manufacturing of finished goods till the stage of sale of goods in the market and collection of dues from the customers.
Question 10. What Are Non Operating Financial Incomes And Non Operating Financial Expenses?
Non operating financial income represents that income which arises not as a part of regular operations of the organization. Due to these incomes operating profit as per cost statement may be less than profit as per Profit and Loss account. For example: profit on the sale of assets, dividend received etc.
Non operating financial expense represents that expense which arises not as a part of regular operations of the organization. Due to these expenses the operating profit as per the cost statement may be more than the profit as per Profit and Loss Account. For example: a loss on the sale of assets, provision for income tax, interest paid etc.
Question 11. What Are The Main Consequences Of Overstocking?
- It will block a large amount of working capital.
- More storage facilities will be required.
- Risk of deterioration of quality and obsolescence of material.
- More attention will be required in material handling and upkeep.
- Additional Insurance cost.
Question 12. What Is The Difference Between Bin Card And Stores Ledger?
- Bin Card is a quantitative record of receipts, issues and closing balance of an item of material. Whereas Stores ledger records not only quantities received or issued or in stock but also the financial expressions of the same.
- Bin Card is maintained by stores department while stores ledger is maintained by costing department.
- Maintenance of stores ledger provides a second check on maintenance of bin cards.
Question 13. What Are The Various Ways To Classify Overhead?
Element wise Classification:
- Indirect Material
- Indirect Labour
- Indirect Expenses
Function wise Classification:
- Factory Overheads
- Administration Overheads
- Selling and Distribution Overheads
Variability wise Classification:
- Fixed Overheads
- Variable Overheads
- Semi-variable Overheads
Controllability wise Classification:
- Controllable Overheads
- Uncontrollable Overheads
Normality wise Classification:
- Normal Overheads
- Abnormal Overheads
Question 14. What Is The Difference Between Simple Average Method And Weighted Average Method?
Under Simple average method: the simple average of the prices of the lots available for making the issues is considered for pricing the issues. After the receipt of new lot, a new average price is worked out. This method is suitable if the material is received in uniform quantity.
Under Weighted average method: the price of each lot and the quantity of the same is considered. This method proves to be very useful in the event of varying prices and quantities. It is very simple to calculate.
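To make the contrast concrete, suppose (figures invented for illustration) two lots are received: 100 units at Rs. 10 and 300 units at Rs. 14. The two methods then price the issues as follows:

lots = [(100, 10.0), (300, 14.0)]   # (quantity, price) of each lot on hand

# Simple average: average of the lot prices, ignoring quantities
simple_average = sum(price for _, price in lots) / len(lots)

# Weighted average: total value of the lots divided by total quantity
total_value = sum(qty * price for qty, price in lots)
total_quantity = sum(qty for qty, _ in lots)
weighted_average = total_value / total_quantity

print("Simple average issue price:  ", simple_average)    # 12.0
print("Weighted average issue price:", weighted_average)  # 13.0

Because the cheaper lot is much smaller, the weighted average (13.0) reflects the actual cost of the material on hand more faithfully than the simple average (12.0).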
Question 15. What Are The Limitations Of Marginal Costing?
- The classification of total cost as variable cost and fixed cost is difficult as no cost can be completely variable or completely fixed.
- Fixed costs are eliminated for the valuation of inventory of finished goods and semi-finished goods in spite of the fact that they might have been actually incurred.
- It does not provide any standard for the evaluation of performance.
- Fixation of selling price on marginal cost basis may be useful for short term only and may be dangerous in the long run.
- It does not consider the fixed overheads.
- It can be used for assessment of profitability only in the short run.
Question 16. What Is P/v Ratio?
P/V Ratio is the Profit Volume Ratio, which indicates the contribution earned with respect to one rupee of sales. The fundamental property of the P/V Ratio is that it remains constant at all the levels of activities, provided per unit sales price and variable cost remain constant. A high P/V ratio indicates that a slight increase in sales without a corresponding increase in fixed costs will result in higher profits, whereas a low P/V ratio indicates low profitability, so that efforts can be made to increase the profits by increasing the selling price or by reducing variable cost.
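In formula terms, Contribution = Sales − Variable Cost and P/V Ratio = Contribution ÷ Sales. A short worked example with invented figures:

sales = 500_000
variable_cost = 300_000
fixed_cost = 120_000

contribution = sales - variable_cost     # 200000
pv_ratio = contribution / sales          # 0.4, i.e. 40%
profit = contribution - fixed_cost       # 80000

print("Contribution:", contribution)
print("P/V ratio:", pv_ratio)
print("Profit:", profit)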
Question 17. What Are The Basic Assumptions Made By Marginal Costing?
Marginal Costing is based on the following the basic assumptions
- Variable cost varies in direct proportion with the level of activity whereas per unit variable cost remains constant at all the levels of activities.
- Per unit selling price remains constant at all the levels of activities.
- There are no variations due to the stock.
Question 18. What Do You Understand By Margin Of Safety?
Margin of safety is the amount of sales beyond the Break Even Point. In simple words, this is the amount of sales which generates profits. The soundness of the business is indicated by the margin of safety. A high margin of safety indicates that the Break Even Point is much below the actual sales, so that even if there is a reduction in sales, the business will still be in profit, whereas a low margin of safety accompanied by high fixed cost and a high P/V ratio indicates that efforts are required to reduce the fixed cost or increase the sales volume.
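Continuing the invented figures from the P/V ratio example above, Break Even Sales = Fixed Cost ÷ P/V Ratio and Margin of Safety = Actual Sales − Break Even Sales:

sales = 500_000
fixed_cost = 120_000
pv_ratio = 0.4                     # from the previous example

break_even_sales = fixed_cost / pv_ratio        # 300000.0
margin_of_safety = sales - break_even_sales     # 200000.0

print("Break even sales:", break_even_sales)
print("Margin of safety:", margin_of_safety)
print("Margin of safety as % of sales:", margin_of_safety / sales * 100)  # 40.0

A useful cross-check is that Margin of Safety × P/V Ratio equals the profit (200,000 × 0.4 = 80,000), matching the earlier example.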
Question 19. What Are The Different Methods Of Remunerating The Workers?
Remuneration on time basis
- High Wage Plan
- Differential Time Rate
Remuneration on work basis
- Straight Piece Rate System
- Piece Rate with Guaranteed Time Rate
- Differential Piece Rate System
- Individual Incentive systems
- Group Incentive systems
Indirect monetary remuneration
- Profit Sharing
Question 20. Explain Maximum Level And What Are The Main Factors Considered While Fixing This Level?
Maximum level is the level above which the actual stock should not be allowed to rise. The following factors are considered while fixing this level:
- Maximum Usage.
- Lead Time
- Price of Material
- Cost of Storage
- Availability of Funds
- Economic Order Quantity.
Back in February, researchers at LIGO made the historic discovery of gravitational waves, predicted a century earlier. The waves were generated by a pair of black holes in their final in-spiral before an inevitable collision and merger.
Now, a group of researchers is investigating the possibility that the discovery may have been even more historic than we thought. Last week, Physical Review Letters published a paper titled “Did LIGO detect Dark Matter?” It explores the possibility that dark matter could really be black holes, such as the pair seen by LIGO, provided enough are distributed throughout the halos of galaxies. If so, in addition to finally observing the long-sought gravitational waves, we may have simultaneously discovered dark matter.
But we shouldn’t break out the champagne just yet. Black holes aren’t among the leading candidates for dark matter, and there are good reasons for that.
The leading hypothesis is that dark matter is composed of WIMPs, or Weakly Interacting Massive Particles. There is, however, a plethora of alternatives, though most of them are also particles of one stripe or another. There are also a few outliers, such as topological defects (perturbations in a quantum field), but even these are minuscule.
The proposal that macroscopic objects (such as black holes) are dark matter was once a strong candidate. In contrast to WIMPs, they were called MACHOs, or MAssive Compact Halo Objects. MACHOs include things like black holes, neutron stars, and/or brown dwarfs, none of which would emit much light.
But observations haven't been consistent with MACHOs accounting for dark matter, and the idea has largely fallen out of favor. The authors of the new paper point out that not all black holes have been ruled out. There’s still a narrow range of black hole masses that could fit the observations: those about 20 to 100 times the mass of the Sun (solar masses).
It couldn’t be black holes with less than 20 solar masses, as those have been ruled out by surveys that look for gravitational microlensing, visible when the black hole passes between us and a distant star. If there were enough black holes to account for dark matter, they would have shown up in these surveys.
Black holes above 100 solar masses don’t cut it either. If they were common in our galaxy's halo, binary star systems there would be disrupted by encounters with black holes.
Even though black holes of a certain size haven't been ruled out, they're still not great dark matter candidates. Most low-mass black holes began their lives as stars that collapsed after running out of fuel. But the dark matter we see influencing the Universe at large scales has been acting since very early in the Universe’s history, before there were stars that could collapse into black holes.
As such, if black holes make up dark matter, they’d have to be primordial black holes: ones that were formed very early in the Universe’s history. The very early Universe was extremely dense, with matter packed tightly enough that it would be possible for pockets of it to clump together and form a black hole.
This would be consistent with the modern Universe, too. The dark matter halos that galaxies sit in are much larger than the galaxies themselves. For the halo to be composed of black holes, those black holes have to be present where there are very few stars. If they were primordial black holes, this wouldn't be a problem.
Other studies, however, have argued that even primordial black holes should be ruled out. That’s because any black holes that formed in the early Universe would rapidly accrete gas, releasing significant x-ray radiation in the process. These x-rays could be detectable even now, but we don’t see evidence of them.
The authors of the present study argue it’s not that simple. The early Universe was a messy, chaotic place with a lot to account for. Therefore, they argue, there should be a significant uncertainty attached to our expectations of these x-ray emissions. If so, dark matter black holes wouldn’t yet be totally ruled out.
When LIGO detected gravitational waves from a pair of colliding black holes, scientists first saw it as an extraordinarily lucky event. Black hole mergers like the one observed should be rare; expectations were that LIGO would pick up a neutron star merger first. This led researchers to ask whether it wasn’t luck—maybe such black hole mergers are really common in the Universe.
So they calculated the number of collisions that could be taking place based on this one observation (insert caution about extrapolating from a single instance here). The estimated rate was two to 53 mergers per cubic gigaparsec of space per year.
Separately: if there were enough primordial black holes to make up the dark matter halos we observe, they’d collide with each other every so often—a rate we can also calculate at approximately five mergers per cubic gigaparsec per year. Obviously, that fits within the above-mentioned LIGO estimate window.
“It is interesting that—although there are theoretical uncertainties—our best estimates of the merger rate for 30 solar mass [primordial black holes], obtained with canonical models for the [dark matter] distribution, fall in the LIGO window,” the authors write in their paper.
And that’s not all. Primordial black holes, if they existed, would be distributed through space more like dark matter than like traditional black holes. “The possibility that LIGO has seen [dark matter] thus cannot be immediately excluded,” they write.
Advanced LIGO should become sensitive enough by 2019 to be able to detect more of these events, so the researchers spell out what they should see if black holes are making up most of the dark matter. They predict it should detect roughly 600 events within its 50 cubic gigaparsec range if essentially all the dark matter is primordial black holes.
It will be a significant challenge to determine whether any merger involves a primordial black hole. The researchers present some possibilities, though. For one thing, these collisions are not expected to emit any observable light or neutrinos. So by comparing data from the newborn field of gravitational wave astronomy with other observations, researchers could gain a clue as to a black hole’s identity. Locating the black hole would also help, since most of the primordial objects are expected to be in the halo.
Another possible avenue of investigation is the gravitational wave background. The Universe is expected to be filled by gravitational waves from a steady stream of black hole mergers, neutron star mergers, and stars collapsing to form black holes. In the early Universe, before stars were common, these events would be dominated by the merger of primordial black holes. So researchers can look at the gravitational wave background coming from very distant sources, which could allow an estimate of the number of primordial black holes.
Even if these observations put limits on the number of primordial black holes, it may be that these black holes make up some percentage of dark matter and the rest is composed of particles. But it may also be that MACHOs aren't dead yet, and the new era of gravitational wave astronomy will get people to consider them seriously again.
Listing image by Illustris Collaboration
With the math mystery, students complete the math problems to solve a secret mystery. Perfect for third graders, these pages include illustrated fraction practice, visual explanations of how fractions work, and various fraction math drills.
3rd grade fractions worksheets grade 3. 3rd grade fractions worksheets, lessons, and printables: See more ideas about fractions, math fractions, math classroom. Our grade 3 fractions and decimals worksheets provide practice exercises on introductory fraction and decimal concepts, including identifying simple fractions, equivalent fractions and simple fraction and decimal addition and subtraction.several worksheets (with answer keys) are provided for each type of.
Choose your grade 3 topic to help the third grade student with basic skill that they need in grade 3. Free worksheets for grade 3: Worksheets > math > grade 3 > fractions & decimals.
Some of the topics in this collection include fractions on a number line and equivalent fractions. Choose the right material that you want your students to work on. 3rd grade math worksheets, printable in pdf, cover third grade topics:
Now things get real interesting, as the third grade math menu features mixed and equivalent fractions, plus fraction conversion, adding and subtracting fractions, and comparing like fractions. Free fraction and decimals worksheets. Fractions look at the shaded part of each shape and circle the from 3rd grade fractions worksheets
Here is a collection of our printable worksheets for topic equivalent fractions of chapter understand fractions in section fractions and decimals. The worksheets for changing mixed numbers to fractions or vice versa are optional, as it is not required the student be able to do these in 3rd grade without a visual model. 3rd grade math printable worksheet worksheets for all from 3rd grade fractions worksheets, source:
3rd grade fractions worksheets fractions are a big deal in third grade, so give your child a little extra practice identifying, modeling, and comparing fractions with these math worksheets. Addition and subtraction up to 3 and 4 place numbers, basic division and quick facts, adding, subtracting and recognizing fractions, algebra concept, fractions, word problems, math logic, metric systems and measurements, algebraic thinking etc. Now things get real interesting, as the third grade math menu features mixed and equivalent fractions, plus fraction conversion, adding and subtracting fractions, and comparing like fractions.
Some of the worksheets for this concept are comparing fractions work, comparing fractions work, comparing fractions, comparing and ame ordering fractions, grade 3 fractions work, fractions packet, ordering fractions, fractions number line. Add and subtract like fractions (grade 3) fraction comparison order fractions from least to greatest Splashlearn is an award winning math learning program used by more than 30 million kids for fun math practice.
3rd grade math fractions worksheets fraction activities lessons from 3rd grade fractions worksheets, source: Third grade fractions worksheets and printables last year, your second grader was introduced to the fundamentals of fractions. Click on the images to view, download, or print.
Use as simple test prep, or as a grade 3 math mystery. Addition, subtraction, multiplication, and english. Improve math scores on standardized tests using these practice test questions.
Check 3rd grade math games and fun math worksheets curriculum interactive practice learning. 1st grade 2nd grade 3rd grade 4th grade 5th grade 6th grade activities adult alphabet coloring flashcards math pre k science fraction worksheets for grade 3 72 fraction worksheets for grade 3 images. Fraction review study guide 3rd grade common core standard:
Third grade fractions worksheets and printables last year, your second grader was introduced to the fundamentals of fractions. Printable from 3rd grade fraction worksheets, source:k5learning.com You'll find a variety of fun third grade worksheets to print and use at home or in the classroom.
With these engaging worksheets, your students will start comparing and matching equivalent fractions right away. Equivalent fractions with pie charts. The intent is to visually reinforce the meaning of equivalent for fractions.
Greatschools staff | april 16, 2016 Learn third grade math online for free. Students are asked to color in the pie charts which represent each fraction.
Some of the worksheets for this concept are math 3rd grade fractions crossword name, fractions grade 3, comparing fractions work, juggling fractions 3rd grade fractions work, fraction word problems, grade 3 math practice test, equivalent fractions work, grade 3 fraction unit of instruction. This coloring math worksheet introduces your child to fractions by asking kids to shade in parts of circles and rectangles. You'll find these products fun and motivating for your students.
Browse our selection of resources. Nicki newton's board 3rd grade fractions, followed by 13724 people on pinterest. Worksheets > math > grade 3 > fractions & decimals > equivalent fractions.
A brief description of the worksheets is on each of the worksheet widgets. Grade 3 fractions and decimals worksheets are free; below are six versions of our grade 3 math worksheet on equivalent fractions.
Tracking a Propeller
July 8, 2010
NASA’s Cassini spacecraft captured a propeller-shaped disturbance in one of Saturn's rings created by a moon that is too small to be seen here.
The moon, likely about a kilometer (half a mile) across, is invisible at the center of the image. However, it is larger than many other "propeller" moons and has cleared ring material from the dark wing-like structures to its left and right in the image. Disturbed ring material closer to the moon reflects sunlight brightly and appears like a white airplane propeller. This propeller appears in the A ring, which is the outermost of Saturn’s main rings.
Taken in 2006, this image is part of a growing catalogue of "propeller" moons that, despite being too small to be seen, enhance their visibility by creating larger disturbances in the surrounding fabric of Saturn's rings. Cassini scientists now have tracked several of these individual propeller moons embedded in Saturn's disk over several years.
These images are important because they represent the first time scientists have been able to track the orbits of objects in space that are embedded in a disk of material. Continued monitoring of these objects may lead to direct observations of the interaction between a disk of material and embedded moons. Such interactions help scientists understand fundamental principles of how solar systems formed from disks of matter. Indeed, Cassini scientists have seen changes in the orbits of these moons, although they don't yet know exactly what causes these changes. Imaging scientists nicknamed the propeller shown here "Bleriot" after a French aviator named Louis Bleriot. The propeller structure is 5 kilometers (3 miles) in the radial dimension -- the dimension moving directly outward from Saturn. The dark wings appear 1100 kilometers (700 miles) in the azimuthal (longitudinal) dimension, while the central propeller structure is 110 kilometers (70 miles) long.
See Propeller Churns the A Ring to watch a movie of "Bleriot." Giant Propeller in A Ring shows the giant propeller "Earhart" named after another aviator, Amelia Earhart. See Propeller Motion and Locating the Propellers to learn more about propeller shapes and to see smaller propellers.
This image has been re-projected so that orbiting material moves to the right and Saturn is down. The propeller was seen at the edge of the camera's field of view when the image was taken, so some data were missing; the blank space at the top of the image was filled in with a gray color. Scale in the original image was 2 kilometers (1 mile) per pixel. Image scale in this re-projected view is about 1 kilometer (half a mile) per pixel.
This view looks toward the southern, sunlit side of the rings from about 30 degrees below the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Dec. 15, 2006. The view was acquired at a distance of approximately 463,000 kilometers (288,000 miles) from Saturn and at a sun-Saturn-spacecraft, or phase, angle of 15 degrees.
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, Calif., manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo.
Image Credit: NASA/JPL/SSI
The human tolerance for sound is, on a galactic level, puny. Volcano eruptions, jackhammer-intensive construction work, My Bloody Valentine concerts: these tinnitus-inducing phenomena are barely whispers beside the majestic, roiling bursts and collisions going on in outer space.
Of course, much of this activity is technically soundless; space lacks the medium that makes sound waves possible. So for this week's Giz Asks, we asked experts in astronomy and astrophysics what the loudest sound would be, if sound as we understand it existed up there. As it turns out, it sometimes does, and when it doesn't, we can sometimes convert the relevant emissions to a sound tolerable to our tiny, earthbound ears.
National Science Foundation Postdoctoral Fellow, Astronomy & Astrophysics, University of California, Santa Barbara
As far as I’m aware, the Perseus galaxy cluster is the current record holder for the loudest sound discovered in the Universe. Generating sound requires two conditions. First, there must be a medium that the sound waves can travel through, like air or some other gas. Indeed, there is very hot gas that pervades the space between the thousands of galaxies that make up the Perseus galaxy cluster. This gas shines as X-ray light that we can observe with X-ray telescopes in space, like the Chandra X-ray Observatory. The second condition for sound is a source to actually produce the sound waves. A powerful black hole is at the center of one of these galaxies that make up the Perseus galaxy cluster. Periodically, this black hole ejects an enormous amount of energy into the hot surrounding gas, which transports the energy as sound waves traveling out through the cluster like expanding bubbles.
What makes the sound loud is the ability of the gas to efficiently carry away the energy released by the black hole, which amounts to an energy comparable to 100 million exploding stars! Although this sound from the Perseus galaxy cluster is very loud—that is, the amplitude of the sound waves is huge—we couldn’t actually hear it with our own ears. That’s because the sound corresponds to a B-flat some 57 octaves below middle-C on a piano. That means it takes about 10 million years for one sound wave to pass by, which is quite a bit longer than you’re likely to live even if you exercise regularly and eat healthy.
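As a rough check on those figures, the sketch below converts a pitch 57 octaves below B-flat into a wave period. It assumes the B-flat near 466 Hz as the reference note, which is my assumption rather than something stated in the text, and it reproduces a period of roughly ten million years.

# Minimal sketch: period of a sound wave 57 octaves below B-flat (~466 Hz assumed).
b_flat_hz = 466.16       # assumed reference B-flat, in hertz
octaves_below = 57

frequency_hz = b_flat_hz / 2**octaves_below        # ~3.2e-15 Hz
period_seconds = 1.0 / frequency_hz
period_years = period_seconds / (365.25 * 24 * 3600)

print(f"Frequency: {frequency_hz:.2e} Hz")
print(f"Period:    {period_years:.1e} years")      # roughly 1e7 years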
Astronomer and Professor at the UCO/Lick Observatory at the University of California Santa Cruz
Sound is really a form of energy transmittal, it’s vibration. The problem is the transmittal of that energy in the form of sound—there is no sound in space. But energy gets transmitted in other ways—a blast wave from an explosion, for instance. Gamma ray bursts are considered to be the most energetic events in the universe—they’re not fully understood, but they’re almost certainly explosions of stars, and they release more energy in 10 seconds than the sun will in its entire ten billion year lifetime.
Professor, Mathematics and Statistics, University of Sheffield, whose research is focused on solar, space and plasma physics, MHD waves, linear and non-linear waves
Sound cannot really travel in empty space. For sound you need some medium—like gas, for instance, in the Earth’s atmosphere—and in space that material is very, very rare—maybe one atom per cubic kilometer, or less. But that doesn’t mean that a big explosion couldn’t generate acoustic waves.
Space is filled by plasma, which is the fourth state of matter, the others being (according to our current knowledge) the solid, the liquid and the gas. The universe itself is 99.9% in a plasma state. It’s only on Earth that we haven’t got so much plasma.
In space, there is magnetic field everywhere. The same is true of Earth, but we don’t really feel it. In space, if the magnetic field is not very strong, and there is plasma under these circumstances, sound could propagate.
Stars are continuously bubbling, you could say, through a process called convection. That type of disturbance in the plasma state generates a lot of acoustic waves—sound waves. The Sun itself does this. Sometimes these acoustic periods can last for hours, sometimes just a few seconds. You could interpret these kinds of acoustic waves as very loud sounds.
The energies involved in the generation of these acoustic waves are billions of billions of billions of times the power of an atomic bomb. The explosions that produce these sounds are absolutely massive—you cannot imagine.
Assistant Professor, Theoretical Astrophysics, Caltech
The loudest sound in the universe definitely comes from black hole mergers. In this case the "sound" comes out in gravitational waves and not ordinary sound waves. As long as the black holes are in the range of roughly 1-100 solar masses (which is the case for the black hole mergers recently detected by LIGO), the sound is indeed in the human hearing range! These mergers output something like 10^52 watts of power. That's about a billion billion times the energy output of the Sun. If translated to the decibel-watt scale, that equates to something like 520 decibels. That doesn't sound too large, but remember the decibel scale is logarithmic, so an increase of 10 decibels is a factor of ten in volume.
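The 520-decibel figure follows directly from the logarithmic definition of the decibel-watt (dBW) scale; the sketch below simply reruns that arithmetic.

import math

# Minimal sketch: expressing ~1e52 watts on the decibel-watt (dBW) scale.
power_watts = 1e52        # approximate peak power quoted for a black hole merger
reference_watts = 1.0     # dBW reference level: 1 watt

dbw = 10 * math.log10(power_watts / reference_watts)
print(f"{dbw:.0f} dBW")   # 520 dBW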
Professor, Physics and Astronomy, University of Iowa, whose research is focused on experimental space plasma physics
This isn’t a sound, it’s a radio emission—but you could convert it to sound.
The signal came back to us as a waveform, and then on the ground we converted it to a sound that you can listen to, and it is very, very loud.
It is something called a heliospheric radio emission. There is a very special radio receiver on the Voyager that covers the frequency range from about 10 kilohertz to 50 kilohertz—a very low frequency, well below a car radio, for instance. We detected an intense radio emission, produced out at the boundary between the solar wind (the wind that comes out from the sun, and flows at about a million miles per hour, expanding outward almost to infinity) and the interstellar plasma (called the heliopause) which eventually stops the solar wind.
So there was an intense series of explosions on the Sun, often called solar flares, in 1991. These sent a shockwave out through the solar system. We detected this shockwave with four spacecraft: Pioneer 10, Pioneer 11, and Voyagers 1 and 2. We also detected it when it went by the Earth. It was moving at 600 to 800 km per second, several million miles an hour. I postulated that this radio emission was produced when the shockwave finally reached the heliopause and ran into the interstellar plasma.
I think this is the most powerful radio emission we’ve ever detected. In 1995 I quoted the radiated power as 10^13 watts. As far as emissions detected anywhere near our solar system go, it is clearly one of the most intense.
The international Spitzer Adaptation of the Red-sequence Cluster Survey (SpARCS) collaboration, based at the University of California, Riverside, has combined observations from several of the world's most powerful telescopes to carry out one of the largest studies yet of molecular gas, the raw material that fuels star formation throughout the universe, in three of the most distant clusters of galaxies ever found, detected as they appeared when the universe was only four billion years old.
Clusters are rare regions of the universe consisting of tight groups of hundreds of galaxies containing trillions of stars, as well as hot gas and mysterious dark matter. First, the research team used spectroscopic observations from the W. M. Keck Observatory on Mauna Kea, Hawai’i, and the Very Large Telescope in Chile that confirmed 11 galaxies were star-forming members of the three massive clusters. Next, the researchers took images through multiple filters from NASA’s Hubble Space Telescope, which revealed a surprising diversity in the galaxies’ appearance, with some galaxies having already formed large disks with spiral arms.
One of the telescopes the SpARCS scientists used is the extremely sensitive Atacama Large Millimeter Array (ALMA) telescope capable of directly detecting radio waves emitted from the molecular gas found in galaxies in the early universe. ALMA observations allowed the scientists to determine the amount of molecular gas in each galaxy, and provided the best measurement yet of how much fuel was available to form stars.
The researchers compared the properties of galaxies in these clusters with the properties of “field galaxies” (galaxies found in more typical environments with fewer close neighbors). To their surprise, they discovered that cluster galaxies had higher amounts of molecular gas relative to the amount of stars in the galaxy, compared to field galaxies. The finding puzzled the team because it has long been known that when a galaxy falls into a cluster, interactions with other cluster galaxies and hot gas accelerate the shut off of its star formation relative to that of a similar field galaxy (the process is known as environmental quenching).
“This is definitely an intriguing result,” said Gillian Wilson, a professor of physics and astronomy at UC Riverside and the leader of the SpARCS collaboration. “If cluster galaxies have more fuel available to them, you might expect them to be forming more stars than field galaxies, and yet they are not.”
Noble, a SpARCS collaborator and the study’s leader, suggests several possible explanations: It is possible that something about being in the hot, harsh cluster environment surrounded by many neighboring galaxies perturbs the molecular gas in cluster galaxies such that a smaller fraction of that gas actively forms stars. Alternatively, it is possible that an environmental process, such as increased merging activity in cluster galaxies, results in the observed differences between the cluster and field galaxy populations.
“While the current study does not answer the question of which physical process is primarily responsible for causing the higher amounts of molecular gas, it provides the most accurate measurement yet of how much molecular gas exists in galaxies in clusters in the early universe,” Wilson said.
The SpARCS team has developed new techniques using infrared observations from NASA's Spitzer Space Telescope to identify hundreds of previously undiscovered clusters of galaxies in the early universe. In the future, they plan to study a larger sample of clusters. The team has recently been awarded additional time on ALMA, the W. M. Keck Observatory, and the Hubble Space Telescope to continue investigating how the neighborhood in which a galaxy lives determines for how long it can form stars.
An alphabetical list of definitions cross-referenced to more thorough explanations in the main text or in external documents.
- In computing jargon an argument is one of the pieces of data passed to a procedure. Another name is parameter.
- assign, assigning, assignment
- Assignment is one of the fundamental operations of computing. All it means is copying a value into the memory location pointed at by a variable. The value can be a literal or the value of some other variable, q.v. Assignment
- ByRef
- Declares an argument to a procedure as a pointer to the argument instead of as a copy of the argument's value. This allows the procedure to permanently change the variable. If neither ByRef nor ByVal is stated, the compiler assumes ByRef.
- ByVal
- Declares an argument to a procedure as a copy of the argument's value. The value of the original variable cannot be changed by the procedure. The value of the newly created variable can be changed within the procedure, but this does not affect the variable it was copied from. Programs are more robust if arguments are declared ByVal whenever possible, since this means an argument will not be unexpectedly changed by calling a function. It also results in a faster program (see p. 758 of the Microsoft Visual Basic 6 Programmer's Guide).
- compiler directives
- These are instructions included in the text of the program that affect the way the compiler behaves. For instance it might be directed to include one or another version of a piece of code depending on whether the target operating system is Windows 95 or Windows XP.
- Immediate Window
- This is the window in the IDE which receives output from Debug.Print. See IDE
- Operands and Operators
- An operand is operated on by an operator. Expressions are built of operands and operators. For instance in this expression:
a = b + c
there are three operands (a, b, and c) and two operators (= and +).
- ragged array
- An array whose elements are themselves arrays of differing lengths, for example:
aRagged = Array(Array(1, 2, 3), Array(1, 2))
- Such arrays are inefficient in Visual Basic because they are implemented as Variants, but functionally identical arrays can be created as instances of a class and made much more efficient in both storage space and execution time, although not quite as efficient as C arrays.
- real number
- A variable that can hold a real number is one that can hold a number that can have any value including fractional value. In computer languages variables only approximate real numbers because the irrational numbers are also real, see Real number. In Visual Basic Classic there are two real number types: Single and Double, see Data Types
- reference
- A variable that holds a pointer to a value rather than holding the value itself. In strict Visual Basic usage, only object references work like this.
A kernel is the part of an operating system that performs the lowest-level functions. In standard operating system design, the kernel implements operations such as synchronization, interprocess communication, message passing, and interrupt handling. The kernel is also called a nucleus or core. The notion of designing an operating system around a kernel is described by Lampson and Sturgis [LAM76] and by Popek and Kline [POP78].
A security kernel is responsible for enforcing the security mechanisms of the entire operating system. The security kernel provides the security interfaces among the hardware, operating system, and other parts of the computing system. Typically, the operating system is designed so that the security kernel is contained within the operating system kernel. Security kernels are discussed in detail by Ames [AME83].
There are several good design reasons why security functions may be isolated in a security kernel.
· Coverage. Every access to a protected object must pass through the security kernel. In a system designed in this way, the operating system can use the security kernel to ensure that every access is checked.
· Separation. Isolating security mechanisms both from the rest of the operating system and from the user space makes it easier to protect those mechanisms from penetration by the operating system or the users.
· Unity. All security functions are performed by a single set of code, so it is easier to trace the cause of any problems that arise with these functions.
· Modifiability. Changes to the security mechanisms are easier to make and easier to test.
· Compactness. Because it performs only security functions, the security kernel is likely to be relatively small.
· Verifiability. Being relatively small, the security kernel can be analyzed rigorously. For example, formal methods can be used to ensure that all security situations (such as states and state changes) have been covered by the design.
Notice the similarity between these advantages and the design goals of operating systems that we described earlier. These characteristics also depend in many ways on modularity, as described in Chapter 3.
On the other hand, implementing a security kernel may degrade system performance because the kernel adds yet another layer of interface between user programs and operating system resources. Moreover, the presence of a kernel does not guarantee that it contains all security functions or that it has been implemented correctly. And in some cases a security kernel can be quite large.
How do we balance these positive and negative aspects of using a security kernel? The design and usefulness of a security kernel depend somewhat on the overall approach to the operating system's design. There are many design choices, each of which falls into one of two types: Either the kernel is designed as an addition to the operating system, or it is the basis of the entire operating system. Let us look more closely at each design choice.
The most important part of a security kernel is the reference monitor, the portion that controls accesses to objects [AND72, LAM71]. A reference monitor is not necessarily a single piece of code; rather, it is the collection of access controls for devices, files, memory, interprocess communication, and other kinds of objects. As shown in Figure 5-12, a reference monitor acts like a brick wall around the operating system or trusted software.
A reference monitor must be
tamperproof, that is, impossible to weaken or disable
unbypassable, that is, always invoked when access to any object is required
analyzable, that is, small enough to be subjected to analysis and testing, the completeness of which can be ensured
A reference monitor can control access effectively only if it cannot be modified or circumvented by a rogue process, and it is the single point through which all access requests must pass. Furthermore, the reference monitor must function correctly if it is to fulfill its crucial role in enforcing security. Because the likelihood of correct behavior decreases as the complexity and size of a program increase, the best assurance of correct policy enforcement is to build a small, simple, understandable reference monitor.
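To make the chokepoint idea concrete, here is a minimal sketch in Python (not code from any actual operating system) of a reference-monitor-style check that every access request must pass through; the subjects, objects, rights, and policy table are invented for illustration.

# Minimal sketch of a reference-monitor-style chokepoint (illustrative only).
# The policy table, subjects, objects, and rights below are invented examples.

class AccessDenied(Exception):
    pass

class ReferenceMonitor:
    def __init__(self, policy):
        # policy maps (subject, object) -> set of permitted rights
        self._policy = policy

    def check(self, subject, obj, right):
        """Single chokepoint: every access request must pass through here."""
        if right not in self._policy.get((subject, obj), set()):
            raise AccessDenied(f"{subject} may not {right} {obj}")

policy = {
    ("alice", "payroll.db"): {"read"},
    ("audit_daemon", "audit.log"): {"read", "append"},
}
monitor = ReferenceMonitor(policy)

monitor.check("alice", "payroll.db", "read")       # permitted, returns quietly
try:
    monitor.check("alice", "payroll.db", "write")  # denied
except AccessDenied as denial:
    print(denial)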
The reference monitor is not the only security mechanism of a trusted operating system. Other parts of the security suite include audit, identification, and authentication processing, as well as the setting of enforcement parameters, such as who the allowable subjects are and which objects they are allowed to access. These other security parts interact with the reference monitor, receiving data from the reference monitor or providing it with the data it needs to operate.
The reference monitor concept has been used for many trusted operating systems and also for smaller pieces of trusted software. The validity of this concept is well supported both in research and in practice.
Trusted Computing Base
The trusted computing base, or TCB, is the name we give to everything in the trusted operating system necessary to enforce the security policy. Alternatively, we say that the TCB consists of the parts of the trusted operating system on which we depend for correct enforcement of policy. We can think of the TCB as a coherent whole in the following way. Suppose you divide a trusted operating system into the parts that are in the TCB and those that are not, and you allow the most skillful malicious programmers to write all the non-TCB parts. Since the TCB handles all the security, there is nothing the malicious non-TCB parts can do to impair the correct security policy enforcement of the TCB. This definition gives you a sense that the TCB forms the fortress-like shell that protects whatever in the system needs protection. But the analogy also clarifies the meaning of trusted in trusted operating system: Our trust in the security of the whole system depends on the TCB.
It is easy to see that it is essential for the TCB to be both correct and complete. Thus, to understand how to design a good TCB, we focus on the division between the TCB and non-TCB elements of the operating system and spend our effort on ensuring the correctness of the TCB.
Just what constitutes the TCB? We can answer this question by listing system elements on which security enforcement could depend:
· hardware, including processors, memory, registers, and I/O devices
· some notion of processes, so that we can separate and protect security-critical processes
· primitive files, such as the security access control database and identification/authentication data
· protected memory, so that the reference monitor can be protected against tampering
· some interprocess communication, so that different parts of the TCB can pass data to and activate other parts. For example, the reference monitor can invoke and pass data securely to the audit routine.
It may seem as if this list encompasses most of the operating system, but in fact the TCB is only a small subset. For example, although the TCB requires access to files of enforcement data, it does not need an entire file structure of hierarchical directories, virtual devices, indexed files, and multidevice files. Thus, it might contain a primitive file manager to handle only the small, simple files needed for the TCB. The more complex file manager to provide externally visible files could be outside the TCB. Figure 5-13 shows a typical division into TCB and non-TCB sections.
The TCB, which must maintain the secrecy and integrity of each domain, monitors four basic interactions.
Process activation. In a multiprogramming environment, activation and deactivation of processes occur frequently. Changing from one process to another requires a complete change of registers, relocation maps, file access lists, process status information, and other pointers, much of which is security-sensitive information.
Execution domain switching. Processes running in one domain often invoke processes in other domains to obtain more sensitive data or services.
Memory protection. Because each domain includes code and data stored in memory, the TCB must monitor memory references to ensure secrecy and integrity for each domain.
I/O operation. In some systems, software is involved with each character transferred in an I/O operation. This software connects a user program in the outermost domain to an I/O device in the innermost (hardware) domain. Thus, I/O operations can cross all domains.
The division of the operating system into TCB and non-TCB aspects is convenient for designers and developers because it means that all security-relevant code is located in one (logical) part. But the distinction is more than just logical. To ensure that the security enforcement cannot be affected by non-TCB code, TCB code must run in some protected state that distinguishes it. Thus, the structuring into TCB and non-TCB must be done consciously. However, once this structuring has been done, code outside the TCB can be changed at will, without affecting the TCB's ability to enforce security. This ability to change helps developers because it means that major sections of the operating system (utilities, device drivers, user interface managers, and the like) can be revised or replaced any time; only the TCB code must be controlled more carefully. Finally, for anyone evaluating the security of a trusted operating system, a division into TCB and non-TCB simplifies evaluation substantially because non-TCB code need not be considered.
Security-related activities are likely to be performed in different places. Security is potentially related to every memory access, every I/O operation, every file or program access, every initiation or termination of a user, and every interprocess communication. In modular operating systems, these separate activities can be handled in independent modules. Each of these separate modules, then, has both security-related and other functions.
Collecting all security functions into the TCB may destroy the modularity of an existing operating system. A unified TCB may also be too large to be analyzed easily. Nevertheless, a designer may decide to separate the security functions of an existing operating system, creating a security kernel. This form of kernel is depicted in Figure 5-14.
A more sensible approach is to design the security kernel first and then design the operating system around it. This technique was used by Honeywell in the design of a prototype for its secure operating system, Scomp. That system contained only twenty modules to perform the primitive security functions, and it consisted of fewer than 1,000 lines of higher-level-language source code. Once the actual security kernel of Scomp was built, its functions grew to contain approximately 10,000 lines of code.
In a security-based design, the security kernel forms an interface layer, just atop system hardware. The security kernel monitors all operating system hardware accesses and performs all protection functions. The security kernel, which relies on support from hardware, allows the operating system itself to handle most functions not related to security. In this way, the security kernel can be small and efficient. As a byproduct of this partitioning, computing systems have at least three execution domains: security kernel, operating system, and user. See Figure 5-15.
This page features a variety of free printable division worksheets for home and school use. Our word problem worksheets review skills in real world scenarios.
Printable division worksheets grade 3. Free third grade math worksheets for practicing multiplication and division. Our grade 3 math worksheets are free and printable in pdf format. Our 3rd grade division worksheets include i) simple division worksheets to help kids with their division facts and mental division skills and ii) an introduction into long division including simple division with remainder questions.practice dividing by tens and hundreds is also emphasized.
All worksheets are printable pdf files. Understand division as the inverse of multiplication using our picture division exercises. Our third grade math worksheets continue earlier numeracy concepts and introduce division, decimals, roman numerals, calendars and new concepts in measurement and geometry.
Our team is working on a new methodology for preparing engaging , colorful worksheets. You can also customize them using the generator below. The teachers only recommend them for students in grade 3, 4 and 5.
Grade 3 worksheets are free for download. The worksheets on this page have been designed to support your child on their division journey from the start of 3rd grade to the end. Choose your grade 3 topic to help the third grade student with basic skill that they need in grade 3.
The first two sheets involve drawing out different amounts in groups and solving simple problems which do not require any reasoning skills. Nothing from this site may be stored on google drive or any other online file storage system. Printable division worksheets grade 3 was created by combining each of gallery on letter worksheets, letter worksheets is match and guidelines that suggested for you, for enthusiasm about you search.
Quick links to the topics listed below for free printable division worksheets : Division skills are key to becoming a math pro, and these third grade division worksheets will help your students build math confidence while having a blast! Topics include division facts, mental division, long division, division with remainders, order of operations, equations, and factoring.
Free pdf worksheets from k5 learning's online reading and math program. Included in our free printable 3rd grade worksheets, we've got lots of fun, creative educational activities for you! Choose your grade 3 topic:
Addition , subtraction , multiplication and division problems are given. Addition, subtraction, multiplication, and english. Whether your students are learning these concepts for the first time or reinforcing past lessons, they will love exploring division through board games, word problems, fractions activities.
Place value, spelling, addition, subtraction, division, multiplication, fractions, graphing, measurement, mixed operations, geometry, area and perimeter, and time. Free worksheets for grade 3: Some of the worksheets displayed are annual national assessment 2015 grade 3 english home, trinity gese grade 3 work 1, grade 3 parts of speech work, words and their meanings, english home language work, simple and complex sentences work, macmillan english 3 unit 18 work student name total mark, grade 3 assessment in reading.
The topics that the division made easy worksheets include mental division, division with remainders, psychological division, division facts, equations, long division, the order of operation and even factoring. Based on the singaporean math curriculum grade level 3, these worksheets are made for students in third grade level and cover math topics such as: Our division worksheets are free to download, easy to use, and very flexible.
This images was posted by admin on january 30, 2020. These division worksheets are a great resource for children in kindergarten, 1st grade, 2nd grade, 3rd grade, 4th grade, and 5th grade. Worksheets > math > grade 3 > division.
Included here are division times tables and charts, various division models, division facts, divisibility rules, timed division drills, worksheets with grid assistance, basic. These workbooks are ideal for both children and adults to make use of. You'll find a variety of fun third grade worksheets to print and use at home or in the classroom.
Welcome to the division worksheet page at tlsbooks.com. Whether your students are learning these concepts for the first time or reinforcing past lessons, they will love exploring division through board games, word problems, fractions activities. You may use these worksheets to help your child or students learn and build strong division skills.
Worksheets > math > grade 3. No worksheet or portion thereof is to be hosted on, uploaded to, or stored on any other web site, blog, forum, file sharing, computer, file storage device, etc. Division tables, long division without remainder, long division with remainder, horizontal number division, division activities
Division skills are key to becoming a math pro, and these third grade division worksheets will help your students build math confidence while having a blast! Use our times table as your reference. Choose your grade 3 topic to help the third grade student with basic skill that they need in grade 3.
Free grade 3 math worksheets. The worksheets can be made in html or pdf format — both are easy to print. Printable division worksheets grade 3 9389 in letter worksheets.
Astronomers long have thought that a supermassive black hole and the bulge of stars at the center of its host galaxy grow at the same rate -- the bigger the bulge, the bigger the black hole. A new study of Chandra data has revealed two nearby galaxies whose supermassive black holes are growing faster than the galaxies themselves.
Credit: X-ray: NASA/CXC/SAO/A.Bogdan et al; Infrared: 2MASS/UMass/IPAC-Caltech/NASA/NSF
The mass of a giant black hole at the center of a galaxy is typically a tiny fraction (about 0.2 percent) of the mass contained in the bulge, the region of densely packed stars surrounding it. The targets of the latest Chandra study, galaxies NGC 4342 and NGC 4291, have black holes that are 10 to 35 times more massive than they should be relative to their bulges. The new observations with Chandra show that the halos, the massive envelopes of dark matter in which these galaxies reside, are also overweight.
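As a quick illustration of what "overweight" means here, the sketch below applies the typical 0.2 percent bulge-to-black-hole ratio to a hypothetical bulge mass and then scales the result by the 10-to-35-times factor quoted in the study; the bulge mass is an invented example.

# Minimal sketch: expected vs. "overweight" black hole mass (hypothetical bulge mass).
bulge_mass_msun = 1e10       # hypothetical stellar bulge mass, in solar masses
typical_fraction = 0.002     # about 0.2 percent of the bulge mass

expected_bh = typical_fraction * bulge_mass_msun           # ~2e7 solar masses
overweight_range = (10 * expected_bh, 35 * expected_bh)    # 10x to 35x heavier

print(f"Expected black hole mass: {expected_bh:.1e} solar masses")
print(f"Overweight range: {overweight_range[0]:.1e} to {overweight_range[1]:.1e} solar masses")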
The new study suggests the two supermassive black holes and their evolution are tied to their dark matter halos and they did not grow in tandem with the galactic bulges. In this view, the black holes and dark matter halos are not overweight, but the total mass in the galaxies is too low.
"This gives us more evidence of a link between two of the most mysterious and darkest phenomena in astrophysics -- black holes and dark matter -- in these galaxies," said Akos Bogdan of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Mass, who led the new study.
NGC 4342 and NGC 4291 are close to Earth in cosmic terms, at distances of 75 million and 85 million light years, respectively. Astronomers had known from previous observations that these galaxies host black holes with relatively large masses, but astronomers are not certain what is responsible for the disparity. Based on the new Chandra observations, however, they are able to rule out a phenomenon known as tidal stripping.
Tidal stripping occurs when some of a galaxy's stars are stripped away by gravity during a close encounter with another galaxy. If such tidal stripping had taken place, the halos also mostly would have been missing. Because dark matter extends farther away from the galaxies, it is more loosely tied to them than the stars and is more likely to be pulled away.
To rule out tidal stripping, astronomers used Chandra to look for evidence of hot, X-ray emitting gas around the two galaxies. Because the pressure of hot gas (estimated from X-ray images) balances the gravitational pull of all the matter in the galaxy, the new Chandra data can provide information about the dark matter halos. The hot gas was found to be widely distributed around both NGC 4342 and NGC 4291, implying that each galaxy has an unusually massive dark matter halo, and therefore that tidal stripping is unlikely.
"This is the clearest evidence we have, in the nearby universe, for black holes growing faster than their host galaxy," said co-author Bill Forman, also of CfA. "It's not that the galaxies have been compromised by close encounters, but instead they had some sort of arrested development."
How can the mass of a black hole grow faster than the stellar mass of its host galaxy? The study's authors suggest that a large concentration of gas spinning slowly in the galactic center is what the black hole consumes very early in its history. It grows quickly, and as it grows, the amount of gas it can accrete, or swallow, increases along with the energy output from the accretion. Once the black hole reaches a critical mass, outbursts powered by the continued consumption of gas prevent cooling and limits the production of new stars.
"It's possible that the supermassive black hole reached a hefty size before there were many stars at all in the galaxy," said Bogdan. "That is a significant change in our way of thinking about how galaxies and black holes evolve together."
These results were presented June 11 at the 220th meeting of the American Astronomical Society in Anchorage, Alaska. The study also has been accepted for publication in The Astrophysical Journal.
NASA's Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Mass., controls Chandra's science and flight operations. For Chandra images, multimedia and related materials, visit:
Megan Watzke | EurekAlert!
Stoichiometry Problems Worksheet Answers
Worksheet for basics: mole-mass conversions. Convert the following numbers of moles of each chemical into the corresponding mass in grams: ___ moles of ammonium chloride, ___ moles of lead(II) oxide, ___ moles of aluminum iodide. Worksheet answers: given a balanced combustion equation of the form fuel + O2 → CO2 + H2O, show what the following molar ratios should be:
a. fuel : O2, b. O2 : CO2, c. O2 : H2O, d. fuel : CO2, e. fuel : H2O. A further question asks how many moles of product can be produced by letting a stated number of moles of reactant react. Also collected below are worksheets with answer keys: definitions of stoichiometry with interesting examples and exercises, with step-by-step solutions and several colorful illustrations and diagrams.
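To show how such molar ratios are read off a balanced equation, here is a sketch using the combustion of propane, C3H8 + 5 O2 -> 3 CO2 + 4 H2O, as an assumed example; the worksheet's own equation is not recoverable from the text above.

# Minimal sketch: molar ratios from an assumed balanced equation,
# C3H8 + 5 O2 -> 3 CO2 + 4 H2O (propane combustion, used here only as an example).
coefficients = {"C3H8": 1, "O2": 5, "CO2": 3, "H2O": 4}

def mole_ratio(species_a, species_b):
    """Return the mole ratio a : b implied by the balanced equation."""
    return coefficients[species_a], coefficients[species_b]

pairs = [("C3H8", "O2"), ("O2", "CO2"), ("O2", "H2O"), ("C3H8", "CO2"), ("C3H8", "H2O")]
for a, b in pairs:
    x, y = mole_ratio(a, b)
    print(f"{a} : {b} = {x} : {y}")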
List of Stoichiometry Problems Worksheet Answers
Basic instructions.Solution worksheet solve the following solutions problems. how many grams of silver will precipitate when. of. m silver nitrate are added to. of. m potassium k s . l. moles moles . Worksheet problems with a twist name chemistry a study of matter,.
a key. when mercury ii oxide is heated, it decomposes into mercury and oxygen gas Answers to mole to mass problems. hydrogen gas can be produced through the following reaction. g how many grams of are consumed by the reaction of. moles of magnesium.
1. 9 Stoichiometry Ideas Apologia Chemistry Science
Some of the worksheets for this concept are, practice work, mole calculation work, chemistry computing formula mass work, mole ratios answers key, mole calculation work,, moles answers key questions exercises.Dec, practice problems answers. practice problems writing and classifying equations answers.
from the chem team worksheet of mass mole conversions. worksheet of problems from the on density, mass percent, and. these problems have the answers worked out in detail.Chm worksheet the following flow chart may help you work problems. remember to pay careful attention to what you are given, and what you are trying to find.
2. Simple Activity Mini Lab Full Teacher Demonstration Students Chemistry Labs Stoichiometry Lessons
3. Mole Gram Stoichiometry Mass Detailed Examples Problems Literal Equations Chemistry Classroom Simplifying Rational Expressions
Stoichiometry is the study of the quantities of reactants and products in a chemical reaction. The word comes from Greek words meaning element and measure. You will sometimes see stoichiometry covered under another name, mass relations.
4. Mole Practice Worksheet 2 Practices Worksheets Super Teacher
Student exploration moles gizmo answer key. read online now student exploration disease spread gizmo answer key at our library. acquired trait, asexual reproduction, clone some of the worksheets for this concept are student exploration advanced circuits gizmo answers work, answer key to circuits gizmo, answer.
5. Mole Practice Worksheet 3 Moles Molecules Mass Conversions Chemistry Worksheets Scientific Notation Word Problems Math
6. Mole Relationships Scientific Notation Word Problems Simplifying Rational Expressions Chemistry Worksheets
7. Organic Chemistry Study Notes
Problems worksheet. solving quadratic equations worksheet. practice worksheet. central angles and inscribed angles worksheet answer key. free worksheet. exponential growth and decay word problems worksheet. structure worksheet. replication worksheet answer key.
8. Percent Yield Stoichiometry Notes Practice Problems Worksheet Scientific Notation Word Problem Worksheets Persuasive Writing Prompts
9. Periodic Table Sample Chemistry Worksheets Science Notes
10. Physical Chemical Change Work Sheet Answers Properties
11. Practice Worksheet Practices Worksheets Template
12. Professionally Designed Worksheets
13. Stoichiometry Chemistry Worksheets Notes College
14. Molar Mass Worksheet Chemistry Worksheets Education
The questions involve the definitions, differences, units of measure, measuring tools, and affect on mass and weight.this mass and weight worksheet can also be purchased at a discount as par.I show the students the following video of the mass vs. weight song.
15. Stoichiometry Ideas Chemistry Teaching Class
16. Stoichiometry Limiting Reactant Worksheets Set 2 Teaching Chemistry Classroom Interactive Science Notebook
17. Stoichiometry Maze Worksheet Review Science South Teachers Pay Worksheets Assessment
18. Stoichiometry Problems Answers 4 Chemistry Worksheets College
19. Stoichiometry Quiz Persuasive Writing Prompts Memorize Algebra Worksheets
20. Stoichiometry Worksheet Answer Key Answers Chemistry Worksheets Mole Conversion Molar Mass
21. Stoichiometry Worksheet Answer Key Images Chemistry Notes Worksheets
22. Stoichiometry Worksheets Online Tweaked Electronic Keys Teaching
23. Thanksgiving Stoichiometry Practice Worksheet Practices Worksheets Chemistry
24. Ting Chemical Formulas Worksheet Answer Key Worksheets Formula Chemistry Equation Balancing Equations
25. Worksheet Answer Key Ideas Worksheets Answers Keys
26. Mole Conversion Problems Chemistry Map Worksheet 2 Worksheets Simplifying Rational Expressions Scientific Notation Word
Posted in worksheet,, by scientific notation practice worksheets with answers, some of the worksheets below are scientific notation practice worksheets with answers, converting from decimal form into scientific notation, adding, subtracting, dividing and multiplying scientific notation exercises, several fun problems with solutions.
27. Ma Stoichiometry Simple Breaking Steps Set 3 Paper Saving Worksheets Answer Physical Science Education Blog Interactive Notebooks
Name chemistry. last first. how many grams of calcium phosphate can be produced from the reaction of. l of. m calcium chloride with and excess of phosphoric acid worksheet name solution worksheet.Chemistry mixed word problems answers solved worksheet gen chem ion.
28. Answer Key Periodic Table Scavenger Hunt Worksheet Related Chemistry Worksheets Science Notes
Com. Start studying periodic table worksheet answers. learn vocabulary, terms, and more with flashcards, games, and other study tools. The periodic table). period a horizontal row (left to right) in the periodic table. group a vertical column (up and down) on the periodic table.
reactivity describes how likely an element is to form bonds with other elements. Periodic table puns worksheet answers. puns worksheet teachers pay teachers, directions pun element symbol, periodic table worksheet answer key, chemistry puns and jokes any science nerd will love, periodic table puns flashcards, chemistry jokes puns and riddles.
29. Chemistry Standards
Grams to moles displaying top worksheets found for this concept. some of the worksheets for this concept are work and key, gram to work answers, work on moles and, practice work, mole problems work answers, mole to grams grams to moles conversions work, calculations work i, work Problems worksheet answers mass problems worksheet answers if you ally obsession such a referred mass problems worksheet answers book that will come up with the money for you worth, get the completely best seller from us currently from several preferred authors.
if This is unlike regular solids where we only had to account for the mass of the solids and solve for the mass of the product by. in order to solve for the temperature, pressure, or volume of a gas using chemical reactions, we often need to have information on two out of three of these variables.
30. Atomic Structure Practice Worksheet Answers Point Grey Secondary School Chemistry Worksheets Dimensional Analysis
31. Balancing Chemical Equations Worksheets Answers Chemistry Lessons Equation
Molecular mass determination from ideal gas law.Oct, problems worksheet with answers. problems worksheet written by admin, , edit. ml of m potassium. so h o l so how many grams of sodium sulfate will be formed if you start with grams of sodium hydroxide and you have an excess of sulfuric acid.
Practice worksheet solve the following problems using the following equation h h how many grams of sodium sulfate will be formed if you start with. grams of sodium hydroxide and you have an excess of sulfuric acid using the following, practice sheet preparation for test problems from chemistry fourth edition extra practice problems worksheets.
32. Balancing Chemical Equations Worksheets Lessons Education
Normal community high school was established in. our continued mission is to establish a community of learners, pursuing excellence every day. as a community, we work together and support each other. iron sharpens iron.Chemistry scavenger hunt internet lesson using the sites listed on the chemistry page of the kid zone.
33. Calculating Molar Masses Teaching Chemistry Lessons Science
34. Calculating Ty Worksheet Grade Favourite Answers Calculate Dimensional Analysis Word Problem Worksheets
35. Chemical Equations Stoichiometry Ideas Equation Chemistry Teaching
Balance the following chemical reactions hint a. co o co b. o c. o o d. no n o h o e. ch o co h o n hint f. h o write the balanced chemical equations of each, solution worksheet solve the following solutions problems. ml of m potassium. is one of the most important topics on the chemistry exam so it s vital that you understand it and all of its applications.
worksheet of mass mole conversions answers to worksheet of mass mole conversions.Answer answers to worksheet.
37. Chemistry Lab Mole Concept Labs Classroom Teaching
38. Chemistry Notes Chemical Equations Mole Stoichiometry Equation
Aqueous reactions and solution precipitation reactions reactions that result in the formation of an insoluble product are known as precipitation reactions. electron configuration worksheet answers part a worksheets for electron This chemistry video tutorial shows you how to identify the limiting reagent and excess reactant.
39. Chemistry Real Life Stoichiometry Problems Key Teaching Lessons Classroom
This resource includes a full answer key. this is part of a larger worksheet bundle that includes sets. each set is Displaying top worksheets found for mole volume. some of the worksheets for this concept are, work and key, problems work answers, chm work, calculation practice work,, gas law work answers, mixed work answers.
Nov, please find attached the used for the discussion of. please feel free to use them to review the material or to get the notes if you missed a day or two. i will also post answers to some class activities here as well for those of you that did not get these answers in class.
40. Classroom Freebies Chemistry Mole Stoichiometry Practice Problem Worksheet Labs Teaching
Parents letter. on , , was notified of confirmed case of coronavirus. we are happy to say that this is the. Answers. g of zinc and. l of h. l. l. l. m. title with solutions problems author keywords solutions, Review answers. a. b. x. g. g ca x. g. g p x.
g. g n x. g. g o x. g. g o x. g. g. g. g c. capo d.Title answers to problems author last modified by created date am company st academy other problems answer key d. no n o h o. f. oh h o. a. calcium carbide reacts with water to form calcium hydroxide ca oh and acetylene gas c h.
41. Introduction Reactions Worksheet Answers Calculations Word Problem Worksheets Chemistry
42. Common Chemical Formula Chemistry Worksheets Teaching Classroom
43. Electrons Flame Tests Physical Science High School Chemistry Lessons
44. Empirical Molecular Formula Notes Chemistry Worksheets Teaching
45. Engage Students Stoichiometry Practice Mazes Provide Space Work Student Engagement Chemistry Teaching
The topics in this section are as follows section. equation worksheet answers and write exponential equations two points section provides answers to the following challenges the topics covered problem set skills worksheet sample problem set teacher notes and answers.
so. a. g. g c. o. g h. h so k so o. g h so. h b. c. kg . a. c h o b.Chemistry problem sheet key . x i i. x cl g cl cl x g cl g Sep, solution worksheet solve the following solutions problems. grams of so.
46. Find Balancing Equation Daunting Task Download Equations Chemistry
Using the following balance equation. more exciting problems. practice worksheet balancing equations and simple balance the following equations. using and.Answers limiting reagent worksheet. balanced equation c o co h a o b. co c. g h d. g c so b. c. g d.
g. balanced equation a b. c. g d. g Get free mixed mole problems answers chemistry worksheet more mole problems name mixed problems key what volume of is produced if. of reacted with an excess of . l. l n The results for worksheet answers.
47. Fun Chemistry Activity Intro Stoichiometry Activities Worksheets
Structure worksheet. worksheet answers. free worksheet. worksheet answers. function worksheet. density practice problem worksheet answers. practice worksheet. replication worksheet answers. function worksheet. mitosis worksheet answers. problems worksheet.
Chm worksheet the following flow chart may help you work problems. remember to pay careful attention to what you are given, and what you are trying to find. fermentation is a complex chemical process of making wine by converting glucose into ethanol and carbon dioxide l cog a.
48. Genetics Worksheet Answers Word Problem Worksheets Problems Template
49. Good High School Chem Teacher Assess Students Mole Concept Chemistry Questions
50. Gram Formula Mass Worksheet Teacher Worksheets Math Facts Addition Free Kids
51. Heat Calculations Phase Change Specific Earth Science Lessons Scientific Notation Word Problems Chemistry Worksheets
Calculations using significant figures. scientific notation worksheet name. scientific notation is a smart way of writing huge whole numbers and too small to dealing with scientific notation worksheet answers, make sure you recognize that schooling is our own factor to an improved down the road, as well as learning does not only cease right after the institution bell rings.
52. Ideas Chemistry Classroom Teaching Science
The solution below uses the information given in the original problem solution the h h o ratio of could have been used also. in that case, the ratio from the problem would have been. over x, since you were now using the water data and not the oxygen data.
Worksheet answer key. how many moles of hydrogen gas would be produced from the use of. moles of aluminum with an excess of hydrogen chloride ans h. x molecules h o h o worksheet continued. hematite,, is an important ore of iron. httpsstudylib.
53. Worksheet Features Step Stoichiometry Mole Conversion Problems Designed Study Tips College Chemistry Class
What is the mass in grams of gas when. moles of is added to the reaction.g.Answer key. problems. how many moles of hydrogen are needed to completely react with. moles of nitrogen. moles of hydrogen. how many moles of oxygen are produced by the decomposition of.
moles of potassium. moles of oxygen. Worksheet. s hf l a. how many moles of are needed to react with. of b. how many grams of form when. of reacts with excess c. how many grams of There are four steps in solving a problem write the balanced chemical equation. |
The moon’s magnificent desolation is far wetter than scientists imagined. A NASA spacecraft sent to study lunar dust and atmosphere also picked up signs of water being released from the moon as meteors collide with its surface. This unprecedented detection, reported today in the journal Nature Geoscience, shows that tiny impacts release up to 200 tonnes of water a year—much more than should be on the surface based on previously known delivery systems.
“There was so much that the instrument on the spacecraft acted like a sponge, soaking up the water that was moving through the atmosphere,” says study leader Mehdi Benna, a planetary scientist at NASA’s Goddard Space Flight Center. “When we turned the instrument on, what we found was extremely exciting.”
The discovery offers fresh clues to our understanding of how the moon formed in the first place, and it provides tantalising targets for future human missions, which could one day use the moon’s watery bounty for both hydration and propulsion.
“We always think of the moon as a very peaceful and desolate place,” Benna says. “And now with this data, we see that the moon is actually very active and responsive.”
Hail of meteors
We’ve long known that there is some amount of water on the moon, most of it locked up as ice in permanently shadowed craters or hidden deep below the surface. Water can be delivered to the moon in two ways. Hydrogen from the solar wind can mix with oxygen on the surface and make a chemical relative called hydroxyl, which in turn interacts with lunar rocks to create hydrated minerals. Comets and asteroids can also deposit water on the moon when they slam into it.
But the new data, collected by a retired NASA spacecraft called LADEE, revealed something unexpected. While LADEE was in orbit around the moon, it witnessed meteor showers, the same way we do here on Earth. At certain times of the year, our planet crosses the orbits of comets, some of which are strewn with debris. Most of these cometary leftovers burn up in our atmosphere, sparking the annual sky shows we call the Geminids, the Perseids, the Leonids, and more. On the airless moon, though, these meteor showers bombard the surface.
“Every stream is millions of particles, like a rain of small impactors,” Benna says. “We saw 29 known streams of meteors, and each stream is related to a comet.”
As these little particles collided with the surface, they kicked up the top layer of fine soil, or regolith, revealing much more water than the team expected to find below the first few centimeters.
“This loss of water can’t be compensated for by the solar wind hydrogen implantation or by the water that comes with micrometeorites themselves,” Benna says. “So there must be more water in the soil of the moon that can’t be replenished by those two known sources. The only way to explain that is to have an ancient reservoir of water that's been basically depleted over geological time.”
Benna and his team estimate that the moon has a fairly even amount of water just a few centimeters below the surface. This means the moon holds more water than could have been delivered to it over its lifetime by solar wind or comets, which speaks to a problem planetary scientists have been trying to solve for decades.
A fresh 12-metre impact crater and its dark ejecta are seen on the lunar surface in an image taken by the Lunar Reconnaissance Orbiter.
PHOTOGRAPH BY NASA/GSFC/ARIZONA STATE UNIVERSITY
During the early days of our solar system’s formation, giant masses of young planets crashed into each other, flinging debris out into space. All the material that created Earth and the moon swirled around each other in a cosmic ballet. As a result, the moon and Earth share some history, but it’s been hard to explain why the moon seemed to have so little water in relation to Earth’s reserves. While the exact connections are unsure, the amount of water could be linked to the moon’s early volcanic history or the exchange of material between the moon and Earth in the earliest days of the solar system.
“This is an important paper because it's measuring the release of water in the present day,” says Carle Pieters, a planetary scientist at Brown University who was not involved with the study. “They have started the discussion about asking, Well, what happens here? Is the water young? Is it old? Is it related to a surface process or is it an ancient reservoir? They're the right questions to ask.”
The team's data can now inform scientists working on theories for the moon's origin story and how it might have obtained so much water. In addition, as NASA prepares to send humans back to the moon, whole missions will be dedicated to mapping lunar water and figuring out how the moon may supply future crews with the resources they need to survive.
“This is so exciting because they are catching all of this in progress—watching the water move in the exosphere before it either lands back on the surface or is lost to space,” Pieters says. “This is a really important piece of the story.” |
Chapter 2: MOTION AND SPEED

Section 1: DESCRIBING MOTION

Motion occurs when an object changes its position. To know whether the position of something has changed, you need a reference point. A reference point helps you determine how far an object has moved. An important part of describing the motion of an object is to describe how far it has moved, which is distance. The SI unit of length or distance is the meter (m); 1 meter = 100 centimeters. Sometimes you may want to know not only your distance, but also your direction from a reference point. Displacement is the distance and direction of an object's change in position from a reference point (DISTANCE VS. DISPLACEMENT).

What is speed? Speed is the distance an object travels per unit of time. Any change over time is called a rate. Speed is the rate at which distance is traveled.

CALCULATING SPEED
Speed = distance / time. If s = speed, d = distance, and t = time, this relationship can be written as s = d/t. Suppose you ran 2 km in 10 minutes. Your speed, or rate of change of position, would be:
s = d/t = 2 km / 10 min = 0.2 km/min
CONSTANT SPEED
If an object is in motion and neither slows down nor speeds up, the object is traveling at a constant speed. (Ex. a car traveling on a freeway using CRUISE CONTROL)

CHANGING SPEED
Much of the time, the speeds you experience are not constant. (Ex. riding a bicycle for 5 km)

AVERAGE SPEED
AVERAGE SPEED describes the speed of motion when speed is changing: it is the total distance traveled divided by the total time of travel. For the bicycle trip, the total distance traveled was 5 km and the total time was 15 min, or 0.25 h. The AVERAGE SPEED was:
s = d/t = 5 km / 0.25 h = 20 km/h
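To make the arithmetic concrete, here is a minimal sketch (in Python, added for illustration and not part of the original notes) that applies s = d/t to the two examples above; units simply follow the inputs.

```python
def speed(distance, time):
    """Return speed as distance divided by time (units follow the inputs)."""
    return distance / time

# Running example: 2 km in 10 minutes
print(speed(2, 10))      # 0.2 km/min

# Bicycle trip: 5 km total distance in 0.25 h total time
print(speed(5, 0.25))    # 20.0 km/h
```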
INSTANTANEOUS SPEED
INSTANTANEOUS SPEED is the speed at a given point in time. (Ex. a CAR'S SPEEDOMETER)

VELOCITY
VELOCITY includes the speed of an object and the direction of its motion. Ex. a HURRICANE traveling at a speed of 60 km/h, located 100 km east of your location.
VELOCITY IS SPEED WITH DIRECTION!
SPEED same, DIRECTION different (VELOCITY = DIFFERENT)
SPEED constant, DIRECTION changing (VELOCITY = CHANGING)

SPEED UNITS
REMEMBER: VELOCITY includes the speed and direction of an object; therefore, a change in velocity can be either a change in how fast something is moving or a change in the direction it is moving.
Chapter 2: MOTION AND SPEED

Section 2: ACCELERATION

ACCELERATION is a change in velocity. Acceleration occurs when an object changes its speed, its direction, or both. When you think of acceleration, you probably think of something speeding up (positive acceleration); however, an object that is slowing down is also accelerating (negative acceleration). In both cases, acceleration occurs because its speed is changing.

CALCULATING ACCELERATION
Remember: acceleration is the rate of change in velocity. The change in velocity or speed is divided by the length of the time interval over which the change occurred:
Acceleration = change in velocity / time
How is the change in velocity calculated? Always subtract the initial velocity (the velocity at the beginning of the time interval) from the final velocity (the velocity at the end of the time interval):
Change in velocity = final velocity − initial velocity = vf − vi
a = (vf − vi) / t

UNITS
The SI unit for velocity is meters/second (m/s), and the SI unit for time is seconds (s). So, the unit for acceleration is meters/second/second. This unit is written as m/s² and is read "meters per second squared."
CALCULATING POSITIVE ACCELERATION
Suppose a jet airliner starts at rest at the end of a runway and reaches a speed of 80 m/s in 20 s. Because it started from rest, its initial speed was zero. Its acceleration can be calculated as follows:
a = (vf − vi) / t = (80 m/s − 0 m/s) / 20 s = 4 m/s²

CALCULATING NEGATIVE ACCELERATION
Now imagine a skateboarder is moving at a speed of 3 m/s and comes to a stop in 2 s. The final speed is zero and the initial speed was 3 m/s. The skateboarder's acceleration is calculated as follows:
a = (vf − vi) / t = (0 m/s − 3 m/s) / 2 s = −1.5 m/s²
ACCELERATION will always be positive if an object is speeding up, and will always be negative if an object is slowing down.
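The two worked examples can be checked with a short sketch (again Python, added for illustration; the function simply implements a = (vf − vi)/t):

```python
def acceleration(v_final, v_initial, time):
    """Return acceleration as the change in velocity divided by the time interval."""
    return (v_final - v_initial) / time

# Jet airliner: rest to 80 m/s in 20 s
print(acceleration(80, 0, 20))   # 4.0 m/s^2 (positive: speeding up)

# Skateboarder: 3 m/s to rest in 2 s
print(acceleration(0, 3, 2))     # -1.5 m/s^2 (negative: slowing down)
```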
Chapter 2: MOTION AND SPEED

Section 3: MOTION AND FORCES

What is a force? A force is a push or a pull that one body exerts on another. A force can cause the motion of an object to change.

OBVIOUS VS. NOT SO OBVIOUS
Some forces are obvious: the force applied to a soccer ball as it is kicked into the goal. Some forces are not so obvious: the force of the floor being exerted on your feet, or gravity pulling down on your body.
BALANCED FORCES
When two or more forces act on an object at the same time, the forces combine to form the net force. What is the net force acting on this box? The net force on the box is zero, because the two forces cancel each other. Forces on an object that are equal in size and opposite in direction are called balanced forces.

UNBALANCED FORCES
When two students are pushing with unequal forces in opposite directions, a net force occurs in the direction of the larger force. If the students are pushing on the box in the same direction, the net force is found by adding the two forces together.

IT IS IMPORTANT TO REMEMBER
Students often assume that NO MOTION = NO FORCE (not true); an object's lack of motion means that the forces acting on it are balanced.
NO MOTION = BALANCED FORCES
MOTION = UNBALANCED FORCES
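A tiny sketch of the net-force idea (illustrative only; the force values below are assumptions, not numbers from the slides). Forces along one line add with their signs, and the object is balanced when the sum is zero:

```python
def net_force(forces):
    """Sum one-dimensional forces; positive and negative signs give direction."""
    return sum(forces)

# Equal, opposite pushes: balanced, net force is zero, motion does not change
print(net_force([25, -25]))   # 0

# Unequal, opposite pushes (assumed 30 N and 20 N): net force points with the larger push
print(net_force([30, -20]))   # 10

# Two pushes in the same direction simply add
print(net_force([30, 20]))    # 50
```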
What is inertia? Inertia is the tendency of an object to resist any change in motion. (NEWTON'S 1st LAW: The Law of Inertia)
QUESTION: Would a bowling ball or a table-tennis ball have a greater inertia? Why? Remember: mass is the amount of matter in an object, and a bowling ball has more mass than a table-tennis ball. The INERTIA of an object is related to its MASS. The greater the mass of an object, the greater its inertia. GREATER MASS = GREATER INERTIA
British scientist Sir Isaac Newton (1642-1727) was able to describe the effects of forces on the motion of objects. These rules are known as Newton's Laws of Motion. According to Newton's first law of motion, an object moving at a constant velocity keeps moving at that velocity unless a net force acts on it (Part I: a car on cruise control). Also, if an object is at rest, it stays at rest unless a net force acts on it (Part II: a soccer ball).
SHORT VERSION of Newton's 1st Law: An object will resist any change in motion.

What happens in a car crash? This can be explained by the law of inertia. When a car traveling about 50 km/h collides head-on with something solid, the car crumples, slows down, and stops within approximately 0.1 s. A passenger without a seatbelt will continue to move forward at the same speed that the car was traveling. Within 0.02 s after the car stops, unbelted passengers slam into the steering wheel, dashboard, etc. They are still traveling at the car's original speed of about 50 km/h.
A newly published study examines the movement of small galaxies throughout the universe, finding that they ‘dance’ in orderly disc-shaped orbits around larger galaxies.
The discovery that many small galaxies throughout the universe do not ‘swarm’ around larger ones like bees do but ‘dance’ in orderly disc-shaped orbits is a challenge to our understanding of how the universe formed and evolved.
The finding, by an international team of astronomers, including Professor Geraint Lewis from the University of Sydney’s School of Physics, is announced in the journal Nature.
“Early in 2013 we announced our startling discovery that half of the dwarf galaxies surrounding the Andromeda Galaxy are orbiting it in an immense plane” said Professor Lewis. “This plane is more than a million light years in diameter, but is very thin, with a width of only 300,000 light years.”
The universe contains billions of galaxies. Some, such as the Milky Way, are immense, containing hundreds of billions of stars. Most galaxies, however, are dwarfs, much smaller and with only a few billion stars.
For decades astronomers have used computer models to predict how these dwarf galaxies should orbit large galaxies. They had always found that they should be scattered randomly.
“Our Andromeda discovery did not agree with expectations, and we felt compelled to explore if it was true of other galaxies throughout the universe,” said Professor Lewis.
Using the Sloan Digital Sky Survey, a remarkable resource of color images and 3-D maps covering more than a third of the sky, the researchers dissected the properties of thousands of nearby galaxies.
“We were surprised to find that a large proportion of pairs of satellite galaxies have oppositely directed velocities if they are situated on opposite sides of their giant galaxy hosts”, said lead author Neil Ibata of the Lycée International in Strasbourg, France.
“Everywhere we looked we saw this strangely coherent coordinated motion of dwarf galaxies. From this we can extrapolate that these circular planes of dancing dwarfs are universal, seen in about 50 percent of galaxies,” said Professor Geraint Lewis.
“This is a big problem that contradicts our standard cosmological models. It challenges our understanding of how the universe works including the nature of dark matter.”
The researchers believe the answer may be hidden in some currently unknown physical process that governs how gas flows in the universe, although, as yet, there is no obvious mechanism that can guide dwarf galaxies into narrow planes.
Some experts, however, have made more radical suggestions, including bending and twisting the laws of gravity and motion. “Throwing out seemingly established laws of physics is unpalatable,” said Professor Lewis, “but if our observations of nature are pointing us in this direction, we have to keep an open mind. That’s what science is all about.”
Publication: Neil G. Ibata, et al., “Velocity anti-correlation of diametrically opposed galaxy satellites in the low-redshift Universe,” Nature, 2014; doi:10.1038/nature13481
Image: Geraint Lewis
This essay is based on the working paper “Jim Crow and Black Economic Progress After Slavery” by Lukas Althoff and Hugo Reichardt.
Racial inequality has been one of the most stubborn challenges confronting American society. Black Americans continue to face significant disadvantages in economic opportunity and prosperity. For instance, the average wealth of a Black person today is less than one-fifth of that of a White person. Large gaps also prevail in education, income, and numerous other socioeconomic indicators, where Black Americans consistently lag behind their White counterparts.
The roots of these disparities reach deep into America's history, into the dark periods of slavery and Jim Crow. The repercussions of these eras continue to affect the lives of Black Americans. Our study, "Jim Crow and Black Economic Progress After Slavery," sheds light on the long-term economic impact of these historical injustices, revealing a stark economic divide among Black families based on their ancestral history.
My colleague Hugo Reichardt and I delved into millions of records spanning 150 years for individual Black families, providing insights into their evolving economic status and the institutional factors their ancestors encountered. This data allowed us to assess whether a Black family had been enslaved until the Civil War and, after the abolition of slavery, to which specific Jim Crow regimes they were subjected until the 1960s.
We found that Black families enslaved until the Civil War have significantly lower income, education, and wealth today than those whose ancestors were free before the war. These "Free-Enslaved gaps" account for 20 to 70 percent of the corresponding Black-White gaps. Our findings emphasize the enduring effects of slavery and Jim Crow. Racial inequality in the United States is not merely the result of current policies or individual choices. It is deeply rooted in the nation's history.
While the first roots of these disparities among Black families trace back to the era of slavery, we found that state institutions after the end of slavery drove their persistence—regimes called Jim Crow.
Upon gaining freedom from slavery, Black families were eager to pursue formal education. As Booker T. Washington stated in 1907, "It was a whole race trying to go to school." However, Black Americans' ambition was met with fierce resistance, fueling the rise of the new anti-Black institution of Jim Crow.
Jim Crow aimed to suppress Black economic progress by racially segregating virtually all areas of life, disenfranchising Black voters, and limiting Black Americans' geographic mobility.
The intensity of Jim Crow regimes often varied drastically across states. For example, Louisiana passed almost one hundred Jim Crow laws through 1950, while its neighboring state of Texas passed fewer than one-third of that number.
The largest category of Jim Crow laws targeted education directly. These laws racially segregated schools, unequally divided educational resources between Black and White children, and barred Black parents from participating in the local bodies that governed their children's education. Consistent with the difference in the number of Jim Crow laws that were passed, the quality of Black schools in Louisiana was far worse than in Texas.
Enforced in the Southern states until the mid-twentieth century, Jim Crow systematically disadvantaged descendants of enslaved people. Most families who had been enslaved until the Civil War resided in the states that adopted the strictest regimes after slavery ended.
This lack of economic opportunities during the Jim Crow era—especially the lack of access to education—is the leading factor in why those Black families have lower levels of education, income, and wealth today.
However, our study also offers hope: access to education can significantly improve the long-run economic outcomes for Black families, even for descendants of those who lived under the most restrictive Jim Crow regimes.
Around the 1920s, a philanthropic program started to build approximately five thousand schools across the rural South. These schools aimed to undo some of the harm caused by Jim Crow's restrictions on Black education. We compared the long-term outcomes of families whose children could attend such a school with those who could not. Our findings reveal that gaining access to a newly built school in the 1920s and 1930s closed the vast majority of the loss in human capital caused by exposure to strict Jim Crow regimes.
Even Black Americans today whose fathers had attended such a school in the mid-twentieth century are far more educated and have higher incomes and wealth than Black Americans whose fathers had not been able to attend. This discovery underscores the transformative power of education and its potential to help reduce racial inequality.
Our research has significant implications for present-day policy makers who aim to mitigate the disadvantages faced by the descendants of enslaved people.
First, our findings underscore the significance of disparities within racial groups that race-specific policies may not adequately address. Take college affirmative action as an example. Studies have shown that the more selective a college, the less likely it is that Black students are descendants of enslaved people.
While affirmative action enhances racial diversity on campuses, it may fall short of reducing the disadvantages experienced by descendants of families enslaved until the Civil War. Considering an applicant's race and socioeconomic background could make affirmative action more effective.
Second, our study highlights the importance of ensuring access to quality education for all. Policies to improve educational opportunities for Black Americans could play a crucial role in addressing the racial income and wealth gaps.
During the Jim Crow era, the new construction of schools was especially effective in regions where Black children were most deprived of educational resources. Our research also indicates that such interventions can have substantial effects across generations. Overlooking these effects could lead to policies that are smaller in scale than would be optimal.
Third, there has been a recent resurgence in discussions around the concept of reparations, or wealth transfers, to the descendants of enslaved individuals. In our study, we emphasize that any evaluation of slavery's legacy should consider both the timing and location of a family's emancipation—how long they were enslaved and the extent of their exposure to Jim Crow laws following slavery. Our research reveals that the present-day circumstances of Black families are significantly influenced by the timing and location of their ancestors' freedom.
It's important to reiterate that our study primarily quantifies the additional challenges faced by those whose ancestors were enslaved until the Civil War compared to those who gained freedom earlier. It's worth noting that many free Black Americans had been enslaved in earlier periods, and all Black Americans faced discrimination due to slavery and Jim Crow, regardless of their specific family history.
While some argue that reparations should only be given to those who can trace their lineage back to enslaved ancestors, our findings suggest that post-slavery institutions also negatively impacted Black Americans whose ancestors were free before the Civil War. This group may find it more challenging to provide proof of their ancestors' enslavement, as it occurred decades before the Civil War.
In sum, our results serve as a reminder of the enduring economic impact of slavery and Jim Crow laws on racial inequality. It underscores the need for policies that address these historical injustices and promote economic equity. As we strive to build a fairer society, we must understand and address the historical roots of today's economic disparities.
Indeed, even "race-blind" policies can inadvertently interact with differences caused by historical institutions. Without this crucial understanding, systemic discrimination—the exposure to ongoing discrimination because of past injustices—will likely continue to be at the core of racial inequality in America.
Read the full working paper here.
Lukas Althoff is a Postdoctoral Fellow at Stanford Institute for Economic Policy Research (SIEPR). He will be joining Yale School of Management as an Assistant Professor of Economics in 2024.
Research briefings highlight the findings of research featured in the Long-Run Prosperity Working Paper Series and broaden our understanding of what drives long-run economic growth. |
Measures of Central Tendency and Dispersion
Suppose the heights in inches of the students in your class are as follows: 58, 58, 59, 60, 62, 64, 64, 65, 66, 66, 66, 66, 68, 68, 69, 70, 71, 72, 72, 74, 75, 77. What would be the mean of this data? How about the median and mode? Would you be able to calculate the variance for this data? How about the standard deviation?
Central Tendency and Dispersion
The majority of this textbook centers upon two-variable data, data with an input and an output. This is also known as bivariate data. There are many types of situations in which only one set of data is given. This data is known as univariate data. Unlike data you have seen before, no rule can be written relating univariate data. Instead, other methods are used to analyze the data. Three such methods are the measures of central tendency.
Measures of central tendency are the center values of a data set.
- Mean is the average of all the data. Its symbol is x¯.
- Mode is the data value appearing most often in the data set.
- Median is the middle value of the data set, arranged in ascending order.
Let's find the mean, median, and mode of the following data representing test scores:
90, 76, 53, 78, 88, 80, 81, 91, 99, 68, 62, 78, 67, 82, 88, 89, 78, 72, 77, 96, 93, 88, 88
Find the mean, median, mode, and range of this data.
- To find the mean, add all the values and divide by the number of values you added.
- To find the mode, look for the value(s) repeating the most.
- To find the median, organize the data from least to greatest. Then find the middle value.
53, 62, 62, 67, 68, 72, 76, 77, 78, 78, 78, 78, 80, 81, 82, 88, 88, 88, 88, 89, 90, 91, 93, 96, 99
- To find the range, subtract the lowest value from the highest value.
When a data set has two modes, it is bimodal.
If the data does not have a “middle value,” the median is the average of the two middle values. This occurs when data sets have an even number of entries.
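The following short sketch (Python, added for illustration; it simply applies the steps above to the test-score list) computes these measures directly:

```python
import statistics

scores = [90, 76, 53, 78, 88, 80, 81, 91, 99, 68, 62, 78,
          67, 82, 88, 89, 78, 72, 77, 96, 93, 88, 88]

print("mean:", statistics.mean(scores))          # sum of values / number of values
print("median:", statistics.median(scores))      # middle value of the sorted list
print("mode(s):", statistics.multimode(scores))  # value(s) appearing most often
print("range:", max(scores) - min(scores))       # highest value minus lowest value
```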
Which Measure Is Best?
While the mean, mode, and median represent centers of data, one is usually more beneficial than another when describing a particular data set.
For example, if the data has a wide range, the median is a better choice to describe the center than the mean.
- The income of a population is described using the median, because there are very low and very high incomes in one given region.
If the data were categorical, meaning it can be separated into different categories, the mode may be a better choice.
- If a sandwich shop sold ten different sandwiches, the mode would be useful to describe the favorite sandwich.
Measures of Dispersion
In statistics, measures of dispersion describe how spread apart the data is from the measure of center. There are three main types of dispersion:
- Variance - the mean of the squares of the distance each data item (xi) is from the mean.
The symbol for variance is σ².
- Standard deviation - the square root of the variance.
- Range - the difference between the highest and lowest values in the data.
Let's find the variance for the following data: 11, 13, 14, 15, 19, 22, 24, 26:
First find the mean (x¯).
It’s easier to create a table of the differences and their squares.
Compute the variance:
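Working this out for the data above (nothing assumed here; this is just the arithmetic the text refers to):

$$
\bar{x}=\frac{11+13+14+15+19+22+24+26}{8}=\frac{144}{8}=18
$$

$$
\sigma^2=\frac{(11-18)^2+(13-18)^2+\cdots+(26-18)^2}{8}=\frac{49+25+16+9+1+16+36+64}{8}=\frac{216}{8}=27
$$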
The variance is a measure of the dispersion and its value is lower for tightly grouped data than for widely spread data. In the example above, the variance is 27. What does it mean to say that tightly grouped data will have a low variance? You can probably already imagine that the size of the variance also depends on the size of the data itself. Below we see ways that mathematicians have tried to standardize the variance.
The Standard Deviation
Standard deviation measures how closely the data clusters around the mean. It is the square root of the variance. Its symbol is σ.
Now, let's calculate the standard deviation of the data set that you found the variance of:
The standard deviation is the square root of the variance.
σ² = 27, so σ = √27 ≈ 5.196
Earlier, you were asked what the mean, median, and mode of the heights of the students in your class would be. Additionally, you were asked if you could calculate the variance and standard deviation for this data.
The heights in inches of the students in your class are as follows: 58, 58, 59, 60, 62, 64, 64, 65, 66, 66, 66, 66, 68, 68, 69, 70, 71, 72, 72, 74, 75, 77.
- To find the mean, add all the values and divide by the total number of values (22 in this case).
- To find the mode, look for the value(s) that repeat the most.
- To find the median, organize the data from least to greatest. Then find the middle value.
This data set is already organized from least to greatest, so you can go straight to finding the middle value.
- To find the variance, calculate the mean of the squares of the distance each value is from the mean.
It's easiest to set this up in a table.
Now, plug these values into the equation for variance and solve.
- To find the standard deviation, take the square root of the variance.
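Here is a compact sketch (Python, not part of the original lesson) that runs all of these steps on the height data; it uses the population forms of variance and standard deviation, matching the definitions above.

```python
import statistics

heights = [58, 58, 59, 60, 62, 64, 64, 65, 66, 66, 66,
           66, 68, 68, 69, 70, 71, 72, 72, 74, 75, 77]

mean = statistics.mean(heights)            # about 66.8 inches
median = statistics.median(heights)        # 66 (average of the two middle values)
mode = statistics.mode(heights)            # 66, the most frequent height
variance = statistics.pvariance(heights)   # population variance, divides by n
std_dev = statistics.pstdev(heights)       # square root of the variance

print(mean, median, mode, variance, std_dev)
```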
Find the mean, median, mode, range, variance, and standard deviation of the data set below.
| Address | Sale Price |
|---|---|
| 518 CLEVELAND AVE | $117,424 |
| 1808 MARKESE AVE | $128,000 |
| 1770 WHITE AVE | $132,485 |
| 1459 LINCOLN AVE | $77,900 |
| 1462 ANNE AVE | $60,000 |
| 2414 DIX HWY | $250,000 |
| 1523 ANNE AVE | $110,205 |
| 1763 MARKESE AVE | $70,000 |
| 1460 CLEVELAND AVE | $111,710 |
| 1478 MILL ST | $102,646 |
Use a table to find variance.
- Define measures of central tendency. What are the three listed in this Concept?
- Define median. Explain its difference from the mean. In which situations is the median more effective to describe the center of the data?
- What is bimodal? Give an example of a set of data that is bimodal.
- What are the three measures of dispersion described in this Concept? Which is the easiest to compute?
- Give the formula for variance and define its variables.
- Why may variance be difficult to use as a measure of spread? Use the housing example to help you explain.
- Describe standard deviation.
- Explain why the standard deviation of 2, 2, 2, 2, 2, 2, and 2 is zero.
- Find the mean, median, and range of the salaries given below.
| Professional Realm | Annual income |
|---|---|
| Farming, Fishing, and Forestry | $19,630 |
| Sales and Related | $28,920 |
| Architecture and Engineering | $56,330 |
| Teaching & Education | $39,130 |
| Professional Baseball Player* | $2,476,590 |
(Source: Bureau of Labor Statistics, except (*) - The Baseball Players' Association (playbpa.com)).
Find the mean, median, mode,and range of the following data sets.
- 11, 16, 9, 15, 5, 18
- 53, 32, 49, 24, 62
- 11, 9, 19, 9, 19, 9, 13, 11
- 3, 2, 6, 9, 0, 1, 6, 6, 3, 2, 3, 5
- 2, 17, 1, –3, 12, 8, 12, 16
- 11, 21, 6, 17, 9.
- 223, 121, 227, 433, 122, 193, 397, 276, 303, 199, 197, 265, 366, 401, 222
Find the mean, median, and standard deviation of the following numbers. Which, of the mean and median, will give the best average?
- 15, 19, 15, 16, 11, 11, 18, 21, 165, 9, 11, 20, 16, 8, 17, 10, 12, 11, 16, 14
- 11, 12, 14, 14, 14, 14, 19
- 11, 12, 14, 16, 17, 17, 18
- 6, 7, 9, 10, 13
- 121, 122, 193, 197, 199, 222, 223, 227, 265, 276, 303, 366, 397, 401, 433
- If each score on an algebra test is increased by seven points, how would this affect the:
- Standard deviation?
- If each score of a golfer was multiplied by two, how would this affect the:
- Henry has the following World History scores: 88, 76, 97, 84. What would Henry need to score on his fifth test to have an average of 86?
- Explain why it is not possible for Henry to have an average of 93 after his fifth score.
- The mean of nine numbers is 105. What is the sum of the numbers?
- A bowler has the following scores: 163, 187, 194, 188, 205, 196. Find the bowler’s average.
- Golf scores for a nine-hole course for five different players were: 38, 45, 58, 38, 36.
- Find the mean golf score.
- Find the standard deviation to the nearest hundredth.
- Does the mean represent the most accurate measure of central tendency? Explain.
- Ten house sales in Encinitas, California are shown in the table below. Find the mean, median, and standard deviation for the sale prices. Explain, using the data, why the median house price is most often used as a measure of the house prices in an area.
| Address | Sale Price | Date Of Sale |
|---|---|---|
| 643 3RD ST | $1,137,000 | 6/5/2007 |
| 911 CORNISH DR | $879,000 | 6/5/2007 |
| 911 ARDEN DR | $950,000 | 6/13/2007 |
| 715 S VULCAN AVE | $875,000 | 4/30/2007 |
| 510 4TH ST | $1,499,000 | 4/26/2007 |
| 415 ARDEN DR | $875,000 | 5/11/2007 |
| 226 5TH ST | $4,000,000 | 5/3/2007 |
| 710 3RD ST | $975,000 | 3/13/2007 |
| 68 LA VETA AVE | $796,793 | 2/8/2007 |
| 207 WEST D ST | $2,100,000 | 3/15/2007 |
- Determine which statistical measure (mean, median, or mode) would be most appropriate for the following.
- The life expectancy of store-bought goldfish.
- The age in years of the audience for a kids' TV program.
- The weight of potato sacks that a store labels as “5-pound bag.”
- James and John both own fields in which they plant cabbages. James plants cabbages by hand, while John uses a machine to carefully control the distance between the cabbages. The diameters of each grower’s cabbages are measured, and the results are shown in the table. John claims his method of machine planting is better. James insists it is better to plant by hand. Use the data to provide a reason to justify both sides of the argument.
|Mean Diameter (inches)||7.10||6.85|
|Standard Deviation (inches)||2.75||0.60|
- Two bus companies run services between Los Angeles and San Francisco. The mean journey times and standard deviations in those times are given below. If Samantha needs to travel between the cities, which company should she choose if:
- She needs to catch a plane in San Francisco.
- She travels weekly to visit friends who live in San Francisco and wishes to minimize the time she spends on a bus over the entire year.
| | Inter-Cal Express | Fast-dog Travel |
|---|---|---|
| Mean Time (hours) | 9.5 | 8.75 |
| Standard Deviation (hours) | 0.25 | 2.5 |
- A square garden has dimensions of 20 yards by 20 yards. How much shorter is it to cut across the diagonal than to walk around two joining sides?
- Rewrite in standard form: y=x/6−5.
- Solve for m: −2=(x+7).25
- A sail has a vertical length of 15 feet and a horizontal length of 8 feet. To the nearest foot, how long is the diagonal?
- Rationalize the denominator: 220.5.
To see the Review answers, open this PDF file and look for section 11.9.
|bivariate data||So far we have seen two-variable data, which is data with an input and an output. This is also known as bivariate data.|
|measures of dispersion||In statistics, measures of dispersion describe how spread apart the data is from the measure of center. There are three main types of dispersion:|
|Median||The median of a data set is the middle value of an organized data set.|
|range||Difference between highest and lowest values in data.|
|standard deviation||The square root of the variance.|
|univariate data||There are many types of situations in which only one set of data is given. This data is known as univariate data.|
|variance||is the mean of the squares of the distance each data item is from the mean, or σ2.|
|arithmetic mean||The arithmetic mean is also called the average.|
|descriptive statistics||In descriptive statistics, the goal is to describe the data found in a sample or given in a problem.|
|inferential statistics||With inferential statistics, your goal is to use the data in a sample to draw conclusions about a larger population.|
|measure of central tendency||In statistics, a measure of central tendency of a data set is a central or typical value of the data set.|
|Mode||The mode of a data set is the value or values with greatest frequency in the data set.|
|multimodal||When a set of data has more than 2 values that occur with the same greatest frequency, the set is called multimodal .|
|Outlier||In statistics, an outlier is a data value that is far from other data values.|
|Population Mean||The population mean is the mean of all of the members of an entire population.|
|resistant||A statistic that is not affected by outliers is called resistant.|
|Sample Mean||A sample mean is the mean only of the members of a sample or subset of a population.|
PLIX: Play, Learn, Interact, eXplore - The Tree Conundrum
Video: Mean, Median, and Mode
Activities: Measure of Central Tendency and Dispersion Discussion Questions
Study Aid: Describing Data
Practice: Introduction to Mean, Median, and Mode
Real World: Mean or Median? |
By the end of this section, you will be able to:
- Describe the observed features of SN 1987A both before and after the supernova
- Explain how observations of various parts of the SN 1987A event helped confirm theories about supernovae
Supernovae were discovered long before astronomers realized that these spectacular cataclysms mark the death of stars (see Making Connections: Supernovae in History). The word nova means “new” in Latin; before telescopes, when a star too dim to be seen with the unaided eye suddenly flared up in a brilliant explosion, observers concluded it must be a brand-new star. Twentieth-century astronomers reclassified the explosions with the greatest luminosity as supernovae.
From historical records of such explosions, from studies of the remnants of supernovae in our Galaxy, and from analyses of supernovae in other galaxies, we estimate that, on average, one supernova explosion occurs somewhere in the Milky Way Galaxy every 25 to 100 years. Unfortunately, however, no supernova explosion has been observable in our Galaxy since the invention of the telescope. Either we have been exceptionally unlucky or, more likely, recent explosions have taken place in parts of the Galaxy where interstellar dust blocks light from reaching us.
Although many supernova explosions in our own Galaxy have gone unnoticed, a few were so spectacular that they were clearly seen and recorded by sky watchers and historians at the time. We can use these records, going back two millennia, to help us pinpoint where the exploding stars were and thus where to look for their remnants today.
The most dramatic supernova was observed in the year 1006. It appeared in May as a brilliant point of light visible during the daytime, perhaps 100 times brighter than the planet Venus. It was bright enough to cast shadows on the ground during the night and was recorded with awe and fear by observers all over Europe and Asia. No one had seen anything like it before; Chinese astronomers, noting that it was a temporary spectacle, called it a “guest star.”
Astronomers David Clark and Richard Stephenson have scoured records from around the world to find more than 20 reports of the 1006 supernova (SN 1006) (Figure). This has allowed them to determine with some accuracy where in the sky the explosion occurred. They place it in the modern constellation of Lupus; at roughly the position they have determined, we find a supernova remnant, now quite faint. From the way its filaments are expanding, it indeed appears to be about 1000 years old.
Supernova 1006 Remnant.
This composite view of SN 1006 from the Chandra X-Ray Observatory shows the X-rays coming from the remnant in blue, visible light in white-yellow, and radio emission in red.
Another guest star, now known as SN 1054, was clearly recorded in Chinese records in July 1054. The remnant of that star is one of the most famous and best-studied objects in the sky, called the Crab Nebula. It is a marvelously complex object, which has been key to understanding the death of massive stars. When its explosion was first seen, we estimate that it was about as bright as the planet Jupiter: nowhere near as dazzling as the 1006 event but still quite dramatic to anyone who kept track of objects in the sky. Another fainter supernova was seen in 1181.
The next supernova became visible in November 1572 and, being brighter than the planet Venus, was quickly spotted by a number of observers, including the young Tycho Brahe (see Orbits and Gravity). His careful measurements of the star over a year and a half showed that it was not a comet or something in Earth’s atmosphere since it did not move relative to the stars. He correctly deduced that it must be a phenomenon belonging to the realm of the stars, not of the solar system. The remnant of Tycho’s Supernova (as it is now called) can still be detected in many different bands of the electromagnetic spectrum.
Not to be outdone, Johannes Kepler, Tycho Brahe’s scientific heir, found his own supernova in 1604, now known as Kepler’s Supernova. Fainter than Tycho’s, it nevertheless remained visible for about a year. Kepler wrote a book about his observations that was read by many with an interest in the heavens, including Galileo.
No supernova has been spotted in our Galaxy for the past 300 years. Since the explosion of a visible supernova is a chance event, there is no way to say when the next one might occur. Around the world, dozens of professional and amateur astronomers keep a sharp lookout for “new” stars that appear overnight, hoping to be the first to spot the next guest star in our sky and make a little history themselves.
At their maximum brightness, the most luminous supernovae have about 10 billion times the luminosity of the Sun. For a brief time, a supernova may outshine the entire galaxy in which it appears. After maximum brightness, the star’s light fades and disappears from telescopic visibility within a few months or years. At the time of their outbursts, supernovae eject material at typical velocities of 10,000 kilometers per second (and speeds twice that have been observed). A speed of 20,000 kilometers per second corresponds to about 45 million miles per hour, truly an indication of great cosmic violence.
Supernovae are classified according to the appearance of their spectra, but in this chapter, we will focus on the two main causes of supernovae. Type Ia supernovae are ignited when a lot of material is dumped on degenerate white dwarfs (Figure); these supernovae will be discussed later in this chapter. For now, we will continue our story about the death of massive stars and focus on type II supernovae, which are produced when the core of a massive star collapses.
This image of supernova 2014J, located in Messier 82 (M82), which is also known as the Cigar galaxy, was taken by the Hubble Space Telescope and is superposed on a mosaic image of the galaxy also taken with Hubble. The supernova event is indicated by the box and the inset. This explosion was produced by a type Ia supernova, which is theorized to be triggered in binary systems consisting of a white dwarf and a companion star, which could be a second white dwarf, a star like our Sun, or a giant star. This type of supernova will be discussed later in this chapter. At a distance of approximately 11.5 million light-years from Earth, this is the closest supernova of type Ia discovered in the past few decades. In the image, you can see reddish plumes of hydrogen coming from the central region of the galaxy, where a considerable number of young stars are being born.
Our most detailed information about what happens when a type II supernova occurs comes from an event that was observed in 1987. Before dawn on February 24, Ian Shelton, a Canadian astronomer working at an observatory in Chile, pulled a photographic plate from the developer. Two nights earlier, he had begun a survey of the Large Magellanic Cloud, a small galaxy that is one of the Milky Way’s nearest neighbors in space. Where he expected to see only faint stars, he saw a large bright spot. Concerned that his photograph was flawed, Shelton went outside to look at the Large Magellanic Cloud . . . and saw that a new object had indeed appeared in the sky (see Figure). He soon realized that he had discovered a supernova, one that could be seen with the unaided eye even though it was about 160,000 light-years away.
Hubble Space Telescope Image of SN 1987A.
The supernova remnant with its inner and outer red rings of material is located in the Large Magellanic Cloud. This image is a composite of several images taken in 1994, 1996, and 1997—about a decade after supernova 1987A was first observed.
Now known as SN 1987A, since it was the first supernova discovered in 1987, this brilliant newcomer to the southern sky gave astronomers their first opportunity to study the death of a relatively nearby star with modern instruments. It was also the first time astronomers had observed a star before it became a supernova. The star that blew up had been included in earlier surveys of the Large Magellanic Cloud, and as a result, we know the star was a blue supergiant just before the explosion.
By combining theory and observations at many different wavelengths, astronomers have reconstructed the life story of the star that became SN 1987A. Formed about 10 million years ago, it originally had a mass of about 20 MSun. For 90% of its life, it lived quietly on the main sequence, converting hydrogen into helium. At this time, its luminosity was about 60,000 times that of the Sun (LSun), and its spectral type was O. When the hydrogen in the center of the star was exhausted, the core contracted and ultimately became hot enough to fuse helium. By this time, the star was a red supergiant, emitting about 100,000 times more energy than the Sun. While in this stage, the star lost some of its mass.
This lost material has actually been detected by observations with the Hubble Space Telescope (Figure). The gas driven out into space by the subsequent supernova explosion is currently colliding with the material the star left behind when it was a red giant. As the two collide, we see a glowing ring.
Ring around Supernova 1987A.
These two images show a ring of gas expelled about 30,000 years ago when the star that exploded in 1987 was a red giant. The supernova, which has been artificially dimmed, is located at the center of the ring. The left-hand image was taken in 1997 and the right-hand image in 2003. Note that the number of bright spots has increased from 1 to more than 15 over this time interval. These spots occur where high-speed gas ejected by the supernova and moving at millions of miles per hour has reached the ring and blasted into it. The collision has heated the gas in the ring and caused it to glow more brightly. The fact that we see individual spots suggests that material ejected by the supernova is first hitting narrow, inward-projecting columns of gas in the clumpy ring. The hot spots are the first signs of a dramatic and violent collision between the new and old material that will continue over the next few years. By studying these bright spots, astronomers can determine the composition of the ring and hence learn about the nuclear processes that build heavy elements inside massive stars.
Helium fusion lasted only about 1 million years. When the helium was exhausted at the center of the star, the core contracted again, the radius of the surface also decreased, and the star became a blue supergiant with a luminosity still about equal to 100,000 LSun. This is what it still looked like on the outside when, after brief periods of further fusion, it reached the iron crisis we discussed earlier and exploded.
Some key stages of evolution of the star that became SN 1987A, including the ones following helium exhaustion, are listed in Table. While we don’t expect you to remember these numbers, note the patterns in the table: each stage of evolution happens more quickly than the preceding one, the temperature and pressure in the core increase, and progressively heavier elements are the source of fusion energy. Once iron was created, the collapse began. It was a catastrophic collapse, lasting only a few tenths of a second; the speed of infall in the outer portion of the iron core reached 70,000 kilometers per second, about one-fourth the speed of light.
Evolution of the Star That Exploded as SN 1987A

| Phase | Central Temperature (K) | Central Density (g/cm³) | Time Spent in This Phase |
| --- | --- | --- | --- |
| Hydrogen fusion | 40 × 10⁶ | 5 | 8 × 10⁶ years |
| Helium fusion | 190 × 10⁶ | 970 | 10⁶ years |
| Carbon fusion | 870 × 10⁶ | 170,000 | 2000 years |
| Neon fusion | 1.6 × 10⁹ | 3.0 × 10⁶ | 6 months |
| Oxygen fusion | 2.0 × 10⁹ | 5.6 × 10⁶ | 1 year |
| Silicon fusion | 3.3 × 10⁹ | 4.3 × 10⁷ | Days |
| Core collapse | 200 × 10⁹ | 2 × 10¹⁴ | Tenths of a second |
In the meantime, as the core was experiencing its last catastrophe, the outer shells of neon, oxygen, carbon, helium, and hydrogen in the star did not yet know about the collapse. Information about the physical movement of different layers travels through a star at the speed of sound and cannot reach the surface in the few tenths of a second required for the core collapse to occur. Thus, the surface layers of our star hung briefly suspended, much like a cartoon character who dashes off the edge of a cliff and hangs momentarily in space before realizing that he is no longer held up by anything.
The collapse of the core continued until the densities rose to several times that of an atomic nucleus. The resistance to further collapse then became so great that the core rebounded. Infalling material ran into the “brick wall” of the rebounding core and was thrown outward with a great shock wave. Neutrinos poured out of the core, helping the shock wave blow the star apart. The shock reached the surface of the star a few hours later, and the star began to brighten into the supernova Ian Shelton observed in 1987.
The Synthesis of Heavy Elements
The variations in the brightness of SN 1987A in the days and months after its discovery, which are shown in Figure, helped confirm our ideas about heavy element production. In a single day, the star soared in brightness by a factor of about 1000 and became just visible without a telescope. The star then continued to increase slowly in brightness until it was about the same apparent magnitude as the stars in the Little Dipper. Up until about day 40 after the outburst, the energy being radiated away was produced by the explosion itself. But then SN 1987A did not continue to fade away, as we might have expected the light from the explosion to do. Instead, SN 1987A remained bright as energy from newly created radioactive elements came into play.
Change in the Brightness of SN 1987A over Time.
Note how the rate of decline of the supernova’s light slowed between days 40 and 500. During this time, the brightness was mainly due to the energy emitted by newly formed (and quickly decaying) radioactive elements. Remember that magnitudes are a backward measure of brightness: the larger the magnitude, the dimmer the object looks.
One of the elements formed in a supernova explosion is radioactive nickel, with an atomic mass of 56 (that is, the total number of protons plus neutrons in its nucleus is 56). Nickel-56 is unstable and changes spontaneously (with a half-life of about 6 days) to cobalt-56. (Recall that a half-life is the time it takes for half the nuclei in a sample to undergo radioactive decay.) Cobalt-56 in turn decays with a half-life of about 77 days to iron-56, which is stable. Energetic gamma rays are emitted when these radioactive nuclei decay. Those gamma rays then serve as a new source of energy for the expanding layers of the supernova. The gamma rays are absorbed in the overlying gas and re-emitted at visible wavelengths, keeping the remains of the star bright.
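To make the role of the half-lives concrete, here is a minimal Python sketch of the nickel-to-cobalt-to-iron chain described above. It uses only the half-lives quoted in the text (about 6 days for nickel-56 and about 77 days for cobalt-56) and the standard two-step exponential-decay (Bateman) solution; it is not a model of the full light curve.

```python
import math

# Half-lives quoted in the text (days); decay constants follow from lambda = ln(2) / t_half.
T_HALF_NI56 = 6.0    # nickel-56 -> cobalt-56
T_HALF_CO56 = 77.0   # cobalt-56 -> iron-56 (stable)

LAM_NI = math.log(2) / T_HALF_NI56
LAM_CO = math.log(2) / T_HALF_CO56

def chain_fractions(t_days: float) -> tuple[float, float, float]:
    """Fractions of the original nickel-56 present as Ni-56, Co-56, and Fe-56 at time t."""
    ni = math.exp(-LAM_NI * t_days)
    co = LAM_NI / (LAM_CO - LAM_NI) * (math.exp(-LAM_NI * t_days) - math.exp(-LAM_CO * t_days))
    fe = 1.0 - ni - co
    return ni, co, fe

if __name__ == "__main__":
    for t in (0, 20, 40, 100, 300, 500):
        ni, co, fe = chain_fractions(t)
        print(f"day {t:>3}:  Ni-56 {ni:6.3f}   Co-56 {co:6.3f}   Fe-56 {fe:6.3f}")
    # After roughly 40 days almost all the nickel has become cobalt, whose slower
    # 77-day decay then powers the long, gradual fade seen in the light curve.
```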
As you can see in Figure, astronomers did observe brightening due to radioactive nuclei in the first few months following the supernova’s outburst and then saw the extra light die away as more and more of the radioactive nuclei decayed to stable iron. The gamma-ray heating was responsible for virtually all of the radiation detected from SN 1987A after day 40. Some gamma rays also escaped directly without being absorbed. These were detected by Earth-orbiting telescopes at the wavelengths expected for the decay of radioactive nickel and cobalt, clearly confirming our understanding that new elements were indeed formed in the crucible of the supernova.
Neutrinos from SN 1987A
If there had been any human observers in the Large Magellanic Cloud about 160,000 years ago, the explosion we call SN 1987A would have been a brilliant spectacle in their skies. Yet we know that less than 1/10 of 1% of the energy of the explosion appeared as visible light. About 1% of the energy was required to destroy the star, and the rest was carried away by neutrinos. The overall energy in these neutrinos was truly astounding. In the initial second of the event, as we noted earlier in our general discussion of supernovae, their total luminosity exceeded the luminosity of all the stars in over a billion galaxies. And the supernova generated this energy in a volume less than 50 kilometers in diameter! Supernovae are one of the most violent events in the universe, and their light turns out to be only the tip of the iceberg in revealing how much energy they produce.
In 1987, the neutrinos from SN 1987A were detected by two instruments—which might be called “neutrino telescopes”—almost a full day before Shelton’s observations. (This is because the neutrinos get out of the exploding star more easily than light does, and also because you don’t need to wait until nightfall to catch a “glimpse” of them.) Both neutrino telescopes, one in a deep mine in Japan and the other under Lake Erie, consist of several thousand tons of purified water surrounded by several hundred light-sensitive detectors. Incoming neutrinos interact with the water to produce positrons and electrons, which move rapidly through the water and emit deep blue light.
Altogether, 19 neutrinos were detected. Since the neutrino telescopes were in the Northern Hemisphere and the supernova occurred in the Southern Hemisphere, the detected neutrinos had already passed through Earth and were on their way back out into space when they were captured.
Only a few neutrinos were detected because the probability that they will interact with ordinary matter is very, very low. It is estimated that the supernova actually released 10⁵⁸ neutrinos. A tiny fraction of these, about 30 billion, eventually passed through each square centimeter of Earth’s surface. About a million people actually experienced a neutrino interaction within their bodies as a result of the supernova. This interaction happened to only a single nucleus in each person and thus had absolutely no biological effect; it went completely unnoticed by everyone concerned.
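The "about 30 billion per square centimeter" figure follows from simple geometry: spread 10⁵⁸ neutrinos evenly over a sphere whose radius is the 160,000-light-year distance to the supernova. A minimal back-of-the-envelope sketch in Python (the neutrino total and distance are the values quoted in the text; the light-year length is a standard constant):

```python
import math

LIGHT_YEAR_CM = 9.461e17     # centimetres in one light-year
DISTANCE_LY = 160_000        # distance to the Large Magellanic Cloud (from the text)
N_NEUTRINOS = 1e58           # total neutrinos released (from the text)

radius_cm = DISTANCE_LY * LIGHT_YEAR_CM
shell_area_cm2 = 4.0 * math.pi * radius_cm**2   # area of the sphere the neutrinos cross
fluence = N_NEUTRINOS / shell_area_cm2          # neutrinos per square centimetre at Earth

print(f"Fluence at Earth: {fluence:.1e} neutrinos per cm^2")
# Prints roughly 3e10, i.e. a few tens of billions per square centimetre,
# consistent with the figure quoted in the text.
```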
Since the neutrinos come directly from the heart of the supernova, their energies provided a measure of the temperature of the core as the star was exploding. The central temperature was about 200 billion K, a stunning figure to which no earthly analog can bring much meaning. With neutrino telescopes, we are peering into the final moment in the life stories of massive stars and observing conditions beyond all human experience. Yet we are also seeing the unmistakable hints of our own origins.
Key Concepts and Summary
A supernova occurs on average once every 25 to 100 years in the Milky Way Galaxy. Despite the odds, no supernova in our Galaxy has been observed from Earth since the invention of the telescope. However, one nearby supernova (SN 1987A) has been observed in a neighboring galaxy, the Large Magellanic Cloud. The star that evolved to become SN 1987A began its life as a blue supergiant, evolved to become a red supergiant, and returned to being a blue supergiant at the time it exploded. Studies of SN 1987A have detected neutrinos from the core collapse and confirmed theoretical calculations of what happens during such explosions, including the formation of elements beyond iron. Supernovae are a main source of high-energy cosmic rays and can be dangerous for any living organisms in nearby star systems.
The notion of line or straight line was introduced by ancient mathematicians to represent straight objects (i.e., having no curvature) with negligible width and depth. Lines are an idealization of such objects. Until the 17th century, lines were defined in this manner: "The [straight or curved] line is the first species of quantity, which has only one dimension, namely length, without any width nor depth, and is nothing else than the flow or run of the point which […] will leave from its imaginary moving some vestige in length, exempt of any width. […] The straight line is that which is equally extended between its points."
Euclid described a line as "breadthless length" which "lies equally with respect to the points on itself"; he introduced several postulates as basic unprovable properties from which he constructed all of geometry, which is now called Euclidean geometry to avoid confusion with other geometries which have been introduced since the end of the 19th century (such as non-Euclidean, projective and affine geometry).
In modern mathematics, given the multitude of geometries, the concept of a line is closely tied to the way the geometry is described. For instance, in analytic geometry, a line in the plane is often defined as the set of points whose coordinates satisfy a given linear equation, but in a more abstract setting, such as incidence geometry, a line may be an independent object, distinct from the set of points which lie on it.
When a geometry is described by a set of axioms, the notion of a line is usually left undefined (a so-called primitive object). The properties of lines are then determined by the axioms which refer to them. One advantage to this approach is the flexibility it gives to users of the geometry. Thus in differential geometry a line may be interpreted as a geodesic (shortest path between points), while in some projective geometries a line is a 2-dimensional vector space (all linear combinations of two independent vectors). This flexibility also extends beyond mathematics and, for example, permits physicists to think of the path of a light ray as being a line.
Definitions versus descriptions
All definitions are ultimately circular in nature since they depend on concepts which must themselves have definitions, a dependence which cannot be continued indefinitely without returning to the starting point. To avoid this vicious circle certain concepts must be taken as primitive concepts; terms which are given no definition. In geometry, it is frequently the case that the concept of line is taken as a primitive. In those situations where a line is a defined concept, as in coordinate geometry, some other fundamental ideas are taken as primitives. When the line concept is a primitive, the behaviour and properties of lines are dictated by the axioms which they must satisfy.
In a non-axiomatic or simplified axiomatic treatment of geometry, the concept of a primitive notion may be too abstract to be dealt with. In this circumstance it is possible that a description or mental image of a primitive notion is provided to give a foundation to build the notion on which would formally be based on the (unstated) axioms. Descriptions of this type may be referred to, by some authors, as definitions in this informal style of presentation. These are not true definitions and could not be used in formal proofs of statements. The "definition" of line in Euclid's Elements falls into this category. Even in the case where a specific geometry is being considered (for example, Euclidean geometry), there is no generally accepted agreement among authors as to what an informal description of a line should be when the subject is not being treated formally.
In Euclidean geometry
When geometry was first formalised by Euclid in the Elements, he defined a general line (straight or curved) to be "breadthless length" with a straight line being a line "which lies evenly with the points on itself". These definitions serve little purpose since they use terms which are not, themselves, defined. In fact, Euclid did not use these definitions in this work and probably included them just to make it clear to the reader what was being discussed. In modern geometry, a line is simply taken as an undefined object with properties given by axioms, but is sometimes defined as a set of points obeying a linear relationship when some other fundamental concept is left undefined.
In an axiomatic formulation of Euclidean geometry, such as that of Hilbert (Euclid's original axioms contained various flaws which have been corrected by modern mathematicians), a line is stated to have certain properties which relate it to other lines and points. For example, for any two distinct points, there is a unique line containing them, and any two distinct lines intersect in at most one point. In two dimensions, i.e., the Euclidean plane, two lines which do not intersect are called parallel. In higher dimensions, two lines that do not intersect are parallel if they are contained in a plane, or skew if they are not.
On the Cartesian plane
Lines in a Cartesian plane or, more generally, in affine coordinates, can be described algebraically by linear equations. In two dimensions, the equation for non-vertical lines is often given in the slope-intercept form y = mx + b, where:
- m is the slope or gradient of the line.
- b is the y-intercept of the line.
- x is the independent variable of the function y = f(x).
The slope of the line through the points (x0, y0) and (x1, y1), when x0 ≠ x1, is given by m = (y1 − y0) / (x1 − x0), and the equation of this line can be written y = y0 + m(x − x0).
In the plane ℝ², every line (including vertical lines) is described by a linear equation of the form
ax + by = c
with fixed real coefficients a, b and c such that a and b are not both zero. Using this form, vertical lines correspond to the equations with b = 0.
There are many variant ways to write the equation of a line which can all be converted from one to another by algebraic manipulation. These forms (see Linear equation for other forms) are generally named by the type of information (data) about the line that is needed to write down the form. Some of the important data of a line is its slope, x-intercept, known points on the line and y-intercept.
The equation of the line passing through two different points (x0, y0) and (x1, y1) may be written as
(y − y0)(x1 − x0) = (y1 − y0)(x − x0).
If x0 ≠ x1, this equation may be rewritten as
y = y0 + (x − x0) · (y1 − y0) / (x1 − x0).
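For readers who want to experiment, here is a minimal Python sketch of the slope and slope-intercept computation for the line through two given points. The helper name and sample points are arbitrary, and vertical lines are treated as a special case because the slope formula does not apply to them.

```python
def line_through(p0: tuple[float, float], p1: tuple[float, float]):
    """Return (m, b) so that the non-vertical line through p0 and p1 is y = m*x + b."""
    (x0, y0), (x1, y1) = p0, p1
    if x0 == x1:
        raise ValueError("vertical line: slope is undefined, use x = x0 instead")
    m = (y1 - y0) / (x1 - x0)     # slope between the two points
    b = y0 - m * x0               # y-intercept, from y = y0 + m*(x - x0)
    return m, b

if __name__ == "__main__":
    m, b = line_through((1.0, 2.0), (3.0, 8.0))
    print(f"y = {m}x + {b}")      # prints: y = 3.0x + -1.0
```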
In three dimensions, lines can not be described by a single linear equation, so they are frequently described by parametric equations:
x = x0 + at
y = y0 + bt
z = z0 + ct
where:
- x, y, and z are all functions of the independent variable t which ranges over the real numbers.
- (x0, y0, z0) is any point on the line.
- a, b, and c are related to the slope of the line, such that the vector (a, b, c) is parallel to the line.
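A small, illustrative Python sketch of the parametric description above; the base point and direction vector are made-up examples.

```python
def point_on_line(base, direction, t: float):
    """Point base + t * direction on the line, for the parameter value t."""
    return tuple(p + t * d for p, d in zip(base, direction))

if __name__ == "__main__":
    base = (1.0, 0.0, 2.0)        # (x0, y0, z0), any point on the line
    direction = (2.0, -1.0, 3.0)  # (a, b, c), a vector parallel to the line
    for t in (-1.0, 0.0, 0.5, 1.0):
        print(t, point_on_line(base, direction, t))
```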
They may also be described as the simultaneous solutions of two linear equations
a1x + b1y + c1z − d1 = 0
a2x + b2y + c2z − d2 = 0
such that the coefficient vectors (a1, b1, c1) and (a2, b2, c2) are not proportional (that is, no single scalar t satisfies a1 = ta2, b1 = tb2, and c1 = tc2). This follows since in three dimensions a single linear equation typically describes a plane and a line is what is common to two distinct intersecting planes.
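That observation translates directly into a computation: the direction of the intersection line is perpendicular to both plane normals, so it may be taken as their cross product, and a particular point can be found by solving the two plane equations together with one extra independent equation. A minimal sketch using NumPy, assuming each plane is written as n · x = d; the sample planes are arbitrary.

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Parametric form (point, direction) of the line where two non-parallel planes meet.

    Each plane is given by n . x = d with normal vector n. The direction of the
    intersection is n1 x n2; a particular point is found by solving the two plane
    equations together with direction . x = 0 (a convenient independent third equation).
    """
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    if np.allclose(direction, 0.0):
        raise ValueError("planes are parallel or identical: no unique line")
    A = np.vstack([n1, n2, direction])
    point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return point, direction

if __name__ == "__main__":
    p, v = plane_intersection((1, 0, 0), 1.0, (0, 1, 0), 2.0)
    print("point:", p, "direction:", v)   # the planes x = 1 and y = 2 meet in a line along z
```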
In normal form
The normal form (also called the Hesse normal form, after the German mathematician Ludwig Otto Hesse) is based on the normal segment for a given line, which is defined to be the line segment drawn from the origin perpendicular to the line. This segment joins the origin with the closest point on the line to the origin. The normal form of the equation of a straight line on the plane is given by:
x cos θ + y sin θ = p,
where θ is the angle of inclination of the normal segment (the oriented angle from the unit vector of the x axis to this segment), and p is the (positive) length of the normal segment. The normal form can be derived from the general form ax + by = c by dividing all of the coefficients by (c/|c|)·√(a² + b²).
Unlike the slope-intercept and intercept forms, this form can represent any line but also requires only two finite parameters, θ and p, to be specified. If p > 0, then θ is uniquely defined modulo 2π. On the other hand, if the line is through the origin (c = 0, p = 0), one drops the c/|c| term to compute sinθ and cosθ, and θ is only defined modulo π.
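A minimal Python sketch of that conversion from the general form ax + by = c to the normal-form parameters θ and p; the sign handling follows the c/|c| convention described above, and the function name and sample coefficients are arbitrary.

```python
import math

def hesse_normal_form(a: float, b: float, c: float):
    """Convert a*x + b*y = c (a, b not both zero) to (theta, p) with x*cos(theta) + y*sin(theta) = p, p >= 0."""
    norm = math.hypot(a, b)
    if norm == 0.0:
        raise ValueError("a and b must not both be zero")
    sign = 1.0 if c >= 0 else -1.0          # flip signs so that p = |c| / norm is non-negative
    cos_t, sin_t, p = sign * a / norm, sign * b / norm, abs(c) / norm
    return math.atan2(sin_t, cos_t), p

if __name__ == "__main__":
    theta, p = hesse_normal_form(3.0, 4.0, -10.0)
    print(f"theta = {math.degrees(theta):.1f} deg, p = {p}")  # p = 2.0, the line's distance from the origin
```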
In polar coordinates
In polar coordinates on the Euclidean plane the slope-intercept form of the equation of a line is expressed as:
r = b / (sin θ − m cos θ),
where m is the slope of the line and b is the y-intercept. The expression is undefined for directions parallel to the line, that is, where sin θ = m cos θ. The equation can be rewritten to eliminate this discontinuity in this manner:
r (sin θ − m cos θ) = b.
In polar coordinates on the Euclidean plane, the intercept form of the equation of a line that is non-horizontal, non-vertical, and does not pass through the pole may be expressed as
(r cos θ) / x0 + (r sin θ) / y0 = 1,
where x0 and y0 represent the x and y intercepts respectively. The above equation is not applicable for vertical and horizontal lines because in these cases one of the intercepts does not exist. Moreover, it is not applicable on lines passing through the pole since in this case, both x and y intercepts are zero (which is not allowed here since x0 and y0 are denominators). A vertical line that doesn't pass through the pole is given by the equation
r cos θ = x0.
Similarly, a horizontal line that doesn't pass through the pole is given by the equation
r sin θ = y0.
The equation of a line which passes through the pole is simply given as:
θ = arctan(m),
where m is the slope of the line.
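The polar expressions above can be checked numerically: pick a slope m and intercept b, evaluate r(θ) from the slope-intercept formula, convert back to Cartesian coordinates, and confirm the point satisfies y = mx + b. A small sketch (the sample slope, intercept, and angles are arbitrary):

```python
import math

def polar_radius(theta: float, m: float, b: float) -> float:
    """r(theta) for the line y = m*x + b, i.e. r = b / (sin(theta) - m*cos(theta))."""
    denom = math.sin(theta) - m * math.cos(theta)
    if abs(denom) < 1e-12:
        raise ValueError("theta points along the line: r is undefined there")
    return b / denom

if __name__ == "__main__":
    m, b = 2.0, 3.0
    for theta in (0.5, 1.0, 2.0, 3.0):
        r = polar_radius(theta, m, b)
        # r may come out negative; with x = r*cos(theta), y = r*sin(theta)
        # the recovered point still lies on the line, so the check below works.
        x, y = r * math.cos(theta), r * math.sin(theta)
        print(f"theta={theta}: point=({x:.3f}, {y:.3f}), residual y - (m*x + b) = {y - (m * x + b):+.1e}")
```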
As a vector equation
The vector equation of the line through points A and B is given by r = OA + λ AB (where λ is a scalar).
If a is vector OA and b is vector OB, then the equation of the line can be written: r = a + λ(b − a).
A ray starting at point A is described by limiting λ. One ray is obtained if λ ≥ 0, and the opposite ray comes from λ ≤ 0.
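A short Python sketch of the vector form: points on the line through a and b are a + λ(b − a), and restricting λ ≥ 0 selects the ray from a through b. The sample vectors are arbitrary two-dimensional examples.

```python
import numpy as np

def line_point(a, b, lam: float):
    """Point a + lam * (b - a) on the line through a and b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return a + lam * (b - a)

def on_ray_from_a(lam: float) -> bool:
    """True for points on the ray that starts at a and passes through b (lam >= 0)."""
    return lam >= 0.0

if __name__ == "__main__":
    a, b = (0.0, 0.0), (2.0, 1.0)
    for lam in (-1.0, 0.0, 0.5, 2.0):
        side = "on ray from a" if on_ray_from_a(lam) else "on the opposite ray"
        print(lam, line_point(a, b, lam), side)
```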
In Euclidean space
In three-dimensional space, a first degree equation in the variables x, y, and z defines a plane, so two such equations, provided the planes they give rise to are not parallel, define a line which is the intersection of the planes. More generally, in n-dimensional space n-1 first-degree equations in the n coordinate variables define a line under suitable conditions.
Writing the line through points a and b in the parametric form r = a + t(b − a), the direction of the line is from a (t = 0) to b (t = 1), or in other words, in the direction of the vector b − a. Different choices of a and b can yield the same line.
Equivalently for three points in a plane, the points are collinear if and only if the slope between one pair of points equals the slope between any other pair of points (in which case the slope between the remaining pair of points will equal the other slopes). By extension, k points in a plane are collinear if and only if any (k–1) pairs of points have the same pairwise slopes.
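A small sketch of the equal-slopes test described above; to avoid dividing by zero for vertical lines it compares cross-multiplied differences, which expresses the same condition without fractions. The sample points are arbitrary.

```python
def collinear(points, tol: float = 1e-9) -> bool:
    """True if all the given 2-D points lie on one line.

    Uses the cross-product form of "equal pairwise slopes":
    (y1 - y0) * (x - x0) == (y - y0) * (x1 - x0) for every further point (x, y).
    """
    if len(points) <= 2:
        return True
    (x0, y0), (x1, y1) = points[0], points[1]
    return all(
        abs((y1 - y0) * (x - x0) - (y - y0) * (x1 - x0)) <= tol
        for x, y in points[2:]
    )

if __name__ == "__main__":
    print(collinear([(0, 0), (1, 1), (2, 2), (3, 3)]))   # True
    print(collinear([(0, 0), (1, 1), (2, 2.5)]))          # False
```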
- The points a, b and c are collinear if and only if d(x,a) = d(c,a) and d(x,b) = d(c,b) implies x=c.
However, there are other notions of distance (such as the Manhattan distance) for which this property is not true.
Types of lines
In a sense, all lines in Euclidean geometry are equal, in that, without coordinates, one can not tell them apart from one another. However, lines may play special roles with respect to other objects in the geometry and be divided into types according to that relationship. For instance, with respect to a conic (a circle, ellipse, parabola, or hyperbola), lines can be:
- tangent lines, which touch the conic at a single point;
- secant lines, which intersect the conic at two points and pass through its interior;
- exterior lines, which do not meet the conic at any point of the Euclidean plane; or
- a directrix, whose distance from a point helps to establish whether the point is on the conic.
For more general algebraic curves, lines could also be:
- i-secant lines, meeting the curve in i points counted without multiplicity, or
- asymptotes, which a curve approaches arbitrarily closely without touching it.
With respect to triangles we have medians, altitudes, perpendicular bisectors and angle bisectors, as well as lines, such as the Euler line, that pass through several of a triangle's special points.
Parallel lines are lines in the same plane that never cross. Intersecting lines share a single point in common. Coincident lines coincide with each other: every point that is on either one of them is also on the other.
In projective geometry
In many models of projective geometry, the representation of a line rarely conforms to the notion of the "straight curve" as it is visualised in Euclidean geometry. In elliptic geometry we see a typical example of this. In the spherical representation of elliptic geometry, lines are represented by great circles of a sphere with diametrically opposite points identified. In a different model of elliptic geometry, lines are represented by Euclidean planes passing through the origin. Even though these representations are visually distinct, they satisfy all the properties (such as, two points determining a unique line) that make them suitable representations for lines in this geometry.
Ray
Given a line and any point A on it, we may consider A as decomposing this line into two parts. Each such part is called a ray (or half-line) and the point A is called its initial point. The point A is considered to be a member of the ray. Intuitively, a ray consists of those points on a line passing through A and proceeding indefinitely, starting at A, in one direction only along the line. However, in order to use this concept of a ray in proofs a more precise definition is required.
Given distinct points A and B, they determine a unique ray with initial point A. As two points define a unique line, this ray consists of all the points between A and B (including A and B) and all the points C on the line through A and B such that B is between A and C. This is, at times, also expressed as the set of all points C such that A is not between B and C. A point D, on the line determined by A and B but not in the ray with initial point A determined by B, will determine another ray with initial point A. With respect to the AB ray, the AD ray is called the opposite ray.
Thus, we would say that two different points, A and B, define a line and a decomposition of this line into the disjoint union of an open segment (A, B) and two rays, BC and AD (the point D is not drawn in the diagram, but is to the left of A on the line AB). These are not opposite rays since they have different initial points.
In Euclidean geometry two rays with a common endpoint form an angle.
The definition of a ray depends upon the notion of betweenness for points on a line. It follows that rays exist only for geometries for which this notion exists, typically Euclidean geometry or affine geometry over an ordered field. On the other hand, rays do not exist in projective geometry nor in a geometry over a non-ordered field, like the complex numbers or any finite field.
Line segment
A line segment is a part of a line that is bounded by two distinct end points and contains every point on the line between its end points. Depending on how the line segment is defined, either of the two end points may or may not be part of the line segment. Two or more line segments may have some of the same relationships as lines, such as being parallel, intersecting, or skew, but unlike lines they may be none of these, if they are coplanar and either do not intersect or are collinear.
The "shortness" and "straightness" of a line, interpreted as the property that the distance along the line between any two of its points is minimized (see triangle inequality), can be generalized and leads to the concept of geodesics in metric spaces.
See also
- Line coordinates
- Line segment
- Distance from a point to a line
- Distance between two lines
- Affine function
- Incidence (geometry)
- Plane (geometry)
Notes
- In (rather old) French: "La ligne est la première espece de quantité, laquelle a tant seulement une dimension à sçavoir longitude, sans aucune latitude ni profondité, & n'est autre chose que le flux ou coulement du poinct, lequel […] laissera de son mouvement imaginaire quelque vestige en long, exempt de toute latitude. […] La ligne droicte est celle qui est également estenduë entre ses poincts." Pages 7 and 8 of Les quinze livres des éléments géométriques d'Euclide Megarien, traduits de Grec en François, & augmentez de plusieurs figures & demonstrations, avec la corrections des erreurs commises és autres traductions, by Pierre Mardele, Lyon, MDCXLV (1645).
- Coxeter 1969, pg. 4
- Faber 1983, pg. 95
- Faber 1983, pg. 95
- Faber, Appendix A, p. 291.
- Faber, Part III, p. 95.
- Faber, Part III, p. 108.
- Faber, Appendix B, p. 300.
- Bôcher, Maxime (1915), Plane Analytic Geometry: With Introductory Chapters on the Differential Calculus, H. Holt, p. 44.
- Alessandro Padoa, Un nouveau système de définitions pour la géométrie euclidienne, International Congress of Mathematicians, 1900
- Bertrand Russell, The Principles of Mathematics, p.410
- Technically, the collineation group acts transitively on the set of lines.
- Faber, Part III, p. 108.
- On occasion we may consider a ray without its initial point. Such rays are called open rays, in contrast to the typical ray which would be said to be closed.
- Wylie, Jr. 1964, pg. 59, Definition 3
- Pedoe 1988, pg. 2
References
- Coxeter, H.S.M. (1969), Introduction to Geometry (2nd ed.), New York: John Wiley & Sons, ISBN 0-471-18283-4
- Faber, Richard L. (1983). Foundations of Euclidean and Non-Euclidean Geometry. New York: Marcel Dekker. ISBN 0-8247-1748-1.
- Pedoe, Dan (1988), Geometry: A Comprehensive Course, Mineola, NY: Dover, ISBN 0-486-65812-0
- Wylie, Jr., C. R. (1964), Foundations of Geometry, New York: McGraw-Hill, ISBN 0-07-072191-2
September 1, 2022
Primer: Orbital Debris
- Orbital debris – man-made objects in the Earth’s orbit that no longer serve a useful function – is becoming a more prevalent hazard for countries and commercial firms as they push to expand their satellite operations, threatening the viability of investments and future missions.
- The adoption of domestic and international guidelines to mitigate this hazard has reduced the average amount of orbital debris created per mission, but these measures are largely voluntary, meaning generators of debris face virtually no consequences for their behavior.
- Congress should work with the National Aeronautics and Space Administration and commercial partners to promote transparency and information sharing about the debris their activities create; continue to engage in international efforts to monitor and limit debris creation; experiment with market mechanisms to create incentives for private actors to abide by debris-mitigation guidelines; and embrace new technologies and innovations to remediate debris moving forward.
President Biden’s recent actions on climate change take a small step in addressing a global problem that will require cooperation and innovation to solve. World leaders face a similar challenge that will require strong cooperation in the so-called orbital commons: orbital debris, or “space junk.”
The space economy is taking off. Between the creation of low Earth orbit (LEO) satellite constellations, the expansion of space tourism, and plans to build private space stations and research facilities, firms are vying to establish permanent fixtures among the stars. Orbital launch attempts around the world have increased significantly over the past five years and show no signs of slowing down. Yet this increase in activity is threatened by orbital debris.
Orbital debris is any human-made object in the Earth’s orbit that no longer serves a useful function, such as nonfunctional spacecraft, abandoned launch vehicle stages, mission-related debris, and fragmentation debris. Mission-related and fragmentation debris include such things as paint chips, segments of fuel tanks and batteries, scraps of metal, detached launch hardware, and tools or waste discarded from missions. Because of their speed and location in LEO, and increasing presence in geosynchronous equatorial orbit (GEO), each piece of debris represents a potentially catastrophic risk to safety and functionality. LEO is home to thousands of satellites that are critical for broadband internet, communications networks, infrared imaging, and military surveillance. GEO is farther away from Earth, but houses satellites supporting electronic intelligence, optical imaging, global positioning and navigation, and commercial broadcasting. Put simply, a piece of debris could do irreparable damage to satellites that provide broadband internet, real-time GPS navigation, and even the International Space Station. If left unresolved, orbital debris accumulation poses a serious threat to future economic growth and national security.
While there are several international treaties, memorandums, and guidelines for minimizing orbital debris accumulation, they are voluntary and thus lack the teeth to hold parties accountable. The adoption of orbital debris mitigation guidelines domestically and internationally has reduced the average amount of debris produced per mission, but the significant increase in launches since the mid-2000s has increased the total amount of debris in orbit. This accumulation threatens future investment and mission safety. The U.S. has been a leader in this area and should push for greater adherence to existing guidelines and explore new initiatives to promote cooperation and collaboration to minimize debris creation.
Congress should work with the National Aeronautics and Space Administration (NASA) and commercial partners to improve mitigation practices as well as promote research and development (R&D) in novel remediation technologies before debris accumulation becomes an insurmountable challenge. Some courses of action could include promoting greater information sharing and transparency about the debris operations create, continuing to work independently and with other nations to establish and enforce debris mitigation guidelines, instituting market mechanisms to incentivize better mitigation practices and technology, and investing in and testing innovations to actively remove debris from LEO.
This primer discusses the state of orbital debris, the current regulatory environment governing orbital debris, and recommendations for legislators on ways to better mitigate and remediate orbital debris going forward.
Orbital Debris: A Crash Course
Orbital debris is a type of space debris, specifically man-made debris that occupies space in orbit around the Earth. Because orbital debris is a byproduct of human activity in space, it is largely concentrated in LEO (160 to 2,000 km above the surface), home to satellites critical for terrestrial communication and transportation networks. Yet debris of all sizes has steadily increased in GEO (roughly 35,800 km above the surface) as well. GEO’s higher altitude makes it a destination for vehicles and satellites retired at the end of their missions. But the greater distance from Earth makes mitigation and remediation difficult and costly. While the composition of debris in each region varies, both are experiencing an increase in objects launched and debris accumulation, imperiling operations and increasing uncertainty for future space developments.
NASA describes LEO as an “orbital space junk yard,” and debris continues to accumulate there, in GEO, and between the two. There are over 26,000 “large” pieces of debris, and more than half a million “smaller” fragments of debris in orbit. Adding to these are another 100 million pieces of debris one millimeter (about 0.04 inches) and larger in diameter that go untracked in LEO and GEO. Currently, NASA tracks and catalogues larger pieces of debris with help from the Department of Defense (DoD). While NASA has improved its debris assessment software and models, millions of untracked debris fragments pose a compounding threat to active and future operations.
Free Fallin’: Mitigation Guidelines to Prevent Orbital Debris and Their Shortcomings
The United States has pioneered debris mitigation efforts at the domestic and international level but with limited success. The United States was one of the original signatories of the Outer Space Treaty (OST), along with the Soviet Union and the United Kingdom, in 1967. Other than Russia, no other country has launched as much payload into orbit as the United States. The United States’ role in the proliferation of debris led NASA to establish the Orbital Debris Program Office in 1979, which monitors the orbital debris environment and works to establish technical consensus around debris mitigation strategies and guidelines. In 1995, the United States published a set of orbital debris mitigation guidelines, which became the “U.S. Government Orbital Debris Mitigation Standard Practices (USG ODMSP)” in 1997. This guidance became the framework for the Inter-Agency Space Debris Coordination Committee (IADC) guidelines for mitigating orbital debris, which were adopted by thirteen countries in 2002. These efforts were worthwhile, and as seen in Figure 1, the number of fragmentation events per launch declined in the mid-’90s and has remained low while payloads and launches increased. But this has not prevented the accumulation of debris, and as more objects are launched into orbit, the chances of catastrophic collisions steadily increase, as illustrated in Figure 2.
Russia and China have contributed significantly to the growth of debris in recent years. In 2007, a Chinese anti-satellite (ASAT) test, described as “the most devastating impact on the LEO environment,” created more than 2,700 pieces of catalogued debris. In 2009, a Russian spacecraft crashed into an American commercial satellite, creating almost 2,000 pieces of catalogued debris. The OST established that regardless of purpose, the state of a mission’s origin is liable for any damage an operation causes while in space. Neither the treaty nor any other set of guidelines, however, includes any avenues to mediate these issues or punish violators. Both countries adopted the IADC guidelines in 2002, but the lack of enforcement mechanisms meant impacted parties had little to no recourse. These two events have increased the large orbital debris population in LEO by approximately 70 percent, represented in Figure 3. While China has moved away from debris-generating ASAT tests, a Russian ASAT test in 2021 created over 1,500 pieces of large space debris, increasing the number of avoidance maneuvers satellites and other vehicles in LEO perform. These instances illustrate the limits of mitigation guidelines alone, especially when countries that violate the guidelines face no repercussions.
Beyond countries, businesses’ proliferation of private satellites increases crowding and the potential for more collisions. If there are no consequences for creating debris, private companies have no incentive to do more than the bare minimum to comply with existing guidelines. Further, the current trend of debris proliferation is unlikely to reverse as market entrants and operations grow. In 2021, there were several instances of near collisions in LEO involving Starlink’s constellation of satellites. SpaceX, which operates Starlink, set a record for the number of rockets launched this year and, with its recent acquisition, shows no sign of slowing down, as seen in Figure 4. With this increase in activity, the chances for debris generation rise, a problem that can have impacts in orbit and on the ground. As commercial space launches and operations continue to account for a larger share of activity, a world with more collisions and debris will become a reality under the status quo.
Ready for Liftoff: Recommendations for Congress and NASA
Congress and NASA have several options to address orbital debris. Those options follow two paths: mitigation (preventing the accumulation of debris preemptively) and remediation (actively removing debris from orbit). Both paths include tradeoffs that could hamper meaningful progress if not weighed correctly. But both mitigation and remediation are needed to tackle the threats posed by orbital debris.
Regarding domestic mitigation, Congress could first focus on transparency, as information sharing is critical to building trust, planning missions, and holding parties accountable for their operations. One researcher suggests releasing information on waivers given to NASA and DoD that exempt certain missions from orbital debris guidelines. Congress could also examine the current process for granting waivers and ensure it does not incentivize riskier operations. If the rules are interfering with innovation or national security, then relevant agencies and officials should review those rules and draft new ones. If not, then the onus should be on NASA and DoD to explain why they should continue receiving waivers.
As another component of increasing transparency, NASA could look to partner with commercial operators to bolster its logistics and orbital debris tracking capabilities. Many companies in the United States and around the world are competing to provide more accurate and up-to-date logistical information on orbital location and collision avoidance. By partnering with private firms, Congress and NASA could receive better information while rewarding innovative companies and novel technology.
Beyond domestic mitigation efforts, Congress and NASA should continue to engage with the European Space Agency (ESA) and other partners on international measures. One avenue that shows promise is the emergence of the Space Sustainability Rating Initiative (SSR) led by the ESA, World Economic Forum, and research partners in the United States and Europe. The rating system will score the sustainability of space flights based on factors such as data sharing, measures taken to avoid collisions and de-orbit satellites, and features making active debris removal (ADR) feasible. The rating could act as an incentive for companies and nations, leading to lower insurance costs or more funding. Several companies in the commercial space and aviation market support the SSR effort and have expressed interest in participating. Congress could have NASA evaluate the rating system to see if it improves upon current efforts to promote mitigation, and if so, consider partnering with the initiative. The orbital environment is shared among all nations and promoting effective collaboration could lead to significant benefits.
Finally, Congress could incentivize operators to prepare for debris mitigation and de-orbit measures, as well as force companies to bear responsibility for adding debris to the orbital environment through fines or other penalties. As an example, Congress could levy fines for debris-generating satellites, create a deposit system under which a company receives its deposit after its mission has safely concluded, or utilize cap-and-trade credits for debris like those used for carbon and greenhouse gas emissions. Another path could be to create “bounties” for debris, using an auction or bidding system to allow companies to compete for junk. These ideas incentivize companies to improve debris mitigation and ascribe a “cost” to the tons of debris orbiting Earth. By leading on these efforts, the United States can put itself in a position to continue to attract innovators and benefit from the unintended gains created by new technology.
Turning to remediation, there are companies and researchers vying to develop technologies capable of ADR. The White House’s 2021 and 2022 National Orbital Debris Research and Development Plans highlight the need for R&D investment in ADR technologies. A provision in the recently passed CHIPS and Science Act of 2022 increases general NASA Science Directorate funding available for government, university, or private R&D projects, and other provisions include funding for research on innovation in aviation technology, materials, and design. These funds can be directed to companies working on ADR technology, new satellites and rockets better suited to avoid debris, and materials for spacecraft and satellites that are more resilient and create fewer pieces of debris in orbit. Congress and NASA could work to direct funding toward ADR and technologies that create less debris.
Orbital debris is becoming a more prevalent hazard as countries and commercial firms push to expand their satellite operations, threatening the viability of investments and future missions. Current regulations have allowed debris to accumulate in near-Earth orbit without penalizing those who generate it. To mitigate the accumulation of debris, Congress should embrace a variety of approaches. One is having NASA engage with public and private counterparts operating in near-Earth orbit to share information and pursue agreements that actively enforce mitigation best practices. Another is utilizing market forces to price debris generation, so operators bear the cost of generating debris while being rewarded for better habits and new solutions. Finally, promoting innovation to remediate debris through novel technologies is critical. By taking an all-of-the-above approach, Congress can ensure the United States is well positioned to be a leader in space for years to come.
The Peasants' Revolt, also named Wat Tyler's Rebellion or the Great Rising, was a major uprising across large parts of England in 1381. The revolt had various causes, including the socio-economic and political tensions generated by the Black Death in the 1340s, the high taxes resulting from the conflict with France during the Hundred Years' War, and instability within the local leadership of London. The final trigger for the revolt was the intervention of a royal official, John Bampton, in Essex on 30 May 1381. His attempts to collect unpaid poll taxes in Brentwood ended in a violent confrontation, which rapidly spread across the south-east of the country. A wide spectrum of rural society, including many local artisans and village officials, rose up in protest, burning court records and opening the local gaols. The rebels sought a reduction in taxation, an end to the system of unfree labour known as serfdom, and the removal of the King's senior officials and law courts.
Richard II meets the rebels on 14 June 1381 in a miniature from a 1470s copy of Jean Froissart's Chronicles.
| | Rebel forces | Royal government |
| --- | --- | --- |
| Commanders and leaders | Wat Tyler; John Ball | King Richard II; Sir William Walworth; Bishop Henry le Despenser |
| Casualties and losses | At least 1,500 | Unknown |
Inspired by the sermons of the radical cleric John Ball and led by Wat Tyler, a contingent of Kentish rebels advanced on London. They were met at Blackheath by representatives of the royal government, who unsuccessfully attempted to persuade them to return home. King Richard II, then aged 14, retreated to the safety of the Tower of London, but most of the royal forces were abroad or in northern England. On 13 June, the rebels entered London and, joined by many local townsfolk, attacked the gaols, destroyed the Savoy Palace, set fire to law books and buildings in the Temple, and killed anyone associated with the royal government. The following day, Richard met the rebels at Mile End and acceded to most of their demands, including the abolition of serfdom. Meanwhile, rebels entered the Tower of London, killing the Lord Chancellor and the Lord High Treasurer, whom they found inside.
On 15 June, Richard left the city to meet Tyler and the rebels at Smithfield. Violence broke out, and Richard's party killed Tyler. Richard defused the tense situation long enough for London's mayor, William Walworth, to gather a militia from the city and disperse the rebel forces. Richard immediately began to re-establish order in London and rescinded his previous grants to the rebels. The revolt had also spread into East Anglia, where the University of Cambridge was attacked and many royal officials were killed. Unrest continued until the intervention of Henry le Despenser, who defeated a rebel army at the Battle of North Walsham on 25 or 26 June. Troubles extended north to York, Beverley and Scarborough, and as far west as Bridgwater in Somerset. Richard mobilised 4,000 soldiers to restore order. Most of the rebel leaders were tracked down and executed; by November, at least 1,500 rebels had been killed.
The Peasants' Revolt has been widely studied by academics. Late 19th-century historians used a range of sources from contemporary chroniclers to assemble an account of the uprising, and these were supplemented in the 20th century by research using court records and local archives. Interpretations of the revolt have shifted over the years. It was once seen as a defining moment in English history, but modern academics are less certain of its impact on subsequent social and economic history. The revolt heavily influenced the course of the Hundred Years' War, by deterring later Parliaments from raising additional taxes to pay for military campaigns in France. The revolt has been widely used in socialist literature, including by the author William Morris, and remains a potent political symbol for the political left, informing the arguments surrounding the introduction of the Community Charge in the United Kingdom during the 1980s.
Background and causes
The Peasants' Revolt was fed by the economic and social upheaval of the 14th century. At the start of the century, the majority of English people worked in the countryside, as part of a sophisticated economy that fed the country's towns and cities and supported an extensive international trade. Across much of England, production was organised around manors, controlled by local lords – including the gentry and the Church – and governed through a system of manorial courts. Some of the population were unfree serfs, who had to work on their lords' lands for a period each year, although the balance of free and unfree varied across England, and in the south-east there were relatively few serfs. Some serfs were born unfree and could not leave their manors to work elsewhere without the consent of the local lord; others accepted limitations on their freedom as part of the tenure agreement for their farmland. Population growth led to pressure on the available agricultural land, increasing the power of local landowners.
In 1348 a plague known as the Black Death crossed from mainland Europe into England, rapidly killing an estimated 50 per cent of the population. After an initial period of economic shock, England began to adapt to the changed economic situation. The death rate among the peasantry meant that suddenly land was relatively plentiful and manpower in much shorter supply. Labourers could charge more for their work and, in the consequent competition for labour, wages were driven sharply upwards. In turn, the profits of landowners were eroded. The trading, commercial and financial networks in the towns disintegrated.
The authorities responded to the chaos with emergency legislation; the Ordinance of Labourers was passed in 1349, and the Statute of Labourers in 1351. These attempted to fix wages at pre-plague levels, making it a crime to refuse work or to break an existing contract, imposing fines on those who transgressed. The system was initially enforced through special Justices of Labourers and then, from the 1360s onwards, through the normal Justices of the Peace, typically members of the local gentry. Although in theory these laws applied to both labourers seeking higher wages and to employers tempted to outbid their competitors for workers, they were in practice applied only to labourers, and then in a rather arbitrary fashion. The legislation was strengthened in 1361, with the penalties increased to include branding and imprisonment. The royal government had not intervened in this way before, nor allied itself with the local landowners in quite such an obvious or unpopular way.
Over the next few decades, economic opportunities increased for the English peasantry. Some labourers took up specialist jobs that would have previously been barred to them, and others moved from employer to employer, or became servants in richer households. These changes were keenly felt across the south-east of England, where the London market created a wide range of opportunities for farmers and artisans. Local lords had the right to prevent serfs from leaving their manors, but when serfs found themselves blocked in the manorial courts, many simply left to work illegally on manors elsewhere. Wages continued to rise, and between the 1340s and the 1380s the purchasing power of rural labourers increased by around 40 percent. As the wealth of the lower classes increased, Parliament brought in fresh laws in 1363 to prevent them from consuming expensive goods formerly only affordable by the elite. These sumptuary laws proved unenforceable, but the wider labour laws continued to be firmly applied.
War and finance
Another factor in the revolt of 1381 was the conduct of the war with France. In 1337 Edward III of England had pressed his claims to the French throne, beginning a long-running conflict that became known as the Hundred Years' War. Edward had initial successes, but his campaigns were not decisive. Charles V of France became more active in the conflict after 1369, taking advantage of his country's greater economic strength to commence cross-Channel raids on England. By the 1370s, England's armies on the continent were under huge military and financial pressure; the garrisons in Calais and Brest alone, for example, were costing £36,000 a year to maintain, while military expeditions could consume £50,000 in only six months.[nb 1] Edward died in 1377, leaving the throne to his grandson, Richard II, then only ten years old.
Richard's government was formed around his uncles, most prominently the rich and powerful John of Gaunt, and many of his grandfather's former senior officials. They faced the challenge of financially sustaining the war in France. Taxes in the 14th century were raised on an ad hoc basis through Parliament, then comprising the Lords, the titled aristocracy and clergy; and the Commons, the representatives of the knights, merchants and senior gentry from across England. These taxes were typically imposed on a household's movable possessions, such as their goods or stock. The raising of these taxes affected the members of the Commons much more than the Lords. To complicate matters, the official statistics used to administer the taxes pre-dated the Black Death and, since the size and wealth of local communities had changed greatly since the plague, effective collection had become increasingly difficult.
Just before Edward's death, Parliament introduced a new form of taxation called the poll tax, which was levied at the rate of four pence on every person over the age of 14, with a deduction for married couples.[nb 2] Designed to spread the cost of the war over a broader economic base than previous tax levies, this round of taxation proved extremely unpopular but raised £22,000. The war continued to go badly and, despite raising some money through forced loans, the Crown returned to Parliament in 1379 to request further funds. The Commons were supportive of the young King, but had concerns about the amounts of money being sought and the way this was being spent by the King's counsellors, whom they suspected of corruption. A second poll tax was approved, this time with a sliding scale of taxes against seven different classes of English society, with the upper classes paying more in absolute terms. Widespread evasion proved to be a problem, and the tax only raised £18,600 — far short of the £50,000 that had been hoped for.
In November 1380, Parliament was called together again in Northampton. Archbishop Simon Sudbury, the new Lord Chancellor, updated the Commons on the worsening situation in France, a collapse in international trade, and the risk of the Crown having to default on its debts. The Commons were told that the colossal sum of £160,000 was now required in new taxes, and arguments ensued between the royal council and Parliament about what to do next. Parliament passed a third poll tax (this time on a flat-rate basis of 12 pence on each person over 15, with no allowance made for married couples) which they estimated would raise £66,666. The third poll tax was highly unpopular and many in the south-east evaded it by refusing to register. The royal council appointed new commissioners in March 1381 to interrogate local village and town officials in an attempt to find those who were refusing to comply. The extraordinary powers and interference of these teams of investigators in local communities, primarily in the south-east and east of England, raised still further the tensions surrounding the taxes.
The decades running up to 1381 were a rebellious, troubled period. London was a particular focus of unrest, and the activities of the city's politically active guilds and fraternities often alarmed the authorities. Londoners resented the expansion of the royal legal system in the capital, in particular the increased role of the Marshalsea Court in Southwark, which had begun to compete with the city authorities for judicial power in London.[nb 3] The city's population also resented the presence of foreigners, Flemish weavers in particular. Londoners detested John of Gaunt because he was a supporter of the religious reformer John Wycliffe, whom the London public regarded as a heretic. John of Gaunt was also engaged in a feud with the London elite and was rumoured to be planning to replace the elected mayor with a captain, appointed by the Crown. The London elite were themselves fighting out a vicious, internal battle for political power. As a result, in 1381 the ruling classes in London were unstable and divided.
Rural communities, particularly in the south-east, were unhappy with the operation of serfdom and the use of the local manorial courts to exact traditional fines and levies, not least because the same landowners who ran these courts also often acted as enforcers of the unpopular labour laws or as royal judges. Many of the village elites refused to take up positions in local government and began to frustrate the operation of the courts. Animals seized by the courts began to be retaken by their owners, and legal officials were assaulted. Some started to advocate the creation of independent village communities, respecting traditional laws but separate from the hated legal system centred in London. As the historian Miri Rubin describes, for many, "the problem was not the country's laws, but those charged with applying and safeguarding them".
Concerns were raised about these changes in society. William Langland wrote the poem Piers Plowman in the years before 1380, praising peasants who respected the law and worked hard for their lords, but complaining about greedy, travelling labourers demanding higher wages. The poet John Gower warned against a future revolt in both Mirour de l'Omme and Vox Clamantis. There was a moral panic about the threat posed by newly arrived workers in the towns and the possibility that servants might turn against their masters. New legislation was introduced in 1359 to deal with migrants, existing conspiracy laws were more widely applied and the treason laws were extended to include servants or wives who betrayed their masters and husbands. By the 1370s, there were fears that if the French invaded England, the rural classes might side with the invaders.
The discontent began to give way to open protest. In 1377, the "Great Rumour" occurred in south-east and south-west England. Rural workers organised themselves and refused to work for their lords, arguing that, according to the Domesday Book, they were exempted from such requests. The workers made unsuccessful appeals to the law courts and the King. There were also widespread urban tensions, particularly in London, where John of Gaunt narrowly escaped being lynched. The troubles increased again in 1380, with protests and disturbances across northern England and in the western towns of Shrewsbury and Bridgwater. An uprising occurred in York, during which John de Gisborne, the city's mayor, was removed from office, and fresh tax riots followed in early 1381. There was a great storm in England during May 1381, which many felt prophesied future change and upheaval, adding further to the disturbed mood.
Outbreak of revolt
Essex and Kent
The revolt of 1381 broke out in Essex, following the arrival of John Bampton to investigate non-payment of the poll tax on 30 May. Bampton was a member of Parliament, a Justice of the Peace and well-connected with royal circles. He based himself in Brentwood and summoned representatives from the neighbouring villages of Corringham, Fobbing and Stanford-le-Hope to explain and make good the shortfalls on 1 June. The villagers appear to have arrived well-organised, and armed with old bows and sticks. Bampton first interrogated the people of Fobbing, whose representative, Thomas Baker, declared that his village had already paid their taxes, and that no more money would be forthcoming. When Bampton and two sergeants attempted to arrest Baker, violence broke out. Bampton escaped and retreated to London, but three of his clerks and several of the Brentwood townsfolk who had agreed to act as jurors were killed. Robert Bealknap, the Chief Justice of the Court of Common Pleas, who was probably already holding court in the area, was empowered to arrest and deal with the perpetrators.
By the next day, the revolt was rapidly growing. The villagers spread the news across the region, and John Geoffrey, a local bailiff, rode between Brentwood and Chelmsford, rallying support. On 4 June, the rebels gathered at Bocking, where their future plans seem to have been discussed. The Essex rebels, possibly a few thousand strong, advanced towards London, some probably travelling directly and others via Kent. One group, under the leadership of John Wrawe, a former chaplain, marched north towards the neighbouring county of Suffolk, with the intention of raising a revolt there.
Revolt also flared in neighbouring Kent. Sir Simon de Burley, a close associate of both Edward III and the young Richard, had claimed that a man in Kent, called Robert Belling, was an escaped serf from one of his estates. Burley sent two sergeants to Gravesend, where Belling was living, to reclaim him. Gravesend's local bailiffs and Belling tried to negotiate a solution under which Burley would accept a sum of money in return for dropping his case, but this failed and Belling was taken away to be imprisoned at Rochester Castle. A furious group of local people gathered at Dartford, possibly on 5 June, to discuss the matter. From there the rebels travelled to Maidstone, where they stormed the gaol, and then on to Rochester on 6 June. Faced by the angry crowds, the constable in charge of Rochester Castle surrendered it without a fight and Belling was freed.
Some of the Kentish crowds now dispersed, but others continued. From this point, they appear to have been led by Wat Tyler, whom the Anonimalle Chronicle suggests was elected their leader at a large gathering at Maidstone on 7 June. Relatively little is known about Tyler's former life; chroniclers suggest that he was from Essex, had served in France as an archer and was a charismatic and capable leader. Several chroniclers believe that he was responsible for shaping the political aims of the revolt. Some also mention a Jack Straw as a leader among the Kentish rebels during this phase in the revolt, but it is uncertain if this was a real person, or a pseudonym for Wat Tyler or John Wrawe.[nb 4]
Tyler and the Kentish men advanced to Canterbury, entering the walled city and castle without resistance on 10 June. The rebels deposed the absent Archbishop of Canterbury, Sudbury, and made the cathedral monks swear loyalty to their cause. They attacked properties in the city with links to the hated royal council, and searched the city for suspected enemies, dragging the suspects out of their houses and executing them. The city gaol was opened and the prisoners freed. Tyler then persuaded a few thousand of the rebels to leave Canterbury and advance with him on London the next morning.
March on the capital
The Kentish advance on London appears to have been coordinated with the movement of the rebels in Essex, Suffolk and Norfolk. Their forces were armed with weapons including sticks, battle axes, old swords and bows.[nb 5] Along their way, they encountered Lady Joan, the King's mother, who was travelling back to the capital to avoid being caught up in the revolt; she was mocked but otherwise left unharmed. The Kentish rebels reached Blackheath, just south-east of the capital, on 12 June.[nb 6]
Word of the revolt reached the King at Windsor Castle on the night of 10 June. He travelled by boat down the River Thames to London the next day, taking up residence in the powerful fortress of the Tower of London for safety, where he was joined by his mother, Archbishop Sudbury, the Lord High Treasurer Sir Robert Hales, the Earls of Arundel, Salisbury and Warwick and several other senior nobles. A delegation, headed by Thomas Brinton, the Bishop of Rochester, was sent out from London to negotiate with the rebels and persuade them to return home.
At Blackheath, John Ball gave a famous sermon to the assembled Kentishmen. Ball was a well-known priest and radical preacher from Kent, who was by now closely associated with Tyler. Chroniclers' accounts vary as to how he came to be involved in the revolt; he may have been released from Maidstone gaol by the crowds, or might have been already at liberty when the revolt broke out. Ball rhetorically asked the crowds "When Adam delved and Eve span, who was then a gentleman?" and promoted the rebel slogan "With King Richard and the true commons of England". The phrases emphasised the rebel opposition to the continuation of serfdom and to the hierarchies of the Church and State that separated the subject from the King, while stressing that they were loyal to the monarchy and, unlike the King's advisers, were "true" to Richard. The rebels rejected proposals from the Bishop of Rochester that they should return home, and instead prepared to march on.
Discussions took place in the Tower of London about how to deal with the revolt. The King had only a few troops at hand, in the form of the castle's garrison, his immediate bodyguard and, at most, several hundred soldiers.[nb 7] Many of the more experienced military commanders were in France, Ireland and Germany, and the nearest major military force was in the north of England, guarding against a potential Scottish invasion. Resistance in the provinces was also complicated by English law, which stated that only the King could summon local militias or lawfully execute rebels and criminals, leaving many local lords unwilling to attempt to suppress the uprisings on their own authority.
Since the Blackheath negotiations had failed, the decision was taken that the King himself should meet the rebels, at Greenwich, on the south side of the Thames. Guarded by four barges of soldiers, Richard sailed from the Tower on the morning of 13 June, where he was met on the other side by the rebel crowds. The negotiations failed, as Richard was unwilling to come ashore and the rebels refused to enter discussions until he did. Richard returned across the river to the Tower.
Events in London
Entry to the city
The rebels began to cross from Southwark onto London Bridge on the afternoon of 13 June. The defences on London Bridge were opened from the inside, either out of sympathy with the rebel cause or out of fear, and the rebels advanced into the city.[nb 8] At the same time, the rebel force from Essex made its way towards Aldgate on the north side of the city. The rebels swept west through the centre of the city, and Aldgate was opened to let the rest of the rebels in.
The Kentish rebels had assembled a wide-ranging list of people whom they wanted the King to hand over for execution. It included national figures, such as John of Gaunt, Archbishop Sudbury and Hales; other key members of the royal council; officials, such as Belknap and Bampton who had intervened in Kent; and other hated members of the wider royal circle. When they reached the Marshalsea Prison in Southwark, they tore it apart. By now the Kent and Essex rebels had been joined by many rebellious Londoners. The Fleet and Newgate Prisons were attacked by the crowds, and the rebels also targeted houses belonging to Flemish immigrants.
On the north side of London, the rebels approached Smithfield and Clerkenwell Priory, the headquarters of the Knights Hospitaller which was headed by Hales. The priory was destroyed, along with the nearby manor. Heading west along Fleet Street, the rebels attacked the Temple, a complex of legal buildings and offices owned by the Hospitallers. The contents, books and paperwork were brought out and burned in the street, and the buildings systematically demolished. Meanwhile, John Fordham, the Keeper of the Privy Seal and one of the men on the rebels' execution list, narrowly escaped when the crowds ransacked his accommodation but failed to notice he was still in the building.
Next to be attacked along Fleet Street was the Savoy Palace, a huge, luxurious building belonging to John of Gaunt. According to the chronicler Henry Knighton it contained "such quantities of vessels and silver plate, without counting the parcel-gilt and solid gold, that five carts would hardly suffice to carry them"; official estimates placed the value of the contents at around £10,000. The interior was systematically destroyed by the rebels, who burnt the soft furnishings, smashed the precious metal work, crushed the gems, set fire to the Duke's records and threw the remains into the Thames and the city drains. Almost nothing was stolen by the rebels, who declared themselves to be "zealots for truth and justice, not thieves and robbers". The remains of the building were then set alight. In the evening, rebel forces gathered outside the Tower of London, from where the King watched the fires burning across the city.
Taking the Tower of London
On the morning of 14 June, the crowd continued west along the Thames, burning the houses of officials around Westminster and opening the Westminster gaol. They then moved back into central London, setting fire to more buildings and storming Newgate Prison. The hunt for Flemings continued, and those with Flemish-sounding accents were killed, including the royal adviser, Richard Lyons.[nb 9] In one city ward, the bodies of 40 executed Flemings were piled up in the street, and at the Church of St Martin Vintry, popular with the Flemish, 35 of the community were killed. Historian Rodney Hilton argues that these attacks may have been coordinated by the weavers' guilds of London, who were commercial competitors of the Flemish weavers.
Isolated inside the Tower, the royal government was in a state of shock at the turn of events. The King left the castle that morning and made his way to negotiate with the rebels at Mile End in east London, taking only a very small bodyguard with him. The King left Sudbury and Hales behind in the Tower, either for their own safety or because Richard had decided it would be safer to distance himself from his unpopular ministers. Along the way, several Londoners accosted the King to complain about alleged injustices.
It is uncertain who spoke for the rebels at Mile End, and Wat Tyler may not have been present on this occasion, but they appear to have put forward their various demands to the King, including the surrender of the hated officials on their lists for execution; the abolition of serfdom and unfree tenure; "that there should be no law within the realm save the law of Winchester"; and a general amnesty for the rebels. It is unclear precisely what was meant by the law of Winchester, but it probably referred to the rebel ideal of self-regulating village communities.[nb 10] Richard issued charters announcing the abolition of serfdom, which immediately began to be disseminated around the country. He declined to hand over any of his officials, apparently instead promising that he would personally implement any justice that was required.
While Richard was at Mile End, the Tower was taken by the rebels. This force, separate from those operating under Tyler at Mile End, approached the castle, possibly in the late morning.[nb 11] The gates were open to receive Richard on his return and a crowd of around 400 rebels entered the fortress, encountering no resistance, possibly because the guards were terrified by them.
Once inside, the rebels began to hunt down their key targets, and found Archbishop Sudbury and Robert Hales in the chapel of the White Tower. Along with William Appleton, John of Gaunt's physician, and John Legge, a royal sergeant, they were taken out to Tower Hill and beheaded. Their heads were paraded around the city, before being affixed to London Bridge. The rebels found John of Gaunt's son, the future Henry IV, and were about to execute him as well, when John Ferrour, one of the royal guards, successfully interceded on his behalf. The rebels also discovered Lady Joan and Joan Holland, Richard's sister, in the castle but let them go unharmed after making fun of them. The castle was thoroughly looted of armour and royal paraphernalia.
The historian Sylvia Federico, translating Latin court documents from The National Archives, named Johanna Ferrour as the leader of this force that took the castle. Alongside her husband, she is described as "chief perpetrator and leader of rebellious evildoers from Kent". She arrested Sudbury and dragged him to the chopping block, ordering his beheading as well as the death of the treasurer, Robert Hales. It has been speculated that her name does not appear in the work of contemporary chroniclers as they may have felt that a female leader would be perceived as trivialising the revolt. From then onwards, however, comments Marc Boone, women were more regularly accepted in contemporary literature as playing a role in societal violence.
In the aftermath of the attack, Richard did not return to the Tower but instead travelled from Mile End to the Great Wardrobe, one of his royal houses in Blackfriars, part of south-west London. There he appointed the military commander Richard FitzAlan, the Earl of Arundel, to replace Sudbury as Chancellor, and began to make plans to regain an advantage over the rebels the following day. Many of the Essex rebels now began to disperse, content with the King's promises, leaving Tyler and the Kentish forces the most significant faction in London. Tyler's men moved around the city that evening, seeking out and killing John of Gaunt's employees, foreigners and anyone associated with the legal system.
On 15 June the royal government and the remaining rebels, who were unsatisfied with the charters granted the previous day, agreed to meet at Smithfield, just outside the city walls. London remained in confusion, with various bands of rebels roaming the city independently. Richard prayed at Westminster Abbey, before setting out for the meeting in the late afternoon. The chronicler accounts of the encounter all vary on matters of detail, but agree on the broad sequence of events. The King and his party, at least 200 strong and including men-at-arms, positioned themselves outside St Bartholomew's Priory to the east of Smithfield, and the thousands of rebels massed along the western end.[nb 12]
Richard probably called Tyler forwards from the crowd to meet him, and Tyler greeted the King with what the royal party considered excessive familiarity, terming Richard his "brother" and promising him his friendship. Richard queried why Tyler and the rebels had not yet left London following the signing of the charters the previous day, but this brought an angry rebuke from Tyler, who requested that a further charter be drawn up. The rebel leader rudely demanded refreshment and, once this had been provided, attempted to leave.
An argument then broke out between Tyler and some of the royal servants. The Mayor of London, William Walworth, stepped forward to intervene, Tyler made some motion towards the King, and the royal soldiers leapt in. Either Walworth or Richard ordered Tyler to be arrested, Tyler attempted to attack the Mayor, and Walworth responded by stabbing Tyler. Ralph Standish, a royal squire, then repeatedly stabbed Tyler with his sword, mortally injuring him.
The situation was now precarious and violence appeared likely as the rebels prepared to unleash a volley of arrows. Richard rode forward towards the crowd and persuaded them to follow him away from Smithfield, to Clerkenwell Fields, defusing the situation. Walworth meanwhile began to regain control, backed by reinforcements from the city. Tyler's head was cut off and displayed on a pole and, with their leader dead and the royal government now backed by the London militia, the rebel movement began to collapse. Richard promptly knighted Walworth and his leading supporters for their services.
While the revolt was unfolding in London, John Wrawe led his force into Suffolk. Wrawe had considerable influence over the development of the revolt across eastern England, where there may have been almost as many rebels as in the London revolt. The authorities put up very little resistance to the revolt: the major nobles failed to organise defences, key fortifications fell easily to the rebels and the local militias were not mobilised. As in London and the south-east, this was in part due to the absence of key military leaders and the nature of English law, but any locally recruited men might also have proved unreliable in the face of a popular uprising.
On 12 June, Wrawe and his followers attacked Sir Richard Lyons' property at Overhall, advancing on to Cavendish and Bury St Edmunds in west Suffolk the next day and gathering further support as they went. John Cambridge, the Prior of the wealthy Bury St Edmunds Abbey, was disliked in the town, and Wrawe allied himself with the townspeople and stormed the abbey. The Prior escaped, but was found two days later and beheaded. A small band of rebels marched north to Thetford to extort protection money from the town, and another group tracked down Sir John Cavendish, the Chief Justice of the King's Bench and Chancellor of the University of Cambridge. Cavendish was caught in Lakenheath and killed. John Battisford and Thomas Sampson independently led a revolt near Ipswich on 14 June. They took the town without opposition and looted the properties of the archdeacon and local tax officials. The violence spread out further, with attacks on many properties and the burning of the local court records. One official, Edmund Lakenheath, was forced to flee from the Suffolk coast by boat.
Revolt began to stir in St Albans in Hertfordshire late on 13 June, when news broke of the events in London. There had been long-running disagreements in St Albans between the town and the local abbey, which had extensive privileges in the region. On 14 June, protesters met with the Abbot, Thomas de la Mare, and demanded their freedom from the abbey. A group of townsmen under the leadership of William Grindecobbe travelled to London, where they appealed to the King for the rights of the abbey to be abolished. Wat Tyler, then still in control of the city, granted them authority in the meantime to take direct action against the abbey. Grindecobbe and the rebels returned to St Albans, where they found the Prior had already fled. The rebels broke open the abbey gaol, destroyed the fences marking out the abbey lands and burnt the abbey records in the town square. They then forced Thomas de la Mare to surrender the abbey's rights in a charter on 16 June. The revolt against the abbey spread out over the next few days, with abbey property and financial records being destroyed across the county.
On 15 June, revolt broke out in Cambridgeshire, led by elements of Wrawe's Suffolk rebellion and some local men, such as John Greyston, who had been involved in the events in London and had returned to his home county to spread the revolt, and Geoffrey Cobbe and John Hanchach, members of the local gentry. The University of Cambridge, staffed by priests and enjoying special royal privileges, was widely hated by the other inhabitants of the town. A revolt backed by the Mayor of Cambridge broke out with the university as its main target. The rebels ransacked Corpus Christi College, which had connections to John of Gaunt, and the University's church, and attempted to execute the University bedel, who escaped. The university's library and archives were burnt in the centre of the town, with one Margery Starre leading the mob in a dance to the rallying cry "Away with the learning of clerks, away with it!" while the documents burned. The next day, the university was forced to negotiate a new charter, giving up its royal privileges. Revolt then spread north from Cambridge toward Ely, where the gaol was opened and the local Justice of the Peace executed.
In Norfolk, the revolt was led by Geoffrey Litster, a weaver, and Sir Roger Bacon, a local lord with ties to the Suffolk rebels. Litster began sending out messengers across the county in a call to arms on 14 June, and isolated outbreaks of violence occurred. The rebels assembled on 17 June outside Norwich and killed Sir Robert Salle, who was in charge of the city defences and had attempted to negotiate a settlement. The people of the town then opened the gates to let the rebels in. They began looting buildings and killed Reginald Eccles, a local official. William de Ufford, the Earl of Suffolk, fled his estates and travelled in disguise to London. The other leading members of the local gentry were captured and forced to play out the roles of a royal household, working for Litster. Violence spread out across the county, as gaols were opened, Flemish immigrants killed, court records burned, and property looted and destroyed.
Northern and western England
Revolts also occurred across the rest of England, particularly in the cities of the north, traditionally centres of political unrest. In the town of Beverley, violence broke out between the richer mercantile elite and the poorer townspeople during May. By the end of the month the rebels had taken power and replaced the former town administration with their own. The rebels attempted to enlist the support of Alexander Neville, the Archbishop of York, and in June forced the former town government to agree to arbitration through Neville. Peace was restored in June 1382 but tensions continued to simmer for many years.
Word of the troubles in the south-east spread north, slowed by the poor communication links of medieval England. In Leicester, where John of Gaunt had a substantial castle, warnings arrived of a force of rebels advancing on the city from Lincolnshire, who were intent on destroying the castle and its contents. The mayor and the town mobilised their defences, including a local militia, but the rebels never arrived. John of Gaunt was in Berwick when word of the revolt reached him on 17 June. Not knowing that Wat Tyler had by now been killed, John of Gaunt placed his castles in Yorkshire and Wales on alert. Fresh rumours, many of them incorrect, continued to arrive in Berwick, suggesting widespread rebellions across the west and east of England and the looting of the ducal household in Leicester; rebel units were even said to be hunting for the Duke himself. Gaunt began to march to Bamburgh Castle, but then changed course and diverted north into Scotland, only returning south once the fighting was over.
News of the initial events in London also reached York around 17 June, and attacks at once broke out on the properties of the Dominican friars, the Franciscan friaries and other religious institutions. Violence continued over the coming weeks, and on 1 July a group of armed men, under the command of John de Gisbourne, forced their way into the city and attempted to seize control. The mayor, Simon de Quixlay, gradually began to reclaim authority, but order was not properly restored until 1382. The news of the southern revolt reached Scarborough, where riots broke out against the ruling elite on 23 June, with the rebels dressed in white hoods with a red tail at the back. Members of the local government were deposed from office, and one tax collector was nearly lynched. By 1382 the elite had re-established power.
In the Somerset town of Bridgwater, revolt broke out on 19 June, led by Thomas Ingleby and Adam Brugge. The crowds attacked the local Augustinian house and forced its master to give up his local privileges and pay a ransom. The rebels then turned on the properties of John Sydenham, a local merchant and official, looting his manor and burning paperwork, before executing Walter Baron, a local man. The Ilchester gaol was stormed, and one unpopular prisoner executed.
The royal suppression of the revolt began shortly after the death of Wat Tyler on 15 June. Sir Robert Knolles, Sir Nicholas Brembre and Sir Robert Launde were appointed to restore control in the capital. A summons was put out for soldiers; probably around 4,000 men were mustered in London, and expeditions to the other troubled parts of the country soon followed.
The revolt in East Anglia was independently suppressed by Henry le Despenser, the Bishop of Norwich. Henry was in Stamford in Lincolnshire when the revolt broke out, and when he found out about it he marched south with eight men-at-arms and a small force of archers, gathering more forces as he went. He marched first to Peterborough, where he routed the local rebels and executed any he could capture, including some who had taken shelter in the local abbey. He then headed south-east via Huntingdon and Ely, reached Cambridge on 19 June, and then headed further into the rebel-controlled areas of Norfolk. Henry reclaimed Norwich on 24 June, before heading out with a company of men to track down the rebel leader, Geoffrey Litster. The two forces met at the Battle of North Walsham on 25 or 26 June; the Bishop's forces triumphed and Litster was captured and executed. Henry's quick action was essential to the suppression of the revolt in East Anglia, but he was very unusual in taking matters into his own hands in this way, and his execution of the rebels without royal sanction was illegal.
On 17 June, the King dispatched his half-brother Thomas Holland and Sir Thomas Trivet to Kent with a small force to restore order. They held courts at Maidstone and Rochester. William de Ufford, the Earl of Suffolk, returned to his county on 23 June, accompanied by a force of 500 men. He quickly subdued the area and was soon holding court in Mildenhall, where many of the accused were sentenced to death. He moved on into Norfolk on 6 July, holding court in Norwich, Great Yarmouth and Hacking. Hugh, Lord la Zouche, led the legal proceedings against the rebels in Cambridgeshire. In St Albans, the Abbot arrested William Grindecobbe and his main supporters.
On 20 June, the King's uncle, Thomas of Woodstock, and Robert Tresilian, the replacement Chief Justice, were given special commissions across the whole of England. Thomas oversaw court cases in Essex, backed up by a substantial military force as resistance was continuing and the county was still in a state of unrest. Richard himself visited Essex, where he met with a rebel delegation seeking confirmation of the grants the King had given at Mile End. Richard rejected them, allegedly telling them that "rustics you were and rustics you are still. You will remain in bondage, not as before, but incomparably harsher".[nb 13] Tresilian soon joined Thomas, and carried out 31 executions in Chelmsford, then travelled to St Albans in July for further court trials, which appear to have utilised dubious techniques to ensure convictions. Thomas went on to Gloucester with 200 soldiers to suppress the unrest there. Henry Percy, the Earl of Northumberland, was tasked to restore order to Yorkshire.
A wide range of laws were invoked in the process of the suppression, from general treason to charges of book burning or demolishing houses, a process complicated by the relatively narrow definition of treason at the time. The use of informants and denunciations became common, causing fear to spread across the country; by November at least 1,500 people had been executed or killed in battle. Many of those who had lost property in the revolt attempted to seek legal compensation, and John of Gaunt made particular efforts to track down those responsible for destroying his Savoy Palace. Most had only limited success, as the defendants were rarely willing to attend court. The last of these cases was resolved in 1387.
The rebel leaders were quickly rounded up. A rebel leader by the name of Jack Straw was captured in London and executed.[nb 14] John Ball was caught in Coventry, tried in St Albans, and executed on 15 July. Grindecobbe was also tried and executed in St Albans. John Wrawe was tried in London; he probably gave evidence against 24 of his colleagues in the hope of a pardon, but was sentenced to be executed by being hanged, drawn and quartered on 6 May 1382. Sir Roger Bacon was probably arrested before the final battle in Norfolk, and was tried and imprisoned in the Tower of London before finally being pardoned by the Crown. As of September 1381, Thomas Ingleby of Bridgwater had successfully evaded the authorities.
The royal government and Parliament began to re-establish the normal processes of government after the revolt; as the historian Michael Postan describes, the uprising was in many ways a "passing episode". On 30 June, the King ordered England's serfs to return to their previous conditions of service, and on 2 July the royal charters signed under duress during the rising were formally revoked. Parliament met in November to discuss the events of the year and how best to respond to their challenges. The revolt was blamed on the misconduct of royal officials, who, it was argued, had been excessively greedy and overbearing. The Commons stood behind the existing labour laws, but requested changes in the royal council, which Richard granted. Richard also granted general pardons to those who had executed rebels without due process, to all men who had remained loyal, and to all those who had rebelled – with the exception of the men of Bury St Edmunds, any men who had been involved in the killing of the King's advisers, and those who were still on the run from prison.
Despite the violence of the suppression, the government and local lords were relatively circumspect in restoring order after the revolt, and continued to be worried about fresh revolts for several decades. Few lords took revenge on their peasants except through the legal processes of the courts. Low-level unrest continued for several more years. In September 1382 there was trouble in Norfolk, involving an apparent plot against the Bishop of Norwich, and in March the following year there was an investigation into a plot to kill the sheriff of Devon. When negotiating rents with their landlords, peasants alluded to the memory of the revolt and the threat of violence.
There were no further attempts by Parliament to impose a poll tax or to reform England's fiscal system. The Commons instead concluded at the end of 1381 that the military effort on the Continent should be "carefully but substantially reduced". Unable to raise fresh taxes, the government had to curtail its foreign policy and military expeditions and began to examine the options for peace. The institution of serfdom declined after 1381, but primarily for economic rather than political reasons. Rural wages continued to increase, and lords increasingly sold their serfs' freedom in exchange for cash, or converted traditional forms of tenure to new leasehold arrangements. During the 15th century the institution vanished in England.
Chroniclers primarily described the rebels as rural serfs, using broad, derogatory Latin terms such as serviles rustici, servile genus and rusticitas. Some chroniclers, including Knighton, also noted the presence of runaway apprentices, artisans and others, sometimes terming them the "lesser commons". The evidence from the court records following the revolt, albeit biased in various ways, similarly shows the involvement of a much broader community, and the earlier perception that the rebels consisted only of unfree serfs is now rejected.[nb 15]
The rural rebels came from a wide range of backgrounds, but typically they were, as the historian Christopher Dyer describes, "people well below the ranks of the gentry, but who mainly held some land and goods", and not the very poorest in society, who formed a minority of the rebel movement. Many had held positions of authority in local village governance, and these seem to have provided leadership to the revolt. Some were artisans, including, as the historian Rodney Hilton lists, "carpenters, sawyers, masons, cobblers, tailors, weavers, fullers, glovers, hosiers, skinners, bakers, butchers, innkeepers, cooks and a lime-burner". They were predominantly male, but with some women in their ranks. The rebels were typically illiterate; only between 5 and 15 per cent of England's population could read during this period. They also came from a broad range of local communities, including at least 330 south-eastern villages.
Many of the rebels had urban backgrounds, and the majority of those involved in the events of London were probably local townsfolk rather than peasants. In some cases, the townsfolk who joined the revolt were the urban poor, attempting to gain at the expense of the local elites. In London, for example, the urban rebels appear to have largely been the poor and unskilled. Other urban rebels were part of the elite, such as at York where the protesters were typically prosperous members of the local community, while in some instances, townsfolk allied themselves with the rural population, as at Bury St Edmunds. In other cases, such as Canterbury, the influx of population from the villages following the Black Death made any distinction between urban and rural less meaningful.
The vast majority of those involved in the revolt of 1381 were not represented in Parliament and were excluded from its decision-making. In a few cases the rebels were led or joined by relatively prosperous members of the gentry, such as Sir Roger Bacon in Norfolk. Some of them later claimed to have been forced to join the revolt by the rebels. Clergy also formed part of the revolt; as well as the more prominent leaders, such as John Ball or John Wrawe, nearly 20 are mentioned in the records of the revolt in the south-east. Some were pursuing local grievances, some were disadvantaged and suffering relative poverty, and others appear to have been motivated by strong radical beliefs.
Many of those involved in the revolt used pseudonyms, particularly in the letters sent around the country to encourage support and fresh uprisings. They were used both to avoid incriminating particular individuals and to allude to popular values and stories. One popular assumed name was Piers Plowman, taken from the main character in William Langland's poem. Jack was also a widely used rebel pseudonym, and historians Steven Justice and Carter Revard suggest that this may have been because it resonated with the Jacques of the French Jacquerie revolt several decades earlier.
Contemporary chroniclers of the events in the revolt have formed an important source for historians. The chroniclers were biased against the rebel cause and typically portrayed the rebels, in the words of the historian Susan Crane, as "beasts, monstrosities or misguided fools". London chroniclers were also unwilling to admit the role of ordinary Londoners in the revolt, preferring to place the blame entirely on rural peasants from the south-east. Among the key accounts was the anonymous Anonimalle Chronicle, whose author appears to have been part of the royal court and an eye-witness to many of the events in London. The chronicler Thomas Walsingham was present for much of the revolt, but focused his account on the terror of the social unrest and was extremely biased against the rebels. The events were recorded in France by Jean Froissart, the author of the Chronicles. He had well-placed sources close to the revolt, but was inclined to elaborate the known facts with colourful stories. No sympathetic accounts of the rebels survive.
At the end of the 19th century there was a surge in historical interest in the Peasants' Revolt, spurred by the contemporary growth of the labour and socialist movements. Work by Charles Oman, Edgar Powell, André Réville and G. M. Trevelyan established the course of the revolt. By 1907 the accounts of the chroniclers were all widely available in print and the main public records concerning the events had been identified. Réville began to use the legal indictments that had been used against suspected rebels after the revolt as a fresh source of historical information, and over the next century extensive research was carried out into the local economic and social history of the revolt, using scattered local sources across south-east England.
Interpretations of the revolt have changed over the years. 17th-century historians, such as John Smyth, established the idea that the revolt had marked the end of unfree labour and serfdom in England. 19th-century historians such as William Stubbs and Thorold Rogers reinforced this conclusion, Stubbs describing it as "one of the most portentous events in the whole of our history". In the 20th century, this interpretation was increasingly challenged by historians such as May McKisack, Michael Postan and Richard Dobson, who reassessed the revolt's impact on subsequent political and economic events in England. Mid-20th century Marxist historians were both interested in, and generally sympathetic to, the rebel cause, a trend culminating in Hilton's 1973 account of the uprising, set against the context of wider peasant revolts across Europe during the period. The Peasants' Revolt has received more academic attention than any other medieval revolt, and this research has been interdisciplinary, involving historians, literary scholars and international collaboration.
The name "the Peasants' Revolt" emerged in the 18th and early 19th centuries, and its first recorded use by historians was in John Richard Green's Short History of the English People in 1874. Contemporary chronicles did not give the revolt a specific title, and the term "peasant" did not appear in the English language until the 15th century. The title has been critiqued by modern historians such as Miri Rubin and Paul Strohm, both on the grounds that many in the movements were not peasants, and that the events more closely resemble a prolonged protest or rising, rather than a revolt or rebellion.
The Peasants' Revolt became a popular literary subject. The poet John Gower, who had close ties to officials involved in the suppression of the revolt, amended his famous poem Vox Clamantis after the revolt, inserting a section condemning the rebels and likening them to wild animals. Geoffrey Chaucer, who lived in Aldgate and may have been in London during the revolt, used the rebel killing of Flemings as a metaphor for wider disorder in The Nun's Priest's Tale, part of The Canterbury Tales, parodying Gower's poem. Chaucer otherwise made no reference to the revolt in his work, possibly because, as a client of the King, it would have been politically unwise for him to discuss it. William Langland, the author of the poem Piers Plowman, which had been widely used by the rebels, made various changes to its text after the revolt in order to distance himself from their cause.
The revolt formed the basis for the late 16th-century play, The Life and Death of Jack Straw, possibly written by George Peele and probably originally designed for production in the city's guild pageants. It portrays Jack Straw as a tragic figure, being led into wrongful rebellion by John Ball, making clear political links between the instability of late-Elizabethan England and the 14th century. The story of the revolt was used in pamphlets during the English Civil War of the 17th century, and formed part of John Cleveland's early history of the war. It was deployed as a cautionary account in political speeches during the 18th century, and a chapbook entitled The History of Wat Tyler and Jack Strawe proved popular during the Jacobite risings and American War of Independence. Thomas Paine and Edmund Burke argued over the lessons to be drawn from the revolt, Paine expressing sympathy for the rebels and Burke condemning the violence. The Romantic poet Robert Southey based his 1794 play Wat Tyler on the events, taking a radical and pro-rebel perspective.
As the historian Michael Postan describes, the revolt became famous "as a landmark in social development and [as] a typical instance of working-class revolt against oppression", and was widely used in 19th and 20th century socialist literature. William Morris built on Chaucer in his novel A Dream of John Ball, published in 1888, creating a narrator who was openly sympathetic to the peasant cause, albeit a 19th-century persona taken back to the 14th century by a dream. The story ends with a prophecy that socialist ideals will one day be successful. In turn, this representation of the revolt influenced Morris's utopian socialist News from Nowhere. Florence Converse used the revolt in her novel Long Will in 1903. Later 20th century socialists continued to draw parallels between the revolt and contemporary political struggles, including during the arguments over the introduction of the Community Charge in the United Kingdom during the 1980s.
Conspiracy theorists, including writer John Robinson, have attempted to explain alleged flaws in mainstream historical accounts of the events of 1381, such as the speed with which the rebellion was coordinated. Theories include that the revolt was led by a secret, occult organisation called "the Great Society", said to be an offshoot of the order of the Knights Templar destroyed in 1312, or that the fraternity of the Freemasons was covertly involved in organising the revolt.[nb 16]
- It is impossible to accurately compare 14th century and modern prices or incomes. For comparison, the income of a typical nobleman such as Richard le Scrope was around £600 a year, while only six earls in the kingdom enjoyed incomes of over £5,000 a year.
- For comparison, the wage for an unskilled labourer in Essex in 1380 was around three pence a day.
- The Marshalsea Court was originally intended to provide justice for the royal household and those doing business with it, travelling with the King around the country and having authority covering 12 miles (19 km) around the monarch. The monarchs of the 14th century were increasingly based in London, resulting in the Marshalsea Court taking up semi-permanent business in the capital. Successive monarchs used the court to exercise royal power, often at the expense of the City of London's Corporation.
- Walsingham highlights the role of a "Jack Straw", and is supported by Froissart, although Knighton argues that this was a pseudonym; other chroniclers fail to mention him at all. The historian Friedrich Brie popularised the argument in favour of the pseudonym in 1906. Modern historians recognise Tyler as the primary leader, and are doubtful about the role of "Jack Straw".
- Military historian Jonathan Sumption considers this description of the rebels' weaponry, drawn from the chronicler Thomas Walsingham, to be reliable; literary historian Steven Justice is less certain, noting the sarcastic manner in which Walsingham mocks the rebels' old and dilapidated arms, including their bows "reddened with age and smoke."
- Historian Andrew Prescott has critiqued these timings, arguing that it would have been unlikely that so many rebels could have advanced so fast on London, given the condition of the medieval road networks.
- Chronicler figures for the King's immediate forces in London vary; Henry Knighton argues that the King had between 150 and 180 men in the Tower of London, while Thomas Walsingham suggests 1,200. These were probably over-estimates, and historian Alastair Dunn assesses that only a skeleton force was present; Jonathan Sumption judges that around 150 men-at-arms were present, and some archers.
- It is uncertain who opened the defences at London Bridge and Aldgate. After the revolt, three aldermen, John Horn, Walter Sibil and William Tongue, were put on trial by the authorities, but it is unclear how far these accusations were motivated by post-conflict London politics. The historian Nigel Saul is doubtful of their guilt in collaborating with the rebels. Rodney Hilton suggests that they may have opened the gates in order to buy time and so prevent the destruction of their city, although he prefers the theory that the London crowds forced the gates to be opened. Jonathan Sumption similarly argues that the aldermen were forced to open the gates in the face of popular pressure.
- The royal adviser Richard Lyons was believed to have Flemish origins, although he was also unpopular in his own right as a result of his role in government.
- The rebel call for a return to the "law of Winchester" has been much debated. One theory is that it was another term for the Domesday Book of William I, which was believed to provide protection for particular groups of tenants. Another is that it referred to the Statute of Winchester in 1285, which allowed for the enforcement of local law through armed village communities, and which had been cited in more recent legislation on the criminal law. The creation of special justices and royal officials during the 14th century was seen as eroding these principles.
- Most chroniclers stated that the force that attacked the Tower of London was separate to that operating under Tyler's command at Mile End; only the Anonimalle Chronicle links them to Tyler. The timing of the late morning attack relies on the account of the Westminster Chronicle.
- The primary sources for the events at Smithfield are the Anonimalle Chronicle, Thomas Walsingham, Jean Froissart, Henry Knighton and the Westminster Chronicler. There are minor differences in their accounts of events. Froissart suggests that Wat Tyler intended to capture the King and kill the royal party, and that Tyler initiated the engagement with Richard in order to carry out this plan. The Anonimalle Chronicle and Walsingham both go into some, if varying, detail as to the rebels' demands. Walsingham and Knighton wrote that Tyler, rather than being about to depart at the end of his discussions with Richard, appeared to be about to kill the King, triggering the royal response. Walsingham differs from the other chroniclers in giving a key role in the early part of the encounter to Sir John Newton.
- The "rustics" quotation from Richard II is from the chronicler Thomas Walsingham, and should be treated with caution. Historian Dan Jones suspects that although Richard no doubt despised the rebels, the language itself may have been largely invented by Walsingham.
- As noted above, questions exist over Jack Straw's identity. The chronicler Thomas Walsingham attributes a long confession to the Jack Straw executed in London, but the reliability of this is questioned by historians: Rodney Hilton refers to it as "somewhat dubious", while Alastair Dunn considers it to be essentially a fabrication. There are no reliable details of the trial or execution.
- Historian Sylvia Federico notes the dangers in treating the pardons lists simplistically, given the tendency for some innocent individuals to acquire pardons for additional security, and the tendency for cases to be brought against individuals for local, non-political reasons.
- The term "the Great Society" emerges from indictments against the rebels, in which references were made the magne societatis. This probably meant "large company" or "great band" of rebels, but was mistranslated in the late 19th century to refer to the "Great Society".
- Dunn 2002, pp. 22–23
- Rubin 2006, pp. 1–3
- Rubin 2006, p. 2; Dunn 2002, p. 14
- Postan 1975, p. 172
- Dunn 2002, p. 14; Postan 1975, p. 172
- Dyer 2009, p. 249; Dunn 2002, p. 15
- Dyer 2009, pp. 271–272
- Dyer 2009, pp. 273–274
- Rubin 2006, p. 65
- Dyer 2009, p. 278
- Dyer 2000, pp. 202–203
- Butcher 1987, p. 86
- Dyer 2009, p. 282
- Dyer 2009, p. 282; Rubin 2006, p. 69
- Dyer 2009, pp. 282, 285
- Dyer 2009, pp. 282–283
- Rubin 2006, p. 69
- Dyer 2009, p. 285
- Rubin 2006, p. 122
- Dyer 2009, p. 279; Rubin 2006, pp. 122–123
- Dyer 2000, p. 200
- Rubin 2006, p. 122; Dyer 2009, p. 278; Postan 1975, p. 172
- Dyer 2009, p. 279
- Dyer 2009, pp. 283–284; Jones 2010, p. 16
- Rubin 2006, p. 121; Sumption 2009, pp. 18, 53–60
- Sumption 2009, pp. 325–327, 354–355, 405; Dunn 2002, p. 52
- Given-Wilson 1996, p. 157; Rubin 2006, p. 161
- Rubin 2006, p. 120
- Rubin 2006, p. 50
- Dunn 2002, p. 50
- Jones 2010, pp. 19–20
- Dunn 2002, p. 51
- Jones 2010, p. 21; Dunn 2002, p. 51
- Dyer 2000, p. 168
- Sumption 2009, pp. 325–327, 354–355; Dunn 2002, pp. 51–52
- Rubin 2006, p. 120; Sumption 2009, p. 355
- Dunn 2002, pp. 50–51
- Dunn 2002, p. 51; Jones 2010, p. 22
- Dunn 2002, pp. 52–53
- Dunn 2002, p. 53; Sumption 2009, p. 407
- Dunn 2002, p. 53; Sumption 2009, p. 408
- Dunn 2002, p. 54; Sumption 2009, p. 419
- Dunn 2002, p. 55
- Sumption 2009, pp. 419–420; Powell 1896, p. 5
- Postan 1975, p. 171; Dyer 2000, p. 214
- Rubin 2006, pp. 121–122
- Harding 1987, pp. 176–180; Dunn 2002, pp. 80–81
- Dunn 2002, pp. 80–81
- Spindler 2012, pp. 65,72
- Jones 2010, p. 34
- Jones 2010, pp. 34, 35, 40
- Oman 1906, p. 18
- Jones 2010, p. 40
- Dyer 2000, pp. 213–217
- Dyer 2000, pp. 211–212
- Dyer 2000, p. 212
- Dyer 2000, p. 219; Rubin 2006, pp. 123–124
- Rubin 2006, p. 124
- Dyer 2009, p. 281
- Dyer 2009, pp. 281, 282
- Wickert 2016, p. 18
- Rubin 2006, p. 70
- Rubin 2006, p. 70; Harding 1987, pp. 18–190
- Faith 1987, p. 43
- Faith 1987, pp. 44–46
- Faith 1987, p. 69
- Dunn 2002, p. 88; Cohn 2013, p. 100
- Cohn 2013, p. 105; Dilks 1927, p. 59
- Dobson 1987, p. 123
- Dyer 2000, p. 218.
- Dunn 2002, p. 73
- Sumption 2009, p. 420
- Dunn 2002, p. 73; Sumption 2009, p. 420
- Dunn 2002, pp. 73–74
- Dunn 2002, p. 74
- Sumption 2009, pp. 420–421
- Dunn 2002, p. 122; Powell 1896, p. 9
- Dunn 2002, p. 75
- Dunn 2002, pp. 75–76
- Dunn 2002, pp. 60, 76
- Dunn 2002, p. 76
- Dunn 2002, p. 58; Sumption 2009, p. 421
- Dunn 2002, p. 58
- Dunn 2002, pp. 62–63
- Dunn 2002, pp. 62–63; Brie 1906, pp. 106–111; Matheson 1998, p. 150
- Dunn 2002, pp. 76–77; Lyle 2002, p. 91
- Dunn 2002, p. 77
- Dunn 2002, p. 77; Sumption 2009, p. 421
- Sumption 2009, p. 421
- Dunn 2002, p. 78
- Sumption 2009, p. 422
- Justice 1994, p. 204; Sumption 2009, p. 422
- Strohm 2008, p. 203
- Dunn 2002, p. 78; Sumption 2009, p. 423
- Sumption 2009, p. 423
- Dunn 2002, p. 60; Sumption 2009, p. 422
- Dunn 2002, p. 76; Sumption 2009, p. 422
- Dunn 2002, p. 58; Jones 2010, pp. 62, 80; Rubin 2006, p. 124
- Sumption 2009, p. 422; Dunn 2002, p. 135; Tuck 1987, p. 199
- Dunn 2002, pp. 91–92; Sumption 2009, p. 423
- Sumption 2009, p. 423; Dunn 2002, p. 135; Tuck 1987, p. 199
- Tuck 1987, pp. 198–200
- Dunn 2002, pp. 78–79
- Dunn 2002, p. 79
- Dunn 2002, p. 79; Sumption 2009, p. 424
- Sumption 2009, p. 424; Dobson 1983, p. 220; Barron 1981, p. 3
- Saul 1999, p. 424; Hilton 1995, pp. 189–190; Sumption 2009, p. 424
- Sumption 2009, p. 424
- Sumption 2009, p. 425
- Dunn 2002, p. 81; Sumption 2009, p. 424
- Sumption 2009, p. 425; Dunn 2002, p. 81
- Sumption 2009, p. 425; Dunn 2002, pp. 81–82
- Dunn 2002, p. 83
- Dunn 2002, p. 84
- Dunn 2002, pp. 85, 87
- Dunn 2002, p. 86
- Dunn 2002, pp. 86–87
- Dunn 2002, p. 92
- Dunn 2002, p. 88
- Dunn 2002, p. 90
- Cohn 2013, p. 286; Dunn 2002, p. 90
- Spindler 2012, pp. 62, 71; Saul 1999, p. 70
- Hilton 1995, p. 195
- Dunn 2002, pp. 92–93
- Dunn 2002, p. 95; Sumption 2009, p. 427
- Dunn 2002, p. 95
- Saul 1999, p. 68
- Dunn 2002, pp. 68, 96; Oman 1906, p. 200
- Dunn 2002, p. 69; Harding 1987, pp. 166–167
- Harding 1987, pp. 165–169; Dunn 2002, p. 69
- Dunn 2002, pp. 96–97
- Dunn 2002, p. 98
- Dunn 2002, p. 99
- Sumption 2009, p. 427; Saul 1999, p. 69
- Sumption 2009, pp. 427–428
- Dunn 2002, p. 101
- Dunn 2002, p. 101; Mortimer 1981, p. 18
- Dunn 2002, pp. 99–100
- Saul 1999, p. 69
- Bourin, Monique; Cherubini, Giovanni; Pinto, Giuliano (2008). Rivolte urbane e rivolte contadine nell'Europa del Trecento: un confronto (in Italian). Firenze University Press. ISBN 9788884538826.
- Melissa Hogenboom. "Peasants' Revolt: The time when women took up arms". BBC news. Retrieved 14 June 2012.
- Mortimer 1981, p. 18
- Dunn 2002, p. 102; Sumption 2009, p. 428
- Dunn 2002, p. 97
- Sumption 2009, p. 428
- Dunn 2002, pp. 103, 105
- Dunn 2002, pp. 102–103
- Dunn 2002, p. 103
- Dunn 2002, p. 103; Saul 1999, p. 70
- Dunn 2002, pp. 103–106
- Dunn 2002, p. 104
- Dunn 2002, pp. 104–105
- Dunn 2002, pp. 106–107
- Dunn 2002, p. 106
- Dunn 2002, p. 107
- Dunn 2002, pp. 107–108
- Dunn 2002, p. 107; Jones 2010, pp. 154–155
- Dunn 2002, p. 122
- Powell 1896, pp. 41, 60–61
- Powell 1896, pp. 57–58
- Powell 1896, p. 58; Tuck 1987, pp. 197–198
- Dunn 2002, pp. 122–123
- Dunn 2002, pp. 123–124
- Dunn 2002, p. 124; Powell 1896, p. 19
- Dunn 2002, p. 124; Powell 1896, p. 12
- Dunn 2002, pp. 124–125
- Dunn 2002, p. 126
- Dunn 2002, p. 126; Powell 1896, p. 24
- Dunn 2002, p. 126; Powell 1896, p. 21
- Dunn 2002, p. 113
- Dunn 2002, pp. 112–113
- Dunn 2002, p. 114
- Dunn 2002, pp. 114–115
- Dunn 2002, p. 115
- Dunn 2002, pp. 115–117
- Dunn 2002, pp. 117–118
- Dunn 2002, p. 119
- Dunn 2002, p. 127
- Dunn 2002, p. 128
- Dunn 2002, pp. 128–129
- Dunn 2002, p. 129
- Powell 1896, pp. 45–49
- Dunn 2002, p. 130; Powell 1896, p. 26
- Powell 1896, pp. 27–28
- Dunn 2002, p. 130; Powell 1896, p. 29
- Dunn 2002, pp. 130–131
- Dunn 2002, p. 131
- Powell 1896, pp. 31–36
- Dobson 1987, pp. 112–114
- Dobson 1987, p. 124
- Dobson 1987, pp. 126–127
- Dobson 1987, pp. 127–128
- Dobson 1987, pp. 128–129
- Dunn 2002, p. 121
- Dunn 2002, pp. 121–123
- Dunn 2002, p. 143
- Dunn 2002, pp. 143–144
- Dunn 2002, p. 144
- Dobson 1987, p. 121
- Dobson 1987, pp. 122–123
- Dobson 1987, pp. 130–136
- Dobson 1987, pp. 136–137
- Dobson 1987, p. 138
- Dilks 1927, p. 64
- Dilks 1927, p. 65
- Dilks 1927, pp. 65–66
- Dilks 1927, p. 66
- Dunn 2002, p. 135
- Dunn 2002, pp. 135–136
- Dunn 2002, pp. 135–136; Tuck 1987, p. 200
- Dunn 2002, p. 131; Oman 1906, pp. 130–132
- Jones 2010, pp. 172–173
- Jones 2010, pp. 178–182
- Jones 2010, p. 194
- Jones 2010, pp. 194–195
- Tuck 1987, pp. 197, 201; Powell 1896, p. 61
- Dunn 2002, p. 136
- Dunn 2002, pp. 126, 136
- Powell 1896, p. 25; Dunn 2002, p. 136
- Dunn 2002, pp. 140–141
- Dunn 2002, pp. 136–137
- Saul 1999, p. 74
- Jones 2010, p. 196; Saul 1999, p. 74; Strohm 2008, p. 198
- Dunn 2002, pp. 137, 140–141
- Dunn 2002, p. 137
- Dunn 2002, pp. 137–138; Federico 2001, p. 169
- Jones 2010, pp. 200–201; Prescott 2004, cited Jones 2010, p. 201
- Dunn 2002, p. 138; Rubin 2006, p. 127
- Jones 2010, p. 20
- Dunn 2002, p. 139
- Dunn 2002, pp. 71, 139; Hilton 1995, p. 219
- Dunn 2002, pp. 137, 139–140
- Powell 1896, p. 25; Dunn 2002, p. 139
- Powell 1896, p. 39
- Dilks 1927, p. 67
- Postan 1975, p. 172; Tuck 1987, p. 212
- Dunn 2002, pp. 141–142
- Tuck 1987, pp. 205–206
- Dunn 2002, p. 142
- Dunn 2002, pp. 142–143
- Hilton 1995, p. 231; Tuck 1987, p. 210
- Tuck 1987, p. 201
- Rubin 2006, p. 127
- Eiden 1999, p. 370; Rubin 2006, p. 127
- Dyer 2009, p. 291
- Tuck 1987, pp. 203–205
- Sumption 2009, p. 430
- Tuck 1987, pp. 208–209; Sumption 2009, p. 430
- Dunn 2002, p. 147
- Dunn 2002, p. 147; Hilton 1995, p. 232
- Hilton 1995, pp. 176–177; Crane 1992, p. 202
- Postan 1975, p. 171; Hilton 1995, pp. 178, 180; Strohm 2008, p. 197
- Federico 2001, pp. 162–163
- Dyer 2000, p. 196; Hilton 1995, p. 184; Strohm 2008, p. 197
- Dyer 2000, pp. 197–198
- Hilton 1995, p. 179
- Federico 2001, p. 165
- Crane 1992, p. 202
- Dyer 2000, p. 192
- Rubin 2006, p. 121; Strohm 2008, pp. 197–198
- Butcher 1987, pp. 84–85
- Butcher 1987, p. 85; Strohm 2008, p. 197
- Butcher 1987, p. 85
- Rubin 2006, p. 121
- Hilton 1995, p. 184
- Tuck 1987, p. 196
- Hilton 1995, pp. 207–208
- Hilton 1995, pp. 208–210
- Jones 2010, p. 169; Hilton 1995, pp. 214–215
- Jones 2010, p. 169
- Justice 1994, p. 223
- Justice 1994, p. 222
- Hilton 1987, p. 2
- Crane 1992, p. 208; Strohm 2008, pp. 198–199
- Strohm 2008, p. 201
- Jones 2010, p. 215
- Dunn 2002, pp. 99–100; Jones 2010, p. 215
- Reynaud 1897, p. 94
- Jones 2010, pp. 215–216
- Dyer 2003, p. x
- Dyer 2003, p. x; Powell 1896; Oman 1906; Réville 1898; Trevelyan 1899
- Dyer 2000, p. 191
- Dyer 2000, pp. 191–192; Hilton 1987, p. 5
- Hilton 1987, pp. 2–3
- Strohm 2008, p. 203; Hilton 1995; Jones 2010, p. 217; Dyer 2003, p. xii–xiii
- Cohn 2013, pp. 3–4
- Rubin 2006, p. 121; Strohm 2008, p. 202; Cohn 2013, p. 3
- Jones 2010, p. 208
- Fisher 1964, p. 102; Galloway 2010, pp. 298–299; Saul 2010, p. 87; Justice 1994, p. 208
- Justice 1994, pp. 207–208; Crow & Leland 2008, p. xviii
- Hussey 1971, p. 6
- Justice 1994, pp. 233–237; Crane 1992, pp. 211–213
- Ribner 2005, pp. 71–72
- Ribner 2005, pp. 71–74
- Jones 2010, p. 210; Matheson 1998, p. 135
- Jones 2010, p. 210; Matheson 1998, pp. 135–136
- Matheson 1998, pp. 138–139
- Matheson 1998, p. 143
- Ortenberg 1981, p. 79; Postan 1975, p. 171
- Ellis 2000, pp. 13–14
- Matheson 1998, p. 144
- Ousby 1996, p. 120
- Robinson 2009, pp. 51–59
- Robinson 2009, pp. 51–59; Silvercloud 2007, p. 287; Picknett & Prince 2007, p. 164
- Hilton 1995, pp. 214–216
- Barron, Caroline M. (1981). Revolt in London: 11 to 15 June 1381. London: Museum of London. ISBN 978-0-904818-05-5.
- Brie, Friedrich (1906). "Wat Tyler and Jack Straw". English Historical Review. 21: 106–111.
- Butcher, A. F. (1987). "English Urban Society and the Revolt of 1381". In Hilton, Rodney; Alton, T. H. (eds.). The English Rising of 1381. Cambridge: Cambridge University Press. pp. 84–111. ISBN 978-1-84383-738-1.
- Cohn, Samuel K. (2013). Popular Protest in Late Medieval English Towns. Cambridge: Cambridge University Press. ISBN 978-1-107-02780-0.
- Crane, Susan (1992). "The Writing Lesson of 1381". In Hanawalt, Barbara A. (ed.). Chaucer's England: Literature in Historical Context. Minneapolis: University of Minnesota Press. pp. 201–222. ISBN 978-0-8166-2019-7.
- Crow, Martin M.; Leland, Virginia E. (2008). "Chaucer's Life". In Cannon, Christopher (ed.). The Riverside Chaucer (3rd ed.). Oxford: Oxford University Press. pp. xi–xxi. ISBN 978-0-19-955209-2.
- Dilks, T. Bruce (1927). "Bridgwater and the Insurrection of 1381". Journal of the Somerset Archaeological and Natural History Society. 73: 57–67.
- Dobson, R. B. (1983). The Peasants' Revolt of 1381 (2nd ed.). London: Macmillan. ISBN 0-333-25505-4.
- Dobson, R. B. (1987). "The Risings in York, Beverley and Scarborough". In Hilton, Rodney; Alton, T. H. (eds.). The English Rising of 1381. Cambridge: Cambridge University Press. pp. 112–142. ISBN 978-1-84383-738-1.
- Dunn, Alastair (2002). The Great Rising of 1381: the Peasants' Revolt and England's Failed Revolution. Stroud, UK: Tempus. ISBN 978-0-7524-2323-4.
- Dyer, Christopher (2000). Everyday Life in Medieval England. London and New York: Hambledon and London. ISBN 978-1-85285-201-6.
- Dyer, Christopher (2003). "Introduction". In Hilton, Rodney (ed.). Bondmen Made Free: Medieval Peasant Movements and the English Rising of 1381 (New ed.). Abingdon, UK: Routledge. pp. ix–xv. ISBN 978-0-415-31614-9.
- Dyer, Christopher (2009). Making a Living in the Middle Ages: the People of Britain 850–1520. New Haven and London: Yale University Press. ISBN 978-0-300-10191-1.
- Eiden, Herbert (1999). "Norfolk, 1382: a Sequel to the Peasants' Revolt". The English Historical Review. 114 (456): 370–377. doi:10.1093/ehr/114.456.370.
- Ellis, Steve (2000). Chaucer at Large: the Poet in the Modern Imagination. Minneapolis: University of Minnesota Press. ISBN 978-0-8166-3376-0.
- Faith, Rosamond (1987). "The 'Great Rumour' of 1377 and Peasant Ideology". In Hilton, Rodney; Alton, T. H. (eds.). The English Rising of 1381. Cambridge: Cambridge University Press. pp. 43–73. ISBN 978-1-84383-738-1.
- Federico, Silvia (2001). "The Imaginary Society: Women in 1381". Journal of British Studies. 40 (2): 159–183. doi:10.1086/386239.
- Fisher, John H. (1964). John Gower, Moral Philosopher and Friend of Chaucer. New York: New York University Press. ISBN 978-0-8147-0149-2.
- Galloway, Andrew (2010). "Reassessing Gower's Dream Visions". In Dutton, Elizabeth; Hines, John; Yeager, R. F. (eds.). John Gower, Trilingual Poet: Language, Translation, and Tradition. Woodbridge, UK: Boydell Press. pp. 288–303. ISBN 978-1-84384-250-7.
- Given-Wilson, Chris (1996). The English Nobility in the Late Middle Ages. London: Routledge. ISBN 978-0-203-44126-8.
- Harding, Alan (1987). "The Revolt Against the Justices". In Hilton, Rodney; Alton, T. H. (eds.). The English Rising of 1381. Cambridge: Cambridge University Press. pp. 165–193. ISBN 978-1-84383-738-1.
- Hilton, Rodney (1987). "Introduction". In Hilton, Rodney; Alton, T. H. (eds.). The English Rising of 1381. Cambridge: Cambridge University Press. pp. 1–8. ISBN 978-1-84383-738-1.
- Hilton, Rodney (1995). Bondmen Made Free: Medieval Peasant Movements and the English Rising of 1381. London: Routledge. ISBN 978-0-415-01880-7.
- Hussey, Stanley Stewart (1971). Chaucer: an Introduction. London: Methuen. ISBN 978-0-416-29920-5.
- Jones, Dan (2010). Summer of Blood: the Peasants' Revolt of 1381. London: Harper Press. ISBN 978-0-00-721393-1.
- Justice, Steven (1994). Writing and Rebellion: England in 1381. Berkeley and Los Angeles: University of California Press. ISBN 0-520-20697-5.
- Lyle, Marjorie (2002). Canterbury: 2000 Years of History (Revised ed.). Stroud, UK: Tempus. ISBN 978-0-7524-1948-0.
- Matheson, Lister M. (1998). "The Peasants' Revolt through Five Centuries of Rumor and Reporting: Richard Fox, John Stow, and Their Successors". Studies in Philology. 95 (2): 121–151.
- Mortimer, Ian (1981). The Fears of Henry IV: the Life of England's Self-Made King. London: Vintage. ISBN 978-1-84413-529-5.
- Oman, Charles (1906). The Great Revolt of 1381. Oxford: Clarendon Press. OCLC 752927432.
- Ortenberg, Veronica (1981). In Search of the Holy Grail: the Quest for the Middle Ages. London: Hambledon Continuum. ISBN 978-1-85285-383-9.
- Ousby, Ian (1996). The Cambridge Paperback Guide to Literature in English. Cambridge: Cambridge University Press. ISBN 978-0-521-43627-4.
- Picknett, Lynn; Prince, Clive (2007). The Templar Revelation: Secret Guardians of the True Identity of Christ (10th anniversary ed.). London: Random House. ISBN 978-0-552-15540-3.
- Postan, Michael (1975). The Medieval Economy and Society. Harmondsworth, UK: Penguin Books. ISBN 0-14-020896-8.
- Powell, Edgar (1896). The Rising of 1381 in East Anglia. Cambridge: Cambridge University Press. OCLC 1404665.
- Prescott, Andrew (2004). "'The Hand of God': the Suppression of the Peasants' Revolt in 1381". In Morgan, Nigel (ed.). Prophecy, Apocalypse and the Day of Doom. Donington, UK: Shaun Tyas. pp. 317–341. ISBN 978-1-900289-68-9.
- Réville, André (1898). Étude sur le Soulèvement de 1381 dans les Comtés de Hertford, de Suffolk et de Norfolk (in French). Paris: A. Picard and sons. OCLC 162490454.
- Reynaud, Gaston (1897). Chroniques de Jean Froissart (in French). 10. Paris: Société de l'histoire de France.
- Ribner, Irving (2005). The English History Play in the Age of Shakespeare. Abingdon, UK: Routledge. ISBN 978-0-415-35314-4.
- Robinson, John J. (2009). Born in Blood: the Lost Secrets of Freemasonry. Lanham, US: Rowman and Littlefield. ISBN 978-1-59077-148-8.
- Rubin, Miri (2006). The Hollow Crown: a History of Britain in the Late Middle Ages. London: Penguin. ISBN 978-0-14-014825-1.
- Saul, Nigel (1999). Richard II. New Haven: Yale University Press. ISBN 978-0-300-07875-6.
- Saul, Nigel (2010). "John Gower: Prophet or Turncoat?". In Dutton, Elizabeth; Hines, John; Yeager, R. F. (eds.). John Gower, Trilingual Poet: Language, Translation, and Tradition. Woodbridge, UK: Boydell Press. pp. 85–97. ISBN 978-1-84384-250-7.
- Silvercloud, Terry David (2007). The Shape of God: Secrets, Tales, and Legends of the Dawn Warriors. Victoria, Canada: Trafford. ISBN 978-1-4251-0836-6.
- Spindler, Erik (2012). "Flemings in the Peasants' Revolt, 1381". In Skoda, Hannah; Lantschner, Patrick; Shaw, R. (eds.). Contact and Exchange in Later Medieval Europe: Essays in Honour of Malcolm Vale. Woodbridge, UK: The Boydell Press. pp. 59–78. ISBN 978-1-84383-738-1.
- Strohm, Paul (2008). "A 'Peasants' Revolt'?". In Harris, Stephen J.; Grigsby, Bryon Lee (eds.). Misconceptions About the Middle Ages. New York: Routledge. pp. 197–203. ISBN 978-0-415-77053-8.
- Sumption, Jonathan (2009). Divided Houses: the Hundred Years War III. London: Faber and Faber. ISBN 978-0-571-24012-8.
- Trevelyan, George (1899). England in the Age of Wycliffe. London: Longmans and Green. OCLC 12771030.
- Tuck, J. A. (1987). "Nobles, Commons and the Great Revolt of 1381". In Hilton, Rodney; Alton, T. H. (eds.). The English Rising of 1381. Cambridge: Cambridge University Press. pp. 192–212. ISBN 978-1-84383-738-1.
- Wickert, Maria (2016). Studies in John Gower. Translated by Meindl, Robert J. Tempe, Arizona: Arizona Center for Medieval and Renaissance Studies. p. 18. ISBN 9780866985413.
The challenge for you is to make a string of six (or more!) graded cubes.
Make a cube with three strips of paper. Colour three faces or use the numbers 1 to 6 to make a die.
Make a ball from triangles!
Here are some ideas to try in the classroom for using counters to investigate number patterns.
Using these kite and dart templates, you could try to recreate part of Penrose's famous tessellation or design one yourself.
How can you make a curve from straight strips of paper?
This is a simple paper-folding activity that gives an intriguing result which you can then investigate further.
It's hard to make a snowflake with six perfect lines of symmetry, but it's fun to try!
Follow the diagrams to make this patchwork piece, based on an octagon in a square.
Kaia is sure that her father has worn a particular tie twice a week in at least five of the last ten weeks, but her father disagrees. Who do you think is right?
Watch the video to see how to fold a square of paper to create a flower. What fraction of the piece of paper is the small triangle?
Have a go at drawing these stars which use six points drawn around a circle. Perhaps you can create your own designs?
Follow these instructions to make a three-piece and/or seven-piece tangram.
Did you know mazes tell stories? Find out more about mazes and make one of your own.
Make a mobius band and investigate its properties.
This practical activity involves measuring length/distance.
What shapes can you make by folding an A4 piece of paper?
Have you noticed that triangles are used in manmade structures? Perhaps there is a good reason for this? 'Test a Triangle' and see how rigid triangles are.
Ideas for practical ways of representing data such as Venn and Carroll diagrams.
Follow these instructions to make a five-pointed snowflake from a square of paper.
Arrange your fences to make the largest rectangular space you can. Try with four fences, then five, then six etc.
Cut a square of paper into three pieces as shown. Now, can you use the 3 pieces to make a large triangle, a parallelogram and the square again?
Surprise your friends with this magic square trick.
A brief video looking at how you can sometimes use symmetry to distinguish knots. Can you use this idea to investigate the differences between the granny knot and the reef knot?
Can you recreate this Indian screen pattern? Can you make up similar patterns of your own?
We went to the cinema and decided to buy some bags of popcorn so we asked about the prices. Investigate how much popcorn each bag holds to find out which we might have bought.
What is the smallest cuboid that you can put in this box so that you cannot fit another that's the same into it?
How do you know if your set of dominoes is complete?
Make a flower design using the same shape made out of different sizes of paper.
Can you work out what shape is made by folding in this way? Why not create some patterns using this shape but in different sizes?
Paint a stripe on a cardboard roll. Can you predict what will happen when it is rolled across a sheet of paper?
Can you visualise what shape this piece of paper will make when it is folded?
What happens to the area of a square if you double the length of the sides? Try the same thing with rectangles, diamonds and other shapes. How do the four smaller ones fit into the larger one?
How many different cuboids can you make when you use four CDs or DVDs? How about using five, then six?
Let's say you can only use two different lengths - 2 units and 4 units. Using just these 2 lengths as the edges how many different cuboids can you make?
Take a counter and surround it by a ring of other counters that MUST touch two others. How many are needed?
What shape is made when you fold using this crease pattern? Can you make a ring design?
Can you deduce the pattern that has been used to lay out these bottle tops?
This practical problem challenges you to create shapes and patterns with two different types of triangle. You could even try overlapping them.
Looking at the picture of this Jomista Mat, can you describe what you see? Why not try and make one yourself?
Using different numbers of sticks, how many different triangles are you able to make? Can you make any rules about the numbers of sticks that make the most triangles?
Can you make the most extraordinary, the most amazing, the most unusual patterns/designs from these triangles which are made in a special way?
These are pictures of the sea defences at New Brighton. Can you work out what a basic shape might be in both images of the sea wall and work out a way they might fit together?
Imagine you have an unlimited number of four types of triangle. How many different tetrahedra can you make?
This problem invites you to build 3D shapes using two different triangles. Can you make the shapes from the pictures?
Can you make the birds from the egg tangram?
These squares have been made from Cuisenaire rods. Can you describe the pattern? What would the next square look like?
In how many ways can you fit two of these yellow triangles together? Can you predict the number of ways two blue triangles can be fitted together?
How can you put five cereal packets together to make different shapes if you must put them face-to-face?
What do these two triangles have in common? How are they related? |
Interpreting Graphs Worksheet Science. How are kids in the city different from kids in the country? Find out in this worksheet about data word problems: fifth grade students will practice interpreting double bar graphs as they look at how … The worksheet layout and formatting: visualizing a data set with a chart or graph can be an effective tool in understanding and interpreting it; looking at a table of numbers may not provide … Solve this equation for the value of x. Plot the solutions to the equation y x 8 on a graph; on the same graph, plot the solutions to the equation y x 3. What is the significance of the point …
Interpreting Graphs Worksheet Science. The respect and admiration that I already had for many wonderful science teachers throughout the city grew … student aptitudes with proportional reasoning, interpreting a graph, and reading and … Spreadsheets offer a range of automated functions to perform calculations on data, in addition to building graphs and other …; a file consists of one or more worksheets, and each worksheet contains … Data means information, so interpreting … and charts or graphs to represent them. Create your own nature diary of summer in your local area; you can use the worksheet below to help you.
Interpreting Graphs Worksheet Science. Using real data from NASA's GRACE satellites, students will track water mass changes in the US. Students will estimate water resources using heat map data and create a line graph for a specific location. Also explain the significance of the near-vertical portion of the curve in the lower-left quadrant of the graph; not all Zener diodes break down in the exact same manner, and some operate on … Learning is an extremely complex function, and children with learning difficulties experience problems in the brain's capacity to process, interpret, and store information; these aspects are all …
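Several of the resources above involve building and reading simple bar graphs from small data sets. As a hedged illustration only (the survey data, labels, and plotting choices below are invented and are not taken from any of the worksheets mentioned), a bar graph of the kind students are asked to interpret could be produced with matplotlib as follows.

```python
# Minimal sketch: plot a small, made-up class survey as a bar graph so students
# can practice reading values from the axes. Requires matplotlib.
import matplotlib.pyplot as plt

# Hypothetical data: a "favourite season" survey for one class (invented numbers).
seasons = ["Spring", "Summer", "Autumn", "Winter"]
votes = [5, 11, 4, 8]

fig, ax = plt.subplots()
ax.bar(seasons, votes, color="steelblue")
ax.set_xlabel("Season")
ax.set_ylabel("Number of votes")
ax.set_title("Favourite season (class survey)")

# Label each bar with its value, mirroring the reading-off questions these
# worksheets ask (e.g. "How many more students chose Summer than Autumn?").
for i, v in enumerate(votes):
    ax.text(i, v + 0.1, str(v), ha="center")

plt.show()
```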
Interpreting Graphs The Biology Corner
Worksheet to help students with interpreting graphs and data. This shows pie graphs, line graphs, and bar graphs, with questions that go with each type.
Reading Science Graphs Worksheets Teaching Resources Tpt
This Reading & Interpreting Graphs color-by-number would work great for science or math. It includes 12 questions in which students will use their knowledge of bar graphs, line graphs, tables, pie graphs, and more to answer the questions. Students will need to understand percents and ratios as w… Subjects: Math, Algebra, Geometry. Grades: 5th, 6th, 7th, Homeschool. Types: Activities, Fun.
Interpreting Graphs Csun
6.11.1 Interpreting Graphs. In science it is critical that students be able to interpret graphs and represent scientific phenomena in graphic form. Although graphing skills are taught extensively in mathematics classes, students are often unable to apply these skills to scientific concepts. The inability of students to transfer such basic concepts should be a concern to …
Science Exam Skills Graphs Tables Diagrams Formulae
Worksheets to help students practice key skills required for exams: interpreting graphs, describing patterns, understanding and identifying patterns in tables.
How To Interpret Scientific Statistical Graphs
Graphics, one of the most important aspects of presentation and analysis of data, help reveal structure and patterns. Graphical perception, i.e. the interpretation of a graph, is the visual decoding of the quantitative and qualitative information encoded on graphs. Objective: to discuss how to interpret some common graphs.
Science Interpreting Data Worksheets Teaching Resources
The NGSS standards require that students understand data in charts and graphs. This worksheet gives students some much-needed practice on making and interpreting 3 different kinds of graphs and tables: line, bar, and pie. I use this in my biology classes, but it could be used in math and other scie…
Interpreting Graphs And Charts Grade 3 Worksheets Learny
Displaying the top 8 worksheets found for Interpreting Graphs and Charts Grade 3. Some of the worksheets for this concept are: Bar Graph Work 1, Name Reading and Interpreting Graphs Work, Interpreting Data in Graphs, Reading Graphs Work, Student Toolkit 3, Fifth Grade Science and Math Work Interpreting Graphs, Baseball Bar Graph. Once you find your worksheet, click on the pop-out icon or print icon.
Interpreting Graphs And Charts Of Scientific Data
Data Worksheets Reading Interpreting Graphs
Data worksheets: high-quality printable resources to help students display data as well as read and interpret data from bar graphs, pie charts, pictographs, and column graphs. Displaying data in a column graph; read the data presented in tallies; display the data collected in a column graph; interpreting a bar graph. Continue reading …
Analyzing And Interpreting Data Sfusd Science
The Graph Choice Chart: an article from NSTA's The Science Teacher by Hannah Webber, Sarah J. Nelson, Ryan Weatherbee, Bill Zoellick, and Molly Schauffler, focusing on a tool to help students turn data into evidence. Graphing Stories worksheet: videos for students to learn how to graph movement.
Interpreting Graphs Worksheet Science. The worksheet is an assortment of 4 intriguing pursuits that will enhance your kid's knowledge and abilities. The worksheets are offered in developmentally appropriate versions for kids of different ages. Adding and subtracting integers worksheets in many ranges including a number of choices for parentheses use.
You can begin with the uppercase cursives and after that move forward with the lowercase cursives. Handwriting for kids will also be rather simple to develop in such a fashion. If you're an adult and wish to increase your handwriting, it can be accomplished. As a result, in the event that you really wish to enhance handwriting of your kid, hurry to explore the advantages of an intelligent learning tool now!
Consider how you wish to compose your private faith statement. Sometimes letters have to be adjusted to fit in a particular space. When a letter does not have any verticals like a capital A or V, the very first diagonal stroke is regarded as the stem. The connected and slanted letters will be quite simple to form once the many shapes re learnt well. Even something as easy as guessing the beginning letter of long words can assist your child improve his phonics abilities. Interpreting Graphs Worksheet Science.
There isn't anything like a superb story, and nothing like being the person who started a renowned urban legend. Deciding upon the ideal approach route … Cursive writing is basically joined-up handwriting. Practice reading by yourself as often as possible.
Research urban legends to obtain a concept of what's out there prior to making a new one. You are still not sure the radicals have the proper idea. Naturally, you won't use the majority of your ideas. If you've got an idea for a tool please inform us. That means you can begin right where you are no matter how little you might feel you've got to give. You are also quite suspicious of any revolutionary shift. In earlier times you've stated that the move of independence may be too early.
Each lesson in handwriting should start on a fresh new page, so the little one becomes enough room to practice. Every handwriting lesson should begin with the alphabets. Handwriting learning is just one of the most important learning needs of a kid. Learning how to read isn't just challenging, but fun too.
The use of grids The use of grids is vital in earning your child learn to Improve handwriting. Also, bear in mind that maybe your very first try at brainstorming may not bring anything relevant, but don't stop trying. Once you are able to work, you might be surprised how much you get done. Take into consideration how you feel about yourself. Getting able to modify the tracking helps fit more letters in a little space or spread out letters if they're too tight. Perhaps you must enlist the aid of another man to encourage or help you keep focused.
Interpreting Graphs Worksheet Science. Try to remember, you always have to care for your child with amazing care, compassion and affection to be able to help him learn. You may also ask your kid's teacher for extra worksheets. Your son or daughter is not going to just learn a different sort of font but in addition learn how to write elegantly because cursive writing is quite beautiful to check out. As a result, if a kid is already suffering from ADHD his handwriting will definitely be affected. Accordingly, to be able to accomplish this, if children are taught to form different shapes in a suitable fashion, it is going to enable them to compose the letters in a really smooth and easy method. Although it can be cute every time a youngster says he runned on the playground, students want to understand how to use past tense so as to speak and write correctly. Let say, you would like to boost your son's or daughter's handwriting, it is but obvious that you want to give your son or daughter plenty of practice, as they say, practice makes perfect.
Without phonics skills, it's almost impossible, especially for kids, to learn how to read new words. Techniques to Handle Attention Issues It is extremely essential that should you discover your kid is inattentive to his learning especially when it has to do with reading and writing issues you must begin working on various ways and to improve it. Use a student's name in every sentence so there's a single sentence for each kid. Because he or she learns at his own rate, there is some variability in the age when a child is ready to learn to read. Teaching your kid to form the alphabets is quite a complicated practice.
Supersymmetry, often referred to as SUSY in the scientific community, is a theory in particle physics that attempts to account for missing matter or dark matter in the universe, and to unify gravity with the other three fundamental forces of nature, which are electromagnetism and the weak and strong nuclear forces. The concept behind supersymmetry is an aspect of string theory that can be tested to some degree with current particle accelerator technology. It states that every force-carrying particle is matched by a partner particle of matter: force carriers are bosons, matter particles are fermions, and supersymmetry pairs each fermion with a bosonic superpartner that acts as a force carrier, and vice versa.
While the theory of supersymmetry solves many fundamental problems discovered in how elementary particles behave, there has been no direct evidence to support it as of 2011. The Large Hadron Collider (LHC), which, as of 2011, is the biggest particle accelerator to be built on Earth and consists of 17 miles (27 kilometers) of tunnel below the French-Swiss border, conducted a direct experiment in August 2011 to detect supersymmetry effects and failed to find any evidence to support the theory. This is in contrast to earlier promising indications from the Tevatron particle accelerator that suggested supersymmetry might be observed in the decay of B-meson subatomic particles. The Tevatron is a 3.9 mile (6.28 kilometer) accelerator located at Fermilab outside Chicago, Illinois, in the US.
The concept of partner particles in a grand supersymmetry theory has been evolving in particle physics for 20 years. Researchers are now questioning the foundation of the theory, as supporting experiments at the LHC, which should have provided some evidence for it, have not done so. The theory has been attractive to physicists for some time, as it allows for basic testing of aspects of string theory that are otherwise far beyond the capabilities of human technology for the foreseeable future.
The theory also could explain the great mystery of what dark matter is, which makes up an estimated 25% of the universe, with another roughly 70% attributed to dark energy. All normal matter and energy that are observable by conventional science make up less than 5% of the total mass and energy of the universe. Supersymmetry theory would also help account for the Higgs boson. The Higgs boson is a hypothetical particle that has been worked into calculations to resolve issues with the Standard Model of particle physics, but as of 2011 it is the only particle of the Standard Model that has not been observed in physics experiments.
Though simple versions of supersymmetry may now be ruled out, more complex approaches to it are still being considered. The most fundamental of elementary particles, the quark, would also have a supersymmetric partner known as the squark, which would be matched individually to each of the six quark flavors, which are up, down, strange, charm, bottom, and top. Other supersymmetric partners, if they are ever discovered, would be the gravitino matched to the graviton, the photino matched to the photon, the gluino matched to the gluon, and several others. Even well-known subatomic particles would have supersymmetry partners, such as the electron, which would have a selectron as its superpartner.
Mathematical logic is the study of reasoning in mathematics. We analyze mechanisms of mathematical reasoning using mathematical techniques. In this course, we first present propositional logic, a logical framework built on connectives such as conjunction, disjunction, implication, and negation, and establish its mathematical properties. Then, we introduce first-order predicate logic, in which universal and existential quantifiers bind individual variables, and we extend the Boolean model of propositional logic to first-order predicate logic. We study logic not only through Boolean semantics but also through a deductive system, namely natural deduction. The soundness and completeness properties relating the Boolean model and natural deduction are also explained in this course. We learn fundamental knowledge and the techniques of formalization in mathematical logic, which are also applied in computer science, especially in the theoretical study of programming languages.
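As a concrete illustration of the Boolean semantics and tautology checking discussed in this course, here is a small sketch (not part of the official course materials; the tuple representation of formulas and the function names are my own choices) of how propositions built from the connectives above can be evaluated under every valuation.

```python
# Sketch: Boolean semantics for propositional logic.
# A formula is either a variable name (str) or a tuple:
#   ("not", f), ("and", f, g), ("or", f, g), ("implies", f, g)
from itertools import product

def eval_formula(formula, valuation):
    """Return the truth value of `formula` under `valuation` (dict: var -> bool)."""
    if isinstance(formula, str):
        return valuation[formula]
    op = formula[0]
    if op == "not":
        return not eval_formula(formula[1], valuation)
    left = eval_formula(formula[1], valuation)
    right = eval_formula(formula[2], valuation)
    if op == "and":
        return left and right
    if op == "or":
        return left or right
    if op == "implies":
        return (not left) or right
    raise ValueError(f"unknown connective: {op}")

def variables(formula):
    """Collect the variable names occurring in a formula."""
    if isinstance(formula, str):
        return {formula}
    return set().union(*(variables(sub) for sub in formula[1:]))

def is_tautology(formula):
    """True iff the formula is true under every valuation."""
    vs = sorted(variables(formula))
    return all(
        eval_formula(formula, dict(zip(vs, row)))
        for row in product([False, True], repeat=len(vs))
    )

# Example: (A -> B) -> (~B -> ~A) is a tautology (contraposition).
contraposition = ("implies", ("implies", "A", "B"),
                             ("implies", ("not", "B"), ("not", "A")))
print(is_tautology(contraposition))   # True
```

A formula is satisfiable exactly when at least one valuation makes it true, so the same enumeration of valuations can be reused with `any` in place of `all`.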
You will learn fundamental knowledge of mathematical logic for computer science, which is indispensable for the theory of programming languages and for the symbolic approach in artificial intelligence. You will understand the relationship between the syntactic and semantic aspects of mathematical logic through the cases of propositional and first-order predicate logic. Furthermore, you will learn how the notions of truth and falsehood in mathematical statements are defined mathematically, and how reasoning is carried out in the mathematical sciences, including computer science.
Mathematical logic, reasoning, propositional logic, first-order predicate logic, Boolean model, functional completeness, conjunctive normal form, disjunctive normal form, duality between conjunction and disjunction, natural deduction, soundness, completeness, compactness, structure, model
|✔ Specialist skills||Intercultural skills||Communication skills||✔ Critical thinking skills||✔ Practical and/or problem-solving skills|
We will give lectures in this course. We will assign exercises in several classes, which help you to understand the material covered in the lecture.
|Class||Course schedule||Required learning|
|Class 1||Introduction to Mathematical Logic: history of mathematical logic and the relationship between mathematical logic and computer science||Gain an understanding of the history of mathematical logic and the relationship between mathematical logic and computer science|
|Class 2||Syntax of propositional logic: propositions, structural induction||Gain an understanding of the syntax of propositional logic, especially structural induction of propositions|
|Class 3||Intuitive introduction of propositional logic's semantics: propositions as polynomials||Gain an intuitive understanding of Boolean semantics of propositional logic|
|Class 4||Formal introduction of propositional logic's semantics: valuation, semantic function, tautology, satisfiability, unsatisfiability||Gain a formal understanding of the semantics of propositional logic.|
|Class 5||Exercises of propositional logic's semantics I||Understanding how to prove propositional logic's properties|
|Class 6||Mathematical properties of propositional logic: functional completeness, disjunctive normal form, conjunctive normal form, and duality||Understanding the theoretical properties on the propositional logic|
|Class 7||Exercises of propositional logic's semantics II||Understanding how to prove propositional logic's properties|
|Class 8||Deductive system for propositional logic: natural deduction and formalization of mathematical proofs||Understanding the formalization of propositional logic proofs in natural deduction|
|Class 9||Examples of formal proofs in natural deduction (see the example derivation after this schedule)||Understanding how to write formal proofs in natural deduction|
|Class 10||Soundness theorem of propositional logic and natural deduction||Understanding the soundness theorem of propositional logic and natural deduction|
|Class 11||Completeness theorem of propositional logic and natural deduction||Understanding the completeness theorem of propositional logic and natural deduction|
|Class 12||Exercises of natural deduction in propositional logic.||Learning how to handle the natural deduction|
|Class 13||Syntax of first-order predicate logic: first-order language and similarity type||Understanding the first-order language and similarity type|
|Class 14||The semantics of first-order predicate logic: structures and models||Understanding the semantics of first-order predicate logic|
|Class 15||The semantics of first-order predicate logic: structures and models||Understanding the semantics of first-order predicate logic|
|Class 16||Mathematical properties of first-order predicate logic: prenex normal form||Understanding prenex normal forms|
|Class 17||Examples of semantics of first-order predicate logic||Understanding the semantics of the first-order predicate logic and its theoretical structure.|
|Class 18||A deductive system of first-order predicate logic: natural deduction||Understanding of natural deduction for the first-order predicate logic|
|Class 19||A deductive system of first-order predicate logic: natural deduction||Understanding of natural deduction for the first-order predicate logic|
|Class 20||Soundness of the first-order predicate logic||Understanding of soundness theorem of first-order predicate logic and natural deduction|
|Class 21||Exercises of natural deduction in the first-order predicate logic||Understanding of resolution|
|Class 22||Advanced topics on the first-order predicate logic||Understanding of importance of Löwenheim-Skolem's theorem|
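As a hedged illustration of the kind of derivation practiced in Classes 8–12 (this particular example is illustrative and not taken from the course materials), a natural deduction proof of (A ∧ B) → (B ∧ A) can be written out in LaTeX as a sequence of numbered steps:

```latex
% A natural deduction derivation of (A \wedge B) \to (B \wedge A), step by step.
\begin{enumerate}
  \item $A \wedge B$ \hfill (assumption, to be discharged in step 5)
  \item $B$ \hfill ($\wedge$-elimination, from 1)
  \item $A$ \hfill ($\wedge$-elimination, from 1)
  \item $B \wedge A$ \hfill ($\wedge$-introduction, from 2 and 3)
  \item $(A \wedge B) \rightarrow (B \wedge A)$ \hfill ($\rightarrow$-introduction, discharging the assumption in 1)
\end{enumerate}
```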
To enhance effective learning, students are encouraged to spend approximately 100 minutes preparing for class and another 100 minutes reviewing class content afterwards (including assignments) for each class.
They should do so by referring to textbooks and other course material.
Course materials will be given in class.
Dirk van Dalen: Logic and Structure, 4th edition, Springer-Verlag
Reports (50%) and final examination (50%)
Fundamental knowledge on discrete mathematics and naïve set theory
If it is necessary to limit the number of students in the course to prevent the spread of SARS-CoV-2 (the novel coronavirus), priority may be given to students in the department of computer science. |
math proportions template is a math proportions template sample that gives information on math proportions template doc. When designing a math proportions template, it is important to consider different math proportions template formats such as math proportions template word and math proportions template pdf. You may add related information such as proportion math definition, proportion examples, proportion problems, and proportions worksheet.
Multiply across the known corners, then divide by the third number. This time the known corners are top left and bottom right. Sam tried using a ladder, tape measure, ropes and various other things, but still couldn't work out how tall the tree was. Sam measures a stick and its shadow (in meters), and also the shadow of the tree, and this is what he gets: … Now Sam makes a sketch of the triangles and writes down the "height to length" ratio for both triangles. The "height" could have been at the bottom, so long as it was on the bottom for both ratios, like this: … That is OK: you simply have twice as many stones as the number in the ratio, so you need twice as much of everything to keep the ratio. So the answer is: add 2 buckets of cement and 4 buckets of sand. (You will also need water and a lot of stirring….) That is the good thing about ratios: you can make the amounts bigger or smaller, and so long as the relative sizes are the same, then the ratio is the same.
A proportion is a mathematical statement that two ratios or rates are equal. Example of a true proportion: 4/3 = 24/18. Proportion says that two ratios (or fractions) are equal. Example: … A proportion is simply a statement that two ratios are equal. It can be written in … Here's an example: in a horror movie …
Solving proportions on mathhelp.com: solving proportions … then my ratio, in fractional (rather than in odds) format, is: … Math dictionary vocabulary template: use this template to organize your students' vocabulary section in their … Real World Algebra explains this process in an easy-to-understand format using cartoons and drawings; this makes self-…
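The rule quoted above ("multiply across the known corners, then divide by the third number") is just cross-multiplication. A minimal sketch, with invented stick-and-shadow numbers rather than the ones from the original example:

```python
# Sketch: solving a proportion a/b = x/c for the unknown x.
# "Multiply across the known corners, then divide by the third number":
# the known corners a and c sit diagonally opposite each other, so x = a * c / b.
def solve_proportion(a, b, c):
    """Given a/b = x/c, return x."""
    if b == 0:
        raise ValueError("b must be non-zero")
    return a * c / b

# Invented stick-and-shadow numbers: a 2 m stick casts a 3 m shadow while the
# tree casts a 15 m shadow, so stick_height / stick_shadow = tree_height / tree_shadow.
stick_height, stick_shadow, tree_shadow = 2.0, 3.0, 15.0
tree_height = solve_proportion(stick_height, stick_shadow, tree_shadow)
print(tree_height)  # 10.0, because 2/3 = 10/15
```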
A math proportions template Word can contain formatting, styles, boilerplate text, macros, headers and footers, as well as custom dictionaries, toolbars and autotext entries. It is important to define the Word styles beforehand in the sample document as styles define the appearance of Word text elements throughout your document. You may design other styles and format such as math proportions template pdf, math proportions template powerpoint, math proportions template form. When designing math proportions template, you may add related content, ratio and proportion examples, proportions calculator, ratio and proportion examples with answers, how to solve proportions. |
Total Project Time
Crater, Meteorite, Energy
How did the Moon get its craters? What about the craters on Earth? Why do they look the way they do? Find out in this fun science activity, as you make your own craters by dropping balls into a tray of flour.
This activity is not recommended for use as a science fair project. Good science fair projects have a stronger focus on controlling variables, taking accurate measurements, and analyzing data.
This project is messy—if possible, you should do it outside. If you must do the project inside, lay down a sheet or towels first to make clean-up easier.
2. Use the sieve to put a thin layer of cocoa powder on top of the flour.
3. Try dropping a ball into the pan from about half a meter above it (optionally, use the meter stick so you can drop from a consistent height).
5. Try dropping the same ball from a different height. What does the resulting crater look like?
6. Try dropping balls of different sizes from the same height, and examine the resulting craters.
7. You can even try throwing a ball sideways so it hits the pan at an angle, instead of coming straight down. How is the resulting impact pattern different from when you dropped the balls straight down?
If you did the project inside, vacuum or sweep up any flour and cocoa powder that got on the floor.
You should have found that the bigger the ball, or the faster it was moving, the bigger the resulting crater would be. This is because larger, faster-moving balls have more kinetic energy than smaller, slower-moving balls. This energy is transferred to the flour and cocoa powder when the ball hits the ground, causing it to fly outward, creating the crater (and a mess!). You should also have seen that the impacts churned up the “soil,” bringing some of the white flour to the surface near the impact site. While the pattern around the crater was probably symmetric if you dropped the ball straight down, sideways impacts would result in asymmetric patterns as more flour/cocoa powder were thrown in one direction than the other.
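To see roughly how the impact energy grows with the size of the ball and the drop height, here is a hedged back-of-the-envelope sketch; the ball masses and heights are invented for illustration, and air resistance is ignored.

```python
# Sketch: kinetic energy of a dropped ball just before impact, ignoring air resistance.
# Speed from a drop of height h: v = sqrt(2*g*h); kinetic energy: KE = 0.5*m*v^2 = m*g*h.
import math

G = 9.81  # gravitational acceleration near Earth's surface, m/s^2

def impact_energy(mass_kg, drop_height_m):
    """Kinetic energy (joules) of a ball of the given mass dropped from the given height."""
    speed = math.sqrt(2 * G * drop_height_m)
    return 0.5 * mass_kg * speed ** 2

# Invented examples: a marble, a golf ball, and a baseball, each from two heights.
for name, mass in [("marble", 0.005), ("golf ball", 0.046), ("baseball", 0.145)]:
    for height in (0.5, 1.0):
        print(f"{name:9s} from {height} m: {impact_energy(mass, height):.2f} J")
```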
Craters are round, bowl-shaped depressions surrounded by a ring, like the one shown below.
Impact craters are made when a meteorite crashes into a planet or moon (as opposed to volcanic craters, which are created when a volcano erupts). Just like in your science experiment, the size and shape of the crater depends on how big the meteorite was and how fast it was going when it hit the ground. A bigger, faster-moving meteorite will create a bigger crater, sometimes throwing material very far away from the impact site.
Some of the craters on the Moon are so big that you can see them with the naked eye! While Earth has over 100 known impact craters, not all of them are obvious. Unlike the Moon, Earth has an atmosphere with weather that causes erosion (wind and rain), along with animals and plants that can move soil and change landscapes over time. So, some craters on Earth’s surface may be eroded or overgrown. Many meteoroids (they are called meteoroids while they are still in space, and meteorites once they hit the ground) also burn up in Earth’s atmosphere, never reaching the ground at all.
Total Project Time
Lift, Aerodynamics, Weight, Gravity
During the Mars 2020 mission, NASA plans to explore the surface of Mars using a rover in combination with a lightweight helicopter. To be able to fly on Mars, this helicopter must be super light and have very efficient blades. If not, it will never generate enough lift to get off the ground. In this activity, you will make your own paper helicopter and test different blade designs. Will your findings be reflected in NASA’s design? Try it out and see for yourself!
This activity is not recommended for use as a science fair project. Good science fair projects have a stronger focus on controlling variables, taking accurate measurements, and analyzing data.
1. Hold the paper helicopter by the middle with the paper clip facing down, then let it go from high up. If it does not spin down, have an adult help you drop one of your paper helicopters from a safe, elevated location (such as while standing on a chair or a step stool, from a balcony, etc.)
2. Compare the two paper helicopters.
3. Try it out!
4. Drop each paper helicopter a couple more times from the same height.
5. As your helicopter starts to rotate, the spinning blades generate lift that slows it down. When you look carefully, you may notice this. Drop a paper helicopter and pay attention during the first fraction of a second before it starts to spin. Compare how fast the helicopter falls during that fraction of a second to how fast it falls once it starts spinning.
6. Because Mars’s atmosphere is about 100 times thinner than Earth’s atmosphere, it is much harder for a helicopter to create enough lift to get off the ground. Engineers had to change the blade design to create more lift so the helicopter could fly in Mars’s thin atmosphere. You have two paper helicopters where only the blade length is different.
7. Blade length is just one way to change the helicopter design.
8. Look at all your test results.
9. Compare your findings with the illustration of the helicopter Ingenuity on the surface of Mars (illustration from NASA). This helicopter will fly around and help NASA’s Perseverance rover explore Mars.
When you drop a paper helicopter, it will take a fraction of a second for it to start spinning and slow down. Did you notice how it fell faster before it started spinning? Once the paper helicopter spins, it should generate a push called “lift” which slows its descent to the ground. The paper helicopter that has shorter blades should fall faster because the shorter blades do not generate as much lift.
There might not be one single design for a paper helicopter that allows it to descend the slowest. That said, longer and wider blades that hit the air at an angle are generally better. These changes to the blades generally create more lift, and as a result, slow down the fall of the paper helicopter more. If you change the dimensions of your paper helicopter too drastically, however, your helicopter may actually become unstable and perform worse.
Due to its thin atmosphere, the blades of a Mars helicopter must be bigger and spin faster than they would on Earth in order to generate enough lift. NASA's Ingenuity helicopter is very lightweight—only 4 pounds (on Earth)! Each blade is about 2 feet (0.6 meters) long, and the blades rotate about 2400 times per minute. A solar panel powers the helicopter and it operates autonomously. It is designed to land safely on the uneven Martian terrain. This way, it can help NASA's Perseverance rover explore the Martian surface.
Mars’s gravity is much weaker than Earth’s (about 38%). This means that while the Ingenuity helicopter weighs 4 pounds on Earth, it only weighs about 1.5 pounds on Mars! You might think that this makes it much easier for the helicopter to fly. However, Mars’s thin atmosphere actually makes it more difficult. A helicopter needs air to fly. Air is made up of tiny particles that bounce around and press against everything around them. When the particles flow over a spinning helicopter blade, they collectively press up on the bottom of the blade harder than they press down on the top. This generates a net upward push, called lift. In general, the more particles there are packed closely together, the harder they can press on surfaces. In the thin Martian atmosphere, the particles are spaced much farther apart. In order to take off, the lift generated by the helicopter must be bigger than its weight, the force of gravity pulling it down. The reduction of lift due to the thinner atmosphere is much larger than the reduction of weight, therefore, in the thinner Martian atmosphere, the blades must be bigger and spin faster than they would in Earth’s thicker atmosphere.
Other factors, like changing the shape or angle of the blades, can also influence lift. You may have experimented with some of these factors with your paper helicopter designs. Your paper helicopters did not generate enough lift to fly upward, but the lift helped slow their descent. The more lift they generated, the slower they fell.
In this activity, students will:
Prepare for the lesson by watching the “Do It Yourself Space: Stomp Rockets” videos available above.
Prior to launch day, construct at least one rocket launcher. Take the Stomp Rocket Launcher Assembly Instructions to a hardware store to make purchasing the right pieces easy. While at the hardware store, purchase enough 1/2-inch PVC pipe to make the launchers and the rocket forms. If you do not own a PVC cutter, it’s a good idea to purchase one or ask the hardware store to pre-cut the PVC pipe for you in the specified lengths. You may also use a fine-tooth saw to cut PVC.
Safety Note: Use caution when cutting the PVC for the launcher and rocket forms.
1. Roll a piece of 8.5 x 11-inch paper snugly (but not too tightly) around a 24-inch length of 1/2-inch PVC pipe. Optionally, use one of the custom skins.
2. Tape the paper to itself (but not to the PVC pipe). Use enough tape to completely seal the seam, making the seam airtight. This will be the body, or fuselage, of your rocket.
3. Slide the fuselage off the PVC form. Verify that the fuselage slips easily from the PVC form so that it will fit on the launch tube later.
4. Make a nose cone either by pinching one end of the fuselage, folding it over and taping it to the rocket body; or by cutting out a 3/4 circle, rolling it into a cone shape and taping it to the fuselage. Optionally, use the custom nose-cone template. Secure the nose cone using plenty of tape to make the rocket airtight. (Blow through the rocket from the bottom to check for leaks.)
5. Cut out fins (of any shape) and attach them symmetrically to the lower part of the fuselage (opposite the nose cone), leaving the opening at the bottom of the fuselage open and clear of tape.
Allow students to experiment with the size and shape of their rocket fins. Through repeated flights, students will discover that proportional, firm fins will provide the most stabilization to their rocket and eliminate drag.
6. Have students color and name their rockets to differentiate them from other rockets in the group.
When headed out to launch, always have spare empty 2-liter soda bottles and duct tape handy. Though some bottles will launch 20 to 40 rockets, bottles will eventually fail and will need to be replaced.
Because of their lightweight design, stomp rockets perform best on non-windy days. If you are located in a windy location, try to orient your launch location behind a windbreak such as a gymnasium or other large building.
Secure an outdoor location that is clear of overhead obstructions (trees, building roofs, power lines, etc.) and has a ground area of at least 100 meters by 25 meters for best altitude-tracking results. A shorter, 50-meter or 25-meter baseline may also be used.
If calculating altitude using tracking stations A and B, place the rocket launcher at the midpoint of a 100-meter baseline. If estimating altitude using local markers such as marks on buildings, orient the rocket launchers and observers appropriately.
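One common way to turn the two tracking stations' angle readings into an altitude estimate is simple triangulation over the baseline; the sketch below assumes the rocket stays roughly in the vertical plane of the baseline, and the angle values are invented. This may differ from the exact tracking procedure used in the lesson.

```python
# Sketch: estimating rocket altitude from elevation angles measured at two tracking
# stations, A and B, at the ends of a baseline (e.g. 100 m), assuming the rocket
# stays roughly in the vertical plane of the baseline.
import math

def altitude_two_station(baseline_m, angle_a_deg, angle_b_deg):
    """Altitude h = d * tan(a) * tan(b) / (tan(a) + tan(b))."""
    tan_a = math.tan(math.radians(angle_a_deg))
    tan_b = math.tan(math.radians(angle_b_deg))
    return baseline_m * tan_a * tan_b / (tan_a + tan_b)

def altitude_one_station(distance_m, angle_deg):
    """Single-station estimate, assuming the rocket rises straight above the launcher."""
    return distance_m * math.tan(math.radians(angle_deg))

# Invented example readings: stations 100 m apart, launcher at the midpoint.
print(f"{altitude_two_station(100, 35, 48):.1f} m")   # two-station estimate
print(f"{altitude_one_station(50, 40):.1f} m")        # one-station, 50 m from launcher
```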
Stomping: Be sure students stomp on the bottle across the bottle label, perpendicular to the body of the bottle. This is the most flexible zone of the bottle and allows for it to be reused numerous times. If students stomp on the bottom end of the bottle, it will often shatter, rendering the bottle unusable.
Aiming: The PVC legs of the launcher are different lengths. This allows for adjustment on uneven ground and aiming the launch into the wind if you are launching on a windy day. (Launching into the wind will compensate for rocket drift and make rockets easier to track and retrieve.) Additionally, horizontal distance competitions can be held and launch angles adjusted. Place a basketball in the landing zone, have students imagine the ball is Mars, and launch their rocket to Mars! If performing horizontal launches, a large indoor space such as a cafeteria or gymnasium may be used.
Re-inflating the bottle: Bottles can be easily re-inflated using air from your lungs. Place your hand in a fist around the open end of the launch tube and blow into your fist to re-inflate the bottle. Using your fist protects you from the unsanitary conditions that may exist on your rocket launcher.
Do you know that on everything you touch, you leave fingerprints? If your hands are very dirty, this is obvious, because you can actually see them. But even if your hands seem clean, your fingerprints will stay behind on the surfaces you touch—they are just invisible! Do you want proof? Then make them visible in this activity and collect your own fingerprints!
Detecting invisible fingerprints is an important task in forensic science, a branch of science that helps criminal investigations by collecting and analyzing evidence from crime scenes. Fingerprints are the most commonly-collected type of evidence. Because fingerprint patterns are unique to a specific person, they are a very reliable way of identifying a suspect. There are different types of fingerprints that can be left behind: 1) a fingerprint imprint in a soft surface, such as wax or soap; 2) a patent fingerprint, visible to the naked eye, such as fingerprints resulting from dirty hands; and 3) latent fingerprints, which are invisible, but still present.
These invisible latent fingerprints are made of water, fatty acids, amino acids, and triglycerides—in other words, they result from the oil and sweat that your skin produces naturally. To make them visible, you have to find a way to detect one of these substances present in the invisible fingerprint. The easiest method is called dusting, in which you use a very fine powder that can stick to the oil in the fingerprint. Once the fingerprint becomes visible, you can lift it from the surface with clear tape and transfer it to another surface to then take into the laboratory to analyze further. Other methods include using chemicals that react with the amino acids or water in the fingerprint; the chemical reaction results in a colored fingerprint, which you can then analyze easily.
Many factors determine the quality of a fingerprint on a surface. One of the most important factors is the surface texture. Fingerprints are most easily detected on smooth, non-textured, and dry surfaces. The rougher or more porous the material, the more difficult it will be to get good fingerprint evidence. Another factor is the condition of the skin on your fingertips. If it is very sweaty and oily, you are more likely to leave behind prints than if it is nice and clean. Of course, wearing gloves also prevents leaving behind fingerprints. Test it yourself and collect your own fingerprint evidence like a real crime scene investigator in this activity!
Extra: Can you prevent leaving behind fingerprints on a smooth surface? What about wearing gloves? Repeat steps 1–6 of this activity, but this time, wear gloves. Can you still find a fingerprint on the glass or metal surface?
Extra: In this activity, you tested a nice, smooth glass or metal surface. Do you think other surface textures or materials will result in fingerprints as well? There is only one way to find out! Test other materials such as paper, textiles, or wood. How do fingerprints look on these surfaces?
Extra: Now that you are a professional in collecting fingerprints from surfaces, try to find them in your house! Where is the best place to look for them? Can you find your own ones or some from your family members and make them visible?
Were you able to collect some of your own fingerprints? On a smooth surface like glass or metal, fingerprints tend to stick very well. With your unwashed hands, you should have been able to make your fingerprint visible with either cacao or baby powder. Just a little powder applied with a brush should be enough to reveal your fingerprint. If you apply too much powder, however, the fine details of your print tend to get lost. When you press too hard onto the surface with the brush, the fingerprint will be wiped away, so you have to be careful when treating the surface with the powder.
Your freshly-washed hands have much less oil and sweat on their skin as they have been washed away with the soap and water. This results in a much less pronounced fingerprint. You might have had difficulties in collecting this fingerprint or may not have found one at all. On the other hand, if you apply hand lotion, which contains lots of oil and fat, this will make your fingertips much stickier, which leads to a much more pronounced fingerprint. You should have seen a big fat fingerprint once you applied the powder to the surface where you put your finger. If you compare all the prints you collected, the one with hand lotion should be most visible, whereas the print with your washed hands should be barely visible.
If you did the extra activities, you might have noticed that porous or rough surfaces or materials such as paper or textiles are not very good for collecting fingerprints. Also, when wearing gloves, no fingerprints are left behind. These are all important factors that real crime scene investigators have to take into account when collecting fingerprints at a crime scene. Considering your results, where would you look for fingerprints in your home? Did you find some?
Make a colorful erupting volcano in your kitchen with lemons and baking soda!
The foam is safe to rinse down the drain. You can throw the lemons in the trash or compost.
When you added the baking soda, it started to fizzle and foam a little bit. When you mixed the baking soda into the lemon with a knife, it should have started foaming a lot more, bubbling up and over the sides of the lemon. Eventually the reaction slowed down and stopped. See the Digging Deeper section to learn about the chemical reaction that creates the foam!
Lemons are a type of citrus fruit, along with limes, oranges, and grapefruit. Citrus fruits are known for their sour taste, which you probably noticed if you’ve ever eaten one! Citrus fruits taste so sour because their flesh contains a lot of citric acid. Citric acid, like any other acid, is a chemical that has lots of hydrogen ions (H+). These hydrogen ions are what our taste buds recognize as a sour taste. Acids like to get rid of their hydrogen ions, and do this by reacting with other chemicals, called bases, that contain lots of hydroxide ions (OH–). When an acid and base combine (this is called an acid-base reaction), they neutralize each other.
Baking soda (NaHCO3) is a base, which means it can neutralize acids by accepting their hydrogen ions. When it comes in contact with an acid, such as citric acid, a chemical reaction starts. The reaction neutralizes the acid and releases carbon dioxide (CO2) gas. This gas wants to escape the liquid, creating bubbles. This is exactly what you see in the lemon volcano reaction. The citric acid, which is released into the lemon juice when you mash the fruit, reacts with the baking soda that you pour over the lemon. As soon as the two combine, carbon dioxide gas is produced and creates bubbly foam. Once the citric acid and the baking soda have neutralized each other, the reaction stops, so eventually your volcano will stop erupting.
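For readers who want the chemistry spelled out, here is a sketch of the overall reaction in LaTeX notation, assuming citric acid is the only acid reacting (lemon juice also contains small amounts of other acids):

\mathrm{C_6H_8O_7 + 3\,NaHCO_3 \longrightarrow Na_3C_6H_5O_7 + 3\,H_2O + 3\,CO_2\uparrow}

One molecule of citric acid neutralizes three molecules of baking soda, producing sodium citrate, water, and the carbon dioxide gas that makes the foam.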
45 minutes to 1 hour
Chemistry, Acid, Base, Chemical Reaction
Have you ever wanted to send your friend a secret message that no-one else can read? Then you might know of invisible ink—a special type of ink that you can use for writing and that does not show up on paper. Only after a special treatment will it appear again magically, and the message can be read. How does this work? Find out in this activity and write your own secret messages!
Were you able to reveal your secret message with both invisible ink methods? You should have been! Applying heat to the paper with lemon juice should have made your secret message visible. Lemon juice is a relatively strong organic acid as it contains citric acid. When you write on the paper with lemon juice, the acid weakens the fibers within the paper, and it starts to decompose. At the same time, the carbohydrates from the lemon juice—like the citric acid—are absorbed into the paper. Carbohydrates don’t like heat and therefore when you use the hot iron on the paper with your secret message, the carbohydrates start to carbonize. This process releases carbon which oxidizes when it comes in contact with air. This oxidation reaction yields brown substances which make your secret message visible.
The baking soda on the other hand is alkaline or basic. It gets absorbed by the paper similarly to the lemon juice and once dried isn’t visible anymore (although you might have noticed some baking soda powder residues on the paper, which you can wipe off easily). This most likely changed when you painted the paper with the turmeric solution as turmeric changes color depending on if it is in an acidic or alkaline environment. When it comes in contact with an alkaline or basic substance such as the baking soda on the paper, it turns from yellow to a deep red. Your secret message gets revealed! Concentrated grape juice should have also made your message visible as the grape juice contains acids that reacted with the alkaline baking soda absorbed by the paper. A similar reaction is possible with other acidic solutions such as blueberry juice or hibiscus tea.
Invisible ink is an ancient invention that was already in use more than 2,000 years ago. Its purposes were manifold, ranging from plotting conspiracies or espionage to writing secret love letters. Since then, many different recipes for secret ink have been developed. All of them are based on chemical reactions that make one of the components inside the ink visible. The type of chemical reaction can vary, but they all result in a colored end product that makes the ink visible.
There are several types of reactions that can be used to expose invisible ink including acid-base, oxidation-reduction, and heat reactions. Acid-base reactions occur when an acidic or alkaline component of the ink can be made visible by a special chemical (indicator) that changes color depending on if it is in an acidic or alkaline environment. Similarly, oxidation-reduction reactions can be used, in which a chemical compound changes color depending on its redox state. You can also make use of the fact that some chemical compounds are sensitive to heat or light and change color once they are exposed to these conditions.
Depending on the type of chemical reaction the invisible ink is based on, making the ink visible can take different forms such as adding heat, applying a certain chemical or using ultraviolet light. All these special treatments result in the secret message being revealed.
Solutions Miscibility, Density, Diffusion
Summertime often brings beautiful fireworks displays. Whereas you normally look up into the sky to see fireworks, in this activity we will take the bursts of color underwater—with chemistry. Although it is not exactly the same as real fireworks, you will be amazed by the color explosions you will see. Curious about what that looks like? See for yourself in this activity!
As you probably experienced, oil and food coloring do not mix well. This is because food coloring is a polar liquid, but oil is a nonpolar liquid. If you mix the two, you will see lots of little food coloring drops dispersed in the oil, but the two liquids do not mix. When you add this mixture to the water in the glass, again, the oil will not mix with the water, as water is also a polar substance. The oil will form a separate layer on top of the water, as oil is less dense (or lighter) than water. The food coloring, which is heavier than oil and able to mix with water, sinks to the bottom of the oil drop or layer. As soon as it reaches the oil/water interface, it starts mixing with the water molecules through a process called diffusion.
That means that the food coloring molecules move from a high concentration of food coloring to a lower concentration of food coloring inside the water. This is why you do not see all of the water changing color immediately, but a slow mixing of both with some parts of the water still clear and others becoming colored. These color bursts and food coloring trails within the water might have reminded you of fireworks exploding in the sky and slowly falling to the ground. When you mix the solution with a fork or spoon, all of the food coloring molecules are spread equally within the water and the whole solution becomes colored.
On the other hand, if you mix the food coloring with the water, both liquids will mix immediately. When you add a drop of this mixture into the glass of oil, you should have noticed that the drop sinks all the way to the bottom of the glass, as water is denser than oil. At the same time, the food coloring stays mixed with the water, but does not mix with the oil. Even when you mix the solution with a spoon or fork, both liquids stay separated. You used this effect to make your own lava lamp when you did the extra activity!
You probably know the saying “oil and water do not mix,” which is true. But why is that so? Some liquids do mix to become a homogeneous mixture, while others do not. This depends on their miscibility. Whether a liquid mixes with another is dependent on their individual molecular structures. Molecules can be classified into polar and nonpolar molecules. When atoms come together to form a molecule, they share negatively-charged electrons in a chemical bond. Sometimes, one atom attracts the electrons more than the other atom does, which results in a slight separation of the charge into a positive and negative pole within the molecule, which is also called an electric dipole. When this happens, the molecule is usually a polar molecule. Molecules that have an equal charge balance are nonpolar molecules.
A simple rule, “like dissolves like,” can tell you which liquids mix or not. This means that liquids with similar polarity are miscible, whereas liquids with different polarities do not mix. Water is a polar liquid, which means its molecules have electric dipoles. On the other hand, oils are nonpolar, which is why they do not mix well with water. When liquids do not mix, they separate to build two separate layers on top of each other. Which layer is on top depends on the density of each liquid. The density is a measure of mass per unit of volume, which means that the heavier liquid will sink to the bottom and the lighter one will float on top.
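As a rough illustration of why the oil ends up on top, density can be written as mass divided by volume, $\rho = m / V$. Typical values (approximate, and they vary with temperature and with the exact oil used) are about $1.00\ \mathrm{g/mL}$ for water and roughly $0.91\text{–}0.93\ \mathrm{g/mL}$ for common vegetable oils, so the less dense oil floats on the denser water.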
Breathing occurs effortlessly, but did you ever wonder how we breathe? In this lesson, students will make a model to discover how air effortlessly flows in and out of our lungs. Next, students will compare lung breathing to other ways of breathing to discover reasons why humans might have developed lungs.
Monday-Friday: 8:00am – 4:00pm
Saturdays, Sundays & Federal Holidays: Closed
Youth Center Bldg. 350
School Liaison Officer, Stephanie Iverson
Duty Cell: 720-661-7411
digitalRead(): Digital Functions in Arduino Programming (Part 2)
This article details the digitalRead() function in Arduino programming, explaining how it works and how to use it in your own sketches.
As you may know, a microcontroller, in simplest terms, can be called the brain of the circuit. That is, the microcontroller is the component of the circuit responsible for storing and processing the code that is written. As you can see in this image, a microcontroller has pins - these tiny metal legs - that “attach” it to the breadboard, so to speak, and connect it to the rest of your circuit:
Formally speaking, a pin is a metal terminal of any component in an integrated circuit; however, in this case we’re only referring to those of the microcontroller (as pictured above).
If you haven’t familiarized yourself with the first article on digital functions, describing the pinMode function, it will be linked in the description below; for a brief overview, though, pinMode allows pins to be assigned as inputs or outputs based on their role in the circuit.
For inputs, as the name implies, the microcontroller will be receiving data. When a pin is configured as an input, the data it receives can be accessed with the digitalRead function, which reads the value from a specified digital pin (the pin is specified in parentheses in the code) as either HIGH or LOW.
To use the digitalRead function in your Arduino program, write digitalRead(pin), where 'pin' refers to the pin number labeled on the microcontroller.
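As a minimal illustration (the pin number 2, the use of a button or other digital sensor, and the 9600 baud rate are assumptions made for this sketch, not requirements of digitalRead), a program that reads a digital input and prints its state could look like this:

// Minimal sketch: configure pin 2 as an input and print its state over serial.
const int sensorPin = 2;    // assumed pin for this illustration

void setup() {
  pinMode(sensorPin, INPUT);   // set the pin as an input (see the pinMode article)
  Serial.begin(9600);          // open the serial connection so readings can be printed
}

void loop() {
  int value = digitalRead(sensorPin);  // returns HIGH or LOW
  Serial.println(value);               // prints 1 for HIGH, 0 for LOW
  delay(200);                          // short pause so the output stays readable
}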
When we use this function in programming, most often, if not always, it refers to the use of a sensor or other input. That means that, if your circuit uses LEDs, most likely they will not serve as the components being “read.”
If you are having trouble picturing how it works in practice, imagine this: your circuit uses a sensor, which has three pins that are connected to power, ground, and a pin on the microcontroller, respectively. With this setup, in your program, you can use the digitalRead() function to access the information from the pin of the microcontroller that is connected to your sensor (the input in your circuit).
Here is an example of the digitalRead() function used in one of my programs. In this fragment, I declare the variable 'state' as the number that is returned when the sensor's input is read by the program. However, I could have also replaced the 'state' within the If statement with 'digitalRead(sensorpin).'
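The original code image is not reproduced here, so the following is only a hypothetical reconstruction of the fragment described above; the names 'sensorpin' and 'ledpin' and the pin numbers are illustrative assumptions, not taken from the original program:

// Hypothetical reconstruction: store the sensor reading in 'state' and act on it.
int sensorpin = 7;    // assumed pin wired to the sensor's signal line
int ledpin = 13;      // assumed pin wired to an LED

void setup() {
  pinMode(sensorpin, INPUT);
  pinMode(ledpin, OUTPUT);
}

void loop() {
  int state = digitalRead(sensorpin);  // the number returned when the sensor's input is read
  if (state == HIGH) {                 // 'state' here could be replaced with digitalRead(sensorpin) directly
    digitalWrite(ledpin, HIGH);        // react to the input, e.g. turn the LED on
  } else {
    digitalWrite(ledpin, LOW);
  }
}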
It is very important to ensure that the pin you instruct the computer to read is truly the pin used in the circuit; if not, the digitalRead() function can return HIGH or LOW values at random. Also, because the writing and pin labels on microcontrollers are often very small, it can be easy to confuse pins and their respective designations. For that reason, when I am working with my circuits, I often search for an image of the microcontroller schematic. Here is an example of the Arduino Nano 33 BLE Sense schematic that I found on the internet:
Additionally, it is important to ensure that, when using digitalRead(), you are referencing the digital pins on a microcontroller, as opposed to the analog pins.
A future article will explain the difference between digital and analog signals, but an easy trick to identify the digital pins is either through the use of a schematic (which usually specifies the pin designation) or by simply using the pins that are not denoted with a capital A.
In some cases, however, analog pins can also serve as digital pins; nonetheless, there are some exceptions, so unless you are running short on pins, I suggest following the tricks identified above.
This page uses content from Wikipedia and is licensed under CC BY-SA.
The history of Sikhism started with Guru Nanak Dev Ji, the first Guru, who lived in the fifteenth century in the Punjab region in the northern part of the Indian subcontinent. The religious practices were formalised by Guru Gobind Singh Ji on 13 April 1699, when he baptised five persons from different social backgrounds to form the Khalsa (ਖ਼ਾਲਸਾ). These first five, the Pure Ones, then baptised Guru Gobind Singh into the Khalsa fold. This gives the order of the Khalsa a history of around 300 years.
The history of Sikhism is closely associated with the history of Punjab and the socio-political situation of the 16th-century northwestern Indian subcontinent (modern Pakistan and India). During the Mughal rule of India (1556–1707), Sikhism came into conflict with Mughal imperial law, because the Gurus influenced Mughal political successions while cherishing saints from both Hinduism and Islam. Of the ten Sikh Gurus, two, Guru Arjan Dev and Guru Tegh Bahadur, were tortured and executed by Islamic rulers for refusing to convert to Islam and for opposing the persecution of Sikhs and Hindus; close kin of several Gurus were brutally killed (such as the six- and nine-year-old sons of Guru Gobind Singh), along with numerous other revered figures of Sikhism (such as Banda Bahadur, Bhai Mati Das, Bhai Sati Das and Bhai Dayala). Subsequently, Sikhism militarised to oppose Mughal hegemony. The emergence of the Sikh Confederacy under the misls and of the Sikh Empire under the reign of Maharaja Ranjit Singh was characterised by religious tolerance and pluralism, with Christians, Muslims and Hindus in positions of power. The establishment of the Sikh Empire is commonly considered the zenith of Sikhism at the political level; during this time the Sikh Empire came to include Kashmir, Ladakh, and Peshawar. A number of Muslim and Hindu peasants converted to Sikhism. Hari Singh Nalwa, the commander-in-chief of the Sikh army along the North West Frontier, took the boundary of the Sikh Empire to the very mouth of the Khyber Pass. The Empire's secular administration integrated innovative military, economic and governmental reforms.
The months leading up to the partition of India in 1947 saw heavy conflict in the Punjab between Sikhs and Muslims, which led to the effective religious migration of Punjabi Sikhs and Hindus from West Punjab, mirroring a similar religious migration of Punjabi Muslims from East Punjab.
Guru Nanak Dev (1469–1539), the founder of Sikhism, was born to Mehta Kalu and Mata Tripta in the village of Talwandi, now called Nankana Sahib, near Lahore. His father, Mehta Kalu, was a Patwari, an accountant of land revenue in the government. He had one older sister, Bibi Nanki.
From an early age, Guru Nanak Dev Ji seemed to have acquired a questioning and enquiring mind and refused as a child to wear the ritualistic "sacred" thread called a Janeu and instead said that he would wear the true name of God in his heart as protection, as the thread which could be broken, be soiled, burnt or lost could not offer any security at all. From early childhood, Bibi Nanki saw in her brother the Light of God but she did not reveal this secret to anyone. She is known as the first disciple of Guru Nanak.
Even as a boy, his desire to explore the mysteries of life eventually led him to leave home. Nanak married Sulakhni, daughter of Moolchand Chona, a trader from Batala, and they had two sons, Sri Chand and Lakshmi Das.
His brother-in-law, Jai Ram, the husband of his sister Nanki, obtained a job for him in Sultanpur as the manager of the government granary. One morning, when he was twenty-eight, Guru Nanak Dev went as usual down to the river to bathe and meditate. It was said that he was gone for three days. When he reappeared, it is said he was "filled with the spirit of God". His first words after his re-emergence were: "There is no Hindu, there is no Muslim". With this secular principle he began his missionary work. He made four distinct major journeys, in the four different directions, which are called Udasis, spanning many thousands of kilometres, preaching the message of God.
Guru Nanak spent the final years of his life in Kartarpur, where Langar, free blessed food, was available. The food would be shared by Hindus rich and poor, of both high and so-called low castes. Guru Nanak worked in the fields and earned his own livelihood. After appointing Bhai Lehna as the new Sikh Guru, Guru Nanak passed away on 22 September 1539 at the age of 70.
In 1538, Guru Nanak chose Lehna, his disciple, as successor to the Guruship rather than one of his sons. Bhai Lehna was named Guru Angad and became the successor of Guru Nanak. Bhai Lehna was born in the village of Harike in Ferozepur district, Punjab, on 31 March 1504. He was the son of a small trader named Pheru. His mother's name was Mata Ramo (also known as Mata Sabhirai, Mansa Devi, Daya Kaur). Baba Narayan Das Trehan was his grandfather, whose ancestral house was at Matte-di-Sarai near Mukatsar.
Under the influence of his mother, Bhai Lehna began to worship Durga (a Hindu goddess). He used to lead a group of Hindu worshippers to the Jawalamukhi Temple every year. He married Mata Khivi in January 1520 and had two sons (Dasu and Datu) and two daughters (Amro and Anokhi). The whole Pheru family had to leave their ancestral village because of the ransacking by the Mughal and Baloch military who had come with Emperor Babur. After this the family settled at the village of Khadur Sahib by the River Beas, near Tarn Taran Sahib, a small town about 25 km from Amritsar city.
One day, Bhai Lehna heard the recitation of a hymn of Guru Nanak from Bhai Jodha (a Sikh of Guru Nanak Sahib) who was in Khadur Sahib. He was thrilled and decided to proceed to Kartarpur to have an audience (darshan) with Guru Nanak. So while on the annual pilgrimage to Jwalamukhi Temple, Bhai Lehna left his journey to visit Kartarpur and see Baba Nanak. His very first meeting with Guru Nanak completely transformed him. He renounced the worship of the Hindu Goddess, dedicated himself to the service of Guru Nanak and so became his disciple, (his Sikh), and began to live in Kartarpur.
His devotion and service (Sewa) to Guru Nanak and his holy mission was so great that he was instated as the Second Nanak on 7 September 1539 by Guru Nanak. Earlier Guru Nanak tested him in various ways and found an embodiment of obedience and service in him. He spent six or seven years in the service of Guru Nanak at Kartarpur.
When Guru Nanak passed away on 22 September 1539, Guru Angad left Kartarpur for the village of Khadur Sahib (near Goindwal Sahib). He carried forward the principles of Guru Nanak both in letter and spirit. Yogis and saints of different sects visited him and held detailed discussions about Sikhism with him.
Guru Angad introduced a new alphabet known as Gurmukhi Script, modifying the old Punjabi script's characters. Soon, this script became very popular and started to be used by the people in general. He took great interest in the education of children by opening many schools for their instruction and thus increased the number of literate people. For the youth, he started the tradition of Mall Akhara, where physical, as well as spiritual exercises, were held. He collected the facts about Guru Nanak's life from Bhai Bala and wrote the first biography of Guru Nanak. He also wrote 63 Saloks (stanzas), which are included in the Guru Granth Sahib. He popularised and expanded the institution of Guru ka Langar that had been started by Guru Nanak.
Guru Angad travelled widely and visited all important religious places and centres established by Guru Nanak for the preaching of Sikhism. He also established hundreds of new Centres of Sikhism (Sikh religious Institutions) and thus strengthened the base of Sikhism. The period of his Guruship was the most crucial one. The Sikh community had moved from having a founder to a succession of Gurus and the infrastructure of Sikh society was strengthened and crystallised – from being an infant, Sikhism had moved to being a young child and ready to face the dangers that were around. During this phase, Sikhism established its own separate spiritual path.
Guru Angad, following the example set by Guru Nanak, nominated Sri Amar Das as his successor (the Third Nanak) before his death. He presented all the holy scripts, including those he received from Guru Nanak, to Guru Amar Das. He died on 29 March 1552 at the age of forty-eight. It is said that he started to build a new town, at Goindwal near Khadur Sahib and Guru Amar Das Sahib was appointed to supervise its construction. It is also said that Humayun, when defeated by Sher Shah Suri, came to obtain the blessings of Guru Angad in regaining the throne of Delhi.
Guru Amar Das became the third Sikh Guru in 1552 at the age of 73. Goindwal became an important centre for Sikhism during the Guruship of Guru Amar Das. He continued to preach the principle of equality for women, the prohibition of Sati, and the practice of Langar. In 1567, Emperor Akbar sat with the ordinary and poor people of Punjab to eat in the Langar. Guru Amar Das also trained 140 apostles, of whom 52 were women, to manage the rapid expansion of the religion. Before he died in 1574 aged 95, he appointed his son-in-law Jetha as the fourth Sikh Guru.
It is recorded that before becoming a Sikh, Bhai Amar Das, as he was known at the time, was a very religious Vaishnavite Hindu who spent most of his life performing all of the ritual pilgrimages and fasts of a devout Hindu. One day, Bhai Amar Das heard some hymns of Guru Nanak being sung by Bibi Amro Ji, the daughter of Guru Angad, the second Sikh Guru. Bibi Amro was married to Bhai Jasso, the son of Bhai Sahib's brother, Bhai Manak Chand. Bhai Sahib was so impressed and moved by these Shabads that he immediately decided to go to see Guru Angad at Khadur Sahib. It is recorded that this event took place when Bhai Sahib was 61 years old.
In 1535, upon meeting Guru Angad, Bhai Sahib was so touched by the Guru's message that he became a devout Sikh and adopted Guru Angad as his spiritual guide. Soon he became involved in Sewa (service) to the Guru and the community and began to live at Khadur Sahib, where he used to rise early in the morning and bring water from the Beas River for the Guru's bath, wash the Guru's clothes, and fetch wood from the jungle for 'Guru ka Langar'. He was so dedicated to Sewa and to the Guru, and had so completely extinguished his pride and lost himself in this commitment, that he was considered an old man with no interest in life; he was dubbed Amru, and generally forsaken.
However, as a result of Bhai Sahib's commitment to Sikhi principles, dedicated service and devotion to the Sikh cause, Guru Angad appointed him as the third Nanak in March 1552 at the age of 73. He established his headquarters at the newly built town of Goindwal, which Guru Angad had founded.
Soon large numbers of Sikhs started flocking to Goindwal to see the new Guru. Here, Guru Amar Das propagated the Sikh faith in a vigorous, systematic and planned manner. He divided the Sikh Sangat area into 22 preaching centres or Manjis, each under the charge of a devout Sikh. He himself visited and sent Sikh missionaries to different parts of India to spread Sikhism.
Guru Amar Das was impressed with Bhai Gurdas' thorough knowledge of Hindi and Sanskrit and the Hindu scriptures. Following the tradition of sending out Masands across the country, Guru Amar Das deputed Bhai Gurdas to Agra to spread the gospel of Sikhism. Before leaving, Guru Amar Das prescribed the following routine for Sikhs:
"He who calls himself a Sikh of the True Guru, He must get up in the morning and say his prayers. He must rise in the early hours and bathe in the holy tank. He must meditate on God as advised by the Guru. And rid him of the afflictions of sins and evil. As the day dawns, he should recite scriptures, and repeat God's name in every activity. He to whom the Guru takes kindly is shown the path. Nanak! I seek the dust of the feet of the Guru's Sikh who himself remembers God and makes others remember Him." (Gauri)
Guru Ji strengthened the tradition of 'Guru ka Langar' and made it compulsory for any visitor to the Guru to eat first, saying 'Pehle Pangat Phir Sangat' (first visit the Langar, then go to the Guru). Once the emperor Akbar came to see Guru Sahib, and he had to eat the coarse rice in the Langar before he could have an interview with him. He was so impressed with this system that he expressed his desire to grant some royal property for 'Guru ka Langar', but Guru Sahib declined it with respect.
He introduced new birth, marriage, and death ceremonies. Thus he raised the status of women and protected the rights of female infants who were killed without question as they were deemed to have no status. These teachings met with stiff resistance from the Orthodox Hindus.
Guru Amar Das not only preached the equality of people irrespective of their caste but he also fostered the idea of women's equality. He preached strongly against the practice of Sati (a Hindu wife burning on her husband's funeral pyre). Guru Amar Das also disapproved of a young widow remaining unmarried for the rest of her life.
Guru Amar Das constructed a baoli (stepwell) with eighty-four steps at Goindwal Sahib and made it a Sikh pilgrimage centre for the first time in the history of Sikhism. He reproduced more copies of the hymns of Guru Nanak and Guru Angad. He also composed 869 verses (stanzas), including the Anand Sahib (according to some chronicles the number was 709); later Guru Arjan (the fifth Guru) made all these Shabads part of the Guru Granth Sahib.
When the time came for the Guru's younger daughter Bibi Bhani to marry, he selected a pious and diligent young follower of his called Jetha from Lahore. Jetha had come to visit the Guru with a party of pilgrims from Lahore and had become so enchanted by the Guru's teachings that he had decided to settle in Goindwal. Here he earned a living selling wheat and would regularly attend the services of Guru Amar Das in his spare time.
Guru Amar Das did not consider any of his sons fit for the Guruship and chose instead his son-in-law, (Guru) Ram Das, to succeed him. Guru Amar Das died at the age of 95 on 1 September 1574 at Goindwal in Amritsar district, after passing the responsibility of the Guruship to the fourth Nanak, Guru Ram Das.
Guru Ram Das (Punjabi: ਗੁਰੂ ਰਾਮ ਦਾਸ) (Born in Lahore, Punjab, Pakistan on 24 September 1534 – 1 September 1581, Amritsar, Punjab, India) was the fourth of the Ten Gurus of Sikhism, and he became Guru on 30 August 1574, following in the footsteps of Guru Amar Das. He was born in Lahore to a Sodhi family of the Khatri clan. His father was Hari Das and mother Anup Devi, and his name was Jetha, meaning 'first born'. His wife was Bibi Bhani, the younger daughter of Guru Amar Das, the third guru of the Sikhs. They had three sons: Prithi Chand, Mahadev and Arjan Dev.
As a Guru one of his main contributions to Sikhism was organising the structure of Sikh society. Additionally, he was the author of Laava, the hymns of the Marriage Rites, the designer of the Harmandir Sahib, and the planner and creator of the township of Ramdaspur (later Amritsar).
A hymn by Guru Ram Das from Ang 305 of the Guru Granth Sahib: "One who calls himself a Sikh of the True Guru shall get up early morning and meditate on the Lord's Name. Make effort regularly to cleanse, bathe and dip in the ambrosial pool. Upon Guru's instructions, chant Har, Har singing which, all misdeeds, sins and pains shall go away." Guru Ram Das nominated Guru Arjan, his youngest son, as the next Guru of the Sikhs.
In 1581, Guru Arjan, the youngest son of the fourth Guru, became the fifth Guru of the Sikhs. In addition to being responsible for building the Golden Temple, he prepared the Sikh sacred text, personally contributing some 2,000-plus hymns to the Gurū Granth Sāhib.
In 1604 he installed the Ādi Granth for the first time as the Holy Book of the Sikhs. In 1606, for refusing to make changes to the Gurū Granth Sāhib, he was martyred, executed by the Mughal rulers of the time.
Guru Har Gobind became the sixth Guru of the Sikhs. He carried two swords, one for spiritual and one for temporal (worldly) reasons. From this point onward, the Sikhs became a military force and always maintained a trained fighting force to defend their independence.
Guru Hargobind fixed two Nishan Sahibs at Akal Bunga in front of the Akal Takht. One flag is towards the Harmandir Sahib and the other, shorter flag is towards the Akal Takht. The first represents spiritual authority while the latter represents temporal power, signifying that temporal power should remain under the reins of spiritual authority.
Guru Har Rai (Punjabi: ਗੁਰੂ ਹਰਿ ਰਾਇ) (26 February 1630 – 6 October 1661) was the seventh of the ten Gurus of Sikhism, becoming Guru on 8 March 1644, following in the footsteps of his grandfather, Guru Har Gobind, who was the sixth guru. Before he died, he nominated Guru Har Krishan, his youngest son, as the next Guru of the Sikhs.
As a very young child, he was disturbed by the suffering of a flower damaged by his robe in passing. Though such feelings are common in children, Guru Har Rai would throughout his life be noted for his compassion for life and living things. His grandfather, who was famed as an avid hunter, is said to have saved the Mughal Emperor Jahangir's life during a tiger's attack. Guru Har Rai continued the hunting tradition of his grandfather, but he would allow no animals to be killed on his grand Shikars; the Guru instead captured the animals and added them to his zoo. He made several tours to the Malwa and Doaba regions of Punjab.
His son, Ram Rai, seeking to assuage Aurangzeb's concerns over one line in Guru Nanak's verse (Mitti Mussalmam ki pede pai kumhar), suggested that the word Mussalmam was a mistake on the copyist's part, thereby distorting the Bani. The Guru refused to meet with him again. The Guru is believed to have said, "Ram Rai, you have disobeyed my order and sinned. I will never see you again on account of your infidelity." It was also reported to the Guru that Ram Rai had worked miracles in the Mughal's court against his father's direct instructions. Sikhs are constrained by their Gurus not to believe in magic, myth or miracles. Just before his death at the age of 31, Guru Har Rai passed the Gaddi of Nanak on to his younger son, the five-year-old Guru Har Krishan.
Guru Har Rai was the son of Baba Gurdita and Mata Nihal Kaur (also known as Mata Ananti Ji). Baba Gurdita was the son of the sixth Guru, Guru Hargobind. Guru Har Rai married Mata Kishan Kaur (sometimes also referred to as Sulakhni), daughter of Sri Daya Ram of Anoopshahr (Bulandshahr) in Uttar Pradesh on Har Sudi 3, Samvat 1697. Guru Har Rai had two sons: Baba Ram Rai and Sri Har Krishan.
Although Guru Har Rai was a man of peace, he never disbanded the armed Sikh Warriors (Saint Soldiers), who earlier were maintained by his grandfather, Guru Hargobind. He always boosted the military spirit of the Sikhs, but he never himself indulged in any direct political and armed controversy with the contemporary Mughal Empire. Once, Dara Shikoh (the eldest son of emperor Shah Jahan), came to Guru Har Rai asking for help in the war of succession with his brother, the murderous Aurangzeb. The Guru had promised his grandfather to use the Sikh Cavalry only in defense. Nevertheless, he helped him to escape safely from the bloody hands of Aurangzeb's armed forces by having his Sikh warriors hide all the ferry boats at the river crossing used by Dara Shikoh in his escape.
Guru Har Krishan (7 July 1656 – 30 March 1664; Punjabi: ਗੁਰੂ ਹਰਿ ਕ੍ਰਿਸ਼ਨ), born in Kirat Pur, Ropar, was the eighth of the Ten Gurus of Sikhism, becoming the Guru on 7 October 1661, following in the footsteps of his father, Guru Har Rai. Before Har Krishan died of complications of smallpox, he nominated his granduncle, Guru Teg Bahadur, as the next Guru of the Sikhs. The following is a summary of the main highlights of his short life:
"Sri Guru Harkrishan Ji was the epitome of sensibility, generosity, and courage. There is a famous incident from his early age. Once on the way to Delhi from Punjab he met an arrogant Brahmin Pundit called Lal Chand in Panjokhara town. The Pundit asked him to recite Salokas from the Geeta since his name was similar to that of Lord Krishna. Guru Ji invited a mute person called Chhajju Mehra and placed his stick on his head. He immediately started interpreting Salokas from the Geeta. Everybody around was dumbstruck. Lal Chand's arrogance too was shattered and he asked for Guru Ji's forgiveness."
When Har Krishan stayed in Delhi there was a smallpox epidemic and many people were dying. According to Sikh history, at Har Krishan's blessing the lake at Bangla Sahib provided a cure for thousands. Gurdwara Bangla Sahib was constructed in the Guru's memory; this is where he stayed during his visit to Delhi. Gurdwara Bala Sahib was built in south Delhi beside the bank of the river Yamuna, where Har Krishan was cremated at the age of about 7 years and 8 months. Guru Har Krishan was the youngest Guru, at only 7 years of age. He did not make any contributions to Gurbani.
Guru Tegh Bahadur was the ninth of the Sikh Gurus. The eighth Sikh Guru, Guru Har Krishan, nominated him, his granduncle, as the next Guru before he died. Guru Tegh Bahadur was actually the son of the sixth Sikh Guru, Guru Hargobind.
He sacrificed himself to protect Hindus. Aurangzeb was forcibly converting Hindus to Islam. Hindus from Kashmir came to Guru Teg Bahadur for protection and requested assistance. The Guru asked them to tell Aurangzeb that if he could convert Guru Teg Bahadur to Islam, then they would all become Muslim. He was asked by Aurangzeb, the Mughal emperor, under coercion by Naqshbandi Islamists, to convert to Islam or to sacrifice himself. The exact place where he died is in front of the Red Fort in Delhi (Lal Qila), and the gurdwara there is called Sisganj. This marked a turning point for Sikhism. His successor, Guru Gobind Singh, further militarised his followers.
Guru Gobind Singh was the tenth Guru of the Sikhs. He was born in 1666 at Patna (the capital of Bihar, India). In 1675, Pundits from Kashmir came to Anandpur Sahib pleading to Guru Teg Bahadur (the father of Guru Gobind Singh) about Aurangzeb forcing them to convert to Islam. Guru Teg Bahadur told them that the martyrdom of a great man was needed. His son, Guru Gobind Singh, said to his father, "Who could be greater than you?" Guru Teg Bahadur told the Pundits to tell Aurangzeb's men that if Guru Teg Bahadur would become Muslim, they all would. Guru Teg Bahadur was then killed in Delhi, but before that he appointed Guru Gobind Singh as the tenth Guru at the age of 9. After becoming Guru he commanded the Sikhs to be armed. He fought many battles with Aurangzeb and some other kings of that time, and always won.
The creation of the Khalsa was initiated by Guru Gobind Singh, the tenth Sikh Guru.
In 1699 he created the Khalsa Panth by giving Amrit to the Sikhs. In 1704 he fought a great battle with the collective forces of Aurangzeb, Wazir Khan (the chief of Sarhind), and other kings. He left Anandpur and went to Chamkaur with only 40 Sikhs. There he fought the Battle of Chamkaur with those 40 Sikhs, vastly outnumbered by the Mughal soldiers. His two elder sons (aged 17 and 15) were killed there. Wazir Khan killed the other two (aged 9 and 6). Guru Ji sent Aurangzeb the Zafarnamah (Notification of Victory). Then he went to Nanded (Maharashtra, India). From there he appointed Baba Gurbakhash Singh, also known as Baba Banda Singh Bahadur, as his general and sent him to Punjab.
On the evening of the day when Baba Gurbakhash Singh left for Punjab, Guru Gobind Singh was visited by two Muslim soldiers. One of them was commissioned by Wazir Khan, Subedar of Sirhind, to assassinate Guru Gobind Singh. One of the assailants, Bashal Beg, kept a vigil outside the Guru's tent while Jamshed Khan, a hired assassin, stabbed the Guru twice. Khan was killed in one stroke by the Guru, while those outside, alerted by the tumult, killed Beg. Although the wound was sewn up the following day, the Guru died in Nanded, Maharashtra, India in 1708.
Shortly before passing away, Guru Gobind Singh ordered that the Guru Granth Sahib (the Sikh Holy Scripture) would be the ultimate spiritual authority for the Sikhs and that temporal authority would be vested in the Khalsa Panth, the Sikh Nation. The first Sikh Holy Scripture was compiled and edited by the fifth Guru, Guru Arjan, in AD 1604, although some of the earlier Gurus are also known to have documented their revelations. This is one of the few scriptures in the world compiled by the founders of a faith during their own lifetimes. The Guru Granth Sahib is unusual among sacred texts in that it is written in the Gurmukhi script but contains many languages, including Punjabi, Hindustani, Sanskrit, Bhojpuri, Assamese and Persian. Sikhs consider the Guru Granth Sahib the last, perpetual living Guru.
Banda Singh Bahadur was chosen to lead the Sikhs by Guru Gobind Singh. He was successful in setting up a Sikh Empire that spread from Uttar Pradesh to Punjab. He fought the tyranny of the Islamist Mughal state and gave the common people of Punjab courage, equality, and rights. On his way to Punjab, Banda Singh punished robbers and other criminal elements, making him popular with the people. Banda Singh inspired the minds of the non-Muslim people, who came to look upon the Sikhs as defenders of their faith and country. Banda Singh possessed no army, but Guru Gobind Singh, in a Hukamnama, had called on the people of Punjab to take up arms under Banda Singh's leadership to overthrow and destroy the oppressive Mughal rulers; oppressed Muslims and Hindus also joined him in the popular revolt against the tyrants.
Banda Singh Bahadur camped in Khar Khoda, near Sonipat; from there he took over Sonipat and Kaithal. In 1709 Banda Singh captured the Mughal city of Samana with the help of revolting oppressed Hindus and common folk, killing about 10,000 Mohammedans. Samana was famous for minting coins, and with this treasury the Sikhs became financially stable. The Sikhs soon took over Mustafabad and Sadhora (near Jagadhri). The Sikhs then captured the Cis-Sutlej areas of Punjab, including Ghurham, Kapori, Banoor, Malerkotla, and Nahan. The Sikhs captured Sirhind in 1710 and killed the Governor of Sirhind, Wazir Khan, who was responsible for the death of the two youngest sons of Guru Gobind Singh at Sirhind. Becoming the ruler of Sirhind, Banda Singh ordered that ownership of the land be given to the farmers so that they could live in dignity and self-respect. Petty officials were also satisfied with the change. Dindar Khan, an official of a nearby village, took Amrit and became Dindar Singh, and the newspaper writer of Sirhind, Mir Nasir-ud-din, became Mir Nasir Singh.
Banda Singh developed the village of Mukhlisgarh and made it his capital. He then renamed the city Lohgarh (fortress of steel), where he issued his own coinage. The coin described Lohgarh: "Struck in the City of Peace, illustrating the beauty of civic life, and the ornament of the blessed throne." He briefly established a state in Punjab for half a year. Banda Singh sent Sikhs to Uttar Pradesh, and the Sikhs took over Saharanpur, Jalalabad, and other areas nearby, bringing relief to the repressed population. In the regions of Jalandhar and Amritsar, the Sikhs started fighting for the rights of the people. They used their newly established power to remove corrupt officials and replace them with honest ones.
Banda Singh is known to have abolished or halted the Zamindari system in the time he was active, and to have given the farmers proprietorship of their own land. It seems that all classes of government officers were addicted to extortion and corruption, and the whole system of regulation and order was subverted. Local tradition recalls that the people from the neighborhood of Sadaura came to Banda Singh complaining of the iniquitous practices of their landlords. Banda Singh ordered Baj Singh to open fire on them. The people were astonished at this strange reply to their representation and asked him what he meant. He told them that they deserved no better treatment if, being thousands in number, they still allowed themselves to be cowed down by a handful of Zamindars.
The rule of the Sikhs over the entire Punjab east of Lahore obstructed communication between Delhi and Lahore, the capital of Punjab, and this worried the Mughal Emperor Bahadur Shah. He gave up his plan to subdue rebels in Rajasthan and marched towards Punjab. The entire Imperial force was organised to defeat and kill Banda Singh. All the generals were directed to join the Emperor's army. To ensure that there were no Sikh agents in the army camps, an order was issued on 29 August 1710 requiring all Hindus to shave off their beards.
Banda Singh was in Uttar Pradesh when the Mughal army, under the orders of Munim Khan, marched to Sirhind; before Banda Singh's return, they had already taken Sirhind and the areas around it. The Sikhs therefore moved to Lohgarh for their final battle. The Sikhs defeated the army, but reinforcements were called in and laid siege to the fort with 60,000 troops. Gulab Singh dressed himself in the garments of Banda Singh and seated himself in his place. Banda Singh left the fort at night and went to a secret place in the hills and the Chamba forests. The failure of the army to kill or catch Banda Singh shocked Emperor Bahadur Shah, and on 10 December 1710 he ordered that wherever a Sikh was found, he should be murdered. The Emperor became mentally disturbed and died on 18 February 1712.
Banda Singh Bahadur wrote Hukamnamas to the Sikhs telling them to reorganise themselves and join him at once. In 1711 the Sikhs gathered near Kiratpur Sahib and defeated Raja Bhim Chand, who had been responsible for organising all the Hill Rajas against Guru Gobind Singh and instigating battles with him. After Bhim Chand's death the other Hill Rajas accepted their subordinate status and paid revenues to Banda Singh. While Bahadur Shah's four sons were fighting one another for the throne of the Mughal Emperor, Banda Singh Bahadur recaptured Sadhura and Lohgarh. Farrukh Siyar, the next Mughal Emperor, appointed Abdus Samad Khan as the governor of Lahore and Zakaria Khan, Abdus Samad Khan's son, as the Faujdar of Jammu. In 1713 the Sikhs left Lohgarh and Sadhura and went to the remote hills of Jammu, where they built Dera Baba Banda Singh. During this time Sikhs were being hunted down, especially by Pathans in the Gurdaspur region. Banda Singh came out and captured Kalanaur and Batala, which prompted Farrukh Siyar to order Mughal and Hindu officials and chiefs to proceed with their troops to Lahore to reinforce his army.
In March 1715, Banda Singh Bahadur was in the village of Gurdas Nangal, Gurdaspur, Punjab, when the army under Abdus Samad Khan, the Mughal governor of Lahore, laid siege to the Sikh forces. The Sikhs fought and defended the small fort for eight months. On 7 December 1715 Banda Singh's starving soldiers were captured.
On 7 December 1715 Banda Singh Bahadur was captured at the Gurdas Nangal fort and put in an iron cage, and the remaining Sikhs were captured and chained. The Sikhs were brought to Delhi in a procession, with the 780 Sikh prisoners, 2,000 Sikh heads hung on spears, and 700 cartloads of heads of slaughtered Sikhs used to terrorise the population. They were put in the Delhi fort and pressured to give up their faith and become Muslims. On their firm refusal, all of them were ordered to be executed. Every day, 100 Sikhs were brought out of the fort and murdered in public, which went on for approximately seven days. The Mussalmans could hardly contain their joy, while the Sikhs showed no sign of dejection or humiliation; instead they sang their sacred hymns, and none feared death or gave up their faith. After three months of confinement, on 9 June 1716, Banda Singh's eyes were gouged out, his limbs were severed, his skin was removed, and then he was killed.
In 1716 Farrukh Siyar, the Mughal Emperor, ordered that all Sikhs be converted to Islam or be killed, in an attempt to destroy the power of the Sikhs and to exterminate the community as a whole. A reward was offered for the head of every Sikh. For a time it appeared as if the boast of Farrukh Siyar to wipe out the name of the Sikhs from the land was going to be fulfilled. Hundreds of Sikhs were brought in from their villages and executed, and thousands who had joined merely for the sake of booty cut off their hair and went back to the Hindu fold again. Besides these, there were some Sikhs who had not yet received the baptism of Guru Gobind Singh, nor did they feel encouraged to do so, as the adoption of the outward symbols meant courting death.
After a few years Abdus Samad Khan, the Governor of Lahore, Punjab, and other Mughal officers began to pursue the Sikhs less, and thus the Sikhs came back to the villages and started going to the Gurdwaras again, which had been managed by Udasis while the Sikhs were in hiding. The Sikhs celebrated Bandhi Chorh Diwas and Vaisakhi at Harmandir Sahib. The Khalsa had been split into two major factions, the Bandia Khalsa and the Tat Khalsa, and tensions were brewing between the two.
Under the authority of Mata Sundari, Bhai Mani Singh became the Jathedar of the Harmandir Sahib and a leader of the Sikhs, and the Bandia Khalsa and the Tat Khalsa were reconciled by Bhai Mani Singh into the Tat Khalsa; from that day the Bandeis assumed a quieter role and practically disappeared from the pages of history. A police post was established at Amritsar to keep a check on the Sikhs. Mani Singh was later executed by being cut apart joint by joint.
Abdus Samad Khan was transferred to Multan in 1726, and his more energetic son, Zakaria Khan, also known as Khan Bahadur, was appointed to take his place as the governor of Lahore. In 1726, Tarra Singh of Wan, a renowned Sikh leader, and his 26 men were killed after Governor Zakaria Khan sent 2,200 horsemen, 40 zamburaks, 5 elephants and 4 cannons under the command of his deputy, Momim Khan. News of the murder of Tarra Singh spread among the Sikhs across Punjab. Finding no Sikhs around, the government falsely announced in each village, with the beat of a drum, that all Sikhs had been eliminated, but the common people knew that this was not the case. The Sikhs did not face the army directly, because of their small numbers, but adopted dhai phut (hit and run) guerrilla warfare tactics.
Under the leadership of Nawab Kapoor Singh and Jathedar Darbara Singh, the Sikhs, in an attempt to weaken their enemy, looted many of the Mughal caravans and supplies, and for some years no revenue money could reach the government treasury. When government forces tried to punish the outlaws, they were unable to find them, as the Sikhs did not live in houses or forts but withdrew to their rendezvous in forests or other places difficult to access.
Nawab Kapur Singh was born in 1697 in a village near Sheikhupura, Punjab, Pakistan. He was a volunteer at Darbar Sahib, Amritsar, cleaning the shoes of the sangat who came to pay their respects and working in the kitchen to feed the Sangat. He was given a jagir in 1733 when the Governor of Punjab offered the Sikhs a Nawabship (ownership of an estate) and a valuable royal robe; the Khalsa accepted it all in the name of Kapur Singh. Henceforth, he became known as Nawab Kapur Singh. In 1748 he would organise the early Sikh Misls into the Dal Khalsa (Budda Dal and Tarna Dal).
Nawab Kapur Singh's father was Chaudhri Daleep Singh. As a boy Kapur Singh memorised the Gurbani Nitnem and was taught the arts of war. He was attracted to the Khalsa Panth after the execution of Bhai Tara Singh, of the village of Van, in 1726.
The Khalsa held a meeting to plan a response to the state repression against the people of the region. They decided to take possession of government money and weapons in order to weaken the administration and to equip themselves to face the everyday attacks. Kapur Singh was assigned to plan and execute these projects.
Information was obtained that money was being transported from Multan to the Lahore treasury; the Khalsa looted the money and took the arms and horses of the guards. They then took over one lakh rupees from the Kasur estate treasury on its way from Kasur to Lahore. Next they captured a caravan from the Afghanistan region, seizing numerous arms and horses.
In the jungle of Kahna Kachha, the Khalsa seized a number of vilayati (superior Central Asian) horses from Murtaza Khan, who was taking them to Delhi. Some additional war supplies were being taken from Afghanistan to Delhi, and Kapur Singh organised an attack to capture them. In another attack the Khalsa recovered gold and silver that was to be carried from Peshawar to Delhi by Jaffar Khan, a royal official.
The Mughal rulers and commanders, along with the Delhi government, lost all hope of defeating the Sikhs through repression and decided on another strategy. Zakaria Khan, the Governor of Lahore, went to Delhi, where it was decided to befriend the Sikhs and rule in cooperation with them, and in 1733 the Delhi rulers withdrew all orders against the Khalsa. The Sikhs were now permitted to own land and to move freely without any state violence against them. To co-operate with the Khalsa Panth and win the goodwill of the people, the government sent an offer of an estate and a Nawabship through a famous Lahore Sikh, Subeg Singh. The Khalsa wanted to rule freely and not to be placed in a subordinate position; however, this offer was eventually accepted, and the title was bestowed on Kapur Singh after it was sanctified by the touch of five Khalsas' feet. Thus Kapur Singh became Nawab Kapur Singh. Kapur Singh guided the Sikhs in strengthening themselves and preaching Gurmat to the people. He knew that peace would be short-lived. He encouraged people to freely visit their Gurdwaras and meet their relatives in the villages.
The Khalsa reorganised themselves into two divisions: the younger generation would be part of the Taruna Dal, which provided the main fighting force, while the Sikhs above the age of forty would be part of the Budha Dal, which took responsibility for the management of Gurdwaras and Gurmat preaching. The Budha Dal was also responsible for keeping track of the movements of government forces, planning defense strategies, and providing a reserve fighting force for the Taruna Dal.
The following measures were established by Nawab Kapur Singh:
The Taruna Dal quickly increased to more than 12,000 recruits, and it soon became difficult to manage the housing and feeding of such a large number of people in one place. It was then decided to have five divisions of the Dal, each drawing rations from the central stocks and cooking its own langar. These five divisions were stationed around the five sarovars (sacred pools) around Amritsar: Ramsar, Bibeksar, Lachmansar, Kaulsar and Santokhsar. The divisions later became known as Misls, and their number increased to eleven. Each took over and ruled a different region of the Punjab. Collectively they called themselves the Sarbat Khalsa.
As the leader of the Khalsa, Nawab Kapur Singh was given an additional responsibility by Mata Sundari, the wife of Guru Gobind Singh: she sent him the young Jassa Singh Ahluwalia, telling him that Ahluwalia was like a son to her and that the Nawab should raise him as an ideal Sikh. Under the guidance of Kapur Singh, Ahluwalia was given a good education in Gurbani and thorough training in managing Sikh affairs. Later, Jassa Singh Ahluwalia would play an important role in leading the Sikhs to self-rule.
In 1735, the rulers of Lahore attacked and repossessed the jagir (estate) given to the Sikhs only two years before. In reaction, Nawab Kapur Singh decided that the whole of Punjab should be taken over by the Sikhs. This decision was taken against heavy odds but was endorsed by the Khalsa, and all the Sikhs assured him of their full cooperation in his endeavor for self-rule. Zakariya Khan Bahadur sent roaming squads to hunt and kill the Sikhs. Orders were issued to all administrators, down to village-level officials, to seek out Sikhs, murder them, get them arrested, or report their whereabouts to the government. One year's wages were offered to anyone who would murder a Sikh and deliver his head to the police station. Rewards were also promised to those who helped arrest Sikhs. Persons providing food or shelter to Sikhs, or helping them in any way, were severely punished.
This was the period when Sikhs were sawed into pieces, burnt alive, and had their heads crushed with hammers, and young children were pierced with spears before their mothers' eyes. To keep their morale high, the Sikhs developed their own high-sounding terminology and slogans. For example, tree leaves boiled for food were called a 'green dish'; parched chickpeas were called 'almonds'; the Babul tree was a 'rose'; a blind man was a 'brave man'; and getting on the back of a buffalo was 'riding an elephant'.
The army pursued the Sikhs hiding near the hills and forced them to cross the rivers and seek safety in the Malwa tract. When Kapur Singh reached Patiala he met Maharaja Baba Ala Singh, who then took Amrit, and Kapur Singh helped him extend the boundaries of his state. In 1736 the Khalsa attacked Sirhind, where the two younger sons of Guru Gobind Singh had been killed. The Khalsa took over the city and its treasury, established Gurdwaras at the historical places, and withdrew. While they were near Amritsar, the government of Lahore sent troops to attack the Sikhs. Kapur Singh entrusted the treasury to Jassa Singh Ahluwalia, while keeping a sufficient number of Sikhs with him to keep the army engaged. When Jassa Singh had reached a considerable distance, the Khalsa safely retreated to Tarn Taran Sahib. Kapur Singh sent messages to the Taruna Dal asking them to join the fight. After a day of fighting, Kapur Singh launched a surprise attack from the trenches dug by the Khalsa on the commanding posts, killing three generals along with many Mughal officers. The Mughal army then retreated to Lahore.
Zakaria Khan called his advisers to plan another strategy to deal with the Sikhs. It was suggested that the Sikhs should not be allowed to visit the Amrit Sarovar, which was believed to be the fountain of their lives and the source of their strength. Strong contingents were posted around the city and all entries to Harmandir Sahib were checked. The Sikhs, however, risking their lives, continued to pay their respects to the holy place and take a dip in the Sarovar (sacred pool) in the dark of the night. When Kapur Singh went to Amritsar he had a fight with Qadi Abdul Rehman, who had declared that the Sikhs, the so-called lions, would not dare to come to Amritsar and face him. In the ensuing fight Abdul Rehman was killed. When his son tried to save him, he too lost his life. In 1738 Bhai Mani Singh was executed.
In 1739 Nader Shah of the Turkic Afsharid dynasty invaded and looted the treasury of the Indian subcontinent. Nader Shah killed more than 100,000 people in Delhi and carried off all of the gold and valuables. He added to his caravan hundreds of elephants and horses, along with thousands of young women and Indian artisans. When Kapur Singh came to know of this, he decided to warn Nader Shah that, even if the local rulers would not act, the Sikhs would protect the innocent Muslim and Hindu women from being sold as slaves. While the caravan was crossing the river Chenab, the Sikhs attacked its rear, freed many of the women and the artisans, and recovered part of the treasure. The Sikhs continued to harass Nader Shah and lighten him of his loot until he withdrew from the Punjab.
Massa Ranghar, the Mughal official, had taken control of Amritsar. While smoking and drinking in the Harmandir Sahib, he watched the dances of nautch girls. The Sikhs who had moved to Bikaner, a desert region, for safety were outraged to hear of this desecration. In 1740 Sukha Singh and Mehtab Singh went to Amritsar disguised as revenue collectors. They tied their horses outside, walked straight into the Harmandir Sahib, cut off Massa Ranghar's head, and took it with them. It was a lesson to the rulers that no tyrant would go unpunished.
Abdus Samad Khan, a senior Mughal royal commander, was sent from Delhi to subdue the Sikhs. Kapur Singh learned of this scheme and planned his own strategy accordingly. As soon as the army was sent out to hunt for the Sikhs, a Jatha of commandos disguised as messengers of Khan went to the armory. The commander there was told that Abdus Samad Khan was holding the Sikhs under siege and wanted him with all his force to go and arrest them. The few guards left behind were then overpowered by the Sikhs, and all the arms and ammunition were looted and brought to the Sikh camp.
Abdus Samad Khan sent many roaming squads to search for and kill Sikhs. He was responsible for the torture and murder of Bhai Mani Singh, the head Granthi of Harimander Sahib. Samad Khan was afraid that Sikhs would kill him so he remained far behind the fighting lines. Kapur Singh had a plan to get him. During the battle Kapur Singh ordered his men to retreat drawing the fighting army with them. He then wheeled around and fell upon the rear of the army. Samad Khan and his guards were lying dead on the field within hours. The Punjab governor also took extra precautions for safety against the Sikhs. He started to live in the fort. He would not even dare to visit the mosque outside the fort for prayers.
On the request of the Budha Dal members, Kapur Singh visited Patiala. The sons of Sardar Ala Singh, the founder and Maharajah of the Patiala state, gave him a royal welcome. Kapur Singh subdued all local administrators around Delhi who were not behaving well towards their people.
Zakaria Khan died in 1745. His successor tightened the security around Amritsar. Kapur Singh planned to break the siege of Amritsar. Jassa Singh Ahluwalia was made the commander of the attacking Sikh forces. In 1748, the Sikhs attacked. Jassa Singh Ahluwalia, with his commandos behind him, dashed to the army commander and cut him into two with his sword. The commander's nephew was also killed.
The Sikhs built their first fort, Ram Rauni, at Amritsar in 1748. In December 1748, Governor Mir Mannu had to take his forces outside of Lahore to stop the advance of Ahmad Shah Abdali. The Sikhs quickly overpowered the police defending the station in Lahore, confiscated all of their weapons, and released all the prisoners. Nawab Kapur Singh told the sheriff to inform the Governor that the sheriff of God, the True Emperor, had come and done what he was commanded to do. Before the policemen could report the matter to the authorities, or the army could be called in, the Khalsa were already riding their horses back to the forest. Nawab Kapur Singh died in 1753.
Jassa Singh Ahluwalia was born in 1718. His father, Badar Singh, died when Ahluwalia was only four years old. His mother took him to Mata Sundari, the wife of Guru Gobind Singh, when he was young. Mata Sundari was impressed by his melodious singing of hymns and kept Ahluwalia near her. Later, Jassa Singh Ahluwalia was adopted by Nawab Kapoor Singh, then the leader of the Sikh nation. Ahluwalia displayed all the qualities required of a Sikh leader: he would sing Asa di Var in the morning, which was appreciated by all of the Dal Khalsa, and he kept busy doing seva (selfless service). He became very popular with the Sikhs. He used to tie his turban in the Mughal fashion, as he had grown up in Delhi. Ahluwalia learned horseback riding and swordsmanship from expert teachers.
In 1748 Jassa Singh Ahluwalia became the supreme commander of all the Misls and was honored with the title of Sultan-ul-Kaum (King of the Nation). He was the head of the Ahluwalia Misl and, after Nawab Kapoor Singh, became the leader of all the Misls, jointly called the Dal Khalsa. He played a major role in leading the Khalsa to self-rule in Punjab. In 1761 the Dal Khalsa, under the leadership of Ahluwalia, took over Lahore, the capital of Punjab, for the first time. They were the masters of Lahore for a few months and minted their own Nanakshahi rupee coin in the name of 'Guru Nanak – Guru Gobind Singh'.
In 1746 about seven thousand Sikhs were killed and three thousand to fifteen thousand Sikhs were taken prisoner on the orders of the Mughal authorities, when Zakaria Khan, the Governor of Lahore, and Lakhpat Rai, the Divan (Revenue Minister) of Zakaria Khan, sent military squads to kill the Sikhs.
Jaspat Rai, a jagirdar (landlord) of the Eminabad area and the brother of Lakhpat Rai, faced the Sikhs in battle. One of the Sikhs held the tail of his elephant, climbed onto its back from behind and, with a quick move, chopped off Jaspat Rai's head. Seeing their master killed, the troops fled. After this incident, Lakhpat Rai committed himself to destroying the Sikhs.
Through March–May 1746, a new wave of violence was unleashed against the Sikhs with all of the resources available to the Mughal government, and village officials were ordered to co-operate in the expedition. Zakaria Khan issued orders that no one was to give any help or shelter to Sikhs and warned of severe consequences for anyone disobeying. Local people were forcibly employed to search for the Sikhs so that they could be killed by the army. Lakhpat Rai ordered Sikh places of worship to be destroyed and their holy books burnt. Information was received that Jassa Singh Ahluwalia and a large body of Sikhs were camping in riverbeds in the Gurdaspur district (the Kahnuwan tract). Zakaria Khan managed to have 3,000 of these Sikhs captured and later had them beheaded in batches at Nakhas (the site of the horse market outside the Delhi gate). The Sikhs later raised a memorial shrine known as the Shahidganj (the treasure house of martyrs) at that place.
In 1747, Shah Nawaz took over as Governor of Lahore. To please the Sikhs, Lakhpat Rai was put in prison by the new Governor. Lakhpat Rai received severe punishment and was eventually killed by the Sikhs.
In 1747 Salabat Khan, a newly appointed Mughal commander, placed police around Amritsar and built observation posts to spot and kill Sikhs coming to the Amrit Sarovar for a holy dip. Jassa Singh Ahluwalia and Nawab Kapoor Singh led the Sikhs to Amritsar, and Salabat Khan was killed by Ahluwalia, and his nephew was killed by the arrow of Kapur Singh. The Sikhs restored Harmandir Sahib and celebrated their Diwali gathering there.
In 1748 all the Misls joined themselves under one command, and on the advice of the aging Jathedar Nawab Kapoor Singh, Jassa Singh Ahluwalia was made the supreme leader. They also decided to declare that the Punjab belonged to them and that they would be the sovereign rulers of their state. The Sikhs also built their first fort, called Ram Rauni, at Amritsar.
Adina Beg, the Faujdar (garrison commander) of Jalandhar, sent a message to the Dal Khalsa chief to cooperate with him in the civil administration, and he wanted a meeting to discuss the matter. This was seen as a trick to disarm the Sikhs and keep them under government control. Jassa Singh Ahluwalia replied that their meeting place would be the battleground and the discussion would be carried out by their swords. Beg attacked the Ram Rauni fort at Amritsar and besieged the Sikhs there. Dewan Kaura Mal advised the Governor to lift the siege and prepare the army to protect the state from the Durrani invader, Ahmed Shah Abdali. Kaura Mal had a part of the revenue of Patti area given to the Sikhs for the improvement and management of Harmandir Sahib, Amritsar.
Kaura Mal had to go to Multan to quell a rebellion there. He asked the Sikhs for help and they agreed to join him. After the victory at Multan, Kaura Mal came to pay his respects at the Darbar Sahib, offered 11,000 rupees, and built Gurdwara Bal-Leela. He also spent 3,000,000 rupees to build a sarovar (sacred pool) at Nankana Sahib, the birthplace of Guru Nanak Dev. In 1752, Kaura Mal was killed in a battle with Ahmed Shah Abdali and state policy towards the Sikhs quickly changed. Mir Mannu, the Governor, started hunting Sikhs again. He arrested many men and women, put them in prison and tortured them. In November 1753, when he went to kill the Sikhs hiding in the fields, they showered him with a hail of bullets; Mannu fell from his horse and the animal dragged him to death. The Sikhs immediately proceeded to Lahore, attacked the prison, got all the prisoners released and led them to safety in the forests.
In May 1757, Jahan Khan, the Afghan Durrani general of Ahmad Shah Abdali, attacked Amritsar with a huge army, and the Sikhs, because of their small numbers, decided to withdraw to the forests. Their fort, Ram Rauni, was demolished, Harmandir Sahib was also demolished, and the army desecrated the Sarovar (sacred pool) by filling it with debris and dead animals. Baba Deep Singh made history when he cut through 20,000 Durrani soldiers and reached Harmandir Sahib, Amritsar.
Adina Beg did not pay revenues to the government so the Governor dismissed him and appointed a new Faujdar (garrison commander) in his place. The army was sent to arrest him and this prompted Adina Beg to request Sikh help. The Sikhs took advantage of the situation and to weaken the government, they fought against the army. One of the commanders was killed by the Sikhs and the other deserted. Later, the Sikhs attacked Jalandhar and thus became the rulers of all the tracts between Sutlej and Beas rivers, called Doaba. Instead of roaming in the forests now they were ruling the cities.
The Sikhs started bringing more areas under their control and realising revenue from them. In 1758, joined by the Marathas, they conquered Lahore and arrested many of the Afghan soldiers who had been responsible for filling the Amrit Sarovar with debris a few months earlier. These soldiers were brought to Amritsar and made to clean the Sarovar (sacred pool). After the cleaning of the Sarovar, the soldiers were allowed to go home with a warning not to repeat their offence.
Ahmed Shah Abdali came again in October 1759 to loot Delhi. The Sikhs gave him a good fight and killed more than 2,000 of his soldiers. Instead of getting involved with the Sikhs, he made a rapid advance to Delhi. The Khalsa decided to collect revenues from Lahore to prove to the people that the Sikhs were the rulers of the state. The Governor of Lahore closed the gates of the city and did not come out to fight against them. The Sikhs laid siege to the city. After a week, the Governor agreed to pay 30,000 rupees to the Sikhs.
Ahmed Shah Abdali returned from Delhi in March 1761 with lots of gold and more than 2,000 young girls as prisoners who were to be sold to the Afghans in Kabul. When Abdali was crossing the river Beas, the Sikhs swiftly fell upon them. They freed the women prisoners and escorted them back to their homes. The Sikhs took over Lahore in September of 1761, after Abdali returned to Kabul.
The Khalsa minted their coins in the name of Guru Nanak Dev. The Sikhs, as rulers of the city, received full cooperation from the people. After becoming the Governor of Lahore, Punjab, Jassa Singh Ahluwalia was given the title of Sultan-ul-Kaum (King of the Nation).
In the winter of 1762, after losing his loot from Delhi to the Sikhs, the Durrani emperor Ahmad Shah Abdali brought a big, well-equipped army to finish the Sikhs forever. The Sikhs were near Ludhiana, on their way to the forests and dry areas of the south, and Abdali moved from Lahore very quickly and caught them totally unprepared. They had their women, children and old people with them. As many as 30,000 Sikhs are said to have been killed by the army. Jassa Singh Ahluwalia himself received about two dozen wounds. Fifty chariots were needed to transport the heads of the victims to Lahore. The Sikhs call this the Wadda Ghalughara (the Great Massacre).
Ahmad Shah Abdali, fearing Sikh retaliation, sent messages that he was willing to assign some areas to the Sikhs to be ruled by them. Jassa Singh Ahluwalia rejected his offers and told him that the Sikhs owned Punjab and did not recognise his authority at all. Abdali went to Amritsar and destroyed the Harmandir Sahib again, blowing it up with gunpowder in the hope of eliminating the source of "life" of the Sikhs. While Abdali was demolishing the Harmandir Sahib he was hit on the nose by a brick; later, in 1772, Abdali died of cancer, from the 'gangrenous ulcer' that consumed his nose. Within a few months the Sikhs attacked Sirhind and moved to Amritsar.
In 1764 the Sikhs shot dead Zain Khan Sirhindi, the Durrani Governor of Sirhind; the regions around Sirhind were divided among the Sikh Misldars, and the money recovered from the treasury was used to rebuild the Harmandir Sahib. Gurdwara Fatehgarh Sahib was built in Sirhind, at the location where the two younger sons of Guru Gobind Singh had been killed. The Sikhs started striking Govind Shahi coins, and in 1765 they took over Lahore again.
In 1767, when Ahmed Shah Abdali came again, he sent messages to the Sikhs asking for their cooperation. He offered them the governorship of Punjab, but the offer was rejected. Using repeated guerrilla attacks, the Sikhs took away his caravan of 1,000 camels loaded with fruits from Kabul. The Sikhs were again in control of the areas between the Sutlej and the Ravi. After Abdali's departure for Kabul, the Sikhs crossed the Sutlej and brought Sirhind and other areas right up to Delhi, indeed the entire Punjab, under their control.
Shah Alam II, the Mughal Emperor of Delhi, who was staying away in Allahabad, ordered his commander Zabita Khan to fight the Sikhs. Zabita made a truce with them instead and was then dismissed from Alam's service. Zabita Khan later became a Sikh and was given a new name, Dharam Singh.
Qadi Nur Mohammed, who came to Punjab with Ahmad Shah Abdali and was present during many Sikh battles, writes about the Sikhs:
"They do not kill a woman, a child, or a coward running away from the fight. They do not rob any person nor do they take away the ornaments of a woman, be she a queen or a slave girl. They commit no adultery, rather they respect the women of even their enemies. They always shun thieves and adulterers and in generosity they surpass Hatim."
Ahmad Shah Abdali, fearing the Sikhs, did not follow his normal route through Punjab while he returned to Kabul. Jassa Singh Ahluwalia did not add more areas to his Misl. Instead, whenever any wealth or villages came into the hands of the Sikhs he distributed them among the Jathedars of all the Misls. Ahluwalia passed his last years in Amritsar. With the resources available to him, he repaired all the buildings, improved the management of the Gurdwaras, and provided better civic facilities to the residents of Amritsar. He wanted every Sikh to take Amrit before joining the Dal Khalsa.
Ahluwalia died in 1783 and was cremated near Amritsar. There is a city block, Katra Ahluwalia, in Amritsar named after him. This block was assigned to his Misl in honor of his having stayed there and protected the city of Amritsar.
Jassa Singh Ramgarhia played an active role in Jassa Singh Ahluwalia's army. He founded the Ramgarhia Misl and played a major role in the battles of the Khalsa Panth, suffering about two dozen wounds during the Wadda Ghalughara. Jassa Singh Ramgarhia, the son of Giani Bhagwan Singh, was born in 1723. The family lived in the village of Ichogil, near Lahore. His grandfather took Amrit during the lifetime of Guru Gobind Singh and joined him in many battles; he later joined the forces of Banda Singh Bahadur. Ramgarhia was the oldest of five brothers. While still young he memorised the Nitnem hymns and took Amrit.
In 1733, Zakaria Khan, the Governor of Punjab, needed help to protect himself from the Iranian invader, Nader Shah. He offered the Sikhs an estate and a royal robe, and the Sikhs accepted it in the name of Kapur Singh. After the battle, Zakaria Khan gave five villages to the Sikhs in recognition of the bravery of Giani Bhagwan Singh, father of Ramgarhia, who died in the battle. The village of Vallah was awarded to Ramgarhia, and it was there that he gained the administrative experience required to become a Jathedar (leader) of the Sikhs. During this period of peace with the government, the Sikhs built their fort, Ram Rauni, in Amritsar. Zakaria died in 1745 and Mir Mannu became the Governor of Lahore.
Mir Mannu (Mu'in ul-Mulk), the Governor of Lahore, was worried about the increasing power of the Sikhs, so he broke the peace. Mir Mannu also ordered Adina Beg, the Faujdar (garrison commander) of the Jalandhar region, to begin killing the Sikhs. Adina Beg was a very shrewd politician and wanted to keep the Sikhs involved on his side. In order to develop good relations with them, he sent secret messages to the Sikhs living in different places. Jassa Singh Ramgarhia responded, agreed to cooperate with the Faujdar, and was made a commander. This position helped him develop good relations with Divan Kaura Mal at Lahore and to have important posts assigned to Sikhs in the Jalandhar division.
The Governor of Lahore ordered an attack on Ram Rauni to kill the Sikhs staying in that fort. Adina Beg was required to send his army as well, and Jassa Singh, being the commander of the Jalandhar forces, had to join the army sent against the Sikhs in the fort. After about four months of siege, the Sikhs in the fort ran short of food and supplies. Jassa Singh then contacted the Sikhs inside the fort and joined them. He used the good offices of Divan Kaura Mal to have the siege lifted. The fort was strengthened and renamed Ramgarh; Jassa Singh, having been designated the Jathedar of the fort, became popular as Ramgarhia.
Mir Mannu intensified his violence and oppression against the Sikhs. There were only 900 Sikhs when he surrounded the Ramgarh fort again. The Sikhs fought their way out bravely through thousands of army soldiers. The army demolished the fort. The hunt for and torture of the Sikhs continued until Mannu died in 1753. Mannu's death left Punjab without any effective Governor. It was again an opportune period for the Sikhs to organise themselves and gain strength. Jassa Singh Ramgarhia rebuilt the fort and took possession of some areas around Amritsar. The Sikhs took upon themselves the task of protecting the people in the villages from the invaders. The money they obtained from the people was called Rakhi (protection charges). The new Governor, Taimur, son of Ahmed Shah Abdali, despised the Sikhs. In 1757, he again forced the Sikhs to vacate the fort and move to their hiding places. The fort was demolished, Harmandir Sahib was blown up, and Amrit Sarovar was filled with debris. The Governor decided to replace Adina Beg. Beg asked the Sikhs for help and they both got a chance to weaken their common enemy. Adina Beg won the battle and became the Governor of Punjab. Sikhs rebuilt their fort Ramgarh and repaired the Harmandir Sahib. Beg was well acquainted with the strength of the Sikhs and he feared they would oust him if he allowed them to grow stronger, so he led a strong army to demolish the fort. After fighting valiantly, the Sikhs decided to leave the fort. Adina Beg died in 1758.
Jassa Singh Ramgarhia occupied the area to the north of Amritsar between the Ravi and the Beas rivers. He also added the Jalandhar region and Kangra hill areas to his estate. He had his capital in Sri Hargobindpur, a town founded by the sixth Guru. The large size of Ramgarhia's territory aroused the jealousy of the other Sikh Misls.
A conflict between Jai Singh Kanhaiya and Jassa Singh Ramgarhia developed and the Bhangi Misl sardars also developed differences with Jai Singh Kanhaiya. A big battle was fought between Jai Singh, Charat Singh, and Jassa Singh Ahluwalia on one side and Bhangis, Ramgarhias and their associates on the other side. The Bhangi side lost the battle.
Later, while hunting one day, Jassa Singh Ahluwalia happened to enter Ramgarhia territory, where Jassa Singh Ramgarhia's brother arrested him. Ramgarhia apologised for the misbehaviour of his brother and sent Ahluwalia back with gifts.
Due to mutual jealousies, fights continued among the Sikh Sardars. In 1776, the Bhangis changed sides and joined Jai Singh Kanhaiya to defeat Jassa Singh Ramgarhia. His capital at Sri Hargobindpur was taken over, he was pursued from village to village, and he was finally forced to vacate all his territory. He had to cross the river Sutlej and go to Amar Singh, the ruler of Patiala. Maharaja Amar Singh welcomed Ramgarhia, who then occupied the areas of Hansi and Hissar, which he eventually handed over to his son, Jodh Singh Ramgarhia.
Maharaja Amar Singh and Ramgarhia took control of the villages to the west and north of Delhi, now forming parts of Haryana and western Uttar Pradesh. The Sikhs disciplined and brought to justice the Nawabs who were harassing their non-Muslim population. Jassa Singh Ramgarhia entered Delhi in 1783. Shah Alam II, the Mughal emperor, extended the Sikhs a warm welcome, and Ramgarhia left Delhi after receiving gifts from him. Because of differences arising over the division of the Jammu state revenues, the longtime friends and neighbours Maha Singh, Jathedar of the Sukerchakia Misl, and Jai Singh, Jathedar of the Kanheya Misl, became enemies. This resulted in a war which changed the course of Sikh history. Maha Singh requested Ramgarhia's help. In the battle, Jai Singh lost his son, Gurbakhsh Singh, while fighting the Ramgarhias.
After continuous raids, the Sikhs under Jassa Singh Ahluwalia, Baba Baghel Singh and Jassa Singh Ramgarhia defeated the Mughals on 11 March 1783, captured Delhi and hoisted the Sikh flag (Nishan Sahib) over the Red Fort. Ahluwalia was proclaimed king, but the Sikhs handed the city back to the Mughals after signing peace treaties.
Jai Singh Kanheya's widowed daughter-in-law, Sada Kaur, though very young, was a great statesperson. She foresaw the end of Khalsa power through such mutual battles and was able to convince Maha Singh to adopt the path of friendship. To this end she offered the hand of her daughter, then only a child, to his son, Ranjit Singh (later the Maharaja of the Punjab), who was then just a boy. The balance of power shifted in favour of this united Misl, making Ranjit Singh the leader of the most powerful union of the Misls.
When the Afghan invader Zaman Shah Durrani came in 1788, however, the Sikhs were still divided. The Ramgarhia and Bhangi Misls were not willing to help Ranjit Singh fight the invader, so the Afghans took over Lahore and looted it. Ranjit Singh occupied Lahore in 1799, but still the Ramgarhias and Bhangis did not accept him as the leader of all the Sikhs. They got the support of their friends and marched to Lahore to challenge Ranjit Singh. When the Bhangi leader died, Jassa Singh Ramgarhia returned to his territory. Ramgarhia was eighty years old when he died in 1803. His son, Jodh Singh Ramgarhia, developed good relations with Ranjit Singh and they never fought again.
Ranjit Singh was crowned on 12 April 1801 (to coincide with Baisakhi). Sahib Singh Bedi, a descendant of Guru Nanak Dev, conducted the coronation. Gujranwala served as his capital from 1799. In 1802 he shifted his capital to Lahore and Amritsar. Ranjit Singh rose to power in a very short period, from a leader of a single Sikh misl to finally becoming the Maharaja (Emperor) of Punjab.
Nihang Abchal Nagar (Nihangs from Hazur Sahib), 1844. Shows turban-wearing Sikh soldiers with chakrams.
The Sikh Empire (1801–1849) was formed on the foundations of the Punjabi Army by Maharaja Ranjit Singh. The Empire extended from the Khyber Pass in the west, to Kashmir in the north, to Sindh in the south, and Tibet in the east. The main geographical footprint of the empire was the Punjab. The religious demography of the Sikh Empire was Muslim (80%), Sikh (10%), and Hindu (10%).
The foundations of the Sikh Empire, in the period of the Punjabi Army, can be traced to as early as 1707, starting with the death of Aurangzeb and the downfall of the Mughal Empire. After fighting off local Mughal remnants, allied Rajput leaders, Afghans, and occasionally hostile Punjabi Muslims who sided with other Muslim forces, the fall of the Mughal Empire provided opportunities for the army, known as the Dal Khalsa, to lead expeditions against the Mughals and Afghans. This led to the growth of the army, which split into different Punjabi armies and then into semi-independent misls, each controlling different areas and cities. In the period from 1762 to 1799, the Sikh rulers of these misls came into their own. The formal start of the Sikh Empire came with the disbandment of the Punjabi Army at the time of the coronation of Maharaja Ranjit Singh in 1801, creating one unified political empire. All the misldars affiliated with the Army were nobility, usually with long and prestigious family histories in Punjab.
The Sikh rulers were very tolerant of other religions, and arts, painting and writing flourished in Punjab. In Lahore alone there were 18 formal schools for girls, besides specialist schools for technical training, languages, mathematics and logic, as well as specialised schools for the three major religions: Hinduism, Islam, and Sikhism. There were craft schools specialising in miniature painting, sketching, drafting, architecture, and calligraphy. There was not a mosque, a temple, or a dharmsala that did not have a school attached to it. In the Arabic and Sanskrit schools and colleges, all the sciences, as well as Oriental literature, Oriental law, logic, philosophy, and medicine, were taught to the highest standard. In Lahore, schools opened from 7 am and closed at midday. In no case was a class allowed to exceed 50 pupils.
Ghorchara (Horse-mounted) Bodyguards of Maharaja Ranjit Singh of Punjab.
The Sikh Fauj-i-Ain (regular army) consisted of roughly 71,000 men and consisted of infantry, cavalry, and artillery units. Ranjit Singh employed generals and soldiers from many countries including Russia, Italy, France, and America.
There was strong collaboration in defense against foreign incursions such as those initiated by Shah Zaman and Timur Shah Durrani, and the city of Amritsar was attacked numerous times. Yet the period is remembered by Sikh historians as the "Heroic Century", mainly to describe the rise of the Sikhs to political power against large odds: a hostile religious environment and a tiny Sikh population compared to the other religious and political powers, which were much larger in the region.
After Maharaja Ranjit Singh's death in 1839, the empire was severely weakened by internal divisions and political mismanagement. This opportunity was used by the British Empire to launch the First Anglo-Sikh War. The Battle of Ferozeshah in 1845 marked many turning points: the British encountered the Punjabi Army, opening with a gun-duel in which the Sikhs "had the better of the British artillery". But as the British made advances, Europeans in their army were especially targeted, as the Sikhs believed that if the army "became demoralised, the backbone of the enemy's position would be broken". The fighting continued throughout the night, earning it the nickname "night of terrors". The British position "grew graver as the night wore on", and they "suffered terrible casualties with every single member of the Governor General's staff either killed or wounded".
The British general Sir James Hope Grant recorded: "Truly the night was one of gloom and forbidding and perhaps never in the annals of warfare has a British Army on such a large scale been nearer to a defeat which would have involved annihilation." The Punjabis ended up recovering their camp, and the British were exhausted. Lord Hardinge sent his son to Mudki with a sword from his Napoleonic campaigns. A note in Robert Needham Cust's diary revealed that the British generals had decided to lay down arms: "News came from the Governor General that our attack of yesterday had failed, that affairs were desperate, all state papers were to be destroyed, and that if the morning attack failed all would be over; this was kept secret by Mr. Currie and we were considering measures to make an unconditional surrender to save the wounded...".
However, a series of events of the Sikhs being betrayed by some prominent leaders in the army led to its downfall. Maharaja Gulab Singh and Dhian Singh, were Hindu Dogras from Jammu, and top Generals of the army. Tej Singh and Lal Singh were secretly allied to the British. They supplied important war plans of the Army, and provided the British with updated vital intelligence on the Army dealings, which ended up changing the scope of the war and benefiting the British positions.
The Punjab Empire was finally dissolved at the end of the Second Anglo-Sikh War in 1849, after a series of wars with the British, into separate princely states and the British province of Punjab, which was granted provincial status and eventually a lieutenant governorship stationed in Lahore as a direct representative of the Royal Crown in London.
Every village in the Punjab, through the Tehsildar (taxman), had an ample supply of the Punjabi qaida (beginners book), which was compulsory for females and thus, almost every Punjabi woman was literate in the sense that she could read and write the lundee form of Gurmukhi.
In the carnage of revenge that followed 1857, the British Raj made a special effort to search every house in every village and to burn every book. Even in the secular schools of Lahore, which used Persian or the lundee form of Gurmukhi (given by Guru Angad Dev ji) as the medium of instruction, books fed major bonfires as British troops 'cleansed' the region.
Under the East India Company, and then under British colonial rule from 1858, Sikhs were feared and respected for their martial ability. After they played a key role in the suppression of the Indian 'Mutiny' of 1857–58, Sikhs were increasingly incorporated into the Indian army, not only because they were seen as 'loyal', but because the British believed that they were a 'martial race' whose religious traditions and popular customs made them skilled fighters.
The Sikhs again were honoured in the Battle of Saragarhi where twenty-one Sikhs of the 4th Battalion (then 36th Sikhs) of the Sikh Regiment of British India, died defending an army post from 10,000 Afghan and Orakzai tribesmen in 1897.
In 1873 and 1879 the First and Second Singh Sabhas were founded; the Sikh leaders of the Singh Sabha worked to offer a clear definition of Sikh identity and tried to purify Sikh belief and practice.
In 1882 the first Punjab university, the University of the Punjab, was founded at Lahore. In 1892 the Khalsa College was founded in Amritsar. In 1907 the Khalsa Diwan Society was established in Vancouver, British Columbia, Canada. In 1911 the first Gurdwara was established in London. In 1912 the first Gurdwara in the United States was established in Stockton, California.
In the two world wars 83,005 Sikh soldiers were killed and 109,045 were wounded. Sikh soldiers died or were wounded for the freedom of Britain and the world, often under shell fire with no protection other than their turban (a symbol of the Sikh faith).
At the onset of World War I, Sikh military personnel numbered around 35,000 men out of 161,000 troops (around 22%), yet Sikhs made up less than 2% of the total population of India. Sikhs, before and after this period, were, and are, well known for their martial skills, freedom in speaking their minds, and their daredevil courage.
A Sikh in World War II.
Indian Sikh soldiers in Italian campaign.
A company of 15th Sikhs at Le Sart, France, c. 1915
During World War I thousands of Sikhs from India fought alongside Britain and many sacrificed their lives for the greater cause. The Royal Military Academy Sandhurst honored the Sikhs by featuring a re-enactment by 36 Sikh volunteers.
In 1920 the Akali Party was established to free gurdwaras from corrupt masands (treasurers), and the Shiromani Gurdwara Parbandhak Committee (SGPC) was founded. In 1925 the Punjab Sikh Gurdwaras Act was passed, transferring control of the Punjab's historic gurdwaras to the Shiromani Gurdwara Parbandhak Committee.
In 1919 the Jallianwala Bagh massacre took place in Amritsar during the festival of Vaisakhi, when a crowd of 15,000 to 20,000 peaceful protesters, including women, children and the elderly, was fired upon under the orders of Reginald Dyer.
A non-violent agitation to assert the right to fell trees for Guru ka Langar from the land attached to Gurdwara Guru ka Bagh was underway. The first Sikh volunteers were arrested and tried for trespass, but from 25 August the police resorted to beating, day after day, the batches of Sikhs that came. Eventually the beatings stopped and the procedure of arrests resumed, with jail terms of about two and a half years and a fine of one hundred rupees each.
One such train left Amritsar on 29 October 1922 for the Attock Fort which would touch Hasan Abdal the following morning. The Sikhs of Panja Sahib decided to serve a meal to the detainees but when they reached the railway station with the food they were informed by the station master that the train was not scheduled to halt there.
Two of the Sikhs, Bhai Pratap Singh and Bhai Karam Singh who were leading the sangat went forward as the rumbling sound of the approaching train was heard and sat cross-legged in the middle of the track. Several others, men and women, followed suit. The train ran over eleven of the squatters before stopping while the Sikhs pleaded to serve the arrested Sikhs before proceeding. The Sikhs served the Singhs in the train and then turned to the injured. The worst mauled were Bhai Pratap Singh and Bhai Karam Singh, who succumbed to their injuries the following day.
In 1924 a special Jatha of five hundred Akalis approaching Jaito was fired upon by police; two hundred were injured and one hundred died, but the freedom to hold Akhand Path at Jaito was obtained after a span of one year and ten months.
Sohan Singh Bhakna and Kartar Singh Sarabha, alongside many other Punjabis, founded the Ghadar Party to overthrow British colonial authority in India by means of an armed revolution. The Ghadar Party is closely associated with the Babbar Akali Movement, a 1921 splinter group of "militant" Sikhs who broke away from the mainstream non-violent Akali movement.
In 1914 Baba Gurdit Singh led the Komagata Maru ship to the port of Vancouver with 346 Sikhs on board; the ship was forced to leave port on 23 July. Bela Singh Jain, an informer and agent of Inspector William Hopkinson, pulled out two guns and started shooting at the Khalsa Diwan Society Gurdwara Sahib on West 2nd Avenue. He murdered Bhai Bhag Singh, President of the Society, and Battan Singh. Bela Singh was charged with murder, but Hopkinson appeared as a witness in the case, fabricated much of his testimony at the trial, and Bela Singh was subsequently acquitted. On 21 October 1914, Bhai Mewa Singh, Granthi of the Khalsa Diwan Society, shot the Canadian policeman William Hopkinson in the Assize court corridor with two revolvers, believing him to be unscrupulous and corrupt and to be using informers to spy on Indian immigrants. Mewa Singh was later sentenced to death.
In 1940 Udham Singh, an Indian revolutionary socialist, assassinated Michael O'Dwyer to avenge the Jallianwala Bagh massacre, in which a peaceful crowd of 15,000 to 20,000 people, including women and children, had been fired upon in Amritsar.
Bhagat Puran Singh Pingalwara dedicated his life to the 'selfless service of humanity'. He founded Pingalwara in 1947 with only a few patients, the neglected and rejected of the streets of Amritsar. An early advocate of what we today refer to as the 'Green Revolution', Bhagat Puran Singh was spreading awareness about environmental pollution, and increasing soil erosion long before such ideas became popular.
The months leading up to the partition of India in 1947 saw heavy conflict in the Punjab between Sikhs and Muslims, with the effective religious migration of Punjabi Sikhs and Hindus from West Punjab mirroring a similar religious migration of Punjabi Muslims from East Punjab. The 1960s saw growing animosity and rioting between Punjabi Sikhs and Hindus in India, as the Punjabi Sikhs agitated for the creation of a Punjabi Sikh majority state, an undertaking which had been promised to the Sikh leader Master Tara Singh by Nehru in return for Sikh political support during the negotiations for Indian independence. The Sikhs obtained the Sikh-majority state of Punjab on 1 November 1966.
In 1950 the Sikh Rehat Maryada is published.
Communal tensions arose again in the late 1970s, fuelled by Sikh claims of discrimination and marginalisation by the secularist-dominated Indian National Congress ruling party and the "dictatorial" tactics adopted by the then Indian Prime Minister, Indira Gandhi. Frank argues that Gandhi's assumption of emergency powers in 1975 resulted in the weakening of the "legitimate and impartial machinery of government", and that her increasing "paranoia" about opposing political groups led her to instigate a "despotic policy of playing castes, religions and political groups against each other for political advantage". As a reaction against these actions came the emergence of the Sikh leader Sant Jarnail Singh Bhindranwale, who vocalised Sikh sentiment for justice. This pushed Punjab into a state of communal violence. Gandhi's 1984 action against Sant Jarnail Singh Bhindranwale led to the desecration of the Golden Temple in Operation Blue Star, ultimately to Gandhi's assassination by her Sikh bodyguards, and to the Sarbat Khalsa advocating the creation of a Sikh homeland, Khalistan. This resulted in an explosion of violence against the Sikh community in the anti-Sikh riots, which saw the massacre of thousands of Sikhs throughout India; Khushwant Singh described the actions as a Sikh pogrom in which he "felt like a refugee in my country. In fact, I felt like a Jew in Nazi Germany". Since 1984, relations between Sikhs and Hindus have reached a rapprochement helped by growing economic prosperity; however, in 2002 the claim of the popular right-wing Hindu organisation the RSS that "Sikhs are Hindus" angered Sikh sensibilities. Many Sikhs are still campaigning for justice for the victims of the violence and for the political and economic needs of the Punjab as espoused in the Khalistan movement.
In 1996 the Special Rapporteur of the Commission on Human Rights on freedom of religion or belief, Abdelfattah Amor (Tunisia, 1993–2004), visited India in order to compose a report on religious discrimination. In 1997, Amor concluded, "it appears that the situation of the Sikhs in the religious field is satisfactory, but that difficulties are arising in the political (foreign interference, terrorism, etc.), economic (in particular with regard to sharing of water supplies) and even occupational fields. Information received from nongovernment (sic) sources indicates that discrimination does exist in certain sectors of the public administration; examples include the decline in the number of Sikhs in the police force and the absence of Sikhs in personal bodyguard units since the murder of Indira Gandhi." The reduced intake of Sikhs into the Indian armed forces is also attributed to certain orders issued during the Indian Emergency of 1975–1977.
In 2002, Arjan Singh became the Marshal of the Indian Air Force.
There are a number of small pseudo-Sikh sects who are not considered to be Sikhs. See Sects of Sikhism for more information.
A large number of Hindu and Muslim peasants converted to Sikhism from conviction, fear, economic motives, or a combination of the three (Khushwant Singh 1999: 106; Ganda Singh 1935: 73).
Overview & Purpose
Escherichia coli, commonly referred to as E. coli, has many different strains. The most widely known serotypes of these bacteria can cause serious food poisoning or even death in humans. However, most strains are completely harmless. These strains are usually found in the gut of the host, where they help by producing vitamin K2 and aiding digestion. The presence of these bacteria is very beneficial, as it helps prevent pathogenic bacteria from establishing themselves in the intestine.
The Lac switch that we have created in the genetic coding of E. coli produces a glowing blue color that initially runs off of glucose and eventually runs off of lactose. With this technology, we can create a glow stick for emergency kits that will provide light in dire situations. By using a non-harmful strain of E. coli, we can create an environmentally conscious and biodegradable glow stick that will not harm its surroundings.
This technology will prove to be very helpful for hunters or those who are outdoors for they will not have to worry about disposing of their light source. Used like a regular glow stick, the different components of the device will remain separated and will be mixed together to produce light once a certain amount of force is applied.
Basic Components of a Lac Operon [1]
Natural Lac Operon with Various Parts [2]
The lac operon itself is a set of genes, found in the DNA of certain bacteria, that is required for the transport and metabolism of lactose. Most commonly found in Escherichia coli, the operon was the first example of a group of genes under the control of an operator region to which a lactose repressor (LacI) binds.
The Lac operon functions as a single transcription unit and in its basic form comprises an operator, a promoter, and one or more structural genes, together with regulatory elements such as a terminator, all transcribed into one polycistronic mRNA. Typically, the structural genes are LacZ, LacY, and LacA.
- LacZ encodes β-galactosidase, an intracellular enzyme that cleaves the disaccharide lactose into glucose and galactose.
- LacY encodes β-galactoside permease, a membrane-bound transport protein that pumps lactose into the cell.
- LacA encodes β-galactoside transacetylase, an enzyme that transfers an acetyl group from acetyl-CoA to β-galactosides.
"Only LacZ and LacY appear to be necessary for lactose catabolism" .
When the bacteria are transferred to lactose-containing medium, allolactose (which forms when lactose is present in the cell) binds to the LacI repressor, inhibits the binding of the repressor to the operator, and allows transcription of mRNA for enzymes involved in lactose metabolism and transport across the membrane as seen in the image.
The main idea is that E. coli (the most common organism used when investigating the Lac operon) conserves its resources by not making many Lac proteins when other, more readily used sugars such as glucose are available. This was tested by Jacques Monod during World War II. He grew E. coli on combinations of different sugars and discovered that when the bacteria are grown with both glucose and lactose, glucose is metabolized first, during the bacteria's growth phase I, and then lactose, during growth phase II.
This means that if both glucose and lactose are available to the cell, transcription will occur, but at a slow rate. If there is no lactose at all, nothing will be transcribed. As long as lactose is available, transcription happens because the LacI repressor is never bound to the operator. Thus, when these Lac proteins are made in the presence of lactose, the lac gene and its derivatives can be used to trigger a color change within the cell. Once the glucose is used up, lactose acts as the power source, and the lac operon can truly act as a reporter gene. In the case of our group, the lac operon device contained the necessary promoters, ribosome-binding sites, terminators, a LacI repressor, a cyan fluorescent protein, and a vector backbone based on the Type IIS assembly strategy, and it would turn a bright cyan color when exposed to lactose. This "switch" function has a multitude of possible applications, and one of these uses is the focus of this page.
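As a rough illustration only (this is not the team's actual construct or any published model), the following Python sketch captures the qualitative switch logic just described: no expression without lactose, low expression while glucose remains, and full expression once glucose is exhausted.

```python
# A rough sketch only: not the team's construct or code, just the
# qualitative logic of the lac switch described above.

def lac_switch_output(glucose_present: bool, lactose_present: bool) -> str:
    """Qualitative expression level of the lac-controlled reporter."""
    if not lactose_present:
        # Without lactose (or allolactose), LacI stays bound to the operator.
        return "off"
    if glucose_present:
        # Allolactose releases LacI, but catabolite repression keeps
        # expression low while glucose is still being used.
        return "low"
    # Lactose present and glucose exhausted: full induction of the operon
    # (or, in the engineered switch, the cyan fluorescent reporter).
    return "high"

for glucose in (True, False):
    for lactose in (True, False):
        print(f"glucose={glucose!s:5}  lactose={lactose!s:5}  -> "
              f"{lac_switch_output(glucose, lactose)}")
```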
Design: Our genetic circuit
OUR GENE SWITCH:
pSB1A3-1 is a high copy number plasmid. The replication origin is a pUC19-derived pMB1 (copy number of 100–300 per cell). The terminators bracketing the pSB1A3 MCS are designed to prevent transcription from inside the MCS from reading out into the vector.
Building: Assembly Scheme
Testing: Modeling and GFP Imaging
Network Diagram Illustration of the Lac Model (Julia)
A LAC SWITCH MODEL
We used a previously published synthetic switch, developed by Ceroni et al., to understand how our system could potentially be modeled and simulated. The graphic to the left depicts the relationships between the parameters of the Lac Operon switch described by Ceroni using a network diagram illustration. The parameters shown in the illustration relate to cell processes and could be used in forming a cohesive mathematical model of the cell's operation.
In order to approximate the behavior of this set-up, a mathematical model can be developed based upon the relationships between the processes found in the cell. These relationships can be expressed in mathematical terms using numbers that relate to the system, including creation or decay rates, concentrations, or various constants. The actual values for these parameters can be sourced from experimentation, literature, or a predefined steady-state.
If a model is well-defined and the necessary parameters known, a person may use the model to ascertain the state of a cell at a given point in time. For example, if an experimenter wanted to know the decay of the GFP protein molecules at a given point in time in a single cell, the following equation could be written using the notation found in the table below.
Decay = G × λG/L
The formula takes the concentration of the GFP protein (in molecules per cell) and multiplies it by the protein degradation rate (in inverse minutes). This results in a decay value for GFP in molecules per minute per cell.
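Purely as a numerical illustration of that sentence, with placeholder values rather than parameters taken from the Ceroni et al. model, the calculation looks like this:

```python
# Placeholder numbers only (not parameters from the Ceroni et al. model),
# just to show the units working out in the decay expression above.

G = 500.0            # GFP concentration, molecules per cell (assumed)
lambda_GL = 0.01     # protein degradation rate, per minute (assumed)

decay = G * lambda_GL    # molecules lost per minute per cell
print(f"GFP decay: {decay:.1f} molecules per minute per cell")
```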
The Ceroni et al. model and the network diagram illustration use the table of variables and parameters seen below in their representation of the Lac switch. The variables related to a particular cell process are located near to that process in the network diagram illustration.
Lac Switch Model: Important Variables and Parameters
- IPTG concentration
- GFP protein concentration
- free LacI molecules
- LacI molecules bound to IPTG
- mRNA molecules of GFP
- mRNA molecules of LacI
- free Repressor/Reporter plasmids
- Repressor/Reporter plasmids bound to LacI molecules
- Repressor/Reporter plasmids bound to induced LacI molecules
- number of Reporter plasmids per cell
- number of Repressor plasmids per cell
- protein degradation rate
- mRNA degradation rate
- GFP rate of synthesis
- LacI rate of synthesis
- GFP transcription rate
- LacI transcription rate
- equilibrium binding constant of the LacI-Ox complex
- equilibrium binding constant for the binding of induced LacI molecules to the Ox operator sequence
- equilibrium binding constant for IPTG-LacI binding
- time constant of LacI binding to operator sequences
- time constant of induced-LacI binding to operator sequences
- time constant of LacI-IPTG binding
AN INTERACTIVE MODEL
We used a model of the natural Lac operon to understand how changing the parameter values changes the behavior of the system. By changing the initial concentration of the input (IPTG in this case), we were able to estimate the threshold that produces an "on" state in the system. Initially, the code set the concentration at 0.32, which is seen in the β-galactosidase (Bgal) concentration vs. time plot (fig. 1).
Figure 1: Original Bgal Concentration vs. Time with I = 0.32
This value was changed again to 0.25 in determining the threshold that produces this "on" state (fig. 2).
Figure 2: Bgal Concentration vs. Time with I = 0.25
After adjusting these values up and down, a threshold was indeed found at an IPTG concentration of about 0.064 (fig. 3).
Figure 3: Bgal Concentration vs. Time with I = 0.064
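For context, the sketch below shows one way such a threshold search can be run in code. It is a toy model, not the course model or its code: a Hill-type induction term, a simple Euler integration loop, and a sweep over the IPTG values mentioned above, with every parameter value invented for illustration.

```python
# Toy sketch only: not the course model. A Hill-type induction term drives
# Bgal production, a simple Euler loop integrates it, and the IPTG input is
# swept over the values discussed above. All parameter values are invented.

def final_bgal(iptg, hours=10.0, dt=0.01, v_max=1.0, K=0.1, n=4, decay=0.5):
    """Integrate dB/dt = v_max*I^n/(K^n + I^n) - decay*B and return B at the end."""
    bgal = 0.0
    for _ in range(int(hours / dt)):
        production = v_max * iptg**n / (K**n + iptg**n)  # Hill-type induction
        bgal += (production - decay * bgal) * dt          # Euler step
    return bgal

for iptg in (0.32, 0.25, 0.064, 0.01):
    print(f"IPTG = {iptg:5.3f}  ->  steady-state Bgal ~ {final_bgal(iptg):.3f}")
```

Sweeping the input this way makes the switching region visible in the output values, which is the same idea as reading the threshold off the Bgal-vs-time plots above.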
COLLECTING EMPIRICAL VALUES TO IMPROVE THE MODEL
We explored how one technique, imaging via microscopy, could be used to measure the production rate of an output protein (in this case GFP in yeast) and thereby determine a "real" value for the maximum GFP production rate under our own laboratory conditions.
- show plot of data and discuss outcome.
- include some of the pictures of the raw data
- wrap up section to explain how the curves could be improved
Ideally, the GFP production rate measured by this method could be entered as a value for [which parameter] in the Ceroni et al. model.
[Survey response categories: supports & understands; does not support & understands; supports & does not understand; does not support & does not understand]
- My name is Emily Byrne, and I am a student majoring in biomedical engineering. I am taking BME 494 because ###. An interesting fact about me is that ###.
- My name is Sarah K. Halls, and I am a student majoring in Biomedical Engineering. I am taking BME 494 because I enjoy cell and tissue Engineering work and hope to start my career in this field of study. An interesting fact about me is that I did an internship at Harvard University working on cell patterning.
- My name is Edgil Hector (Sean), and I am a student majoring in biomedical engineering. I am taking BME 494 because the subject is relevant to my interests, and the class counts as a required technical elective. An interesting fact about me is that I am the most indecisive human being on the planet.
- My name is Julia Smith, and I am a senior majoring in Biomedical Engineering. I am taking BME 494 because I am extremely interested in synthetic biology. An interesting fact about me is that in addition to my nerdy side and love of academic learning, I train reining horses.
A graph is a pictorial representation of the relationship between two quantities. A graph can be anything from a simple bar graph that displays the measurements of various objects to a more complicated graph of functions in two or three dimensions. The former shows the relationship between the kind of object and its quantity; the latter shows the relationship between input and output. Graphing is a way to make information easier for a viewer to absorb.
Types of Graphs
The simplest graphs show how many there are of various objects. For example, a bar graph might name the months of the year along a horizontal axis and show numbers (for the number of days in each month) along a vertical axis. Then a rectangle (or bar) is drawn above each month. The height of the bar might indicate the number of days in that month on which it rained, or on which a person exercised, or on which the temperature rose above 90 degrees. See the generic example of a bar graph below (top right).
Another simple kind of graph is a circle graph or pie graph, which shows fractions or percentages. In this kind of graph, a circle is divided into pie-shaped sectors. Each sector is given a label and indicates the fraction of the total area that goes with that label. See the generic example of a pie graph below (top middle).
A pie chart might be used to display the percentages of a budget that are allotted to various expenditures. If the sector labeled "medical bills" takes up two-tenths of the area of the circle, that means that two-tenths, or 20 percent, of the budget is devoted to medical expenses. Usually the percentages are written on the sectors along with their labels to make the graph easier to read. In both bar graphs and pie graphs, the reader can immediately pick out the largest and smallest categories without having to search through a chart or list, making it easy to compare the relative sizes of many objects simultaneously.
Often the two quantities being graphed can both be represented numerically. For example, student scores on examinations are often plotted on a graph, especially if there are many students taking the exam. In such a graph, the numbers on the horizontal axis represent the possible scores on the exam, and the numbers on the vertical axis represent the numbers of students who earned each score. The information could be plotted as a simple bar graph. If only the top point of each bar is plotted and a curve is drawn to connect these points, the result is a line graph. See the generic example of a line graph on the previous page (top left). Although the points on the line between the plotted points do not correspond to any pieces of information, a smooth line can be easier to understand than a large collection of bars or dots.
Graphs for Continuous Data
Graphs become slightly more complicated when one (or both) of the quantities in the graph can have continuous values rather than a discrete set. A common example of this is a quantity that changes over time. For example, a scientist might be observing the rate of growth of bacteria. The rates could be plotted so that the horizontal axis displays units of time and the vertical axis displays numbers (how many bacteria exist).
Then, for instance, the point (3,1000) would mean that at "time 3" (which could mean three o'clock, or three seconds after starting, or various other times, depending on the units being used and the starting point of the experiment) there were one thousand bacteria in the sample. The rise and fall of the graph show the increases and decreases in the number of bacteria.
In this case, even though only a finite set of points represent actual data, the remaining points do have a natural interpretation. For instance, suppose that in addition to the point (3,1000), the graph also contains the point (4,1500) and that both of these points correspond to actual measurements. If the scientist joins all of the points on the graph by a line, then the point (3.5,1200) might lie on the graph, or perhaps the point (3.5,1350).
There are many different lines that can be drawn through a collection of points. Looking at the overall shape of the data points helps the scientist decide which line is the most reasonable fit. In the previous example, the scientist could estimate that at time 3.5, there were 1200 (or 1350) bacteria in the sample. Thus graphing can be helpful in making estimates and predictions.
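As a small sketch of this kind of estimate (in R, with the hypothetical bacteria counts from the example), joining the two measured points with a straight line gives a value at time 3.5:

approx(x = c(3, 4), y = c(1000, 1500), xout = 3.5)$y
# 1250

A straight line between (3, 1000) and (4, 1500) gives the estimate 1250; other reasonable curves drawn through the same data could give values such as the 1200 or 1350 mentioned above.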
Graphs for Predictions
Sometimes the purpose for drawing a graph may not be to view the data already known but to construct a mathematical model that will allow one to analyze data and make predictions. One of the simplest models that can be constructed from a set of data is called a best-fit line. Such a line is useful in situations in which the data are roughly linear—that is, they are increasing or decreasing at a roughly constant rate but do not fall precisely on a line. (See graph on the previous page, bottom right.)
A best-fit line can be a very useful tool for analyzing data because lines have very simple formulas describing their behavior. If, for instance, one has collected data up to time 5 and wishes to predict what the value will be at time 15, the value 15 can be inserted into the formula for the line to derive an estimation. One can also determine how good an estimate is likely to be by computing the correlation factor for the data. The correlation factor is a quantity that measures how close the set of data is to being linear; that is, how good a "fit" the best-fit line actually is.
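A rough sketch of this process in R, using made-up measurements collected up to time 5 (the data values here are purely illustrative):

time  <- 1:5
value <- c(2.1, 3.9, 6.2, 8.1, 9.8)      # hypothetical measurements
fit <- lm(value ~ time)                   # least-squares best-fit line
cor(time, value)                          # correlation: how close the data are to linear
predict(fit, newdata = data.frame(time = 15))   # estimated value at time 15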
Graphs for Functions
One of the most common uses of graphs is to display the information encoded in a function. A function, informally speaking, is an operation or rule that can be applied to numbers. Functions are usually graphed in the Cartesian plane (that is, the x,y-plane) with the horizontal or x-axis representing the input variable and the vertical or y-axis representing the output variable. The graph of a function differs from the other types of graphs described so far in that all the points on the graph represent actual information. A concrete relationship, usually given by a mathematical formula, connects the two objects being analyzed.
For example, the "squaring" function takes numbers and squares them. Thus an input of the number 1 corresponds to an output of 1; an input of 2 corresponds to an output of 4; an input of −7 corresponds to an output of 49; and so on. Therefore, the graph of this function contains the points (1, 1), (2, 4), (−7, 49), and infinitely many others.
Does the point (10, 78) lie on this graph? To determine the answer, examine which characteristics all the points on the graph have in common. Any point on the graph of a function represents an input-output pair, with the x-coordinate representing input and the y-coordinate representing output. With the squaring function, each output value is the square of the corresponding input value, so on the graph of the squaring function, each y-coordinate must be the square of the corresponding x-coordinate. Because 78 is not the square of 10, the point (10, 78) does not lie on the graph of the squaring function.
It is traditional to name graphs with an equation rather than with words. The equation of any graph, regardless of whether it is the graph of a function, is meant to be a perfect description of the graph—it should tell the viewer the relationship between the x- and y-coordinates of the numbers being graphed.
For example, the equation of the graph of the squaring function is y = x² because the y-coordinate of any point on the graph is the square of the x-coordinate. The line that passes through the point (0, 3) and slants upwards with slope 4 (that is, at a rate of four units up for every one unit to the right) has equation y = 4x + 3. This indicates that for every point on the graph, the y-coordinate is 3 more than 4 times the x-coordinate.
An equation of a graph has many uses: it is not only a description of the graph but also a mechanism for finding points on the graph and a test for determining whether a given point lies on the graph. For example, to find out whether the point (278, 3254) lies on the line y = 4x + 3, simply substitute x = 278 and y = 3254 into the equation, resulting in the inequality 3254 ≠ 4(278) + 3. Because these numbers are not equal, the point does not lie on the line. However, the equation shows that the point (278, 1115) does lie on the line.
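A quick check of the two points from this example, sketched in R using nothing beyond the equation y = 4x + 3:

on_line <- function(x, y) y == 4 * x + 3   # test whether (x, y) satisfies the equation
on_line(278, 3254)   # FALSE: 4(278) + 3 = 1115, not 3254
on_line(278, 1115)   # TRUE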
see also Data Collection and Interpretation; Graphs and Effects of Parameter Changes; Statistical Analysis.
The equation of a circle follows from its definition and the Pythagorean theorem. A circle is the set of all points in a plane at a given distance (the radius) from a given point (the center); a line segment connecting two points on the circle and passing through the center is called a diameter. Applying the Pythagorean theorem to any point on a circle with center (h, k) and radius r gives the standard form of the equation, (x − h)² + (y − k)² = r². A circle centered at the origin can also be described parametrically: it is the locus of all points of the form (r cos θ, r sin θ) as the angle θ varies, since every such point lies at distance r from the origin.
Worksheets on this topic typically ask students to write the equation of a circle in standard form from a graph, from the center and radius, or from the center and another point on the circle; to match equations of circles with their graphs and report the center, radius, area and circumference; to switch between the standard form and the general form ax² + ay² + bx + cy + d = 0 by completing the square; to find the points of intersection, if any, between a circle and a line; and to work with circle and semicircle functions, sketching their graphs and finding their domains, ranges, centers and radii. A worked example of completing the square is given below.
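As a worked illustration (the particular numbers are chosen only for the example), a circle with center (5, −1) and radius 4 has standard equation (x − 5)² + (y + 1)² = 16. Completing the square runs the same computation in reverse:

x² + y² − 10x + 2y + 10 = 0
(x² − 10x + 25) + (y² + 2y + 1) = −10 + 25 + 1
(x − 5)² + (y + 1)² = 16

so the general-form equation in the first line describes the circle with center (5, −1) and radius 4.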
These fraction lesson plans and worksheets span roughly second through fifth grade, with most material aimed at Grade 4. Typical objectives are that students will be able to identify the numerator and denominator of a fraction, recognise fractions from pictures and numbers, find and explain equivalent fractions, compare fractions, place fractions on a number line, write a whole number in fraction form, find the lowest common denominator of two or three fractions, add and subtract fractions and mixed numbers with like denominators, understand decimal notation for fractions, and multiply fractions. Students also learn the differences between proper fractions, improper fractions and mixed numbers; for example, one lesson has students discuss the fractions 1/3, 4/3 and 1 1/3.
Much of the material is aligned to the Common Core standards for Grade 4 (for example Math.4.NF.B.3a and Math.4.NF.B.3c), which ask students to explain why a fraction a/b is equivalent to (n × a)/(n × b) by using visual fraction models, with attention to how the number and size of the parts differ even though the two fractions themselves are the same size. A short worked example is given below.
A typical lesson begins with a warm-up review (for instance, the teacher posts three fractions on the board and asks students to find three equivalent fractions for each), followed by guided practice, worksheets differentiated to students' needs, and a formative assessment at the end focusing on mixed numbers and improper fractions. Sample units include "Exploring Fractions and Decimals" for Years 3 and 4, a four-day unit worth 50 points, and a lesson-study sequence taught to 3rd, 4th and 5th graders at the Mills College Children's School in Oakland, CA (instructor: Akihiko Takahashi, March 2009). Hands-on activities include creating and comparing fractions with folded paper strips, a chocolate-bar lesson for introducing multiplication of fractions, lessons on fractions on a number line, and equivalent-fraction memory and matching games. Self-teaching worktexts for 2nd-4th grade cover fractions and mixed numbers, adding and subtracting like fractions and mixed numbers, equivalent fractions, comparing fractions, and finding a fractional part using division, and printable versions of the lessons are available for download.
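A short worked example of the equivalence and addition ideas above (the numbers are chosen only for illustration):

1/3 = (2 × 1)/(2 × 3) = 2/6, so 1/3 and 2/6 are equivalent fractions: the parts are half the size, but there are twice as many of them.
With like denominators, add the numerators: 1/3 + 4/3 = 5/3.
5/3 is an improper fraction; written as a mixed number it is 1 2/3, just as 4/3 = 1 1/3.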
An international team of astronomers has announced the discovery of an enormous and surprising structure, formed fairly early in the lifetime of the universe. This structure is a “supercluster” of galaxies, created when many galaxies are bound together by the force of gravity. Conceptually, you can imagine it as looking similar to a swarm of bees, albeit on a cosmic scale, and each bee being replaced with an entire galaxy full of hundreds of billions of stars.
This supercluster is both old and staggeringly large for its age. It is named Hyperion after a titan of Greek mythology, and it comprises roughly the same amount of mass as five thousand galaxies the size of our own Milky Way.
Hyperion is not only large in terms of its mass, it’s also physically very large. Roughly speaking, you can imagine it as a cylinder about 200 million light years across the circular ends, and about 500 million light years long. To give a basis of comparison, our own Milky Way galaxy, which comprises about 200 billion stars, is only about 100,000 light years across.
Superclusters are not all that unusual; astronomers know of many of them. What sets Hyperion apart from the others is its age. The universe is about 14 billion years old, only two billion years older than Hyperion, and in cosmic time that difference is the blink of an eye.
And that’s a peculiarity.
It turns out that these structures are a lot like people — they start out small and grow over time. When the universe was very young, galaxies were generally much smaller. As time passed, small galaxies collided and coalesced, resulting in bigger galaxies. At the same time, and on much bigger scales, these bigger galaxies were pulled closer to one another through the force of gravity, assembling into clusters and then superclusters.
It’s also not at all unusual to find superclusters of galaxies that are closer to Earth than Hyperion. Because we use light to see astronomical objects, and light has a finite speed, the objects closer to Earth are younger — maybe a few hundred million or a billion years old.
However, the Hyperion supercluster existed about 12 billion years ago. And it’s surprising to see such a large assemblage of matter so early in the history of the universe. On a human scale, it’s a little like looking at a daycare where you expect to see only toddlers and seeing a 2-year-old who is already the size of an adult.
Apart from its size, Hyperion is an unusual supercluster in terms of the density of its galaxies. It’s a little more diffuse than modern superclusters that have relatively compact structure, with small groups of galaxies clustered nearer to one another.
And that’s an important point from an astronomical point of view. Remember that we are not seeing Hyperion as it exists now. We’re seeing it as it existed 12 billion years ago. If we were able to see it as it exists now, it would presumably look more like more modern superclusters, located closer to Earth. By extension, it’s probably true that Hyperion gives us a glimpse of what nearby superclusters looked like in the distant past.
In a very real way, Hyperion helps us understand just how some of the biggest structures in the universe came to be assembled. It’s perhaps easiest to think of the galaxies distributed across the universe as looking a little like the foam on a freshly pulled pint of Guinness stout. Look at it from a distance, and it’s uniform. But look more closely, and you see the individual bubbles — little spheres of liquid that are empty in the middle.
The universe is similar, with galaxies spread uniformly on average — but, if you look more closely, you will see galaxies clustered in what are called walls and filaments, surrounding vast voids in which few galaxies exist. Those filaments are made of superclusters.
So now we have a little insight into the moment in the history of the universe when these filaments and voids began to form — when large-scale structures began to come into existence.
Anyone who is a parent remembers the time when they watched their child and saw the adult they would eventually become. With the observation of the Hyperion supercluster, we’re now doing that for the universe as a whole. |
Plotting geospatial data is a common visualisation task, and one that requires specialised tools. Typically the problem can be decomposed into two problems: using one data source to draw a map, and adding metadata from another information source to the map. This chapter will help you tackle both problems. I’ve structured the chapter in the following way: Section 6.1 outlines a simple way to draw maps using geom_polygon(), which is followed in Section 6.2 by a modern “simple features” (sf) approach using geom_sf(). Next, Sections 6.3 and 6.4 discuss how to work with map projections and the underlying sf data structure. Finally, Section 6.5 discusses how to draw maps based on raster data.
Perhaps the simplest approach to drawing maps is to use geom_polygon() to draw boundaries for different regions. For this example we take data from the maps package using ggplot2::map_data(). The maps package isn’t particularly accurate or up-to-date, but it’s built into R so it’s an easy place to start. Here’s a data set specifying the county boundaries for Michigan:
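The data set itself is not printed in this excerpt. A minimal sketch of how it can be constructed, assuming the lon, lat, group and id column names used in the plotting code below:

library(ggplot2)
library(dplyr)
# assumed preprocessing: rename the map_data() columns to the names used below
mi_counties <- map_data("county", "michigan") %>%
  select(lon = long, lat, group, id = subregion)
head(mi_counties)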
In this data set we have four variables: lat and lon specify the latitude and longitude of a vertex (i.e. a corner of the polygon), id specifies the name of a region, and group provides a unique identifier for contiguous areas within a region (e.g. if a region consisted of multiple islands). To get a better sense of what the data contains, we can plot it with geom_point(), as shown in the left panel below. In this plot, each row in the data frame is plotted as a single point, producing a scatterplot that shows the corners of every county. To turn this scatterplot into a map, we use geom_polygon() instead, which draws each county as a distinct polygon. This is illustrated in the right panel below.
ggplot(mi_counties, aes(lon, lat)) + geom_point(size = .25, show.legend = FALSE) + coord_quickmap() ggplot(mi_counties, aes(lon, lat, group = group)) + geom_polygon(fill = "white", colour = "grey50") + coord_quickmap()
In both plots I use coord_quickmap() to adjust the axes to ensure that longitude and latitude are rendered on the same scale. Chapter 16 discusses coordinate systems in ggplot2 in more general terms, but as we’ll see below, geospatial data often require a more exacting approach. For this reason, ggplot2 provides coord_sf() to handle spatial data specified in simple features format.
There are a few limitations to the approach outlined above, not least of which is the fact that the simple “longitude-latitude” data format is not typically used in real world mapping. Vector data for maps are typically encoded using the “simple features” standard produced by the Open Geospatial Consortium. The sf package, developed by Edzer Pebesma (https://github.com/r-spatial/sf), provides an excellent toolset for working with such data, and the geom_sf() and coord_sf() functions in ggplot2 are designed to work together with the sf package.
To introduce these functions, we rely on the ozmaps package by Michael Sumner (https://github.com/mdsumner/ozmaps/), which provides maps for Australian state boundaries, local government areas, electoral boundaries, and so on. To illustrate what an sf data set looks like, we import a data set depicting the borders of Australian states and territories:
library(ozmaps) library(sf) oz_states <- ozmaps::ozmap_states oz_states #> Simple feature collection with 9 features and 1 field #> Geometry type: MULTIPOLYGON #> Dimension: XY #> Bounding box: xmin: 106 ymin: -43.6 xmax: 168 ymax: -9.23 #> Geodetic CRS: GDA94 #> # A tibble: 9 × 2 #> NAME geometry #> * <chr> <MULTIPOLYGON [°]> #> 1 New South Wales (((151 -35.1, 151 -35.1, 151 -35.1, 151 -35.1, 151 -35.2, 1… #> 2 Victoria (((147 -38.7, 147 -38.7, 147 -38.7, 147 -38.7, 147 -38.7)),… #> 3 Queensland (((149 -20.3, 149 -20.4, 149 -20.4, 149 -20.3)), ((149 -20.… #> 4 South Australia (((137 -34.5, 137 -34.5, 137 -34.5, 137 -34.5, 137 -34.5, 1… #> 5 Western Australia (((126 -14, 126 -14, 126 -14, 126 -14, 126 -14)), ((124 -16… #> 6 Tasmania (((148 -40.3, 148 -40.3, 148 -40.3, 148 -40.3)), ((147 -39.… #> # … with 3 more rows
This output shows some of the metadata associated with the data (discussed momentarily), and tells us that the data is essentially a tibble with 9 rows and 2 columns. One advantage to sf data is immediately apparent: we can easily see the overall structure of the data. Australia is comprised of six states and some territories. There are 9 distinct geographical units, so there are 9 rows in this tibble (cf. the mi_counties data where there is one row per polygon vertex).
The most important column is geometry, which specifies the spatial geometry for each of the states and territories. Each element in the geometry column is a multipolygon object which, as the name suggests, contains data specifying the vertices of one or more polygons that demark the border of a region. Given data in this format, we can use geom_sf() and coord_sf() to draw a serviceable map without specifying any parameters or even explicitly declaring any aesthetics:
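A minimal sketch of such a plot (no aesthetics are declared; geom_sf() locates the geometry column on its own):

ggplot(oz_states) +
  geom_sf() +
  coord_sf()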
To understand why this works, note that geom_sf() relies on a geometry aesthetic that is not used elsewhere in ggplot2. This aesthetic can be specified in one of three ways:
- In the simplest case (illustrated above), when the user does nothing, geom_sf() will attempt to map it to a column named geometry.
- If the data argument is an sf object, then geom_sf() can automatically detect a geometry column, even if it’s not called geometry.
- You can specify the mapping manually in the usual way with aes(geometry = my_column). This is useful if you have multiple geometry columns.
In some instances you may want to overlay one map on top of another. The ggplot2 package supports this by allowing you to add multiple geom_sf() layers to a plot. As an example, I’ll use the oz_states data to draw the Australian states in different colours, and will overlay this plot with the boundaries of Australian electoral regions. To do this, there are two preprocessing steps to perform. First, I’ll use dplyr::filter() to remove the “Other Territories” from the state boundaries. Second, I’ll extract the electoral boundaries in a simplified form using the ms_simplify() function from the rmapshaper package. This is generally a good idea if the original data set (in this case ozmaps::abs_ced) is stored at a higher resolution than your plot requires, in order to reduce the time taken to render the plot.
oz_states <- ozmaps::ozmap_states %>% filter(NAME != "Other Territories") oz_votes <- rmapshaper::ms_simplify(ozmaps::abs_ced) #> Registered S3 method overwritten by 'geojsonlint': #> method from #> print.location dplyr
Now that I have data sets oz_states and oz_votes to represent the state borders and electoral borders respectively, the desired plot can be constructed by adding two geom_sf() layers to the plot. The first layer uses oz_states to fill the states in different colours, and the second uses oz_votes to draw the electoral boundaries:
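A sketch of the two layers (mapping the fill to the NAME column, as discussed below):

ggplot() +
  geom_sf(data = oz_states, mapping = aes(fill = NAME), show.legend = FALSE) +
  geom_sf(data = oz_votes, fill = NA) +
  coord_sf()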
It is worth noting that the first layer of this plot maps the fill aesthetic onto a variable in the data. In this instance the NAME variable is a categorical variable and does not convey any additional information, but the same approach can be used to visualise other kinds of area metadata. For example, if oz_states had an additional column specifying the unemployment level in each state, we could map the fill aesthetic to that variable.
Adding labels to maps is an example of annotating plots (Chapter 8) and is supported by geom_sf_text() and geom_sf_label(). For example, while an Australian audience might reasonably be expected to know the names of the Australian states (which are left unlabelled in the plot above), few Australians would know the names of the different electorates in the Sydney metropolitan region. In order to draw an electoral map of Sydney, then, we would first need to extract the map data for the relevant electorates, and then add the labels. The plot below zooms in on the Sydney region by specifying the x and y limits in coord_sf(), and then uses geom_sf_label() to overlay each electorate with a label:
# filter electorates in the Sydney metropolitan region sydney_map <- ozmaps::abs_ced %>% filter(NAME %in% c( "Sydney", "Wentworth", "Warringah", "Kingsford Smith", "Grayndler", "Lowe", "North Sydney", "Barton", "Bradfield", "Banks", "Blaxland", "Reid", "Watson", "Fowler", "Werriwa", "Prospect", "Parramatta", "Bennelong", "Mackellar", "Greenway", "Mitchell", "Chifley", "McMahon" )) # draw the electoral map of Sydney ggplot(sydney_map) + geom_sf(aes(fill = NAME), show.legend = FALSE) + coord_sf(xlim = c(150.97, 151.3), ylim = c(-33.98, -33.79)) + geom_sf_label(aes(label = NAME), label.padding = unit(1, "mm")) #> Warning in st_point_on_surface.sfc(sf::st_zm(x)): st_point_on_surface may not #> give correct results for longitude/latitude data
The warning message is worth noting. Internally geom_sf_label() uses the function st_point_on_surface() from the sf package to place labels, and the warning message occurs because most algorithms used by sf to compute geometric quantities (e.g., centroids, interior points) are based on an assumption that the points lie on a flat two dimensional surface and are parameterised with Cartesian co-ordinates. This assumption is not strictly warranted, and in some cases (e.g., regions near the poles) calculations that treat longitude and latitude in this way will give erroneous answers. For this reason, the sf package produces warning messages when it relies on this approximation.
Although geom_sf() is special in some ways, it nevertheless behaves in much the same fashion as any other geom, allowing additional data to be plotted on a map with standard geoms. For example, we may wish to plot the locations of the Australian capital cities on the map using geom_point(). The code below illustrates how this is done:
oz_capitals <- tibble::tribble( ~city, ~lat, ~lon, "Sydney", -33.8688, 151.2093, "Melbourne", -37.8136, 144.9631, "Brisbane", -27.4698, 153.0251, "Adelaide", -34.9285, 138.6007, "Perth", -31.9505, 115.8605, "Hobart", -42.8821, 147.3272, "Canberra", -35.2809, 149.1300, "Darwin", -12.4634, 130.8456, ) ggplot() + geom_sf(data = oz_votes) + geom_sf(data = oz_states, colour = "black", fill = NA) + geom_point(data = oz_capitals, mapping = aes(x = lon, y = lat), colour = "red") + coord_sf()
In this example geom_point() is used only to specify the locations of the capital cities, but the basic idea can be extended to handle point metadata more generally. For example, if the oz_capitals data were to include an additional variable specifying the number of electorates within each metropolitan area, we could encode that data using the size aesthetic.
At the start of the chapter I drew maps by plotting longitude and latitude on a Cartesian plane, as if geospatial data were no different to other kinds of data one might want to plot. To a first approximation this is okay, but it’s not good enough if you care about accuracy. There are two fundamental problems with the approach.
The first issue is the shape of the planet. The Earth is neither a flat plane, nor indeed is it a perfect sphere. As a consequence, to map a co-ordinate value (longitude and latitude) to a location we need to make assumptions about all kinds of things. How ellipsoidal is the Earth? Where is the centre of the planet? Where is the origin point for longitude and latitude? Where is the sea level? How do the tectonic plates move? All these things are relevant, and depending on what assumptions one makes the same co-ordinate can be mapped to locations that are many meters apart. The set of assumptions about the shape of the Earth is referred to as the geodetic datum and while it might not matter for some data visualisations, for others it is critical. There are several different choices one might consider: if your focus is North America the “North American Datum” (NAD83) is a good choice, whereas if your perspective is global the “World Geodetic System” (WGS84) is probably better.
The second issue is the shape of your map. The Earth is approximately ellipsoidal, but in most instances your spatial data need to be drawn on a two dimensional plane. It is not possible to map the surface of an ellipsoid to a plane without some distortion or cutting, and you will have to make choices about what distortions you are prepared to accept when drawing a map. This is the job of the map projection.
Map projections are often classified in terms of the geometric properties that they preserve, e.g.
- Area-preserving projections ensure that regions of equal area on the globe are drawn with equal area on the map.
- Shape-preserving (or conformal) projections ensure that the local shape of regions is preserved.
Unfortunately, it’s not possible for any projection to be both shape-preserving and area-preserving. This makes it a little beyond the scope of this book to discuss map projections in detail, other than to note that the simple features specification allows you to indicate which map projection you want to use. For more information on map projections, see Geocomputation with R https://geocompr.robinlovelace.net/.
Taken together, the geodetic datum (e.g, WGS84), the type of map projection (e.g., Mercator) and the parameters of the projection (e.g., location of the origin) specify a coordinate reference system, or CRS, a complete set of assumptions used to translate the latitude and longitude information into a two dimensional map. An sf object often includes a default CRS, as illustrated below:
st_crs(oz_votes) #> Coordinate Reference System: #> User input: EPSG:4283 #> wkt: #> GEOGCRS["GDA94", #> DATUM["Geocentric Datum of Australia 1994", #> ELLIPSOID["GRS 1980",6378137,298.257222101, #> LENGTHUNIT["metre",1]]], #> PRIMEM["Greenwich",0, #> ANGLEUNIT["degree",0.0174532925199433]], #> CS[ellipsoidal,2], #> AXIS["geodetic latitude (Lat)",north, #> ORDER, #> ANGLEUNIT["degree",0.0174532925199433]], #> AXIS["geodetic longitude (Lon)",east, #> ORDER, #> ANGLEUNIT["degree",0.0174532925199433]], #> USAGE[ #> SCOPE["Horizontal component of 3D system."], #> AREA["Australia including Lord Howe Island, Macquarie Islands, Ashmore and Cartier Islands, Christmas Island, Cocos (Keeling) Islands, Norfolk Island. All onshore and offshore."], #> BBOX[-60.56,93.41,-8.47,173.35]], #> ID["EPSG",4283]]
Most of this output corresponds to a well-known text (WKT) string that unambiguously describes the CRS. This verbose WKT representation is used by sf internally, but there are several ways to provide user input that sf understands. One such method is to provide numeric input in the form of an EPSG code (see http://www.epsg.org/). The default CRS in the oz_votes data corresponds to EPSG code 4283:
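One way to check this, sketched with sf’s CRS comparison:

st_crs(oz_votes) == st_crs(4283)
#> [1] TRUE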
In ggplot2, the CRS is controlled by coord_sf(), which ensures that every layer in the plot uses the same projection. By default, coord_sf() uses the CRS associated with the geometry column of the data. Because sf data typically supply a sensible choice of CRS, this process usually unfolds invisibly, requiring no intervention from the user. However, should you need to set the CRS yourself, you can specify the crs parameter by passing valid user input to st_crs(). The example below illustrates how to switch from the default CRS to EPSG code 3112:
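A sketch of the two versions of the plot:

# default CRS, taken from the data
ggplot(oz_votes) + geom_sf()

# re-projected using EPSG code 3112
ggplot(oz_votes) + geom_sf() + coord_sf(crs = st_crs(3112))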
As noted earlier, maps created using coord_sf() rely heavily on tools provided by the sf package, and indeed the sf package contains many more useful tools for manipulating simple features data. In this section I provide an introduction to a few such tools; more detailed coverage can be found on the sf package website.
To get started, recall that one advantage to simple features over other representations of spatial data is that geographical units can have complicated structure. A good example of this in the Australian maps data is the electoral district of Eden-Monaro, plotted below:
As this illustrates, Eden-Monaro is defined in terms of two distinct polygons, a large one on the Australian mainland and a small island. However, the large region has a hole in the middle (the hole exists because the Australian Capital Territory is a distinct political unit that is wholly contained within Eden-Monaro, and as illustrated above, electoral boundaries in Australia do not cross state lines). In sf terminology this is an example of a MULTIPOLYGON geometry. In this section I’ll talk about the structure of these objects and how to work with them.
First, let’s use dplyr to grab only the geometry object:
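A sketch of this step, assuming the electorate is picked out of ozmaps::abs_ced by its NAME value:

# assumed selection by NAME, following the earlier Sydney example
edenmonaro <- ozmaps::abs_ced %>%
  filter(NAME == "Eden-Monaro") %>%
  pull(geometry)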
The metadata for the edenmonaro object can be accessed using helper functions. For example, st_geometry_type() extracts the geometry type (e.g., MULTIPOLYGON), st_dimension() extracts the number of dimensions (2 for XY data, 3 for XYZ), st_bbox() extracts the bounding box as a numeric vector, and st_crs() extracts the CRS as a list with two components, one for the EPSG code and the other for the proj4string. For example:
st_bbox(edenmonaro) #> xmin ymin xmax ymax #> 147.7 -37.5 150.2 -34.5
Normally when we print the edenmonaro object the output would display all the additional information (dimension, bounding box, geodetic datum etc) but for the remainder of this section I will show only the relevant lines of the output. In this case edenmonaro is defined by a MULTIPOLYGON geometry containing one feature:
edenmonaro #> Geometry set for 1 feature #> Geometry type: MULTIPOLYGON #> MULTIPOLYGON (((150 -36.2, 150 -36.2, 150 -36.3...
However, we can “cast” the MULTIPOLYGON into the two distinct POLYGON geometries from which it is constructed using st_cast():
st_cast(edenmonaro, "POLYGON") #> Geometry set for 2 features #> Geometry type: POLYGON #> POLYGON ((150 -36.2, 150 -36.2, 150 -36.3, 150 ... #> POLYGON ((148 -36.7, 148 -36.7, 148 -36.7, 148 ...
To illustrate when this might be useful, consider the Dawson electorate, which consists of 69 islands in addition to a coastal region on the Australian mainland.
Suppose, however, our interest is only in mapping the islands. If so, we can first use the st_cast() function to break the Dawson electorate into the constituent polygons. After doing so, we can use st_area() to calculate the area of each polygon and which.max() to find the polygon with maximum area:
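A sketch of these steps, again assuming the electorate is selected from ozmaps::abs_ced by its NAME value:

dawson <- ozmaps::abs_ced %>%
  filter(NAME == "Dawson") %>%   # assumed NAME value
  pull(geometry)
dawson <- st_cast(dawson, "POLYGON")
which.max(st_area(dawson))
#> [1] 69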
The large mainland region corresponds to the 69th polygon within Dawson. Armed with this knowledge, we can draw a map showing only the islands:
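A sketch of that map, reusing the dawson object from the previous step and dropping the mainland polygon:

ggplot(dawson[-69]) +
  geom_sf() +
  coord_sf()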
A second way to supply geospatial information for mapping is to rely on raster data. Unlike the simple features format, in which geographical entities are specified in terms of a set of lines, points and polygons, rasters take the form of images. In the simplest case raster data might be nothing more than a bitmap file, but there are many different image formats out there. In the geospatial context specifically, there are image formats that include metadata (e.g., geodetic datum, coordinate reference system) that can be used to map the image information to the surface of the Earth. For example, one common format is GeoTIFF, which is a regular TIFF file with additional metadata supplied. Happily, most formats can be easily read into R with the assistance of GDAL (the Geospatial Data Abstraction Library, https://gdal.org/). For example the sf package contains a function sf::gdal_read() that provides access to the GDAL raster drivers from R. However, you rarely need to call this function directly, as there are other high level functions that take care of this for you.
As an illustration, suppose we wish to plot satellite images made publicly available by the Australian Bureau of Meteorology (BOM) on their FTP server. The bomrang package provides a convenient interface to the server, including a get_available_imagery() function that returns a vector of filenames and a get_satellite_imagery() function that downloads a file and imports it directly into R. For expository purposes, however, I’ll use a more flexible method that could be adapted to any FTP server, downloading each file with download.file():
# list of all file names with time stamp 2020-01-07 21:00 GMT # (BOM images are retained for 24 hours, so this will return an # empty vector if you run this code without editing the time stamp) files <- bomrang::get_available_imagery() %>% stringr::str_subset("202001072100") # use curl_download() to obtain a single file, and purrr to # vectorise this operation purrr::walk2( .x = paste0("ftp://ftp.bom.gov.au/anon/gen/gms/", files), .y = file.path("raster", files), .f = ~ download.file(url = .x, destfile = .y) )
Note that if you want to run this code yourself you will need to change the time stamp string from "202001072100" to one day prior to the current date, and you will need to make sure there is a folder called “raster” in your working directory into which files will be downloaded. After caching the files locally (which is generally a good idea) we can inspect the list of files we have downloaded:
dir("raster") #> "IDE00421.202001072100.tif" "IDE00422.202001072100.tif"
All 14 files are constructed from images taken by the Himawari-8 geostationary satellite operated by the Japan Meteorological Agency, which takes images across 13 distinct bands. The images released by the Australian BOM include data on the visible spectrum (channel 3) and the infrared spectrum (channel 13):
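The file paths are not defined in this excerpt; based on the object printed further below, the visible-spectrum image corresponds to the IDE00422 file, so a plausible sketch is:

# assumed mapping of the two downloaded files to the visible and infrared channels
img_vis <- file.path("raster", "IDE00422.202001072100.tif")
img_inf <- file.path("raster", "IDE00421.202001072100.tif")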
To import the data in the img_vis file into R, I’ll use the stars package to import the data as stars objects:
library(stars) sat_vis <- read_stars(img_vis, RasterIO = list(nBufXSize = 600, nBufYSize = 600)) sat_inf <- read_stars(img_inf, RasterIO = list(nBufXSize = 600, nBufYSize = 600))
In the code above, the first argument specifies the path to the raster file, and the RasterIO argument is used to pass a list of low-level parameters to GDAL. In this case, I have used nBufXSize and nBufYSize to ensure that R reads the data at low resolution (as a 600x600 pixel image). To see what information R has imported, we can inspect the sat_vis object:
sat_vis #> stars object with 3 dimensions and 1 attribute #> attribute(s), summary of first 1e+05 cells: #> Min. 1st Qu. Median Mean 3rd Qu. Max. #> IDE00422.202001072100.tif 0 0 0 18.1 0 255 #> dimension(s): #> from to offset delta refsys point values x/y #> x 1 600 -5500000 18333.3 Geostationary_Satellite FALSE NULL [x] #> y 1 600 5500000 -18333.3 Geostationary_Satellite FALSE NULL [y] #> band 1 3 NA NA NA NA NULL
This output tells us something about the structure of a stars object. For the sat_vis object the underlying data is stored as a three dimensional array, with the x and y dimensions specifying the spatial data. The band dimension in this case corresponds to the colour channel (RGB) but is redundant for this image as the data are greyscale. In other data sets there might be bands corresponding to different sensors, and possibly a time dimension as well. Note that the spatial data is also associated with a coordinate reference system (referred to as “refsys” in the output).
To plot the sat_vis data in ggplot2, we can use the geom_stars() function provided by the stars package. A minimal plot might look like this:
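A minimal sketch of that plot:

ggplot() +
  geom_stars(data = sat_vis) +
  coord_equal()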
The geom_stars() function requires the data argument to be a stars object, and maps the raster data to the fill aesthetic. Accordingly, the blue shading in the satellite image above is determined by the ggplot2 scale, not the image itself. That is, although sat_vis contains three bands, the plot above only displays the first one, and the raw data values (which range from 0 to 255) are mapped onto the default blue palette that ggplot2 uses for continuous data. To see what the image file “really” looks like we can separate the bands using facet_wrap():
ggplot() + geom_stars(data = sat_vis, show.legend = FALSE) + facet_wrap(vars(band)) + coord_equal() + scale_fill_gradient(low = "black", high = "white")
One limitation to displaying only the raw image is that it is not easy to work out where the relevant landmasses are, and we may wish to overlay the satellite data with the oz_states vector map to show the outlines of Australian political entities. However, some care is required in doing so because the two data sources are associated with different coordinate reference systems. To project the oz_states data correctly, the data should be transformed using the st_transform() function from the sf package. In the code below, I extract the CRS from the sat_vis raster object, and transform the oz_states data to use the same system.
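A sketch of the transformation described above:

oz_states <- st_transform(oz_states, crs = st_crs(sat_vis))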
Having done so, I can now draw the vector map over the top of the raster image to make the image more interpretable to the reader. It is now clear from inspection that the satellite image was taken during the Australian sunrise:
ggplot() + geom_stars(data = sat_vis, show.legend = FALSE) + geom_sf(data = oz_states, fill = NA, color = "white") + coord_sf() + theme_void() + scale_fill_gradient(low = "black", high = "white")
What if we wanted to plot more conventional data over the top? A simple example of this would be to plot the locations of the Australian capital cities per the oz_capitals data frame that contains latitude and longitude data. However, because these data are not associated with a CRS and are not on the same scale as the raster data in sat_vis, these will also need to be transformed. To do so, we first need to create an sf object from the oz_capitals data using st_as_sf():
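A sketch of this step; the crs value 4326 and the remove = FALSE argument are assumptions consistent with the description that follows:

cities <- oz_capitals %>%
  st_as_sf(coords = c("lon", "lat"), crs = 4326, remove = FALSE)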
This projection is set using the EPSG code 4326, an ellipsoidal projection using the latitude and longitude values as co-ordinates and relying on the WGS84 datum. Having done so, we can now transform the co-ordinates from the latitude-longitude geometry to match the geometry of our sat_vis data:
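A sketch of the re-projection, reusing the cities object from the previous step:

cities <- st_transform(cities, st_crs(sat_vis))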
The transformed data can now be overlaid using geom_sf():
ggplot() + geom_stars(data = sat_vis, show.legend = FALSE) + geom_sf(data = oz_states, fill = NA, color = "white") + geom_sf(data = cities, color = "red") + coord_sf() + theme_void() + scale_fill_gradient(low = "black", high = "white")
This version of the image makes it clearer that the satellite image was taken at approximately sunrise in Darwin: the sun had risen for all the eastern cities but not in Perth. This could be made clearer still in the data visualisation by using the geom_sf_text() function to add labels to each city. For instance, we could add another layer to the plot along the lines of the sketch below, though some care would be required to ensure the text is positioned nicely (see Chapter 8).
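A sketch of that extra layer, assuming the cities object created earlier; the layer would simply be added to the existing plot:

geom_sf_text(data = cities, mapping = aes(label = city))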
The USAboundaries package, https://github.com/ropensci/USAboundaries, contains state, county and zip code data for the US. As well as current boundaries, it also has state and county boundaries going back to the 1600s.
The tigris package, https://github.com/walkerke/tigris, makes it easy to access the US Census TIGRIS shapefiles. It contains state, county, zipcode, and census tract boundaries, as well as many other useful datasets.
The rnaturalearth package bundles up the free, high-quality data from http://naturalearthdata.com/. It contains country borders, and borders for the top-level region within each country (e.g. states in the USA, regions in France, counties in the UK).
If you have your own shape files (.shp) you can load them into R with sf::read_sf().
Multiplication and Division Worksheets
Multiplication and division worksheets help students practise the basic arithmetic operations, build fluency in simplifying and working with small and large numbers, and understand the inverse relationship between multiplication and division. They can cover word problems as well as straightforward product and quotient exercises, and collections of mixed multiplication and division pages (with one operation per question) are useful for timed practice once both kinds of problem have been introduced. Many of these resources are aligned to the Common Core standard 3.OA.7, and providers such as Brighterly offer dividing and multiplying worksheets aimed at helping kids understand fractions, multiplication, and division.
Free worksheet generators can produce unlimited, custom, printable division and multiplication practice; typically you can set a maximum value for the dividend and the divisor. There are also worksheets for decimal multiplication and division, and sixth-grade sets covering multiplying in parts, multiplying in columns, division with remainders, long division, and missing-factor, missing-divisor or missing-dividend problems. Most collections are available as free interactive exercises online or as PDFs to download and print, including versions suited to distance learning.
Characteristics of the Universe
by Ron Kurtus (revised 30 November 2011)
The Universe consists of all the stars and galaxies of which we are aware. It is everything that is out there in the sky and is immense beyond imagination. In fact, it is difficult to grasp the great distances and large numbers involved in the universe. Our place and existence are tiny compared to everything else. The theory is that the Universe began with the Big Bang. There are also theories that other universes exist, parallel to ours.
Questions you may have include:
- How big is the Universe?
- How did the Universe begin?
- What are some other universes?
This lesson will answer those questions.
Size of Universe
The Universe is large in size and consists of billions of galaxies, which in turn consist of billions of stars.
The size of the Universe is immense. It is estimated that the Universe is 156 billion (156,000,000,000) light years across. Since light travels at approximately 299,800 kilometers per second or 186,000 miles per second, you can see that the distance is very large.
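As a hedged back-of-the-envelope conversion (using the rounded speed quoted above), a light year is the distance light covers in one year:

$$1\ \text{light year} \approx 299{,}800\ \tfrac{\text{km}}{\text{s}} \times 3.16\times 10^{7}\ \text{s} \approx 9.5\times 10^{12}\ \text{km},$$

so 156 billion light years corresponds to roughly $1.5\times 10^{24}$ kilometers.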
Number of galaxies and stars
The Universe consists of approximately 100 billion galaxies. Each galaxy consists of anywhere from 10 million to 1 trillion (1,000,000,000,000) stars rotating around a center area.
Galaxy consisting of billions of stars
Each star in a galaxy is a sun, similar to our Sun. Some stars are much larger than our Sun, while others are smaller.
Our Sun, the Earth and planets in our Solar System are part of the Milky Way galaxy. It is called that because, on a clear night, the many stars in our galaxy almost look like milk spread across the sky.
Our view of the rest of the Milky Way galaxy
Since we are in the Milky Way galaxy, our view of it is an edgewise view.
Beginning of the Universe
The major theory of how the Universe started is the "Big Bang" theory. This states that about 13.7 billion years ago (13,700,000,000 years) all matter was compressed to what some estimate was the size of a golf ball. It then exploded in a "big bang" and spread out until it has reached the enormous size of the present-day Universe.
While the matter was spreading out, it started to clump together into larger and larger masses. The turbulence of the explosion resulted in the spinning of the galaxies.
Measurements show that the Universe is still expanding. By extrapolating the directions and velocities backward, astronomers have estimated the beginning time of the explosion. Of course, such estimates carry considerable uncertainty. There is debate on this theory and whether the Universe will continue to expand or will start to contract.
There are theories that there may be other universes that we are not aware of. One theory says that there are parallel universes similar to our universe. Since we don't know how big our Universe is, there may be other universes outside our Universe's boundaries.
Another theory is that a black hole may actually be the starting point of the Big Bang of another universe. These are all abstract thoughts that cannot be proven.
(See Are Atoms Tiny Solar Systems? for another idea.)
The whole concept of what is out "there," and of where the universe itself exists, is quite mysterious.
The universe is extremely large and consists of billions of galaxies, each containing billions of stars. We are in the Milky Way galaxy. A major theory of the beginning of the universe is called the Big Bang Theory.
At the center of today’s symphony orchestra is the string section. The family group of stringed instruments includes the violin, the viola, the cello, and the double bass. This group’s defining features are its strings played with bows.
The word “violin” is actually a diminutive term for “viola,” meaning that the instrument descends from the older viol family. The original Italian term for the latter instrument is “viola da braccio,” or “viol for the arm.” Held against the musician’s shoulder, this is the type of viol from which the modern viola developed.
The following are some interesting facts about the always lyrical, expressive, and resonant violin:
1. It came into being during the Middle Ages.
Some experts believe that the introduction of the violin into Europe began with the stringed instruments of Arab-ruled Spain in the early Middle Ages. The instruments of the cultures of the Iberian Peninsula at the time included the rabab and its descendant, the rebec. The latter had three strings, was shaped like a pear, and was often played with its base resting against a seated player’s thigh.
Musicologists consider Central Asia the most likely ultimate origin for the bowed chordophone instruments that began to proliferate throughout Europe and Western Asia by the early Middle Ages. The Polish fiddle may be one of the direct progenitors of the violin.
In addition to the rebec, other medieval instruments that led to the development of the violin included the lira da braccio and the fiddle. The shape of the lira da braccio, in particular, with its arching body and low-relief ribs, prefigured today’s familiar violin.
The lira da braccio’s shallowness of body likely led to the addition of a sound post, a device particular to the violin and later to the viols. The sound post is a small wooden dowel set vertically inside the body, wedged between the instrument’s front and back to keep the pressure exerted by the strings from causing the belly arch to cave in. Musicologists point out that this sound post contributes to the richness of the violin’s lilting, singing tone, as it harmonizes the workings of the body and strings as a unit.
By the end of the medieval period, a fiddle of a type that would be recognizable today appeared on the scene.
2. The Amati family refined the violin during the Renaissance.
According to paintings of the time, violins with three strings were being played by at least the early 16th century. Lute-maker Andrea Amati of Cremona in Italy produced several violins with three strings at about this time. At about the middle of the 1500s, violins with top E-strings had appeared. It was then that the cello—or “violoncello”—and viola also branched out of the viol family.
Bowed instruments developed further in tandem with the Renaissance, particularly in Italy, with the Amati family being the most famous violin-makers of the 16th and early 17th centuries. The Amatis’ great innovation was the development of the thinner, flatter, violin body that produced a particularly appealing sound in the soprano register.
3. Stradivari established impeccable standards.
While the Amatis played a major role in standardizing the general size and proportions of the stringed instruments we know today, one of their apprentices, Antonio Stradivari, would carry forward and expand on their technical skills. By the late 1600s, Stradivari had created a wholesale alteration in violin proportions through elongating the instrument. His now-standard form for its bridge and general proportions has rendered it capable of producing sounds of extraordinary power and range.
At one time, it was believed that Stradivari’s violins drew their range and depth of tone from the secret formula he used for their varnish. No one, then or now, has ever figured out that formula.
Today’s music historians note that the distinct sound of Stradivari’s violins most likely derived from the quality of the vibration facilitated by thicker wooden top and rear plates, as well as from the configuration of miniscule pores in the wood. However, many experts additionally point out that the master’s varnish did indeed contribute to the overall quality of the sound.
4. Virtuosity became the goal for violinists in the 19th century.
Into the 1800s, violin-makers continued to try new ways to construct the instrument and refine its proportions, angles, and arches. At this time, the repertoire for solo and accompanied violin began to require high levels of skill and dexterity, and violinists such as Niccolò Paganini became known for executing tremendously complex passages. Paganini, who cultivated the image of the composer-musician as a wild Romantic, amassed an enormous and devoted fan following in his day.
Such virtuosity was further enhanced when Louis Spohr invented the chin rest sometime around 1820, thus enabling a player to more comfortably hold and manipulate the instrument. The addition of a shoulder rest additionally contributed to this ease of handling.
5. There are many modern-day virtuosos.
A number of 20th- and 21st-century players have rivaled Paganini in skill and popularity. Among these are the child prodigy and older grandmaster Yehudi Menuhin, who died in 1999 at age 82. Menuhin’s technical proficiency dazzled audiences, and he became known for his championing of contemporary composers such as Béla Bartók.
Itzhak Perlman, born in 1945, remains one of the world’s finest living violinists, known for his focus on detail. While still in his teens, Perlman made his debut at Carnegie Hall. A Grammy Award winner for lifetime achievement, he has since played with jazz and klezmer groups, and performed music for motion pictures. In addition to his work as a conductor, he has also served as a teacher of gifted young musicians.
6. Today, the violin encompasses a mosaic of musical cultures.
Like Perlman, today’s violinists perform not only classical music, but also an entire world of country, bluegrass, folk, rock, and world music. Throughout North Africa, Greece, the Arab world, and the southern part of India, the violin and viola continue to be very popular. The Roma have a long tradition of using the violin in communal music-making, as do the Jews through the tradition of klezmer. The violin remains widely used in American and European folk compositions as well.
Musicologists define perfect pitch, also known as absolute pitch, as the ability to independently identify the pitch of any musical note, or to reproduce any specified note. Some studies have indicated that perfect pitch is relatively rare; only about one person in 10,000 possesses it.
Here are a few facts and theories about perfect pitch, and how human beings—particularly children—might be taught to develop it.
1. What is the science behind musical pitch?
Every sound consists of sound waves. These vibrations reach the ear, and then the brain, via nerve impulses. The unit of measurement for sound waves is the hertz, with a single wave per second designated as one hertz, 100 wave vibrations per second as 100 hertz, and so on. The human ear can perceive sound waves vibrating along a scale of approximately 20 to 20,000 hertz.
When musicians talk about the pitch of a sound, they are referring to the sensation of its frequency. Lower frequencies equal lower pitch, and as the frequency gets higher, so does the pitch. A highly trained musician with excellent pitch can distinguish very subtle differences between sounds that vary by as little as 2 hertz.
2. What’s the difference between perfect pitch and relative pitch?
A listener with a good ear for intervals knows, for example, that the first musical interval in the children’s song “Twinkle, Twinkle, Little Star” is a perfect fifth, and that the iconic vocal “way up high” jump in Judy Garland’s rendition of “Somewhere Over the Rainbow” is the interval of a major sixth.
A musician with perfect pitch goes further: he or she can instantly name the actual notes being played or sung, and can reproduce a specified note without looking at the instrument being played or consulting any other external source.
With relative pitch, a musician can identify the intervals between notes, but not necessarily the notes themselves. Most experts believe that perfect pitch cannot be taught; however, most musicians can develop some degree of relative pitch through application and study.
Experts point out that perfect pitch and relative pitch are complementary, and that it is possible to possess both. One way of describing the difference is to say that perfect pitch is analogous to creative, artistic, “right-brained” ways of understanding the world. Relative pitch is in line with more “intellectualized,” “left-brained” means of perception. After developing relative pitch, musicians are better able to name and describe the elements of music verbally, whereas those with a sense of perfect pitch have an instant, innate understanding that transcends words.
3. Which famous musicians have had perfect pitch?
5. Pitch can be associated with meaning.
Other techniques exist for assisting young children in the development of relative pitch.
Children can listen to a story about, for example, animals of different sizes and temperaments, and can learn to associate a specific pitch with each one. For example, one instructor would ask children to imagine a big, powerful elephant lumbering alone. As the image unfolded in the children’s minds, the instructor would play a combination of low notes on the piano. Then a monkey would appear in the story, accompanied by notes in the piano’s middle range. A series of lilting high notes would go along with a section of the story about light, high-flying birds.
6. New research suggests perfect pitch can be learned.
It was long believed that perfect pitch was inborn and could not be taught or learned, but some contemporary researchers believe otherwise.
Diana Deutsch, a University of California, San Diego, psychology professor and researcher into cognition and musical ability, believes that the secret lies in helping young children make connections between pitch and meaning. Dr. Deutsch, known for her discovery of a range of musical illusions and paradoxes, has focused in particular on the phenomenon of perfect pitch.
Dr. Deutsch has written that all people are born with an inherent form of perfect pitch, but that most never learn to recognize or use it. People may recognize a note but be unable to name it. But she also believes that timing is everything. If a child has not had in-depth musical training before beginning elementary school, he or she is less likely to discover that hidden sense of perfect pitch.
7. The identification of tritones can help develop perfect pitch.
Dr. Deutsch grounds her theory about developing perfect pitch partly on her work with musical illusions and conundrums, including her discovery of the “Tritone Paradox.” A tritone indicates the interval where an octave—a series of eight notes—divides evenly into two halves. An example: C and F-sharp form a tritone pair.
Every musical note has a companion, as in the C-F-sharp pairing, located precisely one-half octave away. The paradox lies in the fact that individuals may hear the same tritones as either ascending or descending when they are played in sequence. People are often astonished to find that others hear the opposite.
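As a hedged aside, assuming standard twelve-tone equal temperament (a tuning the text does not itself specify): an octave doubles a note's frequency, so the tritone, sitting exactly halfway up, multiplies it by the square root of two,

$$f_{\text{F}\sharp} = f_{\text{C}}\times 2^{6/12} = f_{\text{C}}\sqrt{2} \approx 1.414\, f_{\text{C}}.$$

Starting from middle C at roughly 262 Hz, the F-sharp above it therefore lands near 370 Hz.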
Dr. Deutsch’s research showed that everyone has some ability to remember these fixed tritone pairs, which she defines as one innate form of perfect pitch. She further discovered that working on this type of fixed pitch just might enable an individual to go on to acquire perfect pitch, if such instruction starts early enough.
8. Speaking a tonal language may help with the acquisition of perfect pitch.
Native speakers of tonal languages, such as Vietnamese and Chinese, seem to have a particular advantage when it comes to developing perfect pitch. Dr. Deutsch theorizes that this is because their brains were wired around distinguishing fine gradations in spoken tones, and because perception of tritone patterns in these cultures tends to be the same for all speakers. By contrast, individual speakers of American English tend to have their own individual perceptions of whether any given tritone is ascending or descending.
9. Creating a DIY tonal language may help young children develop perfect pitch.
Dr. Deutsch suggests that parents who want to give their young children perfect pitch try to recreate a tonal language at home. An easy way to do this is to label each note on a keyboard with a different sticker showing an animal. For example, every C note can be labeled with a dog, every F-sharp with a cat, and so on. Children can then more easily mentally associate each tone with a meaning. As they learn the abstract notes of the scale, they will substitute them for the animal pictures.
Percussion instruments are perennial favorites of both children and their teachers within any music education program. The variety of percussion instruments available for purchase by educators and parents is rivaled only by the wide range of such instruments that school groups and families can make from relatively simple materials.
Here is a summary of the fascinating history and educational uses of drums and other percussion instruments.
Why keeping the beat is important
Babies and young children love to shake, rattle, and roll a variety of musical instruments and common household items. The “aha!” instant when a young child discovers the ability to manipulate objects to make sounds can be a joyful and momentous one.
So rhythm instruments solve one perennial classroom problem: Ensuring an orderly environment conducive to learning while at the same time respecting young children’s innate need to make noise and enjoy movement.
While learning the words to a new song can be challenging and involve a great deal of memorization, making music with rhythm and percussion instruments is so simple that it can be enjoyed by children with a wide range of abilities.
For shy children, having a musical instrument in hand can increase their self-confidence as they join musical activities that demand only that they make noise.
One study after another has shown that learning to make music supports the full range of intellectual, artistic, social, and emotional development in young children.
Early education programs that make good use of rhythm and percussion instruments can be particularly helpful in strengthening spatial and kinesthetic awareness, as well as in developing young participants’ coordination.
Some simple examples of percussion in the classroom
You can instruct children to shake their rhythm instruments alternately high, low, to the left, and to the right, in front of themselves and behind their backs. Children can make the big motions that reinforce gross muscle development while shaking their instruments, or small movements that build fine motor skills.
Real-time verbal commentary (“Shake it to the left! Shake it to the right! Over! Under!”) adds another layer of language learning to the mix, while rhythm and music can help to anchor memories of new words in children’s consciousness.
Young musicians can easily learn to adjust their movements, ranging from vigorous shaking to delicate jingling, as they learn more about the concepts of “loud” and “soft.”
The history of drums reverberates to the present day
Civilizations throughout recorded history have made use of drums. Military maneuvers and marches have been accompanied by drum beats. Ancient tribes frequently used drums to broadcast signals and send messages back and forth.
Many students of music history believe that the snare drum arose in medieval Europe, at a time when a wide range of drum types were used, although the ultimate origin of drums was likely in Asia.
The Middle Ages also saw the extensive use of the timbrel, an early type of tambourine with jingling attachments, and of the frame drum or tympanum, whose body was a wooden frame with an open underside. Itinerant performers would often pair a timbrel with a pipe held in one hand.
Medieval Europe also saw a proliferation of various types of drums, with no standard way of referring to them. “Tabor” or “tambour” was another term used to describe a drum, with various linguistic variations. The word “trommel” was a 12th century coinage from Germanic languages that linguists believe to be the source of the current English word, “drum.”
In medieval Europe, the bass drum followed the snare drum into wide use, even as drum sticks evolved to the point where they were carved from a range of wood types. Beefwood was a popular drum stick material in the 18th century, while military bands of the following century favored ebony.
The era of European colonization led to the adoption of bongos from African and Afro-Cuban populations into Western cultures in the 1800s, as well.
The early 20th century witnessed the sale of entire sets of drums as a unit, with innovations adding cymbals and other percussion to the standard set. In the 1950s, Joe Calato introduced the nylon-tipped drum stick, and electric drums appeared for the first time in the 1970s.
Homemade rhythm instruments can be all you need
Homemade rhythm and percussion instruments can provide hours of fun. And they can be as simple as a small, sealed container filled with rice, beads, or other items that produce sound when shaken.
Other, more elaborate shakers can be made by using strong packing tape to attach two clear plastic cups together after filling them with percussive material. To add flair, you can attach shower curtain rings to either end of this type of shaker with more packing tape. Then you can attach ribbon to the rings to serve as colorful streamers.
A discarded coffee can might become a drum, or unused window casings or wooden rectangles can be cut to various sizes and assembled as a xylophone. A garbage can is just waiting to become a steel drum, while a series of jars filled with water of varying depths can create cascading, delicate melodies when struck with a light mallet.
For those who would rather purchase their instruments, a wealth of online shopping sites offer inexpensive, child-friendly egg-shaped shakers, rhythm sticks, whistles, small drums, tambourines, and more.
Across the world, dedicated musicians have helped nurture the talents of new generations of young performers in the classical tradition through a variety of youth symphony orchestra experiences. These organizations, regardless of their location, share a set of common goals: to train young men and women in the rigors of musical interpretation while helping them develop vital life skills such as cooperation, self-discipline, goal-setting, and professionalism.
Here are summaries of the histories and work of only a few of the world’s many youth orchestras active today:
1. The Children’s Orchestra Society
In 1962, Dr. Hiao-Tsiun Ma established the Children’s Orchestra Society (COS) as a means of teaching children to appreciate and perform music, and to understand the values of collaboration and teamwork. Since then, the New York-based nonprofit organization has transformed the lives of numerous young people by helping them gain skills in musicianship and performance that have had lasting positive effects on their lives. Thanks to the training COS offers, young musicians can perform at high levels as members of groups dedicated to classical and chamber music, and play alongside established adult performers.
The COS continues to operate under the principles of its founder. Dr. Ma, a musicologist and teacher in his native China and in the West, was the father of world-renowned cellist Yo-Yo Ma and of Dr. Yeou-Cheng Ma, who currently heads the COS. The society’s child-centered philosophy aims to provide a supportive environment optimized for the unfolding of each student’s own innate musical gifts, with parts written specifically to enhance individual competencies.
2. The Los Angeles Youth Orchestra
The Los Angeles Youth Orchestra was founded in 1999. Originally funded with grant monies from the local Jewish Community Federation, the organization—then known as the Los Angeles Jewish Youth Orchestra—focused on Jewish-themed liturgical and other music. Its mission soon widened to include performance of the full range of music from the world’s musical heritage, both classical and contemporary.
Under the leadership of composer and music director Russell Steinberg, who arranged several symphonies by Franz Joseph Haydn and created original compositions specifically for the group, its musicians’ talents blossomed.
As its repertoire grew and diversified, so did the orchestra’s membership. By 2003, its performers included some five dozen students from a variety of backgrounds and representing about 50 Los Angeles-area high schools. In acknowledgement of this broader focus, that year Steinberg renamed the group the Los Angeles Youth Orchestra. In 2008, the group earned official nonprofit status.
Since its debut, the LAYO has hosted West Coast and world premieres of a number of original compositions. Its schedule includes regular public performances, and it has planned a 2019 Argentina Tour, in which its members will perform four concerts in Buenos Aires, including an outreach concert in one of the city’s most poverty-stricken communities.
3. Chicago Youth Symphony Orchestras
The Chicago Youth Symphony Orchestras has served the community as a nonprofit group providing music and performance education since 1946. Today, CYSO works with hundreds of young people from the primary grades through high school. These youth take part in a variety of ensembles including four full-scale orchestras, several string orchestras, jazz and steel orchestras, and chamber music groups.
Prominent Chicagoland professional musicians serve as teachers and mentors to the youth as they train to present major performances. Former CYSO participants have gone on to distinguished careers in music and other fields. Many today perform in well-known orchestras and other ensembles around the world, while others have used the skills they learned with CYSO to become lawyers, physicians, and community leaders.
Thanks to CYSO’s Community Partnership Programs, more than 8,000 young people have had the opportunity to gain musical training through neighborhood-based groups and through other venues over the course of the 2017-2018 season. These programs focus particularly on serving youth in under-resourced parts of the community, with the goal of making a strong music education a core part of the life of every Chicagoan.
4. The New York Youth Symphony
The New York Youth Symphony was founded in 1963 to highlight the talents of young people ages 12 to 22. Today, after winning numerous awards and earning praise as one of the most prestigious of the world’s youth orchestras, the symphony continues its program of preparing young people for careers in music, and for becoming lifelong students of—and advocates for—the art.
Over the half-century and more of its existence, the New York Youth Symphony has benefited from the guidance of world-renowned music directors. It has also served as the training ground for some of today’s most in-demand composers and performers, and has for almost 35 years actively commissioned new compositions from young musicians themselves.
5. The Recycled Orchestra of Cateura
In Paraguay, young people with few material resources have established themselves as a remarkable orchestra playing exquisite music on instruments made from garbage. The 2016 documentary film Landfill Harmonic takes viewers inside the creation of this extraordinary youth orchestra, founded by renowned maestro Luis Szaran and led by music director Favio Chavez for the benefit of the children living in the slum of Cateura, Paraguay.
The orchestra has thrived thanks to the dedication of Szaran, Chavez, and a local recycler whose family has sustained itself by collecting and recycling trash. Now, the area’s youth have become skilled musicians playing violins, double bass, wind instruments, and more, all made from scrap metal, old barrels, discarded spoons and buttons, and other trash. And Szaran’s organization, Sonidos de la Tierra, or “Sounds of the Earth,” continues as an instrument workshop and worldwide musical touring ensemble supporting the orchestra.
In 1936, shortly after returning to the Soviet Union after living in Europe for 18 years, composer Sergei Prokofiev created one of the world’s most memorable and enduring musical pieces: Peter and the Wolf.
Ever since, Peter and the Wolf has entertained children while educating them about the sounds of key orchestral instruments.
Here are a few notes on Peter, the Wolf, their creator, and how this charming suite continues to be adapted to the needs of today’s music students:
An instrument defines each character
In Prokofiev’s story, every character has a signature instrument and tune that define individual personality. Music teachers can help children learn to identify the four families of instruments the composer used in the piece: strings, woodwinds, brass, and percussion.
Peter is portrayed by a joyous leitmotiv from a string quartet. Peter’s animal friends are the bird, portrayed by a lilting trill on the flute; the duck, depicted through the waddling gait of the oboe; and the cat, who slinks through the story accompanied by the lower registers of the clarinet. The chugging of the bassoon portrays Peter’s stern and scolding grandfather, and rolling kettledrums, standing in for gunshots, bring a group of hunters to life. A series of sinister blasts on three French horns conveys the menace of the wolf.
A rollicking, melody-filled adventure story
In Prokofiev’s original plot, Peter is a Communist Young Pioneer who lives in the forest at the home of his grandfather. When Peter is walking through the forest, he encounters his friend the bird flying through the trees, the duck swimming, and the cat stalking the birds. Peter’s grandfather comes out of the house to warn his grandson about the dangerous wolf that lurks in the forest, but Peter has no fear.
The wolf eventually comes slinking past Peter’s cottage and devours the duck. So Peter avenges his friend and captures the wolf. He struggles with his captive but ends up tying him to a tree. The hunters appear, wanting to kill the wolf, but Peter persuades them to take the wild creature to the zoo, borne along in a celebratory parade.
Peter and the Wolf earned quick success and is still beloved today by children, teachers, and parents. Prokofiev called on his memories of his own childhood for scenes and characters.
A composer’s life in light and shadows
Born in 1891 in what is now Ukraine, Sergei Prokofiev learned to play the piano as a child. When he grew older, his mother moved with him to St. Petersburg so that he might continue his studies with instruction at a higher level. He began his formal studies at the St. Petersburg Conservatory and became a skilled pianist, composer, and conductor.
As a young man, Prokofiev became a dedicated traveler, intent on soaking up a variety of musical styles on visits throughout Europe and even to the United States. After the devastations of the Russian Revolution and the First World War, he settled in Paris, but he missed his homeland so much that he returned to the Soviet Union in 1936. He composed Peter and the Wolf for the Moscow Central Children’s Theatre that same year.
As his career blossomed, Prokofiev studied artistic influences including Igor Stravinsky, ballet impresario Sergei Diaghilev, and modernist artists such as Picasso. His oeuvre includes compositions for opera, ballet, and film. His symphonies and his concertos for piano, cello, and violin are notable among his works, as are his ballet Romeo and Juliet and his music for Sergei Eisenstein’s revered film Alexander Nevsky.
As the Cold War began, Soviet authorities targeted the composer for exclusion from cultural life due to his supposed anti-traditionalist point of view. As tensions between the United States and the Soviet Union deepened, Western audiences also cooled toward him. When he died in 1953—on the same day as dictator Joseph Stalin—few newspaper readers noticed.
Disney works its magic on the story
There have been numerous recordings of Peter and the Wolf since its debut. The most famous film version is undoubtedly the Walt Disney company’s animated short subject in full color. This film was presented as part of the 1946 feature-length compilation Make Mine Music, which included a variety of other cartoon shorts focused on making music education fun.
In the Disney version, the animals have names and distinct personalities: The bird is named Sasha, the duck Sonia, and the cat Ivan, and each character livens things up through comedic routines.
A beloved favorite in schools and theaters
Dozens of lesson plans about Peter and the Wolf have been created for students of all ages. Typical of these is one created for the Chicago Symphony Orchestra. In this program, students hear the story, then listen to musical excerpts to become familiar with individual characters and their accompanying instruments. The goal is to ensure that students will understand the storyline; be able to pick out each character’s musical motif and signature instrument; anticipate how each theme will sound in the composition; and identify individual instruments, as well as instrument families, by sound and tone color.
Local companies continue to stage imaginative productions of Peter and the Wolf as part of campaigns for music education. For example, Seattle Children’s Theatre put on a local playwright’s adaptation of the story in which an Emmy Award-winning musician recast Prokofiev’s classic musical motifs with contemporary music styles such as the Charleston, the tango, and the two-step shuffle. The creative team enhanced the production with puppetry, movement, and an expanded series of humorous incidents.
Unfortunately, there is no evidence to support the widely-held idea that exposing infants and children to classical music can lead to an increase in their intelligence. However, research does indicate that listening to classical music can have a positive effect on many other areas of children's development.
Recent studies have suggested that young children who are exposed to classical music find it easier to concentrate, develop a stronger sense of self-discipline, are better listeners, and ultimately have a wider range of interest in music as they grow into young adults.
If you’re interested in introducing your child to classical music, these five popular and powerful pieces written by some of the greatest composers in history are an excellent place to get started.
1. Eine kleine Nachtmusik, Wolfgang Amadeus Mozart
2. The Flight of the Bumblebee, Nikolai Rimsky-Korsakov
3. Für Elise, Ludwig van Beethoven
4. The Nutcracker Suite, Op. 71a, Pyotr Ilyich Tchaikovsky
5. Clair de Lune, Claude Debussy
When parents encourage their children to take music lessons from a young age, the piano is one of the most popular instrument choices. There is no definitive age at which experts suggest children begin music lessons; young musicians only need to be large enough to reach the keys and have enough hand dexterity to manipulate them.
If you are a parent who is thinking about introducing your young child to music through piano lessons for the first time, there are certain things you will need to do in order to prepare your child and your home for the experience before the first class. Listed below are six things to do before your child attends his or her first piano lesson.
1. Invest in a piano for your home.
The first step that you can take to benefit your future music student is to purchase a piano for him or her to use. Ideally, this should be done months or years ahead of time so that your child can grow up around the instrument and develop a familiarity with it prior to learning to play.
At the very least, make sure to invest in a piano right before he or she begins lessons. While there are ways to obtain free access to a piano outside of the home, nothing will be as accessible or as beneficial to your child’s learning experience as having a piano to practice on in his or her immediate environment.
While a new piano can be a significant investment, there are many websites where you can find gently-used pianos for affordable prices. Once you’ve found a piano that suits your budget, make sure to get it tuned by a professional so that the notes your child plays as he or she learns are in tune.
2. Create the ideal practice space around the piano.
Where you place the piano in your home will affect how your young music student feels about the act of practicing. Professionals in music education suggest situating your piano in an area of the home that is neither too isolated nor too close to distractions like a television or computer.
The area should be warm and welcoming with adequate lighting. It must also include all the equipment that your child will need for practice sessions, including music sheets, pencils, and a comfortable piano bench. The more positive the physical practice area is, the more likely your child will feel enthusiastic about practicing when the time comes.
3. Listen to music together.
Spending quality time listening to music with your child can help him or her to develop a positive relationship with it as they grow up. While they listen, try to introduce them to basic musical concepts like rhythm by having them clap along to the beat of a song with you.
It can also be helpful to look up exciting videos of piano performances on YouTube, such as those made by the Piano Guys, to give your child a visual of what it’s like to play the instrument. Having this kind of familiarity may help children feel more comfortable with the instrument when they begin their first lessons.
4. Help your child learn the ABCs.
If your child understands the alphabet by the time that he or she takes up piano lessons, that ability will help them to identify and understand the names of notes. The musical alphabet spans notes with names from A to G, and a child who can remember the order and recognize letters when written on a music sheet will be in a better position to learn.
It can also be helpful to teach your child how to distinguish between his or her right and left sides as a way to improve his or her ability to interact with a piano’s keyboard. Helping your child become aware that he or she can mirror the action of one hand on one side of the body with the other will facilitate the development of better spatial awareness. Additionally, it will help him or her better understand directions given during lessons.
5. Have a discussion about lessons and expectations.
While your child may be excited about the prospect of learning to play the piano, it’s important that you as the parent communicate your expectations for him or her at the outset. Make sure that your child knows that learning an instrument will be a fun experience, but that it requires practice and dedication. Talk to your child about the importance of daily practice, and make a verbal agreement on how often, when, and for what minimum amount of time your child will dedicate him- or herself to the practice of the piano each day.
6. Have a meet-and-greet with the instructor.
When choosing a music instructor for your child, try to schedule a meeting with prospective teachers before you make a decision. Once you find the right instructor, make sure to discuss the goals that you would like your child to accomplish through lessons and get feedback on the best ways that you can foster your child’s musical development at home.
The guitar has captured the interest of both young aspiring musicians and older learners alike since it first gained popularity in its electric form during the mid-20th century. Arguably one of the most popular instruments in the world, the guitar appeals to some people as a form of relaxation or creative expression, while others choose it because it allows them to entertain both solo and with other musicians.
Still another reason that people choose to play the guitar over other instruments is because the guitar allows musicians the freedom to play and sing at the same time. There are few better instruments to learn to play for a musician who wants to sing along to music, but doing both at the same time can be difficult for beginners. Listed below are seven useful tips that can help new learners develop the ability to play the guitar and sing along.
1. Focus on your guitar-playing first.
Before you attempt to play and sing at the same time, you must first focus on developing the ability to play basic chords. As a new guitarist, being able to recall the fingering for standard chords without much thought, and to change quickly between them, is the first step toward singing along to a song on the guitar.
2. Work with a metronome.
Keeping rhythm while performing a song is crucial to sounding natural—and it also makes singing along to the guitar easier. One way that guitarists can work on this form of timing during a song is to strum an easy pattern along to a metronome for about 10 minutes each day. If you’re committed to this practice, you’ll see a gradual improvement in your ability to play a song on beat over time—sometimes in as little as a few weeks.
3. Start simple.
If you’re just starting out, don’t choose a song that requires you to play advanced chords or sing complicated lyrics. Instead, you should look for songs with simpler chords and a basic rhythm that is well-suited to the beginning learner. Of course, you can develop the ability to sing and play any song with enough dedication and practice, but choosing a song that is overly complicated from the start can lead to frustration, which may take the enjoyment out of the experience.
4. Memorize the music and lyrics separately.
You should know the chords and the chord changes by heart before you sit down to sing along to a song. You can gauge your familiarity with a song by how well you’re able to play the chords while you’re distracted, such as when you’re carrying on a conversation or watching a TV show. Likewise, you should be able to sing the lyrics and the tune of the song from memory. The more that both elements of a song are second nature to you, the easier it will be to combine them.
5. Take it slow.
The excitement of learning to sing and play at the same time can cause some beginners to try and perform the song as quickly as possible at the start, but this actually does more harm than good. Start out slowly, learning to play and sing the correct parts one measure and lyric at a time—performing with speed will naturally come with time. People who rush through chords, rhythms, and lyrics to try and learn extremely quickly risk developing bad habits that can be difficult to break. It may even be a good idea to start out humming the song along with the chords instead of attempting to sing right away. Humming can help you figure out where the chord changes are in a song, since they don’t always line up with the syllables of the lyrics.
6. Change the key if you need to.
Though you can learn how to play a song in its original form, the notes may not suit the range of your voice. In this case, it’s important to remember that you can always change the key of the song to suit your range. This can be done by transposing the chord structure to a higher or lower octave using a transposition chart. Alternatively, you can use a capo, which allows you to play the original chords further up the neck of the guitar while changing the vocal register. Both ways of altering a song’s key have their advantages, so choose the method that you are most comfortable with on a case-by-case basis.
7. Put in a lot of practice.
As with any musical goal, learning how to sing and play the guitar simultaneously requires practice and patience. Don’t expect to be able to accomplish this feat right away, and try not to feel discouraged if you can’t master this new ability as quickly as you had hoped. It’s important to avoid rushing the process. In addition, recognize that even the most talented guitar-playing singers did not develop their abilities immediately. As a beginner, you should consider this goal a long-term project, and remember to take pride in your accomplishments when you master a song.
Though most music fans have a favorite genre of music, there are many benefits to listening to music styles from cultures unlike your own. Listening to music from different countries, even when performed in a language that you don’t understand, can help expand your perception of the world, bridge gaps between cultures, and even introduce you to a new favorite music style that you may not have otherwise discovered.
For those interested in learning about music outside of the western world, check out the following five international music styles that are widely enjoyed on other continents.
Already massively popular in its home country of South Korea, K-pop music has steadily gained a dedicated international fan base in recent years, including in parts of Europe, the Middle East, South America, and the United States. This upbeat music style is a blend of hip-hop, pop, and electronic music and is characterized by family-friendly lyrics with song hooks written to be blatantly catchy. K-pop music is almost always performed by all-female or all-male-fronted bands who release exciting, big budget music videos featuring extensive choreography and colorful, fashion-forward costumes. One of the first K-pop songs to receive widespread radio play in western countries was the song “Gangnam Style” by the artist PSY, who released the hit tune in 2012.
Calypso music is native to the Caribbean islands and most prominently performed in Trinidad. First developed in the early years of the 20th century, Calypso is influenced by both West African rhythm and European folk music. It relies heavily on stringed instruments like the guitar and banjo combined with steady percussion from instruments such as maracas or tamboo-bamboos. The lyrics of Calypso songs originally served as a way of spreading current events throughout the island of Trinidad in the early 1900s, especially news that was political in nature. However, the political climate at the time that Calypso music was first established required musicians to deliver the divisive subject matter through carefully-constructed lyrics that were typically witty and rooted in satire. This lyrical tradition continues in the genre today. Though not technically a Calypso musician, the singer Harry Belafonte helped popularize the genre through the release of “Banana Boat Song (Day-O)” in 1956.
The origins of qawwali date back more than 700 years to India and the south of Pakistan. Usually performed by Sufi Muslim men, the music is a tool through which the musicians, known as qawwals, can inspire congregations. It is a powerful form of music that incorporates poetic lyrics, the harmonium, and percussion instruments like the tabla and dholak to move its listeners to a state of heightened spiritual union with God, or Allah. The typical qawwali ensemble includes one lead singer or a pair of lead singers accompanied by a chorus of individuals who sing the song’s refrains and support the percussion with rhythmic hand-clapping. Though it remains predominantly religious in nature, the style has expanded beyond the devout Sufi demographic, in a manner similar to Gospel music in the United States. The late musician Nusrat Fateh Ali Khan is considered to be the individual responsible for expanding the popularity of qawwali outside of its traditional roots.
A style developed in the northern African country of Algeria, raï combines popular western-style music with that of the nomadic desert-dwelling people known as the Bedouins. While early versions of this musical style incorporated flutes and hand drums, the modern iteration of the genre is heavily influenced by pop and dance music and features a wide range of instruments, from saxophones and trumpets to drum synthesizers and electric guitars. One thing that has remained unchanged about raï music from its inception through modern day is the blunt nature of its lyrics, which are sung in Arabic or French. Song lyrics address the ups and downs of everyday life in a direct and occasionally vulgar fashion, and singers sometimes improvise during performances in the way of American blues musicians. The most famous raï singer of today is a performer named Khaled, who is commonly known as “the King of Raï.”
Known alternatively as baile funk, funk carioca is a beat-heavy music style that developed in Rio de Janeiro, Brazil, in the 1980s. By bringing American funk music, hip-hop, and freestyle rap music together and combining them with older Brazilian songs, DJs in Rio de Janeiro created a new genre that became ideal for dancing and popular among the country’s youth. Lyrics in funk carioca music are known for addressing taboo subjects, including poverty, social injustice, sex, and the violence occurring within Rio de Janeiro’s favelas, or shantytowns. The melody of funk carioca songs is typically sampled from an older tune, and may be instrumental or feature rapping and/or singing, often in Portuguese. One of the more popular funk carioca-inspired artists to find success outside of the original fan base in Rio is the rapper M.I.A., who is not Brazilian but is heavily influenced by the style, as evidenced by many songs on her 2005 album Arular.
The former president of Dollar Financial Group, Don Gayhardt today is the CEO of CURO Financial Technologies Corp, a company that offers accessible financial solutions to underserved populations through brands like Rapid Cash, Opt+, and Cash Money. In addition, Don Gayhardt serves as the chairman of Music Training Center Holdings, LLC, an organization that gives children in Philadelphia, Pennsylvania, the opportunity to take music lessons focused on a wide range of areas, including classes on subjects such as playing in a rock band.
When groups of children or adults form a band with friends or other musicians, the first performance can be an exciting yet intimidating prospect. Below are 10 useful tips to help musicians of all ages prepare for their band’s first public performance.
1. Practice more than you think you need to.
If your band earns a spot to give a performance, take the opportunity seriously. Make sure that in the weeks leading up to the gig, your band dedicates enough time to practice so that every member feels completely prepared when the day arrives. If you don’t take time to prepare, it will show in the quality of your performance, and you may not receive another opportunity to play at the venue. Practice until you feel completely comfortable with the show you’re scheduled to put on—then practice some more.
2. Establish a set of pre-show best practices.
Before you take the stage, your band needs to get focused. For this purpose, it can be useful to have a pre-show ritual to help clear the mind of any nervousness and put you in the right mindset to perform to the best of your ability. Your pre-show routine can consist of any activity that makes you feel relaxed and ready to put on a great performance. Whatever you choose to do before your band takes the stage, make sure to drink plenty of water to stay hydrated. In addition, getting enough sleep before the performance will ensure you’re rested, refreshed, and ready to shine.
3. Look the part.
Every eye in the audience will be trained on you and your band during the performance, so it’s important to go onstage showing that you take your music seriously by dressing for the occasion. The correct attire will differ depending on the genre of music you play, but the important thing is to dress in a way that makes you feel confident and demonstrates that you’re invested in your music and are enthusiastic about the opportunity to share it with the audience. In addition, try to coordinate your outfit with your bandmates. You don’t all have to wear the same thing, but sharing a similar style will make you appear more cohesive and professional.
4. Give yourself enough time for a sound check.
You should arrive at the venue early enough that your group has time to warm up and make sure that all of your equipment is functioning before the show begins. Warming up during a sound check before the show will also give the audio technician at the venue time to set volume levels before the audience arrives, allowing your band to sound balanced when you first take the stage.
5. Have a strong stage presence.
Stage presence is a key part of how the audience perceives your show. If you seem reluctant or low-energy, they are likely to respond less enthusiastically than if you show a strong stage presence. Many musicians even choose to develop an onstage persona in order to feel more confident in front of an audience. Simple actions that can improve your stage presence include standing up straight, moving around the stage instead of staying in place, and interacting with the audience throughout the set.
6. Interact with your bandmates on stage.
Another way that the audience perceives the energy onstage is based on how often and how well you interact with the other members of your band. It may sound strange, but this aspect of your performance is something that should be practiced during rehearsals. Engaging with your bandmates throughout the set shows a connection that the audience will respond to, and will help your performance seem more authentic.
7. Play through your mistakes.
Mistakes are bound to happen, especially during your first gig when nerves are running high. The important thing to remember if someone in your band makes a mistake is to keep playing. Don’t stop in the middle of a song because of a mistake. Push through the stress that you may feel and don’t let it affect the rest of your set. To help your group learn from the mistakes that you make, consider recording the performance so that you can revisit it later and evaluate what needs to be improved. However, if you choose to do this, don’t forget to also notice what the band did well and give yourselves credit.
8. Enjoy yourself.
No matter what the circumstances are surrounding your performance, make sure that you enjoy the experience as you show off your hard work and have a good time on stage with your bandmates. When you have fun doing what you love, it shows. The audience will know you’re enjoying yourselves, and may be more inclined to enjoy listening to your performance in return.
In physics, tension describes the pulling force transmitted axially by means of a string, cable, chain, or similar one-dimensional continuous object, or by each end of a rod, truss member, or similar three-dimensional object; tension can also be described as the action-reaction pair of forces acting at each end of said elements. Tension is the opposite of compression.
At the atomic level, when atoms or molecules are pulled apart from each other and gain potential energy with a restoring force still existing, the restoring force creates what is also called tension. Each end of a string or rod under such tension will pull on the object it is attached to, to restore the string/rod to its relaxed length.
In physics, tension, as a transmitted force, as an action-reaction pair of forces, or as a restoring force, is a force and has the units of force measured in newtons (or sometimes pounds-force). The ends of a string or other object transmitting tension will exert forces on the objects to which the string or rod is connected, in the direction of the string at the point of attachment. These forces due to tension are also called "passive forces". There are two basic possibilities for systems of objects held by strings: either acceleration is zero and the system is therefore in equilibrium, or there is acceleration, and therefore a net force is present in the system.
Tension in one-dimensional continua
Tension in a string is a non-negative scalar quantity. Zero tension is slack. A string or rope is often idealized as one dimension, having length but being massless with zero cross section. If there are no bends in the string, as occur with vibrations or pulleys, then tension is a constant along the string, equal to the magnitude of the forces applied by the ends of the string. By Newton's Third Law, these are the same forces exerted on the ends of the string by the objects to which the ends are attached. If the string curves around one or more pulleys, it will still have constant tension along its length in the idealized situation that the pulleys are massless and frictionless. A vibrating string vibrates with a set of frequencies that depend on the string's tension. These frequencies can be derived from Newton's laws of motion. Each microscopic segment of the string pulls on and is pulled upon by its neighboring segments, with a force equal to the tension T(x) at that position, where x is the position along the string.
If the string has curvature, then the two pulls on a segment by its two neighbors will not add to zero, and there will be a net force on that segment of the string, causing an acceleration. This net force is a restoring force, and the motion of the string can include transverse waves that solve the equation central to Sturm-Liouville theory:
where the coefficient function is the force constant per unit length (with units of force per area) and the eigenvalues correspond to resonances of the transverse displacement on the string, with solutions that include the various harmonics on a stringed instrument.
Tension in three-dimensional continua
Tension is also used to describe the force exerted by the ends of a three-dimensional, continuous material such as a rod or truss member. Such a rod elongates under tension. The amount of elongation and the load that will cause failure both depend on the force per cross-sectional area rather than the force alone, so stress = axial force / cross-sectional area is more useful for engineering purposes than tension. Stress is a 3×3 matrix called a tensor, and the axial element of the stress tensor is the tensile force per area (or the compression force per area, denoted as a negative number for this element, if the rod is being compressed rather than elongated).
System in equilibrium
A system is in equilibrium when the sum of all forces is zero.
For example, consider a system consisting of an object that is being lowered vertically by a string with tension, T, at a constant velocity. The system has a constant velocity and is therefore in equilibrium because the tension in the string, which is pulling up on the object, is equal to the weight force, mg ("m" is mass, "g" is the acceleration caused by the gravity of Earth), which is pulling down on the object.
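As a small illustration of this equilibrium case, the tension can be computed directly from the force balance; the mass and gravitational acceleration below are illustrative values, not taken from the text.

```python
# Object lowered at constant velocity: zero acceleration, so the forces balance.
# Newton's second law gives T - m*g = 0, hence T = m*g.

m = 5.0    # mass of the object in kg (illustrative value)
g = 9.81   # gravitational acceleration in m/s^2

T = m * g  # tension exactly balances the weight
print(f"Tension while lowering at constant velocity: {T:.2f} N")
```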
System under net force
A system has a net force when an unbalanced force is exerted on it, in other words the sum of all forces is not zero. Acceleration and net force always exist together.
For example, consider the same system as above, but suppose the object is now being lowered with an increasing velocity downwards (a downward acceleration); therefore a net force exists somewhere in the system. In this case, the net force is downward, which indicates that the weight mg exceeds the tension T; an upward (negative) acceleration would instead indicate that T exceeds mg.
In another example, suppose that two bodies A and B having masses m1 and m2, respectively, are connected with each other by an inextensible string over a frictionless pulley. There are two forces acting on body A: its weight (w1 = m1 g) pulling down, and the tension T in the string pulling up. Therefore, taking upward as positive for body A, the net force on body A is T − m1 g, so m1 a = T − m1 g. In an extensible string, Hooke's law applies.
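The standard result for this frictionless, massless-pulley arrangement follows from writing Newton's second law for each body and eliminating the acceleration. The sketch below uses illustrative masses; the names m1, m2, T, and a are chosen here only for convenience.

```python
# Atwood machine: masses m1 and m2 joined by an inextensible string over a
# frictionless, massless pulley.
#   body 1 (taking "up" as positive):   m1 * a = T - m1 * g
#   body 2 (taking "down" as positive): m2 * a = m2 * g - T
# Adding the two equations eliminates T and gives a; substituting back gives T.

m1, m2 = 2.0, 3.0   # masses in kg (illustrative values)
g = 9.81            # m/s^2

a = (m2 - m1) * g / (m1 + m2)      # magnitude of the common acceleration
T = 2.0 * m1 * m2 * g / (m1 + m2)  # tension in the string

print(f"acceleration = {a:.3f} m/s^2, tension = {T:.3f} N")
```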
Strings in modern physics
String-like objects in relativistic theories, such as the strings used in some models of interactions between quarks, or those used in the modern string theory, also possess tension. These strings are analyzed in terms of their world sheet, and the energy is then typically proportional to the length of the string. As a result, the tension in such strings is independent of the amount of stretching.
In mathematics and physical science, spherical harmonics are special functions defined on the surface of a sphere. They are often employed in solving partial differential equations in many scientific fields.
Since the spherical harmonics form a complete set of orthogonal functions and thus an orthonormal basis, each function defined on the surface of a sphere can be written as a sum of these spherical harmonics. This is similar to periodic functions defined on a circle that can be expressed as a sum of circular functions (sines and cosines) via Fourier series. Like the sines and cosines in Fourier series, the spherical harmonics may be organized by (spatial) angular frequency, as seen in the rows of functions in the illustration on the right. Further, spherical harmonics are basis functions for irreducible representations of SO(3), the group of rotations in three dimensions, and thus play a central role in the group theoretic discussion of SO(3).
Spherical harmonics originate from solving Laplace's equation in spherical domains. Functions that solve Laplace's equation are called harmonics. Despite their name, spherical harmonics take their simplest form in Cartesian coordinates, where they can be defined as homogeneous polynomials of degree ℓ in (x, y, z) that obey Laplace's equation. The connection with spherical coordinates arises immediately if one uses the homogeneity to extract a factor of radial dependence rℓ from the above-mentioned polynomial of degree ℓ; the remaining factor can be regarded as a function of the spherical angular coordinates θ and φ only, or equivalently of the orientational unit vector specified by these angles. In this setting, they may be viewed as the angular portion of a set of solutions to Laplace's equation in three dimensions, and this viewpoint is often taken as an alternative definition.
A specific set of spherical harmonics, denoted Yℓm(θ, φ), are known as Laplace's spherical harmonics, as they were first introduced by Pierre Simon de Laplace in 1782. These functions form an orthogonal system, and are thus basic to the expansion of a general function on the sphere as alluded to above.
Spherical harmonics are important in many theoretical and practical applications, including the representation of multipole electrostatic and electromagnetic fields, electron configurations, gravitational fields, geoids, the magnetic fields of planetary bodies and stars, and the cosmic microwave background radiation. In 3D computer graphics, spherical harmonics play a role in a wide variety of topics including indirect lighting (ambient occlusion, global illumination, precomputed radiance transfer, etc.) and modelling of 3D shapes.
Spherical harmonics were first investigated in connection with the Newtonian potential of Newton's law of universal gravitation in three dimensions. In 1782, Pierre-Simon de Laplace had, in his Mécanique Céleste, determined that the gravitational potential at a point x associated with a set of point masses mi located at points xi was given by V(x) = Σi mi / |xi − x|.
Each term in the above summation is an individual Newtonian potential for a point mass. Just prior to that time, Adrien-Marie Legendre had investigated the expansion of the Newtonian potential in powers of r = |x| and r1 = |x1|. He discovered that if r ≤ r1 then 1/|x1 − x| = Σℓ (r^ℓ / r1^(ℓ+1)) Pℓ(cos γ),
where γ is the angle between the vectors x and x1. The functions Pℓ(cos γ) are the Legendre polynomials, and they can be derived as a special case of spherical harmonics. Subsequently, in his 1782 memoire, Laplace investigated these coefficients using spherical coordinates to represent the angle γ between x1 and x. (See Applications of Legendre polynomials in physics for a more detailed analysis.)
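The expansion just described is easy to check numerically with scipy's Legendre polynomial evaluator; the vectors and truncation order below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.special import eval_legendre

# Check 1/|x1 - x| = sum_l r^l / r1^(l+1) * P_l(cos gamma) for r <= r1.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
x1 = 4.0 * rng.normal(size=3)            # make |x1| comfortably larger than |x|
if np.linalg.norm(x) > np.linalg.norm(x1):
    x, x1 = x1, x                        # ensure r <= r1

r, r1 = np.linalg.norm(x), np.linalg.norm(x1)
cos_gamma = np.dot(x, x1) / (r * r1)

series = sum(r**l / r1**(l + 1) * eval_legendre(l, cos_gamma) for l in range(80))
exact = 1.0 / np.linalg.norm(x1 - x)
print(series, exact)   # the two numbers agree to many digits
```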
In 1867, William Thomson (Lord Kelvin) and Peter Guthrie Tait introduced the solid spherical harmonics in their Treatise on Natural Philosophy, and also first introduced the name of "spherical harmonics" for these functions. The solid harmonics were homogeneous polynomial solutions of Laplace's equation
By examining Laplace's equation in spherical coordinates, Thomson and Tait recovered Laplace's spherical harmonics. (See the section below, "Harmonic polynomial representation".) The term "Laplace's coefficients" was employed by William Whewell to describe the particular system of solutions introduced along these lines, whereas others reserved this designation for the zonal spherical harmonics that had properly been introduced by Laplace and Legendre.
The 19th century development of Fourier series made possible the solution of a wide variety of physical problems in rectangular domains, such as the solution of the heat equation and wave equation. This could be achieved by expansion of functions in series of trigonometric functions. Whereas the trigonometric functions in a Fourier series represent the fundamental modes of vibration in a string, the spherical harmonics represent the fundamental modes of vibration of a sphere in much the same way. Many aspects of the theory of Fourier series could be generalized by taking expansions in spherical harmonics rather than trigonometric functions. Moreover, analogous to how trigonometric functions can equivalently be written as complex exponentials, spherical harmonics also possessed an equivalent form as complex-valued functions. This was a boon for problems possessing spherical symmetry, such as those of celestial mechanics originally studied by Laplace and Legendre.
The prevalence of spherical harmonics already in physics set the stage for their later importance in the 20th century birth of quantum mechanics. The (complex-valued) spherical harmonics are eigenfunctions of the square of the orbital angular momentum operator
Laplace's spherical harmonics
Laplace's equation imposes that the Laplacian of a scalar field f is zero. (Here the scalar field is understood to be complex, i.e. to correspond to a (smooth) function f : R3 → C.) In spherical coordinates this is (1/r²) ∂/∂r (r² ∂f/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂f/∂θ) + (1/(r² sin²θ)) ∂²f/∂φ² = 0.
Consider the problem of finding solutions of the form f(r, θ, φ) = R(r) Y(θ, φ). By separation of variables, two differential equations result by imposing Laplace's equation:
The second equation can be simplified under the assumption that Y has the form Y(θ, φ) = Θ(θ) Φ(φ). Applying separation of variables again to the second equation gives way to the pair of differential equations
for some number m. A priori, m is a complex constant, but because Φ must be a periodic function whose period evenly divides 2π, m is necessarily an integer and Φ is a linear combination of the complex exponentials e±imφ. The solution function Y(θ, φ) is regular at the poles of the sphere, where θ = 0, π. Imposing this regularity in the solution Θ of the second equation at the boundary points of the domain is a Sturm–Liouville problem that forces the parameter λ to be of the form λ = ℓ (ℓ + 1) for some non-negative integer ℓ with ℓ ≥ |m|; this is also explained below in terms of the orbital angular momentum. Furthermore, a change of variables t = cos θ transforms this equation into the Legendre equation, whose solution is a multiple of the associated Legendre polynomial Pℓm(cos θ). Finally, the equation for R has solutions of the form R(r) = A rℓ + B r−ℓ − 1; requiring the solution to be regular throughout R3 forces B = 0.
Here the solution was assumed to have the special form Y(θ, φ) = Θ(θ) Φ(φ). For a given value of ℓ, there are 2ℓ + 1 independent solutions of this form, one for each integer m with −ℓ ≤ m ≤ ℓ. These angular solutions are a product of trigonometric functions, here represented as a complex exponential, and associated Legendre polynomials:
Here Yℓm is called a spherical harmonic function of degree ℓ and order m, Pℓm is an associated Legendre polynomial, N is a normalization constant, and θ and φ represent colatitude and longitude, respectively. In particular, the colatitude θ, or polar angle, ranges from 0 at the North Pole, to π/2 at the Equator, to π at the South Pole, and the longitude φ, or azimuth, may assume all values with 0 ≤ φ < 2π. For a fixed integer ℓ, every solution Y(θ, φ) of the eigenvalue problem
is a linear combination of Yℓm. In fact, for any such solution, rℓ Y(θ, φ) is the expression in spherical coordinates of a homogeneous polynomial that is harmonic (see below), and so counting dimensions shows that there are 2ℓ + 1 linearly independent such polynomials.
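For concreteness, these functions can be evaluated with standard libraries; the short sketch below uses scipy.special.sph_harm, whose argument order (order m, degree ℓ, azimuth, colatitude) is that library's own convention.

```python
from scipy.special import sph_harm

# Evaluate the orthonormalized spherical harmonic Y_l^m at one point.
# scipy's argument order is sph_harm(m, l, azimuth, colatitude).
l, m = 2, 1
colatitude = 0.7   # polar angle theta in [0, pi]
azimuth = 1.3      # longitude phi in [0, 2*pi)

value = sph_harm(m, l, azimuth, colatitude)
print(f"Y(l={l}, m={m}) = {value:.6f}")

# For a fixed degree l there are 2*l + 1 admissible orders m = -l, ..., l.
print("orders for l =", l, ":", list(range(-l, l + 1)))
```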
In the complementary case, the solid harmonics with negative powers of r (the irregular solid harmonics) are chosen instead. In that case, one needs to expand the solution of the known regions in a Laurent series (about r = ∞), instead of the Taylor series (about r = 0) used above, to match the terms and find the series expansion coefficients.
Orbital angular momentum
In quantum mechanics, Laplace's spherical harmonics are understood in terms of the orbital angular momentum operator L = −iħ (x × ∇). The ħ is conventional in quantum mechanics; it is convenient to work in units in which ħ = 1. The spherical harmonics are eigenfunctions of the square of the orbital angular momentum
Laplace's spherical harmonics are the joint eigenfunctions of the square of the orbital angular momentum and the generator of rotations about the azimuthal axis:
These operators commute, and are densely defined self-adjoint operators on the weighted Hilbert space of functions f square-integrable with respect to the normal distribution as the weight function on R3:
Furthermore, L2 is a positive operator.
If Y is a joint eigenfunction of L2 and Lz, then by definition
for some real numbers m and λ. Here m must in fact be an integer, for Y must be periodic in the coordinate φ with period a number that evenly divides 2π. Furthermore, since
and each of Lx, Ly, Lz are self-adjoint, it follows that λ ≥ m2.
Denote this joint eigenspace by Eλ,m, and define the raising and lowering operators by
Then L+ and L− commute with L2, and the Lie algebra generated by L+, L−, Lz is the special linear Lie algebra of order 2, sl(2, C), with commutation relations
Thus L+ : Eλ,m → Eλ,m+1 (it is a "raising operator") and L− : Eλ,m → Eλ,m−1 (it is a "lowering operator"). In particular, L+^k : Eλ,m → Eλ,m+k must be zero for k sufficiently large, because the inequality λ ≥ m2 must hold in each of the nontrivial joint eigenspaces. Let Y ∈ Eλ,m be a nonzero joint eigenfunction, and let k be the least integer such that L+^k Y = 0. Then, since
it follows that
Thus λ = ℓ(ℓ+1) for the positive integer ℓ = m+k.
The foregoing has been all worked out in the spherical coordinate representation, but may be expressed more abstractly in the complete, orthonormal spherical ket basis.
Harmonic polynomial representation
The spherical harmonics can be expressed as the restriction to the unit sphere of certain polynomial functions on R3. Specifically, we say that a (complex-valued) polynomial function p is homogeneous of degree ℓ if
p(λx) = λ^ℓ p(x) for all real numbers λ and all x in R3. We say that p is harmonic if
Δp = 0, where Δ is the Laplacian. Then for each ℓ, we define Aℓ to be the space of harmonic polynomials that are homogeneous of degree ℓ.
For example, when ℓ = 1, A1 is just the 3-dimensional space of all linear functions of x, y, z, since any such function is automatically harmonic. Meanwhile, when ℓ = 2, we have a 5-dimensional space:
For any ℓ, the space of spherical harmonics of degree ℓ is just the space of restrictions to the sphere of the elements of Aℓ. As suggested in the introduction, this perspective is presumably the origin of the term "spherical harmonic" (i.e., the restriction to the sphere of a harmonic function).
For example, for any the formula
defines a homogeneous polynomial of degree with domain and codomain , which happens to be independent of . This polynomial is easily seen to be harmonic. If we write in spherical coordinates and then restrict to , we obtain
which can be rewritten as
After using the formula for the associated Legendre polynomial , we may recognize this as the formula for the spherical harmonic (See the section below on special cases of the spherical harmonics.)
Orthogonality and normalization
Several different normalizations are in common use for the Laplace spherical harmonic functions. Throughout the section, we use the standard convention that for m > 0 (see associated Legendre polynomials)
which is the natural normalization given by Rodrigues' formula.
where Pℓm are the associated Legendre polynomials without the Condon–Shortley phase (to avoid counting the phase twice).
In both definitions, the spherical harmonics are orthonormal
where δij is the Kronecker delta and dΩ = sinθ dφ dθ. This normalization is used in quantum mechanics because it ensures that probability is normalized, i.e.
which possess unit power
which have the normalization
In quantum mechanics this normalization is sometimes used as well, and is named Racah's normalization after Giulio Racah.
It can be shown that all of the above normalized spherical harmonic functions satisfy
where the superscript * denotes complex conjugation. Alternatively, this equation follows from the relation of the spherical harmonic functions with the Wigner D-matrix.
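As a sanity check, the orthonormality relation stated above can be verified by numerical quadrature over the sphere; the degrees and grid resolution in this sketch are arbitrary.

```python
import numpy as np
from scipy.special import sph_harm

# Numerically verify the orthonormality of the Y_lm over the sphere:
#   integral Y_lm * conj(Y_l'm') dOmega = delta_ll' * delta_mm'.
n_theta, n_phi = 400, 800
theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta        # colatitude, midpoint grid
phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi        # longitude, midpoint grid
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dOmega = np.sin(TH) * (np.pi / n_theta) * (2.0 * np.pi / n_phi)

def inner(l1, m1, l2, m2):
    Y1 = sph_harm(m1, l1, PH, TH)    # scipy order: (m, l, azimuth, colatitude)
    Y2 = sph_harm(m2, l2, PH, TH)
    return np.sum(Y1 * np.conj(Y2) * dOmega)

print(inner(2, 1, 2, 1))    # close to 1
print(inner(2, 1, 3, 1))    # close to 0
print(inner(2, 1, 2, -1))   # close to 0
```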
One source of confusion with the definition of the spherical harmonic functions concerns a phase factor of (−1)m, commonly referred to as the Condon–Shortley phase in the quantum mechanical literature. In the quantum mechanics community, it is common practice to either include this phase factor in the definition of the associated Legendre polynomials, or to append it to the definition of the spherical harmonic functions. There is no requirement to use the Condon–Shortley phase in the definition of the spherical harmonic functions, but including it can simplify some quantum mechanical operations, especially the application of raising and lowering operators. The geodesy and magnetics communities never include the Condon–Shortley phase factor in their definitions of the spherical harmonic functions nor in the ones of the associated Legendre polynomials.
A real basis of spherical harmonics can be defined in terms of their complex analogues by setting
The Condon–Shortley phase convention is used here for consistency. The corresponding inverse equations defining the complex spherical harmonics in terms of the real spherical harmonics are
The real spherical harmonics are sometimes known as tesseral spherical harmonics. These functions have the same orthonormality properties as the complex ones above. The real spherical harmonics with m > 0 are said to be of cosine type, and those with m < 0 of sine type. The reason for this can be seen by writing the functions in terms of the Legendre polynomials as
The same sine and cosine factors can be also seen in the following subsection that deals with the Cartesian representation.
See the table of real spherical harmonics for the first few degrees, which can be seen to be consistent with the output of the equations above.
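A common way to build the real (tesseral) harmonics in code is directly from the complex ones, as in the sketch below; the sign conventions here assume the Condon–Shortley phase is already included in the complex harmonics (as scipy's sph_harm does), and other references may differ.

```python
import numpy as np
from scipy.special import sph_harm

def real_sph_harm(m, l, azimuth, colatitude):
    """Real (tesseral) spherical harmonic built from scipy's complex Y_l^m.

    Cosine type for m > 0, sine type for m < 0, unchanged for m = 0.
    """
    if m > 0:
        return np.sqrt(2.0) * (-1) ** m * np.real(sph_harm(m, l, azimuth, colatitude))
    if m < 0:
        return np.sqrt(2.0) * (-1) ** m * np.imag(sph_harm(-m, l, azimuth, colatitude))
    return np.real(sph_harm(0, l, azimuth, colatitude))

# Spot check: the degree-1, m = 1 real harmonic equals sqrt(3/(4*pi)) * x on the sphere.
theta, phi = 0.9, 0.4
x = np.sin(theta) * np.cos(phi)
print(real_sph_harm(1, 1, phi, theta), np.sqrt(3.0 / (4.0 * np.pi)) * x)
```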
Use in quantum chemistry
As is known from the analytic solutions for the hydrogen atom, the eigenfunctions of the angular part of the wave function are spherical harmonics. However, the solutions of the non-relativistic Schrödinger equation without magnetic terms can be made real. This is why the real forms are extensively used in basis functions for quantum chemistry, as the programs don't then need to use complex algebra. Here, it is important to note that the real functions span the same space as the complex ones would.
Spherical harmonics in Cartesian form
The Herglotz generating function
If the quantum mechanical convention is adopted for the , then
Here, is the vector with components , , and
is a vector with complex coefficients. It suffices to take and as real parameters. The essential property of is that it is null:
Essentially all the properties of the spherical harmonics can be derived from this generating function. An immediate benefit of this definition is that if the vector is replaced by the quantum mechanical spin vector operator , such that is the operator analogue of the solid harmonic , one obtains a generating function for a standardized set of spherical tensor operators, :
The parallelism of the two definitions ensures that the 's transform under rotations (see below) in the same way as the 's, which in turn guarantees that they are spherical tensor operators, , with and , obeying all the properties of such operators, such as the Clebsch-Gordan composition theorem, and the Wigner-Eckart theorem. They are, moreover, a standardized set with a fixed scale or normalization.
Separated Cartesian form
The Herglotzian definition yields polynomials which may, if one wishes, be further factorized into a polynomial of and another of and , as follows (Condon–Shortley phase):
and for m = 0:
For this reduces to
The first factor is essentially the associated Legendre polynomial Pℓm(cos θ), and the remaining factors are essentially e±imφ.
Using the expressions for , , and listed explicitly above we obtain:
Using the equations above to form the real spherical harmonics, it is seen that for only the terms (cosines) are included, and for only the terms (sines) are included:
and for m = 0:
Special cases and values
1. When m = 0, the spherical harmonics reduce to the ordinary Legendre polynomials (a numerical check of this case appears after this list):
2. When m = ±ℓ,
or more simply in Cartesian coordinates,
3. At the north pole, where θ = 0 and φ is undefined, all spherical harmonics except those with m = 0 vanish:
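Case 1 can be checked numerically: for m = 0 the orthonormalized harmonic should equal sqrt((2ℓ + 1)/4π) times the Legendre polynomial of cos θ. The sketch below assumes that orthonormalized convention.

```python
import numpy as np
from scipy.special import sph_harm, eval_legendre

# Check that Y_l^0(theta, phi) = sqrt((2l + 1) / (4*pi)) * P_l(cos(theta)).
l = 4
theta = np.linspace(0.05, np.pi - 0.05, 9)      # a few sample colatitudes
lhs = sph_harm(0, l, 0.0, theta).real           # m = 0; the azimuth is irrelevant
rhs = np.sqrt((2 * l + 1) / (4 * np.pi)) * eval_legendre(l, np.cos(theta))
print(np.allclose(lhs, rhs))                    # True
```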
The spherical harmonics have deep and consequential properties under the operations of spatial inversion (parity) and rotation.
The spherical harmonics have definite parity. That is, they are either even or odd with respect to inversion about the origin. Inversion is represented by the operator that sends r to −r. Then, as can be seen in many ways (perhaps most simply from the Herglotz generating function), with r being a unit vector,
In terms of the spherical angles, parity transforms a point with coordinates to . The statement of the parity of spherical harmonics is then
(This can be seen as follows: the associated Legendre polynomials give (−1)ℓ+m and the exponential function gives (−1)m, which together give the spherical harmonics a parity of (−1)ℓ.)
Parity continues to hold for real spherical harmonics, and for spherical harmonics in higher dimensions: applying a point reflection to a spherical harmonic of degree ℓ changes the sign by a factor of (−1)ℓ.
Consider a rotation about the origin that sends the unit vector r to r′. Under this operation, a spherical harmonic of degree ℓ and order m transforms into a linear combination of spherical harmonics of the same degree. That is,
where the coefficients form a matrix of order (2ℓ + 1) that depends on the rotation. However, this is not the standard way of expressing this property. In the standard way one writes,
where the coefficient is the complex conjugate of an element of the Wigner D-matrix. In particular, when the rotation is a rotation of the azimuth we get the identity,
The rotational behavior of the spherical harmonics is perhaps their quintessential feature from the viewpoint of group theory. The Yℓm's of degree ℓ provide a basis set of functions for the irreducible representation of the group SO(3) of dimension (2ℓ + 1). Many facts about spherical harmonics (such as the addition theorem) that are proved laboriously using the methods of analysis acquire simpler proofs and deeper significance using the methods of symmetry.
Spherical harmonics expansion
The Laplace spherical harmonics form a complete set of orthonormal functions and thus form an orthonormal basis of the Hilbert space of square-integrable functions on the sphere. On the unit sphere, any square-integrable function f can thus be expanded as a linear combination of these:
This expansion holds in the sense of mean-square convergence — convergence in L2 of the sphere — which is to say that
The expansion coefficients are the analogs of Fourier coefficients, and can be obtained by multiplying the above equation by the complex conjugate of a spherical harmonic, integrating over the solid angle Ω, and utilizing the above orthogonality relationships. This is justified rigorously by basic Hilbert space theory. For the case of orthonormalized harmonics, this gives:
A square-integrable function can also be expanded in terms of the real harmonics above as a sum
The convergence of the series holds again in the same sense, namely the real spherical harmonics form a complete set of orthonormal functions and thus form an orthonormal basis of the Hilbert space of square-integrable functions on the sphere. The benefit of the expansion in terms of the real harmonic functions is that for real functions f the expansion coefficients are guaranteed to be real, whereas the coefficients in the expansion in terms of the complex harmonics (considering them as complex-valued functions) do not have that property.
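As a concrete sketch of the coefficient formula above, the coefficients of a simple test function can be computed by quadrature on a latitude-longitude grid; the test function and grid resolution are illustrative choices.

```python
import numpy as np
from scipy.special import sph_harm

# Expansion coefficients f_lm = integral f * conj(Y_lm) dOmega for a test function.
n_theta, n_phi = 400, 800
theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dOmega = np.sin(TH) * (np.pi / n_theta) * (2.0 * np.pi / n_phi)

# Smooth axisymmetric test function; note cos^2(theta) = 1/3 + (2/3) P_2(cos(theta)).
f = np.cos(TH) ** 2

def coeff(l, m):
    return np.sum(f * np.conj(sph_harm(m, l, PH, TH)) * dOmega)

for l in range(4):
    print(l, [complex(np.round(coeff(l, m), 4)) for m in range(-l, l + 1)])
# Only the (l=0, m=0) and (l=2, m=0) coefficients are significantly nonzero.
```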
As a rule, harmonic functions are useful in theoretical physics for describing fields in the far zone, where the distance from the charges is much larger than the size of the region they occupy. In that case, the radius R is constant and the coordinates (θ, φ) are convenient to use. Theoretical physics poses many problems in which the solution of Laplace's equation is needed as a function of Cartesian coordinates. At the same time, it is important to obtain a form of the solutions that is invariant under rotations of space or, more generally, under group transformations. The simplest tensor solutions, the dipole, quadrupole, and octupole potentials, are fundamental concepts of general physics:
- , ,.
It is easy to verify that these are harmonic functions. The total set of such tensors is defined by the Taylor series of the field potential of a point charge:
where the tensor is denoted by the indicated symbol and the contraction of the tensors is written in the brackets [...]. Therefore, the tensor is defined by the ℓ-th tensor derivative:
James Clerk Maxwell, naturally, used similar considerations without tensors. E. W. Hobson analysed Maxwell's method as well. From the equation, one can see the following properties, which mainly repeat those of the solid and spherical harmonic functions.
- The tensor is a harmonic polynomial, i.e. its Laplacian is zero.
- The trace over each pair of indices is zero, since the potential 1/r is harmonic.
- The tensor is a homogeneous polynomial of degree ℓ, i.e. the summed degree of the variables x, y, z in each term is equal to ℓ.
- The tensor has an invariant form under rotations of the variables x, y, z, i.e. of the vector r.
- The total set of potentials is complete.
- The contraction of such tensors is proportional to the contraction of two harmonic potentials:
Quantity of Kronecker symbols is increased by two in the product of each following item when rang of tensor is reduced by two accordingly. Operation symmetrizes tensor by means of all independent permutations of indices with following summing of got items. Particularly, don't need to be transformed into and tensor don't go into .
The tensors under consideration are convenient to substitute into Laplace's equation:
The last relation is in fact Euler's formula for homogeneous polynomials. The Laplace operator preserves the index symmetry of the tensors. The two relations allow one to substitute the found tensor into Laplace's equation and to check directly that the tensor is a harmonic function:
The last property is important for theoretical physics for the following reason. The potential of charges outside of the region they occupy is an integral equal to the sum of multipole potentials:
where ρ(r) is the charge density. The contraction in the formula is naturally applied to the tensors. The integrals in the sum are called multipole moments in physics. Three of them are used actively, while the others are applied less often, as their structure (or that of the spherical functions) is more complicated. Nevertheless, the last property gives a way to simplify calculations in theoretical physics by using integrals with the simpler tensor instead of the harmonic tensor. Therefore, the simplified moments give the same result, and there is no need to restrict calculations to the dipole, quadrupole, and octupole potentials only. This is an advantage of the tensor point of view, and not the only one.
Efimov's ladder operator
Spherical functions satisfy a few recurrence formulas. In quantum mechanics, recurrence formulas play a role when they connect functions of quantum states by means of a ladder operator. The property arises from the symmetry group of the system under consideration. The vector ladder operator for the invariant harmonic states was found in Efimov's paper and is detailed in later work.
- For that purpose, a transformation of r-space is applied that preserves the form of Laplace's equation:
An operator applied to the harmonic tensor potential in the original space goes over into Efimov's ladder operator acting on the transformed tensor in the inverted space:
where the operator appearing is the modulus of the angular momentum:
The ladder operator multiplies a harmonic tensor by its degree, i.e. by ℓ, recalling the corresponding spherical function for quantum numbers ℓ, m. To check the action of the ladder operator, one can apply it to the dipole and quadrupole tensors:
Applying it successively, we get the general form of the invariant harmonic tensors:
As a result, the operator goes over into the momentum operator in the transformed space:
It is useful to apply the following properties of the operator.
- The commutator of the coordinate operators is zero:
This property is very convenient for calculations.
- The scalar operator product is zero in the space of harmonic functions:
This property gives zero trace of the harmonic tensor over each pair of indices.
The ladder operator is analogous to the one in the problem of the quantum oscillator. It generates Glauber states, which arise in the quantum theory of electromagnetic radiation fields. It was later shown, as a theoretical result, that coherent states are intrinsic to any quantum system possessing a group symmetry, including the rotational group.
Invariant form of spherical harmonics
Spherical harmonics are tied to the chosen system of coordinates. Let the unit vectors along the axes X, Y, Z be given, and denote the following unit vectors formed from them:
Using these vectors, the solid harmonics are equal to:
where the prefactor is the constant:
Angular momentum is defined by the rotation group, while mechanical momentum is related to the translation group. The ladder operator is the mapping of the momentum operator under the inversion 1/r of 3-d space. It is a raising operator. The lowering operator here is naturally the gradient, together with a partial contraction over a pair of indices so as to leave the others:
Power spectrum in signal processing
The total power of a function f is defined in the signal processing literature as the integral of the function squared, divided by the area of its domain. Using the orthonormality properties of the real unit-power spherical harmonic functions, it is straightforward to verify that the total power of a function defined on the unit sphere is related to its spectral coefficients by a generalization of Parseval's theorem (here, the theorem is stated for Schmidt semi-normalized harmonics; the relationship is slightly different for orthonormal harmonics):
Here Sff(ℓ) is defined as the angular power spectrum (for Schmidt semi-normalized harmonics). In a similar manner, one can define the cross-power of two functions as
Here Sfg(ℓ) is defined as the cross-power spectrum. If the functions f and g have a zero mean (i.e., the spectral coefficients f00 and g00 are zero), then Sff(ℓ) and Sfg(ℓ) represent the contributions to the function's variance and covariance for degree ℓ, respectively. It is common that the (cross-)power spectrum is well approximated by a power law of the form
When β = 0, the spectrum is "white" as each degree possesses equal power. When β < 0, the spectrum is termed "red" as there is more power at the low degrees with long wavelengths than higher degrees. Finally, when β > 0, the spectrum is termed "blue". The condition on the order of growth of Sff(ℓ) is related to the order of differentiability of f in the next section.
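A minimal sketch of how a power spectrum of this kind could be computed from a set of expansion coefficients; the per-degree sum used here (summing |f_lm|² over m) is one common convention and is an assumption, since, as noted above, different normalizations rescale each degree by ℓ-dependent factors.

```python
import numpy as np

def angular_power_spectrum(coeffs):
    """Per-degree power S(l) = sum_m |f_lm|^2 from a {(l, m): f_lm} mapping."""
    lmax = max(l for l, _ in coeffs)
    spectrum = np.zeros(lmax + 1)
    for (l, _m), f_lm in coeffs.items():
        spectrum[l] += abs(f_lm) ** 2
    return spectrum

# Example with made-up coefficients for degrees 0..2.
coeffs = {(0, 0): 1.0, (1, -1): 0.2, (1, 0): 0.1, (1, 1): 0.2j,
          (2, 0): 0.05, (2, 2): 0.03}
print(angular_power_spectrum(coeffs))   # [1.0, 0.09, 0.0034]
```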
One can also understand the differentiability properties of the original function f in terms of the asymptotics of Sff(ℓ). In particular, if Sff(ℓ) decays faster than any rational function of ℓ as ℓ → ∞, then f is infinitely differentiable. If, furthermore, Sff(ℓ) decays exponentially, then f is actually real analytic on the sphere.
The general technique is to use the theory of Sobolev spaces. Statements relating the growth of the Sff(ℓ) to differentiability are then similar to analogous results on the growth of the coefficients of Fourier series. Specifically, if
then f is in the Sobolev space Hs(S2). In particular, the Sobolev embedding theorem implies that f is infinitely differentiable provided that
for all s.
A mathematical result of considerable interest and use is called the addition theorem for spherical harmonics. Given two vectors r and r', with spherical coordinates (θ, φ) and (θ′, φ′), respectively, the angle γ between them is given by the relation
in which the role of the trigonometric functions appearing on the right-hand side is played by the spherical harmonics and that of the left-hand side is played by the Legendre polynomials.
The addition theorem states
where Pℓ is the Legendre polynomial of degree ℓ. This expression is valid for both real and complex harmonics. The result can be proven analytically, using the properties of the Poisson kernel in the unit ball, or geometrically by applying a rotation to the vector y so that it points along the z-axis, and then directly calculating the right-hand side.
In particular, when x = y, this gives Unsöld's theorem
which generalizes the identity cos²θ + sin²θ = 1 to two dimensions.
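The addition theorem itself is straightforward to verify numerically for a single degree and a pair of directions; the degree and angles below are arbitrary.

```python
import numpy as np
from scipy.special import sph_harm, eval_legendre

# Verify: sum_m Y_lm(x) * conj(Y_lm(y)) = (2l + 1) / (4*pi) * P_l(cos gamma).
l = 3
theta1, phi1 = 0.7, 2.1
theta2, phi2 = 1.9, 0.4

lhs = sum(sph_harm(m, l, phi1, theta1) * np.conj(sph_harm(m, l, phi2, theta2))
          for m in range(-l, l + 1))

cos_gamma = (np.sin(theta1) * np.sin(theta2) * np.cos(phi1 - phi2)
             + np.cos(theta1) * np.cos(theta2))
rhs = (2 * l + 1) / (4 * np.pi) * eval_legendre(l, cos_gamma)
print(lhs, rhs)   # the real part of lhs matches rhs; its imaginary part is ~0
```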
In the expansion (1), the left-hand side Pℓ(x·y) is a constant multiple of the degree ℓ zonal spherical harmonic. From this perspective, one has the following generalization to higher dimensions. Let Yj be an arbitrary orthonormal basis of the space Hℓ of degree ℓ spherical harmonics on the n-sphere. Then the degree ℓ zonal harmonic corresponding to the unit vector x decomposes as
Furthermore, the zonal harmonic is given as a constant multiple of the appropriate Gegenbauer polynomial:
where ωn−1 is the volume of the (n−1)-sphere.
Another useful identity expresses the product of two spherical harmonics as a sum over spherical harmonics
where the allowed values of the degree and order in the sum are determined by the selection rules for the 3j-symbols.
The Clebsch–Gordan coefficients are the coefficients appearing in the expansion of the product of two spherical harmonics in terms of spherical harmonics themselves. A variety of techniques are available for doing essentially the same calculation, including the Wigner 3-jm symbol, the Racah coefficients, and the Slater integrals. Abstractly, the Clebsch–Gordan coefficients express the tensor product of two irreducible representations of the rotation group as a sum of irreducible representations: suitably normalized, the coefficients are then the multiplicities.
Visualization of the spherical harmonics
The Laplace spherical harmonics can be visualized by considering their "nodal lines", that is, the set of points on the sphere where the real part of the harmonic vanishes, or alternatively where its imaginary part vanishes. Nodal lines of Yℓm are composed of ℓ circles: there are |m| circles along longitudes and ℓ−|m| circles along latitudes. One can determine the number of nodal lines of each type by counting the number of zeros of Yℓm in the θ and φ directions respectively. Considering Yℓm as a function of θ, the real and imaginary components of the associated Legendre polynomials each possess ℓ−|m| zeros, each giving rise to a nodal 'line of latitude'. On the other hand, considering Yℓm as a function of φ, the trigonometric sin and cos functions possess 2|m| zeros, each of which gives rise to a nodal 'line of longitude'.
When the spherical harmonic order m is zero (upper-left in the figure), the spherical harmonic functions do not depend upon longitude, and are referred to as zonal. Such spherical harmonics are a special case of zonal spherical functions. When ℓ = |m| (bottom-right in the figure), there are no zero crossings in latitude, and the functions are referred to as sectoral. For the other cases, the functions checker the sphere, and they are referred to as tesseral.
More general spherical harmonics of degree ℓ are not necessarily those of the Laplace basis , and their nodal sets can be of a fairly general kind.
List of spherical harmonics
Analytic expressions for the first few orthonormalized Laplace spherical harmonics that use the Condon–Shortley phase convention:
The classical spherical harmonics are defined as complex-valued functions on the unit sphere S2 inside three-dimensional Euclidean space R3. Spherical harmonics can be generalized to higher-dimensional Euclidean space Rn as follows, leading to functions on the unit sphere Sn−1. Let Pℓ denote the space of complex-valued homogeneous polynomials of degree ℓ in n real variables, here considered as functions on Rn. That is, a polynomial p is in Pℓ provided that for any real λ, one has
Let Aℓ denote the subspace of Pℓ consisting of all harmonic polynomials:
These are the (regular) solid spherical harmonics. Let Hℓ denote the space of functions on the unit sphere
obtained by restriction from Aℓ
The following properties hold:
- The sum of the spaces Hℓ is dense in the set C(Sn−1) of continuous functions on Sn−1 with respect to the uniform topology, by the Stone-Weierstrass theorem. As a result, the sum of these spaces is also dense in the space L2(Sn−1) of square-integrable functions on the sphere. Thus every square-integrable function on the sphere decomposes uniquely into a series of spherical harmonics, where the series converges in the L2 sense.
- For all f ∈ Hℓ, one has
- where ΔSn−1 is the Laplace–Beltrami operator on Sn−1. This operator is the analog of the angular part of the Laplacian in three dimensions; to wit, the Laplacian in n dimensions decomposes as
- It follows from the Stokes theorem and the preceding property that the spaces Hℓ are orthogonal with respect to the inner product from L2(Sn−1). That is to say,
- for f ∈ Hℓ and g ∈ Hk for k ≠ ℓ.
- Conversely, the spaces Hℓ are precisely the eigenspaces of ΔSn−1. In particular, an application of the spectral theorem to the Riesz potential gives another proof that the spaces Hℓ are pairwise orthogonal and complete in L2(Sn−1).
- Every homogeneous polynomial p ∈ Pℓ can be uniquely written in the form
- where pj ∈ Aj. In particular,
An orthogonal basis of spherical harmonics in higher dimensions can be constructed inductively by the method of separation of variables, by solving the Sturm-Liouville problem for the spherical Laplacian
where φ is the axial coordinate in a spherical coordinate system on Sn−1. The end result of such a procedure is
where the indices satisfy |ℓ1| ≤ ℓ2 ≤ ... ≤ ℓn−1 and the eigenvalue is −ℓn−1(ℓn−1 + n−2). The functions in the product are defined in terms of the Legendre function
Connection with representation theory
The space Hℓ of spherical harmonics of degree ℓ is a representation of the symmetry group of rotations around a point (SO(3)) and its double-cover SU(2). Indeed, rotations act on the two-dimensional sphere, and thus also on Hℓ by function composition
The elements of Hℓ arise as the restrictions to the sphere of elements of Aℓ: harmonic polynomials homogeneous of degree ℓ on three-dimensional Euclidean space R3. By polarization of ψ ∈ Aℓ, there are coefficients symmetric on the indices, uniquely determined by the requirement
The condition that ψ be harmonic is equivalent to the assertion that the tensor must be trace free on every pair of indices. Thus as an irreducible representation of SO(3), Hℓ is isomorphic to the space of traceless symmetric tensors of degree ℓ.
More generally, the analogous statements hold in higher dimensions: the space Hℓ of spherical harmonics on the n-sphere is the irreducible representation of SO(n+1) corresponding to the traceless symmetric ℓ-tensors. However, whereas every irreducible tensor representation of SO(2) and SO(3) is of this kind, the special orthogonal groups in higher dimensions have additional irreducible representations that do not arise in this manner.
The special orthogonal groups have additional spin representations that are not tensor representations, and are typically not spherical harmonics. An exception are the spin representation of SO(3): strictly speaking these are representations of the double cover SU(2) of SO(3). In turn, SU(2) is identified with the group of unit quaternions, and so coincides with the 3-sphere. The spaces of spherical harmonics on the 3-sphere are certain spin representations of SO(3), with respect to the action by quaternionic multiplication.
Connection with hemispherical harmonics
Spherical harmonics can be separated into two sets of functions. One is the hemispherical harmonics (HSH), which are orthogonal and complete on the hemisphere. The other is the complementary hemispherical harmonics (CHSH).
The angle-preserving symmetries of the two-sphere are described by the group of Möbius transformations PSL(2,C). With respect to this group, the sphere is equivalent to the usual Riemann sphere. The group PSL(2,C) is isomorphic to the (proper) Lorentz group, and its action on the two-sphere agrees with the action of the Lorentz group on the celestial sphere in Minkowski space. The analog of the spherical harmonics for the Lorentz group is given by the hypergeometric series; furthermore, the spherical harmonics can be re-expressed in terms of the hypergeometric series, as SO(3) = PSU(2) is a subgroup of PSL(2,C).
- A historical account of various approaches to spherical harmonics in three dimensions can be found in Chapter IV of MacRobert 1967. The term "Laplace spherical harmonics" is in common use; see Courant & Hilbert 1962 and Meijer & Bauer 2004.
- The approach to spherical harmonics taken here is found in (Courant & Hilbert 1962, §V.8, §VII.5).
- Physical applications often take the solution that vanishes at infinity, making A = 0. This does not affect the angular portion of the spherical harmonics.
- Edmonds 1957, §2.5
- Hall 2013 Section 17.6
- Hall 2013 Lemma 17.16
- Williams, Earl G. (1999). Fourier Acoustics: Sound Radiation and Nearfield Acoustical Holography. San Diego, Calif.: Academic Press. ISBN 0080506909. OCLC 181010993.
- Messiah, Albert (1999). Quantum mechanics : two volumes bound as one (Two vol. bound as one, unabridged reprint ed.). Mineola, NY: Dover. ISBN 9780486409245.
- Cohen-Tannoudji, Claude; Diu, Bernard; Laloë, Franck (1996). Quantum Mechanics. Translated from the French by Susan Reid Hemley et al. Wiley-Interscience: Wiley. ISBN 9780471569527.
- Blakely, Richard (1995). Potential theory in gravity and magnetic applications. Cambridge England New York: Cambridge University Press. p. 113. ISBN 978-0521415088.
- Heiskanen and Moritz, Physical Geodesy, 1967, eq. 1-62
- Whittaker & Watson 1927, p. 392.
- See, e.g., Appendix A of Garg, A., Classical Electrodynamics in a Nutshell (Princeton University Press, 2012).
- Li, Feifei; Braun, Carol; Garg, Anupam (2013), "The Weyl-Wigner-Moyal Formalism for Spin" (PDF), Europhysics Letters, 102 (6): 60006, arXiv:1210.4075, Bibcode:2013EL....10260006L, doi:10.1209/0295-5075/102/60006, S2CID 119610178
- Efimov, Sergei P.; Muratov, Rodes Z. (1990). "Theory of multipole representation of the potentials of an ellipsoid. Tensor potentials". Astron. Zh. 67 (2): 152–157. Bibcode:1990SvA....34..152E.
- Efimov Sergei P., Muratov Rodes Z. (1990). "Theory of multipole representation of the potentials of an ellipsoid. Moments". Astron. Zh. 67 (2): 157–162. Bibcode:1990SvA....34..157E.
- Buchbinder I.L. and Shapiro I.L. (1990). "On the renormalization group equations in curved spacetime with the torsion". Classical and Quantum Gravity. 7 (7): 1197. doi:10.1088/0264-9381/7/7/015.
- Kalmykov M. Yu., Pronin P.I. (1991). "One-loop effective action in gauge gravitational theory". Il Nuovo Cimento B, Series 11. 106 (12): 1401. Bibcode:1991NCimB.106.1401K. doi:10.1007/BF02728369. S2CID 120953784.
- Maxwell, James Clerk (1892). A treatise on Electricity & Magnetism. N. Y.: Dover Publications Inc. 1954. pp. ch.9.
- Hobson, E. W. (2012). The Theory of Spherical and Ellipsoidal Harmonics. Cambridge: Cambridge Academ. ISBN 978-1107605114.
- Efimov, Sergei P. (1979). "Transition operator between multipole states and their tensor structure". Theoretical and Mathematical Physics. 39 (2): 425–434. Bibcode:1979TMP....39..425E. doi:10.1007/BF01014921. S2CID 120022530.
- Muratov, Rodes Z. (2015). Multipoles and Fields of Ellipsoid. Moscow: Izd. Dom MISIS. pp. 142–155. ISBN 978-5-600-01057-4.
- Vilenkin, N. Ja. (1968). Special functions and the theory of Group Representations. Am. Math. Society. ISBN 9780821815724.
- Glauber, Roy J. (1963). "Coherent and Incoherent States of the Radiation Field". Physical Review. 131 (6): 2766–2788. Bibcode:1963PhRv..131.2766G. doi:10.1103/physrev.131.2766.
- Perelomov, A. M. (1972). "Coherent states for arbitrary Lie groups". Communications in Mathematical Physics. 26 (3): 222–236. arXiv:math-ph/0203002. Bibcode:1972CMaPh..26..222P. doi:10.1007/BF01645091. S2CID 18333588.
- Edmonds, A. R. (1996). Angular Momentum In Quantum Mechanics. Princeton University Press. p. 63.
- This is valid for any orthonormal basis of spherical harmonics of degree ℓ. For unit power harmonics it is necessary to remove the factor of 4π.
- Whittaker & Watson 1927, p. 395.
- Unsöld 1927
- Stein & Weiss 1971, §IV.2
- Brink, D. M.; Satchler, G. R. Angular Momentum. Oxford University Press. p. 146.
- Eremenko, Jakobson & Nadirashvili 2007
- Solomentsev 2001; Stein & Weiss 1971, §Iv.2
- Cf. Corollary 1.8 of Axler, Sheldon; Ramey, Wade (1995), Harmonic Polynomials and Dirichlet-Type Problems
- Higuchi, Atsushi (1987). "Symmetric tensor spherical harmonics on the N-sphere and their application to the de Sitter group SO(N,1)". Journal of Mathematical Physics. 28 (7): 1553–1566. Bibcode:1987JMP....28.1553H. doi:10.1063/1.527513.
- Hall 2013 Corollary 17.17
- Zheng, Yi; Wei, Kai; Liang, Bin; Li, Ying; Chu, Xinhui (2019-12-23). "Zernike like functions on spherical cap: principle and applications in optical surface fitting and graphics rendering". Optics Express. 27 (26): 37180–37195. Bibcode:2019OExpr..2737180Z. doi:10.1364/OE.27.037180. ISSN 1094-4087. PMID 31878503.
- N. Vilenkin, Special Functions and the Theory of Group Representations, Am. Math. Soc. Transl.,vol. 22, (1968).
- J. D. Talman, Special Functions, A Group Theoretic Approach, (based on lectures by E.P. Wigner), W. A. Benjamin, New York (1968).
- W. Miller, Symmetry and Separation of Variables, Addison-Wesley, Reading (1977).
- A. Wawrzyńczyk, Group Representations and Special Functions, Polish Scientific Publishers. Warszawa (1984).
- Cited references
- Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, Volume I, Wiley-Interscience.
- Edmonds, A.R. (1957), Angular Momentum in Quantum Mechanics, Princeton University Press, ISBN 0-691-07912-9
- Eremenko, Alexandre; Jakobson, Dmitry; Nadirashvili, Nikolai (2007), "On nodal sets and nodal domains on S² and R²", Annales de l'Institut Fourier, 57 (7): 2345–2360, doi:10.5802/aif.2335, ISSN 0373-0956, MR 2394544
- Hall, Brian C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, 267, Springer, ISBN 978-1461471158
- MacRobert, T.M. (1967), Spherical harmonics: An elementary treatise on harmonic functions, with applications, Pergamon Press.
- Meijer, Paul Herman Ernst; Bauer, Edmond (2004), Group theory: The application to quantum mechanics, Dover, ISBN 978-0-486-43798-9.
- Solomentsev, E.D. (2001) , "Spherical harmonics", Encyclopedia of Mathematics, EMS Press.
- Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton, N.J.: Princeton University Press, ISBN 978-0-691-08078-9.
- Unsöld, Albrecht (1927), "Beiträge zur Quantenmechanik der Atome", Annalen der Physik, 387 (3): 355–393, Bibcode:1927AnP...387..355U, doi:10.1002/andp.19273870304.
- Whittaker, E. T.; Watson, G. N. (1927), A Course of Modern Analysis, Cambridge University Press, p. 392.
- General references
- E.W. Hobson, The Theory of Spherical and Ellipsoidal Harmonics, (1955) Chelsea Pub. Co., ISBN 978-0-8284-0104-3.
- C. Müller, Spherical Harmonics, (1966) Springer, Lecture Notes in Mathematics, Vol. 17, ISBN 978-3-540-03600-5.
- E. U. Condon and G. H. Shortley, The Theory of Atomic Spectra, (1970) Cambridge at the University Press, ISBN 0-521-09209-4, See chapter 3.
- J.D. Jackson, Classical Electrodynamics, ISBN 0-471-30932-X
- Albert Messiah, Quantum Mechanics, volume II. (2000) Dover. ISBN 0-486-40924-4.
- Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 6.7. Spherical Harmonics", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
- D. A. Varshalovich, A. N. Moskalev, V. K. Khersonskii Quantum Theory of Angular Momentum,(1988) World Scientific Publishing Co., Singapore, ISBN 9971-5-0107-4
- Weisstein, Eric W. "Spherical harmonics". MathWorld.
- Maddock, John, Spherical harmonics in Boost.Math |