Strings in Python
One of the many basic data types of a programming language is the string data type, and this post will be just a quick overview of strings in Python. A string can be considered a sequence of characters, so strings are often used as a way to store text values; however, they can also be treated as a sequence of values, much like a list.
On top of the very basics of strings, there are also a number of string methods that come with the data type. So it would be a good idea to go over some of these methods that there are to work with when it comes to strings, such as upper, count, find and so forth.
1 - basics of strings in python
Well, I have to start somewhere when it comes to learning a thing or two about strings in Python. In this section I will just be going over a few quick basic examples of strings. There is creating a string literal value for starters, but then there are other basic things that I should have down solid when learning about strings in a new language such as Python. One thing that comes to mind is concatenation of strings; there is also the question of how to go about converting a string of a number to an actual number data type, and much more just when it comes to the basics. So let's get this part out of the way so I can move on to the good stuff.
1.1 - Basic string literal example
So maybe one of the first things I should learn when it comes to how to work with strings in a new programming environment is how to create just a simple string literal value.
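For example, a string literal can be written with single, double, or triple quotes, and all of these produce the same str type:

```python
# three ways to write the same string literal in Python
a = 'hello'      # single quotes
b = "hello"      # double quotes
c = '''hello'''  # triple quotes (these may also span multiple lines)

print(type(a).__name__)  # str
print(a == b == c)       # True
```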
1.2 - A string can be the result of an expression
On top of just being a simple string literal, a string can also be the result of an expression. When doing so, it is called for to make sure that any value that is not a string is converted to a string. If I add a value that is a string to a value that is not a string, I can end up with a TypeError.
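A quick example of this with the built-in str function:

```python
age = 42

# "age: " + age would raise a TypeError, because Python will not
# implicitly convert the int to a string during concatenation.
# Converting with the built-in str function first works fine:
message = "age: " + str(age)
print(message)  # age: 42
```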
1.3 - Converting a string to an integer value
One way to do type conversion of a string of a number to an actual number data type would be to use the built-in int function.
There are other built-in functions such as the float function, but the basic idea is more or less the same. In any case, to convert a string value to another type it will just require passing the string value to the appropriate built-in function, or whatever method or function there is to perform the type conversion.
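For example:

```python
n = int("42")      # string of an integer to an int value
x = float("1.25")  # string of a decimal number to a float value

print(n + 8)   # 50
print(x * 2)   # 2.5
```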
1.4 - Concatenate strings
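The + operator joins two strings together, and * repeats a string a given number of times:

```python
first = "foo"
second = "bar"

both = first + second  # concatenation with +
tripled = first * 3    # repetition with *

print(both)     # foobar
print(tripled)  # foofoofoo
```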
1.5 - I can loop over strings in python
A string can be looped over with a for loop in python.
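For example:

```python
# a for loop visits each character of a string one at a time
word = "cat"
chars = []
for ch in word:
    chars.append(ch)

print(chars)  # ['c', 'a', 't']
```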
2 - String Methods
2.1 - The split method
The split string method can be used to create a list of strings from a string with a separator. There is however one major drawback with this method, which is that I can not give an empty string as a separator. So the split method will fall short if I want to split a string into a list of chars. However, one workaround would be to pass the string value to the list built-in function.
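Both cases in one quick example:

```python
csv = "a,b,c"
parts = csv.split(",")
print(parts)  # ['a', 'b', 'c']

# "abc".split("") raises ValueError: empty separator, so to get
# a list of chars pass the string to the list built-in instead
chars = list("abc")
print(chars)  # ['a', 'b', 'c']
```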
2.2 - The count method
There is a count method that will return the number of times that a substring occurs in the string that the method is called on.
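For example:

```python
text = "the cat sat on the mat"
print(text.count("at"))   # 3 (cat, sat, mat)
print(text.count("the"))  # 2
print(text.count("dog"))  # 0
```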
3 - Conclusion
Well, that is it for now when it comes to strings in Python. I am still fairly new with Python myself as of this writing, so at some point in the future I will have to come back and edit this post as I start to pick up more when it comes to strings and Python in general.
Alongside the sequence criterion, the epsilon-delta criterion is another way to define the continuity of functions. This criterion describes the feature of continuous functions that sufficiently small changes of the argument cause arbitrarily small changes of the function value.
In the beginning of this chapter, we learned that continuity of a function may - by a simple intuition - be considered as an absence of jumps. So if we are at an argument where continuity holds, the function values will change arbitrarily little when we wiggle around the argument by a sufficiently small amount. So f(x) ≈ f(x_0) for x in the vicinity of x_0. The function values f(x) may therefore be useful to approximate f(x_0).
Continuity when approximating function values
If a function has no jumps, we may approximate its function value f(x_0) by other nearby values f(x). For this approximation, and hence also for proofs of continuity, we will use the epsilon-delta criterion for continuity. So how will such an approximation look in a practical situation?
Suppose we make an experiment that includes measuring the air temperature as a function of time. Let f be the function describing the temperature, so f(t) is the temperature at time t. Now, suppose there is a technical problem, so we have no data for some point of time t_0 - or we simply did not measure at exactly this point of time. However, we would like to approximate the function value f(t_0) as precisely as we can:
Suppose a technical issue prevented the measurement of f(t_0). Since the temperature changes continuously in time - and in particular there is no jump at t_0 - we may instead use a temperature value measured at a time close to t_0. So, let us approximate the value by taking a temperature f(t) with t close to t_0. That means f(t) is an approximation for f(t_0). How close must t come to t_0 in order to obtain a given approximation precision?
Suppose that for the evaluation of the temperature at a later time, the maximal error shall be some ε > 0. So considering the following figure, the measured temperature should be in the grey region. Those are all temperatures with function values between f(t_0) − ε and f(t_0) + ε, i.e. inside the open interval (f(t_0) − ε, f(t_0) + ε):
In this graphic, we may see that there is a region around t_0 where function values differ by less than ε from f(t_0). So in fact, there is a time difference δ > 0, such that all function values for arguments in (t_0 − δ, t_0 + δ) are inside the interval highlighted in grey:
Therefore, we may indeed approximate the missing data point sufficiently well (meaning with a maximal error of ε). This is done by taking a time t differing from t_0 by less than δ; then the error of f(t) in approximating f(t_0) will be smaller than the desired maximal error ε. So f(t) will be the approximation for f(t_0).
Conclusion: There is a δ > 0, such that the difference |f(t) − f(t_0)| is smaller than ε for all t with |t − t_0| smaller than δ. I.e. |t − t_0| < δ implies |f(t) − f(t_0)| < ε.
Increasing approximation precision
What will happen if we need to know the temperature value to a higher precision due to increased requirements in the evaluation of the experiment? For instance, if the required maximal temperature error is set to a smaller value of ε?
In that case, there is again an interval around t_0 where function values do not deviate by more than the new ε from f(t_0). Mathematically speaking, a δ > 0 exists, such that f(t) differs by a maximum amount of ε from f(t_0) if there is |t − t_0| < δ:
No matter how small we choose ε > 0, thanks to the continuous temperature dependence we may always find a δ > 0, such that f(t) differs at most by ε from f(t_0) whenever t is closer to t_0 than δ. We keep in mind:
No matter which maximal error ε > 0 is required, there is always an interval around t_0 of width 2δ, where all approximated function values deviate by less than ε from the function value f(t_0) to be approximated.
This holds true since the function does not have a jump at t_0 - in other words, since f is continuous at t_0. Conversely, we may always infer from the above characteristic that there is no jump in the graph of f at t_0. Therefore, we may use it as a formal definition for continuity. As mathematicians frequently use the variables ε and δ when describing this characteristic, it is also called the epsilon-delta criterion for continuity.
Epsilon-delta-criterion for continuity
Why does the epsilon-delta criterion hold if and only if the graph of the function does not have a jump at some argument (i.e. it is continuous there)? The temperature example allows us to intuitively verify that the epsilon-delta criterion is satisfied for continuous functions. But will the epsilon-delta criterion be violated when a function has a jump at some argument? To answer this question, let us assume that the temperature as a function of time has a jump at some t_0:
Let ε be a given maximal error that is smaller than the jump height:
In that case, we may not choose a δ-interval around t_0 where all function values have a deviation lower than ε from f(t_0). If we, for instance, choose the following δ, then there certainly is a t between t_0 − δ and t_0 + δ which has a function value differing by more than ε from f(t_0):
When choosing a smaller δ, we will find such a t with |f(t) − f(t_0)| > ε, as well:
No matter how small we choose δ, there will always be an argument t with a distance of less than δ to t_0, such that the function value f(t) differs by more than ε from f(t_0). So we have seen in an intuitive example that the epsilon-delta criterion is not satisfied if the function has a jump. Therefore, the epsilon-delta criterion characterizes whether the graph of the function has a jump at the considered argument or not. That means we may consider it as a definition of continuity. Since this criterion only uses mathematically well-defined terms, it may be used not just as an intuitive, but also as a formal definition.
Epsilon-Delta criterion for continuity
The ε-δ definition of continuity at an argument x_0 inside the domain of definition is the following:
Definition (Epsilon-Delta-definition of continuity)
A function f: D → ℝ with D ⊆ ℝ is continuous at x_0 ∈ D, if and only if for any ε > 0 there is a δ > 0, such that |f(x) − f(x_0)| < ε holds for all x ∈ D with |x − x_0| < δ. Written in mathematical symbols, f is continuous at x_0 if and only if
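Written out, the quantifier formula (reconstructed here in standard notation) reads:

```latex
\forall \varepsilon > 0 \ \exists \delta > 0 \ \forall x \in D :
\left( |x - x_0| < \delta \implies |f(x) - f(x_0)| < \varepsilon \right)
```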
Explanation of the quantifier notation:
The above definition describes continuity at a certain point (argument). An entire function is called continuous, when it is continuous - according to the epsilon-delta criterion - at each of its arguments in the domain of definition.
Derivation of the Epsilon-Delta criterion for discontinuity
We may also obtain a criterion of discontinuity by simply negating the above definition. Negating mathematical propositions has already been treated in the chapter „Aussagen negieren“ (negating propositions). While doing so, an all-quantifier gets transformed into an existential quantifier and vice versa. Concerning the inner implication, we have to keep in mind that the negation of "A implies B" is equivalent to "A and not B". Negating the epsilon-delta criterion of continuity, we obtain:
This gets us the negation of continuity (i.e. discontinuity):
Epsilon-Delta criterion for discontinuity
Definition (Epsilon-Delta definition of discontinuity)
A function f: D → ℝ with D ⊆ ℝ is discontinuous at x_0 ∈ D, if and only if there is an ε > 0, such that for all δ > 0 an x ∈ D with |x − x_0| < δ and |f(x) − f(x_0)| ≥ ε exists. Mathematically written, f is discontinuous at x_0 iff
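The negated quantifier formula (reconstructed here in standard notation) reads:

```latex
\exists \varepsilon > 0 \ \forall \delta > 0 \ \exists x \in D :
\left( |x - x_0| < \delta \,\wedge\, |f(x) - f(x_0)| \ge \varepsilon \right)
```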
Explanation of the quantifier notation:
Further explanations considering the Epsilon-Delta criterion
The inequality |x − x_0| < δ means that the distance between x and x_0 is smaller than δ. Analogously, |f(x) − f(x_0)| < ε tells us that the distance between f(x) and f(x_0) is smaller than ε. Therefore, the implication |x − x_0| < δ ⇒ |f(x) − f(x_0)| < ε just says that whenever x and x_0 are closer together than δ, then the distance between the function values f(x) and f(x_0) must be smaller than ε. Thus we may interpret the epsilon-delta criterion in the following way:
No matter how small we set the maximal distance ε between the function values, there will always be a δ > 0, such that x and x_0 (after being mapped) are closer together than ε, whenever x is closer to x_0 than δ.
For continuous functions, we can control the error in the function value to be lower than ε by keeping the error in the argument sufficiently small (smaller than δ). Finding a δ means answering the question: how low does my initial error in the argument have to be in order to get a final error smaller than ε? This may get interesting when doing numerical calculations or measurements. Imagine you are measuring some x and then using it to compute f(x), where f is a continuous function. The epsilon-delta criterion allows you to find the maximal error in x (i.e. δ), which guarantees that the final error will be smaller than ε.
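The idea of "finding a δ for a given ε" can be sketched numerically. The following is a rough search, not a proof (it only checks finitely many sample points), and the function f(x) = x², the point x0 = 2.0 and the tolerance eps = 0.1 are purely illustrative assumptions:

```python
# numerical sketch: search for a delta that keeps |f(x) - f(x0)| < eps
# for all sampled x in (x0 - delta, x0 + delta); this checks sample
# points only, so it illustrates the idea but does not prove continuity

def f(x):
    return x * x

def find_delta(f, x0, eps, start=1.0, samples=1000):
    delta = start
    while delta > 1e-12:
        # test arguments spread evenly through [x0 - delta, x0 + delta]
        ok = all(
            abs(f(x0 + delta * (2 * i / samples - 1)) - f(x0)) < eps
            for i in range(samples + 1)
        )
        if ok:
            return delta
        delta /= 2  # shrink the candidate width and retry
    return None

delta = find_delta(f, x0=2.0, eps=0.1)
print(delta)  # 0.015625
```

Halving δ until all sampled arguments keep the function value within eps mirrors the quantifier order of the criterion: ε is given first, and a matching δ is found afterwards.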
A δ may only be found if small changes around the argument x_0 also cause only small changes around the function value f(x_0). Hence, for functions continuous at x_0, there has to be:
I.e.: whenever x is sufficiently close to x_0, then f(x) is approximately f(x_0). This may also be described using the notion of an ε-neighborhood:
For every ε-neighborhood around f(x_0) - no matter how small it may be - there is always a δ-neighborhood around x_0, whose function values are all mapped into the ε-neighborhood.
In topology, this description using neighborhoods will be generalized to a topological definition of continuity.
Visualization of the Epsilon-Delta criterion
Description of continuity using the graph
The epsilon-delta criterion may nicely be visualized by taking a look at the graph of a function. Let's start by getting a picture of the implication |x − x_0| < δ ⇒ |f(x) − f(x_0)| < ε. This means the distance between f(x) and f(x_0) is smaller than epsilon whenever x is closer to x_0 than δ. So for x ∈ (x_0 − δ, x_0 + δ), there is f(x) ∈ (f(x_0) − ε, f(x_0) + ε). Hence, the point (x, f(x)) has to be inside the rectangle (x_0 − δ, x_0 + δ) × (f(x_0) − ε, f(x_0) + ε). This is a rectangle with width 2δ and height 2ε centered at (x_0, f(x_0)):
We will call this the ε-δ-rectangle and only consider its interior. That means the boundary does not belong to the rectangle. Following the epsilon-delta criterion, the implication has to be fulfilled for all arguments x. Thus, all points making up the graph of f restricted to arguments inside the interval (x_0 − δ, x_0 + δ) must be in the interior of the ε-δ-rectangle (which is marked green), and never above or below the rectangle (the red area):
So graphically, we may describe the epsilon-delta criterion as follows:
For all rectangle heights 2ε, there is a sufficiently small rectangle width 2δ, such that the graph of f restricted to (x_0 − δ, x_0 + δ) (i.e. the width of the rectangle) is entirely inside the green interior of the ε-δ-rectangle, and never in the red area above or below.
Example of a continuous function
For an example, consider a function f which is continuous everywhere - and hence also at some argument x_0. At first, consider a maximal final error of ε around f(x_0). With it, we can find a δ, such that the graph of f restricted to (x_0 − δ, x_0 + δ) is entirely situated inside the interior of the ε-δ-rectangle:
But not only for this ε: for any ε > 0 we may find a δ > 0, such that the graph of f is situated entirely inside the respective ε-δ-rectangle:
For a smaller ε, one can choose a smaller δ and the graph is again in the interior of the ε-δ-rectangle.
Even for a very small ε, some width 2δ will be small enough to get the graph into the ε-δ-rectangle.
Example for a discontinuous function
What happens if the function is discontinuous? Let's take the signum function sgn(x), which is discontinuous at 0:
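The piecewise definition that belongs here is the standard one:

```latex
\operatorname{sgn}(x) =
\begin{cases}
1 & \text{for } x > 0 \\
0 & \text{for } x = 0 \\
-1 & \text{for } x < 0
\end{cases}
```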
And here is its graph:
The graph intuitively allows us to recognize that at x_0 = 0, there certainly is a discontinuity. And we may see this using the rectangle visualization, as well. When choosing a rectangle height 2ε smaller than the jump height, then there is no δ, such that the graph can be fitted entirely inside the ε-δ-rectangle. For instance if ε = 1/2, then for any δ - no matter how small - there will always be function values above or below the ε-δ-rectangle. In fact, this applies to the function values at all arguments except for x_0 = 0 itself:
For ε = 1/2 and any choice of δ, the signum function has values above or below the ε-δ-rectangle (colored in red).
For a smaller δ, we will find points of the graph above or below the ε-δ-rectangle, as well.
Dependence of the delta choice on epsilon and the argument
How does the choice of δ depend on ε and x_0? Suppose an arbitrary ε > 0 is given in order to check continuity of f at x_0. Now, we need to find a rectangle width 2δ, such that the restriction of the graph of f to arguments inside the interval (x_0 − δ, x_0 + δ) entirely fits into the epsilon-tube (f(x_0) − ε, f(x_0) + ε). This of course requires choosing δ sufficiently small. When δ is too large, there may be an argument x in (x_0 − δ, x_0 + δ) where f has escaped the tube, i.e. it has a distance to f(x_0) larger than ε:
If for a given ε, the respective δ is chosen too large, then there may be function values above or below the ε-δ-rectangle (marked red, here).
By contrast, if δ is re-scaled to be sufficiently small, the graph entirely fits into the ε-δ-rectangle.
How small δ has to be chosen will depend on three factors: the function f, the given ε and the argument x_0. Depending on the function slope, a different δ has to be chosen (steep functions require a smaller δ). Furthermore, for a smaller ε we also have to choose a smaller δ. The following diagrams illustrate this: here, a quadratic function is plotted, which is continuous at x_0. For a smaller ε, we also need to choose a smaller δ:
The choice of δ will depend on the argument x_0, as well. The more a function changes in the neighborhood of a certain point (i.e. the steeper it is around it), the smaller we have to choose δ. The following graphic demonstrates this: the δ-value proposed there is sufficiently small at a first argument x_1, but too large at a second argument x_2:
In the vicinity of x_2, the function has a higher slope compared to x_1. Hence, we need to choose a smaller δ at x_2. Let us denote the δ-values at x_1 and x_2 correspondingly by δ_1 and δ_2 - and choose δ_2 to be smaller:
So, we have just seen that the choice of δ depends on the function f to be considered, as well as on the argument x_0 and the given ε.
For a discontinuity proof, the roles of the variables will interchange. This relates back to the interchange of the quantifiers under negation of propositions. In order to show discontinuity, we need to find an ε small enough, such that for no δ the graph of f fits entirely into the ε-δ-rectangle. In particular, if the discontinuity is caused by a jump, then 2ε must be chosen smaller than the jump height. For ε too large, there might be a δ, such that the graph does fit into the ε-δ-rectangle:
Choosing ε too large for the signum function, we get a δ, such that the graph entirely fits into the ε-δ-rectangle.
If ε is chosen small enough, then for any δ there will be function values above or below the ε-δ-rectangle.
Which ε has to be chosen again depends on the behavior of the function around x_0. After ε has been chosen, an arbitrary δ > 0 will be considered. Then, an x between x_0 − δ and x_0 + δ has to be found, such that f(x) has a distance larger than (or equal to) ε to f(x_0). That means the point (x, f(x)) has to be situated above or below the ε-δ-rectangle. Which x has to be chosen depends on a variety of parameters: the chosen ε and the arbitrarily given δ, the position of the discontinuity and the behavior of the function around it.
Exercise (Continuity of a linear function)
Prove that a linear function f: ℝ → ℝ with f(x) = mx + c (for fixed m, c ∈ ℝ) is continuous.
How to get to the proof? (Continuity of a linear function)
Graph of a function
Considering the graph of f(x) = mx + c, we see that this function is continuous everywhere.
To actually prove continuity of f, we need to check continuity at every argument x_0 ∈ ℝ. So let x_0 be an arbitrary real number. Now, choose any arbitrary maximal error ε > 0. Our task is now to find a sufficiently small δ > 0, such that |f(x) − f(x_0)| < ε for all arguments x with |x − x_0| < δ. Let us take a closer look at the inequality |f(x) − f(x_0)| < ε:
That means, |m| · |x − x_0| < ε has to be fulfilled for all x with |x − x_0| < δ, since |f(x) − f(x_0)| = |(mx + c) − (mx_0 + c)| = |m| · |x − x_0|. How do we choose δ, such that |x − x_0| < δ implies |m| · |x − x_0| < ε?
We use that the inequality contains the distance |x − x_0|, and we know that this distance is smaller than δ. This can be plugged into the inequality: |m| · |x − x_0| < |m| · δ.
If δ is now chosen such that |m| · δ ≤ ε, then |x − x_0| < δ will yield the inequality |m| · |x − x_0| < ε, which we wanted to show. The smallness condition for δ can now simply be found by resolving |m| · δ ≤ ε for δ:
Any δ satisfying 0 < δ ≤ ε/|m| could be used for the proof (assuming m ≠ 0; for m = 0 the function is constant and any δ > 0 works). For instance, we may use δ = ε/|m|. As we have now found a suitable δ, we can finally conduct the proof:
Proof (Continuity of a linear function)
Let f: ℝ → ℝ with f(x) = mx + c, m ≠ 0, and let x_0 ∈ ℝ be arbitrary. In addition, consider any ε > 0 to be given. We choose δ = ε/|m|. Let x ∈ ℝ with |x − x_0| < δ. There is:
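The missing computation, reconstructed under the assumption f(x) = mx + c with m ≠ 0 and δ = ε/|m|:

```latex
|f(x) - f(x_0)|
= |(mx + c) - (m x_0 + c)|
= |m|\,|x - x_0|
< |m|\,\delta
= |m| \cdot \frac{\varepsilon}{|m|}
= \varepsilon
```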
This shows |f(x) − f(x_0)| < ε, and establishes continuity of f at x_0 by means of the epsilon-delta criterion. Since x_0 was chosen to be arbitrary, we also know that the entire function f is continuous.
Exercise (Discontinuity of the signum function)
Prove that the signum function is discontinuous:
How to get to the proof? (Discontinuity of the signum function)
In order to prove discontinuity of the entire function, we just have to find one single argument where it is discontinuous. Considering the graph of sgn, we can already guess which argument this may be:
The function has a jump at x_0 = 0, so we expect it to be discontinuous there. It remains to choose an ε that makes it impossible to find a δ that makes the function fit into the ε-δ-rectangle. This is done by setting ε smaller than the jump height - for instance ε = 1/2. For that ε, no matter how δ is given, there will be function values above or below the ε-δ-rectangle.
So let δ > 0 be arbitrary. We need to show that there is an x with |x − 0| < δ but |sgn(x) − sgn(0)| ≥ 1/2. Let us take a look at the inequality |sgn(x) − sgn(0)| ≥ 1/2:
This inequality classifies all x that can be used for the proof. The particular x we choose also has to fulfill |x − 0| < δ:
So our x needs to fulfill both |x − 0| < δ and |sgn(x) − sgn(0)| ≥ 1/2. The second inequality may be achieved quite easily: for any x ≠ 0, the value sgn(x) is either 1 or −1. So any x ≠ 0 does always fulfill |sgn(x) − sgn(0)| = |sgn(x)| = 1 ≥ 1/2.
Now we need to fulfill the first inequality |x − 0| < δ. From the second inequality, we have just concluded x ≠ 0. This is particularly true for all x with 0 < x < δ. Therefore, we choose x to be somewhere between 0 and δ, for instance x = δ/2.
The following figure shows that this is a sensible choice. The ε-δ-rectangle with ε = 1/2 is drawn here. All points of the graph above or below that rectangle are marked red. These are exactly the points for all x inside the interval (−δ, δ) excluding 0. Our chosen x = δ/2 (red dot) is situated directly in the middle of the red part of the graph above the rectangle:
So choosing x = δ/2 is enough to complete the proof:
Proof (Discontinuity of the signum function)
We set x_0 = 0 (this is where sgn is discontinuous). In addition, we choose ε = 1/2. Let δ > 0 be arbitrary. For that given δ, we choose x = δ/2. Now, on one hand there is |x − x_0| = |δ/2 − 0| = δ/2 < δ.
But on the other hand, since δ/2 > 0: |sgn(x) − sgn(x_0)| = |1 − 0| = 1 ≥ 1/2 = ε.
So indeed, sgn is discontinuous at x_0 = 0. Hence, the function is discontinuous itself.
Relation to the sequence criterion
Now, we have two definitions of continuity: the epsilon-delta and the sequence criterion. In order to show that both definitions describe the same concept, we have to prove their equivalence. If the sequence criterion is fulfilled, it must imply that the epsilon-delta criterion holds and vice versa.
Epsilon-delta criterion implies sequence criterion
Theorem (The epsilon-delta criterion implies the sequence criterion)
Let f: D → ℝ with D ⊆ ℝ be any function. If this function satisfies the epsilon-delta criterion at x_0 ∈ D, then the sequence criterion is fulfilled at x_0, as well.
How to get to the proof? (The epsilon-delta criterion implies the sequence criterion)
Let us assume that the function f satisfies the epsilon-delta criterion at x_0. That means:
For every ε > 0, there is a δ > 0 such that |f(x) − f(x_0)| < ε for all x ∈ D with |x − x_0| < δ.
We now want to prove that the sequence criterion is satisfied, as well. So we have to show that for any sequence of arguments (x_n) converging to x_0, there also has to be lim f(x_n) = f(x_0). We therefore consider an arbitrary sequence of arguments (x_n) in the domain with lim x_n = x_0. Our job is to show that the sequence of function values (f(x_n)) converges to f(x_0). So by the definition of convergence:
For any ε > 0 there has to be an N ∈ ℕ such that |f(x_n) − f(x_0)| < ε for all n ≥ N.
Let ε > 0 be arbitrary. We have to find a suitable N with |f(x_n) − f(x_0)| < ε for all sequence elements beyond that N, i.e. for n ≥ N. The inequality seems familiar, recalling the epsilon-delta criterion. The only difference is that the argument x is replaced by a sequence element x_n - so we consider the special case x = x_n. Let us apply the epsilon-delta criterion to that special case, with our arbitrarily chosen ε being given:
There is a δ > 0, such that |f(x_n) − f(x_0)| < ε for all sequence elements x_n fulfilling |x_n − x_0| < δ.
Our goal is coming closer. Whenever a sequence element x_n is close to x_0 with |x_n − x_0| < δ, it will satisfy the inequality which we want to show, namely |f(x_n) − f(x_0)| < ε. It remains to choose an N where this is the case for all sequence elements beyond it. The convergence lim x_n = x_0 implies that |x_n − x_0| gets arbitrarily small. So by the definition of convergence, we may find an M ∈ ℕ with |x_n − x_0| < δ for all n ≥ M. This M now plays the role of our N: if n ≥ N = M, it follows that |x_n − x_0| < δ and hence |f(x_n) − f(x_0)| < ε by the epsilon-delta criterion. In fact, any N ≥ M will do the job. We now conclude our considerations and write down the proof:
Proof (The epsilon-delta criterion implies the sequence criterion)
Let f be a function satisfying the epsilon-delta criterion at x_0. Let (x_n) be a sequence inside the domain of definition, i.e. x_n ∈ D for all n ∈ ℕ, converging to x_0 as n → ∞. We would like to show that for any given ε > 0 there exists an N ∈ ℕ, such that |f(x_n) − f(x_0)| < ε holds for all n ≥ N.
So let ε > 0 be given. Following the epsilon-delta criterion, there is a δ > 0 with |f(x) − f(x_0)| < ε for all x close to x_0, i.e. for |x − x_0| < δ. As (x_n) converges to x_0, we may find an N ∈ ℕ with |x_n − x_0| < δ for all n ≥ N.
Now, let n ≥ N be arbitrary. Hence, |x_n − x_0| < δ. The epsilon-delta criterion now implies |f(x_n) − f(x_0)| < ε. This proves lim f(x_n) = f(x_0) and therefore establishes the sequence criterion.
Sequence criterion implies epsilon-delta criterion
Theorem (The sequence criterion implies the epsilon-delta criterion)
Let f: D → ℝ with D ⊆ ℝ be a function. If f satisfies the sequence criterion at x_0 ∈ D, then the epsilon-delta criterion is fulfilled there, as well.
How to get to the proof? (The sequence criterion implies the epsilon-delta criterion)
We need to show that the following implication holds: if the sequence criterion is satisfied at x_0, then the epsilon-delta criterion is satisfied at x_0.
This time, we do not show the implication directly, but use a contraposition. So we will prove the following implication (which is equivalent to the first one): if the epsilon-delta criterion is violated at x_0, then the sequence criterion is violated at x_0.
Or in other words: a function violating the epsilon-delta criterion at x_0 also violates the sequence criterion there.
So let f be a function that violates the epsilon-delta criterion at x_0. Hence, f fulfills the discontinuity version of the epsilon-delta criterion at x_0: we can find an ε > 0, such that for any δ > 0 there is an x with |x − x_0| < δ but |f(x) − f(x_0)| ≥ ε. It is our job now to prove that the sequence criterion is violated, as well. This requires choosing a sequence of arguments (x_n), converging to x_0, but with (f(x_n)) not converging to f(x_0).
This choice will be done by exploiting the discontinuity version of the epsilon-delta criterion. That version provides us with an ε > 0 for which |f(x) − f(x_0)| ≥ ε holds (so continuity is violated) for certain arguments x arbitrarily close to x_0. We will now construct our sequence exclusively out of those certain x. This will automatically prevent the convergence f(x_n) → f(x_0).
So how do we find a suitable sequence of arguments (x_n) converging to x_0? The answer is: by sending the tolerance δ through a null sequence. Practically, this is done as follows: we set δ_n = 1/n. For any n ∈ ℕ, we take one of the certain x with |x − x_0| < 1/n and |f(x) − f(x_0)| ≥ ε as our argument x_n. Then |x_n − x_0| < 1/n, but also |f(x_n) − f(x_0)| ≥ ε. These x_n make up the desired sequence: on one hand, there is |x_n − x_0| < 1/n and as 1/n → 0, the convergence x_n → x_0 holds. But on the other hand |f(x_n) − f(x_0)| ≥ ε, so the sequence of function values does not converge to f(x_0). Let us put these thoughts together in a single proof:
Proof (The sequence criterion implies the epsilon-delta criterion)
We establish the theorem by contraposition. It needs to be shown that a function violating the epsilon-delta criterion at x_0 also violates the sequence criterion at x_0. So let f: D → ℝ with D ⊆ ℝ be a function violating the epsilon-delta criterion at x_0. Hence, there is an ε > 0, such that for all δ > 0 an x ∈ D exists with |x − x_0| < δ but |f(x) − f(x_0)| ≥ ε.
So for any n ∈ ℕ, there is an x_n ∈ D with |x_n − x_0| < 1/n but |f(x_n) − f(x_0)| ≥ ε. The inequality |x_n − x_0| < 1/n can also be written x_0 − 1/n < x_n < x_0 + 1/n. As both x_0 − 1/n → x_0 and x_0 + 1/n → x_0, the sandwich theorem yields that the sequence (x_n) converges to x_0.
But since |f(x_n) − f(x_0)| ≥ ε for all n ∈ ℕ, the sequence (f(x_n)) can not converge to f(x_0). Therefore, the sequence criterion is violated at x_0 for the function f: we have found a sequence of arguments (x_n) with lim x_n = x_0, but lim f(x_n) ≠ f(x_0).
Exercise (Continuity of the quadratic function)
Prove that the function f: ℝ → ℝ with f(x) = x² is continuous.
How to get to the proof? (Continuity of the quadratic function)
For this proof, we need to show that the square function is continuous at any argument x_0 ∈ ℝ. Using the proof structure for the epsilon-delta criterion, we are given an arbitrary ε > 0. Our job is to find a suitable δ > 0, such that the inequality |f(x) − f(x_0)| < ε holds for all x with |x − x_0| < δ.
In order to find a suitable δ, we plug the definition of the function into the expression which shall be smaller than ε: |f(x) − f(x_0)| = |x² − x_0²|.
The expression |x − x_0| may easily be controlled by δ. Hence, it makes sense to construct an upper estimate for |x² − x_0²| which includes |x − x_0| and a constant. The factor |x − x_0| appears if we perform a factorization using the third binomial formula: |x² − x_0²| = |x − x_0| · |x + x_0|.
The requirement |x − x_0| < δ allows for an upper estimate of our expression: |x² − x_0²| < δ · |x + x_0|.
The δ we are looking for may only depend on ε and x_0. So the dependence on x in the factor |x + x_0| is still a problem. We resolve it by making a further upper estimate for the factor |x + x_0|. We will use a simple, but widely applied "trick" for that: an x_0 is subtracted and then added again at another place (so we are effectively adding a 0), such that the expression x − x_0 appears: |x + x_0| = |x − x_0 + 2x_0|.
The absolute value is then split using the triangle inequality: |x − x_0 + 2x_0| ≤ |x − x_0| + 2|x_0|. The term |x − x_0| is again bounded from above by δ.
So reshaping expressions and applying estimates, we obtain: |x² − x_0²| < δ · (δ + 2|x_0|).
With this inequality in hand, we are almost done. If δ is chosen in a way that δ · (δ + 2|x_0|) ≤ ε, we will get the final inequality |x² − x_0²| < ε. Such a δ could be found by solving the quadratic equation δ · (δ + 2|x_0|) = ε for δ. Or even simpler, we may estimate the left-hand side from above. We use that we may freely impose any condition on δ. If we, for instance, set δ ≤ 1, then δ · (δ + 2|x_0|) ≤ δ · (1 + 2|x_0|), which simplifies things:
So a δ with δ · (1 + 2|x_0|) ≤ ε will also do the job. This inequality can be solved for δ to get the second condition on δ (the first one was δ ≤ 1): δ ≤ ε/(1 + 2|x_0|).
So any δ fulfilling both conditions does the job: δ ≤ 1 and δ ≤ ε/(1 + 2|x_0|) have to hold. And indeed, both are true for δ = min{1, ε/(1 + 2|x_0|)}. This choice will be included in the final proof:
Proof (Continuity of the quadratic function)
Let x_0 ∈ ℝ and ε > 0 be arbitrary, and set δ = min{1, ε/(1 + 2|x_0|)}. If an argument x fulfills |x − x_0| < δ, then:
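The estimate chain that belongs here, reconstructed with δ = min{1, ε/(1 + 2|x_0|)}:

```latex
|x^2 - x_0^2|
= |x - x_0|\,|x + x_0|
\le |x - x_0|\left(|x - x_0| + 2|x_0|\right)
< \delta\left(\delta + 2|x_0|\right)
\le \delta\left(1 + 2|x_0|\right)
\le \frac{\varepsilon}{1 + 2|x_0|}\left(1 + 2|x_0|\right)
= \varepsilon
```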
This shows that the square function is continuous by the epsilon-delta criterion.
Concatenated absolute function
Exercise (Example for a proof of continuity)
Prove that the following function is continuous at the given argument x_0:
How to get to the proof? (Example for a proof of continuity)
We need to show that for each given ε > 0, there is a δ > 0, such that for all x with |x − x_0| < δ the inequality |f(x) − f(x_0)| < ε holds. So by choosing δ small enough, we may control the expression |x − x_0|. First, let us plug the definition of f into the inequality to be shown, in order to simplify it:
The objective is to "produce" as many expressions of the form |x − x_0| as possible, since we can control |x − x_0| < δ. It requires some experience with epsilon-delta proofs in order to "directly see" how this is achieved. First, we need to get rid of the double absolute value. This is done using the inequality ||a| − |b|| ≤ |a − b|. For instance, we could use the following estimate:
However, this is a bad estimate, as the resulting expression no longer tends to 0 as x → x_0. To resolve this problem, we first reshape the expression before applying the inequality ||a| − |b|| ≤ |a − b|:
A factor of |x − x_0| can be directly extracted out of this with the third binomial formula:
And we can control it by |x − x_0| < δ:
Now, the required δ must only depend on ε and x_0. Therefore, we have to get rid of the x-dependence in the remaining factor. This can be done by finding an upper bound for it which does not depend on x. As we are free to choose any δ for our proof, we may also impose on it any condition which helps us with the upper bound. In this case, δ ≤ 1 turns out to be quite useful (a higher fixed bound would do this job, as well). What follows from this choice?
As before, there is |x − x_0| < δ. As δ ≤ 1, we now have |x − x_0| < 1, which yields the x-independent upper bound we were looking for:
As we would like to show |f(x) − f(x_0)| < ε, we choose δ small enough that the final inequality holds:
So if the two conditions for δ are satisfied, we get the final inequality. In fact, both conditions will be satisfied if δ is chosen as the minimum of the two bounds, concluding the proof. So let's collect our ideas and write them down in a proof:
Proof (Example for a proof of continuity)
Let ε > 0 be arbitrary and let δ be chosen as derived above. Further, let x be an argument with |x − x_0| < δ. Then:
As |x − x_0| < δ ≤ 1, the estimates derived above apply, and it follows that |f(x) − f(x_0)| < ε.
Hence, the function is continuous at .
Exercise (Continuity of the hyperbolic function)
Prove that the function f: ℝ∖{0} → ℝ with f(x) = 1/x is continuous.
How to get to the proof? (Continuity of the hyperbolic function)
The basic pattern for epsilon-delta proofs is applied here, as well. We would like to show the implication |x − x_0| < δ ⇒ |f(x) − f(x_0)| < ε. First, let us plug in what we know and reshape our terms a bit until a factor |x − x_0| appears: |1/x − 1/x_0| = |x − x_0|/(|x| · |x_0|).
By assumption, there will be |x − x_0| < δ, of which we can make use:
The choice of δ may again only depend on ε and x_0, so we need a smart estimate for the factor 1/|x| in order to get rid of the x-dependence. To do so, we impose the condition δ ≤ |x_0|/2.
Why was δ ≤ |x_0|/2 and not, say, δ ≤ 1 chosen? The explanation is quite simple: we need a δ-neighborhood inside the domain of definition of f. If we had simply chosen δ ≤ 1, we might have gotten kicked out of this domain. For instance, if x_0 is closer to 0 than 1, the following problem appears:
Some arguments x with |x − x_0| < δ are then not inside the domain of definition, as the interval (x_0 − δ, x_0 + δ) contains the point 0, where 1/x cannot be defined at all.
A smarter choice for δ, such that the δ-neighborhood doesn't touch the y-axis, is half of the distance of x_0 to it, i.e. δ ≤ |x_0|/2. A third of this distance or other fractions smaller than 1 would also be possible: |x_0|/3, |x_0|/4 or |x_0|/10.
As we chose δ ≤ |x_0|/2 and |x − x_0| < δ, there is |x| ≥ |x_0| − |x − x_0| > |x_0| − |x_0|/2 = |x_0|/2. This allows for the upper bound 1/|x| < 2/|x_0|, and we may write:
So we get the estimate: |1/x − 1/x_0| = |x − x_0|/(|x| · |x_0|) < δ · 2/|x_0|².
Now, we want δ · 2/|x_0|² ≤ ε to hold. Hence we choose δ = min{|x_0|/2, ε · |x_0|²/2}.
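Putting the estimates together, the final chain (a reconstruction, assuming f(x) = 1/x and δ = min{|x_0|/2, ε|x_0|²/2}) reads:

```latex
\left|\frac{1}{x} - \frac{1}{x_0}\right|
= \frac{|x - x_0|}{|x|\,|x_0|}
< \frac{\delta}{\tfrac{|x_0|}{2}\,|x_0|}
= \delta \cdot \frac{2}{|x_0|^2}
\le \frac{\varepsilon |x_0|^2}{2} \cdot \frac{2}{|x_0|^2}
= \varepsilon
```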
This chapter is nothing short of mathemagical!!! You will begin with a study of square roots of perfect squares (positive and negative roots) and classifying numbers as rational or irrational. From there it will be time to explore the Pythagorean Theorem and see how the Theorem leads to the Distance Formula. You will then learn about the Midpoint Formula before spending time with the foundations of similar figures. Similar figures act as a great connection to special right triangles -> 30-60-90 and 45-45-90 triangles!!! The chapter closes out with a deep dive into trigonometry where sine, cosine, and tangent are learned!! You may have heard of this as SOH-CAH-TOA. Word problems involving angles of elevation and depression cap off an epic chapter of math! Trigonometry sounds scary…but you will see it can be made simple – now go check it all out!!!
Last Updated on October 20, 2022 by Shahzad Arsi
What are processing devices? What are their uses, how do they work, and what are the benefits of a processing unit?
Processing devices are computer chips that perform mathematical and logical operations. They are the key to making a computer work. Central processing units (CPUs) and graphics processing units (GPUs) are the most common types of processing devices.
In this article, we will look in depth at what processing devices are, how they work, their benefits, and the different types of processing devices available!
What are Processing Devices
A processing device is the part of a computer system that executes software instructions. The instructions are typically stored in a computer-readable medium, such as Read Only Memory (ROM), Random Access Memory (RAM), or a hard drive. The processor, or central processing unit (CPU), executes the instructions to carry out the desired task.
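The "executes instructions" idea can be pictured with a toy fetch-decode-execute loop in Python (purely illustrative; real processors are vastly more complex, and the opcodes here are made up):

```python
# Toy fetch-decode-execute loop. Instructions are (opcode, operand)
# pairs stored in a list that stands in for program memory.
def run(program):
    acc = 0  # a single accumulator register
    pc = 0   # program counter
    while pc < len(program):
        opcode, operand = program[pc]   # fetch
        pc += 1                         # advance to the next instruction
        if opcode == "LOAD":            # decode + execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "MUL":
            acc *= operand
    return acc

print(run([("LOAD", 2), ("ADD", 3), ("MUL", 4)]))  # 20
```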
Most personal computers contain one or more processing devices. The device may be an Intel x86-based microprocessor, such as those found in Microsoft Windows and Apple macOS computers, or it may be a custom processor designed for a specific application, such as digital signal processors (DSPs) and graphics processing units (GPUs).
Processing devices can also be found in embedded systems, which are computer systems used in devices such as cars, home appliances, medical equipment, and industrial control systems.
How do Processing Devices Work?
Processing devices are important in all walks of life. They play a vital role in the production of goods and services. But how do they work?
Processing devices take an input and produce an output. The input can be anything from raw materials to data. The output can be anything from products to information.
Processing devices can be divided into two categories: discrete and continuous. Discrete processing devices take a finite number of discrete steps to produce an output. Continuous processing devices take a continuous stream of data and produce an output that is a function of the data stream.
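As a small sketch of that distinction (our own illustration, not from the article), a discrete processor handles a finite batch of inputs, while a continuous one produces outputs as a function of an ongoing stream:

```python
def process_batch(values):
    # Discrete processing: a finite number of steps, one per input value.
    return [v * 2 for v in values]

def process_stream(stream):
    # Continuous processing: results are yielded one by one as data
    # arrives, for as long as the stream keeps producing values.
    for v in stream:
        yield v * 2

print(process_batch([1, 2, 3]))            # [2, 4, 6]
print(list(process_stream(iter([4, 5]))))  # [8, 10]
```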
Benefits of Processing Devices
The benefits of processing devices are vast. They provide a means for people to communicate with one another, access information, and make transactions. In addition, they are essential tools for managing personal finances and conducting business. Moreover, processing devices are also used for entertainment purposes. For example, people use them to watch movies and play video games.
Below we have mentioned some of the most important benefits of processing devices.
- Processing devices are important tools for any office or business.
- They help to keep the office organized and running smoothly.
- Processing devices can help with tasks such as copying, printing, scanning, and faxing.
- They make it easy for employees to do their jobs efficiently.
- Processing devices also save time and money for businesses.
- By using processing devices, businesses can improve their productivity and bottom line.
Different Types of Processing Devices
Although there are various types of processing devices in computers, we will discuss some of the most common ones below.
1. Central Processing Unit (CPU)
The CPU, or central processing unit, is the most vital part of any computer. The CPU is responsible for executing the instructions that make up a computer program. It also controls all of the other components of the computer, including the memory, disk drives, and display.
The speed and power of a computer’s CPU have a major impact on its performance. CPUs are available in a wide range of speeds and prices. Some CPUs are designed for use in desktop computers, while others are designed for use in laptops or mobile devices.
2. Graphics Processing Unit (GPU)
The Graphics Processing Unit (GPU) is a computer chip that is specifically designed for graphics processing. GPUs are used to accelerate the creation of images in a display device, such as a computer monitor or television. GPUs can also be used to accelerate the processing of video and other multimedia content.
Many modern AMD GPUs are based on the Graphics Core Next (GCN) architecture developed by AMD; Nvidia builds its GPUs on architectures of its own. The GCN architecture features programmable shaders, a geometry engine, and compute units that can be used to perform various tasks related to graphics processing.
GPUs are used in a variety of devices, including desktop and laptop computers, gaming consoles, and mobile devices. In addition to graphics acceleration, GPUs can also be used for general-purpose computing tasks such as cryptocurrency mining and machine learning.
3. Motherboard

A motherboard is the main circuit board in a computer. It hosts the central processing unit, or CPU, the memory, and the chipset. The motherboard also provides connectors for attaching additional devices, such as a video card, network card, or hard drive.
Motherboards come in a variety of shapes and sizes and are typically named for their form factor. ATX and microATX are the most popular form factors.
4. Network Card
A network card is a piece of hardware that processes data packets and sends them over a computer network. It is installed in a computer’s expansion slot and connects to the network using an Ethernet cable. Network cards come in many different varieties, including wired and wireless models. They are used to connect computers to networks, routers, modems, and other devices.
5. Sound Card
A sound card, also known as an audio card, is a processing device that turns digital data into audible sound. It is inserted into the computer’s expansion slot and allows the user to listen to music, watch movies, and play games with sound effects.
There are many different types of sound cards on the market, ranging from basic models that just provide basic functionality to high-end cards that offer features such as surround sound and advanced audio processing. When shopping for a sound card, it is important to consider your needs and budget.
6. Video Card
A video card, also called a graphics card, is a type of processing device that helps a computer system generate images. There are two main types of video cards: dedicated and integrated. A dedicated video card has its own memory and processor, while an integrated video card shares resources with the system’s other components.
Video cards come in a variety of sizes and shapes, and are used in a wide range of applications. They are commonly found in desktop and laptop computers, but can also be used in gaming consoles and other devices. In general, the more powerful the video card, the better the image quality.
Choosing the right video card is important for getting the most out of your computer system. If you need a lot of graphics power, for example for gaming or graphic design, then you will need to buy a dedicated video card.
The Bottom Line
In conclusion, processing devices are important tools used in a variety of industries. They are used to control and monitor machines and help to optimize the performance of systems. There are many different types of processing devices, each with its own unique capabilities.
If you are looking for a processing device for your specific needs, be sure to do your research to find the best option for you. |
A hydrogen bond is the electrostatic attraction between two polar groups that occurs when a hydrogen (H) atom covalently bound to a highly electronegative atom such as nitrogen (N), oxygen (O), or fluorine (F) experiences the electrostatic field of another highly electronegative atom nearby.
Hydrogen bonds can occur between molecules (intermolecular) or within different parts of a single molecule (intramolecular). Depending on geometry and environment, the hydrogen bond free energy content is between 1 and 5 kcal/mol. This makes it stronger than a van der Waals interaction, but weaker than covalent or ionic bonds. This type of bond can occur in inorganic molecules such as water and in organic molecules like DNA and proteins.
Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides that have much weaker hydrogen bonds. Intramolecular hydrogen bonding is partly responsible for the secondary and tertiary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural.
The hydrogen bond is an attractive interaction between a hydrogen atom from a molecule or a molecular fragment X–H in which X is more electronegative than H, and an atom or a group of atoms in the same or a different molecule, in which there is evidence of bond formation.
An accompanying detailed technical report provides the rationale behind the new definition.
A hydrogen atom attached to a relatively electronegative atom will play the role of the hydrogen bond donor. This electronegative atom is usually fluorine, oxygen, or nitrogen. A hydrogen attached to carbon can also participate in hydrogen bonding when the carbon atom is bound to electronegative atoms, as is the case in chloroform, CHCl3. An example of a hydrogen bond donor is the hydrogen from the hydroxyl group of ethanol, which is bonded to an oxygen.
In a hydrogen bond, the electronegative atom not covalently attached to the hydrogen is named the proton acceptor, whereas the one covalently bound to the hydrogen is named the proton donor.
In the donor molecule, the electronegative atom attracts the electron cloud from around the hydrogen nucleus of the donor, and, by decentralizing the cloud, leaves the atom with a positive partial charge. Because of the small size of hydrogen relative to other atoms and molecules, the resulting charge, though only partial, represents a large charge density. A hydrogen bond results when this strong positive charge density attracts a lone pair of electrons on another heteroatom, which then becomes the hydrogen-bond acceptor.
The hydrogen bond is often described as an electrostatic dipole-dipole interaction. However, it also has some features of covalent bonding: it is directional and strong, produces interatomic distances shorter than the sum of the van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a type of valence. These covalent features are more substantial when acceptors bind hydrogens from more electronegative donors.
The partially covalent nature of a hydrogen bond raises the following questions: "To which molecule or atom does the hydrogen nucleus belong?" and "Which should be labeled 'donor' and which 'acceptor'?" Usually, this is simple to determine on the basis of interatomic distances in the X−H···Y system, where the dots represent the hydrogen bond: the X−H distance is typically ≈110 pm, whereas the H···Y distance is ≈160 to 200 pm. Liquids that display hydrogen bonding (such as water) are called associated liquids.
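Those typical distances make the donor/acceptor assignment easy to automate; here is a minimal sketch (the function name and the decision rule are ours, based only on the distances just quoted):

```python
def label_atoms(d_xh_pm, d_hy_pm):
    # In an X−H···Y system, the covalent X−H distance (~110 pm) is much
    # shorter than the hydrogen-bond H···Y distance (~160–200 pm), so the
    # heavy atom closer to the hydrogen is the proton donor.
    if d_xh_pm < d_hy_pm:
        return ("X is the donor", "Y is the acceptor")
    return ("Y is the donor", "X is the acceptor")

print(label_atoms(110, 180))  # ('X is the donor', 'Y is the acceptor')
```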
- F−H···:F (161.5 kJ/mol or 38.6 kcal/mol)
- O−H···:N (29 kJ/mol or 6.9 kcal/mol)
- O−H···:O (21 kJ/mol or 5.0 kcal/mol)
- N−H···:N (13 kJ/mol or 3.1 kcal/mol)
- N−H···:O (8 kJ/mol or 1.9 kcal/mol)
- H3O+···:OH2 (18 kJ/mol or 4.3 kcal/mol; data obtained using molecular dynamics as detailed in the reference and should be compared to 7.9 kJ/mol for bulk water, obtained using the same molecular dynamics.)
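The paired kJ/mol and kcal/mol figures in this list are consistent with the standard conversion 1 kcal = 4.184 kJ, which is easy to verify:

```python
KJ_PER_KCAL = 4.184  # thermochemical calorie conversion factor

# (bond, kJ/mol, kcal/mol) as listed above
bond_energies = [
    ("F−H···:F", 161.5, 38.6),
    ("O−H···:N", 29, 6.9),
    ("O−H···:O", 21, 5.0),
    ("N−H···:N", 13, 3.1),
    ("N−H···:O", 8, 1.9),
]

# Each kcal/mol value is the kJ/mol value divided by 4.184, to one decimal.
for name, kj, kcal in bond_energies:
    assert round(kj / KJ_PER_KCAL, 1) == kcal, name
print("all conversions check out")
```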
Quantum chemical calculations of the relevant interresidue potential constants (compliance constants) revealed large differences between individual H bonds of the same type. For example, the central interresidue N−H···N hydrogen bond between guanine and cytosine is much stronger in comparison to the N−H···N bond between the adenine-thymine pair.
The length of hydrogen bonds depends on bond strength, temperature, and pressure. The bond strength itself is dependent on temperature, pressure, bond angle, and environment (usually characterized by local dielectric constant). The typical length of a hydrogen bond in water is 197 pm. The ideal bond angle depends on the nature of the hydrogen bond donor. The following hydrogen bond angles between a hydrofluoric acid donor and various acceptors have been determined experimentally:
| Acceptor···donor | VSEPR geometry | Angle (°) |
In the book The Nature of the Chemical Bond, Linus Pauling credits T. S. Moore and T. F. Winmill with the first mention of the hydrogen bond, in 1912. Moore and Winmill used the hydrogen bond to account for the fact that trimethylammonium hydroxide is a weaker base than tetramethylammonium hydroxide. The description of hydrogen bonding in its better-known setting, water, came some years later, in 1920, from Latimer and Rodebush. In that paper, Latimer and Rodebush cite work by a fellow scientist at their laboratory, Maurice Loyal Huggins, saying, "Mr. Huggins of this laboratory in some work as yet unpublished, has used the idea of a hydrogen kernel held between two atoms as a theory in regard to certain organic compounds."
Hydrogen bonds in water
The most ubiquitous and perhaps simplest example of a hydrogen bond is found between water molecules. In a discrete water molecule, there are two hydrogen atoms and one oxygen atom. Two molecules of water can form a hydrogen bond between them; the simplest case, when only two molecules are present, is called the water dimer and is often used as a model system. When more molecules are present, as is the case with liquid water, more bonds are possible because the oxygen of one water molecule has two lone pairs of electrons, each of which can form a hydrogen bond with a hydrogen on another water molecule. This can repeat such that every water molecule is H-bonded with up to four other molecules, as shown in the figure (two through its two lone pairs, and two through its two hydrogen atoms). Hydrogen bonding strongly affects the crystal structure of ice, helping to create an open hexagonal lattice. The density of ice is less than the density of water at the same temperature; thus, the solid phase of water floats on the liquid, unlike most other substances.
Liquid water's high boiling point is due to the high number of hydrogen bonds each molecule can form, relative to its low molecular mass. Owing to the difficulty of breaking these bonds, water has a very high boiling point, melting point, and viscosity compared to otherwise similar liquids not conjoined by hydrogen bonds. Water is unique because its oxygen atom has two lone pairs and two hydrogen atoms, meaning that the total number of bonds of a water molecule is up to four. For example, hydrogen fluoride—which has three lone pairs on the F atom but only one H atom—can form only two bonds (ammonia has the opposite problem: three hydrogen atoms but only one lone pair).
The exact number of hydrogen bonds formed by a molecule of liquid water fluctuates with time and depends on the temperature. From TIP4P liquid water simulations at 25 °C, it was estimated that each water molecule participates in an average of 3.59 hydrogen bonds. At 100 °C, this number decreases to 3.24 due to the increased molecular motion and decreased density, while at 0 °C, the average number of hydrogen bonds increases to 3.69. A more recent study found a much smaller number of hydrogen bonds: 2.357 at 25 °C. The differences may be due to the use of a different method for defining and counting the hydrogen bonds.
Where the bond strengths are more equivalent, one might instead find the atoms of two interacting water molecules partitioned into two polyatomic ions of opposite charge, specifically hydroxide (OH−) and hydronium (H3O+). (Hydronium ions are also known as "hydroxonium" ions.)
- 2 H2O ⇌ OH− + H3O+
Indeed, in pure water under conditions of standard temperature and pressure, this latter formulation is applicable only rarely; on average about one in every 5.5 × 10⁸ molecules gives up a proton to another water molecule, in accordance with the value of the dissociation constant for water under such conditions. It is a crucial part of the uniqueness of water.
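That "one in 5.5 × 10⁸" figure is a back-of-the-envelope consequence of water's self-ionization (the 55.5 mol/L molarity of pure water and the 10⁻⁷ mol/L hydronium concentration are our inputs here, not stated in the article):

```python
water_molarity = 55.5        # mol/L, pure water at 25 °C
hydronium_molarity = 1e-7    # mol/L, from Kw = 1e-14 at 25 °C

# Fraction of water molecules that have given up a proton at any instant:
ratio = water_molarity / hydronium_molarity
print(f"about one in {ratio:.2e} molecules")  # about one in 5.55e+08 molecules
```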
Because water may form hydrogen bonds with solute proton donors and acceptors, it may competitively inhibit the formation of solute intermolecular or intramolecular hydrogen bonds. Consequently, hydrogen bonds between or within solute molecules dissolved in water are almost always unfavorable relative to hydrogen bonds between water and the donors and acceptors for hydrogen bonds on those solutes. Hydrogen bonds between water molecules have an average lifetime of 10⁻¹¹ seconds, or 10 picoseconds.
Bifurcated and over-coordinated hydrogen bonds in water
A single hydrogen atom can participate in two hydrogen bonds, rather than one. This type of bonding is called "bifurcated" (split in two or "two-forked"). It can exist, for instance, in complex natural or synthetic organic molecules. It has been suggested that a bifurcated hydrogen atom is an essential step in water reorientation.
Acceptor-type hydrogen bonds (terminating on an oxygen's lone pairs) are more likely to form bifurcation (this is called an overcoordinated oxygen, OCO) than are donor-type hydrogen bonds, which begin on the same oxygen's hydrogens.
Hydrogen bonds in DNA and proteins
Hydrogen bonding also plays an important role in determining the three-dimensional structures adopted by proteins and nucleic bases. In these macromolecules, bonding between parts of the same macromolecule cause it to fold into a specific shape, which helps determine the molecule's physiological or biochemical role. For example, the double helical structure of DNA is due largely to hydrogen bonding between its base pairs (as well as pi stacking interactions), which link one complementary strand to the other and enable replication.
In the secondary structure of proteins, hydrogen bonds form between the backbone oxygens and amide hydrogens. When the spacing of the amino acid residues participating in a hydrogen bond occurs regularly between positions i and i + 4, an alpha helix is formed. When the spacing is less, between positions i and i + 3, then a 3₁₀ helix is formed. When two strands are joined by hydrogen bonds involving alternating residues on each participating strand, a beta sheet is formed. Hydrogen bonds also play a part in forming the tertiary structure of proteins through interactions between R-groups. (See also protein folding.)
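The residue-spacing rule just described can be captured as a small lookup (a sketch only; real secondary-structure assignment uses geometric criteria, not just sequence spacing):

```python
def helix_type(spacing):
    # An i -> i + 4 backbone hydrogen bond gives the alpha helix;
    # the tighter i -> i + 3 spacing gives the 3_10 helix.
    return {4: "alpha helix", 3: "3_10 helix"}.get(spacing, "not a standard helix")

print(helix_type(4))  # alpha helix
print(helix_type(3))  # 3_10 helix
```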
The role of hydrogen bonds in protein folding has also been linked to osmolyte-induced protein stabilization. Protective osmolytes, such as trehalose and sorbitol, shift the protein folding equilibrium toward the folded state in a concentration-dependent manner. While the prevalent explanation for osmolyte action relies on excluded-volume effects, which are entropic in nature, recent circular dichroism (CD) experiments have shown osmolytes to act through an enthalpic effect. The molecular mechanism of their role in protein stabilization is still not well established, though several mechanisms have been proposed. Recently, computer molecular dynamics simulations suggested that osmolytes stabilize proteins by modifying the hydrogen bonds in the protein hydration layer.
Several studies have shown that hydrogen bonds play an important role for the stability between subunits in multimeric proteins. For example, a study of sorbitol dehydrogenase displayed an important hydrogen bonding network which stabilizes the tetrameric quaternary structure within the mammalian sorbitol dehydrogenase protein family.
A protein backbone hydrogen bond incompletely shielded from water attack is a dehydron. Dehydrons promote the removal of surrounding water through protein or ligand binding. This exogenous dehydration enhances the electrostatic interaction between the amide and carbonyl groups by de-shielding their partial charges. Furthermore, the dehydration stabilizes the hydrogen bond by destabilizing the nonbonded state consisting of dehydrated isolated charges.
Hydrogen bonds in polymers
Many polymers are strengthened by hydrogen bonds in their main chains. Among the synthetic polymers, the best known example is nylon, where hydrogen bonds occur in the repeat unit and play a major role in crystallization of the material. The bonds occur between carbonyl and amine groups in the amide repeat unit. They effectively link adjacent chains to create crystals, which help reinforce the material. The effect is greatest in aramid fibre, where hydrogen bonds stabilize the linear chains laterally. The chain axes are aligned along the fibre axis, making the fibres extremely stiff and strong. Hydrogen bonds are also important in the structure of cellulose and derived polymers in its many different forms in nature, such as wood and natural fibres such as cotton and flax.
The hydrogen bond networks make both natural and synthetic polymers sensitive to humidity levels in the atmosphere because water molecules can diffuse into the surface and disrupt the network. Some polymers are more sensitive than others. Thus nylons are more sensitive than aramids, and nylon-6 more sensitive than nylon-11.
Symmetric hydrogen bond
A symmetric hydrogen bond is a special type of hydrogen bond in which the proton is spaced exactly halfway between two identical atoms. The strength of the bond to each of those atoms is equal. It is an example of a three-center four-electron bond. This type of bond is much stronger than a "normal" hydrogen bond. The effective bond order is 0.5, so its strength is comparable to a covalent bond. It is seen in ice at high pressure, and also in the solid phase of many anhydrous acids such as hydrofluoric acid and formic acid at high pressure. It is also seen in the bifluoride ion [F−H−F]−.
Symmetric hydrogen bonds have been observed recently spectroscopically in formic acid at high pressure (>GPa). Each hydrogen atom forms a partial covalent bond with two atoms rather than one. Symmetric hydrogen bonds have been postulated in ice at high pressure (Ice X). Low-barrier hydrogen bonds form when the distance between two heteroatoms is very small.
Dihydrogen bond
The hydrogen bond can be compared with the closely related dihydrogen bond, which is also an intermolecular bonding interaction involving hydrogen atoms. These structures have been known for some time, and well characterized by crystallography; however, an understanding of their relationship to the conventional hydrogen bond, ionic bond, and covalent bond remains unclear. Generally, the hydrogen bond is characterized by a proton acceptor that is a lone pair of electrons in nonmetallic atoms (most notably in the nitrogen, and chalcogen groups). In some cases, these proton acceptors may be pi-bonds or metal complexes. In the dihydrogen bond, however, a metal hydride serves as a proton acceptor, thus forming a hydrogen-hydrogen interaction. Neutron diffraction has shown that the molecular geometry of these complexes is similar to hydrogen bonds, in that the bond length is very adaptable to the metal complex/hydrogen donor system.
Advanced theory of the hydrogen bond
In 1999, Isaacs et al. showed from interpretations of the anisotropies in the Compton profile of ordinary ice that the hydrogen bond is partly covalent. However, this interpretation was challenged by Ghanty et al., who concluded that considering electrostatic forces alone could explain the experimental results. Some NMR data on hydrogen bonds in proteins also indicate covalent bonding.
Most generally, the hydrogen bond can be viewed as a metric-dependent electrostatic scalar field between two or more intermolecular bonds. This is slightly different from the intramolecular bound states of, for example, covalent or ionic bonds; however, hydrogen bonding is generally still a bound state phenomenon, since the interaction energy has a net negative sum. The initial theory of hydrogen bonding proposed by Linus Pauling suggested that the hydrogen bonds had a partial covalent nature. This remained a controversial conclusion until the late 1990s when NMR techniques were employed by F. Cordier et al. to transfer information between hydrogen-bonded nuclei, a feat that would only be possible if the hydrogen bond contained some covalent character. While much experimental data has been recovered for hydrogen bonds in water, for example, that provide good resolution on the scale of intermolecular distances and molecular thermodynamics, the kinetic and dynamical properties of the hydrogen bond in dynamic systems remain unchanged.
Dynamics probed by spectroscopic means
The dynamics of hydrogen bond structures in water can be probed by the IR spectrum of OH stretching vibration. In the hydrogen bonding network in protic organic ionic plastic crystals (POIPCs), which are a type of phase change material exhibiting solid-solid phase transitions prior to melting, variable-temperature infrared spectroscopy can reveal the temperature dependence of hydrogen bonds and the dynamics of both the anions and the cations. The sudden weakening of hydrogen bonds during the solid-solid phase transition seems to be coupled with the onset of orientational or rotational disorder of the ions.
Hydrogen bonding phenomena
- Dramatically higher boiling points of NH3, H2O, and HF compared to the heavier analogues PH3, H2S, and HCl.
- Increase in the melting point, boiling point, solubility, and viscosity of many compounds can be explained by the concept of hydrogen bonding.
- Occurrence of proton tunneling during DNA replication is believed to be responsible for cell mutations.
- Viscosity of anhydrous phosphoric acid and of glycerol
- Dimer formation in carboxylic acids and hexamer formation in hydrogen fluoride, which occur even in the gas phase, resulting in gross deviations from the ideal gas law.
- Pentamer formation of water and alcohols in apolar solvents.
- High water solubility of many compounds such as ammonia is explained by hydrogen bonding with water molecules.
- Negative azeotropy of mixtures of HF and water
- Deliquescence of NaOH is caused in part by reaction of OH− with moisture to form hydrogen-bonded H
2 species. An analogous process happens between NaNH2 and NH3, and between NaF and HF.
- The fact that ice is less dense than liquid water is due to a crystal structure stabilized by hydrogen bonds.
- The presence of hydrogen bonds can cause an anomaly in the normal succession of states of matter for certain mixtures of chemical compounds as temperature increases or decreases. These compounds can be liquid until a certain temperature, then solid even as the temperature increases, and finally liquid again as the temperature rises over the "anomaly interval"
- Smart rubber utilizes hydrogen bonding as its sole means of bonding, so that it can "heal" when torn, because hydrogen bonding can occur on the fly between two surfaces of the same polymer.
- Strength of nylon and cellulose fibres.
- Wool, being a protein fibre, is held together by hydrogen bonds, causing wool to recoil when stretched. However, washing at high temperatures can permanently break the hydrogen bonds and a garment may permanently lose its shape.
- Sweetman, A. M.; Jarvis, S. P.; Sang, Hongqian; Lekkas, I.; Rahe, P.; Wang, Yu; Wang, Jianbo; Champness, N.R.; Kantorovich, L.; Moriarty, P. (2014). "Mapping the force field of a hydrogen-bonded assembly". Nature Communications. 5. Bibcode:2014NatCo...5E3931S. doi:10.1038/ncomms4931. PMC . PMID 24875276.
- Hapala, Prokop; Kichin, Georgy; Wagner, Christian; Tautz, F. Stefan; Temirov, Ruslan; Jelínek, Pavel (2014-08-19). "Mechanism of high-resolution STM/AFM imaging with functionalized tips". Physical Review B. 90 (8): 085421. doi:10.1103/PhysRevB.90.085421.
- Hämäläinen, Sampsa K.; van der Heijden, Nadine; van der Lit, Joost; den Hartog, Stephan; Liljeroth, Peter; Swart, Ingmar (2014-10-31). "Intermolecular Contrast in Atomic Force Microscopy Images without Intermolecular Bonds". Physical Review Letters. 113 (18): 186102. doi:10.1103/PhysRevLett.113.186102. PMID 25396382.
- IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "hydrogen bond".
- John R. Sabin (1971). "Hydrogen bonds involving sulfur. I. Hydrogen sulfide dimer". J. Am. Chem. Soc. 93 (15): 3613–3620. doi:10.1021/ja00744a012.
- Arunan, Elangannan; Desiraju, Gautam R.; Klein, Roger A.; Sadlej, Joanna; Scheiner, Steve; Alkorta, Ibon; Clary, David C.; Crabtree, Robert H.; Dannenberg, Joseph J.; Hobza, Pavel; Kjaergaard, Henrik G.; Legon, Anthony C.; Mennucci, Benedetta; Nesbitt, David J. (2011). "Definition of the hydrogen bond". Pure Appl. Chem. 83 (8): 1637–1641. doi:10.1351/PAC-REC-10-01-02.
- Arunan, Elangannan; Desiraju, Gautam R.; Klein, Roger A.; Sadlej, Joanna; Scheiner, Steve; Alkorta, Ibon; Clary, David C.; Crabtree, Robert H.; Dannenberg, Joseph J.; Hobza, Pavel; Kjaergaard, Henrik G.; Legon, Anthony C.; Mennucci, Benedetta; Nesbitt, David J. (2011). "Defining the hydrogen bond: An Account". Pure Appl. Chem. 83 (8): 1619–1636. doi:10.1351/PAC-REP-10-01-01.
- Beijer, Felix H.; Kooijman, Huub; Spek, Anthony L.; Sijbesma, Rint P.; Meijer, E. W. (1998). "Self-Complementarity Achieved through Quadruple Hydrogen Bonding". Angew. Chem. Int. Ed. 37 (1–2): 75–78. doi:10.1002/(SICI)1521-3773(19980202)37:1/2<75::AID-ANIE75>3.0.CO;2-R.
- Campbell, Neil A.; Brad Williamson; Robin J. Heyden (2006). Biology: Exploring Life. Boston, Massachusetts: Pearson Prentice Hall. ISBN 0-13-250882-6.
In computer programming, assembly language or assembler language, often abbreviated asm, is any low-level programming language in which there is a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language may also be called symbolic machine code. Assembly code is converted into executable machine code by a utility program referred to as an assembler. The conversion process is referred to as assembly, as in assembling the source code. Assembly language usually has one statement per machine instruction, but comments and statements that are assembler directives, macros, and symbolic labels of program and memory locations are often also supported. The term "assembler" is generally attributed to Wilkes, Wheeler and Gill in their book The preparation of programs for an electronic digital computer, who, however, used the term to mean "a program that assembles another program consisting of several sections into a single program".
Computer instructions are represented, in a computer, as sequences of bits. Assembly language is very closely related to machine language, and there is usually a straightforward way to translate programs written in assembly language into machine language. Assembly language is usually a direct translation of the machine language; one instruction in assembly language corresponds to one instruction in the machine language.
Because of the close relationship between machine and assembly languages, each different machine architecture usually has its own assembly language (in fact, a particular architecture may have several), and each is unique.
RAM is the place where programs are loaded in order to be executed. The first reason to work with assembler is that it provides the opportunity of knowing more about the operation of your PC, which allows the development of software in a more consistent manner.
The second reason is the total control of the PC which you can have with the use of assembler. Another reason is that assembly programs are quicker, smaller, and more capable than ones created with other languages.
Lastly, the assembler allows ideal optimization of programs, be it in their size or in their execution. There are many ways to represent the same numeric value. Long ago, humans used sticks to count, and later learned how to draw pictures of sticks in the ground and eventually on paper.
So, the number 5 was first represented as ||||| (five sticks). Later on, the Romans began using different symbols for multiple numbers of sticks: III still meant three sticks, but a V now meant five sticks, and an X was used to represent ten of them!
Using sticks to count was a great idea for its time. And using symbols instead of real sticks was much better. Most people today use decimal representation to count. In the decimal system there are 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. The value of a number is formed by the sum of each digit multiplied by the base (in this case 10, because there are 10 digits in the decimal system) raised to the power of the digit's position, counting from zero at the rightmost digit.
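As a quick check of that rule, here is a short Python sketch (the digit lists and the helper name positional_value are arbitrary examples) that sums digit times base raised to the position:

```python
def positional_value(digits, base=10):
    # Sum each digit multiplied by the base raised to the power of its
    # position, counting positions from zero at the rightmost digit.
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += digit * base ** position
    return total

print(positional_value([5, 3, 9]))      # 539, since 5*100 + 3*10 + 9*1
print(positional_value([1, 0, 1], 2))   # 5, i.e. binary 101
```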
Computers are not as smart as humans are (or not yet); it's easy to make an electronic machine with two states: on and off, or 1 and 0. Computers use the binary system, which uses 2 digits: 0 and 1, and thus the base is 2. There is a convention to add "b" at the end of a binary number; this way we can determine that 101b is a binary number with decimal value of 5. Hexadecimal numbers are compact and easy to read.
It is very easy to convert numbers from the binary system to the hexadecimal system and vice versa; every nibble (4 bits) can be converted to a hexadecimal digit using this table:

Decimal (base 10)  Binary (base 2)  Hexadecimal (base 16)
 0                 0000             0
 1                 0001             1
 2                 0010             2
 3                 0011             3
 4                 0100             4
 5                 0101             5
 6                 0110             6
 7                 0111             7
 8                 1000             8
 9                 1001             9
10                 1010             A
11                 1011             B
12                 1100             C
13                 1101             D
14                 1110             E
15                 1111             F

There is a convention to add "h" at the end of a hexadecimal number; this way we can determine that 5Fh is a hexadecimal number with decimal value of 95. We also add "0" (zero) at the beginning of hexadecimal numbers that begin with a letter A..
F, for example 0Eh. In order to convert from the decimal system to any other system, it is required to divide the decimal value by the base of the desired system; each time, you should keep the result and remember the remainder, and the division process continues until the result is zero.
The remainders are then used to represent a value in that system. Let's convert the value of 39 (base 10) to the hexadecimal system (base 16): 39 / 16 = 2 with remainder 7, and 2 / 16 = 0 with remainder 2. Reading the remainders in reverse, we get the hexadecimal number 27h. All remainders were below 10 in the above example, so we do not use any letters.

Signed Numbers

There is no way to say for sure whether the hexadecimal byte 0FFh is positive or negative; it can represent both the decimal value "255" and "-1". Using this complex way to represent negative numbers has some meaning: in math, when you add "-5" to "5" you should get zero. This is what happens when the processor adds the two bytes 5 and 251 (0FBh, which is -5 as a byte): the result gets over 255, and because of the overflow the processor gets zero!
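The repeated-division algorithm and the signed-byte behaviour described above can both be sketched in a few lines of Python (to_base is a hypothetical helper name, not part of any library):

```python
DIGITS = "0123456789ABCDEF"

def to_base(value, base):
    # Repeatedly divide by the base, collecting remainders; the
    # remainders read in reverse order spell out the converted number.
    if value == 0:
        return "0"
    result = []
    while value > 0:
        value, remainder = divmod(value, base)
        result.append(DIGITS[remainder])
    return "".join(reversed(result))

print(to_base(39, 16))   # 27, i.e. 27h
print(to_base(39, 2))    # 100111

# Signed bytes: 0FFh is 255 unsigned but -1 in two's complement,
# and adding 5 to 251 (0FBh, the byte form of -5) overflows to zero.
print(0xFF)              # 255
print(0xFF - 256)        # -1
print((251 + 5) % 256)   # 0
```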
Tutorial Bahasa Rakitan 1. Uploaded by Prima Wirawan.
Summary: Use the global keyword to declare a global variable inside the local scope of a function so that it can be modified or used outside the function as well. To use global variables across modules, create a special configuration module and import the module into our main program. The module is available as a global name in our program. As each module has a single instance, any changes to the module object get reflected everywhere.
Problem: Given a function; how to use a global variable in it?
def foo():
    # Some syntax to declare the GLOBAL VARIABLE "x"
    x = 25  # Assigning the value to the global variable "x"

def func():
    # Accessing global variable defined in foo()
    y = x + 25
    print("x=", x, "y=", y)

foo()
func()
x= 25 y= 50
In the above example, we have been given a function named foo() which defines a global variable x such that the value of x can be used inside another function named func(). Let us have a quick look at how we can use the global keyword to resolve our problem.
Solution: Using The Global Keyword
We can use the global keyword as a prefix to any variable in order to make it global inside a local scope.
def foo():
    global x
    x = 25

def func():
    y = x + 25
    print("x=", x, "y=", y)

foo()
func()
x= 25 y= 50
Now that we already know our solution, we must go through some of the basic concepts required for a solid understanding of our solution. So, without further delay let us discuss them one by one.
Variable Scope In Python
The scope of a variable is the region or part of the program where the variable can be accessed directly. Let us discuss the different variable scopes available in Python.
❖ Local Scope
When a variable is created inside a function, it is only available within the scope of that function and ceases to exist if used outside the function. Thus the variable belongs to the local scope of the function.
def foo():
    scope = "local"
    print(scope)

foo()
❖ Enclosing Scope
An enclosing scope occurs when we have nested functions. When the variable is in the scope of the outside function, it means that the variable is in the enclosing scope of the function. Therefore, the variable is visible within the scope of the inner and outer functions.
def foo():
    scope = "enclosed"
    def func():
        print(scope)
    func()

foo()
In the above example, the variable scope is inside the enclosing scope of the function foo() and is available inside foo() as well as func().
❖ Global Scope
A global variable is a variable that is declared in a global scope and can be used across the entire program; that means it can be accessed inside as well outside the scope of a function. A global variable is generally declared outside functions, in the main body of the Python code.
name = "FINXTER"

def foo():
    print("Name inside foo() is ", name)

foo()
print("Name outside foo() is :", name)
Name inside foo() is FINXTER Name outside foo() is : FINXTER
In the above example, name is a global variable that can be accessed inside as well as outside the scope of the function foo(). Let's check what happens if you try to change the value of the global variable name inside the function.
name = "FINXTER"

def foo():
    name = name + "PYTHON"
    print("Name inside foo() is ", name)

foo()
Traceback (most recent call last):
  File "main.py", line 8, in <module>
    foo()
  File "main.py", line 4, in foo
    name = name + "PYTHON"
UnboundLocalError: local variable 'name' referenced before assignment
We get an UnboundLocalError in this case because Python treats name as a local variable inside foo(), and name is not defined inside foo(). If you want to learn more about the UnboundLocalError and how to resolve it, please read our blog tutorial here.
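The error goes away once foo() declares name as global, so the assignment targets the module-level variable:

```python
name = "FINXTER"

def foo():
    global name                  # use the module-level variable
    name = name + "PYTHON"
    print("Name inside foo() is ", name)

foo()
print("Name outside foo() is ", name)  # FINXTERPYTHON
```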
❖ Built-In Scope
The built-in scope is the widest scope available in Python and contains keywords, functions, exceptions, and other attributes that are built into Python. Names in the built-in scope are available all across the Python program. It is loaded automatically at the time of executing a Python program/script.
x = 25
print(id(x))
In the above example, we did not import any module to use the functions print() and id(). This is because both of them are in the built-in scope.
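We can verify this by importing the builtins module, which exposes everything in the built-in scope:

```python
import builtins

# print() and id() are the very same objects the builtins module exposes.
print(builtins.id is id)          # True
print(builtins.print is print)    # True
print("len" in dir(builtins))     # True
```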
Having discussed the variable scopes in Python, let us discuss a couple of very important keywords in Python in relation to the variable scopes.
Use Global Variables Inside A Function Using The global Keyword
We already read about the global scope, where we learned that every variable that is declared in the main body and outside any function in the Python code is global by default. However, if we have a situation where we need to declare a global variable inside a function, as in the problem statement of this article, then the global keyword comes to our rescue. We use the global keyword inside a function to make a variable global within the local scope. This means that the global keyword allows us to modify and use a variable outside the scope of the function within which it has been defined.
Now let us have a look at the following program to understand the usage of the global keyword.
def foo():
    global name
    name = "PYTHON!"
    print("Name inside foo() is ", name)

foo()
name = "FINXTER " + name
print("Name outside foo() is ", name)
Name inside foo() is PYTHON! Name outside foo() is FINXTER PYTHON!
In the above example, we have a global variable name declared inside the local scope of function foo(). We can access and modify this variable outside the scope of the function, as seen in the above example.
❃ POINTS TO REMEMBER
- A variable defined outside a function is global by default.
- To define a global variable inside a function, we use the global keyword.
- A variable inside a function without the global keyword is local by default.
- Using the global keyword for a variable that is already in the global scope, i.e., outside the function, has no effect on the variable.
Global Variables Across Modules
In order to share information across Python modules within the same piece of code, we need to create a special configuration module, known as config or cfg module. We have to import this module into our program. The module is then available as a global name in our program. Because each module has a single instance, any changes to the module object get reflected everywhere.
Let us have a look at the following example to understand how we can share global variables across modules.
Step 1: config.py file is used to store the global variables.
Step 2: modify.py file is used to change global variables.
Step 3: main.py file is used to apply and use the changed values of the global variable.
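A minimal sketch of the idea, simulating the three hypothetical files (config.py, modify.py, main.py) inside one script by registering a module object in sys.modules:

```python
import sys
import types

# --- config.py: holds the shared global variables ---
config = types.ModuleType("config")
config.counter = 0
config.name = "initial"
sys.modules["config"] = config       # now "import config" works anywhere

# --- modify.py: changes the global variables ---
import config as cfg
cfg.counter += 1
cfg.name = "changed"

# --- main.py: sees the changed values ---
import config as cfg_again           # same single module instance
print(cfg_again.counter, cfg_again.name)   # 1 changed
```

Because every import of config returns the same single instance, the change made in the "modify" step is visible everywhere.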
The nonlocal Keyword
The nonlocal keyword is useful when we have a nested function, i.e., functions having variables in the enclosing scope. In other words, if you want to change/modify a variable that is in the scope of the enclosing function (outer function), then you can use the nonlocal keyword.
def foo():
    a = 25
    print("Value of 'a' before calling func = ", a)
    def func():
        nonlocal a
        a = a + 20
        print("Value of 'a' inside func = ", a)
    func()
    print("Value of 'a' after exiting func = ", a)

foo()
Value of 'a' before calling func = 25 Value of 'a' inside func = 45 Value of 'a' after exiting func = 45
From the above example it is clear that if we change the value of the nonlocal variable inside the nested function, the value of the variable in the enclosing function also changes.
The key points that we learned in this article are:
- Variable Scopes:
- Local Scope
- Enclosing Scope
- Global Scope
- Built-in Scope
- Important Keywords: global and nonlocal
- How to use a global variable inside a function?
- How to use a global variable across modules?
Where to Go From Here?
Enough theory. Let’s get some practice!
Coders get paid six figures and more because they can solve problems more effectively using machine intelligence and automation.
To become more successful in coding, solve more real problems for real people. That’s how you polish the skills you really need in practice. After all, what’s the use of learning theory that nobody ever needs?
You build high-value coding skills by working on practical coding projects!
Do you want to stop learning with toy projects and focus on practical code projects that earn you money and solve real problems for people?
🚀 If your answer is YES!, consider becoming a Python freelance developer! It’s the best way of approaching the task of improving your Python skills—even if you are a complete beginner.
If you just want to learn about the freelancing opportunity, feel free to watch my free webinar “How to Build Your High-Income Skill Python” and learn how I grew my coding business online and how you can, too—from the comfort of your own home.
HTML and the Web
The HTML is a computer language designed for a Web browser to execute. It is not like the C, C++ or Java programming languages; rather, it is an interpreted or script language. To understand this language and its purpose, the reader has to be familiar with the Internet and the World Wide Web. The attempt here is to give the reader a picture of the Web, and from that picture the HTML language will be easy to understand and hopefully easy to use. There are a number of concepts and definitions that need to be covered or explained briefly so the reader will not be overwhelmed with terms that are not clear or familiar.
Any computer system is composed of hardware and software. The hardware is the physical machine and the software is made up of the programs and data. Programs are divided into systems and applications. System programs are divided into an operating system and services. For example, the old IBM PC has DOS as its operating system, which provides the control over the machine. DOS has a number of utilities, such as formatting diskettes and printing routines. There are other system programs that are not part of DOS that can be added to the PC; the most famous is Norton Utilities.
To understand the Internet in terms of hardware and software is a little more complicated than for PCs, but the Web, too, is composed of hardware and software. The difference between the Web and the PC is that the PC is a dedicated machine with a dedicated operating system and only one user, while the Web is a network of computers. This requires an understanding of what a network is, and from that, we can understand the Web.
A network is a collection of computers, terminals and other equipment that uses communication channels (such as phone lines, microwaves or satellites) to share data, information, hardware and software.
The Client/Server concept needs some clarification. A client is the receiver of a service and a server is the service provider. For example, a computer requesting data from a network is a client and the computer that is providing the data is the server. Does this mean a client and server can be toggled? The answer is yes, but there are dedicated servers with only one job, which is to provide service to clients. The same holds true for some clients: they have nothing to offer but requests. On any network, there is a layer of software that acts as a server for that network. The client can use this layer of software by using specific commands.
World Wide Web
To understand the Internet or the Web, we may need an analogy of something similar to the Web. Assume that you, the reader, want to send a letter to a company. The reader lives in Chicago and the company is located in Los Angeles. The reader can simply use a fax machine to dial up and fax the letter. This is a direct link from the reader to the company, where the reader (a client) dials up and connects to a network and uses its services. This direct dial-up may be costly if it is a long-distance call. In the case where the reader needs to send a number of large packages to several companies that are located in different parts of the country, dialing up is not feasible. The reader may find sending each package himself/herself directly to each company a costly and time-consuming task. The reader may find it cheaper, faster and more convenient to use one of the package delivery companies such as UPS. How the delivery company sends the package is not important to the reader as long as it gets there on time and in one piece. The delivery company can send it by air, sea, train, truck or car, or even hire another company to do it. If the package is supposed to be sent to Los Angeles and an airplane takes the package from Chicago to Japan and back to Los Angeles, the reader may not know, nor care.
The Internet or the Web is hundreds of thousands of networks (servers) that are connected to each other like a spider web. Looking at a spider web and how the spider weaves a large net of silk, the spider web is a fitting analogy for the Internet; the term World Wide Web (or the Web for short) may have come from the spider web analogy. The spider web's parts connect in several ways, and to get to any part of the web there exist numerous routes. If the reader can imagine that every silk intersection on the web is a package delivery company (site, server or a network), and all these companies cooperate to deliver packages, then using one of these companies is equal to using all of them. The Internet is a web of networks that are connected to each other and share a pool of data, information, software, and equipment. Each of the networks is an independent network with its own operating system, applications, data and hardware. This web of networks is similar to the web of package delivery companies, where they cooperate with each other to provide a service. A package may be handled by several companies before it reaches its destination. The same holds for the Internet: a client in Chicago may go through several networks to connect to a server in Los Angeles. For all these independent networks to cooperate, there are a number of issues that need to be addressed, as follows:
1. Connection and Communication
2. Address – TCP/IP
3. Domain names and Category
4. Services and Service Providers
7. Common Languages
A Web user or Web client can connect to and use any of the sites (servers) on the Web; the client can access the entire Web's services and hardware (if permissible) as if they were local to the client's network. The Web gives the client worldwide access to services and information. Web service providers are companies that provide Web access at a cost, reachable through a local phone call. The cost of using such services is very small compared to the actual cost if the user tried to do the same thing on his/her own.
Connection and Communication
The Internet communicates using a number of protocols. Its sites are connected through phone lines, microwaves, satellites or bridges. A bridge is a combination of hardware and software that connect two similar type of networks.
A protocol is a set of rules and procedures for exchanging information between computers regardless of their make and operating system. Communication software is designed to work with one or more protocols. Protocols define the following:
1. How the communication link is established
2. How information is transmitted
3. How errors are detected and handled
For example, an IBM PC and an Apple computer have different hardware and software that are not compatible, and their software is not portable, but they can communicate using protocols.
The Internet communicates using a family of protocols known as the Transmission Control Protocol/Internet Protocol (TCP/IP). TCP/IP is used to connect any machine on the Internet to another, and sends packets (especially formatted data) from one to the other.
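From a program's point of view, a TCP/IP connection is simply two endpoints exchanging bytes; the protocol stack handles the packet details. A loopback sketch using Python's socket module (socketpair() stands in for a real client and server on different machines):

```python
import socket

# socketpair() gives two already-connected endpoints, standing in for a
# client and a server that would normally be on different machines.
client, server = socket.socketpair()

client.sendall(b"hello")          # client transmits a packet of bytes
request = server.recv(1024)       # server receives it...
server.sendall(request.upper())   # ...and sends back a reply
reply = client.recv(1024)
print(reply)                      # b'HELLO'

client.close()
server.close()
```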
The Web is hundreds of thousands of networks (servers) that are connected to each other, and these networks need some kind of labeling or addressing. The postal address of a house consists of the state, the city, the street name and the house number. The state is the most distinguishing item, since you can have two cities with the same name, like Springfield, Illinois and Springfield, Missouri. The same can be applied to the street name and number. So we can use the state as the Domain of the address.
Domain Names and Category
What is the Internet Domain Name?
The Web is composed of servers and every Web server has an address, which is a numeric number called “Internet Protocol (IP)”. An IP number is composed of four numbers separated by a period, where each number is between 0 and 255. For example, an IP number can be one of the following numbers:
Computers work with numbers, but humans have problems remembering numbers. To make life easy, each IP or Web address is given a name that is unique. For example the above three IP numbers can be named as follows:
These names are called Domain names which are the text alternative to the IP numbers. The Domain name or the IP number can be used interchangeably without any problems.
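Python's standard socket module can perform this name-to-number lookup; "localhost" is used here so the example needs no network access, but any Domain name works the same way:

```python
import socket

ip = socket.gethostbyname("localhost")   # resolve a Domain name to an IP
print(ip)                                # typically 127.0.0.1

# An IPv4 address is four dot-separated fields, each between 0 and 255.
fields = ip.split(".")
print(len(fields) == 4 and all(0 <= int(f) <= 255 for f in fields))  # True
```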
The Domain name or IP number is equivalent to the state, but once the state is known, then the city name, street name and house number is needed to get some place. The URL (Uniform Resource Locator) is what is needed to get to a specific site or an Internet resource.
Uniform Resource Locator (URL)
URL (Uniform Resource Locator) is the actual address or the location of a Web page, directory, path or any resource on the Web. For example, the following two URLs are the JavaSoft home pages.
The URL is made up of the Domain name, followed by “/” then the directory or the path. It may end with the name of a file or a resource. The file can be “.HTML” file, a Java applet or a Web page. The URL is composed of the following:
1. Server - http
2. Host - Domain name = java.sun.com
3. Port number - the default for http is 80 and it does not have to be listed
4. Resource path - directory plus a file name of the resource that would be accessed.
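These four parts can be pulled out with urllib.parse from Python's standard library (the path /products/index.html is a made-up example, not a real JavaSoft resource):

```python
from urllib.parse import urlparse

url = "http://java.sun.com:80/products/index.html"
parts = urlparse(url)

print(parts.scheme)    # http                  -> the server/protocol
print(parts.hostname)  # java.sun.com          -> the host (Domain name)
print(parts.port)      # 80                    -> the port number
print(parts.path)      # /products/index.html  -> the resource path
```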
Domain names end with a dot (".") and a two- or three-character extension, similar to file name extensions. The Web is organized into functional groups, such as education, government and so on. The Web also includes different countries. The Web has used a two- or three-character code to distinguish the different groups. The following is the organization code:
1. COM - for commercial
2. EDU - for education
3. GOV - for government
4. INT - for international or Internet?
5. MIL - for military
6. NET - for network
7. ORG - for Organization
Countries have similar code as the organizations. For example, “CA” is for Canada and “UK” is for United Kingdom (England). United States code is “US”, but it is the default since the Internet began here in the US.
What is the Domain name composed of?
The Domain name is right justified, which means the name actually starts from the right and ends at the left side of the name. For example, "java.sun.com" is a commercial company named Sun that has Java as a Domain name. The proper or complete Domain name should be "java.sun.com.us", where the country is the US, a commercial company named Sun that has Java as a Domain name.
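Reading a Domain name from the right can be shown with a simple split on the dots:

```python
domain = "java.sun.com"
labels = domain.split(".")

print(labels[-1])  # com  -> the most general part (organization code)
print(labels[1])   # sun  -> the company
print(labels[0])   # java -> the most specific part
```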
Services and Service Providers
The original objective of the Internet was to provide communication and services for government, education and research institutes. The US government funded the Internet's cost. Now, the Web's main objective is to provide communication and services for businesses, government, education and so on. The Web's services are commercial, and its funding is provided by the networks that make up the Web. These networks are independent companies, governments or educational institutes. They provide Web services for a fee or free to their members. These networks are called service providers. For example, a student enrolled in a state university may have a free Web or Internet account. A company like America On-line provides its customers with Internet access and a Web page for a monthly or yearly fee.
Every network has all the software and hardware needed to communicate with the rest of the Web and its customers or members. It may be called the service provider, host or server, but it is actually a service provider. Note that the software that the service provider has or owns, which does all the communication and services, is called the server. This means the server is the network communication software and hardware plus all the utilities that the network has. This also means a service provider may have more than one server on the Web. For example, a company or a university may have one or more of the following servers on the Web:
Each of these servers may use a different communication protocol. For example, an HTTP server uses the TCP/IP family of protocols for communication.
The server is the software that service providers use to communicate and provide services. The server can provide a number of services, which may be included in the following:
1. Web Page – Home Page
4. Common Gateway Interface
5. Web Site – Virtual host
6. Search Services
7. Communication Channels
9. File Services
A group of commands stored in a file is called a script. A script is executed by a command interpreter. For example, a DOS batch file can be considered a DOS script. A Unix shell script is another type of script, where the Unix shell is the command interpreter that executes it.
A Web page is a text file containing a number of commands to be executed by a program called a “Web browser”. It is basically a script to be executed by an interpreter. The Web page commands are HTML commands, known as HTML tags. These commands are used to create a Web page with messages, banners, logos, buttons and graphic images, as well as to call Java applets or run programs (CGI).
A home page is the starting Web page for a server, or the first page the browser starts with. Every server, Web site or group of pages has a home page, and a Web user can set any Web page as a home page. For example, JavaSoft has a home page for Web users to link to and access information about Java and Java’s latest changes.
Web Site/ Web Host
A Web site is a group of related Web pages sharing a common subject or theme. A Web server is the “Web host”, which may host many Web sites. For example, the AT&T service provider may have a server with over 200 Web sites, ranging from software developer groups to ant or bug collectors.
Virtual host/Presence Provider
A Web site can be set up on a server with its own unique domain name, so that it appears to the outside world as if it were a unique server. This type of Web site is called a virtual host. For example, a mid-size insurance company named Banana Insurance may pay the AT&T service provider for a Web site and set it up with the domain name “Banana.COM”. This Web site would look to the outside world like a Web server belonging to the Banana Insurance Company. AT&T would also be called a “presence provider”, since it provides the space for the Banana Insurance Company.
A Web page is a text file with HTML tags (commands) to be interpreted by an interpreter. A Web browser is a program (interpreter) used to execute Web pages. The two most well-known Web browsers, used by the vast majority, are Netscape Navigator and Microsoft Internet Explorer. A Web browser is sometimes called a "user agent", and it is located on the user's machine. It works by using a special protocol called HTTP to request a specially encoded text document from a Web server. The text document contains special instructions (written in HTML) that tell the browser how to display the document on the user's screen.
A search engine is a Web service that helps Web users find information about any topic. For example, a Web user can search for “travel” and a search engine such as Yahoo can provide a listing of Web pages about travel.
Search engines basically work in three steps. The first is to visit Web sites, read every Web page, and use the information the pages provide to categorize them. The second step is to index these Web pages, which speeds up the last step: searching the engine's own index listing to find a match for what the search engine user is seeking.
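The three steps above can be sketched in a few lines of Python as a toy inverted index; the page names and contents below are invented for illustration:

```python
# Toy sketch of the three search-engine steps: "visit" pages (here a
# hardcoded dictionary), index them, then search the index for a match.

pages = {
    "travel-tips.html": "budget travel tips for europe",
    "java-intro.html": "an introduction to the java language",
    "cheap-flights.html": "find cheap flights and travel deals",
}

# Steps 1 and 2: read every page and build an inverted index (word -> pages).
index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

# Step 3: search the index for pages matching the user's term.
def search(term):
    return sorted(index.get(term, set()))

print(search("travel"))  # the two travel-related pages
```

Real engines add crawling, ranking and stemming on top of this skeleton, but the crawl/index/search division is the same.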
Web pages have given the Web the ability to let every user express or display individuality. For example, companies can display their products and let users buy them. Individuals can have their own Web page and share their ideas with others.
An interface is a connection between parts of the computer hardware or the software that handles the interaction between the user and an application.
A gateway is a combination of hardware and software that allows user on one network to access the resources of a different type of network.
Common Gateway Interface (CGI)
CGI is an interface that helps execute external programs. It defines how information can be exchanged between the Web server and the external programs namely CGI programs. CGI programs are usually written in interpreted languages such as Unix shell script or PERL. CGI programs can also be written in C. CGI programs have performance problems since they run in a separate process from the server, plus they also require significant start-up time.
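As a sketch of the exchange CGI defines, the toy Python program below reads a form value from the QUERY_STRING environment variable (one of the variables a Web server sets for a CGI program) and returns an HTTP header plus an HTML body; the parameter name and values are invented for illustration:

```python
# Minimal sketch of a CGI-style program. The server runs the program
# with request data in environment variables; whatever the program
# emits (headers, blank line, body) is sent back to the browser.
from urllib.parse import parse_qs

def handle_request(environ):
    # Parse e.g. "name=Ann&x=1" into {"name": ["Ann"], "x": ["1"]}
    params = parse_qs(environ.get("QUERY_STRING", ""))
    name = params.get("name", ["world"])[0]
    body = f"<html><body>Hello, {name}!</body></html>"
    # Header, blank line, then the document body
    return "Content-Type: text/html\r\n\r\n" + body

# Simulate the server invoking the program with ?name=Ann
print(handle_request({"QUERY_STRING": "name=Ann"}))
```

The per-request process start-up visible here (one interpreter run per request) is exactly the performance cost the paragraph above mentions.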
With all the Web's complexity, its servers must have some kind of common language or languages. Any browser must understand Web pages and know where to find things, and CGI must be able to run external programs. The main issue here is the Web's languages. The Web has two main languages, Java and HTML.
One of the Web's features is the ability to move between Web pages and other sites with a click of a mouse. Documents or Web pages may have highlighted text that the user clicks to get to a different document or Web page. This ability to jump between pages and sites is made possible by what is known as “hypertext”.
Hypertext is text that is not constrained to be linear and that contains links to other text. The links are jump points (addresses, i.e. URLs) used to jump to other pages or to another part of the same page. A hypertext link is also known as an anchor.
Hypermedia is similar to hypertext, but includes media other than text, e.g. a hypermedia document could include text and graphics, or sound and animation.
Security here means protecting against unauthenticated interactive logins from the "outside" world. This helps prevent vandals from logging into machines on your network.
A firewall can be defined as any of the following:
A firewall is a set of related programs, located at a network gateway of a server, that protects the resources of a private network from users and other networks.
It is a system that restricts access from machines that are not on the company's internal network.
It is also a computer that filters traffic going into and out of the corporate network.
An organization installs a firewall to prevent outsiders from accessing its own private data resources and to control which outside resources its own users have access to.
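The "filters traffic" definition above can be sketched as a toy packet filter that checks each packet against an ordered rule list; the addresses, ports and rules below are invented for illustration:

```python
# Toy packet filter: first matching rule wins; default is deny.
RULES = [
    # (source prefix, dest port, action); None means "any"
    ("10.0.",    None, "allow"),   # internal machines: allow everything
    (None,         80, "allow"),   # anyone may reach the web server
    (None,       None, "deny"),    # default: deny
]

def filter_packet(src_ip, dst_port):
    for prefix, port, action in RULES:
        if prefix is not None and not src_ip.startswith(prefix):
            continue
        if port is not None and dst_port != port:
            continue
        return action
    return "deny"

print(filter_packet("10.0.3.7", 22))     # allow (internal source)
print(filter_packet("203.0.113.5", 80))  # allow (web traffic)
print(filter_packet("203.0.113.5", 22))  # deny (outside login attempt)
```

Real firewalls match on far more fields (protocol, direction, connection state), but the ordered-rules-with-default-deny structure is the core idea.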
Below we discuss three types of formulas related to ‘distance between’, as found in the coordinate geometry topic of mathematics.
The three formulas related to ‘Distance between’ in coordinate geometry are as follows:
1. Distance between two points:
Consider two points A and B in the rectangular coordinate system formed by the x-axis and the y-axis.
Let the coordinates of the point:
A be (x1, y1) and those of point B be (x2, y2)
The formula for finding the distance between two points A(x1, y1) and B(x2, y2) is:
d = √((x2 − x1)² + (y2 − y1)²)
What is the distance between the two points A(2, 3) and B(4, 5)?
From the above formula, the distance between the two points A(2, 3) and B(4, 5) is
√((4 − 2)² + (5 − 3)²) = √(4 + 4) = √8 = 2√2 ≈ 2.83 units.
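A quick check of this example in Python, using the standard two-point distance formula:

```python
import math

def distance(p, q):
    """Distance between two points p = (x1, y1) and q = (x2, y2)."""
    return math.sqrt((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

# The worked example: A(2, 3) and B(4, 5)
print(distance((2, 3), (4, 5)))  # sqrt(8) = 2*sqrt(2) ≈ 2.828
```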
2. Distance between two straight lines
Consider two straight lines L and M in the Cartesian coordinate plane, which are parallel to each other.
Let the two parallel straight lines L and M be such that
the equation of line L is: ax + by + c = 0, and
the equation of line M is: ax + by + d = 0.
Then the formula for the distance between the two parallel lines L and M is
|c − d| / √(a² + b²)
In the equations of the two parallel lines L and M:
ax + by + c = 0 and ax + by + d = 0
difference of constant terms of the two equations is c – d, and
coefficient of x = a, and coefficient of y = b.
Now, let us solve a question on distance between two straight lines.
Find the distance between the two straight lines L and M, which are parallel to each other, such that the equation of
L is 3x + 4y + 5 = 0 and that of M is 3x + 4y + 6 = 0
In the equations of the two parallel lines L and M,
Difference of the constant terms = 6 – 5 = 1,
And in both of the equations,
Coefficient of x, i.e. a = 3, and coefficient of y, i.e. b = 4.
So, a² + b² = 3² + 4² = 9 + 16 = 25
Therefore, distance between the two straight lines L and M is
1/√25 = 1/5 units.
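The same computation as a Python sketch, using the parallel-line distance formula |c − d| / √(a² + b²):

```python
import math

def parallel_line_distance(a, b, c, d):
    """Distance between parallel lines ax + by + c = 0 and ax + by + d = 0."""
    return abs(c - d) / math.sqrt(a ** 2 + b ** 2)

# The worked example: 3x + 4y + 5 = 0 and 3x + 4y + 6 = 0
print(parallel_line_distance(3, 4, 5, 6))  # |5 - 6| / 5 = 0.2
```

Note the function assumes both lines share the same a and b coefficients, i.e. that they really are parallel.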
3. Distance between a point and a straight line
Consider a straight line L whose equation is ax + by + c = 0
Let there be a point P with coordinates (x1, y1).
Now, the distance of a point P(x1, y1) from a straight line L is the length of the line segment drawn perpendicular from the point P to the straight line L.
And the formula to find the perpendicular distance between a straight line L: ax + by + c = 0 and a point P(x1, y1) is
|ax1 + by1 + c| / √(a² + b²)
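A short Python sketch of this, assuming the standard point-to-line formula |ax1 + by1 + c| / √(a² + b²); the example line and point below are chosen for illustration, not taken from the text:

```python
import math

def point_line_distance(a, b, c, x1, y1):
    """Perpendicular distance from P(x1, y1) to the line ax + by + c = 0."""
    return abs(a * x1 + b * y1 + c) / math.sqrt(a ** 2 + b ** 2)

# Hypothetical example: line 3x + 4y + 5 = 0 and point (1, 1)
print(point_line_distance(3, 4, 5, 1, 1))  # |3 + 4 + 5| / 5 = 2.4
```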
Microwave transmission is the transmission of information or energy by electromagnetic waves whose wavelengths are measured in small numbers of centimetres; these are called microwaves. This part of the radio spectrum ranges across frequencies of roughly 1.0 gigahertz (GHz) to 30 GHz, corresponding to wavelengths from 30 centimetres down to 1.0 cm.
Microwaves are widely used for point-to-point communications because their small wavelength allows conveniently-sized antennas to direct them in narrow beams, which can be pointed directly at the receiving antenna. This allows nearby microwave equipment to use the same frequencies without interfering with each other, as lower frequency radio waves do. Another advantage is that the high frequency of microwaves gives the microwave band a very large information-carrying capacity; the microwave band has a bandwidth 30 times that of all the rest of the radio spectrum below it. A disadvantage is that microwaves are limited to line of sight propagation; they cannot pass around hills or mountains as lower frequency radio waves can.
Microwave radio transmission is commonly used in point-to-point communication systems on the surface of the Earth, in satellite communications, and in deep space radio communications. Other parts of the microwave radio band are used for radars, radio navigation systems, sensor systems, and radio astronomy.
The next higher part of the radio spectrum, where frequencies are above 30 GHz and below 100 GHz, is called "millimeter waves" because the wavelengths are conveniently measured in millimeters, ranging from 10 mm down to 3.0 mm. Radio waves in this band are usually strongly attenuated by the Earth's atmosphere and the particles it contains, especially during wet weather. Also, in a wide band of frequencies around 60 GHz, the radio waves are strongly attenuated by molecular oxygen in the atmosphere. The electronic technologies needed in the millimeter wave band are also much more difficult to work with than those of the microwave band.
Wireless transmission of information
- One-way (e.g. television broadcasting) and two-way telecommunication using communications satellite
- Terrestrial microwave relay links in telecommunications networks including backbone or backhaul carriers in cellular networks linking BTS-BSC and BSC-MSC.
Wireless transmission of power
- Proposed systems e.g. for connecting solar power collecting satellites to terrestrial power grids
Microwave radio relay
Microwave radio relay is a technology for transmitting digital and analog signals, such as long-distance telephone calls, television programs, and computer data, between two locations on a line of sight radio path. In microwave radio relay, microwaves are transmitted between the two locations with directional antennas, forming a fixed radio connection between the two points. The requirement of a line of sight limits the distance between stations to 30 or 40 miles.
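The 30-to-40-mile limit quoted above follows from the radio horizon. A rough Python sketch (the tower heights are assumed for illustration; the 4/3 effective-Earth-radius factor described later in this article accounts for typical atmospheric refraction):

```python
import math

def radio_horizon_km(h_m, k=4 / 3):
    """Approximate radio horizon for an antenna h_m meters high,
    using an effective Earth radius k times the true ~6371 km."""
    R = 6371e3 * k  # effective Earth radius in meters
    return math.sqrt(2 * R * h_m) / 1000

# Maximum path between two 60 m towers: sum of the two horizons
d = radio_horizon_km(60) + radio_horizon_km(60)
print(round(d, 1), "km")  # roughly 64 km, i.e. about 40 miles
```

Higher towers or mountaintop sites stretch the hop length, which is why relay stations were so often placed on peaks and ridges.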
Beginning in the 1940s, networks of microwave relay links, such as the AT&T Long Lines system in the U.S., carried long distance telephone calls and television programs between cities. The first system, dubbed TDX and built by AT&T, connected New York and Boston in 1947 with a series of eight radio relay stations. Later networks included long daisy-chained series of such links that traversed mountain ranges and spanned continents. Much of the transcontinental traffic is now carried by cheaper optical fibers and communication satellites, but microwave relay remains important for shorter distances.
Because the radio waves travel in narrow beams confined to a line-of-sight path from one antenna to the other, they don't interfere with other microwave equipment, and nearby microwave links can use the same frequencies. Antennas used must be highly directional (High gain); these antennas are installed in elevated locations such as large radio towers in order to be able to transmit across long distances. Typical types of antenna used in radio relay link installations are parabolic antennas, dielectric lens, and horn-reflector antennas, which have a diameter of up to 4 meters. Highly directive antennas permit an economical use of the available frequency spectrum, despite long transmission distances.
Because of the high frequencies used, a quasi-optical line of sight between the stations is generally required. Additionally, in order to form the line of sight connection between the two stations, the first Fresnel zone must be free from obstacles so the radio waves can propagate across a nearly uninterrupted path. Obstacles in the signal field cause unwanted attenuation, and are as a result only acceptable in exceptional cases. High mountain peak or ridge positions are often ideal: Europe's highest radio relay station, the Richtfunkstation Jungfraujoch, is situated atop the Jungfraujoch ridge at an altitude of 3,705 meters (12,156 ft) above sea level.
Obstacles, the curvature of the Earth, the geography of the area and reception issues arising from the use of nearby land (such as in manufacturing and forestry) are important issues to consider when planning radio links. In the planning process, it is essential that "path profiles" are produced, which provide information about the terrain and Fresnel zones affecting the transmission path. The presence of a water surface, such as a lake or river, in the mid-path region also must be taken into consideration as it can result in a near-perfect reflection (even modulated by wave or tide motions), creating multipath distortion as the two received signals ("wanted" and "unwanted") swing in and out of phase. Multipath fades are usually deep only in a small spot and a narrow frequency band, so space and/or frequency diversity schemes would be applied to mitigate these effects.
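The first Fresnel zone clearance mentioned above can be estimated with the standard formula r = √(λ · d1 · d2 / (d1 + d2)), where d1 and d2 are the distances from the point being checked to the two antennas. The link frequency and path length below are assumed values for illustration:

```python
import math

C = 3e8  # speed of light, m/s

def fresnel1_radius(f_hz, d1_m, d2_m):
    """Radius of the first Fresnel zone at a point d1_m from one
    antenna and d2_m from the other, for frequency f_hz."""
    lam = C / f_hz  # wavelength in meters
    return math.sqrt(lam * d1_m * d2_m / (d1_m + d2_m))

# Hypothetical 6 GHz link over a 40 km path, clearance at the midpoint
r = fresnel1_radius(6e9, 20_000, 20_000)
print(round(r, 1), "m")  # about 22.4 m of required clearance
```

The zone is widest at mid-path, which is why mid-path obstacles and reflecting water surfaces matter most in link planning.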
The effects of atmospheric stratification cause the radio path to bend downward in a typical situation so a major distance is possible as the earth equivalent curvature increases from 6370 km to about 8500 km (a 4/3 equivalent radius effect). Rare events of temperature, humidity and pressure profile versus height, may produce large deviations and distortion of the propagation and affect transmission quality. High intensity rain and snow must also be considered as an impairment factor, especially at frequencies above 10 GHz. All previous factors, collectively known as path loss, make it necessary to compute suitable power margins, in order to maintain the link operative for a high percentage of time, like the standard 99.99% or 99.999% used in 'carrier class' services of most telecommunication operators.
The longest microwave radio relay known to date crosses the Red Sea, with a 360 km hop between Jebel Erba (2170 m a.s.l., 20°44'46.17"N 36°50'24.65"E, Sudan) and Jebel Dakka (2572 m a.s.l., 21°5'36.89"N 40°17'29.80"E, Saudi Arabia). The link, built in 1979 by Telettra, carried 300 telephone channels and one TV signal in the 2 GHz frequency band. (Hop distance is the distance between two microwave stations.)
In 1931 an Anglo-French consortium headed by Andre C. Clavier demonstrated an experimental microwave relay link across the English Channel using 10 foot (3 m) dishes. Telephony, telegraph and facsimile data was transmitted over the bidirectional 1.7 GHz beams 64 km (40 miles) between Dover, UK and Calais, France. The radiated power, produced by a miniature Barkhausen-Kurz tube located at the dish's focus, was one-half watt. A 1933 military microwave link between airports at St. Inglevert, UK and Lympne, France, a distance of 56 km (35 miles) was followed in 1935 by a 300 MHz telecommunication link, the first commercial microwave relay system.
The development of radar during World War II provided much of the microwave technology which made practical microwave communication links possible, particularly the klystron oscillator and techniques of designing parabolic antennas.
During the 1950s the AT&T Long Lines system of microwave relay links grew to carry the majority of US long distance telephone traffic, as well as intercontinental television network signals. The prototype was called TDX and was tested with a connection between New York City and Murray Hill, the location of Bell Laboratories in 1946. The TDX system was set up between New York and Boston in 1947. The TDX was improved to the TD2, which still used klystron tubes in the transmitters, and then later to the TD3 that used solid state electronics. The main motivation in 1946 to use microwave radio instead of cable was that a large capacity could be installed quickly and at less cost. It was expected at that time that the annual operating costs for microwave radio would be greater than for cable. There were two main reasons that a large capacity had to be introduced suddenly: Pent up demand for long distance telephone service, because of the hiatus during the war years, and the new medium of television, which needed more bandwidth than radio.
Though not commonly known, the US military used both portable and fixed-station microwave communications in the European Theater during WWII. Starting in the late 1940s, this continued to some degree into the 1960s, when many of these links were supplanted with tropospheric scatter or satellite systems. When the NATO military arm was formed, much of this existing equipment was transferred to communications groups. The typical communications systems used by NATO during that time period consisted of the technologies which had been developed for use by the telephone carrier entities in host countries. One example from the USA is the RCA CW-20A 1–2 GHz microwave relay system which utilized flexible UHF cable rather than the rigid waveguide required by higher frequency systems, making it ideal for tactical applications. The typical microwave relay installation or portable van had two radio systems (plus backup) connecting two LOS sites. These radios would often provide communication for 24 telephone channels of frequency division multiplexed signal (i.e. Lenkurt 33C FDM), though any channel could be designated to carry up to 18 teletype communications instead. Similar systems from Germany and other member nations were also in use.
Similar systems were soon built in many countries, until the 1980s when the technology lost its share of fixed operation to newer technologies such as fiber-optic cable and communication satellites, which offer lower cost per bit.
During the Cold War, the US intelligence agencies, such as the National Security Agency (NSA), were reportedly able to intercept Soviet microwave traffic using satellites such as Rhyolite. Much of the beam of a microwave link passes the receiving antenna and radiates toward the horizon, into space. By positioning a geosynchronous satellite in the path of the beam, the microwave beam can be received.
Since the turn of the century, microwave radio relay systems have been used increasingly in portable radio applications. The technology is particularly suited to this application because of lower operating costs, a more efficient infrastructure, and provision of direct hardware access to the portable radio operator.
A microwave link is a communications system that uses a beam of radio waves in the microwave frequency range to transmit video, audio, or data between two locations, which can be from just a few feet or meters to several miles or kilometers apart. Microwave links are commonly used by television broadcasters to transmit programmes across a country, for instance, or from an outside broadcast back to a studio.
Mobile units can be camera mounted, allowing cameras the freedom to move around without trailing cables. These are often seen on the touchlines of sports fields on Steadicam systems.
- Involve line of sight (LOS) communication technology
- Affected greatly by environmental constraints, including rain fade
- Have very limited penetration capabilities through obstacles such as hills, buildings and trees
- Sensitive to high pollen count
- Signals can be degraded during solar proton events
- In communications between satellites and base stations
- As backbone carriers for cellular systems
- In short range indoor communications
- Telecommunications, in linking remote and regional telephone exchanges to larger (main) exchanges without the need for copper/optical fibre lines.
Terrestrial microwave relay links described above are limited in distance to the visual horizon, about 40 miles. Tropospheric scatter ("troposcatter" or "scatter") was a technology developed in the 1950s to allow microwave communication links beyond the horizon, to a range of several hundred kilometers. The transmitter radiates a beam of microwaves into the sky, at a shallow angle above the horizon toward the receiver. As the beam passes through the troposphere, a small fraction of the microwave energy is scattered back toward the ground by water vapor and dust in the air. A sensitive receiver beyond the horizon picks up this reflected signal. Signal clarity obtained by this method depends on the weather and other factors, and as a result a high level of technical difficulty is involved in creating a reliable over-the-horizon radio relay link. Troposcatter links are therefore only used in special circumstances where satellites and other long-distance communication channels cannot be relied on, such as in military communications.
Microwave power transmission
Microwave power transmission (MPT) is the use of microwaves to transmit power through outer space or the atmosphere without the need for wires. It is a sub-type of the more general wireless energy transfer methods.
Following World War II, which saw the development of high-power microwave emitters known as cavity magnetrons, the idea of using microwaves to transmit power was researched. In 1964, William C. Brown demonstrated a miniature helicopter equipped with a combination antenna and rectifier device called a rectenna. The rectenna converted microwave power into electricity, allowing the helicopter to fly. In principle, the rectenna is capable of very high conversion efficiencies - over 90% in optimal circumstances.
Most proposed MPT systems now usually include a phased array microwave transmitter. While these have lower efficiency levels they have the advantage of being electrically steered using no moving parts, and are easier to scale to the necessary levels that a practical MPT system requires.
Common safety concerns
The common reaction to microwave transmission is one of concern, as microwaves are generally perceived by the public as dangerous forms of radiation - stemming from the fact that they are used in microwave ovens. While high power microwaves can be painful and dangerous as in the United States Military's Active Denial System, MPT systems are generally proposed to have only low intensity at the rectenna.
Though this would be extremely safe as the power levels would be about equal to the leakage from a microwave oven, and only slightly more than a cell phone, the relatively diffuse microwave beam necessitates a large receiving antenna area for a significant amount of energy to be transmitted.
Research has involved exposing multiple generations of animals to microwave radiation of this or higher intensity, and no health issues have been found.
MPT is the most commonly proposed method for transferring energy to the surface of the Earth from solar power satellites or other in-orbit power sources. MPT is occasionally proposed for the power supply in beam-powered propulsion for orbital lift space ships. Even though lasers are more commonly proposed, their low efficiency in light generation and reception has led some designers to opt for microwave based systems.
Wireless Power Transmission (using microwaves) is well proven. Experiments in the tens of kilowatts have been performed at Goldstone in California in 1975 and more recently (1997) at Grand Bassin on Reunion Island. In 2008 a long range transmission experiment successfully transmitted 20 watts 92 miles (148 km) from a mountain on Maui to the main island of Hawaii.
- Wireless energy transfer
- Fresnel zone
- Passive repeater
- Radio repeater
- Transmitter station
- Path loss
- British Telecom microwave network
- Trans-Canada Microwave
- Antenna array (electromagnetic)
- Microwave Radio Transmission Design Guide, Trevor Manning, Artech House, 1999
- RF / Microwave Design at Oxford University
- William C. Brown's Distinguished Career
- AT&T's Microwave Radio-Relay Skyway introduced in 1951
- Bell System 1951 magazine ad for Microwave Radio-Relay systems.
- RCA vintage magazine ad for Microwave-Radio Relay equipment used for Western Union Telegraph Co.
- Digital Microwave Radio
- AT&T Long Lines Microwave Towers Remembered
- AT&T Long Lines
- Western Union Microwave Network History
- Trevor Manning's course 'Microwave Radio for Next Generation Networks' at Oxford University
- An article about how a microwave link is planned and how it works
- IEEE Global History Network Microwave Link Networks
- UK Microwave Radio Case Study
- Microwave Transmission Technology Articles
Monetary policy is the area of economic policy dealing with how much money is available in the economy and how easy it is to borrow. Economists study the effects of monetary policy on growth, employment and inflation, and central banks like the U.S. Federal Reserve use monetary policy to try to steer the economy in a good direction. Monetary policy is separate from fiscal policy, which deals with how the government itself spends money.
The Impact of Monetary Policy
As the name implies, monetary policy deals with the role and availability of money in the economy. In most modern economies, monetary policy is set by an organization called a central bank, which has the power to shape interest rates and policies that spur banks to lend more or less money. In the United States, the central bank is known as the Federal Reserve.
Depending on economic circumstances, central banks might adopt what is called an expansionary monetary policy, aimed at spurring economic growth, or a contractionary monetary policy, aimed at limiting growth to restrain inflation, or rising prices. Central banks generally also seek to maximize employment for the well-being of their jurisdictions.
While economists and political scientists generally agree that monetary policy has the power to shape the economy, they can be divided on exactly the right policy for any situation. Some of that is due to the fact that it's difficult to predict where the economy is headed at any given moment, and some is due to differing theories about the effects of the money supply and ease of borrowing on the overall economy.
Many countries, including the United States, try to keep central banks somewhat independent so that they don't manipulate the economy to woo voters before elections.
Monetary Policy Tools
Central banks like the Federal Reserve, often called the Fed, use a variety of tools to shape monetary policy. They can buy securities such as bonds from ordinary banks to add money to the economy, or sell them to the banks to remove money from the economy. Adding money to the economy usually effectively lowers interest rates, causing money to be more available for business expansion and consumer spending and spurring economic growth.
Additionally, central banks can set the rates at which they offer short-term loans to banks, shaping interest rates overall. They also usually have some regulatory authority over banks and can allow them to lend out a higher percentage of their total holdings, or take other actions that cause them to exercise more caution or lend out more money.
Monetary and Fiscal Policy
The area of economic policy dealing with government spending is called fiscal policy. In most countries, this is set separately from monetary policy by the legislative and executive branches, which allocate and spend funds at various agencies.
The two areas of policy can work hand in hand, with the government spending more money to introduce money into the economy when it appears weak and cutting back on spending when the economy appears stronger. Governments may also reduce taxes when the economy appears weaker to give consumers and businesses more money to spend.
As with monetary policy, economists and political scientists differ about exactly the right fiscal policy approaches to take in different economic situations. There are also complexities, since particular spending programs can be popular with voters and politicians due to their impacts beyond simple economic stimulation, so they can become harder to adjust in response to economic conditions.
Different economists can disagree on the advantages and disadvantages of fiscal and monetary policy in different situations, but most will agree that both are important tools in helping to shape the economy.
- Investopedia: Monetary Policy
- Federal Reserve: Monetary Policy
- Investopedia: Monetary Policy vs. Fiscal Policy: What's the Difference?
- Federal Reserve Bank of St. Louis: Here’s the Difference between Fiscal Policy and Monetary Policy
- Center on Budget and Policy Priorities: The Critical Importance of an Independent Central Bank
Calculate linear regression, test it and plot it
Linear regression is a common statistical method to quantify the relationship of two quantitative variables, where one can be considered as dependent on the other. In this example, let's 1) calculate linear regression using example data and fit the regression equation, 2) predict fitted values, 3) calculate explained variation (coefficient of determination, r2), and test the statistical significance, and 4) plot the regression line onto the scatterplot.
Data we will use here are in the dataset cars, with two variables: dist (the distance the car travels after the driver pressed the brake) and speed (the speed of the car at the moment when the brake was pressed).
  speed dist
1     4    2
2     4   10
3     7    4
4     7   22
5     8   16
6     9   10
dist is the dependent variable and speed is the independent one (increasing speed is likely to increase the braking distance, not the other way around). We can draw the relationship using the plot function, with variables defined via the formula interface: dependent_variable ~ independent_variable, data = data.frame:
To calculate the linear regression model, we use the function lm (linear model), which uses the same formula interface as the plot function above:

Call:
lm(formula = dist ~ speed, data = cars)

Coefficients:
(Intercept)        speed
    -17.579        3.932
The function lm returns estimated regression coefficients, namely the intercept and the slope of the regression. Using these coefficients, we can write the equation for calculating fitted values:

dist_predicted = -17.579 + 3.932 * speed
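The coefficients that lm estimates come from the standard least-squares formulas: slope = Sxy / Sxx and intercept = mean(y) - slope * mean(x). As a language-neutral illustration (the post itself uses R), here is a minimal Python sketch; the function name ols is mine, and the sanity-check data are generated from the fitted equation above:

```python
def ols(x, y):
    # Closed-form simple linear regression:
    # slope = Sxy / Sxx, intercept = ybar - slope * xbar
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    return intercept, slope

# Points generated exactly from dist_predicted = -17.579 + 3.932 * speed
# should give back the same intercept and slope:
speed = [4, 7, 10, 15, 20, 25]
dist = [-17.579 + 3.932 * s for s in speed]
b0, b1 = ols(speed, dist)
```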
We can argue that the intercept coefficient does not make practical sense (at zero speed, the braking distance is obviously zero, not -17.6 feet), but let's ignore this annoying detail for now and focus on other statistical details. We can apply the generic function summary to the object which stores the linear model results, to get further numbers:
The commented output reports the most important values: apart from the regression coefficients, also the coefficient of determination (r2), the value of the F statistic and the P-value. These are usually the numbers you need to include when reporting the results of the regression. Some details below:
- Parameters of the regression equation are important if you plan to predict the values of the dependent variable for a certain value of the explanatory variable. For example, if you want to know what the predicted braking distance would be if the car drives 15 mph, you can calculate it: dist_predicted = -17.58 + 3.93*15 = 41.40 ft (note that the values of speed and distance are in imperial units - mph = miles per hour, where 1 mph = 1.61 km/h, and ft = feet, where 1 ft = 0.30 m).
- Coefficient of determination quantifies how much variation in the dependent variable dist was explained by the given explanatory variable (speed). You can imagine that braking distance, in fact, depends on many other factors than just the speed of the car at the moment of pressing the brake - it also depends on how worn the tyres are, whether the road is dry, wet or icy, how quickly the driver pressed the brake, etc. Here, speed explains 65% of the variation, which is more than half, but far from the whole - almost 35% of the variation remains unexplained.
- Before we start to interpret the results, we often want to make sure that there is something to interpret. Even if the explanatory variable is a randomly generated value, it will explain a non-zero amount of variation (you can try it to see!). This means that we need to know whether our explained variation (65%) is higher than the variation that would be explained by some random variable. For this, we may test the null hypothesis H0 that there is no relationship between the dependent and explanatory variable, and if it is rejected, we can conclude that this relationship is significant. In the case of a parametric test of significance for regression, we calculate the F-value and use it (together with the appropriate degrees of freedom) to look up the appropriate P-value. The P-value is the probability of obtaining a value of the test statistic (here the F-value) equal to or more extreme than the observed one, even if the null hypothesis is true. The lower the P-value, the more statistically significant the result. In our case, the P-value is very low, namely 0.00000000000149, very close to zero (you don't want to write it in this long format!). When reporting the P-value, we can either report the original value, or report that the value is smaller than an arbitrarily selected threshold, e.g. P < 0.001 (three thresholds are commonly used, P < 0.001, P < 0.01 and P < 0.05; if the P-value is e.g. 0.032, you report P < 0.05).
It is important to report all relevant statistical values. For example, when you report the P-value, you also need to include the value of the test statistic (here the F-value) and the number of degrees of freedom (because the same F-value with different degrees of freedom leads to different P-values).
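To see where r2 and the F-value come from, here is a hedged Python sketch (again, not the R code of the post): r2 = 1 - SSresidual / SStotal, and for simple regression F = (r2 / 1) / ((1 - r2) / (n - 2)), with 1 and n - 2 degrees of freedom. It is run on the six rows of cars printed above, so the numbers differ from those for the full 50-row dataset:

```python
def regression_summary(x, y):
    # Fit the least-squares line, then compute r2 and F.
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    ss_tot = sum((yi - ybar) ** 2 for yi in y)                # total variation
    ss_res = sum((yi - (intercept + slope * xi)) ** 2
                 for xi, yi in zip(x, y))                     # unexplained variation
    r2 = 1 - ss_res / ss_tot
    F = (r2 / 1) / ((1 - r2) / (n - 2))                       # df = 1 and n - 2
    return r2, F

# The six rows of cars shown above (not the full dataset):
speed = [4, 4, 7, 7, 8, 9]
dist = [2, 10, 4, 22, 16, 10]
r2, F = regression_summary(speed, dist)
```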
Finally, we can plot the regression line onto the plot. The simplest way to do it is using the function abline, which can directly use the values from the lm object. You may know that abline has arguments v and h for coordinates of vertical and horizontal lines, respectively, but it also has arguments a and b for regression coefficients (intercept and slope, respectively), and it can extract these values directly from the result of lm:
plot (dist ~ speed, data = cars)
lm_cars <- lm (dist ~ speed, data = cars)
abline (lm_cars, col = 'red')
The more general solution is to use the parameters of the linear regression to predict the values of the dependent variable. Since we are considering a linear response here, we can just predict the values of the dependent variable for the minimum and maximum values of the explanatory variable (i.e. the braking distance for the minimum and maximum measured speed within the experiment). For this purpose, there is the function predict, which takes two arguments: the object generated by the lm function (here lm_cars), and newdata with the values of the explanatory variable for which the dependent variable should be predicted. Important: newdata must be a list, and the name of its component must be the same as the name of the explanatory variable in the formula you used in lm (here speed). If the name is different (e.g. in lm you use speed, but in predict you use x), then the predict function will (silently, i.e. without warning) ignore the argument and predict values for all values of the explanatory variable (50 speed measurements here).
plot (dist ~ speed, data = cars)
lm_cars <- lm (dist ~ speed, data = cars)
speed_min_max <- range (cars$speed)  # range of values, the same as: c(min (cars$speed), max (cars$speed))
dist_pred <- predict (lm_cars, newdata = list (speed = speed_min_max))
lines (dist_pred ~ speed_min_max, col = 'blue')
You can see that the result (blue regression line) is very similar to the one drawn by the abline function (red regression line above), but not identical - the red line goes from edge to edge of the plotting region, while the blue line goes only from the minimum to the maximum value of speed.
The benefit of the predict solution becomes more apparent if we consider that maybe the relationship between the dependent and explanatory variable is not linear, and would be better described by a curvilinear shape. For this, we can use the poly function inside the formula of the lm model, which creates a polynomial regression (i.e. y = b0 + b1*x + b2*x^2 for a polynomial of the second degree, allowing the curve to "bend" once).
Call:
lm(formula = dist ~ poly(speed, degree = 2), data = cars)

Coefficients:
              (Intercept)  poly(speed, degree = 2)1  poly(speed, degree = 2)2
                    42.98                    145.55                     23.00
One caveat: by default, poly creates orthogonal polynomials, so the reported coefficients are not the coefficients of speed and speed^2 themselves, and they cannot be plugged directly into an equation of the form

dist_predicted = b0 + b1*speed + b2*speed^2

If you want raw polynomial coefficients that fit this equation, refit the model with poly (speed, degree = 2, raw = TRUE).
To draw the predicted polynomial regression curve, we need to use predict (abline can draw only straight lines), and additionally, we need to define enough points along the curve to make the curvature smooth.
plot (dist ~ speed, data = cars)
lm_cars_poly <- lm (dist ~ poly (speed, degree = 2), data = cars)  # fit the polynomial model (lm_cars above is the linear fit)
speed_seq <- seq (min (cars$speed), max (cars$speed), length = 100)
dist_pred_2 <- predict (lm_cars_poly, newdata = list (speed = speed_seq))
lines (dist_pred_2 ~ speed_seq, col = 'green')
You can see that the green line is practically straight - in fact, the pattern is very close to linear, so even when we fit it with a polynomial, the curve bends only slightly.
Finally, we can report the statistical values of the linear regression as a legend inside the figure (this is suitable, however, only if there is enough space inside the drawing area; otherwise you can include it e.g. in the caption). Let's do it for the linear (not the polynomial) regression, and include the parameters of the regression, r2, the F-value and the P-value:
plot (dist ~ speed, data = cars)
lm_cars <- lm (dist ~ speed, data = cars)
abline (lm_cars, col = 'red')
legend ('topleft', legend = c('y = -17.58 + 3.93*x', 'r2 = 0.65, F = 89.57, P < 0.001'), bty = 'n')
We've prepared our fish by cutting it into flat rectangular portions and diced our vegetables into thin slices. We've even gone off on a tangent, playing with frisbees and toilet paper tubes. We're hungry for a gourmet fish supper, but it'll be bland without any seasoning.
That's where this section comes in handy. Spices and herbs come in long strings that we need to chop. These long lines, called arcs, are written using a couple different methods, some of which we'll discuss below. We'll learn how to chop them up to find how much we have before we add them to our boiling pot.
The first thing we need to know is the length of a line. We used it for triangles, we used it for circles, and we even used it to find areas on disks. Who knew Pythagoras would be on to something when he came up with the Pythagorean Theorem? Well...he probably did.
These problems are just a short review of how to use his theorem. If you have any trouble with them, you should review the Pythagorean Theorem and then come back.
Find the length of the line between
1) (5,4) and (2,3)
2) (-2,-1) and (6, 3)
3) (3,5) and (-2, -6)
1) The change in x between the points (5,4) and (2,3) is 5 – 2 = 3 and the change in y is 4 – 3 = 1.
The length of the line is √(3² + 1²) = √10.
2) Between the points (-2,-1) and (6, 3), Δ x = 8 and Δ y = 4.
The length of the line is √(8² + 4²) = √80 = 4√5.
3) Between the points (3,5) and (-2, -6), Δ x = 5 and Δ y = 11.
The length of the line is √(5² + 11²) = √146.
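These three answers are easy to verify with a few lines of code. A quick Python sketch (the helper name dist_between is mine, not from the text):

```python
import math

def dist_between(p, q):
    # Pythagorean Theorem: length = sqrt((change in x)^2 + (change in y)^2)
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

d1 = dist_between((5, 4), (2, 3))     # sqrt(10)
d2 = dist_between((-2, -1), (6, 3))   # sqrt(80)
d3 = dist_between((3, 5), (-2, -6))   # sqrt(146)
```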
When an herb is shaped like a curve (or arc) instead of a straight line, the length of the curve or arc length is the length of a piece of string that exactly covers the curve.
The idea of the string is good intuition, but it's not very useful for doing problems. Instead, like the other problems we've done in this unit to find area and volume, we are going to break our arc length problems down into parts. To find the length of the curve of a continuous function f on an interval [a,b], we chop the curve into little pieces, approximate each piece by a straight line segment, and add up the lengths of the pieces.
Most of the assumptions we made earlier still apply if we replace "area" with "length." The exception is the first assumption, since there aren't "horizontal" or "vertical" slices when we're only working in one dimension.
We're going to use the Pythagorean Theorem a lot more in this section, albeit a little differently than we have before. For a sample function f(x) on an interval [a,b],
we'll chop the curve up into tiny pieces and zoom in on the piece at position x. We want to be careful that the sun isn't shining too brightly. We aren't trying to kill a bug with a magnifying glass.
This piece is very close to being a straight line. To approximate its length, we'll pretend that it really is a straight line.
The change in x over the little bit of line is Δ x. The change in y is Δ y. That's nothing new. Since we've zoomed in so much, the derivative of f at x is approximately the same as the slope of this little bit of line:

f ' (x) ≅ Δ y / Δ x.

If we rearrange this equation, we get

Δ y ≅ f ' (x) Δ x.
The Pythagorean Theorem says the length of the little bit of line is

√((Δ x)² + (Δ y)²) ≅ √((Δ x)² + (f ' (x) Δ x)²) = √(1 + (f ' (x))²) Δ x.

We add up these quantities and let the number of pieces approach ∞ to get an integral that gives us the exact length of the curve. The integral is from a to b since those are the values that make sense for x.
We can't really make these problems any harder. We just need to remember

Length = ∫ from a to b of √(1 + (f ' (x))²) dx

to find the length of a curve. If we were to forget the formula, it's not the zombie apocalypse. We could just draw a picture that breaks the curve up into pieces and use the Pythagorean Theorem to find the length of a piece. Then we'd just go through the steps we just used above to reconstruct the formula. We recommend practicing this a few times before the test. Memorizing a formula is one thing, but actually understanding it is another.
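The derivation doubles as a numerical recipe: split [a,b] into many small pieces, pretend each piece of the curve is a straight line, and add up the Pythagorean lengths. A Python sketch of that idea (the names are mine), checked on a curve whose length we already know - the straight line y = 2x on [0,1] has length √5:

```python
import math

def arc_length(f, a, b, n=10000):
    # Sum the lengths of n small chords along the curve y = f(x).
    total = 0.0
    for i in range(n):
        x0 = a + (b - a) * i / n
        x1 = a + (b - a) * (i + 1) / n
        total += math.sqrt((x1 - x0) ** 2 + (f(x1) - f(x0)) ** 2)
    return total

# A straight line is the easiest sanity check: y = 2x on [0, 1].
line_length = arc_length(lambda x: 2 * x, 0, 1)
```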
We just learned how to chop up a simple herb like a chive. They're just simple lines that we know how to describe in terms of the variables x and y. Sometimes, we need to cut up more intricate herbs like rosemary. These lines may be described in different ways, including in parametric equations.
These parametric equations are just a different language for describing lines. They're nice because it's easier to describe the curve using this other language, and it can be easier to integrate this way too.
The process for finding the length of a curve described by parametric equations is the same. We're just using a different language. It's not even an African click language.
Suppose we want to find the length of the curve described by parametric equations x(t) and y(t), on the interval a ≤ t ≤ b.
We break up the curve into little pieces, as usual:
The Pythagorean Theorem says that the length of a little piece is (approximately)

√((Δ x)² + (Δ y)²).

We need to get Δ x and Δ y in more useful terms, preferably in terms of t, so we can end up with an integral with respect to t.

The amount x changes over a little time interval is approximately x ' (t) multiplied by the length of the time interval:

Δ x ≅ x ' (t) Δ t.

Similarly, for y,

Δ y ≅ y ' (t) Δ t.

This means the length of a little piece is approximately

√((x ' (t) Δ t)² + (y ' (t) Δ t)²).

This simplifies to

√((x ' (t))² + (y ' (t))²) Δ t.

When we integrate over the values that make sense for t, we get the length of the curve:

Length = ∫ from a to b of √((x ' (t))² + (y ' (t))²) dt.
This equation is just like the one we derived in the previous section, but now the curve is parameterized with the variable t. We're using the notation dx/dt and dy/dt instead of the notation x ' (t) and y ' (t) to make it clear that t is the independent variable. Since there are so many variables floating around, using prime notation here isn't the best idea. It's best not to get confused with dots and quotation or prime marks floating around a page of mathematical soup.
Here is a recap video of Pythagorean Theorem, just in case you need it.
Find the length of the parametric curve described by
x(t) = t², y(t) = t³
for 0 ≤ t ≤ 2.
You may use a calculator to evaluate the integral.
Explain why the answer is reasonable.
First, we find the derivatives

x ' (t) = 2t and y ' (t) = 3t².

Using our new formula, the length of the curve is

∫ from 0 to 2 of √((2t)² + (3t²)²) dt = ∫ from 0 to 2 of t √(4 + 9t²) dt ≈ 9.07.
If we graph the curve, we get a gently bending curve running from (0,0) to (4,8).
This curve is close enough to the straight line between (0,0) and (4,8), so we would expect the length of the curve to be close to the length of that line.
The length of the line is √(4² + 8²) = √80 ≈ 8.9.
Since that's close to our answer, our answer makes sense.
When we're asked to explain why an answer is reasonable, we should compare the curve to a line or circle whose length we know how to find. Even when we're not explicitly asked to do this, it's a good idea as a way to check ourselves. If a curve is as long as a green bean, we don't want to trust an answer that says it's as long as Route 66.
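One way to run such a check without graphing is to approximate the arc length numerically, summing small chords along the parametrized curve. A Python sketch for the example above (x = t², y = t³ on [0,2]); the exact value works out to (40√40 − 8)/27 ≈ 9.07, and the chord sum should land very close to it:

```python
import math

def param_arc_length(x, y, a, b, n=20000):
    # Approximate the length of (x(t), y(t)), a <= t <= b, by n small chords.
    total = 0.0
    for i in range(n):
        t0 = a + (b - a) * i / n
        t1 = a + (b - a) * (i + 1) / n
        total += math.hypot(x(t1) - x(t0), y(t1) - y(t0))
    return total

curve_length = param_arc_length(lambda t: t ** 2, lambda t: t ** 3, 0, 2)
exact = (40 * math.sqrt(40) - 8) / 27   # from integrating t*sqrt(4 + 9t^2)
```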
With parametric functions we have to take a little extra care with the limits of integration. The limits of integration are values of t. If we aren't given the values of t, we have to find them ourselves.
Fun fact: "normal" functions are just a special case of parametric functions. The function f(x) can be parametrized as
x(t) = t
y(t) = f(t).
Since x and t are the same thing, the curve f(x) for a ≤ x ≤ b is the same as the parametrized curve for a ≤ t ≤ b. We can find the length of the curve by working with the parametrized version.
The derivative of x(t) = t is

x ' (t) = 1,

and the derivative of y(t) = f(t) is

y ' (t) = f ' (t).

Then the length of the curve is

∫ from a to b of √(1 + (f ' (t))²) dt.

Since x and t are the same, we could just as well write this as

∫ from a to b of √(1 + (f ' (x))²) dx,

which is the same as the formula we had earlier.
This means we don't need to remember multiple formulas for arc length. Having to memorize less is better. It'll keep us from making mistakes between formulas, or from mistaking a panda for a polar bear.
If we remember the version for parametric equations, we can find the arc length of a normal function by using the parametrization x = t.
We prefer the parametric version. It looks just a bit more complicated, but it's more fun. The integral

∫ from a to b of √((x ' (t))² + (y ' (t))²) dt

looks like it's using the Pythagorean Theorem, which makes it easier to remember.
Polar functions are a special case of parametric functions. These are just equations disguised as nasty trigonometric functions. It's sort of like they're wearing intricate Halloween costumes, like a stick figure man costume made of black cloth and glow sticks. We can't tell who it is, but we can do our best to use our knowledge about a person to figure it out.
The polar function r = f(t) can be parametrized as
x(t) = r cos t = f(t) cos t
y(t) = r sin t = f(t) sin t
We're using t instead of θ so we can talk about dx/dt and dy/dt instead of dx/dθ and dy/dθ, to be consistent with the earlier discussion of parametric functions. To find dx/dt and dy/dt, we have to apply the product rule carefully.
Find the arc length for one petal of the polar function r = sin(3t).
The function looks like a flower with three petals.
The first petal corresponds to the interval 0 ≤ t ≤ π/3. We get
x(t) = r cos t = sin(3t)cos t
y(t) = r sin t = sin(3t)sin t.
Using the product rule gives us

dx/dt = 3 cos(3t) cos t – sin(3t) sin t
dy/dt = 3 cos(3t) sin t + sin(3t) cos t.

We use the arc length formula and get

∫ from 0 to π/3 of √((3 cos(3t) cos t – sin(3t) sin t)² + (3 cos(3t) sin t + sin(3t) cos t)²) dt.
It looks complicated and drawn out, but we can always just use a calculator to solve the integral. If our teacher asks us to solve this one, they're looking to torture us.
Sometimes, a slightly modified costume simplifies the calculations. We can use the identity
sin² t + cos² t = 1
to get a nicer formula for the arc length of a polar function r = f(t):

∫ √((f(t))² + (f ' (t))²) dt.
If we want to make the formula look more like it goes with polar functions, we can put θ in place of t:

∫ √((f(θ))² + (f ' (θ))²) dθ.
This formula is nice to keep handy, but we probably don't need to memorize it unless we need to do a lot of polar arc length problems. |
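As a sanity check on that simplification, we can compute the petal length of r = sin(3t) two ways: chord sums along the parametrized curve, and a Riemann sum of the simplified integrand √(sin²(3t) + 9 cos²(3t)) over [0, π/3]. A Python sketch (the value, roughly 2.2, is my numeric result, not a number quoted in the text); the two methods should agree:

```python
import math

def petal_length_chords(n=20000):
    # Chord sums along x(t) = sin(3t)cos(t), y(t) = sin(3t)sin(t), 0 <= t <= pi/3.
    a, b = 0.0, math.pi / 3
    total = 0.0
    for i in range(n):
        t0 = a + (b - a) * i / n
        t1 = a + (b - a) * (i + 1) / n
        x0, y0 = math.sin(3 * t0) * math.cos(t0), math.sin(3 * t0) * math.sin(t0)
        x1, y1 = math.sin(3 * t1) * math.cos(t1), math.sin(3 * t1) * math.sin(t1)
        total += math.hypot(x1 - x0, y1 - y0)
    return total

def petal_length_formula(n=20000):
    # Midpoint Riemann sum of sqrt(sin^2(3t) + 9*cos^2(3t)) on [0, pi/3].
    a, b = 0.0, math.pi / 3
    h = (b - a) / n
    return h * sum(math.sqrt(math.sin(3 * (a + (i + 0.5) * h)) ** 2
                             + 9 * math.cos(3 * (a + (i + 0.5) * h)) ** 2)
                   for i in range(n))

length_chords = petal_length_chords()
length_formula = petal_length_formula()
```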
Treaty of Union
The Treaty of Union is the name now given to the agreement which led to the creation of the new state of Great Britain, stating that England and Scotland were to be "United into One Kingdom by the Name of Great Britain". At the time it was more often referred to as the Articles of Union. The details of the Treaty were agreed on 22 July 1706, and separate Acts of Union were passed by the parliaments of England and Scotland to put the agreed Articles into effect; the political union took effect on 1 May 1707. Queen Elizabeth I of England and Ireland, last monarch of the Tudor dynasty, died without issue on 24 March 1603, and the throne passed at once to her first cousin twice removed, James VI of Scotland, a member of the House of Stuart and the only son of Mary, Queen of Scots. By the Union of the Crowns in 1603 he assumed the throne of the Kingdom of England and the Kingdom of Ireland as King James I; this personal union lessened the constant English fears of Scottish cooperation with France in a French invasion of England.
After this personal union, the new monarch, James I and VI, sought to unite the Kingdom of Scotland and the Kingdom of England into a single state which he referred to as "Great Britain". Acts of Parliament attempting to unite the two countries failed in 1606, 1667 and 1689. Beginning in 1698, the Company of Scotland sponsored the Darien scheme, an ill-fated attempt to establish a Scottish trading colony on the Isthmus of Panama, collecting investments from Scots equal to one-quarter of all the money circulating in Scotland at the time. In the face of opposition by English commercial interests, the Company of Scotland raised subscriptions in Amsterdam and London for its scheme. For his part, King William III had given only lukewarm support to the Scottish colonial endeavour. England was at war with France and hence did not want to offend Spain, which claimed the territory as part of New Granada, and England was under pressure from the London-based East India Company, anxious to maintain its monopoly over English foreign trade.
It therefore forced the Dutch investors to withdraw. Next, the East India Company threatened legal action, on the grounds that the Scots had no authority from the king to raise funds outside the king's realm, and obliged the promoters to refund subscriptions to the Hamburg investors; this left no source of finance but Scotland itself. The colonisation ended in a military confrontation with the Spanish in 1700, and most colonists died of tropical diseases; this was an economic disaster for the Scottish ruling-class investors and diminished the resistance of the Scottish political establishment to the idea of political union with England. The establishment supported the union, despite some popular opposition and anti-union riots in Edinburgh and elsewhere. Deeper political integration had been a key policy of Queen Anne since she had acceded to the thrones of the three kingdoms in 1702. Under the aegis of the Queen and her ministers in both kingdoms, in 1705 the parliaments of England and Scotland agreed to participate in fresh negotiations for a treaty of union.
It was agreed that England and Scotland would each appoint thirty-one commissioners to conduct the negotiations. The Scottish Parliament began to arrange an election of the commissioners to negotiate on behalf of Scotland, but in September 1705 the leader of the Country Party, the Duke of Hamilton, who had previously attempted to obstruct the negotiation of a treaty, proposed that the Scottish commissioners should instead be nominated by the Queen, and this was agreed. In practice, the Scottish commissioners were nominated on the advice of the Duke of Queensberry and the Duke of Argyll. Of the Scottish commissioners who were subsequently appointed, twenty-nine were members of the governing Court Party, while one was a member of the Squadrone Volante. At the head of the list was Queensberry himself, along with the Lord Chancellor of Scotland, the Earl of Seafield. George Lockhart of Carnwath, a member of the opposition Cavalier Party, was the only commissioner opposed to union. The thirty-one English commissioners included government ministers and officers of state, such as the Lord High Treasurer, the Earl of Godolphin, and the Lord Keeper, Lord Cowper, as well as a large number of Whigs who supported union.
Most Tories in the Parliament of England were not in favour of a union, and only one was among the commissioners. Negotiations between the English and Scottish commissioners began on 16 April 1706 at the Cockpit-in-Court in London. The sessions opened with speeches from William Cowper, the English Lord Keeper, and from Lord Seafield, the Scottish Lord Chancellor, each describing the significance of the task. The commissioners did not carry out their negotiations face to face, but in separate rooms; they communicated their proposals and counter-proposals to each other in writing, and there was a blackout on news from the negotiations. Each side had its own particular concerns. Within a few days, England gained a guarantee that the Hanoverian dynasty would succeed Queen Anne to the Scottish crown, and Scotland received a guarantee of access to colonial markets, in the hope that Scots would be placed on an equal footing in terms of trade. After the negotiations ended on 22 July 1706, acts of parliament were drafted by both Parliaments to implement the agreed Articles of Union.
The Scottish proponents of union believed that failure to agree to the Articles would result in the imposition of a union under less favourable terms, English troops were stationed just south of the Scottish border and in northern Ireland as an "encouragement". Months of fierce debate in both capital cities and throughout both kingdoms followed. In Scotland, the debate on occasion dissolved int
2016 United Kingdom European Union membership referendum
The United Kingdom European Union membership referendum, known as the EU referendum and the Brexit referendum, took place on 23 June 2016 in the United Kingdom and Gibraltar to ask the electorate whether the country should remain a member of, or leave, the European Union, under the provisions of the European Union Referendum Act 2015 and the Political Parties, Elections and Referendums Act 2000. The referendum resulted in 51.9% of votes being in favour of leaving the EU. Although the referendum was non-binding, the government of the time had promised to implement the result; it initiated the official EU withdrawal process on 29 March 2017, meaning that the UK was due to leave the EU before 11 pm on 29 March 2019, UK time, when the two-year period for Brexit negotiations expired. Membership of the EU and its predecessors has long been a topic of debate in the United Kingdom; the country joined what were then the three European Communities, principally the European Economic Community, in 1973. A previous referendum on continued membership of the European Communities was held in 1975, in which continued membership was approved by 67.2% of those who voted.
In May 2015, in accordance with a Conservative Party manifesto commitment following their victory at the 2015 UK general election, the legal basis for a referendum on EU membership was established by the UK Parliament through the European Union Referendum Act 2015. Britain Stronger in Europe was the official group campaigning for the UK to remain in the EU, and was endorsed by the Prime Minister David Cameron and Chancellor George Osborne. Vote Leave was the official group campaigning for the UK to leave the EU, and was fronted by the Conservative MP Boris Johnson, Secretary of State for Justice Michael Gove and Labour MP Gisela Stuart. Other campaign groups, political parties, trade unions and prominent individuals were involved, and each side had supporters from across the political spectrum. After the result, financial markets reacted negatively worldwide, and Cameron announced that he would resign as Prime Minister and Leader of the Conservative Party, having campaigned unsuccessfully for a "Remain" vote.
It was the first time that a national referendum result had gone against the preferred option of the UK Government. Cameron was succeeded by Home Secretary Theresa May on 13 July 2016, and the opposition Labour Party faced a leadership challenge as a result of the EU referendum. Several campaign groups and parties have been fined by the Electoral Commission for campaign finance irregularities, with the fines imposed on Leave.EU and BeLeave constrained by the cap on the commission's fines. There is an ongoing investigation into possible Russian interference in the referendum. The European Communities were formed in the 1950s – the European Coal and Steel Community in 1952, and the European Atomic Energy Community and European Economic Community in 1957. The EEC, the most ambitious of the three, came to be known as the "Common Market". The UK first applied to join them in 1961; a later application was successful, and the UK joined in 1973. Political integration gained greater focus when the Maastricht Treaty established the European Union in 1993, which incorporated the European Communities.
Prior to the 2010 general election, the Leader of the Conservative Party David Cameron had given a "cast iron" promise of a referendum on the Lisbon Treaty, which he backtracked on after all EU countries had ratified the treaty before the election. When they attended the May 2012 NATO summit meeting, UK Prime Minister David Cameron, Foreign Secretary William Hague and Ed Llewellyn discussed the idea of using a European Union referendum as a concession to energise the Eurosceptic wing of the Conservative Party. Cameron promised in January 2013 that, should the Conservatives win a parliamentary majority at the 2015 general election, the British government would negotiate more favourable arrangements for continuing British membership of the EU, before holding a referendum on whether the UK should remain in or leave the EU; the Conservative Party published a draft EU Referendum Bill in May 2013, outlined its plans for renegotiation followed by an in-out vote, were the party to be re-elected in 2015.
The draft Bill stated that the referendum had to be held no later than 31 December 2017. The draft legislation was taken forward as a Private Member's Bill by Conservative MP James Wharton, known as the European Union (Referendum) Bill 2013; the bill's First Reading in the House of Commons took place on 19 June 2013. Cameron was said by a spokesperson to be "very pleased" and would ensure the Bill was given "the full support of the Conservative Party". Regarding the ability of the bill to bind the UK Government in the 2015–20 Parliament to holding such a referendum, a parliamentary research paper noted that: The Bill provides for a referendum on continued EU membership by the end of December 2017 and does not otherwise specify the timing, other than requiring the Secretary of State to bring forward orders by the end of 2016. If no party obtained a majority at the election, there might be some uncertainty about the passage of the orders in the next Parliament. The bill received its Second Reading on 5 July 2
English law is the common law legal system of England and Wales, comprising criminal law and civil law, each branch having its own courts and procedures. England's most authoritative law is statutory legislation, which comprises Acts of Parliament, regulations and by-laws. In the absence of any statutory law, the common law with its principle of stare decisis forms the residual source of law, based on judicial decisions and usage. Common law is made by sitting judges who apply both statutory law and established principles which are derived from the reasoning from earlier decisions. Equity is the other historic source of judge-made law. Common law can be repealed by Parliament. Not being a civil law system, English law has no comprehensive codification. However, most of its criminal law has been codified from its common law origins, in the interests both of certainty and of ease of prosecution. For the time being, murder remains a common law crime rather than a statutory offence. Although Scotland and Northern Ireland form part of the United Kingdom and share Westminster as a primary legislature, they have separate legal systems outside of English Law.
International treaties such as the European Union's Treaty of Rome or the Hague-Visby Rules have effect in English law only when adopted and ratified by Act of Parliament. Adopted treaties may subsequently be denounced by executive action, unless the denunciation or withdrawal would affect rights enacted by Parliament; in that case executive action cannot be used, owing to the doctrine of Parliamentary sovereignty. This principle was established in the case of Miller v Secretary of State for Exiting the European Union in 2017. Criminal law is the law of punishment, whereby the Crown prosecutes the accused. Civil law is concerned with tort, families, companies and so on. Civil law courts operate to provide a party who has an enforceable claim with a remedy such as damages or a declaration. In this context, civil law also denotes the system of codified law prevalent in continental Europe, founded on the ideas of Roman law. By contrast, English law is the archetypal common law jurisdiction, built upon case law.
In this context, common law means the judge-made law of the King's Bench. Equity is concerned with trusts and equitable remedies, and operates in accordance with the principles known as the "maxims of equity". The reforming Judicature Acts of the 1870s amalgamated the courts into one Supreme Court of Judicature, directed to administer both law and equity. The neo-gothic Royal Courts of Justice in the Strand were built shortly afterwards to celebrate these reforms. Public law is the law governing relationships between the individual and the state, while private law encompasses relationships between private entities. A remedy is "the means given by law for the recovery of a right, or of compensation for its infringement". Most remedies are available only from the court. Historically, most civil actions claiming damages in the High Court were commenced by obtaining a writ issued in the Queen's name. After 1979, writs have merely required the parties to appear, and writs are no longer issued in the name of the Crown. Now, after the Woolf Reforms of 1999, all civil actions other than those connected with insolvency are commenced by the completion of a Claim Form as opposed to a Writ, Originating Application, or Summons.
In England, there is a hierarchy of sources, as follows: legislation; the case-law rules of common law and equity, derived from precedent decisions; parliamentary conventions; general customs; and books of authority. Primary legislation in the UK may take the following forms: Acts of Parliament; Acts of the Scottish Parliament; Acts and Measures of the National Assembly for Wales; and Statutory Rules of the Northern Ireland Assembly. Orders in Council are a sui generis category of legislation. Secondary legislation in England includes Statutory Instruments and Ministerial Orders, and bye-laws of metropolitan boroughs, county councils and town councils. Statutes are cited in this fashion: "Short Title Year", e.g. Theft Act 1968; this became the usual way to refer to Acts from 1840 onwards. Before then, for example, the Pleading in English Act 1362 was referred to as 36 Edw. III c. 15, meaning "36th year of the reign of Edward III, chapter 15". Common law is a term with historical origins in the legal system of England. It denotes, in the first place, the judge-made law that developed from the early Middle Ages, as described in a work published at the end of the 19th century, The History of English Law before the Time of Edward I, in which Pollock and Maitland expanded the work of Coke and Blackstone.
The law developed in England's Court of Common Pleas and other common law courts, which became the law of the colonies settled under the crown of England or of the United Kingdom, in North America and elsewhere.
2014 Scottish independence referendum
A referendum on Scottish independence from the United Kingdom took place on Thursday 18 September 2014. The referendum question was "Should Scotland be an independent country?", which voters answered with "Yes" or "No". The "No" side won, with 2,001,926 voting against 1,617,989 voting in favour; the turnout of 84.6% was the highest recorded for an election or referendum in the United Kingdom since the introduction of universal suffrage. The Scottish Independence Referendum Act 2013, setting out the arrangements for the referendum, was passed by the Scottish Parliament in November 2013, following an agreement between the devolved Scottish government and the Government of the United Kingdom. To pass, the independence proposal required a simple majority. With some exceptions, all European Union or Commonwealth citizens resident in Scotland aged sixteen years or over could vote, which produced a total electorate of 4,300,000 people; this was the first time that the electoral franchise was extended to include sixteen and seventeen-year-olds in Scotland.
Yes Scotland was the main campaign group for independence, while Better Together was the main campaign group in favour of maintaining the union. Many other campaign groups, political parties, businesses and prominent individuals were involved. Prominent issues raised during the referendum included the currency an independent Scotland would use, public expenditure, EU membership, and North Sea oil. An exit poll of voters revealed that for "No" voters, the retention of the pound sterling was the deciding factor, while for "Yes" voters, the biggest single motivation was "disaffection with Westminster politics". The Kingdom of Scotland and the Kingdom of England were established as independent countries during the Middle Ages. After fighting a series of wars during the 14th century, the two monarchies entered a personal union in 1603 when James VI of Scotland became James I of England. The two nations were temporarily united under one government when Oliver Cromwell was declared Lord Protector of a Commonwealth in 1653, but this was dissolved when the monarchy was restored in 1660.
Scotland and England united to form the Kingdom of Great Britain in 1707. Factors in favour of union were, on the Scottish side, the economic problems caused by the failure of the Darien scheme and, on the English, securing the Hanoverian line of succession. Great Britain in turn united with the Kingdom of Ireland in 1801, forming the United Kingdom of Great Britain and Ireland. Most of Ireland left the Union in 1922 to form the Irish Free State; the Labour Party was committed to home rule for Scotland in the 1920s, but it slipped down its agenda in the following years. The Scottish National Party was founded in 1934, but did not achieve significant electoral success until the 1960s. A document calling for home rule, the Scottish Covenant, was signed by 2,000,000 people in the late-1940s. Home rule, now known as Scottish devolution, did not become a serious proposal until the late 1970s as the Labour Government of James Callaghan came under electoral pressure from the SNP. A proposal for a devolved Scottish Assembly was put to a referendum in 1979.
A narrow majority of votes were cast in favour of change, but this had no effect due to a requirement that the number voting 'Yes' had to exceed 40% of the total electorate. No further constitutional reform was proposed until Labour returned to power in a landslide electoral victory in May 1997. A second Scottish devolution referendum was held that year, as promised in the Labour election manifesto. Clear majorities expressed support both for a devolved Scottish Parliament and for that Parliament having the power to vary the basic rate of income tax. The Scotland Act 1998 established the new Scottish Parliament, first elected on 6 May 1999, with power to legislate on unreserved matters within Scotland. A commitment to hold an independence referendum in 2010 was part of the SNP's election manifesto when it contested the 2007 Scottish Parliament election. The press were hostile towards the SNP, with a headline for The Scottish Sun in May 2007 stating – along with an image of a hangman's noose – "Vote SNP today and you put Scotland's head in the noose".
As a result of that election, the SNP became the largest party in the Scottish Parliament and formed a minority government led by the First Minister, Alex Salmond. The SNP administration launched a 'National Conversation' as a consultation exercise in August 2007, part of which included a draft referendum bill. After this, a white paper for the proposed Referendum Bill was published, on 30 November 2009; it detailed four possible scenarios, with the text of the referendum question to be revealed later, the first of the scenarios being no change. The Scottish government published a draft version of the bill on 25 February 2010 for public consultation. The consultation paper set out the proposed ballot papers, the mechanics of the proposed referendum, and how the proposed referendum was to be regulated. Public responses were invited. The bill outlined three proposals: the first was full devolution or 'devolution max', suggesting that the Scottish Parliament should be responsible for "all laws and duties in Scotland", with the exception of "defence and foreign affairs.
Government of Ireland Act 1920
The Government of Ireland Act 1920 was an Act of the Parliament of the United Kingdom. The Act's long title was "An Act to provide for the better government of Ireland"; the Act was intended to establish separate Home Rule institutions within two new subdivisions of Ireland: the six north-eastern counties were to form "Northern Ireland", while the larger part of the country was to form "Southern Ireland". Both areas of Ireland were to continue as a part of the United Kingdom of Great Britain and Ireland, provision was made for their future reunification under common Home Rule institutions. Home Rule never took effect in Southern Ireland, due to the Irish War of Independence, which resulted instead in the Anglo-Irish Treaty and the establishment in 1922 of the Irish Free State. However, the institutions set up under this Act for Northern Ireland continued to function until they were suspended by the British parliament in 1972 as a consequence of the Troubles; the remaining provisions of the Act still in force in Northern Ireland were repealed under the terms of the 1998 Good Friday Agreement.
Various attempts had been made to give Ireland limited regional self-government, known as Home rule, in the late 19th and early 20th centuries. The First Home Rule Bill of 1886 was defeated in the House of Commons because of a split in the Liberal Party over the principle of Home Rule, while the Second Home Rule Bill of 1893, having been passed by the Commons was vetoed by the House of Lords; the Third Home Rule Bill introduced in 1912 by the Irish Parliamentary Party could no longer be vetoed after the passing of the Parliament Act 1911 which removed the power of the Lords to veto bills. They could be delayed for two years; because of the continuing threat of civil war in Ireland, King George V called the Buckingham Palace Conference in July 1914 where Irish Nationalist and Unionist leaders failed to reach agreement. Controversy continued over the rival demands of Irish Nationalists, backed by the Liberals, Irish Unionists, backed by the Conservatives, for the exclusion of most or all of the province of Ulster.
In an attempt at compromise, the British government put forward an amending bill, which would have allowed for Ulster to be temporarily excluded from the working of the Act. A few weeks after the British entry into the war, the Act received Royal Assent, while the amending bill was abandoned. However, the Suspensory Act 1914 meant that implementation would be suspended for the duration of what was expected to be only a short European war. A delay ensued because of the effective end of the First World War in November 1918, the Paris Peace Conference, 1919, the Treaty of Versailles, signed in June 1919. Starting in September 1919, with the British Government, now led by David Lloyd George, committed under all circumstances to implementing Home Rule, the British cabinet's Committee for Ireland, under the chairmanship of former Ulster Unionist Party leader Walter Long, pushed for a radical new solution. Long proposed the creation of two Irish home rule entities, Northern Ireland and Southern Ireland, each with unicameral parliaments.
The House of Lords accordingly amended the old Bill to create a new Bill which provided for two bicameral parliaments, "consisting of His Majesty, the Senate of Ireland, and the House of Commons of Ireland." The Bill's second reading debates in late March 1920 revealed that a large number of Irish members of parliament present felt that the proposals were unworkable. After considerable delays in debating the financial aspects of the measure, the substantive third reading of the Bill was approved by a large majority on 11 November 1920. A considerable number of the Irish Members present voted against the Bill, including Southern Unionists such as Maurice Dockrell and Nationalists like Joe Devlin. During the Great War, Irish politics moved decisively in a different direction. Several events, including the Easter Rising of 1916, the subsequent reaction of the British Government, and the Conscription Crisis of 1918, had utterly altered the state of Irish politics and made Sinn Féin the dominant voice of Irish nationalism.
Sinn Féin, standing for 'an independent sovereign Ireland', won 73 of the 105 parliamentary seats on the island in the 1918 general election. Its elected members established their own parliament, Dáil Éireann, which declared the country's independence as the Irish Republic. Dáil Éireann, after a number of meetings, was declared illegal in September 1919 by the Lord Lieutenant of Ireland. For a variety of reasons all the Ulster Unionist MPs at Westminster voted against the Act; they preferred that all or most of Ulster would remain within the United Kingdom, accepting the proposed northern Home Rule state only as the second best option. Thus, when the Act became law on 23 December 1920 it was out of touch with realities in Ireland. The long-standing demand for home rule had been replaced among Nationalists by a demand for complete independence. The Republic's army was waging the Irish War of Independence against British rule, which had reached a nadir in late 1920. The Act divided Ireland into two territories, Southern Ireland and Northern Ireland, each intended to be self-governing, except in areas reserved to the Parliament of the United Kingdom: chief amongst these were matters relating to the Crown, to defence, foreign affairs, and international trade.
European Union (Notification of Withdrawal) Act 2017
The European Union (Notification of Withdrawal) Act 2017 is an Act of the Parliament of the United Kingdom empowering the Prime Minister to give the Council of the European Union the formal notice – required by Article 50 of the Treaty on European Union – for starting negotiations for the United Kingdom's withdrawal from the European Union. The Act gave effect to the result of the 2016 United Kingdom European Union membership referendum held on 23 June, in which 51.9% of voters chose to leave the European Union. It directly followed the decision of the Supreme Court on 24 January 2017 in the judicial review case of R (Miller) v Secretary of State for Exiting the European Union, and was the first major piece of Brexit legislation to be passed by Parliament following the referendum. The Act's long title is "to confer power on the Prime Minister to notify, under Article 50 of the Treaty on European Union, the United Kingdom's intention to withdraw from the EU". The Act confers on the Prime Minister the power to give the notice required under the Treaty when a member state decides to withdraw.
Section 1 states that no provision of the European Communities Act 1972 or other enactment prevents the Act taking effect. The Act's first reading as a bill in Parliament was on 26 January 2017, after the Supreme Court, in the Miller case, dismissed the government's appeal against the High Court's declaratory order, dated 7 November 2016, that "The Secretary of State does not have power under the Crown's prerogative to give notice pursuant to Article 50 of the Treaty on European Union for the United Kingdom to withdraw from the European Union." David Davis, Secretary of State for Exiting the European Union, formally introduced the bill for first reading in the House of Commons, and two days in the following week were allocated for the second reading debate. Labour leader Jeremy Corbyn said: "I am asking all our MPs not to block Article 50 and make sure it goes through next week". However, several Labour MPs were intending to rebel against the whip, including several of Corbyn's fellow opposition frontbenchers.
The vote for the bill's second reading was carried on 1 February by 498 to 114, the bill was committed to a Committee of the Whole House, with a three-day programme for the conclusion of all proceedings up to and including third reading. 47 of 229 Labour MPs voted against the bill, including 10 junior shadow ministers and 3 whips from the party. One Conservative voted against the bill, 2 of the 9 Liberal Democrat MPs abstained. Diane Abbott, the shadow home secretary whose constituency voted to remain in the EU, was accused of having "Brexit flu" as she did not attend the vote on Article 50 due to illness, despite attending a debate in Westminster Hall three hours before the vote. In the parliamentary debates on the bill before enactment, members expressed concerns about the prospective effects on trade and the economy, financial services, research and innovation policy and the rights of UK citizens in or entering the EU, EU citizens in or entering the UK; the House of Commons agreed to hold the Committee stage on 6, 7 and 8 February, followed by the report stage and third reading on 8 February.
Topics covered by the amendments submitted by MPs and selected for debate at the Committee stage included: Parliamentary scrutiny, the devolved administrations, the status of citizens of the EU and the European Economic Area in the UK, that of expatriate British citizens in other parts of the EU and the EEA outside of the UK. All amendments were outvoted in Committee. At third reading, the Commons passed the bill by 494 to 122 on 8 February 2017, the bill was sent for debate in the House of Lords. On 17 February 2017 the House of Commons Library issued a briefing paper on "Parliament's role in ratifying treaties", which quoted David Jones, Minister of State for Exiting the EU, as confirming in the debate the government's commitment to bringing forward a motion, for the approval of both Houses, that will cover the withdrawal agreement and the future relationship with the European Union, as stating that the government expected and intended this will be before the European Parliament debates and votes on the final agreement.
Before adjourning on 8 February 2017, the House of Lords gave the bill, as brought from the Commons, a first reading. The House of Lords announced that Lord Bridges of Headley would move the bill's second reading for debate on 20 and 21 February, that the Lord Privy Seal would move that Standing Orders be dispensed with so as to allow manuscript amendments to be tabled and moved for the third reading. In the second reading debate, one of the cross bench peers, Lord Hope, a Supreme Court Justice from 2009 until his retirement in 2013, mentioned that the wording of the bill sufficed for giving notice of withdrawal, as the Supreme Court's decision in the Miller case required, but it said nothing about the process of the two further stages stated in article 50: negotiation, the concluding of an agreement between the Union and the state, withdrawing. At the end of the second reading debate the House agreed that the bill would be considered by a committee of the whole house; this was timetabled for 27 February and 1 March 2017.
On 1 March, the House of Lords, debating in Committee, made an amendment to protect EU nationals living in the UK regardless of the rights of UK nationals continuing to live in member states of the EU. The amendment was voted for by 358 with 256 against. Eight other major amendments were rejected; the amendment adds to the bill a requirement that the gove
Henry VIII of England
Henry VIII was King of England from 1509 until his death in 1547. Henry was the second Tudor monarch, succeeding his father, Henry VII. Henry is best known for his six marriages, in particular his efforts to have his first marriage, to Catherine of Aragon, annulled, his disagreement with the Pope on the question of such an annulment led Henry to initiate the English Reformation, separating the Church of England from papal authority. He appointed himself the Supreme Head of the Church of England and dissolved convents and monasteries, for which he was excommunicated. Henry is known as "the father of the Royal Navy". Domestically, Henry is known for his radical changes to the English Constitution, ushering into England the theory of the divine right of kings. Besides asserting the sovereign's supremacy over the Church of England, he expanded royal power during his reign. Charges of treason and heresy were used to quell dissent, those accused were executed without a formal trial, by means of bills of attainder.
He achieved many of his political aims through the work of his chief ministers, some of whom were banished or executed when they fell out of his favour. Thomas Wolsey, Thomas More, Thomas Cromwell, Richard Rich and Thomas Cranmer all figured prominently in Henry's administration. He was an extravagant spender and used the proceeds from the Dissolution of the Monasteries and acts of the Reformation Parliament to convert into royal revenue the money formerly paid to Rome. Despite the influx of money from these sources, Henry was continually on the verge of financial ruin due to his personal extravagance, as well as his numerous costly and unsuccessful continental wars with King Francis I of France and the Holy Roman Emperor Charles V. At home, he oversaw the legal union of England and Wales with the Laws in Wales Acts 1535 and 1542, and following the Crown of Ireland Act 1542 he was the first English monarch to rule as King of Ireland. His contemporaries considered Henry in his prime to be an attractive and accomplished king.
He has been described as "one of the most charismatic rulers to sit on the English throne". He was an author and composer. As he aged, Henry became obese and his health suffered, contributing to his death in 1547. He has been characterised in later life as a lustful, egotistical and insecure king. He was succeeded by the issue of his third marriage, to Jane Seymour. Born on 28 June 1491 at the Palace of Placentia in Greenwich, Henry Tudor was the third child and second son of Henry VII and Elizabeth of York. Of the young Henry's six siblings, only three – Arthur, Prince of Wales, Margaret and Mary – survived infancy. He was baptised by Richard Fox, the Bishop of Exeter, at a church of the Observant Franciscans close to the palace. In 1493, at the age of two, Henry was appointed Constable of Dover Castle and Lord Warden of the Cinque Ports. He was subsequently appointed Earl Marshal of England and Lord Lieutenant of Ireland at age three, and was inducted into the Order of the Bath soon after. The day after the ceremony he was created Duke of York, and a month or so later was made Warden of the Scottish Marches.
In May 1495, he was appointed to the Order of the Garter. The reason for all the appointments to a small child was so his father could keep personal control of lucrative positions and not share them with established families. Henry was given a first-rate education from leading tutors, becoming fluent in Latin and French, learning at least some Italian. Not much is known about his early life – save for his appointments – because he was not expected to become king. In November 1501, Henry played a considerable part in the ceremonies surrounding his brother's marriage to Catherine of Aragon, the youngest surviving child of King Ferdinand II of Aragon and Queen Isabella I of Castile; as Duke of York, Henry used the arms of his father as king, differenced by a label of three points ermine. He was further honoured, on 9 February 1506, by Holy Roman Emperor Maximilian I who made him a Knight of the Golden Fleece. In 1502, Arthur died at the age of 15 of sweating sickness, just 20 weeks after his marriage to Catherine.
Arthur's death thrust all his duties upon the 10-year-old Henry. After a little debate, Henry became the new Duke of Cornwall in October 1502, and the new Prince of Wales and Earl of Chester in February 1503. Henry VII gave the boy few tasks. Young Henry was supervised and did not appear in public; as a result, he ascended the throne "untrained in the exacting art of kingship". Henry VII renewed his efforts to seal a marital alliance between England and Spain by offering his second son in marriage to Arthur's widow Catherine. Both Isabella and Henry VII were keen on the idea, which had arisen shortly after Arthur's death. On 23 June 1503, a treaty was signed for their marriage, and they were betrothed two days later. A papal dispensation was only needed for the "impediment of public honesty" if the marriage had not been consummated, as Catherine and her duenna claimed, but Henry VII and the Spanish ambassador set out instead to obtain a dispensation for "affinity", which took account of the possibility of consummation.
Cohabitation was not possible. Isabella's death in 1504, and the ensuing problems of succession in Castile, complicated matters. Her father preferred her to stay in England, but Henry VII's relations with Ferdinand had deteriorated. Catherine was therefore left in limbo for some time, culminating in Prince Henry's rejection of the marriage as soon as he was able, at the age of 14. Ferdinand's solution was to make his daugh
A new study has answered one of comets' most enduring mysteries.
For the first time, scientists have been able to provide precise measurements and analysis of a comet's interior. Of the 5,507 known comets streaming through space, only eight have been visited by human technology. As a result, scientists' understanding of comets, while growing, has lacked answers to many basic questions, according to a European Space Agency press release.
Comets’ densities are low, according to most measurements. The low density has indicated the objects are highly porous, but left questions over whether that was the result of cavernous interiors or just a low-density composition.
A study published in the journal Science finds that the inside of a comet consists of “mostly dust and water ice … homogeneous and constant in density on a global scale without large voids.”
A team of scientists with the Rheinische Institut für Umweltforschung an der Universität zu Köln, Germany, came to this conclusion through an experiment measuring the gravitational pull of Comet 67P/Churyumov-Gerasimenko on The European Space Agency's Rosetta orbiter, by analyzing changes in the orbiter’s frequency. As a result of the Doppler effect, the radio frequency the orbiter was beaming to Earth would change when it was pulled by the gravity of the comet, allowing scientists to create a map of the gravitational field around the comet.
“Newton’s law of gravity tells us that the Rosetta spacecraft is basically pulled by everything,” principal investigator Martin Pätzold said in the release.
“… This means that we had to remove the influence of the Sun, all the planets – from giant Jupiter to the dwarf planets – as well as large asteroids in the inner asteroid belt, on Rosetta’s motion, to leave just the influence of the comet. Thankfully, these effects are well understood and this is a standard procedure nowadays for spacecraft operations.”
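The frequency-shift technique described here follows from the first-order classical Doppler relation, Δf = −f·v/c: a change in the spacecraft's line-of-sight velocity produces a tiny shift in the received radio frequency. The sketch below is purely illustrative; the 8.4 GHz carrier and 1 mm/s velocity are assumed values for demonstration, not actual Rosetta mission parameters.

```python
# Toy illustration of the Doppler relationship used to map the comet's gravity:
# a small line-of-sight velocity change shifts the received radio frequency.
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(f_carrier_hz, v_los_mps):
    """First-order classical Doppler shift; positive velocity = receding."""
    return -f_carrier_hz * v_los_mps / C

f0 = 8.4e9   # assumed X-band-like carrier frequency, Hz (illustrative)
dv = 1e-3    # 1 mm/s change in line-of-sight velocity

shift = doppler_shift(f0, dv)
print(f"{shift:.4f} Hz")  # a shift of roughly -0.028 Hz per mm/s
```

Even shifts of a few hundredths of a hertz are measurable against a stable carrier, which is what makes gravity mapping from radio tracking feasible.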
The analysis also required a touch of cosmic luck. Comet 67P/Churyumov-Gerasimenko has a double-lobe structure, which has allowed the team to gather much more accurate and precise measurements of the gravitational field, according to the release.
“The high porosity seems to be an inherent property of the nucleus material,” the study concluded.
The Rosetta orbiter launched in March 2004 and embarked on a 10-year flight to deep space, where it rendezvoused with Comet 67P/Churyumov-Gerasimenko in 2014.
The Rosetta orbiter was accompanied on its trip by the Philae lander, which landed on the surface of the comet and collected information. The information was sent to Rosetta and the orbiter sent it to Earth. The European Space Agency lost contact with Philae in July. Attempts to reconnect with the lander have been unsuccessful.
Without any chance to reconnect with Philae, Rosetta’s mission will end in September 2016. The press release reports that the Rosetta orbiter will be “guided to a controlled impact on the surface of the comet.” |
This volume of a rectangular prism worksheet will introduce students to the concept of volume and how to calculate it for a rectangular prism. It will also help them understand the unit conversion of different parameters.
5 Interactive Ways for Volume of a Rectangular Prism Worksheet
Here, we have discussed 5 fun ways to evaluate the volume of a rectangular prism. Every method consists of an interactive worksheet where we have given some practice problems.
Definition and Example of a Rectangular Prism
A rectangular prism is a three-dimensional solid geometric shape that has six faces and each face is a rectangle. It is also known as a cuboid. A rectangular prism consists of eight vertices, twelve edges, and six rectangular faces. Here is the image of a rectangular prism.
Examples of rectangular prisms are boxes, bricks, and books.
Formula for Volume of a Rectangular Prism
The volume of a rectangular prism can be calculated with either of two formulas. If you know the base area of the prism and its height, then Volume = Area ✕ Height.
When the area is unknown but you know the three sides (Length (l), Width (w), and Height (h)) then the
Volume = Length ✕ Width ✕ Height, i.e. V = l ✕ w ✕ h.
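The two formulas can be sketched in a few lines of Python; the dimensions used below are illustrative examples, not values from the worksheet.

```python
# Two equivalent ways to compute the volume of a rectangular prism.

def volume_from_area(area, height):
    """Volume = base area x height."""
    return area * height

def volume_from_sides(length, width, height):
    """Volume = length x width x height."""
    return length * width * height

l, w, h = 4, 3, 5      # example side lengths, in units
base_area = l * w      # 12 square units

print(volume_from_area(base_area, h))  # 60 cubic units
print(volume_from_sides(l, w, h))      # 60 cubic units -- same result
```

Both routes give the same answer because the base area is just length ✕ width.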
The Volume of a Rectangular Prism: Area and Height
If you know the area of the rectangular base and the height of your rectangular prism, then the product of these two will give you the volume of the rectangular prism. You may have the values in integer form or decimal form.
Volume for Integer Value
When both values are in integer form, the volume will also be in integer form. Follow the process below to calculate the volume.
This is a PDF worksheet to calculate the volume of the rectangular prism.
Volume for Decimal Value of a Rectangular Prism
Here, the values of area or height, or both are in decimal form.
Suppose we need to find the volume of a rectangular object with an area of 26.85 unit² and a height of 5.52 units. Follow the solution given in the picture below to learn it.
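The worked example above (area 26.85 unit², height 5.52 units) can be checked with a short computation:

```python
# The decimal worked example: volume = base area x height.
area = 26.85    # square units
height = 5.52   # units

volume = area * height
print(round(volume, 3))  # 148.212 cubic units
```

Rounding to three decimal places avoids the tiny floating-point noise that multiplying decimals can introduce.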
Here, is the PDF worksheet for your practice.
Calculating the Volume of a Rectangular Prism: For Side Lengths
If you are given the sides of the prism, then it is quite easy to calculate the volume. All you need to do is multiply the three sides (length, width, and height), and the product will be the volume of the rectangular prism. You can calculate it for both integers and fractions.
Volume for Integer Value
The values here are in integer form, so the output will be in integer form as well.
This is the PDF for the volume of a rectangular prism worksheet.
Volume for Fraction Value
The three sides will be in fraction form. The procedure for calculating the volume is similar to the above process. You need to multiply the fractions to get the volume.
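Python's standard `fractions` module keeps the result as an exact fraction, which mirrors the by-hand method of multiplying numerators and denominators. The side lengths below are hypothetical, not taken from the worksheet.

```python
from fractions import Fraction

# Hypothetical fractional side lengths (illustrative only).
length = Fraction(3, 2)  # 3/2 units
width  = Fraction(2, 5)  # 2/5 units
height = Fraction(5, 4)  # 5/4 units

volume = length * width * height
print(volume)  # 3/4 cubic units, already in lowest terms
```

Note that `Fraction` reduces automatically: (3✕2✕5)/(2✕5✕4) = 30/40 = 3/4.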
Now, it’s time for your practice. Here is the PDF of the volume of a rectangular prism worksheet.
The volume of a Rectangular Prism Worksheet from Table
Here, we have attached a worksheet for practicing the volume of a rectangular prism where the dimensions are given in a table. From the values in the table, you need to calculate the volume of each rectangular prism.
We have attached the PDF of the volume of a rectangular prism worksheet from the table.
Unit Conversion for Calculating Volume of a Rectangular Prism Worksheet
In this section, we will demonstrate how to convert the dimensions to a preferred unit before calculating the volume of a rectangular prism.
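A conversion sketch: convert all sides to the target unit first, then multiply. The centimetre dimensions below are illustrative, not worksheet values.

```python
# Unit conversion before computing volume: sides given in centimetres,
# answer wanted in cubic metres. Note 1 m^3 = 100^3 = 1,000,000 cm^3.
CM_PER_M = 100

l_cm, w_cm, h_cm = 250, 100, 40      # illustrative sides in cm

volume_cm3 = l_cm * w_cm * h_cm      # volume in cubic centimetres
volume_m3 = volume_cm3 / CM_PER_M**3 # same volume in cubic metres

print(volume_cm3)  # 1000000 cm^3
print(volume_m3)   # 1.0 m^3
```

The common mistake this guards against is dividing by 100 instead of 100³ when converting a cubic unit.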
This is the PDF of the volume of the rectangular prism worksheet for unit conversion.
Exploring Missing Dimension from Volume of Rectangular Prism
You can also find a missing dimension if you know the volume of a rectangular prism. Suppose you want to know the height of a prism whose volume is given. Then the equation will be,
Volume = Length ✕ Width ✕ Height
Height = Volume/(Length ✕ Width)
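The rearranged formula can be sketched as follows; the numbers are illustrative, not taken from the worksheet.

```python
# Finding a missing height from V = l x w x h, rearranged as h = V / (l x w).
volume = 120.0  # cubic units (given)
length = 6.0    # units
width  = 4.0    # units

height = volume / (length * width)
print(height)  # 5.0 units
```

The same rearrangement works for a missing length or width: divide the volume by the product of the two known sides.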
Now, practice from the PDF of the volume of a rectangular prism worksheet to master this skill.
Download Practice Worksheet PDF
This is the combination of all the worksheets that we have given for practice.
This is all about the volume of a rectangular prism worksheet. We have attached 13+ free pages in 5 interactive worksheets that will help students get the idea of calculating volume. Please download the free printable worksheets and share them with your students for practice.
Welcome to my blog. I am Fahim Shahriyar Dipto. Currently, I am working as a content developer in You Have Got This Math. I will post math-related articles here from now on. I have completed my bachelor’s degree in Mechanical Engineering from BUET. So, I am a mechanical graduate with a lot of interest in research and exploring new work. I always try to think of something creative and innovative. So, I choose to write creative content here. |
Drag • Drag is resistance to motion of an object through a fluid • If fluid is air, sometimes called air resistance • Drag with streamline, non-viscous flow depends on: • fluid density (ρ), cross-sectional area of object (A), speed of object relative to fluid (v), properties of object’s surface (C). • Cross-sectional area can be thought of as the area of the shadow the object would have, if lit from the direction of the passing fluid.
Drag • Depends on: • density of fluid (r), cross-sectional area of object presented to fluid (A), relative speed of object and fluid (v), properties of the object’s surface (C). • Direction: Always opposes relative motion of fluid and object • Note: This eqn doesn’t apply to viscous or turbulent flow
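The slides list the variables but this transcript drops the equation itself; the standard streamline-flow drag relation consistent with that list is D = ½CρAv². A sketch with illustrative values:

```python
def drag_force(C, rho, A, v):
    """Streamline drag: D = 1/2 * C * rho * A * v**2 (opposes relative motion)."""
    return 0.5 * C * rho * A * v**2

# Illustrative values: surface coefficient 0.5, air density 1.2 kg/m^3,
# cross-sectional area 0.5 m^2, relative speed 10 m/s
print(drag_force(0.5, 1.2, 0.5, 10.0))  # → 15.0
```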
Projectiles and drag • An object moving vertically does not have the same vertical motion as an object that is also moving sideways, even if the initial vertical velocity vyi is the same, when drag is not negligible • The drag force has a vertical component that depends on the total speed, not just vy
Relative Motion • Relativity means comparing quantities measured by one observer to the same quantities measured by another observer in a different reference frame • Classical Relativity is all we study in 201: familiar, and a very good approximation at speeds much less than 300,000 km/s
Setup Demo • A, B, and C start out together (same x) • Relative to Earth: A goes 3 units each 10 s, B goes 1 unit each 10 s, C stays at rest • Record the relative positions in the table below. Include signs!
Position DATA • Record the positions of A, B, and C at t = 0, t = 10 s, t = 20 s, and t = 30 s • Where is each person relative to you?
Relative Velocity • From the measured relative position as a function of time, find the relative velocities in the table below: vAB = , vBB = , vCB = , vAC = , vBC =
Patterns • Check your relative velocity answers to see if they fit the expected patterns: • a) vBB=0 No one moves relative to themselves • Does this match your answers? • b) vCB = -vBC If you move East relative to the cows, the cows move West relative to you • Does this match your answers ? • c) vAC = vAB + vBC Vector addition of velocities • Does this match your answers?
Vector Addition of Velocities • A, B, and C each stand for an object, person, … • Each term is the velocity of • 1st subscript • relative to 2nd. • Pretend you are 2nd subscript and looking at 1st. • Beware of order of subscripts! • Beware of direction (or sign) of vector.
Example • Red car is traveling at 60 mph in the positive direction relative to the road. Blue car is behind the red car traveling in the same direction at 65 mph. Find the velocity of the red car relative to the blue car. • A = ______, B = ________, C = ________ • vAC = vAB + vBC • vAB = ? • vBC = ? (Why negative?) • vAC = ?
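The car example can be worked numerically. The A/B/C assignment below is an assumption, since the slide leaves the blanks unfilled:

```python
# Using the slide's pattern v_AC = v_AB + v_BC, with the assignment:
#   A = red car, B = road, C = blue car
v_AB = 60     # red car relative to road (mph)
v_CB = 65     # blue car relative to road (mph)
v_BC = -v_CB  # road relative to blue car: flipping subscripts flips the sign
v_AC = v_AB + v_BC
print(v_AC)  # → -5 (the red car drops back at 5 mph relative to the blue car)
```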
Example 2 A bicyclist rides 30 mph on a day with a 15 mph wind. When she rides into a headwind, she experiences a drag force of 100 N. What drag force does she feel going the opposite direction? What drag force does she feel after a 90-degree turn? A= _____, B= _______, C= ________ vAC = vBA = vBC = ?
Example 2 When she rides into a headwind: A= _____, B= _______, C= ________ vAC = vBA = vBC = ? When she rides with tailwind: A= _____, B= _______, C= ________ vAC = vBA = vBC = ? Since D is proportional to v2, we can do a ratio: D1/ v12 = D2/ v22 , so D2=
Example 2 • A = bicyclist, B = air, C = earth • vAC = 30 j • vBC = 15 i • vAB = ? • vACx = vABx + vBCx: 0 = vABx + 15 mph, so vABx = -15 mph • vACy = vABy + vBCy: 30 mph = vABy + 0, so vABy = 30 mph • Since D is proportional to v², we can do a ratio: D1/v1² = D3/v3², so D3 = D1(v3/v1)²
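The final ratio step can be sketched numerically. Into the headwind the air speed relative to the rider is 30 + 15 = 45 mph (where the drag is 100 N); after the 90-degree turn the components give √(15² + 30²) ≈ 33.5 mph, and D ∝ v² scales the force:

```python
import math

D1 = 100.0                    # drag into the headwind (N)
v1 = 30.0 + 15.0              # air speed relative to rider, headwind case (mph)
v3 = math.hypot(15.0, 30.0)   # air speed relative to rider after the 90-degree turn (mph)

# Since D is proportional to v^2: D1/v1^2 = D3/v3^2
D3 = D1 * (v3 / v1) ** 2
print(round(D3, 1))  # → 55.6 (N)
```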
In 1602, the astronomer Johannes Kepler was working on a problem for his boss Tycho Brahe. He was continuing a centuries-old study trying to devise a formula that could calculate the orbits of the planets, specifically Mars in this case. He worked for years on the problem with the most advanced technology of the time, but was never satisfied with the results as they were never truly accurate or reliable.
To simplify his calculations, he hit upon the concept that a planet sweeps out equal areas of space (in a pie-slice shaped figure with the sun at the tip) in equal amounts of time, despite differences in the distance of the planet from the sun.
For the next three years he attempted to design the perfect equation to describe an orbit which would fit this and other observations he was using.
Finally, in 1605, he realized that he had the answer. His realization/discovery would firmly place him as one of the most famous scientists of all time.
What did he realize?
Ellipses Not Centered at the Origin
To find an equation for ellipses centered around another point, say (h, k), simply replace x with (x - h) and y with (y - k). This will shift all the points of the ellipse to the right h units (or left if h < 0) and up k units (or down if k < 0). So the general form for a horizontally- or vertically-oriented ellipse is:

(x - h)²/a² + (y - k)²/b² = 1
It is centered about the point (h, k). If a > b, the ellipse is horizontally oriented and has foci (h - c, k) and (h + c, k) on its horizontal major axis, where c = √(a² - b²). If b > a, it is vertically oriented and has foci (h, k - c) and (h, k + c) on its vertical major axis, where c = √(b² - a²).
Explain why subtracting h from the x term and k from the y term in the equation for an ellipse shifts the ellipse horizontally and vertically.
If (x₀, y₀) is a solution to

x²/a² + y²/b² = 1

then (x₀ + h, y₀ + k) is a solution to

(x - h)²/a² + (y - k)²/b² = 1

This produces a graph that is shifted horizontally by h and vertically by k.
Graph the equation .
We need to get the equation into the form of the general equation above. The first step is to group all the x terms and y terms, factor out the leading coefficients of x² and y², and move the constants to the other side of the equation:
Now, we “complete the square” by adding the appropriate terms to the x expressions and the y expressions to make each a perfect square.
Now we factor and divide by the coefficients to get:
And there we have it. Once it’s in this form, we see this is an ellipse centered around the point (-1, 2); it has a horizontal semi-major axis of length 3 and a vertical semi-minor axis of length 2, and from this we can make a sketch of the ellipse:
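The completing-the-square steps can be checked numerically. The sketch below uses a hypothetical equation, 4x² + 9y² + 8x - 36y + 4 = 0, chosen only to match the center (-1, 2) and the semi-axes 3 and 2 of the worked example:

```python
def complete_square(p, q):
    """Rewrite p*t**2 + q*t as p*(t + h)**2 - p*h**2, returning (h, p*h**2)."""
    h = q / (2 * p)
    return h, p * h * h

# 4x^2 + 9y^2 + 8x - 36y + 4 = 0  ->  grouped: 4(x^2 + 2x) + 9(y^2 - 4y) = -4
hx, cx = complete_square(4, 8)     # 4(x + 1)^2 - 4
hy, cy = complete_square(9, -36)   # 9(y - 2)^2 - 36
rhs = -4 + cx + cy                 # move the completed-square constants across

print((-hx, -hy))        # center → (-1.0, 2.0)
print(rhs / 4, rhs / 9)  # squared semi-axes → 9.0 4.0
```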
The National Statuary Hall in the United States Capitol Building is an example of an ellipse-shaped room, sometimes called an “echo room”, which provides an interesting application of a property of ellipses. If a person whispers very quietly at one of the foci, the sound echoes in a way such that a person at the other focus can often hear them very clearly. Rumor has it that John Quincy Adams took advantage of this property to eavesdrop on conversations in this room.
How do Echo Rooms work? What does the elliptical shape of the room have to do with it?
The property of ellipses that makes echo rooms work is called the “optical property.” So why echoes, if this is an optical property? Well, light rays and sound waves bounce around in similar ways. In particular, they both bounce off walls at equal angles: in the diagram below, the incoming and outgoing angles are equal.
For a curved wall, they bounce at equal angles to the tangent line at that point:
So the “optical property” of ellipses is that lines between a point on the ellipse and the two foci form equal angles to the tangent at that point; in other words, whispers coming from one focus bounce directly to the other focus. In the diagram below, the two angles are equal for each point on the ellipse.
Concept question wrap-up:
In the introduction, we considered the problem that astronomer Johannes Kepler was working on for his boss Tycho Brahe.
Kepler's realization was that, even though he had deliberately avoided them for a very long time because they were so simple, ellipses were the perfect shape to make all of his calculations come together.
When a planet orbits the sun (or when any object orbits any other), it takes an elliptical path and the sun lies at one of the two foci of the ellipse. Kepler's laws of planetary motion are accurate enough that modern computations based on them are still used to predict the motion of artificial satellites today.
An ellipse is a conic section (a "slice" of a cone) that can be defined as:
- 1) any finite "slice" of a cone which intersects both sides of the cone, specifically at a non-perpendicular angle to the vertical axis.
- 2) a circle which has been dilated (or “stretched”) in one direction.
- 3) the set of points in which the sum of distances to two special points called the foci is constant.
The major axis is the segment spanning an ellipse in the longest direction.
The minor axis is the segment spanning an ellipse in the shortest direction.
Foci are the two points that define an ellipse in the above definition.
Eccentricity is a measure of how “stretched out” an ellipse is. Formally, it is the distance between the two foci divided by the length of the major axis. The eccentricity ranges from 0 (a circle) up to, but not including, 1; ellipses with eccentricity close to 1 are very elongated.
1) Though planets take an elliptical path around the sun, these ellipses often have a very low eccentricity, meaning they are close to being circles. The diagram above exaggerates the elliptical shape of a planet’s orbit. The Earth’s orbit has an eccentricity of 0.0167. Its minimum distance from the sun is 146 million km. What is its maximum distance from the sun? If the sun’s diameter is 1.4 million kilometers, do both foci of the Earth’s orbit lie within the sun?
2) What is the sum of the distances to the foci of the points on a vertically-oriented ellipse?
3) Graph the ellipse: .
4) Try to graph the ellipse:
- What goes wrong? Explain what you think the graph of this equation might look like.
5) Graph the ellipse (plot points):
- What is different here? Explain what you think the graph of this equation might look like.
1) Recall that the eccentricity of an ellipse is e = c/a, where c is the distance from the center to a focus and a is the length of the semi-major axis.
- Assume that the orbit of the Earth is an ellipse centered at (0,0) with the sun at one focus. Then we can use the distance c from the center to a focus to set up the equations c/a = 0.0167 and a - c = 146. Solving, we get a ≈ 148.5, and the distance from (0,0) to each focus is c ≈ 2.5 (all units are in millions of km). Finally, the maximum distance from the earth to the sun is a + c ≈ 151 million km. From Kepler’s law, we know one of the foci of its orbit is at the center of the sun. The other focus is 2c ≈ 5 million kilometers away, so it is outside the sun (but not by very far, on the scale of the orbit!)
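The same setup can be computed directly (0.0167 and 146 million km are the values given in the problem):

```python
e = 0.0167     # eccentricity of Earth's orbit
r_min = 146.0  # minimum distance from the sun, millions of km

# e = c/a and r_min = a - c, so a(1 - e) = r_min
a = r_min / (1 - e)
c = e * a
print(round(a + c, 1))  # maximum distance, millions of km → 151.0
print(round(2 * c, 1))  # distance between foci → 5.0 (well over the sun's 1.4 diameter)
```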
3) To graph
- ..... complete the square and factor (see the previous lesson if you need to review)
- ..... divide both sides by 80
4) After completing the square, we have the sum of positive numbers equaling a negative number. This is an impossibility, so the equation has no solutions.
5) After completing the square, the x² term and the y² term have opposite signs. If you plot some points you will see that the graph has two disconnected sections. This class of conic sections will be discussed in the lesson on hyperbolas.
Graph the following more advanced ellipses.
Graph the following special-case ellipses.
- What do these ellipses have in common?
Answer the following word problems.
- While the elliptical paths of planets are ellipses that are closely approximated by circles, comets and asteroids often have orbits that are ellipses with very high eccentricity. Halley’s comet has an eccentricity of 0.967, and comes within 54.6 million miles of the sun at its closest point, or “perihelion”. What is the furthest point it reaches from the sun?
- Calculate the area of an ellipse with the equation (Hint: use a geometric argument starting with the area of a circle.)
- Design the largest possible echo room with the following constraints: You would like to spy on someone who will be 3 m from the tip of the ellipse. The room cannot be more than 100 m wide in any direction. How far from the person you’re spying on will you be standing?
- No matter what the orientation of a stick, if you trace out the path that the shadow of the tip makes on a flat surface, you will find it is an ellipse. Describe why this is true.
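The comet problem above can be checked with the same perihelion/aphelion relations (a sketch; 0.967 and 54.6 million miles are the exercise's givens):

```python
e = 0.967   # eccentricity of Halley's comet
r_p = 54.6  # perihelion distance, millions of miles

# perihelion r_p = a(1 - e), aphelion r_a = a(1 + e)
a = r_p / (1 - e)
r_a = a * (1 + e)
print(round(r_a))  # → 3254 (about 3.25 billion miles)
```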
Python Programming – Python Operator Overloading
In the previous chapter, we learned the fundamentals of OOPS, i.e., the creation of user-defined data-type classes and objects in Python. In this chapter, we will learn one of the significant features of object-oriented programming structures (OOPS): operator overloading. As implied by the name, operator overloading means assigning special meaning to an existing operator so that it performs some intended task. In other words, the same operator exhibiting different meanings as per the situation is called operator overloading. For example, the '+' operator is used to add two numbers, and the same operator can be used for merging two lists and concatenating two strings. It indicates that the same '+' operator can be used to perform different tasks based on the context in which it is being used.
Similarly, two-class objects can also be added by using the concept of operator overloading by using the similar syntax that is used for adding two integer numbers. In the following sections, the concept of operator overloading is discussed in detail.
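The context-dependent behavior of '+' described above can be seen directly:

```python
# The same '+' operator behaves differently depending on operand types
print(2 + 3)            # integer addition → 5
print("py" + "thon")    # string concatenation → python
print([1, 2] + [3, 4])  # list merging → [1, 2, 3, 4]
```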
Python Programming – Overloading Operator in Python
Like the '+' operator, the subtraction operator can also be overloaded. The programming code for this is given in Code 12.2. The code is very similar to the addition-operator overloading for adding up two objects c1 and c2. The only difference is that instead of the '+' operator, the '-' operator is used. The built-in Python method used for overloading '-' is __sub__(), which performs the subtraction of two objects and provides the appropriate result. Like the addition and subtraction operators, other arithmetic operators can also be overloaded. The list of all arithmetic operators with their built-in overloading methods is given in Table 12.1.
Code 12.2. Illustration of overloading the '-' operator in Python
#illustration of subtraction '-' operator overloading
Output: 5 + i8
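A minimal runnable sketch of the subtraction overloading described above; the class name, attribute names, and sample values are assumptions, chosen so the printed result matches the "5 + i8" style of output:

```python
class Complex:
    """A toy complex-number class whose '-' operator is overloaded."""

    def __init__(self, real, imag):
        self.real = real
        self.imag = imag

    def __sub__(self, other):
        # Called automatically when Python evaluates c1 - c2
        return Complex(self.real - other.real, self.imag - other.imag)

    def __str__(self):
        return f"{self.real} + i{self.imag}"

c1 = Complex(7, 10)
c2 = Complex(2, 2)
print(c1 - c2)  # → 5 + i8
```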
|Operator||Expression||Method invoked|
|Addition (+)||c1 + c2||c1.__add__(c2)|
|Subtraction (-)||c1 - c2||c1.__sub__(c2)|
|Multiplication (*)||c1 * c2||c1.__mul__(c2)|
|Power (**)||c1 ** c2||c1.__pow__(c2)|
|Division (/)||c1 / c2||c1.__truediv__(c2)|
|Floor Division (//)||c1 // c2||c1.__floordiv__(c2)|
|Remainder (modulo) (%)||c1 % c2||c1.__mod__(c2)|
In this chapter, we have learned one of the most exciting features of OOPS, known as operator overloading. Operator overloading refers to making the same operator perform different tasks: for example, the '+' operator, which is used to add two numbers, can be used to add two objects as well. All the arithmetic operators can be overloaded, and a programming illustration overloading the '-' operator is given. Apart from that, the bitwise operators and relational operators can also be overloaded. The Python language provides built-in methods for overloading operators. In Python, overloading an operator is simpler than in C++ or Java, thanks to these built-in methods. The list of all of these methods is given to support operator overloading.
Presentation on theme: "Measurement and Motion"— Presentation transcript:
1. Measurement and Motion • Acceleration in a Straight Line • Acceleration • A Model for Accelerated Motion • Free Fall and the Acceleration due to Gravity
2. Objectives • Calculate acceleration from the change in speed and the change in time. • Give an example of motion with constant acceleration. • Determine acceleration from the slope of the speed versus time graph. • Calculate time, distance, acceleration, or speed when given three of the four variables. • Solve two-step accelerated motion problems. • Calculate height, speed, or time of flight in free fall problems. • Explain how air resistance makes objects of different masses fall with different accelerations.
3. Vocabulary Terms • initial speed • acceleration • free fall • m/sec² • acceleration due to gravity (g) • time of flight • friction • air resistance • terminal speed • delta (Δ) • constant acceleration • uniform acceleration • slope • term
4. Acceleration • Key Question: How is the speed of the ball changing?
5. Acceleration of a car • Acceleration is the rate of change in the speed of an object.
6. Acceleration vs. Speed • Positive acceleration and positive speed
7. Acceleration vs. Speed • Negative acceleration and positive speed
8. Acceleration • a = Δv/Δt • change in speed (m/sec) divided by change in time (sec) gives acceleration (m/sec²)
9. Calculate Acceleration • A student conducts an acceleration experiment by coasting a bicycle down a steep hill. The student records the speed of the bicycle every second for five seconds. Calculate the acceleration of the bicycle. 1) You are asked for the acceleration. 2) You are given a table of speeds and times from an experiment. 3) a = (v2 - v1) ÷ (t2 - t1) 4) Choose any two pairs of speed and time data. a = (6 m/sec - 4 m/sec) ÷ (3 sec - 2 sec) = 2 m/sec². For this experiment, the acceleration would have been the same for any two points.
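The slide's calculation can be reproduced in a couple of lines, assuming the table gives 4 m/sec at t = 2 s and 6 m/sec at t = 3 s (the exact table is not shown in this transcript):

```python
def acceleration(v1, t1, v2, t2):
    """a = Δv / Δt, in m/sec² when v is in m/sec and t in seconds."""
    return (v2 - v1) / (t2 - t1)

print(acceleration(4, 2, 6, 3))  # → 2.0
```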
10. Acceleration and Speed • Constant acceleration is different from constant speed. • Motion with zero acceleration appears as a straight horizontal line on a speed versus time graph. • (zero acceleration, constant speed)
11. Acceleration and Speed • Constant acceleration is sometimes called uniform acceleration. • A ball rolling down a straight ramp has constant acceleration. • (constant acceleration, increasing speed)
12. Acceleration and Speed • An object can have acceleration, but no speed. • Consider a ball rolling up a ramp: as the ball slows down, eventually its speed becomes zero. • (constant negative acceleration, decreasing speed)
13. Slope and Acceleration • Use slope to recognize when there is acceleration in speed vs. time graphs. • Level sections (A) on the graph show an acceleration of zero. • The highest acceleration (B) is the steepest slope on the graph. • Sections that slope down (C) show negative acceleration (slowing down).
14. A Model for Accelerated Motion • Key Question: How do we describe and predict accelerated motion? • *Students read Section 4.2 AFTER Investigation 4.2
15. Slope of a graph • The slope of a graph is equal to the ratio of rise to run. • On the speed versus time graph, the rise and run have special meanings, as they did for the distance versus time graph. • The rise is the amount the speed changes. • The run is the amount the time changes.
16. Acceleration and slope • Acceleration is the change in speed over the change in time. • The slope of the speed versus time graph is the acceleration.
17. Calculate acceleration • The following graph shows the speed of a bicyclist going over a hill. Calculate the maximum acceleration of the cyclist and say when in the trip it occurred. 1) You are asked for the acceleration. 2) You are given a graph of speed versus time. 3) a = slope of graph 4) The steepest slope is between 60 and 70 seconds, when the speed goes from 2 to 9 m/sec. a = (9 m/sec - 2 m/sec) ÷ (10 sec) = 0.7 m/sec²
21. Calculate speed • A ball rolls at 2 m/sec onto a ramp. The angle of the ramp creates an acceleration of 0.75 m/sec². Calculate the speed of the ball 10 seconds after it reaches the ramp. 1) You are asked for the speed. 2) You are given an initial speed, an acceleration, and a time. 3) v = vo + at 4) v = 2 m/sec + (0.75 m/sec²)(10 sec) = 9.5 m/sec
22. Solving Motion Problems • x = xo + vot + 1/2at² • xo is the initial position, vot is the distance if at constant speed, and 1/2at² is the distance to add or subtract, depending on acceleration.
23. Calculate position • A ball traveling at 2 m/sec rolls onto a ramp that tilts upward. The angle of the ramp creates an acceleration of -0.5 m/sec². How far up the ramp does the ball get at its highest point? (HINT: The ball keeps rolling upward until its speed is zero.) 1) You are asked for distance. 2) You are given an initial speed and acceleration. You may assume an initial position of 0. 3) v = vo + at, x = xo + vot + 1/2at² 4) At the highest point the speed of the ball must be zero: 0 = 2 m/sec - 0.5t, so t = 4 seconds. Now use the time to calculate how far the ball went: x = (2 m/sec)(4 sec) - (0.5)(0.5 m/sec²)(4 sec)² = 4 meters. At its highest point, the ball has moved 4 meters up the ramp.
25. Calculate time • A car at rest accelerates at 6 m/sec². How long does it take to travel 440 meters (about a quarter-mile) and how fast is the car going at the end? 1) You are asked for time and speed. 2) You are given vo = 0, x = 440 m, and a = 6 m/sec²; assume xo = 0. 3) v = vo + at, x = xo + vot + 1/2at² 4) Since xo = vo = 0, the position equation reduces to x = 1/2at². 440 m = (0.5)(6 m/sec²)t², so t² = 440 ÷ 3 = 146.7 and t = 12.1 seconds. Now use the time to calculate the speed: v = (6 m/sec²)(12.1 sec) = 72.6 m/sec. This is 162 miles per hour.
26. Calculate position • A ball starts to roll down a ramp with zero initial speed. After one second, the speed of the ball is 2 m/sec. How long does the ramp need to be so that the ball can roll for 3 seconds before reaching the end? 1) You are asked to find position (the length of the ramp). 2) You are given vo = 0, v = 2 m/sec at t = 1 sec, t = 3 sec at the bottom of the ramp, and you may assume xo = 0. 3) After canceling terms with zeros, v = at and x = 1/2at². 4) This is a two-step problem. First, you need the acceleration; then you can use the position formula to find the length of the ramp. a = v ÷ t = (2 m/sec) ÷ (1 sec) = 2 m/sec². x = 1/2at² = (0.5)(2 m/sec²)(3 sec)² = 9 meters
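Both steps of the ramp problem above can be checked in a few lines:

```python
# Step 1: acceleration from v = a*t (ball starts at rest, 2 m/sec after 1 s)
a = 2.0 / 1.0  # → 2 m/sec²

# Step 2: ramp length from x = 1/2 * a * t**2 with t = 3 s
x = 0.5 * a * 3.0 ** 2
print(x)  # → 9.0 (meters)
```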
29. Calculate height • A stone is dropped down a well and it takes 1.6 seconds to reach the bottom. How deep is the well? You may assume the initial speed of the stone is zero. 1) You are asked for distance. 2) You are given an initial speed and a time of flight. 3) v = vo - gt, y = yo + vot - 1/2gt² 4) Since yo = vo = 0, the height equation reduces to y = -1/2gt². y = -(0.5)(9.8 m/sec²)(1.6 sec)² = -12.5 meters. The negative sign indicates the height is lower than the initial height by 12.5 m.
30. Air Resistance and Mass • The acceleration due to gravity does not depend on the mass of the object which is falling. • Air creates friction that resists the motion of objects moving through it. • All of the formulas and examples discussed in this section are exact only in a vacuum (no air).
31. Terminal Speed • You may safely assume that a = g = 9.8 m/sec² for speeds up to several meters per second. • The resistance from air friction increases as a falling object’s speed increases. • Eventually, the rate of acceleration is reduced to zero and the object falls with constant speed. • The maximum speed at which an object falls when limited by air friction is called the terminal speed.
32. Free Fall and Acceleration due to Gravity • Key Question: How do you measure the acceleration of a falling object? • *Students read Section 4.3 BEFORE Investigation 4.3
What is the Scientific Method?
Lesson 5 of 16
Objective: SWBAT identify the steps of the scientific method and place them in the proper order.
The Why Behind Teaching It:
Science comes to life through hands-on experiments and investigations. It is possible to teach the majority of the standards through experimentation, which makes learning fun for students. The scientific method is directly linked to standard 3-5-ETS1-3, which requires students to plan and carry out fair tests in which variables are controlled. The process will also be used with just about every other set of standards throughout the year.
Goal of Lesson:
The goal of today’s lesson is to provide students with knowledge about the steps scientists follow to conduct an experiment and for students to be able to identify these steps in real world situations.
Criteria for Success:
Students will demonstrate understanding of the scientific method by identifying the steps in a real experiment on the exit slip provided.
Preparation for Lesson:
- Copy the scientific method foldable for each student. The foldable is copied and folded so that the printed words are on the inside. It is precut so that there are six flaps on the front, three on each side. I always precut foldables for the students to save time.
- I create a blank anchor chart with a heading and a large version of the foldable above taped on the front.
- A large sheet of construction paper, trifolded.
- Seven laminated experiment strips with magnetic strips on the back. I copy the Experiment guided practice strips, cut them smaller, glue them onto construction paper, and then laminate them and add the magnetic strips.
- Five small trifold boards already set up with 14 pieces of Velcro on each, one board for each group. There are Velcro strips in place for the headings that match those inside the foldable, and a piece of Velcro below each of those spots for the experiment strips.
- Five sets of the experiment headings, experiment practice strips set 1, experiment practice strips set 2, and experiment practice strips set 3 copied and laminated with the connecting side of the Velcro on the back of each of these. I laminate everything so it can be reused year after year, as well as to ensure they don’t rip when students are removing them.
- A copy of the Time For Kids Article "Testing the Five Second Rule" for each student.
- An applying the scientific method exit slip for each student
Anchor Chart and Foldable to Introduce Steps:
I begin today’s lesson by passing out a scientific method foldable to each student. The foldable is prepared as described in the preparation section above.
I have a blank anchor chart already started with a heading and a larger version of the foldable created for them, taped on the front.
I begin the anchor chart by labeling the first flap “Step 1: Ask a Testable Question”. Students already know that anything I record on the anchor chart they are copying onto their foldable. We discuss what the word testable means. I give a couple examples and ask if each is an example or nonexample of a testable question. The examples I use are:
1. Which color of rose smells the best?
2. Which color of rose attracts the most bees?
3. Are fifth grade girls or fifth grade boys taller?
4. Does running in place for 30 seconds increase heart rate?
After our short discussion, I fill in the second flap with “Step 2: Form a Hypothesis”. I ask what a hypothesis is and many students are able to tell me. I inform them that they should be writing their hypothesis in a cause and effect statement. In fifth grade we use the phrase “If I ______________, then I hypothesize___________ will happen.”
I move on to write “Step 3: Plan Experiment” in the third flap. I explain that this is the most important step to allow other people to repeat your experiment. Your plan must include a detailed list of materials, and a detailed list of steps, or procedure, you will follow to conduct the experiment. I make sure students are aware that they want other people to be able to repeat their experiment, by following their steps exactly as they completed them. The other person should get the same results from following these steps.
The fourth flap is labeled with “Step 4: Conduct Experiment and Record Data”. I remind students of the importance of organizing their data so that it is easy to analyze. Data charts are used to help organize our work.
“Step 5: Analyze Data and Draw Conclusions” is written in the fifth flap. We discuss what the word analyze means as well as the word conclusion. We decide together that this step means to look at the data and see what happened. I inform them that they should be restating their hypothesis and telling what evidence they collected to support it.
The final flap is labeled with “Step 6: Communicate Results”. I tell students we will be communicating our results through graphs and will be working on graphing tomorrow.
On the completed anchor chart there are pictures next to each flap of the foldable related to our discussion. I added these as we discussed each step. This can be done later if you do not feel comfortable drawing quick pictures.
With our anchor chart and front of the foldable complete, it is time to practice identifying these steps. I have a large sheet of construction paper, trifolded like the foldable, but it is not cut to make six flaps, hanging on the whiteboard. There are also seven magnetic strips, (Experiment guided practice strips), on the front whiteboard, each with one of the steps completed for a real experiment. The steps are all out of order and are not labeled.
I ask students to identify which of the strips would fit step one. I do not ask which shows a question because it makes them think about what step one is, then find the strip that matches. When students identify the question correctly, I have them come move it to the correct location on the large sheet of construction paper (inside, top left). I instruct students to open the first flap of their foldable which is labeled question, and they will see the heading “Question” already there for them. I ask what makes a good question for an experiment. This is checking for understanding from the warm up as well as allowing students who need that repetition to hear it, and see it, again. When a student answers that it has to be testable, I record on the inside flap of the anchor chart and they record in the same location on their foldable.
Next, I ask which magnetic strip is step two. A student selects the correct one, and comes up to move it to the large sheet of construction paper in the correct place (inside, left, middle). Again I ask for clarification on how a hypothesis should be written, and we open our foldables to record this information.
We continue the same process for steps 3 – 6.
Students glue their foldable into their science notebooks, and close their science notebooks when they are finished so I know they are ready for today’s activity.
While students are gluing in their foldables, I pass out one trifold board (prepared with Velcro as indicated in the preparation section) and a set of experiment headings labels to each group. As I pass them out, I tell students not to touch them until I give them directions.
When I see all students are finished gluing, and have their notebooks closed, I explain that they will be organizing some experiment strips for real experiments just as we did for the guided practice activity. I make this activity a race against all other groups, two points will be awarded for the first group to get it correct, and one point for each group that gets it correct after. I instruct them to raise quiet hands when they have finished and I will come over to check their work.
The first thing I have them organize are the headings for each step. I ask them to put these in the correct location on their boards. Students all start at the same time and I circulate while they work. When hands begin going up, I mark on the board which team was 1st, 2nd, 3rd, and so on. I have found that if I try to check the first team and they miss it, I usually don’t know which group’s hands went up 2nd or 3rd and it leads to arguments so I just wait to check until all groups are finished. I check the first team by holding up their board and reading off where they have each label. They have it correct and get two points. I circulate to quickly check all other groups, and award each one that got it correct with one point.
I have students leave the headings on the board for the next 3 races. Next, I pass out the experiment practice strips set 1 to each group. When all groups have a set, I tell them to read through each one, figure out which step of the process it is, and place it in the correct position on the board. I remind them to raise quiet hands when they finish. After giving the ok to begin, I circulate until I see hands going up, at which point I begin noting the order on the board. After all groups finish, I hold up the board from the first group finished and read off each step to check for correctness. After checking, I circulate to quickly check the other groups. As I am checking other groups I am having students give me some information about key words, or other items that helped them identify the steps quickly. They said the if, then statement in the hypothesis, the question mark at the end of step one, and the phrase "after conducting" in the conclusion. While I award points on the board, I ask groups to remove the experiment practice strips and shuffle them. I remind them not to remove the headings.
Connecting It to the Real World Through Reading
After the last race, students remove both the experiment practice strips, and the headings from their boards. The person I call on from each group brings those items, along with their board to the front table. While they return those materials, I ask students how many of them have ever heard of the five second rule for dropping food on the ground. Just about every hand in the room goes up. I ask how many of them believe that rule to be true. I pass out a Time For Kids Article "Testing the Five Second Rule" and explain that scientists actually tested the rule to find the answer.
I explain that this is a great real world situation where the scientific method was used. I pass out an applying the scientific method exit slip to each student. They will read the short article, and then answer the questions on the exit slip as they appear in the article. After they complete the exit slip, they turn it in to the basket. I can use this to check for individual understanding of the steps, as well as their ability to apply that knowledge to various real world situations.
All students are encouraged to use their reading strategies, such as underlining information, jotting down notes in the margin, etc. I do have several students who are reading below grade level, and although they are tested in reading on a fifth grade level, they have reading as an accommodation for science on their IEPs. Any student with this accommodation is pulled to my small group table so that I can read the passage for them. Pulling them to the side eliminates distraction for the other students and allows me to face the rest of the class, so I can scan the room for talking and be available if others have questions.
The GDP deflator is a measure of the level of price inflation or deflation and is used to adjust nominal GDP.
The GDP deflator is a number that represents the current prices of various goods and services relative to their prices in a base year. It is used to measure the level of price changes over time relative to that base year. This allows economists to measure and track inflation or deflation. If current prices are used to measure GDP, true economic output can be over- or understated. The GDP deflator compensates for changing price levels over time (inflation or deflation). This measure is an implicit index of the price level that allows economists to measure real changes in economic output. Economists also use it to convert nominal GDP in any given year into real GDP. Economists analyze changes in real GDP to determine the growth of output in an economy. Real GDP measures the total value produced using constant prices, isolating the effect of price changes. As a result, real GDP is a better gauge of changes in the output level of an economy.
Change in GDP
The GDP deflator is a tool used to measure the level of price changes over time so that current prices can be accurately compared to historical prices. If the nominal GDP and the GDP deflator are both known quantities but real GDP is not, the following formula can be used to solve for real GDP: Real GDP = (Nominal GDP ÷ GDP Deflator) × 100.
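The conversion described above can be sketched as a short Python function. This is a minimal illustration of the standard formula, not part of any particular library; the function name and example figures are made up for demonstration.

```python
def real_gdp(nominal_gdp: float, deflator: float) -> float:
    """Convert nominal GDP to real GDP using the GDP deflator.

    The deflator is expressed with the base year set to 100, so
    dividing nominal GDP by (deflator / 100) removes the effect
    of price changes since the base year.
    """
    return nominal_gdp / (deflator / 100)

# Hypothetical example: $21 trillion of nominal GDP with a
# deflator of 105 implies $20 trillion at base-year prices,
# because prices have risen 5% since the base year.
print(round(real_gdp(21.0, 105.0), 2))  # 20.0
```

Note that in the base year itself the deflator is 100, so nominal and real GDP coincide, which is a quick sanity check on the formula.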
About This Chapter
How it works:
- Identify which concepts are covered on your absolute value homework.
- Find videos on those topics within this chapter.
- Watch fun videos, pausing and reviewing as needed.
- Complete sample problems and get instant feedback.
- Finish your absolute value homework with ease!
Topics from your homework you'll be able to complete:
- Understanding absolute value
- Evaluating absolute value expressions
- Solving absolute value equations
- Graphing absolute values
- Performing transformations
- Graphing dilations and reflections
1. What is an Absolute Value?
When we're talking and comparing numbers, we often don't care whether it's positive or negative, just how big it is. This is often called the magnitude of a number, and we find it by taking the absolute value. Learn all about it here!
2. How to Evaluate Absolute Value Expressions
Substituting values into absolute values doesn't have to be too hard, but it can be if you're given misleading starting information. See if you're up to it by checking out this video!
3. How to Solve an Absolute Value Equation
Once you get familiar with any new operation, the next step in any algebra class is to learn how to solve equations with that operation in them. Absolute values are no different. Solve absolute value equations here!
4. Solving Absolute Value Practice Problems
There are many easy mistakes to make when solving absolute value equations. Learn how to avoid those mistakes here by working on examples of absolute value equations with operations on the inside and the outside of the absolute value.
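The case-splitting idea behind solving these equations can be sketched in a few lines of Python. This is an illustrative helper (the function name and signature are my own, not from the lessons above): an equation |ax + b| = c splits into ax + b = c and ax + b = −c, and has no solution when c is negative.

```python
def solve_abs_equation(a: float, b: float, c: float) -> list:
    """Solve |a*x + b| = c for x, returning all real solutions.

    The expression inside the bars must equal c or -c; if c is
    negative there is no solution, since an absolute value is
    never negative.
    """
    if c < 0:
        return []  # |anything| >= 0, so no x can work
    # The two cases a*x + b = c and a*x + b = -c; a set removes
    # the duplicate root that appears when c == 0.
    solutions = {(c - b) / a, (-c - b) / a}
    return sorted(solutions)

# |2x - 4| = 6  ->  2x - 4 = 6 or 2x - 4 = -6  ->  x = 5 or x = -1
print(solve_abs_equation(2, -4, 6))  # [-1.0, 5.0]
```

The `c < 0` guard is exactly the "easy mistake" the practice problems warn about: isolating the absolute value first can reveal that the equation has no solution at all.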
5. How to Graph an Absolute Value and Do Transformations
Absolute value graphs normally look like the letter 'V', but transformations can change that 'V' in a number of different ways. As well as teaching you how to graph absolute values, this video will focus on a specific group of transformations called translations. Learn all about what that means here!
6. Graphing Absolute Value Equations: Dilations & Reflections
Although a basic absolute value graph isn't complicated, transformations can make them sufficiently confusing! In this lesson, you'll practice different transformations of absolute value graphs.
Earning College Credit
Did you know… We have over 160 college courses that prepare you to earn credit by exam that is accepted by over 1,500 colleges and universities. You can test out of the first two years of college and save thousands off your degree. Anyone can earn credit-by-exam regardless of age or education level.
To learn more, visit our Earning Credit Page
Transferring credit to the school of your choice
Not sure what college you want to attend yet? Study.com has thousands of articles about every imaginable degree, area of study and career path that can help you find the school that's right for you.
Other chapters within the High School Algebra II: Homework Help Resource course
- Algebra II - Basic Arithmetic Review: Homework Help
- Homework Help for Algebraic Expressions and Equations - Algebra II
- Algebra II - Real Numbers: Homework Help
- Algebra II - Complex and Imaginary Numbers Review: Homework Help
- Algebra II Homework Help: Exponents & Exponential Expressions
- Algebra II - Properties of Functions Review: Homework Help
- Algebra II - Linear Equations Review: Homework Help
- Algebra II - Systems of Linear Equations: Homework Help
- Algebra II - Inequalities Review: Homework Help
- Algebra II - Matrices and Determinants: Homework Help
- Algebra II - Polynomials: Homework Help
- Algebra II - Factoring: Homework Help
- Algebra II Homework Help: Quadratic Equations
- Algebra II - Rational Expressions: Homework Help
- Algebra II - Graphing and Functions: Homework Help
- Algebra II - Roots and Radical Expressions Review: Homework Help
- Algebra II - Quadratic Equations: Homework Help
- Algebra II - Exponential and Logarithmic Functions: Homework Help
- Algebra II - Conic Sections: Homework Help
- Algebra II - Sequences and Series: Homework Help
- Algebra II - Sets: Homework Help
- Algebra II - Combinatorics: Homework Help
- Algebra II Percents & Proportions: Homework Help
- Algebra II - Statistics: Homework Help
- Algebra II - Trigonometry: Homework Help
Whenever something important breaks on the International Space Station (ISS), NASA has to ensure it is replaced. This normally requires it to be sourced from the spares supply on Earth, or it has to be fabricated from scratch. Then the spare part has to be shipped up to the ISS on the next supply mission. The whole process can take weeks. So NASA has come up with an idea to speed up the manufacturing and supply of spare parts – make them ABOARD the ISS. With a 3D printer.
NASA has contracted with technology start-up Made in Space to build the printer. “Imagine an astronaut needing to make a life-or-death repair on the International Space Station,” said Aaron Kemmer, the company’s chief executive. “Rather than hoping that the necessary parts and tools are on the station already, what if the parts could be 3D printed when they needed them?”
“If you want to be adaptable, you have to be able to design and manufacture on the fly, and that’s where 3D printing in space comes in,” said Dave Korsmeyer, director of engineering at NASA’s Ames Research Center.
3D printing technology has come along in leaps and bounds since the concept was first demonstrated. Commercial sales of 3D printers have grown considerably, and some retail for very low prices. Industrial 3D printers cost many thousands of dollars and produce amazingly good items. However, all of these printers were constructed to operate (unsurprisingly) on planet Earth.
NASA intends to send a microwave-sized 3D printer to the ISS in 2014. Both NASA and Made in Space have some challenges to resolve if NASA wants to ship a printer that works successfully.
Launch into orbit – If anyone has ever watched the launch of a large rocket, he or she will be aware that it is the most dangerous part of the mission. The rocket accelerates quickly, producing high G-forces in the payload area, and there can be considerable shaking. Anything sent to the ISS – people, experiments, a new printer – has to be capable of surviving the launch into orbit.
Lack of gravity – 3D printing was developed on the surface of the Earth where gravity rules. In space, inside the ISS, there is no noticeable gravity. The 3D printer has to be designed so that it does not depend upon gravity for positioning its mechanism or the work-in-progress.
Materials used for producing parts – Most 3D printing uses one of several types of plastic supplied on a spool. The printer unspools the plastic and melts it to form a sequence of very thin layers on the printing station. However, the requirements on the ISS are that metal parts have to be manufactured. Laser-melted titanium and nickel-chromium powders are to be used to build strong components.
So this 3D printer, the size of a microwave oven, will be able to produce useful tools and vital spare parts. Instead of waiting weeks for a spare, the ISS inhabitants will be able to print one in minutes (according to NASA).
Table and chart are two common ways of representing the given data to have a broad understanding and better analysis. These two are used in almost all fields of study and are of utmost importance for some professions.
A table is a way of presenting data in the form of rows and columns, and a chart is a way of presenting data in varied forms to make it comprehensible.
Tables mainly consist of rows and columns where data is presented with the help of detailed text, whereas charts use minimal data to explain a concept. Tables are usually formal structures used for data analysis, whereas charts are used to make concepts easier to grasp.
Table vs Chart
The main difference between a table and a chart is that a table displays data in the form of rows and columns, whereas a chart is a graphical representation of data in varied forms. Charts can be of different types such as pie charts, flow charts, line charts, etc.
It should be noted that tables represent the data itself, whereas charts help to explain larger concepts and the data in an easier way.
Comparison Table Between Table and Chart (in Tabular Form)
|Parameters of comparison||Table||Chart|
|Definition||A table is a method of representing data in the form of rows and columns to get a quick overview of the data.||A chart is a method of representing data in varied forms to make concepts comprehensible.|
|Use of symbols||A table doesn’t make use of symbols because it depicts the details of the data in a formal structure.||A chart makes use of different symbols such as slices, lines, bars, etc to depict the concepts in a better way.|
|Representation||The content is represented in the form of rows and columns. It is predominantly used to present data in the form of numbers, quantities, names, etc.||The content is presented in the form of pie charts, flowcharts, line charts, bars, etc to explain the larger concepts of the data that is presented. It is used to make data understandable.|
|Uses||The table is used to make data brief and quick to overview.||Charts make the given data or the concept comprehensible.|
|Types||The table doesn’t have different forms. It is a simple representation in the form of rows and columns.||Charts can be of different types such as pie charts, bar charts, line charts, flow charts, etc.|
What is Table?
A table is a method of presenting data in the form of rows and columns. It is mostly used as per the convenience to make the data apprehensible and brief.
The detailed data can be organized in the structured form to make it quick for an overview.
While representing data in the form of a table, text and symbols are not of primary importance. It is used to present data such as numbers, names, addresses, quantities, etc.
Some of the key aspects of a table are as follows:
- Tables are the most common way of presenting data in various fields such as research, experiments, formal settings, etc.
- They are used to scale down the large collection of data into an organized and structured form.
- Tables make use of the rows and columns to make the details comprehensible.
- Some of the examples of the tables that we witness around us every day are the periodic table of elements, table of contents, table of calculation, etc.
What is Chart?
A chart is the graphical representation of the data with the help of text and symbols to make the concepts clear. Charts are used in various fields of study to make the data organized.
It gives a detailed analysis and a clear understanding of the data presented.
Charts are of different types such as pie charts, flowcharts, line charts, etc. Charts are used to develop a relationship with the data presented.
At times they are used to compare and contrast different things.
Some of the key aspects of a chart are as follows:
- The chart makes use of the symbols to present data.
- They are used to make the given data comprehensible by developing a relationship between the set of concepts.
- The chart provides an understanding of the given information with the help of illustrations
Main Differences Between Table and Chart
- The table is the representation of the given data in the form of rows and columns whereas charts represent data in different structures.
- The table doesn’t use symbols to present data whereas charts use symbols to present data for better understanding.
- Charts are of different types such as pie charts, flow charts, etc whereas the table uses the same structure of rows and columns everywhere.
- The table is used in various fields formally to provide a brief account out of a large collection of data, whereas a chart is used to explain concepts in an elaborate manner.
- The basic content used in a table is that of quantities, numbers, names, etc., whereas charts are used to develop and represent relationships among the data.
Table and chart are two ways of depicting different kinds of data. The table is used to provide an organized and brief understanding of the data whereas the chart is used to present the relationship among the contents of the data.
The chart gives an elaborate analysis of the content that can help to develop comprehensible understanding.
A table uses rows and columns to depict data, whereas a chart can take different forms. Tables that we come across in our daily routine are the table of contents, periodic table of elements, table of names, etc., whereas some of the common charts are the pie chart, flow chart, line chart, etc.
The table doesn’t make use of symbols whereas pie charts use significant symbols to explain the context in a better way.
Chapter 5 : Indigenous Ways of Knowing
Inquiry 1: Indigenous Perspectives Elder Knowledge
- Provocations – Book
- Question Generation – 5W’s and H questions, Creative Question Starts, Ask Questions template
- Knowledge Building– Community Expert – Elder visit, Umbrella Questions
- Determining Understanding – KHWLAQ chart, Knowledge Building Circle, Video, Talking Circle
- Pursuing Learning – Natural Inquirer, Arctic Survivor, Climate Connections, Observing Change
- Consolidation – Doodling/Sketching, Consolidation Discussion, Think-Pair-Share
- Assessment – Tableau Assessment Suggestions
- Take Action – Action Project Suggestions
Begin the inquiry by offering a land acknowledgment and discussing why we acknowledge the land. It is essential to teach students that we must recognize the Indigenous land that the school is on to learn about and from it.
As educators, recognizing that these lands are the traditional territories of Indigenous people and that all Canadians benefit from the land plays an essential role in modelling reconciliatory behaviour with your students. Reciting your school’s land acknowledgement helps create a foundation in students for learning about and from Indigenous people whose land we live on.
A land acknowledgement reinforces that we benefit from the land, and we all have a responsibility to actively work towards honouring Indigenous Peoples as equal partners in sharing the land. Land acknowledgments are only one step in cultivating greater respect for and inclusion of Indigenous Peoples, with the understanding of the importance of our Treaty responsibilities.
Chapter 5 Indigenous Ways of Knowing recognizes the importance of Indigenous perspectives and connections to land and place as we work towards reconciliation to address the Calls to Action of the Truth and Reconciliation Commission, particularly the call to "integrate Indigenous knowledge and teaching methods into classrooms" (clause 62) and "build student capacity for intercultural understanding, empathy and mutual respect" (clause 63).
Sharing stories is a way of sharing knowledge among Indigenous communities. Your classroom materials should be culturally diverse and inclusive of Canada’s three distinct Indigenous groups. Here are a few examples of children’s books that illustrate the importance of learning from our Elders and include the three distinct Indigenous groups.
- The Elders are Watching by David Bouchard and Roy Henry Vickers (Métis)
- Nimoshom and His Bus by Penny M. Thomas (First Nations Cree), illustrated by Karen Hibbarb
- Nokum is My Teacher by David Bouchard, illustrated by Allen Sapp (Métis)
- Oral Traditions and Storytelling by Anita Yasuda (First Nations)
- The Tree by the Woodpile by Raymond Yakeleya , Jane Modeste (First Nations Dene)
- Jigging for Halibut with Tsinii by Robert and Sara Davidson, illustrated by Janine Gibbons (First Nations Haida)
- Making a Whole Person: Traditional Inuit Education by Monica Ittusardjuat (Inuit)
- Fishing with Grandma by Maren Vsetula and Susan Avingaq (Inuit), illustrated by Charlene Chua
- A Walk on the Tundra by Rebecca Hainnu and Anna Ziegler (Inuit), illustrated by Qin Leng
- Siha Tooskin Know the Nature of Life by Charlene and Wilson Bearhead, illustrated by Chloe Bluebird Mustooch (First Nations Nakota)
- Sila and the Land by Shelby Angalik, Araian Roundpoint and Lindsay Dupré, illustrated by Halie Finney (First Nations, Métis and Inuit)
Teaching and discussing controversial and sensitive topics is essential because it helps students think in-depth and fosters critical thinking. Many issues involving First Nation, Métis and Inuit peoples are controversial (land claims, self-government, blockades, hunting and fishing rights) or sensitive (residential schools, worldview). Building in and addressing controversial or sensitive topics at an early age allows students to explore and question in the safety of the classroom. Teachers may use some of the suggested questions in this inquiry to introduce more sensitive issues regarding the inequalities faced by Indigenous People. Please keep in mind that Acts of Reconciliation and Reclamation are fundamental as we move forward as a country. Our acknowledgement, and inclusion of Indigenous literature and media helps to create an understanding of the history, diversity, and issues that many Indigenous peoples face.
It would be helpful for learners to understand that traditional/cultural knowledge is passed on through an "I Do, We Do, You Do" model. This mentorship model involves close watching and coaching of the learner by the teacher, which aids in learning from mistakes as well as in identifying areas of strength and need for reflection. It helps the person who is learning understand how knowledge is passed on, and to connect with the sacredness of our relationship with Creator, Mother Earth, the plants, animals, and all other animate and inanimate beings as part of the Creator's making. (Daniel Sylvestre)
To hook student interest, use the following provocation to initiate student thinking.
As Native elders have advised from time immemorial, this is a gentle plea to respect the natural environment. A plea to respect the natural treasures of our environment and a message of concern from Indigenous leaders of the past to the people of the new millennium, The Elders Are Watching has both a timelessness and an urgency that must be heard.
Vickers, Roy Henry, Cover Illustrations, The Elders are Watching by Bouchard, David, Raincoast Books, 2003
As you read the book, help students become aware of the knowledge, information and guidance older people such as Elders, Knowledge Keepers, grandparents, teachers, uncles, aunts, or mentors can offer. Students should be made aware that one must earn the right to become an Elder or Knowledge Keeper in a First Nations community. Not all Elders or Knowledge Keepers are seniors, nor are all old people Elders, and some Elders are younger. Elders or Knowledge Keepers are honoured because they have gifts of insight and understanding and are willing to share their knowledge. Discuss the role Elders or Knowledge Keepers play in Indigenous communities, provide picture books and other media that illustrate the connection Indigenous People have with the land to enhance the learning.
- What do you think the author means by The Elders are Watching?
- What is an Elder or Knowledge Keeper? And what are they watching in this book?
- Elders are often considered wise and share their Indigenous Knowledge, can you explain why?
- Identify one of the messages that the author is trying to portray in his book.
- The last visual of the online book identifies different Indigenous People. Name the three distinct Indigenous groups in Canada.*
- Do all Indigenous People share the same traditions and knowledge*? In what ways do Indigenous peoples continue to pass on traditional knowledge from generation to generation?
- Why is it important to hear the views and stories of other people?
- Who do you have in your life that you would consider an Elder?
- How do you show respect to your parents or other adults? How do you think respect is shown in Indigenous cultures? Why do people not always respect Indigenous knowledge?
* Cultural diversity within Indigenous peoples is frequently misinterpreted. There is a misconception that Indigenous People are one group who share the same culture, traditions, language and knowledge. Take the time to identify the three distinct Indigenous groups—First Nations, Métis and Inuit—and their unique connections to the land. Understand that these three distinct groups are identified by the Federal Government, that each Indigenous group on Turtle Island is distinct, and that they all have their own distinct culture, traditions, language, governance, education, laws, customs, and ways of knowing. A small step students can take in respecting Indigenous people and their culture is learning the three Indigenous groups and their unique traditions and knowledge.
At this point in the inquiry, we want to harness students’ curiosity and build off of the provocations that have captured their interest by generating meaningful questions to continue to drive the learning process. This section will outline several pathways for question generation depending on the provocation(s) that your class engaged with.
- 5W’s and H Questions – Students will be able to ask and answer questions using the five Ws and an H (who, what, when, where, why, and how) to show understanding of key details in a text.
- Lead a whole-group discussion and brainstorm around the book’s theme, with the goal of students generating questions about the role of Elders, their Indigenous ways of knowing and the message they are sharing with the readers.
- With younger students, review the pictures in the book and have them think about what questions they would ask. Use the Creative Question Starts thinking routine to help students generate a list of interesting questions.
- Older students can work independently using the Ask Questions template to help develop questions that provoke thinking and inquiry
- What do we need to know about the land to live on it? What do Indigenous People teach us about the land?
- How can knowledge from Elders help scientists study climate change?
- How can we apply the Elders’ understandings of sustainability to reduce the effects of climate change?
- What can we learn from Elders to help us live sustainably in the face of climate change?
- What messages are Elders trying to share? What changes have Elders seen in life on the land?
- Research different ways Indigenous people have used their knowledge of living things to meet their own needs.
- How has the weather affected the Elders’ community?
- What are some of the changes in birds, animals and insects in your community and in others?
- How have the weather patterns changed in the community?
- Can you identify some other pressing environmental issues that are currently taking place in Canada? (Pipelines, clean drinking water in Northern communities)
At this stage, students may be ready to engage in a group knowledge-building activity. It will encourage students to open their minds to many alternative ways of thinking about the provocations and the ideas generated thus far in the inquiry process.
Community Expert – Sharing knowledge and storytelling is an integral part of Indigenous culture, and a visit from an Elder is an excellent way to bring this experience to students. Indigenous Elders, Knowledge Keepers and Cultural Advisors play a central role in Indigenous communities; they are teachers within and beyond their communities. Elders, Knowledge Keepers and Cultural Advisors are not self-taught individuals. They have been gifted with their respective teachings by other Elders or Knowledge Keepers, typically over years of mentorship and teaching.
- Connect with your school’s Indigenous Education department to speak to an Indigenous education specialist and enquire about available education or cultural programs. Also ask who you can bring into your classroom/school for the curricular concepts that you feel need connections to Indigenous ways of knowing, to enhance inquiry into environmental sustainability and relationships with Mother Earth.
- Observe appropriate protocols and acknowledgements when including elders and knowledge keepers in your school/classroom.
- Plan a field trip that fosters a greater understanding of Indigenous Ways of Knowing.
In Indigenous cultures, the Elder is highly regarded as a role model in their community and is considered the keeper of knowledge. A gift must be prepared by the person requesting the visit and offered to the Elder at the time of the request. See more information regarding Elder Wisdom in the Classroom
Umbrella Questions – Brainstorm some umbrella questions with your students. An umbrella question is developed to help ground the inquiry. The question should be focused – it’s not aiming to answer all aspects of an issue. The question should be of interest to the students and also connect to the topic of the inquiry.
- How can the knowledge that the Elders share in the story help us learn about climate change?
- How can we apply the Elders’ understandings of sustainability to reduce the effects of climate change?
- What are the Elders observing and learning from the changing seasons?
- What wisdom and warnings are the Elders sharing regarding ways people are abusing the land and resources?
- What type of knowledge did Elders need to know about their environment to survive in it for thousands of years?
- What types of change have most affected First Nations, Métis and Inuit people? Identify the changes for each distinct Indigenous group.
Use responses to inform and guide the learning process. They can provide insight into which concepts need clarity, what many students are already well informed about, and a general direction that many students want to pursue.
Knowledge Building Circles – A Knowledge Building Circle is a class discussion activity that is specifically reserved for working out students’ questions and ideas. The aim of the circle is to help all students improve their understanding as they share their learning and ideas and ask questions. This communal activity deepens students’ understanding through increased exposure to the diverse perspectives of the class. The KBC aligns with the Indigenous time-honoured tradition of the Talking Circle, where individuals take turns sharing ideas.
With younger students, begin by viewing the book The Sharing Circle by elder and author Theresa "Corky" Larsen-Jonasson. During your knowledge-building circle, use a talking stick so students listen and share respectfully. The student holding the talking stick, and only that student, is designated as having the right to share while the other students listen quietly and respectfully. This Indigenous cultural tradition is used during ceremonies, storytelling and sharing experiences with Elders.
Here is an example of Putting the Talking Stick into practice – use during speaking and listening activities to allow students to interact with others, contribute to a class goal, share ideas and opinions, and solve problems. Making a Talking Stick for the class.
Some Indigenous peoples use a rock when having a talking circle. This connects students to Grandfather Rock teachings, and to our connection with Mother Earth and our Ancestors. We seek guidance and wisdom when we include a rock in our talking circles, to ensure we are moving forward in a good way, as Creator intended us to be, Kind and Compassionate.
- K – What students already KNOW about Indigenous ways of knowing or Elders?
- W – WHAT students want to learn about Elders ways of knowing?
- H – HOW they will research or find the information they want to learn?
- L – What students have LEARNED about Indigenous knowledge after taking action?
- A – How will students APPLY the Indigenous knowledge they’ve learned?
- Q – What QUESTIONS do they still have or have thought of as a result of this inquiry?
At this stage, students may begin research to pursue their umbrella questions, or some of the following activities could be integrated into the process to ensure that students have an understanding of foundational climate science. The activities listed below will enrich the understanding of climate change.
Indigenous peoples have been and are leaders of climate action; their role in monitoring climate change impacts and the environmental effects on traditional lands and waters play a critical part in our fight against climate change. There is a great deal that we can learn from how Indigenous peoples have lived sustainably with the Land for many years. They have adapted by travelling throughout their Land in search of food and other resources depending on the seasons. We need to listen carefully to better understand the value of Traditional knowledge and its contribution to sustainability and planning for the future. Indigenous communities have their own experts, elders, knowledge keepers and ways of knowing; their knowledge is an essential resource for learning how to adapt to climate change. We need to value what they can bring to the climate conversation and actively seek it to guide us.
Watch Norma’s Story an animated true tale of the profound effects of climate change on the environment, culture and food security on the people and wildlife of the Arctic. What happens when we do not respect the land, the environment?
Natural Inquirer – Students use interview techniques to research and write about an animal or plant affected by climate change
Arctic Survivor – Students role-play polar bears and the habitat components of food, water, shelter and space to understand how polar bear populations are affected by changes in their habitat. In the second part of the game, some possible impacts of climate change on the Arctic habitat of polar bears are explored.
Climate Connections – Bring news reports of weather events that have happened in the world. Discuss with the students the impact these events may have had on local habitats. Use the climate connection picture cards to play a variety of non-competitive games that explore connections between human actions, climate change, and positive and negative impacts on wildlife habitat.
Observing Change – In this outdoor activity, students will complete a series of neighbourhood walks with an observation chart over a period of a few months to predict and observe changes in living and non-living components of the local ecosystems in order to understand the impact of weather, climate and climate change. Discuss how humans can impact habitats in positive and negative ways (e.g., provide water for plants; create diverse habitats in gardens; remove native plants and in the process, risk destroying habitat for native animals; pollute water and soil, etc.). Help the students to identify how to show respect for the environment and what actions they can take to positively affect the school environment.
This step is designed to encourage students to integrate and synthesize key ideas. When students make connections and see relationships within and across lessons, this helps them to solidify knowledge and deepen understanding.
- Share news reports of major weather events that have happened in the community, province, country or the world. Discuss with the students the impact these events may have had on local habitats, cities, towns, ecosystems. Have students make a now and then community comparison picture.
- Ensure that every student can describe what they did, why they did it, and what they found out regarding Indigenous Ways of Knowing or the importance of listening to and respecting Elders’ wisdom.
- Have students write a thank-you letter to the land, the seasons, Elders, grandparents or other adults who teach them things about their culture or nature, describing how and why they are thankful.
- Students reflect on their learning by reading their letter or sharing their picture/sketch and simply turn around and share with one other person
Teachers will assess learning at different points throughout the inquiry using multiple methods. The following assessments provide an alternative to standard quizzes and tests and can be used after consolidation, or at any point in the lesson, to check for understanding.
- Tableau – In this activity, students create a still picture, without talking, to capture and communicate the meaning of a concept. Students must truly understand the meaning of a concept or idea to communicate it using physical poses, gestures, and facial expressions rather than words. Use Tableau to check for understanding or see what new insights students have gained during the inquiry.
- Assess students’ knowledge and understanding by inviting them to write a text about an Elder in their life.
- Invite students to brainstorm the teachings that their elders have shared with them and how these teachings connect us with others, the land, histories, and our ancestors (to show we are accountable and that our decisions that we make affect others and the future generations).
- Assess students’ thank-you letters to verify their learning about Elders and the land.
Allowing time for students to take action is an essential part of the learning process on climate change, as it empowers students and eases their eco-anxiety. Ask the students what they want to do to positively impact climate change. List their ideas and come up with a plan to put their action in place.
Action can be taken in many different ways; here are some possible ideas for Taking Action:
- Create a video or presentation urging others to take action. Presentations can be in the classroom or at a school assembly
- Walk for water – When senior students at Seven Oaks Met School learned that the local community of Shoal Lake 40 First Nation (the very community where most of Winnipeg’s drinking water is sourced!) has been under a boil water advisory for over 20 years, they were inspired to take action. They organized speakers and elders from Winnipeg and Shoal Lake to educate the audience about the water crisis. The event raised over $7,000 for the Shoal Lake 40 First Nation community and spread awareness across the region.
- The Shaughnessy Medicine Wheel Garden in Winnipeg was designed as a teaching garden, incorporating the medicine wheel’s circle teachings, including fire, Water, air, and Earth. The plants and flowers reflect these elements and colours in each quadrant and feature Manitoba’s traditional medicines and indigenous plants. Thirteen boulders encircle the garden to represent the 13 moons of the year, and seven cedar benches will represent the seven teachings. Providing an outdoor learning space for students and a natural setting to enjoy the environment for the local community.
- MMHS Arboretum, Community, Indigenous and Medicinal Plant Gardens Students, staff, community members and partners began planting trees, shrubs and wildflowers at Milliken Mills High School in 1994. Since that time, the arboretum and associated gardens have been enhanced and have flourished. This year we have made every effort to expand the nature of the gardens with an interpretive guide created by students across the curriculum. This, while the physical and plant make-up of the garden continues to evolve. This year, despite the challenges of face-to-face learning and participation, we established the indigenous medicinal plant garden and created a strong cross-departmental partnership in the school, which will see the roots truly become shoots as the project will become stewarded through teamwork.
- The Herb Campbell Public School has created a visual landscape plan for a Medicine Wheel Garden Outdoor Classroom on our school site, which includes: A centred medicine wheel garden with indigenous plants surrounded by stone seating and an outdoor classroom frame; 9 local food gardens including six raised-bed gardens (for herbs, vegetables, fruit, and edible flowers) and three in-ground gardens (a Three Sisters garden, an indigenous berry garden, and a pumpkin patch); 4 outer garden areas with indigenous plants, shrubs, and trees connected to the four cardinal directions of our centred Medicine Wheel Garden; A wildlife observation/inquiry area with feeders, water supply, and log stump seating; Interpretative learning signs; Pathways connecting to our natural forest, meadow, and wetland habitats and other planting areas.
- Oak Park Outdoor Indigenous Learning Place created an outdoor Indigenous learning space that allows students, staff, and the community to connect with nature and celebrate Indigenous culture, tradition, and teaching. This project has many stakeholders, including Indigenous and non-Indigenous students, Indigenous knowledge keepers (academics, community members, Elders), and various divisional staff. To have all staff and students embrace Indigenous ways of knowing, doing, and being; to enhance our Indigenous students’ engagement and success in school. Having a teaching space in front of our school demonstrates our commitment to our school goal and reconciliation. It will also create endless opportunities for teaching and learning that honours, centres, and celebrates Indigenous culture.
- Youth Climate Solutions is a guide for making a difference for polar bears and their sea ice home. Visit Polar Bears and the Changing Arctic at Polar Bears International to learn more about the Arctic Ecosystem and how we can help protect this remarkable part of the planet.
- Visit Our Canada Project for many more action project ideas! This platform inspires youth to be responsible citizens and share their voice.
Measurement and data in first grade includes such important concepts as comparing the length and weight of two objects using a third object. This guided lesson, designed by curriculum experts, takes students on an exploration of these measurement and data concepts. Once through with the lesson, kids can gain extra practice with measurement and data with the accompanying worksheets.
Planning for a substitute in the classroom has never been easier than with this kindergarten, week-long sub packet! Your substitute can supercharge learning with lessons about the weather and four seasons to educate and inspire students!
Figuring out the difference between bar graphs, line plots, and the other means of graphing data is made easier when students can depend on exciting classroom activities. If you want help teaching graphing data, look no further than our printable student worksheets that make it easy for students to learn how to plot data they’ve already worked with in previous exercises.
Graphing's a great way to visualize and better understand numbers, which can help students in just about every math topic. From tallying up types of pets in a store to filling out a multiplication table, graphing comes up in lots of situations your student might find themselves in. If organizing all those numbers and information just leaves your student all mixed up, we have lots of graphing practice in all kinds of formats, and with all kinds of themes! Kids who love animals can tally up pets, and foodies can count up vegetables or flavors of cake. We even have some lessons on pre-graphing topics, like tallying and grouping, easy enough for even kindergarteners to work with. There are even lesson plans in here for teachers, too! All our content is created by a team of education professionals, so you know your student is getting the best stuff on the internet. Get ready for graphing success with our stash of materials on the matter.
An illustration to explain the dynamics of the ultra-relativistic third Van Allen radiation belt, by Andy Kale. Credit: Andy Kale
Earth’s magnetosphere, the region of space dominated by Earth’s magnetic field, protects our planet from the harsh battering of the solar wind. Like a protective shield, the magnetosphere absorbs and deflects plasma from the solar wind which originates from the Sun. When conditions are right, beautiful dancing auroral displays are generated. But when the solar wind is most violent, extreme space weather storms can create intense radiation in the Van Allen belts and drive electrical currents which can damage terrestrial electrical power grids. Earth could then be at risk for up to trillions of dollars of damage.
Announced today in Nature Physics, a new discovery led by researchers at the University of Alberta shows for the first time how the puzzling third Van Allen radiation belt is created by a “space tsunami.” Intense so-called ultra-low frequency (ULF) plasma waves, which are excited on the scale of the whole magnetosphere, transport the outer part of the belt radiation harmlessly into interplanetary space and create the previously unexplained feature of the third belt.
“Remarkably, we observed huge plasma waves,” says Ian Mann, physics professor at the University of Alberta, lead author on the study and former Canada Research Chair in Space Physics. “Rather like a space tsunami, they slosh the radiation belts around and very rapidly wash away the outer part of the belt, explaining the structure of the enigmatic third radiation belt.”
The research also points to the importance of these waves for reducing the space radiation threat to satellites during other space storms as well. “Space radiation poses a threat to the operation of the satellite infrastructure upon which our twenty-first century technological society relies,” adds Mann. “Understanding how such radiation is energized and lost is one of the biggest challenges for space research.”
For the last 50 years, and since the accidental discovery of the Van Allen belts at the beginning of the space age, forecasting this space radiation has become essential to the operation of satellites and human exploration in space.
The Van Allen belts, named after their discoverer, are regions within the magnetosphere where high-energy protons and electrons are trapped by Earth’s magnetic field. Known since 1958, these regions were historically classified into two inner and outer belts. However, in 2013, NASA’s Van Allen Probes reported an unexplained third Van Allen belt that had not previously been observed. This third Van Allen belt lasted only a few weeks before it vanished, and its cause remained inexplicable.
Mann is co-investigator on the NASA Van Allen Probes mission. One of his team’s main objectives is to model the process by which plasma waves in the magnetosphere control the dynamics of the intense relativistic particles in the Van Allen belts—with one of the goals of the Van Allen Probes mission being to develop sufficient understanding to reach the point of predictability. The appearance of the third Van Allen belt, one of the first major discoveries of the Van Allen Probes era, had continued to puzzle scientists, with increasingly complex explanatory models being developed. However, the explanation announced today shows that once the effects of these huge ULF waves are included, everything falls into place.
“We have discovered a very elegant explanation for the dynamics of the third belt,” says Mann. “Our results show a remarkable simplicity in belt response once the dominant processes are accurately specified.”
Many of the services we rely on today, such as GPS and satellite-based telecommunications, are affected by radiation within the Van Allen belts. Radiation in the form of high-energy electrons, often called “satellite killer” electrons because of their threat to satellites, is a high profile focus for the International Living with a Star (ILWS) Program and international cooperation between multiple international space agencies. Recent socio-economic studies of the impact of a severe space weather storm have estimated that the cost of the overall damage and follow-on impacts on space-based and terrestrial infrastructure could be as high as $2 trillion USD.
Politicians are also starting to give serious consideration to the risk from space weather. The White House recently announced the implementation of a Space Weather Action Plan highlighting the importance of space weather research like this recent discovery. The action plan seeks to mitigate the effects of extreme space weather by developing specific actions targeting mitigation and promoting international collaboration.
Mann, lead author of this new study, is the chairman of an international Space Weather Expert Group operated under the auspices of the United Nations Committee on the Peaceful Uses of Outer Space (COPUOS); the Expert Group has a three-year work plan and is charged with examining and developing strategies to address the space weather threat through international cooperation. As a nation living under the auroral zone, Canada faces a much larger potential threat from space weather impacts than other countries.
Source: University of Alberta
Journal: Nature Physics
- Explaining the dynamics of the ultra-relativistic third Van Allen radiation belt, Nature Physics, DOI: 10.1038/nphys3799
What makes gasoline and other fuels so powerful? The potential of chemical mixtures such as the fuels that power cars comes from the reactions these materials are able to cause.
You can measure this energy density using straightforward formulas and equations that govern these chemical and physical properties when the fuels are put to use. The energy density equation gives a way of measuring this powerful energy with respect to the fuel itself.
Energy Density Formula
The formula for energy density is Ed = E/V for energy density Ed, energy E and volume V. You can also measure the specific energy Es as E/M for mass instead of volume. The specific energy is more closely correlated with the energy available that fuels use when powering cars than energy density is. Reference tables show that gasoline, kerosene and diesel fuels have much higher energy densities than coal, methanol and wood.
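The two definitions can be sketched as short helper functions. This is an illustrative sketch, not from the article; the gasoline figures in the comments are round assumed numbers.

```python
# Energy density (E/V) and specific energy (E/M), per the definitions above.

def energy_density(energy_mj, volume_l):
    """Volumetric energy density in MJ/L: Ed = E / V."""
    return energy_mj / volume_l

def specific_energy(energy_mj, mass_kg):
    """Gravimetric specific energy in MJ/kg: Es = E / M."""
    return energy_mj / mass_kg

# Assumed round figures: ~34 MJ stored in 1 L of gasoline, massing ~0.75 kg.
print(energy_density(34.0, 1.0))    # MJ/L
print(specific_energy(34.0, 0.75))  # MJ/kg
```

Note how the same fuel sample gives two different figures of merit depending on whether you divide by volume or by mass, which is why low-density fuels can score better per unit mass.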
Regardless, chemists, physicists and engineers use both energy density and specific energy when designing automobiles and testing materials for physical properties. You can determine how much energy a fuel will give off based on the combustion of this densely packed energy. This is measured through energy content.
The amount of energy per unit mass or volume that a fuel gives off when it combusts is the energy content of fuel. While more densely packed fuels have higher values of energy content in terms of volume, lower-density fuels generally produce more energy content per unit mass.
Energy Density Units
The energy content has to be measured for a given volume of gas at a specific temperature and pressure. In the United States, engineers and scientists report the energy content in international British thermal units (BtuIT), while, in Canada and Mexico, energy content is reported in joules (J).
You can also use calories to report energy content. More standard methods of calculating energy content in science and engineering use the amount of heat produced when you burn a single gram of that material in joules per gram (J/g).
Calculating Energy Content
Using this unit of joules per gram, you can calculate how much heat is given off by increasing the temperature of a specific substance when you know the specific heat capacity Cp of that material. The Cp of water is 4.18 J/g°C. You use the equation for heat H as H = ∆T x m x Cp in which ∆T is a change in temperature, and m is the mass of the substance in grams.
If you experimentally measure the initial and final temperatures of a chemical material, you can determine the heat given off by the reaction. If you were to heat a flask of fuel as a container and record the change in temperature in the space directly outside the container, you can measure the heat given off using this equation.
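The heat equation above is easy to put into code. This is a minimal sketch using the water value quoted in the text; the example temperatures and mass are assumptions for illustration.

```python
# Heat from a temperature change: H = ΔT · m · Cp, as in the text.
CP_WATER = 4.18  # specific heat capacity of water, J/(g·°C)

def heat_absorbed(delta_t_c, mass_g, cp=CP_WATER):
    """Heat in joules for a temperature change (°C), mass (g), and Cp."""
    return delta_t_c * mass_g * cp

# Assumed example: 100 g of water in the calorimeter warms from 20 °C to 35 °C.
h = heat_absorbed(35.0 - 20.0, 100.0)
print(h)  # 6270.0 J given off by the burning fuel
```

Dividing that heat by the measured change in fuel mass would then give the specific energy the surrounding paragraphs describe.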
When measuring temperatures, a temperature probe can continuously record temperature over time. This gives you a broad range of temperatures to use with the heat equation. You should also look for regions of the graph that show a linear relationship between temperature and time, as this shows that heat is being given off at a constant rate, matching the linear relationship between temperature and heat that the heat equation assumes.
Then, if you measure how much the mass of the fuel has changed, you can determine how energy was stored in that amount of mass for the fuel. Alternatively, you could measure how much of a volume difference this is for the appropriate energy density units.
This method, known as the bomb calorimeter method, gives you an experimental method of using the energy density formula to calculate this density. More refined methods can take into account heat lost to the walls of the container itself or the conduction of heat through the container's material.
Higher Heating Value Energy Content
You can also express energy content as a variation of the higher heating value (HHV). This is the amount of heat released at room temperature (25 °C) by a mass or volume of fuel after it combusts, and the products have returned to room temperature. This method accounts for the latent heat, the enthalpy heat that emerges when solidification and solid-state phase transformations occur during the cooling of a material.
Through this method, the energy content is given by the higher heating value at base volume conditions (HHVb). At standard or base conditions, the energy flow rate qHb is equal to the product of the volumetric flow rate qvb and the higher heating value at base volume conditions in the equation qHb = qvb x HHVb.
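The energy flow rate relation is a single multiplication. A minimal sketch, with the units in the comments assumed for illustration:

```python
# Energy flow rate at base conditions: qHb = qvb * HHVb, per the text.

def energy_flow_rate(q_vb, hhv_b):
    """q_vb: volumetric flow rate (assumed m^3/h);
    hhv_b: higher heating value at base conditions (assumed MJ/m^3).
    Returns the energy flow rate in MJ/h."""
    return q_vb * hhv_b

# Assumed example: 100 m^3/h of a gas with HHVb of 37 MJ/m^3.
print(energy_flow_rate(100.0, 37.0))  # 3700.0 MJ/h
```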
Through experimental methods, scientists and engineers have studied the HHVb for various fuels to determine how it can be expressed as a function of other variables pertinent to fuel efficiency. Standard conditions are defined as 0 °C (273.15 K or 32 °F) and 10^5 pascals (1 bar).
These empirical results have shown that HHVb depends on pressure and temperature at base conditions as well as the composition of the fuel or gas. In contrast, the lower heating value LHV is the same measurement, but at the point at which the water in the final combustion products remains as vapor or steam.
Other research has shown that you can calculate HHV from the composition of the fuel itself. This gives HHV = 0.35XC + 1.18XH + 0.10XS - 0.02XN - 0.10XO - 0.02Xash, with each X as the fractional mass for carbon (C), hydrogen (H), sulfur (S), nitrogen (N), oxygen (O) and the remaining ash content. Nitrogen and oxygen have an adverse effect on the HHV, as they don't contribute to the release of heat as other elements and molecules do.
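The composition formula can be sketched directly, using the coefficients exactly as quoted above; the resulting units depend on how the source defines the X fractions, and the example fuel composition is a made-up illustration.

```python
# HHV from fractional mass composition, with the coefficients quoted in
# the text. Treat the output as a rough empirical estimate.

def hhv(x_c, x_h, x_s, x_n, x_o, x_ash):
    """Higher heating value from mass fractions of C, H, S, N, O, and ash."""
    return (0.35 * x_c + 1.18 * x_h + 0.10 * x_s
            - 0.02 * x_n - 0.10 * x_o - 0.02 * x_ash)

# Hypothetical fuel: 85% carbon, 12% hydrogen, 1% sulfur, 2% oxygen by mass.
print(hhv(0.85, 0.12, 0.01, 0.0, 0.02, 0.0))
```

Note how the oxygen term subtracts from the total, which is the "adverse effect" on HHV described above.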
Energy Density of Biodiesel
Biodiesel fuels offer an environmentally-friendly method of producing fuel as an alternative to other, more harmful fuels. They're created from natural oils, soybean extracts and algae. This renewable fuel source results in less pollution to the environment, and they are usually mixed with petroleum fuels (gasoline and diesel fuels). This makes them ideal candidates for studying how much energy a fuel uses using quantities like energy density and energy content.
Unfortunately from an energy content perspective, biodiesel fuels have a large amount of oxygen, so they produce lower energy values with respect to their mass (in units of MJ/kg). Biodiesel fuels have about a 10 percent lower mass energy content. B100, for example, has an energy content of 119,550 Btu/gal.
Another way of measuring how much energy a fuel uses is the energy balance, which, for biodiesel is 4.56. This means biodiesel fuels produce 4.56 units of energy for every unit of fossil energy they use. Other fuels pack more energy, such as B20, a blend of diesel with biomass fuel. This fuel has about 99 percent of the energy of one gallon of diesel or 109 percent of the energy of one gallon of gasoline.
Alternative methods exist for determining the efficiency of heat given off by biomass in general. Scientists and engineers that study biomass use the bomb calorimeter method to measure the heat released from combustion that is transferred to either air or water surrounding the container. From this, you can determine the HHV for the biomass.
Lines and Linear Equations
Graphs of lines
Geometry taught us that exactly one line passes through any two points. We can use this fact in algebra as well. When drawing the graph of a line, we only need two points, and then use a straight edge to connect them. Remember, though, that lines are infinitely long: they do not start and stop at the two points we used to draw them.
Lines can be expressed algebraically as an equation that relates the $y$-values to the $x$-values. We can use the same fact from before, that two points are contained in exactly one line. With only two points, we can determine the equation of a line. Before we do this, let's discuss some very important characteristics of lines: slope, $y$-intercept, and $x$-intercept.
Think of the slope of a line as its "steepness": how quickly it rises or falls from left to right. This value is indicated in the graph above as $\frac{\Delta y}{\Delta x}$, which specifies how much the line rises or falls (change in $y$) as we move from left to right (change in $x$). It is helpful to relate slope or steepness to the rate of vertical change per horizontal change. A well-known example is that of speed, which measures the change in distance per change in time. Where a line can represent the distance traveled at various points in time, the slope of the line represents the speed. A steep line represents high speed, whereas very little steepness represents a much slower rate of travel, or low speed. This is shown in the graph below.
The vertical axis represents distance, and the horizontal axis represents time. The red line is steeper than the blue and green lines. Notice the distance traveled after one hour on the red line is about 5 miles. It is much greater than the distance traveled on the blue or green lines after one hour - about $1$ mile and $\frac{1}{5}$ mile, respectively. The steeper the line, the greater the distance traveled per unit of time. In other words, steepness or slope represents speed. The red line is the fastest, with the greatest slope, and the green line is the slowest, with the smallest slope.
Slope can be classified in four ways: positive, negative, zero, and undefined slope. Positive slope means that as we move from left to right on the graph, the line rises. Negative slope means that as we move from left to right on the graph, the line falls. Zero slope means that the line is horizontal: it neither rises nor falls as we move from left to right. Vertical lines are said to have "undefined slope," as their slope appears to be some infinitely large, undefined value. See the graphs below that show each of the four slope types.
|Positive slope:||Negative slope:||Zero slope (Horizontal):||Undefined slope (Vertical):|
|$\frac{\Delta y}{\Delta x} \gt 0$||$\frac{\Delta y}{\Delta x} \lt 0$||$\Delta y = 0$, $\Delta x \neq 0$, so $\frac{\Delta y}{\Delta x} = 0$||$\Delta x = 0$, so $\frac{\Delta y}{\Delta x}$ is undefined|
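The four cases in the table can be captured in a small helper. This is an illustrative sketch, not part of the original lesson:

```python
# Classify the slope of the line through two points as positive,
# negative, zero (horizontal), or undefined (vertical).

def classify_slope(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    if dx == 0:
        return "undefined (vertical)"  # Δx = 0: vertical line
    m = dy / dx
    if m > 0:
        return "positive"
    if m < 0:
        return "negative"
    return "zero (horizontal)"         # Δy = 0: horizontal line

print(classify_slope((0, 0), (2, 4)))  # positive
print(classify_slope((1, 3), (1, 7)))  # undefined (vertical)
```

The vertical case is checked first, before dividing, which mirrors why vertical lines have "undefined" slope: the division $\Delta y / \Delta x$ cannot be carried out.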
The $y$-intercept of a line is the point where the line crosses the $y$-axis. Note that this happens when $x = 0$. What are the $y$-intercepts of the lines in the graphs above?
It looks like the $y$-intercepts are $(0, 1)$, $(0, 0)$, and $(0, 1)$ for the first three graphs. There is no $y$-intercept on the fourth graph - the line never crosses the $y$-axis.
$x$-Intercept
The $x$-intercept is a similar concept to the $y$-intercept: it is the point where the line crosses the $x$-axis. This happens when $y = 0$. The $x$-intercept is not used as often as the $y$-intercept, as we will see when determining the equation of a line. What are the $x$-intercepts of the lines in the graphs above?
It looks like the $x$-intercepts are $(-\frac{1}{2}, 0)$ and $(0, 0)$ for the first two graphs. There is no $x$-intercept on the third graph. The fourth graph has an $x$-intercept at $(-1, 0)$.
Equations of lines
In order to write an equation of a line, we usually need to determine the slope of the line first.
Calculating slope
Algebraically, slope is calculated as the ratio of the change in the $y$-value to the change in the $x$-value between any two points on the line. If we have two points, $(x_1, y_1)$ and $(x_2, y_2)$, slope is expressed as: $$\boxed{m = \frac{\Delta y}{\Delta x} = \frac{y_2 - y_1}{x_2 - x_1}}$$
Note that we use the letter $m$ to denote slope. A line that is very steep has $m$ values with very large magnitude, whereas a line that is not steep has $m$ values with very small magnitude. For example, slopes of $100$ and $-1,000$ have much larger magnitude than slopes of $-0.1$ or $1$.
Example:
Find the slope of the line that passes through points $(-2, 1)$ and $(5, 8)$.
Using the formula for slope, and letting point $(x_1, y_1) = (-2, 1)$ and point $(x_2, y_2) = (5, 8)$, $$\begin{align*} m &= \frac{\Delta y}{\Delta x} = \frac{y_2 - y_1}{x_2 - x_1}\\ &= \frac{8 - 1}{5 - (-2)}\\ &= \frac{7}{5 + 2}\\ &= \frac{7}{7}\\ &= 1 \end{align*}$$
Note that we chose point $(-2, 1)$ as $(x_1, y_1)$ and point $(5, 8)$ as $(x_2, y_2)$. This was by choice, as we could instead have let point $(5, 8)$ be $(x_1, y_1)$ and point $(-2, 1)$ be $(x_2, y_2)$. How does that affect the calculation of slope?
$$\begin{align*} m &= \frac{\Delta y}{\Delta x} = \frac{y_2 - y_1}{x_2 - x_1}\\ &= \frac{1 - 8}{-2 - 5}\\ &= \frac{-7}{-7}\\ &= 1 \end{align*}$$
We see the slope is the same either way we choose the first and second points. We can now conclude that the slope of the line that passes through points $(-2, 1)$ and $(5, 8)$ is $1$.
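The calculation above translates directly into code. A minimal sketch (not from the lesson) that reproduces the worked example:

```python
# Slope of the line through two points: m = (y2 - y1) / (x2 - x1).

def slope(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

# Order of the points does not matter, as the example shows:
print(slope((-2, 1), (5, 8)))  # 1.0
print(slope((5, 8), (-2, 1)))  # 1.0
```

Swapping the points negates both the numerator and the denominator, so the quotient, and therefore the slope, is unchanged.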
Now that we know what slope and $y$-intercepts are, we can determine the equation of a line given any two points on the line. There are two primary ways to write the equation of a line: point-slope form and slope-intercept form. We will first look at point-slope form.
Point-Slope form
The point-slope form of the equation of a line that passes through the point $(x_1, y_1)$ with slope $m$ is the following: $$\boxed{y - y_1 = m(x - x_1)}$$
What is the equation of the line that has slope $m = 2$ and passes through the point $(5, 4)$, in point-slope form?
Using the formula for the point-slope form of the equation of the line, we can simply substitute the slope and point coordinate values directly. In other words, $m = 2$ and $(x_1, y_1) = (5, 4)$. So, the equation of the line is $$y - 4 = 2(x - 5).$$
Example:
Given two points, $(-3, -5)$ and $(2, 5)$, write the point-slope equation of the line that passes through them.
First, we calculate the slope: $$\begin{align*} m &= \frac{y_2 - y_1}{x_2 - x_1}\\ &= \frac{5 - (-5)}{2 - (-3)}\\ &= \frac{10}{5}\\ &= 2 \end{align*}$$
Graphically, we can verify the slope by looking at the change in $y$-values versus the change in $x$-values between the two points:
Graph of line passing through $(2, 5)$ and $(-3, -5)$.
We can now use one of the points along with the slope to write the equation of the line: $$\begin{align*} y - y_1 &= m(x - x_1) \\ y - 5 &= 2(x - 2) \quad\checkmark \end{align*}$$
We could also have used the other point to write the equation of the line: $$\begin{align*} y - y_1 &= m(x - x_1) \\ y - (-5) &= 2(x - (-3)) \\ y + 5 &= 2(x + 3) \quad\checkmark \end{align*}$$
But wait! Those two equations look different. How can they both describe the same line? If we simplify the equations, we see that they are indeed the same. Let's do just that: $$\begin{align*} y - 5 &= 2(x - 2) \\ y - 5 &= 2x - 4 \\ y - 5 + 5 &= 2x - 4 + 5 \\ y &= 2x + 1 \quad\checkmark \end{align*}$$ $$\begin{align*} y + 5 &= 2(x + 3) \\ y + 5 &= 2x + 6 \\ y + 5 - 5 &= 2x + 6 - 5 \\ y &= 2x + 1 \quad\checkmark \end{align*}$$
So, using either point to write the point-slope form of the equation results in the same "simplified" equation. We will see next that this simplified equation is another important form of linear equations.
Slope-Intercept form
Another way to express the equation of a line is slope-intercept form: $$\boxed{y = mx + b}$$
In this equation, $m$ again is the slope of the line, and $(0, b)$ is the $y$-intercept. Like point-slope form, all we need are two points in order to write the equation that passes through them in slope-intercept form.
Constants vs. Variables
It is important to note that in the equation for slope-intercept form, the letters $m$ and $b$ are constant values, as opposed to the letters $x$ and $y$, which are variables. Remember, constants represent a "fixed" number - they do not change. A variable can take one of many values - it can change. A given line contains many points, each of which has a unique $x$ and $y$ value, but that line has only one slope-intercept equation, with one value each for $m$ and $b$.
Given the very same 2 points over, $(-3, -5)$ and $(2, 5)$, compose the slope-intercept develop of the equation of the line that passes through them.
We already calculated the slope above: $m = 2$. We can then use one of the points to solve for $b$. Using $(2, 5)$, $$\begin{align*} y &= 2x + b \\ 5 &= 2(2) + b \\ 5 &= 4 + b \\ 1 &= b. \end{align*}$$ So, the equation of the line in slope-intercept form is $$y = 2x + 1.$$ The $y$-intercept of the line is $(0, b) = (0, 1)$. Look at the graph above to verify this is the $y$-intercept. At what point does the line cross the $y$-axis?
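The solve-for-$b$ step generalizes to any slope and point. A minimal Python sketch (the helper name is my own):

```python
def slope_intercept(m, point):
    """Given slope m and one point (x, y) on the line, return (m, b) for y = m*x + b."""
    x, y = point
    b = y - m * x  # rearranged from y = m*x + b
    return m, b

m, b = slope_intercept(2, (2, 5))
print(f"y = {m}x + {b}")  # prints: y = 2x + 1
```

Using the other point, $(-3, -5)$, gives the same $b = 1$, as it must for points on the same line.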
At first glance, it seems the point-slope and slope-intercept equations of the line are different, but they really do describe the same line. We can verify this by "simplifying" the point-slope form as such: $$\begin{align*} y - 5 &= 2(x - 2) \\ y - 5 &= 2x - 4 \\ y - 5 + 5 &= 2x - 4 + 5 \\ y &= 2x + 1 \end{align*}$$
Watch this video for more examples on writing equations of lines in slope-intercept form.
Horizontal and Vertical Lines
Now that we can write equations of lines, we should consider two special cases: horizontal and vertical lines. We stated above that horizontal lines have slope $m = 0$, and that vertical lines have undefined slope. How can we use this to determine the equations of horizontal and vertical lines?

Vertical Lines

Facts about vertical lines: If two points have the same $x$-coordinate, only a vertical line can pass through both points. Each point on a vertical line has the same $x$-coordinate. If two points have the same $x$-coordinate, $c$, the equation of the line is $x = c$. The $x$-intercept of a vertical line $x = c$ is the point $(c, 0)$. Except for the line $x = 0$, vertical lines do not have a $y$-intercept.
Consider two points, $(2, 0)$ and $(2, 1)$. What is the equation of the line that passes through them?
Graph of line passing through points $(2, 0)$ and $(2, 1)$
First, note that the $x$-coordinate is the same for both points. In fact, if we plot any point on the line, we can see that its $x$-coordinate will be $2$. We know that only a vertical line can pass through the points, so the equation of that line must be $x = 2$.
But how can we verify this algebraically? First off, what is the slope? We calculate the slope as $$\begin{align*} m &= \frac{1 - 0}{2 - 2} \\[1ex] &= \frac{1}{0} \\[1ex] &= \text{undefined} \end{align*}$$ In this case, the slope is undefined, which makes this a vertical line.
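A slope routine has to treat this case explicitly, since dividing by zero raises an error in most languages. A hedged Python sketch (my own helper, not from the lesson) that returns None when the slope is undefined:

```python
def slope(p1, p2):
    """Slope of the line through p1 and p2, or None when the line is vertical."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        return None  # vertical line: slope is undefined
    return (y2 - y1) / (x2 - x1)

print(slope((2, 0), (2, 1)))    # prints: None (the vertical line x = 2)
print(slope((-3, -5), (2, 5)))  # prints: 2.0
```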
Slope-Intercept and Point-Slope Forms

At this point, you might ask, "How can I write the equation of a vertical line in slope-intercept or point-slope form?" The answer is that you can really only write the equation of a vertical line one way. For vertical lines, $x$ is the same, or constant, for all values of $y$. Because $y$ can be any number for vertical lines, the variable $y$ does not appear in the equation of a vertical line.

Horizontal Lines

Facts about horizontal lines: If two points have the same $y$-coordinate, only a horizontal line can pass through both points. Each point on a horizontal line has the same $y$-coordinate. If two points have the same $y$-coordinate, $b$, the equation of the line is $y = b$. The $y$-intercept of a horizontal line $y = b$ is the point $(0, b)$. Except for the line $y = 0$, horizontal lines do not have an $x$-intercept.
Consider two points, $(3, 4)$ and $(0, 4)$. What is the equation of the line that passes through them?
Graph of line passing through points $(3, 4)$ and $(0, 4)$
First, note that the $y$-coordinate is the same for both points. In fact, if we plot any point on the line, we can see that its $y$-coordinate is $4$. We know that only a horizontal line can pass through the points, so the equation of that line must be $y = 4$.
How can we verify this algebraically? First, calculate the slope: $$\begin{align*} m &= \frac{4 - 4}{0 - 3} \\[1ex] &= \frac{0}{-3} \\[1ex] &= 0 \end{align*}$$ Then, using slope-intercept form, we can substitute $0$ for $m$ and solve for $y$: $$\begin{align*} y &= (0)x + b \\[1ex] &= b \end{align*}$$ This tells us that every point on the line has $y$-coordinate $b$. Because we know two points on the line have $y$-coordinate $4$, $b$ must be $4$, and so the equation of the line is $y = 4$.

Slope-Intercept and Point-Slope Forms

Similar to vertical lines, the equation of a horizontal line can only be written one way. For horizontal lines, $y$ is the same for all values of $x$. Because $x$ can be any number for horizontal lines, the variable $x$ does not appear in the equation of a horizontal line.
Parallel and Perpendicular Lines
Now that we know how to characterize lines by their slope, we can determine whether two lines are parallel or perpendicular from their slopes.

Parallel Lines

In geometry, we are told that two distinct lines that do not intersect are parallel. Looking at the graph below, there are two lines that seem never to intersect. What can we say about their slopes?
It appears that the lines above have the same slope, and that is correct. Non-vertical parallel lines have the same slope. Any two vertical lines, however, are also parallel. It is important to note that vertical lines have undefined slope.

Perpendicular Lines
We know from geometry that perpendicular lines form an angle of $90^\circ$. The blue and red lines in the graph below are perpendicular. What do we notice about their slopes?
Even though this is one specific example, the relationship between the slopes applies to all perpendicular lines. Ignoring the signs for now, notice that the vertical change in the blue line equals the horizontal change in the red line. Likewise, the vertical change in the red line equals the horizontal change in the blue line. So, then, what are the slopes of these two lines? $$\text{slope of blue line} = \frac{-2}{1} = -2$$ $$\text{slope of red line} = \frac{1}{2}$$
The other fact to notice is that the signs of the slopes are not the same. The blue line has a negative slope and the red line has a positive slope. If we multiply the slopes, we get $$-2 \times \frac{1}{2} = -1.$$ This inverse and negative relationship between slopes is true for all perpendicular lines, except horizontal and vertical lines.
Here is another example of two perpendicular lines:
$$\text{slope of blue line} = \frac{-2}{3}$$ $$\text{slope of red line} = \frac{3}{2}$$ $$\text{Product of slopes} = \frac{-2}{3} \cdot \frac{3}{2} = -1$$ Again, we see that the slopes of two perpendicular lines are negative reciprocals, and therefore their product is $-1$. Recall that the reciprocal of a number is $1$ divided by the number. Let's verify this with the examples above: The negative reciprocal of $-2$ is $-\frac{1}{-2} = \frac{1}{2}\ \checkmark$. The negative reciprocal of $\frac{1}{2}$ is $-\frac{1}{\frac{1}{2}} = -2\ \checkmark$. The negative reciprocal of $-\frac{2}{3}$ is $-\frac{1}{-\frac{2}{3}} = \frac{3}{2}\ \checkmark$. The negative reciprocal of $\frac{3}{2}$ is $-\frac{1}{\frac{3}{2}} = -\frac{2}{3}\ \checkmark$.
Two lines are perpendicular if one of the following is true: The product of their slopes is $-1$. One line is vertical and the other is horizontal.
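This criterion translates directly into code. A Python sketch (the function name is my own; exact fractions avoid floating-point surprises, and None stands in for a vertical line's undefined slope):

```python
from fractions import Fraction

def are_perpendicular(m1, m2):
    """True if the lines with slopes m1, m2 are perpendicular; None = vertical."""
    if m1 is None or m2 is None:
        # Perpendicular only when one line is vertical and the other horizontal.
        return (m1 is None and m2 == 0) or (m2 is None and m1 == 0)
    return m1 * m2 == -1

print(are_perpendicular(-2, Fraction(1, 2)))               # True
print(are_perpendicular(Fraction(-2, 3), Fraction(3, 2)))  # True
print(are_perpendicular(None, 0))                          # True
print(are_perpendicular(2, 2))                             # False
```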
Calculate the slope of the line passing through the provided points.
|1. $(2, 1)$ and $(6, 9)$||2. $(-4, -2)$ and $(2, -3)$||3. $(3, 0)$ and $(6, 2)$|
|4. $(0, 9)$ and $(4, 7)$||5. $(-2, \frac{1}{2})$ and $(-5, \frac{1}{2})$||6. $(-5, -1)$ and $(2, 3)$|
|7. $(-10, 3)$ and $(-10, 4)$||8. $(-6, -4)$ and $(6, 5)$||9. $(5, -2)$ and $(-4, -2)$|
Find the slope of each of the following lines.
|10. $y - 2 = \frac{1}{2}(x - 2)$||11. $y + 1 = x - 4$||12. $y - \frac{2}{3} = 4(x + 7)$|
|13. $y = -(x + 2)$||14. $2x + 3y = 6$||15. $y = -2x$|
|16. $y = x$||17. $y = 4$||18. $x = -2$|
|19. $x = 0$||20. $y = -1$||21. $y = 0$|
Write the point-slope form of the equation of the line with the given slope and containing the given point.
|22. $m = 6$; $(2, 7)$||23. $m = \frac{3}{5}$; $(9, 2)$||24. $m = -5$; $(6, 2)$|
|25. $m = -2$; $(-4, -1)$||26. $m = 1$; $(-2, -8)$||27. $m = -1$; $(-3, 6)$|
|28. $m = \frac{4}{3}$; $(7, -1)$||29. $m = \frac{7}{2}$; $(-3, 4)$||30. $m = -1$; $(-1, -1)$|
Write the point-slope form of the equation of the line passing through the given pair of points.
|31. $(1, 5)$ and $(4, 2)$||32. $(3, 7)$ and $(4, 8)$||33. $(-3, 1)$ and $(3, 5)$|
|34. $(-2, 3)$ and $(3, 5)$||35. $(5, 0)$ and $(0, -2)$||36. $(-2, 0)$ and $(0, 3)$|
|37. $(0, 0)$ and $(-1, 1)$||38. $(1, 1)$ and $(3, 1)$||39. $(3, 2)$ and $(3, -2)$|
Exercises 40-48: Write the slope-intercept form of the equation of the line with the given slope and containing the given point in exercises 22-30.
Exercises 49-57: Write the slope-intercept form of the equation of the line passing through the given pair of points in exercises 31-39.
In astronomy, we often use angular measurements to describe the apparent size of an object in space and the apparent distances between objects. Often these angles are very small. Angles are also used to describe an object's location in space. The angular measure of an object is usually expressed in degrees, arcminutes or arcseconds. Just as an hour is divided into 60 minutes and a minute into 60 seconds, a degree is divided into 60 arcminutes and an arcminute is divided into 60 arcseconds. To give you an idea of how small an arcsecond is, imagine the width of a dime as seen from 2 kilometers or 1 1/4 miles away.
1 degree = 1° = 1/360 of a circle
1 arcminute = 1' = 1/60 of a degree
1 arcsecond = 1" = 1/60 of an arcminute = 1/3600 of a degree

To get a rough estimate of the angular size of objects in space, you can go out on a clear night when the moon is up. Extend your arm towards the sky. Your fist, at arm's length, covers about 10 degrees of the sky, your thumb covers about 2 degrees, and your little finger covers about 1 degree. If you look at the Moon, it should take up about 1/2 a degree in the sky. The Big Dipper should be about 20 degrees (two fists at arm's length) from one end to the other.
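The dime comparison follows from the small-angle formula $\theta \approx s/d$ (in radians). A Python sketch, assuming a dime is about 18 mm across (the diameter is my assumption, not from the text):

```python
import math

ARCSEC_PER_RADIAN = math.degrees(1) * 3600  # ~206265 arcseconds per radian

def angular_size_arcsec(width_m, distance_m):
    """Apparent angular size via the small-angle approximation theta = s / d."""
    return (width_m / distance_m) * ARCSEC_PER_RADIAN

# An ~18 mm dime seen from 2 km subtends roughly 1.9 arcseconds --
# on the order of one arcsecond, as the text says.
print(round(angular_size_arcsec(0.018, 2000), 1))  # prints: 1.9
```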
Comets are commonly named after their discoverers and are considered members of our Solar System. A comet is a small Solar System body that orbits the Sun and originates from either the Kuiper Belt or the Oort Cloud; when it is close enough to the Sun, it exhibits a visible atmosphere or a tail, both primarily from the effects of solar radiation upon the comet's nucleus. Comets can have highly elliptical orbits that bring them very close to the Sun and swing them deep into space, often beyond the orbit of Pluto. Comets have a variety of orbital periods, ranging from a few years to hundreds of thousands of years, while some are believed to pass through the inner Solar System only once before being thrown out into interstellar space. Comets can be described as "dirty snowballs" containing a mixture of dust and frozen gases, and they can leave a trail of debris behind them. A comet consists of a nucleus, a coma, and an ion tail and/or a dust tail.
The Kuiper Belt is pronounced to rhyme with "viper" and is sometimes called the Edgeworth-Kuiper Belt. Its existence was predicted in 1951 by Gerard Kuiper, for whom the belt is named. It is a region of the Solar System beyond the planets, extending from the orbit of Neptune to approximately 55 AU from the Sun, and is home to at least three dwarf planets: Pluto, Haumea, and Makemake. At its fullest extent, including its outlying regions, the Kuiper Belt stretches from roughly 30 to 55 AU. It is quite thick, with the main concentration extending as much as ten degrees outside the ecliptic plane and a more diffuse distribution of objects extending several times farther. It is considered to be the source of short-period comets. No spacecraft has ever traveled to the Kuiper Belt, but NASA's New Horizons mission planned to arrive at Pluto in 2015.
The Oort Cloud was proposed in 1950 by Dutch astronomer Jan Oort. It is a giant, hypothetical, spherical cloud of comets believed to lie roughly 50,000 AU, or nearly a light-year, from the Sun, and is thought to contain about 1 million comets. Objects in the Oort Cloud are largely composed of ices, such as water, ammonia, and methane. The vast distance of the Oort Cloud is considered to mark the outer edge of the Solar System. The Oort Cloud is the source of long-period comets, and possibly of higher-inclination intermediate comets that were pulled into shorter-period orbits by the planets. The total mass of comets in the Oort Cloud is estimated to be 40 times that of Earth, and the comets are typically tens of millions of kilometers apart. Passing stars can disturb the orbit of one of these icy bodies, causing it to come streaking into the inner Solar System as a long-period comet.
On July 16-22, 1994, over 20 fragments of Comet Shoemaker-Levy 9, traveling at approximately 60 km/sec, collided with Jupiter, providing the first direct observation of an extraterrestrial collision of Solar System objects. The pieces were observed as a series of fragments ranging up to 2 km in diameter. It was the first comet observed to be orbiting a planet, and the first collision of two Solar System bodies ever to be observed; the effects of the impact on Jupiter's atmosphere were spectacular and beyond expectations. The collision occurred about 400 million miles from Earth, far enough away that it did not affect Earth, though such a comet could at one time have posed a severe danger to Earth. The scars from the impacts were more easily visible than the Great Red Spot. A collision of a large comet with a planet is an extraordinary, once-in-a-millennium event.
Gas, a fossil fuel primarily composed of methane, has been a subject of environmental concern due to its impact on climate change. While it is considered the cleanest burning fossil fuel, natural gas extraction, production, and combustion release significant amounts of greenhouse gases into the atmosphere, particularly carbon dioxide (CO2) and methane (CH4).
These emissions contribute to the enhanced greenhouse effect, the primary driver of global warming and climate change.
In 2021, energy-related CO2 emissions reached an all-time high of 40.8 Gt of CO2 equivalent, with gas flaring accounting for 0.7%. The picture is not entirely bleak, however: in 2022, CO2 emissions from natural gas fell by 1.6%, or 118 Mt, with pronounced reductions in Europe and the Asia Pacific region.
Despite its reputation as a ‘cleaner’ energy source compared to coal or oil, gas still poses environmental challenges, including the disruption of ecosystems during drilling, potential water pollution, and the leakage of methane—a potent greenhouse gas—from infrastructure.
As we navigate the complexities of the 21st century, the environmental impact of gas remains a critical global concern. The combustion of gas, a fossil fuel, releases carbon dioxide and other greenhouse gases into the atmosphere, contributing significantly to climate change.
This global perspective underscores the urgency of addressing the environmental impact of gas and the need for concerted efforts towards cleaner, more sustainable energy solutions.
When we refer to “gas” in the context of energy, we typically talk about natural gas, a fossil fuel primarily composed of methane (CH4). This gaseous hydrocarbon is formed over millions of years from the decomposed remains of plants and animals.
This gaseous mixture of hydrocarbons is used across various sectors, including electricity generation, heating, cooking, and as a fuel for vehicles, though its use in transportation is minimal.
Globally, natural gas plays a significant role in the energy mix. The demand for natural gas is expected to grow, with forecasts predicting a 2.5% increase in global gas demand in 2024.
Despite its cleaner-burning nature, natural gas still contributes to greenhouse gas emissions. In 2021, energy-related CO2 emissions reached a record high, with gas flaring contributing to these emissions.
The environmental impact of gas varies by region, with countries like the United States, Russia, and China being among the highest emitters.
The environmental impact of gas, including natural gas and gasoline, is significant due to its contribution to greenhouse gas emissions when burned. While cleaner than coal or oil, natural gas still emits considerable CO2 and other pollutants into the atmosphere.
CO2 emissions from fossil fuels, which include natural gas and gasoline, have increased by about 90% since 1970. The impact of gas per usage can be illustrated by the fact that every five mph increase in vehicle speed over 50 mph is equivalent to paying an extra £0.20-£0.40 per gallon due to decreased fuel efficiency and increased emissions.
Methane, the primary component of natural gas, has a global warming potential 21 times higher than carbon dioxide over 100 years. Furthermore, a study suggests that methane leaks could account for around 10% of natural gas’s contribution to climate change, with CO2 emissions accounting for the other 90%.
This illustrates the impact of natural gas on the environment:
|Gas (Source)|Percentage of Global Emissions|Global Warming Potential (over 100 years)|
|Carbon Dioxide (CO2) from fossil fuel use| |1 (reference gas)|
|Methane (CH4) from natural gas systems|16% of total methane emissions|21 times higher than CO2|
|Nitrous Oxide (N2O) from fuel combustion|Minor compared to CO2 and CH4| |
The impact of gas on the environment is multifaceted, affecting air quality, human health, and the global climate. In 2022, natural gas accounted for 22% of global emissions from fuel combustion, with the most significant emitters being China and the United States.
Annually, the environmental impact of gas usage is substantial. In 2021, U.S. CO2 emissions from natural gas combustion for energy accounted for about 34% of total U.S. energy-related CO2 emissions. Air pollution from oil and natural gas production causes roughly £77 billion in health impacts nationwide annually.
Globally, fossil fuels, which include coal, oil, and gas, are responsible for over 75% of greenhouse gas emissions, with the energy sector, including transportation, electricity, and heat, being the most significant contributor.
Daily, every gallon of gasoline burned creates approximately 8,887 grams of CO2. This translates to significant daily emissions, considering the vast number of vehicles and the amount of gas consumed daily.
The average passenger vehicle emits about 400 grams of CO2 per mile, leading to an annual emission of about 4.6 metric tons of CO2 per vehicle.
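Those per-gallon and per-mile figures can be cross-checked with a little arithmetic; the fuel-economy and mileage numbers below are implications derived from the quoted statistics, not figures from the source:

```python
CO2_G_PER_GALLON = 8887   # grams of CO2 per gallon of gasoline burned
CO2_G_PER_MILE = 400      # average passenger-vehicle emission rate
ANNUAL_CO2_TONNES = 4.6   # metric tons of CO2 per vehicle per year

# 8,887 g/gallon at 400 g/mile implies a fleet-average fuel economy:
implied_mpg = CO2_G_PER_GALLON / CO2_G_PER_MILE
# 4.6 t/year at 400 g/mile implies an average annual mileage:
implied_miles_per_year = ANNUAL_CO2_TONNES * 1_000_000 / CO2_G_PER_MILE

print(f"implied fuel economy: {implied_mpg:.1f} mpg")
print(f"implied annual mileage: {implied_miles_per_year:,.0f} miles")
```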
Each gas usage, whether a car journey, a flight, or electricity generation, has a quantifiable environmental impact. The environmental consequences of using gas vary across sectors. In the energy production sector, natural gas emits 50 to 60 per cent less CO2 than coal or oil when burned for power generation, but it also releases methane, a potent greenhouse gas.
Manufacturing processes have grown by 203% since 1990, contributing significantly to the increase in greenhouse gas emissions. The energy sector, including transportation, electricity and heat, buildings, manufacturing and construction, is responsible for 75.6% of worldwide greenhouse gas emissions.
|Sector|Sources|Global Greenhouse Gas Emissions (% of global emissions)|
|Electricity and Heat Production|Coal, natural gas, and oil power plants| |
|Industry|Manufacturing, chemical production, waste management| |
|Agriculture, Forestry, and Other Land Use|Crop cultivation, livestock, deforestation| |
|Transportation|Road, rail, air, and marine transport| |
The United States is the largest producer of natural gas, followed by Russia, Iran, and China. The United States also leads in consumption, with Texas being the largest natural gas-consuming state.
In 2024, global gas demand is forecast to grow by 2.5%, or 100 billion cubic metres (BCM), with expected colder winter weather being a contributing factor.
China is the biggest emitter, accounting for 26.4% of global greenhouse gas emissions, followed by the United States at 12.5% and India at 7.06%.
|Annual Gas Production (Billion Cubic Metres)
|Annual Gas Consumption (Billion Cubic Metres)
|Export Volume (Billion Cubic Metres)
|National Iranian Oil Company
|Canadian Natural Resources, Encana
The oil and gas industry is the engine of the world economy, with OPEC member countries holding almost half of the world’s proven natural gas reserves.
The top five countries with the largest natural gas reserves are:
|Gas Reserves (MMcf)
These figures represent the countries’ proven reserves of natural gas.
In its various forms, gas can be toxic and pose significant health risks. The toxicity of a gas depends on its type, concentration, and duration of exposure. Some gases are harmful in small amounts, while others become dangerous at higher concentrations or after prolonged exposure.
Several gases are known for their toxicity. These include, but are not limited to:
|Conjunctivitis, headache, fatigue, cardiovascular events
|Organ damage, death
|Respiratory distress, mucous membrane irritation
|Nausea, paresthesis, respiratory distress, cardiac shock
|Central nervous system damage, death
Eliminating natural gas as an energy source is a complex challenge due to its widespread use for heating, electricity generation, and industrial processes.
In the UK, for example, the government has set targets to reduce the use of gas significantly. The UK is estimated to stop using gas after 2035, with a phase-out of 80% of gas boilers from UK homes by that year.
However, getting rid of gas, particularly in homes, involves a process known as electrification. This entails converting all heating, cooling, and appliances to electricity, effectively removing natural gas from the house. This process is feasible as electricity can power all appliances, generate heat, and even power cars.
However, this transition requires careful planning and execution. It involves replacing gas-powered appliances with electric options, which can be a significant upfront cost.
Biodegradability is often associated with solid waste, particularly plastics and organic materials. However, it can also apply to gases, albeit in a different context.
Gases, like solid and liquid waste, can undergo biodegradation.
Depending on the environment, the decomposition process can take weeks to years. For instance, peak landfill gas production usually occurs 5 to 7 years after dumping waste. Almost all gas is produced within 20 years after waste is dumped; however, small quantities of gas may continue to be emitted from a landfill for 50 or more years.
The biodegradability of gases has significant implications for environmental sustainability and waste management. For instance, the methane produced from the decomposition of organic waste in landfills can be captured and used as a renewable energy source, reducing greenhouse gas emissions and contributing to energy sustainability.
These fuels can be derived from renewable feedstocks, reducing greenhouse gas emissions and offering a sustainable alternative to traditional fossil fuels.
Yes, gas can be recycled in several ways, depending on the gas type. In many industries, gases are recycled to improve efficiency, reduce emissions, and save costs.
|Use of Recycled Gas
|Oil and Gas
|Enhanced oil recovery, energy source
|Reused in manufacturing processes
|Coke oven gas, Blast furnace gas, Converter gas
|Heating, electricity generation
|CO2, Landfill gas
|Plant growth, carbonated beverages, energy source
While the recycling of gases may not be as well-known as the recycling of solid materials, it is a vital practice in many industries and environmental conservation efforts.
The question of whether gas is sustainable is complex and multifaceted. The environmental impact, particularly methane emission, is a significant concern. Methane is about 84 times more potent than CO2 over 20 years.
|Gas|Atmospheric Lifetime|Global Warming Potential (over 20 years)|
|Methane (CH4)| |84-87 times greater than CO2|
|Carbon Dioxide (CO2)|Up to 100+ years|1 (reference gas)|
However, strategies such as gas recycling and renewable gas alternatives like RNG offer promising pathways towards a more sustainable use of gas.
RNG drastically reduces carbon emissions by an average of 300% versus diesel, and unlike conventional natural gas, it is not a fossil fuel and does not involve drilling.
|Bridge fuel in energy transition, but methane emissions are a concern
|A significant source of methane emissions
|Renewable Natural Gas
|Reduces carbon emissions, not fossil fuel
Alternatives to natural gas present a pathway to a more sustainable and environmentally friendly future. These alternatives are not only environmentally friendly but also offer a range of benefits over conventional natural gas.
The sustainability of these alternatives is often superior to that of natural gas. They offer the potential for reduced greenhouse gas emissions, lower environmental impact, and, in some cases, better efficiency and cost-effectiveness.
Natural gas plays a significant role in the global energy landscape. Here, we delve into the key statistics, facts, and figures related to gas worldwide.
2022 global gas production remained stable, following a 4.3% increase in 2021.
Despite a 12% fall in Russia’s production due to lower exports to Europe, the overall global production was balanced by higher outputs in North America, the Middle East, China, and Australia.
The world has proven gas reserves equivalent to 6,923 trillion cubic feet (Tcf), about 52.3 times its annual consumption.
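These two figures imply an annual consumption rate; a quick consistency check (the cubic-feet-to-cubic-metres conversion factor is assumed, and the derived numbers are not from the source):

```python
RESERVES_TCF = 6923     # proven reserves, trillion cubic feet (Tcf)
YEARS_OF_SUPPLY = 52.3  # reserves-to-consumption ratio from the text
CUBIC_FEET_PER_CUBIC_METRE = 35.3147  # assumed conversion factor

implied_annual_tcf = RESERVES_TCF / YEARS_OF_SUPPLY
implied_annual_trillion_m3 = implied_annual_tcf / CUBIC_FEET_PER_CUBIC_METRE

print(f"implied annual consumption: {implied_annual_tcf:.0f} Tcf")
print(f"                          = {implied_annual_trillion_m3:.2f} trillion cubic metres")
```

The result, roughly 3.7 trillion cubic metres per year, is in the same ballpark as the 2022 consumption figure quoted later in the FAQ.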
Russia held the largest share of these reserves at 24.3%, followed by Iran (17.3%) and Qatar (12.5%)
Russia is the second-largest natural gas exporter globally, following the United States. Qatar and Norway were the next largest exporters.
The United States is the largest consumer, followed by Russia and China.
Interestingly, while the United States saw a slight decrease in consumption, Russia and China recorded increases of over 11%.
The Asia Pacific region, particularly China and India, is expected to drive over half of the incremental global gas consumption through 2025, with an average annual growth rate of 1.5%.
The U.S. is expected to increase its LNG exports to an average of 12.0 billion cubic feet per day (Bcf/d) in 2023, with two new LNG export projects expected to come online by the end of 2024.
Global gas demand is set to grow by an average of 1.6% between 2022 and 2026, down from an average of 2.5% between 2017 and 2021.
There is a shift in global LNG demand from Asia to Europe, with Europe requiring 4.8 Bcf/d of gas until 2030, representing about half of the US LNG export in 2021.
By 2025, the U.S. is projected to reach an annual gas production of over 1,030 billion cubic meters (BCM), growing by 1.2% annually.
Natural gas is a fossil fuel primarily composed of methane. It is used in various applications, including heating homes and powering vehicles, and it serves as a critical input in industries such as power generation and manufacturing.
Several factors influence natural gas prices, including supply and demand, weather conditions, economic growth, and geopolitical events. Prices can vary significantly across different regions due to differences in supply and demand dynamics, infrastructure, and regulatory environments.
Natural gas is not renewable. It is a fossil fuel formed from the remains of plants and animals that lived millions of years ago. Once extracted and used, it cannot be replenished.
As of 2017, there were 6,923 trillion cubic feet (Tcf) of proven gas reserves worldwide. This equates to about 52 years of gas left at current consumption levels, excluding unproven reserves.
As of 2022, global natural gas consumption amounted to roughly 3.94 trillion cubic meters. Consumption varies significantly by country, with some countries, such as the United States and Russia, consuming large amounts due to their size and industrial activity.
Inemesit is a seasoned content writer with 9 years of experience in B2B and B2C. Her expertise in sustainability and green technologies guides readers towards eco-friendly choices, significantly contributing to the field of renewable energy and environmental sustainability.
4th Grade Interactive Math Skill Builders
Geometry - CCSS 4.G.A.1, 4.G.A.2, 4.G.A.3
Links verified on 7/16/2014
- 3D Earth Exploration - Find 3D shapes inside a picture.
- Alien Angles - Rescue friendly aliens by estimating the angle to where they are located. (to within 5 degrees)
- Angle - Guess the Random Angle - A demonstration tool to aid the use of a protractor and test your angle estimation skills.
- Angle Estimator - Estimate which angle is similar to the top one.
- Angle Quiz - Select the correct answer choice to each question. Self checking.
- Angles - The Mission 2110 Roboidz from CBBC join Bitesize to play an angles game
- Baseball Geometry - Select the game to play: identifying, measuring or labeling angles; find the area of triangles, rectangles or circles; or find the perimeter of rectangles and the circumference of circles.
- Do You Know Your Shapes? - shapes activities at BBC
- Identify Geometric Shapes - Match the shape with the correct name to uncover a picture.
- Measuring Angles - Use a protractor to determine the angle. Includes teacher and pupil exercises.
- Patterns for Solid Figures - Make a cube figure.
- Sorting Triangles - Triangle properties - Drag and drop different types of angles in Venn diagram.
- Symmetry Game - For each shape that is shown, determine how many lines of symmetry it has.
Internet4classrooms is a collaborative effort by
Susan Brooks and Bill Byles.
In SAS programming, concatenation is the process of merging two or more character strings to create a single, longer string. This operation is frequently used in data analysis and manipulation, particularly when dealing with text data. Concatenation enables SAS programmers to combine text strings from different sources, perform text searches, and generate new variables based on existing ones. As such, it plays a critical role in SAS programming, and mastering this operation is a key skill for any aspiring SAS programmer.
Method 1: Using the Concatenation Operator (||)
Concatenation is a common task that is performed frequently when working with strings in SAS programming. One method of concatenating strings in SAS is by using the concatenation operator (||).
To use the concatenation operator, simply place two or more strings next to each other, separated by the concatenation operator. For example:
string1 || string2
This will concatenate string1 and string2 into a single string. The concatenation operator can also be used with variables:
var1 || var2
Here is an example of concatenating strings with the concatenation operator:
For example, if you have two variables, first_name and last_name, you can concatenate them into a single variable with:
full_name = first_name || " " || last_name;
It should be noted that the concatenation operator (||) has been available in SAS since version 6 in the late 1980s. It remains a reliable and efficient way to concatenate strings in SAS programming.
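For readers coming from Python, the analogous operation uses the + operator (the variable values here are made up for illustration):

```python
first_name = "Ada"     # hypothetical values, mirroring the SAS example
last_name = "Lovelace"

# Python's + plays the role of SAS's || operator
full_name = first_name + " " + last_name
print(full_name)  # prints: Ada Lovelace
```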
Method 2: Using the CAT Function
The CAT function is an alternative approach to concatenating strings in SAS programming. Unlike the concatenation operator, which must be written between every pair of values, the CAT function takes any number of arguments and concatenates them in a single call. CAT itself does not remove any blanks; blank-stripping is handled by the related CATT, CATS, and CATX functions described below.
Using the CAT family of functions is simple. The syntax is as follows:
|CAT (var1, var2, …, varn)||Concatenates the values passed as arguments, without any delimiter|
|CATT (var1, var2, …, varn)||Concatenates the values passed as arguments, removing any trailing blanks from each argument|
|CATS (var1, var2, …, varn)||Concatenates the values passed as arguments, stripping any leading and trailing blanks from each argument|
|CATX (delimiter, var1, var2, …, varn)||Concatenates the values passed as arguments, separated by the specified delimiter|
Using the CAT function can make your SAS code simpler and more efficient. Here are a few examples of using the CAT function:
data example; set data; newvar = cat(name, ' ', age); run;
data example; set data; newvar = catx('|', name, age, address); run;
data example; set data; newvar = catt(name, 'n', address); run;
Overall, the CAT function is a powerful tool in SAS programming for concatenating strings. Understanding and using its various options can make your code simpler and more efficient.
Method 3: Using the CATT Function
The CATT function is one of the most useful functions for concatenating strings in SAS programming. It concatenates the values passed to it, but first removes trailing blanks from each argument.
One advantage of using the CATT function is that it removes trailing blanks from each argument before concatenating. This can be a big help when dealing with large datasets or when the strings being concatenated have different lengths. Another advantage of using the CATT function is that it is easy to use and understand, even for newcomers to SAS programming.
Here is an example of using the CATT function in SAS programming:
result = catt('SAS ', ' programming');
This merges the strings “SAS” and “programming” together, separated by a space: CATT strips the trailing blank from the first argument but keeps the leading blank on the second, so the result is the string “SAS programming”.
When comparing CATT and CAT functions, one disadvantage of using the CATT function is that it can be slower than using the CAT function. This is due to the extra work required to strip trailing blanks before concatenating. Another disadvantage is that the CATT function does not concatenate numbers as easily as the CAT function, since numeric values are formatted to character values using BEST32.
In summary, the CATT function is a useful tool for concatenating strings in SAS programming, especially when dealing with character strings that have varying lengths. It is easy to use and understand, but can be slower than the CAT function and is not as effective when used with numeric values.
Method 4: Using the CATS Function
The CATS function is another method in SAS programming for concatenating strings. Similar to the CAT and CATT functions, the CATS function joins two or more character strings into a single, longer string. However, the CATS function strips leading and trailing blanks before concatenating the strings.
Using the CATS function in SAS programming is straightforward. To concatenate strings using the CATS function, simply list the character strings you want to join as function arguments. For example, consider the following code:
x = cats('John', ' ', 'Doe');
This code uses the CATS function to concatenate three strings into a single variable, x. Because CATS strips leading and trailing blanks from every argument, the blank-only middle argument is reduced to nothing, so the resulting value of x is ‘JohnDoe’. To keep the space, use CATX with a blank delimiter instead: x = catx(' ', 'John', 'Doe');
One advantage of using the CATS function in SAS programming is that it removes leading and trailing blanks before concatenating the strings. This can be useful for ensuring consistent formatting in your data.
One disadvantage of using the CATS function is that it can be less efficient than the CAT and CATT functions. This is because the CATS function processes an additional step – removing leading and trailing blanks – before concatenating the strings.
Method 5: Using the CATX Function
The CATX function in SAS programming allows for concatenation of strings with a delimiter, providing more flexibility than other concatenation methods. It works by placing a delimiter between values being concatenated, allowing for easier visualization of the separate values in the final output.
For example, suppose we have the following SAS dataset:
|ID||First Name||Last Name|
We can use the CATX function to concatenate the first and last names using a comma delimiter:
fullname = catx(',', first_name, last_name);
The resulting output dataset would appear as:
|ID||First Name||Last Name||Full Name|
The advantages of using the CATX function include the ability to specify a delimiter, which can make the final output easier to visualize and understand. One disadvantage of using the CATX function is that numeric values are formatted to character values using BEST32, which may result in unexpected formatting if not considered.
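Although these are SAS functions, their blank-handling rules are easy to pin down with a short sketch in another language. The following Python model of CAT, CATT, CATS, and CATX handles string arguments only (it ignores SAS's numeric-to-character conversion, and CATX skipping blank-only arguments is my reading of the documentation):

```python
# A Python model of the blank-handling rules of SAS's CAT family.
# String arguments only; numeric formatting is ignored.

def cat(*args):
    # CAT: join the arguments as-is, keeping every blank
    return "".join(args)

def catt(*args):
    # CATT: strip trailing blanks from each argument, then join
    return "".join(a.rstrip() for a in args)

def cats(*args):
    # CATS: strip leading and trailing blanks from each argument, then join
    return "".join(a.strip() for a in args)

def catx(delim, *args):
    # CATX: strip blanks, drop blank-only arguments, join with the delimiter
    return delim.join(s for s in (a.strip() for a in args) if s)

print(cat("SAS ", " programming"))        # SAS  programming  (two blanks survive)
print(catt("SAS ", " programming"))       # SAS programming
print(cats("SAS ", " programming"))       # SASprogramming
print(catx(" ", "SAS ", " programming"))  # SAS programming
```

Running all four on the same pair of inputs makes the differences in the table above concrete.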
Concatenating a Range of Variables in SAS
When working with SAS programming, it is often necessary to concatenate multiple variables into a single string. This can be done by using loops to iterate over a range of variables and concatenating them one by one.
For example, consider a data set with variables named VAR1, VAR2, VAR3, and VAR4. To concatenate all of these variables into a single string, you can use a DO loop as follows:
data CONCATENATE;
  set INPUT;
  length ALL_VARS $200.; /* Set the length of the concatenated string */
  do I = 1 to 4; /* Loop over variables VAR1 to VAR4 */
    /* VVALUEX looks up a variable by name at run time;
       CATX concatenates with a comma separator */
    ALL_VARS = catx(", ", ALL_VARS, vvaluex(cats("VAR", I)));
  end;
run;
The advantage of using this method is the flexibility it provides in terms of the range of variables to be concatenated. You can easily adjust the loop index to concatenate any range of variables you need.
One disadvantage of this approach is that it can be memory-intensive, especially when working with large data sets. Each iteration of the loop creates a new string, which can quickly consume memory.
Another approach to concatenating variables in SAS is to use the array function. Arrays allow you to reference a group of variables using a single name, which makes it easier to concatenate them into a single string.
In conclusion, concatenating a range of variables in SAS requires the use of loops or arrays. While loops provide more flexibility, they may also be more memory-intensive than arrays. It is important to choose the best approach based on the size of your data set and your specific requirements.
Concatenating All Variables of the Same Type in SAS
Concatenating all variables of the same type in SAS is a useful technique for combining multiple strings into a single longer string. This can be accomplished using a variety of SAS functions, including CAT, CATS, CATT, CATX, and vvaluex.
To concatenate all variables of the same type, you can use PROC CONTENTS to create a macro variable containing the variable names of all variables of the same type. For example, the following code creates a macro variable containing the names of all character variables in a SAS data set:
Once you have created the macro variable containing the variable names, you can use the CATS function to concatenate the values of the specified variables:
The above code creates a new variable called “new_var” in the “output” data set, and assigns it the concatenated values of all character variables in the “input” data set.
One advantage of concatenating all variables of the same type is that it can simplify the code necessary to perform certain tasks. For example, concatenating all character variables in a data set can make it easier to search for specific values or patterns within the data.
However, one disadvantage of concatenating all variables of the same type is that it can be memory-intensive for large data sets with many variables. In addition, concatenating all variables in a data set may result in loss of information, if the concatenated string exceeds the length of the maximum character length for SAS variables.
Concatenating Strings in SAS with PROC SQL
String concatenation is a fundamental requirement in SAS programming. Using PROC SQL is an excellent way to concatenate strings in SAS. PROC SQL allows you to join multiple strings into a single, longer string. Below are some examples of how to use PROC SQL to concatenate strings in SAS:
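Since the PROC SQL examples themselves are not reproduced here, below is a hedged sketch of the same idea using SQLite (via Python's sqlite3 module), which accepts the same SQL-standard || concatenation operator that PROC SQL supports; the table and column names are invented for illustration:

```python
# SQL-style concatenation with ||, as PROC SQL also allows.
# The table and columns are made up for this sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (first TEXT, last TEXT)")
conn.execute("INSERT INTO people VALUES ('John', 'Doe')")

# Concatenate two columns with a space between them
row = conn.execute("SELECT first || ' ' || last FROM people").fetchone()
print(row[0])  # John Doe
```

The equivalent PROC SQL step would use the same || expression inside a SELECT clause.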
Using PROC SQL to concatenate strings has its benefits, including:
- PROC SQL is easy to use and learn.
- PROC SQL can be efficient, and for some tasks it is faster than DATA step concatenation.
- PROC SQL can handle different string types, such as numeric and character strings.
However, there are some disadvantages to using PROC SQL for concatenating strings:
- PROC SQL is less flexible when it comes to formatting concatenated strings compared to other SAS functions like CATS and CATX.
- Concatenating missing (null) values in PROC SQL can produce unexpected results, so missing values should be handled before concatenating.
Overall, using PROC SQL to concatenate strings in SAS is a useful and efficient method. Just be aware of its limitations and make sure to consider which method is best for your specific use case.
Commonly asked questions about concatenating strings in SAS programming and their answers:
1. What is concatenation in SAS programming?
Concatenation refers to the process of combining two or more strings to form a single, longer string. This is a common task performed in SAS programming.
2. What are the different methods used for string concatenation in SAS programming?
There are several methods used for string concatenation in SAS programming. These include the CAT, CATT, CATS, and CATX functions, as well as manual looping over PDV variables and using a hash to track variables.
3. What is the difference between different concatenation methods?
The main differences between the concatenation methods lie in how they handle spaces and delimiters, as well as trailing and leading blanks. CAT concatenates values as if the concatenation operator were used. CATT removes trailing blanks from each argument before concatenating. CATS strips leading and trailing blanks before concatenating. CATX places a delimiter between concatenated values and strips leading and trailing blanks.
4. How can I concatenate multiple ranges of variables?
To concatenate multiple ranges of variables, you can use the CATX function together with a variable list and the OF keyword. For example, you could use CATX("-", OF var1-var3) to concatenate variables var1, var2, and var3 with a hyphen delimiter.
5. What are the considerations when deciding which method to use for string concatenation in SAS programming?
When deciding which method to use for string concatenation in SAS programming, it’s important to consider the specific requirements and goals of your program. Some methods may be more efficient or better suited to certain data or task types. Additionally, some methods may handle spaces and delimiters differently, which can impact the output of your program. It’s generally recommended to test different methods and compare their results before deciding on a final approach.
In conclusion, string concatenation is a common task in SAS programming. Strings can be concatenated using the concatenation operator or, since SAS 9, with the CAT, CATT, CATS, and CATX functions. These functions enable SAS programmers to join two or more character strings into a single, longer string, with the option to remove leading or trailing blanks or to customize the delimiter. Using any of these functions can make SAS programming more efficient and organized.
To sum up, SAS programmers can try different methods for string concatenation depending on their specific requirements. They can use concatenation operators, the CAT, CATT, CATS, and CATX functions, or loop over PDV variables using vvaluex. By using the right method for the job, SAS programmers can produce codes that are tidy, effective, and easy to read.
Here are some trusted references and external links that were used in this article:
1. SAS Documentation: Concatenating Strings
2. SAS Programmer’s Guide
3. SAS Whitepaper: Use of SQL in SAS Programs
DNA Structure, Replication, and Technology
Topics in Depth
The Theme of Common Mistakes in DNA Structure, Replication, and Technology
The Genetic Code
Many people confuse DNA with RNA, which is why TV networks canceled their buddy cop drama. A few easy ways to tell the difference are as follows:
- DNA has no 2' hydroxyl, which is why it is called "deoxy" ribonucleic acid. See what they did there?
- DNA is rarely single stranded, while RNA is regularly single stranded.
- DNA has thymine, but RNA has uracil.
- DNA is found as a double helix, while RNA forms structures from base-pairing within the RNA molecule.
Often, when people are trying to read a sequence of DNA, they are unaware that there are two strands: a positive strand that goes from 5' to 3' and a minus strand that goes 3' to 5'. It is important to realize that sequences are always read 5' to 3'. Therefore, if the sequence in question is from the minus strand, you would need to reverse it to know the correct order for the positive strand.
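Recovering the plus-strand sequence from a minus-strand read means both reversing and complementing. A short Python sketch of that helper (a generic utility, not tied to any particular sequence in this unit):

```python
# Reverse-complement a DNA sequence so it reads 5' to 3'
# on the opposite strand.

def reverse_complement(seq):
    # Complement each base (A<->T, G<->C), then reverse the string
    complement = str.maketrans("ACGT", "TGCA")
    return seq.translate(complement)[::-1]

print(reverse_complement("ATGC"))  # GCAT
```

Applying the function twice returns the original sequence, which is a handy sanity check.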
How DNA is packaged can be confusing. Acetyltransferase? Methylwhosiwhatsis? Enzymes are straightforward to understand if you break them down:
- Acetyltransferase = Acetyl + Transfer + Ase
Acetyl is a small chemical group that neutralizes the positive charge on histone tails, transfer means "add to," and –ase is a general suffix added to all enzymes. This enzyme (acetyltransferase) adds an acetyl group to histones. We feel like Sylvester the Cat. Sufferin' succotash!
- Methyltransferase = Methyl + Transfer + Ase
Methyl is a hydrophobic group, and the same as above for the rest of the word. Therefore, this enzyme adds a methyl group.
- Deacetylase = De + Acetyl + Ase
"De" means to "undo" the acetyl group. This enzyme removes acetyl groups.
From Genes to Proteins
One of the biggest confusions about transcription and translation arises from keeping track of which strand of nucleotide sequence is being used for what. The mRNA transcript is identical in sequence to the coding strand of the gene in the DNA, except that uracil replaces thymine. Therefore, the template for the mRNA is the complementary strand of DNA, which (again with U for T) matches the anticodon sequences of the tRNAs that pair with the mRNA.
|Template for mRNA||GTCAGTCAAGACTAG|
|Anticodons||GUC, AGU, CAA, GAC, and UAG|
Many people find translation and transcription confusing, but it is actually straightforward if you think about the roots of the words. Transcription comes from "transcribe," which essentially means to rewrite something. Court reporters are transcribers: they copy what is said aloud onto paper. Transcription in a cell is copying the gene sequence in DNA to RNA. Nucleotides are copied into nucleotides.
However, translation comes from "translate," which suggests the action that RNA information is translated into protein information through amino acids.
One of the major misconceptions of DNA replication and cell division is the role of "interphase" in cell replication. Most people think that interphase is this inert "sleeping" phase, and all the exciting activity happens in mitosis. Yet again, it is those mitosis-centric jerks that keep maligning "interphase."
As mentioned before, interphase is where the cell prepares for another round of mitosis, turning on metabolic processes (G1), replicating the DNA (S), and preparing the infrastructure of microtubules for mitosis/meiosis to occur (G2). Therefore, even though it is not as exciting as mitosis, it is still a lot of work. In fact, the cell spends most of its time in interphase.
Polymerase Chain Reactions
One of the biggest problems with PCR is proper primer design. If you wanted to amplify a gene of interest, you need to select primers that match the gene properly. Therefore, you need to make a 5' primer that is identical to the start of the gene, and a 3' primer that is identical in sequence to the reverse complement of the 3' end of the gene. If we take the example from Robertson and Phillips (CITATION) below showing the aroA gene:
We want to amplify the underlined sequence, so we will design primers that will match the grey sequence. Therefore, our 5' primer will be: 5'-GGAAGGGAGTGGTGAAGAG-3', and our 3' primer will be: 5'-CTGCAAAGAACCATCAGGC-3'. Notice that the 3' primer is the reverse complement of the 2nd grey sequence, while the 5' primer is identical to the 1st grey sequence.
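The primer-design rule can be sketched in Python using a made-up target sequence (not the aroA sequence from the citation, which is not reproduced here): the forward (5') primer is identical to the start of the target, and the reverse (3') primer is the reverse complement of the target's end.

```python
# Primer design sketch: forward primer matches the target's 5' end,
# reverse primer is the reverse complement of the target's 3' end.

def reverse_complement(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

target = "GGAAGGCTTACCGATGCAGTTAACCGTACGATTCCA"  # hypothetical target
fwd_primer = target[:12]                        # identical to the 5' end
rev_primer = reverse_complement(target[-12:])   # revcomp of the 3' end

print(fwd_primer)
print(rev_primer)
```

Reverse-complementing the reverse primer recovers the last stretch of the target, which is exactly the relationship described above.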
Many people think of their genomes as containing only genetic information. There are a lot of DNA sequences that do not have a function, or have functions that we do not know of yet. The name "junk DNA" is often applied to these DNA sequences; however, it is unclear whether these sequences are actually "junk," or if they serve a function that we have yet to understand. Intronic sequences sometimes have promoters that encode mRNAs that are antisense to the RNAs that regulate mRNA expression. Antisense RNAs bind sense mRNAs to destroy the mRNAs. The flanking sequence also encodes the enhancer sequence that affects activation of gene expression. And, sometimes, the distance of the enhancer sequence is important for expression, so it remains unclear whether these can be considered "junk," either. In the words of our mom, all bases are special because they make us who we are.
Hopefully, you do not come out of this unit thinking there is a SUPER APE gene, because there is not. Beyond that, many people are concerned about the use of recombinant DNA, though most of these concerns are unwarranted. In many instances, especially for crops, genetic engineering is not too dissimilar to what has already been going on for years by cross-breeding plants, like Mendel's work. Recombinant DNA has sped up the process in many instances. Though, in some instances, genetic engineers have inserted genes that never could be introduced naturally, like tomatoes with fish genes.
Many of the fears of using biotechnology come from human conventions of what is "natural." Putting pig hearts in humans sounds terrible unless you need a heart transplant. Because much of the technology is in its infancy, the failures of biotechnology are always overpublicized compared to the successes. That being said, there should also be more caution when introducing genetically modified products for general usage: they must be properly tested beforehand to confirm that they cause no detrimental effects to the health of humans, the environment, or the planet at large.
If the given condition is met, the cell will be filled with the value given as the second argument. In other cases, the cell will be filled with the value given as the third argument.
=IF(Logical test,
Value if true,
Value if false)
|1||Logical test||The condition that is checked to be True or False.|
|2||Value if True||The output that will be returned by Excel if the logical test is True.|
|3||Value if False||The output that will be returned by Excel if the logical test is False.|
Try it for yourself
Take a look at the formulas in cells C2:C5 and try to do it yourself in cell C6.
If the score of a student is higher than 60, the function returns the value for True, which is ‘Pass’. If the value is lower than 60 or equal to 60, it will return the value for False, which is ‘Fail’.
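The pass/fail logic of this example can be mirrored in a few lines of Python (the 60-point cutoff comes from the worked example above):

```python
# Mirrors the Excel formula =IF(score > 60, "Pass", "Fail")

def grade(score):
    # 60 itself is not strictly greater than 60, so it fails
    return "Pass" if score > 60 else "Fail"

print(grade(75))  # Pass
print(grade(60))  # Fail
```

As in the spreadsheet, a score of exactly 60 returns the False branch.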
Certainly the best way to understand what a condition is and how they can be useful is by looking at some examples.
What type of conditions are there?
The most often used way of creating conditions is by combining any sort of data (numbers, text, dates, etc) with a comparison operator. For example:
5 < 6, 5 is indeed less than 6, so the statement is True.
10 < 8, 10 is not less than 8, it is greater than 8, so this is False.
"Textual data1" = "Textual data2", the two texts are not the same, therefore the statement is False.
1-1-2000 <> 2-1-2000, the dates are unequal, which is exactly what the operator says, so the statement is True.
Here’s an overview of the operators you can use to form conditions:
|<>||NOT equal to|
|>=||Greater than or equal to|
|<=||Less than or equal to|
However, conditions do not have to be built from comparison operators; there are other functions that evaluate to True or False that can be used as a condition. For example:
=IF(ISBLANK(A1), "A1 is blank", "A1 is not blank")
You can see this one in action down at the examples.
Can If functions be nested? Yes they can. The second and third argument of an if function can, in turn, be if functions themselves. And as long as the result is True or False, even the first argument can be an if function. You can see an example of this in our Complete Guide on If.
You probably did not fill in the second or third argument of the function. Add arguments two and three.
This means that the formula name was not recognized. It was probably misspelled. Make sure the function name is spelled correctly and try again.
Using functions as condition
Instead of regular conditions, you can also use functions that evaluate to True or False as conditions for an If function. In this example the ISBLANK function is used as a condition.
ISBLANK is True when the cell is empty and False when the cell is not.
Try typing anything in cell A1 to see B1 change.
An ideal spring has an equilibrium length. If a spring is compressed, a force with magnitude proportional to the decrease in length from the equilibrium length pushes each end away from the other. If a spring is stretched, a force with magnitude proportional to the increase in length from the equilibrium length pulls each end towards the other.
The force exerted by a spring on
objects attached to its ends is proportional to the spring's change
in length away from its equilibrium length and is always directed
towards its equilibrium position.
Assume one end of a spring is fixed to a wall or ceiling and an object pulls or pushes on the other end. The object exerts a force on the spring and the spring exerts a force on the object. The force F the spring exerts on the object is in a direction opposite to the displacement of the free end. If the x-axis of a coordinate system is chosen parallel to the spring and the equilibrium position of the free end of the spring is at x = 0, then
F = -kx.
The proportional constant k is called the spring constant. It is a measure of the spring's stiffness.
When a spring is stretched or compressed, so that its length changes by an amount x from its equilibrium length, then it exerts a force F = -kx in a direction towards its equilibrium position. The force a spring exerts is a restoring force, it acts to restore the spring to its equilibrium length.
A stretched spring supports a 0.1 N weight. Adding another 0.1 N weight stretches the spring by an additional 3.5 cm. What is the spring constant k of the spring?
k = |F/x| = (0.1 N)/(0.035 m) ≈ 2.86 N/m.
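A quick numeric check of this example: only the additional weight and the additional stretch enter Hooke's law.

```python
# Spring constant from the extra load and the extra stretch
extra_force = 0.1      # N (the added weight)
extra_stretch = 0.035  # m (3.5 cm of additional stretch)

k = extra_force / extra_stretch
print(round(k, 2))  # 2.86 (N/m)
```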
You want to know your weight. You get onto the bathroom scale. You want to
know how much cabbage you are buying in the grocery store. You put the cabbage
onto the scale in the grocery store.
The bathroom scale and the scale in the grocery store are probably spring scales. They operate on a simple principle. They measure the stretch or the compression of a spring. When you stand still on the bathroom scale the total force on you is zero. Gravity acts on you in the downward direction, and the spring in the scale pushes on you in the upward direction. The two forces have the same magnitude.
Since the force the spring exerts on you is equal in magnitude to your weight, you exert a force equal to your weight on the spring, compressing it. The change in length of the spring is proportional to your weight.
Spring scales use a spring of known spring constant and provide a calibrated readout of the amount of stretch or compression. Spring scales measure forces. They determine the weight of an object. On the surface of the earth weight and mass are proportional to each other, w = mg, so the readout can easily be calibrated in units of force (N or lb) or in units of mass (kg). On the moon, your bathroom spring scale calibrated in units of force would accurately report that your weight has decreased, but your spring scale calibrated in units of mass would inaccurately report that your mass has decreased.
Spring scales obey Hooke's law, F = -kx. Hooke's law is remarkably general. Almost any object that can be distorted pushes or pulls with a restoring force proportional to the displacement from equilibrium towards the equilibrium position, for very small displacements. However, when the displacements become large, the elastic limit is reached. The stiffer the object, the smaller the displacement it can tolerate before the elastic limit is reached. If you distort an object beyond the elastic limit, you are likely to cause permanent distortion or to break the object.
The elastic properties of linear objects, such as wires, rods, and columns
which can be stretched or compressed, can be described by a parameter called the
Young's modulus of the material.
Before the elastic limit is reached, Young's modulus Y is the ratio of the force
per unit area F/A, called the stress, to the fractional change in length ∆L/L.
(This is an equation relating magnitudes. All quantities are positive.)
Y = (F/A)/(∆L/L), F/A = Y∆L/L.
Young's modulus is a property of the material. It can be used to predict the elongation or compression of an object before the elastic limit is reached.
Consider a metal bar of initial length L and cross-sectional area A.
The Young's modulus of the material of the bar is Y. Find the "spring
constant" k of such a bar for low values of tensile strain.
From the definition of Young's modulus: F = Y A ∆L/L.
From the definition of the spring constant: F = k∆L. (Equation, relating magnitudes, ∆L = magnitude of the displacement from equilibrium.)
Therefore k = Y A/L.
Consider a steel guitar string of initial length L = 1 m and cross-sectional
area A = 0.5 mm².
The Young's modulus of the steel is Y = 2×10¹¹ N/m².
How much would such a string stretch under a tension of 1500 N?
∆L = F·L/(Y·A) = (1500 N)(1 m)/((2×10¹¹ N/m²)(0.5 mm²)(1 m/10³ mm)²) = 0.015 m = 15 mm.
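The same computation in Python, with the cross-sectional area converted from mm² to m²:

```python
# Stretch of the guitar string: dL = F*L / (Y*A)
F = 1500.0    # N, tension
L = 1.0       # m, initial length
Y = 2e11      # N/m^2, Young's modulus of steel
A = 0.5e-6    # m^2  (0.5 mm^2 converted to m^2)

dL = F * L / (Y * A)
print(dL)  # 0.015 m, i.e. 15 mm
```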
In order to compress or stretch a spring, you have to do work. You must exert a force on the spring equal in magnitude to the force the spring exerts on you, but opposite in direction. The force you exert is in the direction of the displacement x from its equilibrium position, Fext = kx. You therefore do work.
W = ∫x1x2 F(x) dx = Favg(x2 − x1).
Because the force grows linearly with displacement, the average force is Favg = (kx2 + kx1)/2, so
W = ½k(x2 + x1)(x2 − x1) = ½k(x2² − x1²).
Work = area under the curve = width × average height = (x2 − x1)·k(x2 + x1)/2 = ½k(x2² − x1²).
The average force you exert as you change the displacement from 0 to x is (1/2)kx.
The work you do when stretching or compressing a spring a distance x from its equilibrium position therefore is W = ½kx².
When a 4 kg mass is hung vertically on a certain light spring that obeys
Hooke's law, the spring stretches 2.5 cm. If the
4 kg mass is removed,
(a) how far will the spring stretch if a 1.5 kg mass is hung on it, and
(b) how much work must an external agent do to stretch the same spring 4 cm from its unstretched position?
(a) We find the spring constant of the spring from the given data.
F = -kx.
F = -mg = -(4 kg)(9.8 m/s²) = -39.2 N.
k = F/x = (39.2 N)/(0.025 m) = 1568 N/m.
Now we use x = -F/k to find the displacement of a 1.5 kg mass.
F = -(1.5 kg)(9.8 m/s²) = -14.7 N.
x = (14.7 N)/(1568 N/m) = 0.009375 m = 0.9375 cm.
(b) W = ½kx² = ½(1568 N/m)(0.04 m)² = 1.2544 J.
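A quick check of both parts in Python, reusing the spring constant found in part (a):

```python
# Part (a): spring constant from the 4 kg data, then the 1.5 kg stretch.
# Part (b): work to stretch the spring 4 cm from its unstretched position.
g = 9.8                  # m/s^2

k = (4 * g) / 0.025      # N/m, from the 4 kg mass stretching 2.5 cm
x = (1.5 * g) / k        # m, stretch under a 1.5 kg mass
W = 0.5 * k * 0.04**2    # J, work for a 4 cm extension

print(k)  # 1568.0 N/m (within float rounding)
print(x)  # 0.009375 m, i.e. 0.9375 cm
print(W)  # about 1.2544 J
```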
If it takes 4 J of work to
stretch a Hooke's law spring 10 cm from its unstretched length, determine the
extra work required to stretch it an additional 10 cm.
The work done in stretching or compressing a spring is proportional to the square of the displacement. If we double the displacement, we do 4 times as much work. It takes 16 J to stretch the spring 20 cm from its unstretched length, so it takes 12 J to stretch it from 10 cm to 20 cm.
W = ½kx². Given W and x we find k.
4 J = ½k(0.1 m)²; k = (8 J)/(0.1 m)² = 800 N/m.
Now x = 0.2 m. W = ½(800 N/m)(0.2 m)² = 16 J.
∆W = 16 J − 4 J = 12 J.
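Because W grows with the square of x, doubling the stretch quadruples the work; checking the numbers:

```python
# Extra work to stretch a Hooke's-law spring from 10 cm to 20 cm,
# given that the first 10 cm takes 4 J.
W_10cm = 4.0                   # J, given
k = 2 * W_10cm / 0.10**2       # N/m, from W = (1/2) k x^2
W_20cm = 0.5 * k * 0.20**2     # J, work for 20 cm total
extra = W_20cm - W_10cm        # J, work for the second 10 cm

print(k)      # about 800 N/m
print(W_20cm) # about 16 J
print(extra)  # about 12 J
```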
The radius of the circular path is given by r = mv/(qB).
Since v, q, and B are the same for the proton and the electron, the more-massive proton
travels on the circle with the greater radius. The centripetal force Fc acting on the proton
must point toward the center of the circle. In this case, the centripetal force is provided by
the magnetic force F. According to Right-Hand Rule No. 1, the direction of F is related to
the velocity v and the magnetic field B. An application of this rule shows that the proton must travel counterclockwise around the circle in order that the magnetic force point toward
the center of the circle.
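A quick numeric illustration of r = mv/(qB): with equal v, q, and B, the radius scales with the mass, so the proton's circle is larger. The constants below are standard values, not numbers from the text.

```python
# Radius of the circular path r = m v / (q B) for an electron and a proton
# moving with the same speed in the same field.
m_e = 9.11e-31   # kg, electron mass
m_p = 1.67e-27   # kg, proton mass
q = 1.60e-19     # C, magnitude of each charge
v = 1.0e6        # m/s, illustrative speed
B = 0.50         # T, illustrative field

r_e = m_e * v / (q * B)
r_p = m_p * v / (q * B)
print(r_p / r_e)  # about 1833, the proton/electron mass ratio used here
```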
9. r(proton)/r(electron) = 1835
10. (c) When, for example, a particle moves perpendicular to a magnetic field, the field exerts a
force that causes the particle to move on a circular path. Any object moving on a circular
path experiences a centripetal acceleration.
11. F = 3.0 N, along the −y axis
12. (e) The magnetic field is directed from the north pole to the south pole (Section 21.1).
According to Right-Hand Rule No. 1 (Section 21.5), the magnetic force in drawing 1 points
13. (c) There is no net force. No force is exerted on the top and bottom wires, because the
current is either in the same or opposite direction as the magnetic field. According to Right-Hand Rule No. 1 (Section 21.5), the left side of the loop experiences a force that is directed
into the screen, and the right side experiences a force that is directed out of the screen
(toward the reader). The two forces have the same magnitude, so the net force is zero. The
two forces on the left and right sides, however, do exert a net torque on the loop with respect
to the axis.
14. (d) According to Right-Hand Rule No. 1 (Section 21.5), all four sides of the loop are
subject to forces that are directed perpendicularly toward the opposite side of the square. In
addition, the forces have the same magnitude, so the net force is zero. A torque consists of a
force and a lever arm. For the axis of rotation through the center of the loop, the lever arm
for each of the four forces is zero, so the net torque is also zero.
15. N = 86 turns
16. (a) Right-Hand Rule No. 2 (Section 21.7) indicates that the magnetic field from the top wire
in 2 points into the screen and that from the bottom wire points out of the screen. Thus, the
net magnetic field in 2 is zero. Also, the magnetic field from the horizontal wire in 4 points
into the screen and that from the vertical wire points out of the screen. Thus, the net
magnetic field in 4 is also zero.
17. (b) Two wires attract each other when the currents are in the same direction and repel each
other when the currents are in the opposite direction (see Section 21.7). Wire B is attracted
to A and repelled by C, but the forces reinforce one another. Therefore, the net force has a
magnitude of FBA + FBC, where FBA and FBC are the magnitudes of the forces exerted on
wire B by A and on wire B by C. However, FBA = FBC, since the wires A and C are
equidistance from B. Therefore, the net force on wire B has a magnitude of 2FBA. The net
force exerted on wire A is less than this, because wire A is attracted to B and repelled by C, the forces partially canceling. The net force exerted on wire C is also less than that on A. It
is repelled by both A and B, but A is twice as far away as B.
18. (a) The magnetic field in the region inside a solenoid is constant, both in magnitude and in
direction (see Section 21.7).
19. B = 4.7 × 10−6 T, out of the screen
20. (d) According to Ampere’s law, I is the net current passing through the surface bounded by
the path. The net current is 3 A + 4 A − 5 A = 2 A.

CHAPTER 21 MAGNETIC FORCES AND MAGNETIC FIELDS: PROBLEMS
1. SSM REASONING AND SOLUTION The magnitude of the force can be determined
using Equation 21.1, F = q vB sin θ, where θ is the angle between the velocity and the
magnetic field. The direction of the force is determined by using Right-Hand Rule No. 1.
a. F = q vB sin 30.0° = (8.4 × 10−6 C)(45 m/s)(0.30 T) sin 30.0° = 5.7 × 10−5 N, directed into the paper.
b. F = q vB sin 90.0° = (8.4 × 10−6 C)(45 m/s)(0.30 T) sin 90.0° = 1.1 × 10−4 N, directed into the paper.
c. F = q vB sin 150° = (8.4 × 10−6 C)(45 m/s)(0.30 T) sin 150° = 5.7 × 10−5 N, directed into the paper.
2. REASONING The electron’s acceleration is related to the net force ΣF acting on it by
Newton’s second law: a = ΣF/m (Equation 4.1), where m is the electron’s mass. Since we
are ignoring the gravitational force, the net force is that caused by the magnetic force, whose
magnitude is expressed by Equation 21.1 as F = q0 vB sin θ. Thus, the magnitude of the
electron’s acceleration can be written as a = (q0 vB sin θ)/m.
SOLUTION We note that θ = 90.0°, since the velocity of the electron is perpendicular to
the magnetic field. Th...
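The magnitude calculations in parts (a)–(c) above can be checked numerically (a quick sketch; the charge, speed, and field values come from the problem statement):

```python
import math

# Values from the problem statement
q = 8.4e-6   # charge magnitude, C
v = 45.0     # speed, m/s
B = 0.30     # magnetic field, T

# F = |q| v B sin(theta), Equation 21.1
for theta_deg in (30.0, 90.0, 150.0):
    F = q * v * B * math.sin(math.radians(theta_deg))
    print(f"theta = {theta_deg:5.1f} deg -> F = {F:.1e} N")
```

The 30° and 150° cases give the same magnitude because sin 30° = sin 150° = 0.5.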
DNA polymerase is an enzyme that synthesizes DNA molecules from deoxyribonucleotides, the building blocks of DNA. These enzymes are essential for DNA replication and usually work in pairs to create two identical DNA strands from a single original DNA molecule. During this process, DNA polymerase "reads" the existing DNA strands to create two new strands that match the existing ones.
DNA polymerase adds nucleotides to the three prime (3')-end of a DNA strand, one nucleotide at a time.
Every time a cell divides, DNA polymerases are required to help duplicate the cell's DNA, so that a copy of the original DNA molecule can be passed to each daughter cell. In this way, genetic information is passed down from generation to generation.
Before replication can take place, an enzyme called helicase unwinds the DNA molecule from its tightly woven form, in the process breaking the hydrogen bonds between the nucleotide bases. This opens up or "unzips" the double-stranded DNA to give two single strands of DNA that can be used as templates for replication.
In 1956, Arthur Kornberg and colleagues discovered DNA polymerase I (Pol I) in Escherichia coli. They described the DNA replication process by which DNA polymerase copies the base sequence of a template DNA strand. Kornberg was later awarded the Nobel Prize in Physiology or Medicine in 1959 for this work. DNA polymerase II was discovered by Thomas Kornberg (the son of Arthur Kornberg) and Malcolm E. Gefter in 1970, while further elucidating the role of Pol I in E. coli DNA replication.
The main function of DNA polymerase is to synthesize DNA from deoxyribonucleotides, the building blocks of DNA. The DNA copies are created by the pairing of nucleotides to bases present on each strand of the original DNA molecule. This pairing always occurs in specific combinations: cytosine pairs with guanine, and thymine pairs with adenine. By contrast, RNA polymerases synthesize RNA from ribonucleotides, using either RNA or DNA as a template.
When synthesizing new DNA, DNA polymerase can add free nucleotides only to the 3' end of the newly forming strand. This results in elongation of the newly forming strand in a 5'-3' direction. No known DNA polymerase is able to begin a new chain (de novo); it can only add a nucleotide onto a pre-existing 3'-OH group, and therefore needs a primer at which it can add the first nucleotide. Primers consist of RNA or DNA bases (or both). In DNA replication, the first two bases are always RNA, and are synthesized by another enzyme called primase. Helicase and topoisomerase II are required to unwind DNA from a double-strand structure to a single-strand structure to facilitate replication of each strand consistent with the semiconservative model of DNA replication.
It is important to note that the directionality of the newly forming strand (the daughter strand) is opposite to the direction in which DNA polymerase moves along the template strand. Since DNA polymerase requires a free 3' OH group for initiation of synthesis, it can synthesize in only one direction by extending the 3' end of the preexisting nucleotide chain. Hence, DNA polymerase moves along the template strand in a 3'-5' direction, and the daughter strand is formed in a 5'-3' direction. This difference enables the resultant double-strand DNA formed to be composed of two DNA strands that are antiparallel to each other.
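The pairing rules and antiparallel geometry described above can be sketched in a few lines (a minimal illustration; the sequence is hypothetical):

```python
# Watson-Crick pairing: A-T, C-G
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def synthesize(template_3to5):
    """Polymerase reads the template 3'->5' and emits the daughter
    strand 5'->3', one complementary nucleotide at a time."""
    return "".join(PAIR[base] for base in template_3to5)

# Template written 3'->5'; the daughter comes out 5'->3',
# antiparallel to the template.
template = "TACGGA"            # 3'-TACGGA-5' (hypothetical)
daughter = synthesize(template)
print(daughter)                # ATGCCT, read 5'->3'
```

Note that the daughter sequence is emitted in the opposite chemical direction to the template, which is exactly the antiparallel relationship described above.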
DNA polymerase is not quite perfect, making about one mistake for every billion base pairs copied. Error correction is a property of some, but not all, DNA polymerases. This process corrects mistakes in newly synthesized DNA. When an incorrect base pair is recognized, DNA polymerase moves backwards by one base pair of DNA. The 3'-5' exonuclease activity of the enzyme allows the incorrect base pair to be excised (this activity is known as proofreading). Following base excision, the polymerase can re-insert the correct base and replication can continue forwards. This preserves the integrity of the original DNA strand that is passed onto the daughter cells.
Fidelity is very important in DNA replication. Mismatches in DNA base pairing can potentially result in dysfunctional proteins and could lead to cancer. Many DNA polymerases contain an exonuclease domain, which detects base pair mismatches and removes the incorrect nucleotide so that it can be replaced by the correct one. The shape of, and the interactions accommodating, the Watson-Crick base pair are what primarily contribute to the detection of errors. Hydrogen bonds play a key role in base pair binding and interaction. The loss of an interaction, which occurs at a mismatch, is said to shift the balance of template-primer binding from the polymerase domain to the exonuclease domain. In addition, incorporation of a wrong nucleotide slows DNA polymerization. This delay gives time for the DNA to be switched from the polymerase site to the exonuclease site. Different conformational changes and losses of interaction occur at different mismatches. In a purine:pyrimidine mismatch, the pyrimidine is displaced towards the major groove and the purine towards the minor groove. Relative to the shape of DNA polymerase's binding pocket, steric clashes occur between the purine and residues in the minor groove, and important van der Waals and electrostatic interactions are lost by the pyrimidine. Pyrimidine:pyrimidine and purine:purine mismatches present less notable changes, since the bases are displaced towards the major groove and less steric hindrance is experienced. Although the different mismatches have different steric properties, DNA polymerase is able to detect and differentiate them uniformly and maintain fidelity in DNA replication. DNA polymerization is also critical for many mutagenesis processes and is widely employed in biotechnologies.
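The insert-check-excise cycle described above can be sketched as a toy model (illustrative only; the error rate and template are hypothetical, and real proofreading is a biochemical equilibrium, not an explicit algorithm):

```python
import random

# Watson-Crick pairing: A-T, C-G
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def replicate_with_proofreading(template, error_rate=0.01, seed=0):
    """Toy model: the polymerase occasionally inserts a wrong base;
    the 3'->5' exonuclease then backs up one position, excises it,
    and the correct base is re-inserted before synthesis continues."""
    rng = random.Random(seed)
    daughter = []
    for base in template:
        correct = PAIR[base]
        inserted = correct
        if rng.random() < error_rate:          # misincorporation event
            inserted = rng.choice([b for b in "ACGT" if b != correct])
        if inserted != correct:                # mismatch detected
            daughter.append(inserted)          # wrong base briefly in place
            daughter.pop()                     # 3'->5' exonuclease excises it
            inserted = correct                 # correct base re-inserted
        daughter.append(inserted)
    return "".join(daughter)

template = "TACGGATTC" * 3                     # hypothetical template
copy = replicate_with_proofreading(template, error_rate=0.2)
assert copy == "".join(PAIR[b] for b in template)  # error-free after proofreading
```

Even with an exaggerated 20% misincorporation rate, every error is caught and corrected in this idealized model, which is the sense in which proofreading preserves fidelity.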
The known DNA polymerases have highly conserved structure, which means that their overall catalytic subunits vary very little from species to species, independent of their domain structures. Conserved structures usually indicate important, irreplaceable functions of the cell, the maintenance of which provides evolutionary advantages. The shape can be described as resembling a right hand with thumb, finger, and palm domains. The palm domain appears to function in catalyzing the transfer of phosphoryl groups in the phosphoryl transfer reaction. DNA is bound to the palm when the enzyme is active. This reaction is believed to be catalyzed by a two-metal-ion mechanism. The finger domain functions to bind the nucleoside triphosphates with the template base. The thumb domain plays a potential role in the processivity, translocation, and positioning of the DNA.
DNA polymerase's rapid catalysis is due to its processive nature. Processivity is a characteristic of enzymes that function on polymeric substrates. In the case of DNA polymerase, the degree of processivity refers to the average number of nucleotides added each time the enzyme binds a template. The average DNA polymerase requires about one second to locate and bind a primer/template junction. Once bound, a nonprocessive DNA polymerase adds nucleotides at a rate of one nucleotide per second. Processive DNA polymerases, however, add multiple nucleotides per second, drastically increasing the rate of DNA synthesis. The degree of processivity is directly proportional to the rate of DNA synthesis. The rate of DNA synthesis in a living cell was first determined as the rate of phage T4 DNA elongation in phage-infected E. coli: during the period of exponential DNA increase at 37 °C, the rate was 749 nucleotides per second.
DNA polymerase's ability to slide along the DNA template allows increased processivity. There is a dramatic increase in processivity at the replication fork, facilitated by the polymerase's association with proteins known as the sliding DNA clamp. The clamps are multiple protein subunits associated in the shape of a ring. Using the hydrolysis of ATP, a class of proteins known as sliding clamp loading proteins open up the ring structure of the sliding DNA clamps, allowing binding to and release from the DNA strand. Protein-protein interaction with the clamp prevents DNA polymerase from diffusing from the DNA template, thereby ensuring that the enzyme binds the same primer/template junction and continues replication. DNA polymerase changes conformation, increasing affinity to the clamp when associated with it and decreasing affinity when it completes the replication of a stretch of DNA, allowing release from the clamp.
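The effect of processivity on overall synthesis rate can be illustrated with a simple model: each binding event costs roughly one second (as noted above), after which the enzyme adds some number of nucleotides before dissociating. The processivity and rate figures below are illustrative assumptions, apart from the ~750 nt/s echoing the T4 measurement above:

```python
def effective_rate(total_nt, processivity, nt_per_sec, binding_time=1.0):
    """Average synthesis rate when each binding event costs
    `binding_time` seconds and adds `processivity` nucleotides
    at `nt_per_sec` before the enzyme dissociates."""
    binding_events = total_nt / processivity
    synth_time = total_nt / nt_per_sec
    return total_nt / (binding_events * binding_time + synth_time)

# Nonprocessive: rebinds for every nucleotide, ~1 nt/s while bound.
low = effective_rate(10_000, processivity=1, nt_per_sec=1)
# Highly processive (clamp-assisted): thousands of nt per binding event.
high = effective_rate(10_000, processivity=5_000, nt_per_sec=750)
print(f"{low:.2f} nt/s vs {high:.0f} nt/s")
```

The model makes the point in the text concrete: when the clamp keeps the enzyme on the template, the one-second binding cost is paid rarely, so the effective rate approaches the intrinsic polymerization rate.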
Based on sequence homology, DNA polymerases can be further subdivided into seven different families: A, B, C, D, X, Y, and RT.
Some viruses also encode special DNA polymerases, such as Hepatitis B virus DNA polymerase. These may selectively replicate viral DNA through a variety of mechanisms. Retroviruses encode an unusual DNA polymerase called reverse transcriptase, which is an RNA-dependent DNA polymerase (RdDp). It polymerizes DNA from a template of RNA.
|Family|Types of DNA polymerase|Species|Examples|Feature|
|---|---|---|---|---|
|A|Replicative and repair polymerases|Eukaryotic and prokaryotic|T7 DNA polymerase, Pol I, Pol γ, θ, and ν|Two exonuclease domains (3'-5' and 5'-3')|
|B|Replicative and repair polymerases|Eukaryotic and prokaryotic|Pol II, Pol B, Pol ζ, Pol α, δ, and ε|3'-5' exonuclease (proofreading); viral ones use protein primer|
|C|Replicative polymerases|Prokaryotic|Pol III|3'-5' exonuclease (proofreading)|
|D|Replicative polymerases|Euryarchaeota|PolD (DP1/DP2 heterodimer)|No "hand" feature, RNA polymerase-like; 3'-5' exonuclease (proofreading)|
|X|Replicative and repair polymerases|Eukaryotic|Pol β, Pol σ, Pol λ, Pol μ, and terminal deoxynucleotidyl transferase|Template-independent; 5' phosphatase (only Pol β)|
|Y|Replicative and repair polymerases|Eukaryotic and prokaryotic|Pol η, Pol ι, Pol κ, Pol IV, and Pol V|Translesion synthesis|
|RT|Replicative and repair polymerases|Viruses, retroviruses, and eukaryotic|Telomerase, Hepatitis B virus|RNA-dependent|
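The family classification can also be captured as a small lookup structure (a sketch; entries abbreviated from the table above):

```python
# Polymerase families keyed by sequence-homology family (from the table above)
FAMILIES = {
    "A":  ["T7 DNA polymerase", "Pol I", "Pol γ", "Pol θ", "Pol ν"],
    "B":  ["Pol II", "Pol B", "Pol ζ", "Pol α", "Pol δ", "Pol ε"],
    "C":  ["Pol III"],
    "D":  ["PolD (DP1/DP2 heterodimer)"],
    "X":  ["Pol β", "Pol σ", "Pol λ", "Pol μ", "TdT"],
    "Y":  ["Pol η", "Pol ι", "Pol κ", "Pol IV", "Pol V"],
    "RT": ["Telomerase", "HBV reverse transcriptase"],
}

def family_of(polymerase):
    """Return the family letter for a named polymerase, or None."""
    for family, members in FAMILIES.items():
        if polymerase in members:
            return family
    return None

assert family_of("Pol III") == "C"
assert family_of("Pol I") == "A"
```

This is purely a convenience representation of the table; the grouping itself comes from sequence homology, as the text notes.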
Prokaryotic polymerases exist in two forms: core polymerase and holoenzyme. Core polymerase synthesizes DNA from the DNA template but it cannot initiate the synthesis alone or accurately. Holoenzyme accurately initiates synthesis.
Prokaryotic family A polymerases include the DNA polymerase I (Pol I) enzyme, which is encoded by the polA gene and ubiquitous among prokaryotes. This repair polymerase is involved in excision repair, with both 3'-5' and 5'-3' exonuclease activity, and in processing of Okazaki fragments generated during lagging strand synthesis. Pol I is the most abundant polymerase, accounting for >95% of polymerase activity in E. coli; yet cells lacking Pol I have been found, suggesting Pol I activity can be replaced by the other four polymerases. Pol I adds ~15-20 nucleotides per second, thus showing poor processivity. Pol I starts adding nucleotides at the RNA primer:template junction known as the origin of replication (ori). Approximately 400 bp downstream from the origin, the Pol III holoenzyme is assembled and takes over replication with high speed and processivity.
DNA polymerase II is a family B polymerase encoded by the polB gene. Pol II has 3'-5' exonuclease activity and participates in DNA repair and in replication restart to bypass lesions, and its copy number can jump from ~30-50 per cell to ~200-300 during SOS induction. Pol II is also thought to be a backup to Pol III, as it can interact with holoenzyme proteins and assume a high level of processivity. The main role of Pol II is thought to be the ability to direct polymerase activity at the replication fork and help stalled Pol III bypass terminal mismatches.
Pfu DNA polymerase is a heat-stable enzyme of this family found in the hyperthermophilic archaeon Pyrococcus furiosus. Detailed classification divides family B in archaea into B1, B2, and B3, in which B2 is a group of pseudoenzymes. Pfu belongs to family B3. Other PolBs found in archaea are part of "Casposons", Cas1-dependent transposons. Some viruses (including φ29 DNA polymerase) and mitochondrial plasmids carry polB as well.
DNA polymerase III holoenzyme is the primary enzyme involved in DNA replication in E. coli and belongs to family C polymerases. It consists of three assemblies: the Pol III core, the beta sliding clamp processivity factor, and the clamp-loading complex. The core consists of three subunits: α, the polymerase activity hub; ε, the exonucleolytic proofreader; and θ, which may act as a stabilizer for ε. The beta sliding clamp processivity factor is also present in duplicate, one for each core, to create a clamp that encloses DNA, allowing for high processivity. The third assembly is a seven-subunit clamp loader complex. Recent research has classified Family C polymerases as a subcategory of Family X with no eukaryotic equivalents.
The old textbook "trombone model" depicts an elongation complex with two equivalents of the core enzyme at each replication fork (RF), one for each strand, the lagging and leading. However, recent evidence from single-molecule studies indicates an average of three stoichiometric equivalents of core enzyme at each RF for both Pol III and its counterpart in B. subtilis, PolC. In-cell fluorescent microscopy has revealed that leading strand synthesis may not be completely continuous, and Pol III* (i.e., the holoenzyme without the β2 sliding clamp) has a high frequency of dissociation from active RFs. In these studies, the replication fork turnover rate was about 10 s for Pol III*, 47 s for the β2 sliding clamp, and 15 min for the DnaB helicase. This suggests that the DnaB helicase may remain stably associated at RFs and serve as a nucleation point for the competent holoenzyme. In vitro single-molecule studies have shown that Pol III* has a high rate of RF turnover when in excess, but remains stably associated with replication forks when its concentration is limiting. Another single-molecule study showed that DnaB helicase activity and strand elongation can proceed with decoupled, stochastic kinetics.
In E. coli, DNA polymerase IV (Pol IV) is an error-prone DNA polymerase involved in non-targeted mutagenesis. Pol IV is a Family Y polymerase expressed from the dinB gene that is switched on via SOS induction caused by stalled polymerases at the replication fork. During SOS induction, Pol IV production is increased tenfold, and one of its functions during this time is to interfere with Pol III holoenzyme processivity. This creates a checkpoint, stops replication, and allows time to repair DNA lesions via the appropriate repair pathway. Another function of Pol IV is to perform translesion synthesis at the stalled replication fork, for example bypassing N2-deoxyguanine adducts at a faster rate than it traverses undamaged DNA. Cells lacking the dinB gene have a higher rate of mutagenesis caused by DNA damaging agents.
DNA polymerase V (Pol V) is a Y-family DNA polymerase that is involved in SOS response and translesion synthesis DNA repair mechanisms. Transcription of Pol V via the umuDC genes is highly regulated to produce Pol V only when damaged DNA is present in the cell, generating an SOS response. Stalled polymerases cause RecA to bind to the ssDNA, which causes the LexA protein to autodigest. LexA then loses its ability to repress the transcription of the umuDC operon. The same RecA-ssDNA nucleoprotein posttranslationally modifies the UmuD protein into UmuD' protein. UmuD and UmuD' form a heterodimer that interacts with UmuC, which in turn activates UmuC's polymerase catalytic activity on damaged DNA. In E. coli, a polymerase "tool belt" model for switching Pol III with Pol IV at a stalled replication fork, where both polymerases bind simultaneously to the β-clamp, has been proposed. However, the involvement of more than one TLS polymerase working in succession to bypass a lesion has not yet been shown in E. coli. Moreover, Pol IV can catalyze both insertion and extension with high efficiency, whereas Pol V is considered the major SOS TLS polymerase. One example is the bypass of an intrastrand guanine-thymine cross-link, where it was shown, on the basis of the difference in the mutational signatures of the two polymerases, that Pol IV and Pol V compete for TLS of the intrastrand crosslink.
In 1998, family D of DNA polymerase was discovered in Pyrococcus furiosus and Methanococcus jannaschii. The PolD complex is a heterodimer of two chains, encoded by DP1 (small, proofreading) and DP2 (large, catalytic). Unlike other DNA polymerases, the structure and mechanism of the catalytic core resemble those of multi-subunit RNA polymerases. The DP1-DP2 interface resembles that of the eukaryotic Class B polymerase zinc finger and its small subunit. DP1, a Mre11-like exonuclease, is likely the precursor of the small subunit of Pol δ and ε, providing proofreading capabilities now lost in eukaryotes. Its N-terminal HSH domain is similar in structure to AAA proteins, especially Pol III subunit δ and RuvB. DP2 has a Class II KH domain. Pyrococcus abyssi PolD is more heat-stable and more accurate than Taq polymerase, but has not yet been commercialized.
Family X polymerases contain the well-known eukaryotic polymerase Pol β (beta), as well as other eukaryotic polymerases such as Pol σ (sigma), Pol λ (lambda), Pol μ (mu), and terminal deoxynucleotidyl transferase (TdT). Family X polymerases are found mainly in vertebrates, and a few are found in plants and fungi. These polymerases have highly conserved regions that include two helix-hairpin-helix motifs that are imperative in the DNA-polymerase interactions. One motif is located in the 8 kDa domain that interacts with downstream DNA, and one motif is located in the thumb domain that interacts with the primer strand. Pol β, encoded by the POLB gene, is required for short-patch base excision repair, a DNA repair pathway that is essential for repairing alkylated or oxidized bases as well as abasic sites. Pol λ and Pol μ, encoded by the POLL and POLM genes respectively, are involved in non-homologous end-joining, a mechanism for rejoining DNA double-strand breaks due to hydrogen peroxide and ionizing radiation, respectively. TdT is expressed only in lymphoid tissue, and adds "n nucleotides" to double-strand breaks formed during V(D)J recombination to promote immunological diversity.
Pol α (alpha), Pol δ (delta), and Pol ε (epsilon) are members of Family B polymerases and are the main polymerases involved in nuclear DNA replication. The Pol α complex (Pol α-DNA primase complex) consists of four subunits: the catalytic subunit POLA1, the regulatory subunit POLA2, and the small and large primase subunits PRIM1 and PRIM2 respectively. Once primase has created the RNA primer, Pol α starts replication, elongating the primer with ~20 nucleotides. Due to its high processivity, Pol δ takes over the leading and lagging strand synthesis from Pol α. Pol δ is expressed from the genes POLD1, creating the catalytic subunit, and POLD2, POLD3, and POLD4, creating the other subunits that interact with Proliferating Cell Nuclear Antigen (PCNA), a DNA clamp that allows Pol δ to possess processivity. Pol ε is encoded by the POLE1 (catalytic subunit), POLE2, and POLE3 genes. It has been reported that the function of Pol ε is to extend the leading strand during replication, while Pol δ primarily replicates the lagging strand; however, recent evidence suggests that Pol δ might have a role in replicating the leading strand of DNA as well. Pol ε's C-terminus "polymerase relic" region, despite being unnecessary for polymerase activity, is thought to be essential to cell vitality. The C-terminus region is thought to provide a checkpoint before entering anaphase, provide stability to the holoenzyme, and add proteins to the holoenzyme necessary for initiation of replication. Pol ε has a larger "palm" domain that provides high processivity independently of PCNA.
Compared to other Family B polymerases, the DEDD exonuclease family responsible for proofreading is inactivated in Pol α. Pol ε is unique in that it has two zinc finger domains and an inactive copy of another family B polymerase in its C-terminal region. The presence of this zinc finger has implications in the origins of Eukaryota, which in this case is placed into the Asgard group with archaeal B3 polymerase.
Pol η (eta), Pol ι (iota), and Pol κ (kappa) are Family Y DNA polymerases involved in DNA repair by translesion synthesis and encoded by the genes POLH, POLI, and POLK respectively. Members of Family Y have five common motifs to aid in binding the substrate and primer terminus, and they all include the typical right-hand thumb, palm, and finger domains with added domains like little finger (LF), polymerase-associated domain (PAD), or wrist. The active site, however, differs between family members due to the different lesions being repaired. Polymerases in Family Y are low-fidelity polymerases, but have been proven to do more good than harm, as mutations that affect the polymerase can cause various diseases, such as skin cancer and Xeroderma Pigmentosum Variant (XPV). The importance of these polymerases is evidenced by the fact that the gene encoding DNA polymerase η is referred to as XPV, because loss of this gene results in the disease Xeroderma Pigmentosum Variant. Pol η is particularly important for allowing accurate translesion synthesis of DNA damage resulting from ultraviolet radiation. The functionality of Pol κ is not completely understood, but researchers have found two probable functions: Pol κ is thought to act as an extender or an inserter of a specific base at certain DNA lesions. All three translesion synthesis polymerases, along with Rev1, are recruited to damaged lesions via stalled replicative DNA polymerases. There are two pathways of damage repair, leading researchers to conclude that the chosen pathway depends on which strand contains the damage, the leading or lagging strand.
Pol ζ (zeta), another B family polymerase, is made of two subunits: Rev3, the catalytic subunit, and Rev7 (MAD2L2), which increases the catalytic function of the polymerase; it is involved in translesion synthesis. Pol ζ lacks 3' to 5' exonuclease activity and is unique in that it can extend primers with terminal mismatches. Rev1 has three regions of interest: the BRCT domain, the ubiquitin-binding domain, and the C-terminal domain. It has dCMP transferase ability, which adds deoxycytidine opposite lesions that would stall the replicative polymerases Pol δ and Pol ε. These stalled polymerases activate ubiquitin complexes that in turn dissociate replication polymerases and recruit Pol ζ and Rev1. Together, Pol ζ and Rev1 add deoxycytidine and Pol ζ extends past the lesion. Through a yet undetermined process, Pol ζ dissociates and replication polymerases reassociate and continue replication. Pol ζ and Rev1 are not required for replication, but loss of the REV3 gene in budding yeast can cause increased sensitivity to DNA-damaging agents due to collapse of replication forks where replication polymerases have stalled.
Telomerase is a ribonucleoprotein that functions to replicate the ends of linear chromosomes, since normal DNA polymerase cannot replicate the ends, or telomeres. The single-strand 3' overhang of the double-strand chromosome with the sequence 5'-TTAGGG-3' recruits telomerase. Telomerase acts like other DNA polymerases by extending the 3' end, but, unlike other DNA polymerases, it does not require an external template. The TERT subunit, an example of a reverse transcriptase, uses the RNA subunit to form the primer-template junction that allows telomerase to extend the 3' end of chromosome ends. The gradual decrease in size of telomeres as the result of many replications over a lifetime is thought to be associated with the effects of aging.
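The repeat-addition behavior of telomerase can be sketched in a couple of lines (a toy model; real telomerase adds repeats iteratively using its internal RNA template, and the repeat count varies):

```python
TELOMERE_REPEAT = "TTAGGG"   # human telomeric repeat, written 5'->3'

def extend_telomere(overhang_3prime, n_repeats):
    """Toy model of telomerase: the TERT subunit uses its internal
    RNA as template to add TTAGGG repeats onto the chromosome's
    3' overhang; no external template or primer is needed."""
    return overhang_3prime + TELOMERE_REPEAT * n_repeats

end = extend_telomere("TTAGGG", 2)
print(end)   # TTAGGGTTAGGGTTAGGG
```

The point of the sketch is the contrast with the polymerases above: extension here depends only on the enzyme's internal template, not on a complementary strand.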
Pol γ (gamma), Pol θ (theta), and Pol ν (nu) are Family A polymerases. Pol γ, encoded by the POLG gene, is the only mtDNA polymerase and therefore replicates and repairs mitochondrial DNA; it has proofreading 3'-5' exonuclease and 5' dRP lyase activities. Any mutation that leads to limited or non-functioning Pol γ has a significant effect on mtDNA and is the most common cause of autosomal inherited mitochondrial disorders. Pol γ contains a C-terminus polymerase domain and an N-terminus 3'-5' exonuclease domain that are connected via the linker region, which binds the accessory subunit. The accessory subunit binds DNA and is required for processivity of Pol γ. Point mutation A467T in the linker region is responsible for more than one-third of all Pol γ-associated mitochondrial disorders. While many homologs of Pol θ, encoded by the POLQ gene, are found in eukaryotes, its function is not clearly understood. The sequence of amino acids in the C-terminus is what classifies Pol θ as a Family A polymerase, although the error rate for Pol θ is more closely related to Family Y polymerases. Pol θ extends mismatched primer termini and can bypass abasic sites by adding a nucleotide. It also has deoxyribophosphodiesterase (dRPase) activity in the polymerase domain and can show ATPase activity in close proximity to ssDNA. Pol ν is considered to be the least effective of the polymerase enzymes. However, DNA polymerase ν plays an active role in homology repair during cellular responses to crosslinks, fulfilling its role in a complex with helicase.
Plants use two Family A polymerases to copy both the mitochondrial and plastid genomes. They are more similar to bacterial Pol I than to mammalian Pol γ.
Retroviruses encode an unusual DNA polymerase called reverse transcriptase, which is an RNA-dependent DNA polymerase (RdDp) that synthesizes DNA from a template of RNA. The reverse transcriptase family contains both DNA polymerase functionality and RNase H functionality, which degrades RNA base-paired to DNA. An example of a retrovirus is HIV.
Reverberation, in psychoacoustics and acoustics, is a persistence of sound after the sound is produced. A reverberation, or reverb, is created when a sound or signal is reflected, causing a large number of reflections to build up and then decay as the sound is absorbed by the surfaces of objects in the space – which could include furniture, people, and air. This is most noticeable when the sound source stops but the reflections continue, decreasing in amplitude, until they reach zero amplitude.
Psychoacoustics is the scientific study of sound perception and audiology—how humans perceive various sounds. More specifically, it is the branch of science studying the psychological and physiological responses associated with sound. It can be further categorized as a branch of psychophysics. Psychoacoustics received its name from a field within psychology — i.e., recognition science — which deals with all kinds of human perceptions. It is an interdisciplinary field of many areas, including psychology, acoustics, electronic engineering, physics, biology, physiology, and computer science.
Acoustics is the branch of physics that deals with the study of all mechanical waves in gases, liquids, and solids including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society with the most obvious being the audio and noise control industries.
In physics, sound is a vibration that typically propagates as an audible wave of pressure, through a transmission medium such as a gas, liquid or solid.
Reverberation is frequency dependent: the length of the decay, or reverberation time, receives special consideration in the architectural design of spaces which need to have specific reverberation times to achieve optimum performance for their intended activity. In comparison to a distinct echo, which is detectable at a minimum of 50 to 100 ms after the previous sound, reverberation is the occurrence of reflections that arrive in a sequence of less than approximately 50 ms. As time passes, the amplitude of the reflections gradually reduces to non-noticeable levels. Reverberation is not limited to indoor spaces, as it exists in forests and other outdoor environments where reflection exists.
In audio signal processing and acoustics, echo is a reflection of sound that arrives at the listener with a delay after the direct sound. The delay is directly proportional to the distance of the reflecting surface from the source and the listener. Typical examples are the echo produced by the bottom of a well, by a building, or by the walls of an enclosed room and an empty room. A true echo is a single reflection of the sound source.
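The relationship between reflection distance, delay, and the perceptual split between reverberation and echo described above can be sketched numerically (assuming ~343 m/s for the speed of sound in air at room temperature, and the ~50 ms threshold mentioned above):

```python
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C

def echo_delay_ms(distance_m):
    """Delay of a single reflection: sound travels to the
    reflecting surface and back, so the path length is twice
    the distance to the surface."""
    return 2 * distance_m / SPEED_OF_SOUND * 1000.0

def classify(delay_ms):
    """Rough perceptual split used above: reflections arriving
    within ~50 ms fuse into reverberation; later ones are heard
    as distinct echoes."""
    return "reverberation" if delay_ms < 50.0 else "echo"

for d in (5.0, 20.0):
    t = echo_delay_ms(d)
    print(f"{d:4.1f} m -> {t:5.1f} ms ({classify(t)})")
```

A wall 5 m away returns its reflection in under 50 ms and blends into reverberation, while a surface 20 m away is far enough that the reflection is heard as a separate echo.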
A millisecond is a thousandth of a second.
Reverberation occurs naturally when a person sings, talks, or plays an instrument acoustically in a hall or performance space with sound-reflective surfaces. The sound of reverberation is often electronically added to the vocals of singers and to musical instruments. This is done in both live sound systems and sound recordings by using effects units. Effects units that are specialized in the generation of the reverberation effect are commonly called reverbs.
Singing is the act of producing musical sounds with the voice and augments regular speech by the use of sustained tonality, rhythm, and a variety of vocal techniques. A person who sings is called a singer or vocalist. Singers perform music that can be sung with or without accompaniment by musical instruments. Singing is often done in an ensemble of musicians, such as a choir of singers or a band of instrumentalists. Singers may perform as soloists or accompanied by anything from a single instrument up to a symphony orchestra or big band. Different singing styles include art music such as opera and Chinese opera, Indian music and religious music styles such as gospel, traditional music styles, world music, jazz, blues, gazal and popular music styles such as pop, rock, electronic dance music and filmi.
A musical instrument is an instrument created or adapted to make musical sounds. In principle, any object that produces sound can be considered a musical instrument—it is through purpose that the object becomes a musical instrument. The history of musical instruments dates to the beginnings of human culture. Early musical instruments may have been used for ritual, such as a trumpet to signal success on the hunt, or a drum in a religious ceremony. Cultures eventually developed composition and performance of melodies for entertainment. Musical instruments evolved in step with changing applications.
A sound reinforcement system is the combination of microphones, signal processors, amplifiers, and loudspeakers in enclosures all controlled by a mixing console that makes live or pre-recorded sounds louder and may also distribute those sounds to a larger or more distant audience. In many situations, a sound reinforcement system is also used to enhance or alter the sound of the sources on the stage, typically by using electronic effects, such as reverb, as opposed to simply amplifying the sources unaltered.
Whereas reverberation normally adds to the naturalness of recorded sound by adding a sense of space, reverberation can reduce speech intelligibility, especially when noise is also present. Users of hearing aids frequently report difficulty in understanding speech in reverberant, noisy situations. Reverberation is a very significant source of mistakes in automatic speech recognition. Dereverberation is the process of reducing the level of reverberation in a sound or signal.
Speech recognition is an interdisciplinary subfield of computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech to text (STT). It incorporates knowledge and research in the linguistics, computer science, and electrical engineering fields.
Dereverberation is the process by which the effects of reverberation are removed from sound, after such reverberant sound has been picked up by microphones. Dereverberation is a subtopic of acoustic digital signal processing and is most commonly applied to speech but also has relevance in some aspects of music processing. Dereverberation of audio is a corresponding function to blind deconvolution of images, although the techniques used are usually very different. Reverberation itself is caused by sound reflections in a room and is quantified by the room reverberation time and the direct-to-reverberant ratio. The effect of dereverberation is to increase the direct-to-reverberant ratio so that the sound is perceived as closer and clearer.
Reverberation time is a measure of the time required for the sound to "fade away" in an enclosed area after the source of the sound has stopped.
When it comes to accurately measuring reverberation time with a meter, the term T60 (an abbreviation for Reverberation Time 60 dB) is used. T60 provides an objective reverberation time measurement. It is defined as the time it takes for the sound pressure level to reduce by 60 dB, measured after the generated test signal is abruptly ended.
The decibel is a unit of measurement used to express the ratio of one value of a power or field quantity to another on a logarithmic scale, the logarithmic quantity being called the power level or field level, respectively. It can be used to express a change in value or an absolute value. In the latter case, it expresses the ratio of a value to a fixed reference value; when used in this way, a suffix that indicates the reference value is often appended to the decibel symbol. For example, if the reference value is 1 volt, then the suffix is "V", and if the reference value is one milliwatt, then the suffix is "m".
Reverberation time is frequently stated as a single value if measured as a wideband signal (20 Hz to 20 kHz). However, being frequency dependent, it can be more precisely described in terms of frequency bands (one octave, 1/3 octave, 1/6 octave, etc.). Being frequency dependent, the reverberation time measured in narrow bands will differ depending on the frequency band being measured. For precision, it is important to know what ranges of frequencies are being described by a reverberation time measurement.
In the late 19th century, Wallace Clement Sabine started experiments at Harvard University to investigate the impact of absorption on the reverberation time. Using a portable wind chest and organ pipes as a sound source, a stopwatch and his ears, he measured the time from interruption of the source to inaudibility (a difference of roughly 60 dB). He found that the reverberation time is proportional to room dimensions and inversely proportional to the amount of absorption present.
Wallace Clement Sabine was an American physicist who founded the field of architectural acoustics. He graduated from Ohio State University in 1886 at the age of 18 before joining Harvard University for graduate study and remaining as a faculty member. Sabine was architectural acoustician of Boston's Symphony Hall, widely considered one of the two or three best concert halls in the world for its acoustics.
A stopwatch is a handheld timepiece designed to measure the amount of time that elapses between its activation and deactivation. A large digital version of a stopwatch designed for viewing at a distance, as in a sports stadium, is called a stopclock. In manual timing, the clock is started and stopped by a person pressing a button. In fully automatic time, both starting and stopping are triggered automatically, by sensors.
The optimum reverberation time for a space in which music is played depends on the type of music that is to be played in the space. Rooms used for speech typically need a shorter reverberation time so that speech can be understood more clearly. If the reflected sound from one syllable is still heard when the next syllable is spoken, it may be difficult to understand what was said. "Cat", "Cab", and "Cap" may all sound very similar. If on the other hand the reverberation time is too short, tonal balance and loudness may suffer. Reverberation effects are often used in studios to add depth to sounds. Reverberation changes the perceived spectral structure of a sound but does not alter the pitch.
Basic factors that affect a room's reverberation time include the size and shape of the enclosure as well as the materials used in the construction of the room. Every object placed within the enclosure can also affect this reverberation time, including people and their belongings.
Historically, reverberation time could only be measured using a level recorder (a plotting device which graphs the noise level against time on a ribbon of moving paper). A loud noise is produced, and as the sound dies away the trace on the level recorder will show a distinct slope. Analysis of this slope reveals the measured reverberation time. Some modern digital sound level meters can carry out this analysis automatically.
Several methods exist for measuring reverb time. An impulse can be measured by creating a sufficiently loud noise (which must have a defined cut-off point). Impulse noise sources such as a blank pistol shot or balloon burst may be used to measure the impulse response of a room.
Alternatively, a random noise signal such as pink noise or white noise may be generated through a loudspeaker, and then turned off. This is known as the interrupted method, and the measured result is known as the interrupted response.
A two-port measurement system can also be used to measure noise introduced into a space and compare it to what is subsequently measured in the space. Consider sound reproduced by a loudspeaker into a room. A recording of the sound in the room can be made and compared to what was sent to the loudspeaker. The two signals can be compared mathematically. This two port measurement system utilizes a Fourier transform to mathematically derive the impulse response of the room. From the impulse response, the reverberation time can be calculated. Using a two-port system allows reverberation time to be measured with signals other than loud impulses. Music or recordings of other sounds can be used. This allows measurements to be taken in a room after the audience is present.
Reverberation time is usually stated as a decay time and is measured in seconds. There may or may not be any statement of the frequency band used in the measurement. Decay time is the time it takes the signal to diminish 60 dB below the original sound. It is often difficult to inject enough sound into the room to measure a decay of 60 dB, particularly at lower frequencies. If the decay is linear, it is sufficient to measure a drop of 20 dB and multiply the time by 3, or a drop of 30 dB and multiply the time by 2. These are the so-called T20 and T30 measurement methods.
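The T20/T30 extrapolation described above is easy to sketch in code. A minimal example, assuming the decay is linear on a dB scale (the helper name and the sample times below are invented for illustration):

```python
def rt60_from_partial_decay(time_for_drop_s, drop_db):
    """Extrapolate RT60 from the time a decay takes to fall by drop_db dB,
    assuming the decay is linear on a dB scale."""
    return time_for_drop_s * 60.0 / drop_db

# T30 method: a 30 dB drop took 0.7 s, so the time is doubled
print(round(rt60_from_partial_decay(0.7, 30), 3))   # 1.4
# T20 method: a 20 dB drop took 0.5 s, so the time is tripled
print(round(rt60_from_partial_decay(0.5, 20), 3))   # 1.5
```

The linearity assumption is exactly why the standards call for checking the shape of the decay before extrapolating.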
The RT60 reverberation time measurement is defined in the ISO 3382-1 standard for performance spaces, the ISO 3382-2 standard for ordinary rooms, and the ISO 3382-3 for open-plan offices, as well as the ASTM E2235 standard.
The concept of Reverberation Time implicitly supposes that the decay rate of the sound is exponential, so that the sound level diminishes regularly, at a rate of so many dB per second. This is often not the case in real rooms, depending on the disposition of reflective, dispersive and absorbing surfaces. Moreover, successive measurements of the sound level often yield very different results, as differences in phase in the exciting sound build up in notably different sound waves. In 1965, Manfred R. Schroeder published "A New Method of Measuring Reverberation Time" in the Journal of the Acoustical Society of America. He proposed to measure, not the power of the sound, but the energy, by integrating it. This made it possible to show the variation in the rate of decay and freed acousticians from the necessity of averaging many measurements.
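Schroeder's backward-integration idea can be sketched as follows. This is a simplified illustration using a synthetic exponential "impulse response"; the constants and variable names are assumptions, not from the source:

```python
import math

fs = 1000                      # sample rate, Hz (made-up value)
rt60 = 1.2                     # decay built into the synthetic response, s
n = 2 * fs
# exponential envelope whose level falls 60 dB in rt60 seconds
# (6.908 is approximately ln(10^3), which maps rt60 to a 60 dB drop)
h = [math.exp(-6.908 * t / (fs * rt60)) for t in range(n)]

# Schroeder backward integration: sum the squared response from the tail
energy = [x * x for x in h]
edc = []
total = 0.0
for e in reversed(energy):
    total += e
    edc.append(total)
edc.reverse()

# energy decay curve in dB relative to its starting value
edc_db = [10 * math.log10(e / edc[0]) for e in edc]
print(round(edc_db[0], 1))     # 0.0 -- the curve starts at 0 dB and decays smoothly
```

Because the integration smooths out the moment-to-moment fluctuations, a single measured impulse response yields a clean decay curve, which is exactly the advantage Schroeder claimed over averaging many interrupted-noise measurements.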
Sabine's reverberation equation was developed in the late 1890s in an empirical fashion. He established a relationship between the T60 of a room, its volume, and its total absorption (in sabins). This is given by the equation:

RT60 = (24 ln 10 / c20) · V / (S·a) ≈ 0.1611 s·m⁻¹ · V / (S·a)

where c20 is the speed of sound in the room (for 20 degrees Celsius), V is the volume of the room in m³, S is the total surface area of the room in m², a is the average absorption coefficient of room surfaces, and the product S·a is the total absorption in sabins.
The total absorption in sabins (and hence reverberation time) generally changes depending on frequency (which is defined by the acoustic properties of the space). The equation does not take into account room shape or losses from the sound traveling through the air (important in larger spaces). Most rooms absorb less sound energy in the lower frequency ranges resulting in longer reverb times at lower frequencies.
Sabine concluded that the reverberation time depends upon the reflectivity of sound from various surfaces available inside the hall. If the reflection is coherent, the reverberation time of the hall will be longer; the sound will take more time to die out.
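Sabine's relationship (commonly written RT60 ≈ 0.1611 · V / (S·a)) is straightforward to evaluate. A small sketch with invented room dimensions:

```python
# Sabine's formula: RT60 ~= 0.1611 * V / (S * a), with V in m^3, S in m^2,
# and a the average absorption coefficient. Example values are invented.

def sabine_rt60(volume_m3, surface_m2, avg_absorption):
    return 0.1611 * volume_m3 / (surface_m2 * avg_absorption)

# a 10 m x 8 m x 3 m room with fairly reflective surfaces (a = 0.1)
v = 10 * 8 * 3                       # 240 m^3
s = 2 * (10*8 + 10*3 + 8*3)          # 268 m^2
print(round(sabine_rt60(v, s, 0.1), 2))   # 1.44 (seconds)
```

Doubling the average absorption coefficient halves the predicted reverberation time, which matches Sabine's observation that T60 is inversely proportional to the absorption present.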
The reverberation time RT60 and the volume V of the room have great influence on the critical distance dc (conditional equation):

dc ≈ 0.057 · √(V / RT60)

where critical distance dc is measured in meters, volume V is measured in m³, and reverberation time RT60 is measured in seconds.
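Using this conditional equation, the critical distance for a given room can be estimated directly. A quick sketch (example numbers are invented):

```python
import math

# Critical distance d_c ~= 0.057 * sqrt(V / RT60),
# with V in m^3 and RT60 in seconds.

def critical_distance(volume_m3, rt60_s):
    return 0.057 * math.sqrt(volume_m3 / rt60_s)

# a 240 m^3 room with a 1.4 s reverberation time
print(round(critical_distance(240, 1.4), 2))   # 0.75 (meters)
```

Beyond this distance the reverberant field dominates what the listener hears, which is why small live rooms with long decay times sound "washed out" even close to the source.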
The absorption coefficient of a material is a number between 0 and 1 which indicates the proportion of sound which is absorbed by the surface compared to the proportion which is reflected back into the room. A large, fully open window would offer no reflection, as any sound reaching it would pass straight out and no sound would be reflected. This would have an absorption coefficient of 1. Conversely, a thick, smooth painted concrete ceiling would be the acoustic equivalent of a mirror and have an absorption coefficient very close to 0.
Several composers employ the reverberation effect as a main sound resource, giving it a relevance comparable to that of the solo instrument; for example, Pauline Oliveros, Henrique Machado and many others. In order to employ the reverberant properties of the room, such composers investigate and probe the sound response of that particular space, which then affects and inspires the creation of the musical work.
A performer or a producer of live or recorded music often induces reverberation in a work. Several systems have been developed to produce or to simulate reverberation.
The first reverb effects created for recordings used a real physical space as a natural echo chamber. A loudspeaker would play the sound, and then a microphone would pick it up again, including the effects of reverb. Although this is still a common technique, it requires a dedicated soundproofed room, and varying the reverb time is difficult.
A plate reverb system uses an electromechanical transducer, similar to the driver in a loudspeaker, to create vibrations in a large plate of sheet metal. The plate's motion is picked up by one or more contact microphones whose output is an audio signal which may be added to the original "dry" signal. In the late 1950s, Elektro-Mess-Technik (EMT) introduced the EMT 140, a 600-pound (270 kg) model popular in recording studios, contributing to many hit records such as Beatles and Pink Floyd albums recorded at Abbey Road Studios in the 1960s, and others recorded by Bill Porter in Nashville's RCA Studio B. Early units had one pickup for mono output, and later models featured two pickups for stereo use. The reverb time can be adjusted by a damping pad, made from framed acoustic tiles. The closer the damping pad, the shorter the reverb time. However, the pad never touches the plate. Some units also featured a remote control.
A spring reverb system uses a transducer at one end of a spring and a pickup at the other, similar to those used in plate reverbs, to create and capture vibrations within a metal spring. Laurens Hammond was granted a patent on a spring-based mechanical reverberation system in 1939. The Hammond Organ included a built-in spring reverberator.
Spring reverberators were once widely used in semi-professional recording and are frequently incorporated into guitar amplifiers due to their modest cost and small size. One advantage over more sophisticated alternatives is that they lend themselves to the creation of special effects; for example, rocking them back and forth creates a thundering, crashing sound caused by the springs colliding with each other.
Digital reverberators use various signal processing algorithms in order to create the reverb effect. Since reverberation is essentially caused by a very large number of echoes, simple reverberation algorithms use several feedback delay circuits to create a large, decaying series of echoes. More advanced digital reverb generators can simulate the time and frequency domain response of a specific room (using room dimensions, absorption, and other properties). In a music hall, the direct sound always arrives at the listener's ear first because it follows the shortest path. Shortly after the direct sound, the reverberant sound arrives. The time between the two is called the "pre-delay."
Reverberation, or informally, "reverb" or "verb", is one of the most universally used audio effects and is often found in guitar pedals, synthesizers, effects units, digital audio workstations (DAWs) and VST plug-ins.
Convolution reverb is a process used for digitally simulating reverberation. It uses the mathematical convolution operation, a pre-recorded audio sample of the impulse response of the space being modeled, and the sound to be echoed, to produce the effect. The impulse-response recording is first stored in a digital signal-processing system. This is then convolved with the incoming audio signal to be processed.
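The convolution step itself can be sketched directly. The example below uses a synthetic decaying-noise impulse response in place of a recorded one, and a naive direct convolution for clarity (real implementations use FFT-based convolution for speed):

```python
import math, random

random.seed(0)
fs = 8000
# "dry" input: a 0.1 s, 440 Hz tone
dry = [math.sin(2 * math.pi * 440 * t / fs) for t in range(fs // 10)]
# synthetic impulse response: decaying noise standing in for a measured one
ir = [random.uniform(-1, 1) * math.exp(-5 * t / fs) for t in range(fs // 4)]

# direct convolution: each dry sample excites the whole impulse response
wet = [0.0] * (len(dry) + len(ir) - 1)
for i, d in enumerate(dry):
    for j, h in enumerate(ir):
        wet[i + j] += d * h

print(len(wet) == len(dry) + len(ir) - 1)   # True
```

Note that the output is longer than the input by the length of the impulse response minus one sample: the reverberant "tail" keeps ringing after the dry signal stops.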
Distortion is the alteration of the original shape of something. In communications and electronics it means the alteration of the waveform of an information-bearing signal, such as an audio signal representing sound or a video signal representing images, in an electronic device or communication channel.
Audio power is the electrical power transferred from an audio amplifier to a loudspeaker, measured in watts. The electrical power delivered to the loudspeaker, together with its efficiency, determines the sound power generated.
Room acoustics describes how sound behaves in an enclosed space.
An echo chamber is a hollow enclosure used to produce reverberation, usually for recording purposes. For example, the producers of a television or radio program might wish to produce the aural illusion that a conversation is taking place in a large room or a cave; these effects can be accomplished by playing the recording of the conversation inside an echo chamber, with an accompanying microphone to catch the reverberation. Nowadays effects units are more widely used to create such effects, but echo chambers are still used today, such as the famous echo chambers at Capitol Studios.
An anechoic chamber is a room designed to completely absorb reflections of either sound or electromagnetic waves. They are also often isolated from waves entering from their surroundings. This combination means that a person or detector exclusively hears direct sounds, in effect simulating being inside an infinitely large room.
Frequency response is the quantitative measure of the output spectrum of a system or device in response to a stimulus, and is used to characterize the dynamics of the system. It is a measure of magnitude and phase of the output as a function of frequency, in comparison to the input. In simplest terms, if a sine wave is injected into a system at a given frequency, a linear system will respond at that same frequency with a certain magnitude and a certain phase angle relative to the input. Also for a linear system, doubling the amplitude of the input will double the amplitude of the output. In addition, if the system is time-invariant, then the frequency response also will not vary with time. Thus for LTI systems, the frequency response can be seen as applying the system's transfer function to a purely imaginary number argument representing the frequency of the sinusoidal excitation.
Audio system measurements are made for several purposes. Designers take measurements so that they can specify the performance of a piece of equipment. Maintenance engineers make them to ensure equipment is still working to specification, or to ensure that the cumulative defects of an audio path are within limits considered acceptable. Some aspects of measurement and specification relate only to intended usage. Audio system measurements often accommodate psychoacoustic principles to measure the system in a way that relates to human hearing.
Thiele/Small parameters are a set of electromechanical parameters that define the specified low frequency performance of a loudspeaker driver. These parameters are published in specification sheets by driver manufacturers so that designers have a guide in selecting off-the-shelf drivers for loudspeaker designs. Using these parameters, a loudspeaker designer may simulate the position, velocity and acceleration of the diaphragm, the input impedance and the sound output of a system comprising a loudspeaker and enclosure. Many of the parameters are strictly defined only at the resonant frequency, but the approach is generally applicable in the frequency range where the diaphragm motion is largely pistonic, i.e. when the entire cone moves in and out as a unit without cone breakup.
Loudspeaker acoustics is a subfield of acoustical engineering concerned with the reproduction of sound and the parameters involved in doing so in actual equipment.
Digital room correction is a process in the field of acoustics where digital filters designed to ameliorate unfavorable effects of a room's acoustics are applied to the input of a sound reproduction system. Modern room correction systems produce substantial improvements in the time domain and frequency domain response of the sound reproduction system.
Speech Transmission Index (STI) is a measure of speech transmission quality. The absolute measurement of speech intelligibility is a complex science. The STI measures some physical characteristics of a transmission channel, and expresses the ability of the channel to carry across the characteristics of a speech signal. STI is a well-established objective measurement predictor of how the characteristics of the transmission channel affect speech intelligibility.
Critical distance is, in acoustics, the distance at which the sound pressure level of the direct sound D and the reverberant sound R are equal when dealing with a directional source. In other words, it is the point in space at which the combined amplitude of all the reflected echoes are the same as the amplitude of the sound coming directly from the source. This distance, called the critical distance , is dependent on the geometry and absorption of the space in which the sound waves propagate, as well as the dimensions and shape of the sound source.
In a reverberant space, the sound perceived by a listener is a combination of direct and reverberant sound. The ratio of direct sound is dependent on the distance between the source and the listener, and upon the reverberation time in the room. At a certain distance the two will be equal. This is called the "critical distance."
Loudspeaker measurement is the practice of determining the behavior of loudspeakers by measuring various aspects of performance. This measurement is especially important because loudspeakers, being transducers, have a higher level of distortion than other audio system components used in playback or sound reinforcement.
In audio signal processing, convolution reverb is a process used for digitally simulating the reverberation of a physical or virtual space through the use of software profiles; a piece of software that creates a simulation of an audio environment. It is based on the mathematical convolution operation, and uses a pre-recorded audio sample of the impulse response of the space being modeled. To apply the reverberation effect, the impulse-response recording is first stored in a digital signal-processing system. This is then convolved with the incoming audio signal to be processed. The process of convolution multiplies each sample of the audio to be processed (reverberated) with the samples in the impulse response file.
The sound reduction index is used to measure the level of sound insulation provided by a structure such as a wall, window, door, or ventilator. It is defined in the series of international standards ISO 16283 and the older ISO 140, or the regional or national variants on these standards. In the United States, the sound transmission class rating is generally used instead. The basic method for both the actual measurements and the mathematical calculations behind both standards is similar, however they diverge to a significant degree in the detail, and in the numerical results produced.
Underwater acoustics is the study of the propagation of sound in water and the interaction of the mechanical waves that constitute sound with the water, its contents and its boundaries. The water may be in the ocean, a lake, a river or a tank. Typical frequencies associated with underwater acoustics are between 10 Hz and 1 MHz. The propagation of sound in the ocean at frequencies lower than 10 Hz is usually not possible without penetrating deep into the seabed, whereas frequencies above 1 MHz are rarely used because they are absorbed very quickly. Underwater acoustics is sometimes known as hydroacoustics.
An Audio Analyzer is a test and measurement instrument used to objectively quantify the audio performance of electronic and electro-acoustical devices. Audio quality metrics cover a wide variety of parameters, including level, gain, noise, harmonic and intermodulation distortion, frequency response, relative phase of signals, interchannel crosstalk, and more. In addition, many manufacturers have requirements for behavior and connectivity of audio devices that require specific tests and confirmations.
Diffuse field acoustic testing is the testing of the mechanical resistance of a spacecraft to the acoustic pressures during launch.
3D sound reconstruction is the application of reconstruction techniques to 3D sound localization technology. These methods of reconstructing three-dimensional sound are used to recreate sounds to match natural environments and provide spatial cues of the sound source. They also see applications in creating 3D visualizations on a sound field to include physical aspects of sound waves including direction, pressure, and intensity. This technology is used in entertainment to reproduce a live performance through computer speakers. The technology is also used in military applications to determine location of sound sources. Reconstructing sound fields is also applicable to medical imaging to measure points in ultrasound.
St. Paul's Statistics Introduction: Chapter 3
Computing Answers with the Binomial Equation
Updated September 2019
- Develop capability to build 'Pascal's Triangle'
- Understand how to write the Binomial Distribution as an equation
- Use terms from the Binomial Equation to solve real world problems
- Build a Binomial PDF graph from the Binomial Equation
Binomial Distribution Use Criteria
The Binomial Distribution can be used when all 4 conditions below are met:
- Results of each trial (coin flips, shell hits, failed students etc.) are independent. (i.e. if four coins are flipped, a coin landing 'heads' has no effect on the behavior of its neighbors)
- The probability of 'success' is known. (for a coin flip, it would be 50%)
- The total number of trials (e.g. the number of coins pitched in the air) is known.
- The total number of trials (e.g. the number of coins, cannon shells, students etc.) is less than 13. Optional: Why limit it to 13 Terms?
When these four conditions are met, use of a Binomial Distribution is appropriate.
The method of solving real world problems using the Binomial Distribution will be taught by working an example. Commentary will be provided to explain different aspects of the example.
Example Problem Statement: Four coins are thrown in the air, and land as they choose. Develop a Binomial Equation that represents each term of the PDF.
Step 1: Build Pascal's Triangle with a row that has one more number than the problem has sample items or trials (be they coins, cannon shells or kittens). Pascal's Triangle will be used to provide numbers used in the probability formulas (4 coins -> 5 numbers).
Ch 3 Figure 1
The steps for creating Pascal's Triangle are almost intuitive. Simply:
- Write down the three 1's at the top of the triangle
- Every place you find a pair of numbers side by side, total them & write the answers in the row below
- Add 1's on the outside to enable creating another row. Repeat the previous two steps until your triangle has enough numbers to meet the need of your number of trials. (How many? If the number of coins, shells or whatever is n, then n+1 numbers are needed.)
Step 2: p stands for success (e.g. heads up). q stands for failure (i.e. not heads up). Write pq pairs across the page until you have one more pq pair than there are samples in the problem (n=4 coins means you need 5 pq pairs).
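The bullet steps for building the triangle can be sketched as a short program (a hypothetical helper, not part of the course material):

```python
# Build Pascal's Triangle row by row: start with 1's, sum each side-by-side
# pair for the next row, and add 1's on the outside.

def pascal_rows(num_rows):
    rows = [[1]]
    while len(rows) < num_rows:
        prev = rows[-1]
        middle = [a + b for a, b in zip(prev, prev[1:])]  # sum adjacent pairs
        rows.append([1] + middle + [1])                   # pad with outside 1's
    return rows

# 4 coins -> we need the row with 5 numbers (the 5th row)
for row in pascal_rows(5):
    print(row)
# last row printed: [1, 4, 6, 4, 1]
```

The last row printed is exactly the set of coefficients used in front of the pq pairs for the 4-coin problem.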
Ch 3 Figure 2
Step 3: Use the number of coins (4 in our example) as an exponent on the first p.
Ch 3 Figure 3
Step 4: Decrease the exponent by one each time you put it on the next pair. Continue until you have a zero exponent on the last p. (see below)
Ch 3 Figure 4
Step 5: Do the same thing for q, but work from right to left.
Ch 3 Figure 5
Step 6: Put 'plus' signs between all the terms and set the entire thing equal to 100%. Oh, by the way, you must use the numbers from Pascal's Triangle in front of the pq pairs as shown.
Ch 3 Figure 6
The equation you just wrote is a math representation of the following BINOMIAL PDF. We discuss that next.
Ch 3 Figure 7
Discussion: The Binomial PDF has now been reduced to an equation, in which each of the terms represents the probability of one possible state (i.e. one of the bars in the PDF). Because this is true, we should be able to match each term (pq pair) to one of the bars on the PDF Ch 3 Figure 7.
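To check that the terms really behave this way, the equation for four fair coins can be evaluated numerically; each term is the height of one bar, and the terms must total 100% (a quick verification sketch, not part of the original chapter):

```python
# Evaluate every term of the binomial equation for 4 fair coins (p = q = 0.5).
coeffs = [1, 4, 6, 4, 1]       # row of Pascal's Triangle for n = 4
p = q = 0.5

# terms[k] = coefficient * p^(4-k) * q^k, i.e. the bar for (4-k) heads
terms = [c * p**(4 - k) * q**k for k, c in enumerate(coeffs)]
print([round(t, 4) for t in terms])   # [0.0625, 0.25, 0.375, 0.25, 0.0625]
print(sum(terms))                     # 1.0  -- the terms total 100%
```

Note the second value, 0.25: that is the 3-heads bar, which the worked calculation below confirms by hand.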
You will need a scientific calculator with a y^x function button on it. Any calculator will do, but that y^x button makes the computations easier and reduces errors.
Ch 3 Figure 7 above is the PDF for the example problem being worked. Let us verify the second probability bar (i.e. the probability of 3 heads). You will notice that for the 3 heads calculation, I chose the term with p^3 (i.e. p raised to the power of 3). If I wanted the probability of 2 heads, I would compute based on the term with p^2.
P(3 heads) = 4* p^3 * q^1 = 4 * (.5^3) * (.5^1) = .2500 = 25.00%
The math confirms the area of the bar for 3 heads. (Note: We need to get rid of the idea that Ch 3 Figure 7 is somehow right because some expert said so. The PDF was not given to me by the Prophets. I had to calculate each term & then I drew the figure. So the 25% we just computed merely confirms I did it right when I built the graph shown. When something is true, YOU should be able to confirm its truth by study, by thinking and by applying yourself. That is what this statistics course is about.)
Optional Paragraph: This is an introductory course, so I am not going into full detail about every subject; but, the Binomial itself is fully understandable if you continue your studies. If you study, you will discover that the terms from Pascal's triangle are related to the number of ways that each state can occur. The number of ways is tied back to the Ch 2 Probability Law 2 Corollary, which states that if something can happen in 4 equally likely ways, then the probability of finding that state is increased by four times. I might ask, if four cannon shells (named Albert, Birtha, Charlie & Donna) are fired, how many different ways can all of them hit? The answer: Only one way. Albert must hit, Birtha must hit, same for Charlie, and same for Donna. So the number before the pq pair representing four hits is 1 -> because there is only one way it can happen. If we ask how many ways we can get exactly three hits, the answer is four ways. Albert can miss, Birtha can miss, Charlie can miss & Donna also. Each of those is a different way that 3 hits (and 1 miss) can occur. Thus we would expect the 'number' in front of p^3*q^1 should be four. Look up above and you will find that the full binomial equation with all of its terms has a 4 for the numerical coefficient representing 3 hits. Statistics is not about politics, or wishful thinking. At its best, it is a clear analysis of the likelihood of different events happening. At its worst, it is a complex playground where politically and economically motivated liars can hide misrepresentation amid its complexity.
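The counting argument in the paragraph above can be confirmed by brute force, enumerating every hit/miss outcome for the four named shells (an illustrative sketch, not part of the course material):

```python
# Count, by enumeration, the number of ways k of the four named shells can
# hit, confirming the Pascal's Triangle coefficients 1, 4, 6, 4, 1.
from itertools import product

shells = ["Albert", "Birtha", "Charlie", "Donna"]
ways = {k: 0 for k in range(5)}
for outcome in product(["hit", "miss"], repeat=len(shells)):
    ways[outcome.count("hit")] += 1   # tally outcomes by number of hits

print([ways[k] for k in range(5)])    # [1, 4, 6, 4, 1]
```

Exactly one outcome has four hits, and exactly four outcomes have three hits, matching the coefficients in front of the pq pairs.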
Required Material Resumes: For the example just worked, we now have a PDF Graph, and we have an equivalent PDF equation. While the method is a bit different from how it is done with the Gaussian Distribution, this should clarify how a PDF can be either in the form of an equation, or an equivalent graph. If both are done correctly, they 'say' the same thing. But for calculations, you will need the equation form. If you want to make sense out of it by looking, the PDF graph is far better. As said previously, the PDF graph is a road map to understanding the math.
Homework: Ch 3, Problem 1: Use the method above to compute the probability for each of the 5 possible outcomes of a 4 coin throw. (We just did the 2nd bar for you. Do the math for the other 4 bars). E-mail your answers to your teacher.
Homework: Ch 3 Problem 2:
Cats and Kittens: A veterinarian researcher has been keeping records for years on the color of kittens born in his office. He wants to know if the color of the kitten matches a random variable, or if some other factor is at work in determining the color. His goal is to compare the probability to the office records.
A mother cat has 4 kittens. If the probability of a kitten being born black is 25%, what is the probability she will have exactly 3 black kittens?
Ch 3 Figure 8
Homework: Ch 3 Problem 3: The PDF for the coins is symmetric, but the PDF for the kittens is skewed to the right. What is unique about the problem statement that is causing one PDF to be symetric and another to be skewed? In words, describe a similar problem that can be used to test your theory about non-symetric PDFs. Do the math for each of the bars. By hand, draw the graph. Does it confirm your thory? E-mail your answer to your instructor.
- First, check the problem to verify that a Binomial Analysis is appropriate. The Binomial criteria is provided at the beginning of this chapter. Do you think we can use the Binomial Equation? Do all four criteria seem to be met?
- Create the Pascal's triangle with a sufficient number of terms for this problem.
- Create the Binomial Equation (all 5 terms) that describes the problem.
- Choose the correct term. Fill in the probabilities and calculate the answer.
- What is the answer? What is the probability of getting exactly 3 black kittens?
- Does your answer agree with the PDF Ch 3 Figure 8?
- E-mail your steps and the answer to your instructor.
Homework: Ch 3 Problem 4: Repeat Ch 3 Problem 1, but this time, assume a 'not black kitten' is success. Compute the probability of getting 0 or 1 or 2 or 4 'not black' kittens (refer to Probability Law 2). Should this answer be the same as what you got before? How can you use this observation to simplify problem solving? E-mail your answers to your instructor.
Homework: Ch 3 Problem 5:
- For any problem in general, if the probability of success is 1%, what is the probability of failure?
- Write the PDF equation for 6 bolts. Each bolt has a 1% chance of failure. Do not bother solving probabilities, just write the equation.
- E-mail your answers to your instructor.
Optional: Where are the Realistic Examples? you might ask. Ch3 Problem 5 is pretty realistic. Precisely that kind of analysis is used for important aircraft structures where failure could result in loss of aircraft. The analysis is also performed for multiple computers that check each other during flight- but the issue here is probability of a computer breaking during the flight. Last, we will make great use of the binomial distribution during our development and explanation of the Weibull Probability Distribution. That is, some of the procedural methods in performing a Weibull Analysis are based on Binomial Analysis.
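As a hedged illustration of that kind of reliability analysis (the component count and failure rate below are made up for the example, not taken from any real aircraft): the probability that at least one of n independent components fails is 1 minus the zero-failure term of the binomial PDF.

```python
def prob_at_least_one_failure(n, p_fail):
    """P(at least one failure) = 1 - P(zero failures)
    = 1 - (1 - p_fail)**n, i.e. 1 minus the zero-failure
    term of the binomial PDF."""
    return 1.0 - (1.0 - p_fail) ** n

# Hypothetical numbers for illustration: 6 bolts, each with a
# 1% chance of failing. The chance that at least one fails:
print(prob_at_least_one_failure(6, 0.01))
```

Notice that only the first binomial term is needed here; there is no reason to compute all seven bars when the question is "any failure at all?".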
End of Chapter 3
The Proto-Slavic language, the hypothetical ancestor of the modern-day Slavic languages, developed from the ancestral Proto-Balto-Slavic language (c. 1500 BC), which is the parent language of the Balto-Slavic languages (both the Slavic and Baltic languages, e.g. Latvian and Lithuanian). The first 2,000 years or so consist of the pre-Slavic era, a long period during which none of the later dialectal differences between Slavic languages had yet happened. The last stage in which the language remained without internal differences that later characterize different Slavic languages can be dated to around AD 500 and is sometimes termed ''Proto-Slavic proper'' or ''Early Common Slavic''. Following this is the Common Slavic period (c. 500–1000), during which the first dialectal differences appeared but the entire Slavic-speaking area continued to function as a single language, with sound changes tending to spread throughout the entire area. By around 1000, the area had broken up into separate East Slavic, West Slavic and South Slavic languages, and in the following centuries it broke up further into the various modern Slavic languages, of which the following are extant: Belarusian, Russian and Ukrainian in the East; Czech, Polish, Slovak and the Sorbian languages in the West; and Bulgarian, Macedonian, Serbo-Croatian and Slovene in the South.
The period from the early centuries AD to the end of the Common Slavic period around 1000 was a time of rapid change, concurrent with the explosive growth of the Slavic-speaking area. By the end of this period, most of the features of the modern Slavic languages had been established. The first historical documentation of the Slavic languages is found in isolated names and words in Greek documents starting in the 6th century, when Slavic-speaking tribes first came in contact with the Greek-speaking Byzantine Empire. The first continuous texts date from the late 9th century and were written in Old Church Slavonic—based on the Slavic dialect used in the region of Thessaloniki in Greek Macedonia—as part of the Christianization of the Slavs by Saints Cyril and Methodius and their followers. Because these texts were written during the Common Slavic period, the language they document is close to the ancestral Proto-Slavic language and still shows enough unity to be critically important to the linguistic reconstruction of Slavic-language history.
This article covers historical developments up through the end of the Common Slavic period. For later developments, see History of the Slavic languages.
Proto-Slavic is descended from Proto-Balto-Slavic (the ancestor of the Balto-Slavic languages). This language in turn is descended from Proto-Indo-European, the parent language of the vast majority of European languages. Proto-Slavic gradually evolved into the various Slavic languages during the latter half of the first millennium AD, concurrent with the explosive growth of the Slavic-speaking area. There is no scholarly consensus concerning either the number of stages involved in the development of the language (its periodization) or the terms used to describe them. For consistency and convenience, this article and the Proto-Slavic article adopt the following scheme:
# ''Pre-Slavic'' (c. 1500 BC – AD 300): A long period of gradual development. The most significant phonological developments during this period involved the prosodic system, e.g. tonal and other register distinctions on syllables.
# ''Proto-Slavic proper'' or ''Early Common Slavic'' (c. AD 300–600): The early, uniform stage of Common Slavic, a period of rapid phonological change. There are no dialectal distinctions reconstructible from this period.
# ''Middle Common Slavic'' (c. 600–800): The stage with the earliest identifiable dialectal distinctions. Rapid phonological change continued, concurrent with the massive expansion of the Slavic-speaking area. Although some dialectal variation did exist, most sound changes were still uniform and consistent in their application. By the end of this stage, the vowel and consonant phonemes of the language were largely the same as those still found in the modern languages. For this reason, reconstructed "Proto-Slavic" forms commonly found in scholarly works and etymological dictionaries normally correspond to this period.
# ''Late Common Slavic'' (c. 800–1000, although perhaps through c. 1150 in Kievan Rus', in the far northeast): The last stage in which the whole Slavic-speaking area still functioned as a single language, with sound changes normally propagating throughout the entire area, although often with significant dialectal variation in the details.
Scholars differ widely in both the terminology and periodization of these developments. Some scholars do not use the term "Common Slavic" at all. For some others, the Common Slavic period comes ''after'' Proto-Slavic rather than including it. Some scholars (e.g. Frederik Kortlandt) divide the Common Slavic period into five or more stages, while others use as few as two (an early, uniform stage and a late, dialectally differentiated stage).
The currently most favoured model, the Kurgan hypothesis, places the ''Urheimat'' of the Proto-Indo-European people in the Pontic steppe, represented archaeologically by the 5th-millennium BCE Sredny Stog culture. From here, various daughter dialects dispersed radially in several waves between c. 4400 and 3000 BC.
The phonological changes which set Balto-Slavic apart from other Indo-European languages probably lasted from c. 3000 to 1000 BC, a period known as common ''Proto-Balto-Slavic''. One line of scholarship links the earliest stages of Balto-Slavic development with the Middle Dnieper culture, which connects the Corded Ware and Yamna cultures. Kurganists connect the latter two cultures with the so-called "Northwest (IE) group" and the Iranian-speaking steppe nomads, respectively. This fits with the linguistic evidence, in that Balto-Slavic appears to have had close contacts with Indo-Iranian.
Scholars have proposed an association between Balto-Slavic and Germanic on the basis of lexical and morphological similarities that are unique to these languages.
Apart from a proposed genetic relationship (PIE forming a Germano-Balto-Slavic sub-branch), the similarities are likely due to continuous contacts, whereby common loan words spread through the communities in the forest zones at an early time of their linguistic development.
Similarly, Balto-Slavic and Indo-Iranian might have formed some kind of continuum from the north-west to the south-east, given that they share both satemization and the Ruki sound law. On the other hand, genetic studies have shown that Slavs and North Indians share much larger amounts of the R1a haplogroup (associated with the spread of Indo-European languages) than do most Germanic populations. The link between Balto-Slavic and Indo-Iranian might thus be a result of a large part of common ancestry between Eastern Europeans and Indo-Iranians. Balto-Slavic then expanded along the forest zone, replacing earlier centum dialects, such as Pre-Proto-Germanic. This might explain the presence of a few prehistoric centum adstratal lexemes.
A ''pre-Slavic'' period began c. 1500 to 1000 BC, during which certain phonological changes and linguistic contacts did not disperse evenly through all Balto-Slavic dialects. The development into ''Proto-Slavic'' probably occurred along the southern periphery of the Proto-Balto-Slavic continuum. The most archaic Slavic hydronyms are found here, along the middle Dnieper and upper Dniester rivers. This agrees well with the fact that inherited Common Slavic vocabulary does not include detailed terminology for physical surface features peculiar to mountains or steppe, nor any relating to the sea, to coastal features, littoral flora or fauna, or saltwater fish. On the other hand, it does include well-developed terminology for inland bodies of water (lakes, rivers, swamps) and kinds of forest (deciduous and coniferous), for the trees, plants, animals and birds indigenous to the temperate forest zone, and for the fish native to its waters.
Indeed, Trubachev argues that this location fostered contacts between speakers of Pre-Proto-Slavic and the cultural innovations which emanated from central Europe and the steppe. Although language groups cannot be straightforwardly equated with archaeological cultures, the emergence of a Pre-Proto-Slavic linguistic community corresponds temporally and geographically with the Komarov and Chernoles cultures (Novotna, Blazek). Both linguists and archaeologists therefore often locate the Slavic ''Urheimat'' specifically within this area.
In subsequent times, the Slavic homeland experienced intrusions of foreign elements. From c. 500 BC to AD 200, the Scythians and then the Sarmatians expanded their control into the forest steppe. A few Eastern Iranian loan words, especially relating to religious and cultural practices, have been seen as evidence of cultural influences. Subsequently, loan words of Germanic origin also appear. This is connected to the movement of east Germanic groups into the Vistula basin, and subsequently to the middle Dnieper basin, associated with the appearance of the Przeworsk culture.
Despite these developments, Slavic remained conservative and was still typologically very similar to other Balto-Slavic dialects. Even into the Common Era, the various Balto-Slavic dialects formed a dialect continuum stretching from the Vistula to the Don basins, and from the Baltic and upper Volga to southern Russia and northern Ukraine. Exactly when Slavs began to identify as a distinct ethno-cultural unit remains a subject of debate. For example, some scholars link the phenomenon to the Zarubinets culture (c. 200 BC to AD 200), Volodymyr Baran places Slavic ethnogenesis within the Chernyakhov era, while Curta places it in the Danube basin in the sixth century CE. It is likely that linguistic affinity played an important role in defining group identity for the Slavs.
The term ''Slav'' is proposed to be an autonym referring to "people who (use the words to) speak."
Another important aspect of this period is that the Iranian dialects of the Scythians and Sarmatians had a considerable impact on the Slavic vocabulary, during the extensive contacts between those languages and early Proto-Slavic over about a millennium, and through the eventual absorption and assimilation (i.e. Slavicisation) of the Iranian-speaking Scythians, Sarmatians, and Alans in Eastern Europe by the Proto-Slavic population of the region.
Proto-Slavic (c. 400–600)
Beginning around AD 500, Slavic speakers rapidly expanded in all directions from a homeland in eastern Poland and western Ukraine. As the language expanded throughout eastern Europe, it obliterated whatever remained of easternmost Celtic, possibly Dacian, and many other Balto-Slavic dialects, and the Slav ethnonym spread out considerably. By the 8th century, Proto-Slavic is believed to have been spoken uniformly in the Slavic part of eastern Europe.
What caused the rapid expansion of Slavic remains a topic of discussion. Traditional theories link its spread to a demographic expansion of Slavs migrating radially from their ''Urheimat'', whereas more processual theories attempt to modify the picture by introducing concepts such as "elite dominance" and language shift. Literary and archaeological evidence suggests that the eastern European ''barbaricum'' in the 6th century was linguistically and culturally diverse, somewhat going against the idea of a large demographic expansion of an ethnically homogeneous Slavic people. Instead, Proto-Slavic might have been a ''lingua franca'' among the various barbarian ethnicities that emerged in the Danubian, Carpathian and steppe regions of Europe after the fall of the Hun Empire, such as the ''Sklaveni'', ''Antes'' and ''Avars''. Cultural contacts between emerging societal elites might have led to the "language of one agricultural community spread(ing) to other agricultural societies." This has been substantiated archaeologically, seen in the development of networks which spread "Slavic fibulae", artifacts representing social status and group identity. Horace Lunt argues that only as a ''lingua franca'' could Slavic have remained mutually intelligible over vast areas of Europe, and that its disintegration into different dialects occurred after the collapse of the Avar khanate. However, even proponents of this theory concede that it fails to explain how Slavic spread to the Baltic and western Russia, areas which had no historical connection with the Avar Empire. Whatever the case, Johanna Nichols points out that the expansion of Slavic was not just a linguistic phenomenon, but the expansion of an ethnic identity.
Common Slavic (c. 600–1000)
Due to incompletely understood sociocultural factors, a number of sound changes occurred that uniformly affected all later dialects even well after the Slavic-speaking area had become dialectally differentiated, for at least four or five centuries after the initial Slavic dispersion. This makes it difficult to identify a single point at which Proto-Slavic broke up into regional dialects. As a result, it is customary to speak of a "Common Slavic" period during which sound changes spread across the entire Slavic-speaking area, but not necessarily with uniform results. The Early Common Slavic period, from roughly 400 to 600, can be identified as ''Proto-Slavic proper''. The onomastic evidence and glosses of Slavic words in foreign-language texts show no detectable regional differences during this period.
During the Middle Common Slavic period, from perhaps 600 to 800, some dialectal differences existed, especially in peripheral dialects, but most sound changes still occurred uniformly. (For example, the Old Novgorod dialect did not exhibit the ''second palatalization of velars'', while all the other Slavic dialects did.) Reconstructed "Proto-Slavic" forms are normally from this period. It is thought that the distinction of long and short vowels by quality, normally reflected in "Proto-Slavic" reconstructed forms, arose during this time: Greek transcriptions from the 5th and 6th centuries still indicate Common Slavic *o as ''a''.
During the Late Common Slavic period, from c. 800 to 1000, some sound changes (e.g. the conversion of ''TORT'' sequences into open syllables and the development of the neoacute accent) still occurred across the entire Slavic area, but often in dialectally differentiated ways. In addition, migrations of Uralic- and Romance-speaking peoples into modern Hungary created geographic separations between Slavic dialects. Written documents of the ninth, tenth and eleventh centuries demonstrate some local features. For example, the Freising monuments show a dialect which contains some phonetic and lexical elements peculiar to Slovenian dialects (e.g. rhotacism, the word ''krilatec''). Significant continuous Slavic-language texts exist from this period, beginning with the extant Old Church Slavonic (OCS) texts, composed in the 9th century but copied in the 10th century. The end of the Common Slavic period is usually reckoned from the loss of weak yers, which occurred in Bulgaria c. 950 but did not reach Russia until c. 1150. This is clearly revealed in the texts themselves: during the century or so between the composition and copying of the OCS texts, the weak yers disappeared as vowels, and as a result the texts show marked instability in their representation. (The main exception is the Codex Zographensis, copied just before yer loss.) The Old East Slavic texts, on the other hand, represent the weak yers with almost complete etymological fidelity until nearly two centuries later.
The terminology of these periods is not consistent. For example, Schenker speaks only of "Early Proto-Slavonic" (= Early Common Slavic, the period of entirely uniform developments) and "Late Proto-Slavonic" (= Middle and Late Common Slavic), with the latter period beginning with the second regressive palatalization, due to the differing outcomes of pre-Proto-Slavic *x. (Note that some authors, e.g. Kortlandt, place the beginning of dialectal developments later by postulating an outcome *ś of the second regressive palatalization, which only later developed into *s or *š.) Kortlandt's chronology, on the other hand, includes six stages after the Balto-Slavic period:
#"Early Slavic" (≈ pre-Proto-Slavic)
#"Early Middle Slavic" (≈ Early Common Slavic)
#"Late Middle Slavic" (≈ Middle Common Slavic)
#"Young Proto-Slavic" (≈ first part of Late Common Slavic)
#"Late Proto-Slavic" (≈ second part of Late Common Slavic)
#"Disintegrating Slavic" (widespread post-Common-Slavic developments, e.g. loss of nasalization)
The first regressive palatalization of velars (see below) may well have operated during Early Common Slavic and is thought by Arnošt Lamprecht to have operated specifically during the 5th century. The progressive palatalization of velars, if it is older, can predate this only by 200 to 300 years at most, since it post-dates Proto-Germanic borrowings into Slavic, which are generally agreed to have occurred no earlier than the 2nd century. The monophthongization of /au/ and /ai/ is thought to have occurred near the end of Early Common Slavic or the beginning of Middle Common Slavic (c. 600), and the second regressive palatalization of velars not long afterwards. This implies that, until around the time of the earliest Slavic expansion, Slavic was a conservative language not so different from the various attested Baltic languages.
First written Slavic languages
In the second half of the ninth century, the Slavic dialect spoken north of Thessaloniki, in the hinterlands of Macedonia, became the basis for the first written Slavic language, created by the brothers Cyril and Methodius, who translated portions of the Bible and other church books. The language they recorded is known as Old Church Slavonic. Old Church Slavonic is not identical to Proto-Slavic, having been recorded at least two centuries after the breakup of Proto-Slavic, and it shows features that clearly distinguish it from Proto-Slavic. However, it is still reasonably close, and the mutual intelligibility between Old Church Slavonic and other Slavic dialects of those days was proved by Cyril and Methodius' mission to Great Moravia. There, the early South Slavic dialect used for the translations was clearly understandable to the local population, which spoke an early West Slavic dialect.
See Proto-Balto-Slavic language#Notation for much more detail on the uses of the most commonly encountered diacritics for indicating prosody (''á, à, â, ã, ȁ, a̋, ā, ă'') and various other phonetic distinctions (''ą, ẹ, ė, š, ś'', etc.) in different Balto-Slavic languages.
Two different and conflicting systems for denoting vowels are commonly in use in Indo-European and Balto-Slavic linguistics on one hand, and Slavic linguistics on the other. In the first, vowel length is consistently distinguished with a macron above the letter, while in the latter it is not clearly indicated. The following table explains these differences:
For consistency, all discussions of sounds up to (but not including) Middle Common Slavic use the common Balto-Slavic notation of vowels, while discussions of Middle and Late Common Slavic (the phonology and grammar sections) and later dialects use the Slavic notation.
Other vowel and consonant diacritics
Other marks used within Balto-Slavic and Slavic linguistics are:
* The háček on consonants (''č š ž''), indicating a "hushing" quality, as in English ''kitchen, mission, vision''.
* Various strongly palatal(ized) consonants (a more "hissing" quality in the case of sibilants), usually indicated by an acute accent (''ć ǵ ḱ ĺ ń ŕ ś ź'') or a háček (''ď ľ ň ř ť'').
* The ogonek (''ą ę ǫ''), indicating vowel nasalization (in modern standard Lithuanian this is historical only).
For Middle and Late Common Slavic, the following marks are used to indicate prosodic distinctions, based on the standard notation in Serbo-Croatian:
*Long rising (''á''): This indicates the Balto-Slavic acute accent in Middle Common Slavic only.
*Short rising (''à''): This indicates the Balto-Slavic acute accent in Late Common Slavic, where it was shortened.
*Long falling (''ȃ''): This normally indicates the Balto-Slavic circumflex accent. In Late Common Slavic, it also indicates an originally short (falling) accent that was lengthened in monosyllables. This secondary circumflex occurs only on the short vowels ''e, o, ь, ъ'' in an open syllable (i.e. one not forming part of a liquid diphthong).
*Short falling (''ȁ''): This indicates the Balto-Slavic short accent. In Late Common Slavic, this accent was lengthened in monosyllables (see preceding entry).
*Neoacute (''ã''): This indicates the Late Common Slavic neoacute accent, which was pronounced as a rising accent, usually long but short when occurring on some syllable types in certain languages. It results from retraction of the accent, i.e. the Middle Common Slavic accent fell on the following syllable (usually specifically a weak yer).
Other prosodic diacritics
There are unfortunately multiple competing systems for indicating prosody in different Balto-Slavic languages (see Proto-Balto-Slavic language#Notation for more details). The most important for this article are:
# Three-way system of Proto-Slavic, Proto-Balto-Slavic, modern Lithuanian: Acute tone (''á'') vs. circumflex tone (''ȃ'' or ''ã'') vs. short accent (''à'').
# Four-way Serbo-Croatian system, also used in Slovenian and often in Slavic reconstructions: long rising (''á''), short rising (''à''), long falling (''ȃ''), short falling (''ȁ''). In the Chakavian
dialect and other archaic dialects, the long rising accent is notated with a tilde (''ã''), indicating its normal origin in the Late Common Slavic neoacute accent (see above).
# Length only, as in Czech and Slovak: long (''á'') vs. short (''a'').
# Stress only, as in Russian, Ukrainian and Bulgarian: stressed (''á'') vs. unstressed (''a'').
Historical development up to Proto-Slavic
Split from Indo-European
Proto-Balto-Slavic shows the satem sound changes, wherein the Proto-Indo-European (PIE) palatovelar consonants became affricates or fricative consonants pronounced closer to the front of the mouth, conventionally indicated as ''*ś'' and ''*ź''. These became simple dental fricatives ''*s'' and ''*z'' in Proto-Slavic:
* *ḱ → *ś → *s
* *ǵ → *ź → *z
* *ǵʰ → *ź → *z
This sound change was incomplete, in that all Baltic and Slavic languages have instances where PIE palatovelars appear as ''*k'' and ''*g'', often in doublets (i.e. etymologically related words, where one has a sound descended from ''*k'' or ''*g'' and the other has a sound descended from ''*ś'' or ''*ź'').
Other satem sound changes are the delabialization of labiovelar consonants before rounded vowels and the ruki sound law, which shifted ''*s'' to ''*š'' after ''*r'', ''*u'', ''*k'' or ''*i''. In Proto-Slavic, this sound was shifted backwards to become ''*x'', although it was often shifted forward again by one of the three sound laws causing palatalization of velars.
In the Balto-Slavic period, certain word-final consonants were lost.
Also present in Balto-Slavic were the diphthongs *ei and *ai, as well as the liquid diphthongs *ul, *il, *ur, *ir, the latter set deriving from syllabic liquids; the vocalic element merged with *u after labiovelar stops and with *i everywhere else, and the remaining labiovelars subsequently lost their labialization.
Around this time, the PIE aspirated consonants merged with the plain voiced ones:
* *bʰ → *b
* *dʰ → *d
* *gʰ → *g
Once it split off, the Proto-Slavic period probably encompassed a period of stability lasting 2,000 years with only several centuries of rapid change before and during the breakup of Slavic linguistic unity that came about due to Slavic migrations in the early sixth century. As such, the chronology of changes including the three palatalizations and ending with the change of *ě to *a in certain contexts defines the Common Slavic period.
Long *ē and *ō raised to *ī and *ū before a final sonorant, and sonorants following a long vowel were deleted. Proto-Slavic shared the common Balto-Slavic merging of *o with *a. However, while long *ō and *ā remained distinct in Baltic, they merged in Slavic (after the previous change), so that early Slavic did not possess the sounds *o or *ō.
Elimination of syllable codas
A tendency for rising sonority in a syllable (arrangement of phonemes in a syllable from lower to higher sonority) marks the beginning of the Common Slavic period. One aspect of this, generally referred to as the "Law of Open Syllables", led to a gradual elimination of closed syllables. When possible, consonants in the coda were resyllabified into the onset of the following syllable. For example, *kun-je-mou "to him" became *ku-nje-mou (OCS ''kъňemu''), and *vuz-dā-tēi "to give back" became *vu-zdā-tēi (OCS ''vъzdati''). This entailed no actual phonetic change, but simply a reinterpretation of syllable boundaries, and was possible only when the entire cluster could begin a syllable or word (as in *nj, *zd, *stv, but not *nt, *rd, *pn).
When the cluster was not permissible as a syllable onset, any impermissible consonants were deleted from the coda; this eliminated, for example, the impermissible onset ''pn-''. With regard to clusters of stop + sonorant, not all Slavic languages show the same outcome. The cluster ''*dl'' is preserved in West Slavic, but simplified to ''*l'' in East and South Slavic. Some verbs appear with the cluster ''gn'' intact in South and West Slavic, while it is simplified to ''n'' in East Slavic; others preserve the cluster ''dn'' only in Czech and Slovak, simplifying it to ''n'' elsewhere.
As part of this development, diphthongs were monophthongized, and nasal consonants in the syllable coda were reduced to nasalization of the preceding vowel. Liquid diphthongs were eliminated in most Slavic languages, but with different outcomes in different languages.
After these changes, a CV syllable structure (that is, one with segments ordered from lower to higher sonority) arose, and the syllable became a basic structural unit of the language.
Another tendency arose in the Common Slavic period wherein successive segmental phonemes in a syllable assimilated articulatory features (primarily place of articulation). This is called ''syllable synharmony'' or ''intrasyllabic harmony''. Thus syllables (rather than just the consonant or the vowel) were distinguished as either "soft" (palatal) or "hard" (non-palatal). This led to consonants developing palatalized allophones in syllables containing front vowels, resulting in the first regressive palatalization. It also led to the fronting of back vowels after /j/.
Syllable-final nasals *m and *n (i.e. when not directly followed by a vowel) coalesced with a previous vowel, causing it to become nasalized (indicated with an ogonek diacritic below the vowel):
The nasal element of *im, *in, *um, *un is lost word-finally in inflectional endings, and therefore does not cause nasalization.
Examples showing these developments:
The nasalization of *ų̄ was eventually lost. However, when *ų̄ followed a palatal consonant such as /j/ (indicated generically as *J), it was fronted to *į̄, which preserved its nasalization much longer. This new *į̄ did not originally merge with the result of nasalizing original *im/*in, as shown in the table. Instead, it evolved in Common Slavic times to a high-mid nasal vowel *ę̇, higher than the low-mid vowel *ę. In South Slavic, these two vowels merged as *ę. Elsewhere, however, *ę̇ was denasalized, merging with *ě, while *ę was generally lowered to *æ̨ (often reflected as ''ja''). Common Slavic *''desętyję̇ koňę̇'' "the tenth horses (accusative plural)" appears as ''desętyję koňę'' in Old Church Slavonic and ''desete konje'' in Serbo-Croatian (South Slavic), but as ''desáté koně'' in modern Czech and ''dziesiąte konie'' in Polish (West Slavic), and as ''десятые кони'' (''desjatyje koni'', nominative plural) in Russian (East Slavic). Note that Polish normally preserves nasal vowels, but it does not have a nasal vowel in the accusative plural ending, while it retains it in the stem of "tenth".
Nasalization also occurred before a nasal consonant, whenever a vowel was followed by two nasals. However, in this case, several later dialects denasalized the vowel at an early date. Both ''pomęnǫti'' and ''poměnǫti'' "remember" (from earlier *pa-men-nantī?) are found in Old Church Slavonic. The common word *''jĭmę'' "name" can be traced back to earlier *''inmen'' with denasalization, from a PIE zero grade alternant *''h₁n̥h₃mén-''.
First regressive palatalization
As an extension of the system of syllable synharmony, velar consonants were palatalized to postalveolar consonants before front vowels (*i, *ī, *e, *ē) and before *j:
* *k → *č
* *g → *dž → *ž
* *x → *š
* *sk → *šč
* *zg → *ždž
This was the first regressive palatalization. Although *g palatalized to an affricate, this soon lenited to a fricative (but *ždž was retained).
Some Germanic loanwords were borrowed early enough to be affected by the first palatalization. One example is *šelmŭ, from earlier *xelmŭ, from Germanic *helmaz.
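As an illustrative aside (not part of the original article): the single-consonant correspondences above can be expressed as a toy string rewrite. The function name, the flat segment-by-segment treatment, and the simplified triggering environment are our own assumptions for the sketch; the cluster outcomes *sk → *šč and *zg → *ždž, relative chronology, and allophonic detail are deliberately ignored.

```python
# First regressive palatalization (simplified): velars shift
# before front vowels or *j. *g first gave *dž, later lenited to *ž;
# only the end result is encoded here.
SHIFT = {"k": "č", "g": "ž", "x": "š"}
FRONT = set("iīeē") | {"j"}           # triggering environments

def first_palatalization(word):
    """Apply the k/g/x shift wherever the next segment is front or *j."""
    out = []
    for i, ch in enumerate(word):
        nxt = word[i + 1] if i + 1 < len(word) else ""
        out.append(SHIFT[ch] if ch in SHIFT and nxt in FRONT else ch)
    return "".join(out)

# The Germanic loan *xelmŭ (from *helmaz) comes out as *šelmŭ:
print(first_palatalization("xelmŭ"))   # šelmŭ
```

A velar before a back vowel is untouched, so a form like *kun- passes through unchanged, matching the conditioning stated above.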
In a process called ''iotation'' or ''yodization'', *j merged with a preceding consonant (unless it was labial), and those consonants acquired a palatal articulation; compare English yod-coalescence. This change probably did not occur together with the first regressive palatalization, but somewhat later, and it remained productive well into the Late Common Slavic period.
* *tj → *ť
* *dj → *ď
* *stj → *šť (→ presumably šč)
* *zdj → *žď (→ presumably ždž)
* *sj → *š
* *zj → *ž
* *lj → *ľ
* *nj → *ň
* *rj → *ř
The combinations *gt and *kt merged into *ť in Proto-Slavic times and show outcomes identical to *ť in all languages. This combination occurred in a few lexical items (*dъťi "daughter" < *dъkti, *noťь "night" < *noktь), but also occurred in infinitives of verbs with stems ending in -g and -k, which would have originally ended in *-gti and *-kti. This accounts for the irregular infinitive ending of some verbs, such as Polish ''móc'' and Russian ''мочь'' from Proto-Slavic *moťi < *mog-ti, where these languages normally have infinitives in ''-ć'' and ''-ть'' respectively.
In the case of the palatal consonants that had resulted from the first regressive palatalization, the *j simply disappeared without altering the preceding consonant:
* *čj → *č
* *(d)žj → *(d)ž
* *šj → *š
* *ščj → *šč
* *ždžj → *ždž
In both East and South Slavic, labial consonants (*m, *b, *p, *v) were also affected by iotation, acquiring a lateral off-glide *ľ:
* *mj → mľ
* *bj → bľ
* *pj → pľ
* *vj → vľ
Many researchers believe that this change actually occurred throughout Proto-Slavic and was later 'reversed' in West Slavic and in most dialects of the Eastern subgroup of South Slavic languages (Macedonian, and the transitional Torlakian dialect) by analogy with related word forms lacking the lateral. The Codex Suprasliensis, for example, has a form continuing *zemьja rather than *zemja (i.e. an intrusive *ь where East and South Slavic languages have *ľ); compare:
* *zemja (→ *zemľa) → *zemьja →
** Bulgarian: земя
** Macedonian: земја
** Polish: ziemia
** Torlakian: zemja
Some Northern Macedonian dialects, however, acquired an *n instead (e.g. in the reflex of *zemja).
A few words with etymological initial *bj- and *pj- are reflected as *bľ- and *pľ- even in West Slavic:
* *pľьvàti "to spit" < PIE *(s)pieHu-, cf. Lithuanian ''spjáuti''.
* *bľustì "to watch, to perk up" (1sg. *bľudǫ̀) < PIE *bʰeudʰ-.
Syllabic synharmony also worked in reverse, and caused the palatal articulation of a consonant to influence a following vowel, turning it from a back vowel into a front vowel. There were two sources for this process. The first was a preceding *j or a consonant that had undergone iotation. The second was the progressive palatalization (see below), which produced new palatal consonants before back vowels. The result of this fronting was as follows (with J acting as a cover symbol for any consonant with a palatal articulation):
* *Ja → *Je
* *Jā → *Jē
* *Ju → *Ji
* *Jū → *Jī
* *Jai → *Jei (→ *Jī)
* *Jau → *Jeu (→ *Jū)
* *Jų̄ → *Jį̄ (→ *Ję̇)
Towards the end of the Late Common Slavic period, an opposing change happened, in which long *Jē was ''backed'' to *Jā. This change is normally identified with the end of the tendency for syllabic synharmony.
Vowel fronting clearly preceded monophthongization, in that the outputs *Jei, *Jeu were later affected by monophthongization just as original *ei, *eu were. However, there is no guarantee that vowel fronting followed the progressive palatalization, despite the fact that the output of the latter process was affected by vowel fronting. The reason is that the rule triggering vowel fronting may well have operated as a surface filter, i.e. a rule that remained part of the grammar for an extended period of time, operating automatically on any new palatal consonants as they were produced.
Vowel fronting did not operate on the low nasal vowel *ą (later *ǫ), cf. Old Church Slavonic ''znajǫ'' "I know". However, it did operate on the high nasal vowel *ų, leading to alternations, e.g. Old Church Slavonic accusative plural ''raby'' "slaves" (< *-ų̄) vs. ''koňę'' "horses" (< *-jį̄ < *-jų̄). See the section on nasalization for more discussion.
During the Common Slavic period, prothetic glides were inserted before words that began with vowels, consistent with the tendency for rising sonority within a syllable. These cases merged with existing word-initial sequences of glide + vowel, and show the same outcome in the later languages. *v was inserted before rounded vowels (*u, *ū), *j before unrounded vowels (*e, ē, *i, *ī). Not all vowels show equal treatment in this respect, however. High vowels generally have prothesis without exception in all Slavic languages, as do *e, *ě and nasal *ę:
* *i- > *ji- (> *jь-)
* *ī- > *jī- (> *ji-)
* *u- > *wu- (> *vъ-)
* *ū- > *wū- (> *vy-)
* *e- > *je-
* *ę- > *ję-
* *ē- > *jē- (> *jě- or *ja-)
In later Slavic, ''*jь-'' and ''*ji-'' appear to have merged, and both are reflected as simple ''i-'' in many modern Slavic languages. In Common Slavic itself, however, they were still distinguished by length for the purpose of intonation. The sequence ''*ji-'' could belong to accent paradigm ''a'', while the sequence ''*jь-'' could not.
Prothesis generally did not apply to short *a (which developed into *o or nasal *ǫ), although some East Slavic dialects seem to have developed it regardless. There seems to have been some uncertainty concerning the interpretation of long *ā as a rounded or unrounded vowel. Prothesis seems to have applied intermittently to it. When it does apply, *ā- > *jā- is frequent, but *ā- > *vā- is also found.
The old diphthongs ''*ei-'' and ''*ai-'' develop the same as ''*ī-'' and ''*ē-'' respectively, although ''*ai-'' never develops into ''*ja-''. The diphthong ''*au-'', later ''*u-'', mostly resists prothesis, but some cases also show ''*ju-''.
Monophthongization and other vowel changes
*ū lost its labialization (becoming an unrounded vowel, represented hereafter as ''y'', as in modern Polish), but not before prothesis occurred, as prothesis of *v before unrounded *y seems unlikely. This was closely followed by the monophthongization of diphthongs in all environments, in accordance with the law of open syllables. Following this change, short *a acquired non-distinctive rounding (probably in the first instance), and is denoted as *o from this point onwards.
* *ū → *ȳ → y
* *au, *eu → *ū
* *ei → *ī
* *ai → *ē or *ī
* *a → *o
In many common grammatical forms, such as the nominative plural of o-stems, the second person imperative, the second singular of athematic verbs, and the dative singular of the clitic personal pronouns, *ai became *ī.
Second regressive palatalization
Proto-Slavic had acquired front vowels, ē (possibly an open front vowel) and sometimes ī, from the earlier change of *ai to *ē/ī. This resulted in new sequences of velars followed by front vowels, where they did not occur before. Additionally, some new loanwords also had such sequences.
However, Proto-Slavic was still operating under the system of syllabic synharmony. Therefore, the language underwent the second regressive palatalization, in which velar consonants preceding the new (secondary) phonemes *ē and *ī, as well as *i and *e in new loanwords, were palatalized.
As with the progressive palatalization, these became palatovelar. Soon after, palatovelar consonants from both the progressive palatalization and the second regressive palatalization became sibilants:
* *ḱ → *c
* *ǵ → *dz (→ *z in most dialects)
* *x́ → *ś → *s/*š
In noun declension, the second regressive palatalization originally figured in two important Slavic stem types: o-stems (masculine and neuter consonant-stems) and a-stems (feminine and masculine vowel-stems). This rule operated in the o-stem masculine paradigm in three places: before nominative plural and both singular and plural locative affixes.
Progressive palatalization
An additional palatalization of velar consonants occurred in Common Slavic times, formerly known as the ''third palatalization'' but now more commonly termed the ''progressive palatalization'' due to uncertainty over when exactly it occurred. Unlike the other two, it was triggered by a ''preceding'' vowel, in particular a preceding *i or *ī, with or without an intervening *n.
Furthermore, it was probably disallowed before consonants and the high back vowels *y, *ъ. The outcomes are exactly the same as for the second regressive palatalization, i.e. alveolar rather than palatoalveolar affricates, including the East/West split in the outcome of palatalized *x:
* *ḱ → *c
* *ǵ → *dz (→ *z in most dialects)
* *x́ → *ś → *s/*š
* *atiku(s) "father" (nom. sg.) → *aticu(s) → (with vowel fronting) Late Common Slavic *otьcь
* Proto-Germanic *kuningaz "king" → Early Common Slavic *kuningu(s) → Late Common Slavic *kъnędzь
* *vixu(s) "all" → *vьśь → *vьšь (West), *vьsь (East and South)
There is significant debate over when this palatalization took place and the exact contexts in which the change was phonologically regular. The traditional view is that this palatalization took place just after the second regressive palatalization (hence its traditional designation as the "third palatalization"), or alternatively that the two occurred essentially simultaneously. This is based on the similarity of the development to the second regressive palatalization and examples like *atike "father" (voc. sg.) → *otьče (not *otьce) that appear to show that the first regressive palatalization preceded the progressive palatalization.
A dissenting view places the progressive palatalization before one or both regressive palatalizations. This is an older view that has been continued by more recent scholars, including Lunt. Lunt's chronology places the progressive palatalization first of the three, in the process explaining both the occurrence of *otĭče and the identity of the outcomes of the progressive and second regressive palatalizations:
# Progressive palatalization: *k > *ḱ (presumably a palatal stop) after *i(n) and *j
# First regressive palatalization: *k/*ḱ > *č before front vowels
# Fronting of back vowels after palatal consonants
# Monophthongization of diphthongs
# Second regressive palatalization: *k/*ḱ > *c before front vowels
(similarly for *g and possibly *x)
Significant complications to all theories are posed by the Old Novgorod dialect, known particularly since the 1950s, which has no application of the second regressive palatalization and only partial application of the progressive palatalization (to *k and sometimes *g, but not to *x).
More recent scholars have continued to argue in favor of the traditional chronology, and there is clearly still no consensus.
The three palatalizations must have taken place between the 2nd and 9th century. The earlier date is the earliest likely date for Slavic contact with Germanic tribes (such as the migrating Goths), because loanwords from Germanic (such as *''kъnędzь'' "king" mentioned above) are affected by all three palatalizations. On the other hand, loanwords in the early historic period (c. 9th century) are generally not affected by the palatalizations. For example, the name of the Varangians, from Old Norse ''Væringi'', appears in Old East Slavic as варѧгъ ''varęgъ'', with no evidence of the progressive palatalization (had it followed the full development as "king" did, the result would have been **''varędzь'' instead). The progressive palatalization also affected vowel fronting; it created palatal consonants before back vowels, which were then fronted. This does not necessarily guarantee a certain ordering of the changes, however, as explained above in the vowel fronting section.
The Baltic languages, as well as conservative Slavic languages like Serbo-Croatian, have a complex accentual system with short and long vowels in all syllables, a free pitch accent that can fall on any syllable, and multiple types of pitch accent. (Vowel length is normally considered a separate topic from accent, but in the Slavic languages in particular, the two are closely related, and are usually treated together.) Not surprisingly, the historical development of accent in the Slavic languages is complex and was one of the last areas to be clearly understood. Even now, there is no complete consensus.
The Balto-Slavic languages inherited from PIE a free, mobile pitch accent:
#There was (at most) a single accented syllable per word, distinguished by higher pitch (as in e.g. Mohawk) rather than greater dynamic stress (as in English).
#The accent was ''free'' in that it could occur on any syllable, and was phonemic (i.e. its position could not be automatically predicted).
#The accent was ''mobile'' in that its position could potentially vary among closely related words within a single paradigm.
In inflectional paradigms, Proto-Slavic inherited the distinction between fixed-accented and mobile-accented paradigms from Proto-Balto-Slavic.
Acute, pitch and vowel length
Proto-Balto-Slavic "long" syllables could have an additional feature known as "acute". This feature was inherited by Proto-Slavic, and was still present on all syllables throughout the Middle Common Slavic period. At this time, this distinction could occur on the following syllable types:
* Those containing the long vowels *a *ě *i *u *y.
* Those containing the nasal vowels *ę *ǫ.
* Those containing a liquid diphthong.
When accented, acuted vowels developed a rising intonation, while non-acuted long vowels became falling in pitch. Short vowels, i.e. the vowels *e *o *ь *ъ, did not have distinctive intonations, but developed different pitch contours in different positions in the word. In the first syllable of the word, the pitch was falling, while in non-initial syllables the pitch was rising.
The development of vowel length in Proto-Slavic remains controversial, with different linguists and linguistic schools holding different positions on the matter. Traditionally, it is held that Late Common Slavic retained the original distribution of short and long vowels, as it was inherited from Proto-Balto-Slavic. Under this position, vowel length was an automatic consequence of vowel quality, with *e *o *ь *ъ being always short, and all other vowels, including nasal vowels and liquid diphthongs, being always long. The decoupling of length from quality is ascribed to the post-Common Slavic period.
Linguists of the Leiden accentological school, on the other hand, posit accentual changes that disrupted the original distribution of length, so that length became independent of quality. The most important early changes are:
# The loss of the acute feature in all syllables, except in accented syllables and syllables that immediately followed the accent. The length of these syllables was retained.
# The loss of the acute feature in syllables immediately following the accent, this time with shortening of the vowel.
# Loss of all length distinctions in syllables preceding the accent.
# Shortening of acuted accented syllables. The acute feature was converted into short rising pitch contour, while non-acuted long syllables received a long falling intonation.
# Van Wijk's law: Lengthening of vowels (except for yers and nasal vowels) following palatal consonants. This led to the increased occurrence of long vowels in the endings of ''jā'' and ''jo'' stems, which had consequences for Ivšić's law. Some of these long vowels were later shortened by analogy, especially in endings that were unstressed in the mobile paradigm.
# Loss of ''*j'' between two unaccented vowels, resulting in contraction of the adjacent syllables into a long vowel. This occurred only in some languages, especially Czech, and did not occur at all in Russian. This, again, affected Ivšić's law, which retracted the accent from these contracted long vowels but not from the uncontracted vowels.
# Eventual loss of length in final syllables in most languages. However, the former long vowels are reflected to some extent in Slovene and Serbo-Croatian, and more directly by the neo-circumflex accent in Slovene, which developed early on from former acute-register syllables when followed by a long syllable or internal yer.
According to Meillet's law, words with a mobile accent paradigm lost the acute feature in the first syllable of the word, if there was one. Such words consequently do not show any difference in intonation in forms where the accent is on the first syllable; the pitch is always falling. Where the accent is on a non-initial syllable, the distinction is maintained.
Dybo's law was the first of the major accent shifts in Proto-Slavic. In fixed-accent inflectional paradigms, non-acute syllables (both short and long) lost the accent to the following syllable. This caused a split in the fixed-accented paradigms, between the acuted "accent paradigm ''a''", which retained the accent on the stem of the word, and the non-acuted "accent paradigm ''b''", where the accent had shifted onto the inflectional ending.
In the traditional interpretation, the newly-accented syllable retained its acuteness until this point, and became rising or falling in pitch accordingly. Following the Leiden school, a formerly accented long syllable remained distinctively long, resulting in new long vowels before the accent. Newly accented long vowels gained a falling tone, while short vowels (whether originally short or shortened acute) received a rising tone.
Dybo's law occurred before the loss of ''*j'' between unstressed vowels, and could shift the accent onto the resulting long vowel. The accent would then be retracted again by Ivšić's law.
Havlík's law, Ivšić's law and the neoacute accent
During the Late Common Slavic period, the short close vowels *ь *ъ (known as yers) developed into "strong" and "weak" variants according to Havlík's law. The weak variants could no longer be accented, and if they were accented before, the accent was retracted onto the preceding syllable if there was one. This change is known as Ivšić's law or Stang's law. The newly-accented syllable gained a new type of rising accent, termed the ''neoacute''.
* Early Slavic *sȃndu(s) "court of law, trial" > Middle Common Slavic *sǫ̂dъ > MCS *sǫdъ̀ (by Dybo's law) > Late Common Slavic *sǫ̃dъ (= *sǫ́dъ) > Čakavian (Vrgada) ''sũd'' (G sg ''sūdȁ''), Russian ''sud'' (G sg ''sudá'').
The neoacuted vowel could be either short or long, depending on the original length of the syllable before the retraction. The short neoacute is denoted with a grave accent (''ò''), while the long neoacute is variously written with an acute accent (''á'', following Serbo-Croatian and Slovene notation) or with a tilde (''ã'', following Chakavian notation). In West Slavic (except southern Slovak), short ''e'' and ''o'' gaining the neoacute were automatically lengthened.
Retraction also occurred from long falling ("circumflex") vowels, such as in the following cases:
# In verbs with a present tense in ''*-i(tь)'', e.g.:
#* MCS *nosȋ(tь) "s/he carries" > *nòsi(tь) > Russian но́сит ''nósit''
# From a vowel immediately preceded by an original *j, i.e. where Van Wijk's law operated:
#* PSl. *venzjè(ti) "s/he ties" > MCS *vęžè(tь) > LCS *vę̃že(tь) > Russian вя́жет ''v'ážet''
#* MCS ''*voljà'' "will" > ''*vol'à'' > LCS ''*võl'a'' > Russian dialectal ''vôlja''
Ivšić's law produced different results in different Slavic dialects. In languages that show long vowels through loss of ''*j'', followed by a shift of the accent onto the long vowel by Dybo's law, the accent is retracted again by Ivšić's law. In languages that retain ''*j'', the accent is shifted forward by Dybo's law, but then remains there if the vowel is short.
After these changes, falling pitch could only occur on the first syllable of the word, where it contrasted with rising pitch. In non-initial syllables, all accented syllables were rising in pitch. The complicated accentual patterns produced by Ivšić's law were levelled to some degree already within Common Slavic. In ''jā''-stems this resulted in neoacute on the stem in all forms, and in ''jo''-stems in all plural forms.
Cisco Networking Basics: IP Addressing
Posted on: April 12, 2020, by Admin
An IP address is a 32-bit identifier that uniquely identifies an endpoint on an IP network. Remembering a 32-bit binary address would be a nightmare, so the address is represented in dotted decimal notation. An IP address is a logical identifier for an interface that is connected to the network. Two versions of IP are currently in use: IPv4 and IPv6. We will focus on IPv4 in this guide.
As discussed in the previous chapters, Internet Protocol (IP) is a layer 3 protocol. Recall that a primary function of layer 3 is routing of packets across different subnets. Every interface that is connected to the network needs to have an IP address for identification. Since IP addressing design is critical to any network, we will start with a quick recap of IP addressing and then delve into the considerations for IP address planning for networks.
Firstly, the 32 bits are grouped into four octets of 8 bits each. Secondly, the IP address is represented in dotted decimal notation, meaning that the four octets are separated by dots. Thirdly, each octet is converted into a decimal number for easier identification, and the IP address takes the form A.B.C.D.
The following figure illustrates the steps in converting the 32 bits of an IP address into the familiar dotted decimal notation:
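The conversion the figure describes can also be sketched in Python; the 32-bit value below is an arbitrary example of ours, not one from the text:

```python
def to_dotted_decimal(addr32):
    """Convert a 32-bit IP address value to dotted decimal notation."""
    # Step 1: group the 32 bits into four 8-bit octets, left to right
    octets = [(addr32 >> shift) & 0xFF for shift in (24, 16, 8, 0)]
    # Steps 2 and 3: convert each octet to decimal and join with dots
    return ".".join(str(octet) for octet in octets)

# 0xC0A80A01 is the bit pattern 11000000 10101000 00001010 00000001
print(to_dotted_decimal(0xC0A80A01))  # 192.168.10.1
```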
An IP address is a logical address for the network layer of the host connected to the network. Note that each interface of the host has an IP address, and if a host has two interfaces connected to two different networks, it will have two different IP addresses, one for each interface. The host may also have an address for a logical interface, which is different from a physical interface. As an example, most devices on a network will have a logical interface configured as a loopback interface. This is purely a logical interface with no physical interface mapping. This is done to identify the device on the network through the logical interface, as that interface would never go down as long as any one interface on the device is connected to the network and the TCP/IP stack on the device works normally.
Since an IP address is a logical address, it is easy to build a hierarchy into the addressing schema, which is required for routing packets from one network to another. All interfaces connected to one network share a common network number that is a logical identifier for the network. This network number is embedded in the IP address itself, and can be derived from the IP address using the network mask. Another way of looking at the network number being embedded in the IP address is to look at the IP address as a combination of the network number and host number. The 32 bits of the IP address are divided into two parts: the Network Bits on the left, and the Host Bits on the right:
The number of bits that denote the network part is represented as a value called the network mask. The network mask is denoted as a number between 1 and 32 after a / sign. This is appended to the IP address in dotted decimal notation. As an example, since the number of network bits in the preceding example is 16, the IP address will be written as 18.104.22.168/16. This means that the IP address is 18.104.22.168 and the network mask is /16.
Sometimes the network mask is also represented in dotted decimal notation. To get the dotted decimal notation of the network mask, write the first n bits of the 32 bits as 1, and the remaining trailing bits as 0. Then, convert the resultant 32-bit number into dotted decimal notation. We use the method shown in the following figure to convert the network mask of /16 into dotted decimal notation:
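The same n-ones-then-zeros procedure can be expressed as a small Python helper (our own illustration, not part of the original article):

```python
def mask_to_dotted(prefix_len):
    """Build the dotted decimal form of a /prefix_len network mask."""
    # Write prefix_len ones followed by (32 - prefix_len) zeros
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    return ".".join(str((mask >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(mask_to_dotted(16))  # 255.255.0.0
print(mask_to_dotted(26))  # 255.255.255.192
```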
If we look only at the network bits and replace all the host bits with a 0, the resultant 32-bit representation is called the network identifier. For the IP address example that we considered earlier, and assuming the network mask is /16, the network identifier is as derived in the following figure:
The network identifier can also be derived by masking the IP address with the network mask in dotted decimal notation. To mask the IP address with the subnet mask, just do a logical AND operation bit by bit for all 32 bits. Note that a logical AND between a binary 1 and a binary x is x itself, and the result of the AND operation between a binary 0 and a binary x is always 0. Hence, the logical AND operation over the 32 bits will get us the same result, as shown in the following figure:
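The bitwise AND can be demonstrated with Python's standard `ipaddress` module; the address and /16 mask below are an example of ours:

```python
import ipaddress

# Example address (our own illustration) with a /16 mask
addr = int(ipaddress.IPv4Address("172.16.35.7"))
mask = (0xFFFFFFFF << (32 - 16)) & 0xFFFFFFFF   # /16 -> 255.255.0.0

# Bitwise AND of the address and the mask yields the network identifier
network_id = ipaddress.IPv4Address(addr & mask)
print(network_id)  # 172.16.0.0
```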
All hosts on the same layer 3 network have the same network number or ID. Therefore, if a network has a mask of /n, the first n bits of the IP address are the network bits. The remaining (32-n) bits of the 32-bit IPv4 address represent the host part of the address. Since IP addresses are unique, the maximum number of host addresses in a subnet is 2^(32-n). The first and last addresses of the range have a special purpose. The first address is the one where all the host bits are 0 and, as we discussed earlier, this number represents the network identifier for the network. If all the host bits of an IP address are set to 1, we get an address that is called the IP broadcast address or broadcast ID for the network, which represents the collection of all hosts/interfaces on the network. Hence, the maximum number of hosts/interfaces on a subnet is 2^(32-n) - 2. The following figure explains this with the IPv4 example:
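The host-count arithmetic can be checked with a tiny Python function (the helper name is ours):

```python
def usable_hosts(prefix_len):
    """Usable host addresses in a subnet with the given mask length."""
    # 2^(32-n) total addresses, minus the network ID and broadcast address
    return 2 ** (32 - prefix_len) - 2

print(usable_hosts(24))  # 254
print(usable_hosts(16))  # 65534
```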
When IP addresses were formalized, IP addressing was classified into five classes, A through E, as shown in the following figure. Classes A, B, and C were used for user addressing, while class D was used for multicast addressing, and Class E addresses were reserved for experimental use. The identification of the address was based on the higher order bits in the binary representation of the address. If the highest order bit was 0, the address was a class A address. If the highest order 2 bits were 10, the address was a class B address, and if the highest order 3 bits were 110, it was a class C address. Similarly, the highest order 4 bits for class D and class E addresses were 1110 and 1111 respectively.
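The high-order-bit test described above can be sketched in Python; this classifier is our own illustration:

```python
def ip_class(first_octet):
    """Classify an IPv4 address by the high-order bits of its first octet."""
    if first_octet >> 7 == 0b0:     # 0xxxxxxx
        return "A"
    if first_octet >> 6 == 0b10:    # 10xxxxxx
        return "B"
    if first_octet >> 5 == 0b110:   # 110xxxxx
        return "C"
    if first_octet >> 4 == 0b1110:  # 1110xxxx (multicast)
        return "D"
    return "E"                      # 1111xxxx (experimental)

print(ip_class(10), ip_class(172), ip_class(192))  # A B C
```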
The rationale was based on the fact that there would be large, medium, and small networks, and hence three different classes of address, namely A, B, and C, were devised accordingly. The class A addresses were for the largest networks, where the number of hosts would be very large. It was assumed that very few networks of such types would exist on the internet. Accordingly, 8 bits were reserved for the network part of a class A address, and the remaining 24 bits were for hosts. Since the first bit was already set to 0, this meant that there could be 128 such networks (2^(8-1)), and each network could have up to 2^24 - 2 hosts. This can also be interpreted as class A networks having a network mask of /8 or 255.0.0.0.
Similarly, the number of network bits in a class B address, for medium-sized networks, was 16, leaving 16 bits for the host part. Class C networks had 24 bits reserved for the network and 8 for the hosts:
The following table summarizes the hosts and networks for the different classes of address:
The address structure discussed so far is generally referred to as classful addressing. This format of addressing had severe limitations as IP networks expanded rapidly. Since the number of networks was finite, the demand for IP addresses far outstripped what was available.
Also, the smallest networks had 254 usable host addresses. Even if there were only 10 hosts on such a network, the remaining addresses could not be used anywhere else, because network numbers had to be unique; this led to a lot of wastage of the available IP addresses. This wastage became a big cause for concern, and the industry started looking for ways to reduce it and to use the available IP address space efficiently. This led to the introduction of classless addressing, in which the concept of bucketing all networks into small, medium, and large, and hence allocating class C, B, or A addresses, was done away with.
In classless addressing, the number of network bits was not fixed at 8, 16, or 24 as in classful addressing; instead, the number of bits reserved for the network (the network mask) could be any number from 1 to 32. Hence, networks could be partitioned into smaller networks called subnets, and the utilization of IP addresses improved drastically. This meant that any number of bits could be reserved for the host part, and the remaining bits would be the network bits. Hence each network could have a number of addresses as granular as a power of 2. For example, if there were 4 host bits, there could be 2^4 = 16 addresses; similarly, if there were 5 host bits, there could be 2^5 = 32 addresses. Note that the usable addresses would still be 2 less than the actual number, one each being reserved for the network ID and the broadcast address.
As an example, consider a situation where there were four different LAN segments or networks that had 40 hosts each to be connected. If we had followed classful addressing, we would have assigned a class C address to each of the 4 LAN segments and utilized only 40 addresses out of the 254 available for use in each segment leading to a huge wastage of addresses. In classless addressing, the addressing is done based on the requirement of addresses.
In this example, since we need 40 addresses per network and we can allot addresses in powers of 2 (2, 4, 8, 16, 32, 64, 128, 256, and so on), the minimum block that fulfils the requirement of 40 addresses is 64 addresses, which requires 6 bits for the hosts. Hence, from one class C-sized (/24) network, we can fulfil the requirement of all four such networks. Let's assume that the address block 126.96.36.0/24 was allocated to us.
The same address block can be subnetted into four smaller subnets, one for each subnetwork. Since the address block allocated to us has a subnet mask of /24, the first 24 bits are fixed. Now we need 40 addresses per LAN segment and, as discussed earlier, 6 host bits for each subnetwork. The number of bits that we can use as subnet bits is 8 - 6 = 2. Using these 2 bits, we can have 2^2 or 4 subnets. This is shown in the following figure for the address block 126.96.36.0/24. Note that the subnets now have a subnet mask of /26 or 255.255.255.192, because each subnet now has 26 bits that represent the subnet:
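The split of a /24 into four /26 subnets can be reproduced with the standard `ipaddress` module; the block below is an illustrative one of ours, since the article's own example addresses appear garbled:

```python
import ipaddress

# Illustrative /24 block (not the article's original example)
block = ipaddress.ip_network("192.168.10.0/24")

# new_prefix=26 borrows 2 host bits, yielding 2^2 = 4 subnets of 64 addresses
for subnet in block.subnets(new_prefix=26):
    print(subnet, "->", subnet.num_addresses - 2, "usable hosts")
```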
Note that in the new scheme of addressing, the subnet mask is not fixed at octet boundaries, but can have any value as long as it can be represented as a continuous string of binary 1s followed by a continuous string of binary 0s, with the total number of bits being 32. This concept of splitting subnets to create smaller subnets, where each subnet can have a different subnet mask depending upon the number of addresses required in it, is called Variable Length Subnet Masking (VLSM).
The concept of subnetting led to a large number of prefixes on the network. This led to an increase in the resources required for the routing tables that stored these prefixes. To overcome the effect of the increase in the number of prefixes, the standards defined a new concept called supernetting, which aggregated smaller prefixes into a fewer number of prefixes with a smaller netmask. We will discuss this later in the routing section.
An earthquake (also known as a quake, tremor or temblor) is the result of a sudden release of energy in the Earth's crust that creates seismic waves. The seismicity, seismism or seismic activity of an area refers to the frequency, type and size of earthquakes experienced over a period of time.
Earthquakes are measured using observations from seismometers. The moment magnitude is the most common scale on which earthquakes larger than approximately 5 are reported for the entire globe. The more numerous earthquakes smaller than magnitude 5 reported by national seismological observatories are measured mostly on the local magnitude scale, also referred to as the Richter scale. These two scales are numerically similar over their range of validity. Earthquakes of magnitude 3 or lower are mostly imperceptible or weak, while those of magnitude 7 and over can potentially cause serious damage over large areas, depending on their depth. The largest earthquakes in historic times have been of magnitude slightly over 9, although there is no limit to the possible magnitude. The most recent large earthquake of magnitude 9.0 or larger was the 9.0 magnitude earthquake in Japan in 2011 (as of October 2012), the largest Japanese earthquake since records began. Intensity of shaking is measured on the modified Mercalli scale. The shallower an earthquake, the more damage it causes to structures, all else being equal.
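The moment magnitude mentioned above is defined from the seismic moment M0 by the standard relation Mw = (2/3)(log10 M0 − 9.1), with M0 in newton-metres. A minimal sketch; the moment value used for the 2011 Japan earthquake is an approximate published figure, quoted here for illustration:

```python
import math

def moment_magnitude(m0_newton_meters: float) -> float:
    """Moment magnitude Mw from seismic moment M0 (N*m),
    per the standard definition Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# The 2011 Japan earthquake released a seismic moment of roughly 3.9e22 N*m.
print(round(moment_magnitude(3.9e22), 1))  # 9.0
```

Because of the 2/3 factor on a base-10 logarithm, each whole unit of magnitude corresponds to roughly a 31.6-fold increase in seismic moment.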
At the Earth's surface, earthquakes manifest themselves by shaking and sometimes displacement of the ground. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can also trigger landslides, and occasionally volcanic activity.
In its most general sense, the word earthquake is used to describe any seismic event — whether natural or caused by humans — that generates seismic waves. Earthquakes are caused mostly by rupture of geological faults, but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests. An earthquake's point of initial rupture is called its focus or hypocenter. The epicenter is the point at ground level directly above the hypocenter.
Naturally occurring earthquakes
Tectonic earthquakes occur anywhere in the earth where there is sufficient stored elastic strain energy to drive fracture propagation along a fault plane. The sides of a fault move past each other smoothly and aseismically only if there are no irregularities or asperities along the fault surface that increase the frictional resistance. Most fault surfaces do have such asperities and this leads to a form of stick-slip behaviour. Once the fault has locked, continued relative motion between the plates leads to increasing stress and therefore, stored strain energy in the volume around the fault surface. This continues until the stress has risen sufficiently to break through the asperity, suddenly allowing sliding over the locked portion of the fault, releasing the stored energy. This energy is released as a combination of radiated elastic strain seismic waves, frictional heating of the fault surface, and cracking of the rock, thus causing an earthquake. This process of gradual build-up of strain and stress punctuated by occasional sudden earthquake failure is referred to as the elastic-rebound theory. It is estimated that only 10 percent or less of an earthquake's total energy is radiated as seismic energy. Most of the earthquake's energy is used to power the earthquake fracture growth or is converted into heat generated by friction. Therefore, earthquakes lower the Earth's available elastic potential energy and raise its temperature, though these changes are negligible compared to the conductive and convective flow of heat out from the Earth's deep interior.
Earthquake fault types
There are three main types of fault, all of which may cause an earthquake: normal, reverse (thrust) and strike-slip. Normal and reverse faulting are examples of dip-slip, where the displacement along the fault is in the direction of dip and movement on them involves a vertical component. Normal faults occur mainly in areas where the crust is being extended such as a divergent boundary. Reverse faults occur in areas where the crust is being shortened such as at a convergent boundary. Strike-slip faults are steep structures where the two sides of the fault slip horizontally past each other; transform boundaries are a particular type of strike-slip fault. Many earthquakes are caused by movement on faults that have components of both dip-slip and strike-slip; this is known as oblique slip.
Reverse faults, particularly those along convergent plate boundaries, are associated with the most powerful earthquakes, including almost all of those of magnitude 8 or more. Strike-slip faults, particularly continental transforms, can produce major earthquakes up to about magnitude 8. Earthquakes associated with normal faults are generally less than magnitude 7.
This is so because the energy released in an earthquake, and thus its magnitude, is proportional to the area of the fault that ruptures and the stress drop. Therefore, the longer the length and the wider the width of the faulted area, the larger the resulting magnitude. The topmost, brittle part of the Earth's crust, and the cool slabs of the tectonic plates that are descending into the hot mantle, are the only parts of our planet which can store elastic energy and release it in fault ruptures. Rocks hotter than about 300 degrees Celsius flow in response to stress; they do not rupture in earthquakes. The maximum observed lengths of ruptures and mapped faults, which may break in one go, are approximately 1000 km. Examples are the earthquakes in Chile, 1960; Alaska, 1957; and Sumatra, 2004, all in subduction zones. The longest earthquake ruptures on strike-slip faults, like the San Andreas Fault (1857, 1906), the North Anatolian Fault in Turkey (1939) and the Denali Fault in Alaska (2002), are about half to one third as long as the lengths along subducting plate margins, and those along normal faults are even shorter.
The most important parameter controlling the maximum earthquake magnitude on a fault is, however, not the maximum available length but the available width, because the latter varies by a factor of 20. Along converging plate margins, the dip angle of the rupture plane is very shallow, typically about 10 degrees. Thus the width of the plane within the top brittle crust of the Earth can become 50 to 100 km (Japan, 2011; Alaska, 1964), making the most powerful earthquakes possible.
Strike-slip faults tend to be oriented near vertically, resulting in an approximate width of 10 km within the brittle crust, thus earthquakes with magnitudes much larger than 8 are not possible. Maximum magnitudes along many normal faults are even more limited because many of them are located along spreading centers, as in Iceland, where the thickness of the brittle layer is only about 6 km.
In addition, there exists a hierarchy of stress level in the three fault types. Thrust faults are generated by the highest, strike slip by intermediate, and normal faults by the lowest stress levels. This can easily be understood by considering the direction of the greatest principal stress, the direction of the force that 'pushes' the rock mass during the faulting. In the case of normal faults, the rock mass is pushed down in a vertical direction, thus the pushing force (greatest principal stress) equals the weight of the rock mass itself. In the case of thrusting, the rock mass 'escapes' in the direction of the least principal stress, namely upward, lifting the rock mass up, thus the overburden equals the least principal stress. Strike-slip faulting is intermediate between the other two types described above. This difference in stress regime in the three faulting environments can contribute to differences in stress drop during faulting, which contributes to differences in the radiated energy, regardless of fault dimensions.
Earthquakes away from plate boundaries
Where plate boundaries occur within continental lithosphere, deformation is spread out over a much larger area than the plate boundary itself. In the case of the San Andreas fault continental transform, many earthquakes occur away from the plate boundary and are related to strains developed within the broader zone of deformation caused by major irregularities in the fault trace (e.g., the "Big bend" region). The Northridge earthquake was associated with movement on a blind thrust within such a zone. Another example is the strongly oblique convergent plate boundary between the Arabian and Eurasian plates where it runs through the northwestern part of the Zagros mountains. The deformation associated with this plate boundary is partitioned into nearly pure thrust sense movements perpendicular to the boundary over a wide zone to the southwest and nearly pure strike-slip motion along the Main Recent Fault close to the actual plate boundary itself. This is demonstrated by earthquake focal mechanisms.
All tectonic plates have internal stress fields caused by their interactions with neighbouring plates and sedimentary loading or unloading (e.g. deglaciation). These stresses may be sufficient to cause failure along existing fault planes, giving rise to intraplate earthquakes.
Shallow-focus and deep-focus earthquakes
The majority of tectonic earthquakes originate in the Ring of Fire at depths not exceeding tens of kilometers. Earthquakes occurring at a depth of less than 70 km are classified as 'shallow-focus' earthquakes, while those with a focal depth between 70 and 300 km are commonly termed 'mid-focus' or 'intermediate-depth' earthquakes. In subduction zones, where older and colder oceanic crust descends beneath another tectonic plate, deep-focus earthquakes may occur at much greater depths (ranging from 300 up to 700 kilometers). These seismically active areas of subduction are known as Wadati-Benioff zones. Deep-focus earthquakes occur at a depth where the subducted lithosphere should no longer be brittle, due to the high temperature and pressure. A possible mechanism for the generation of deep-focus earthquakes is faulting caused by olivine undergoing a phase transition into a spinel structure.
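The depth classification above maps directly onto a simple lookup; a minimal sketch using the boundary values stated in the text (the function name is ours):

```python
def classify_focus(depth_km: float) -> str:
    """Classify an earthquake's focal depth per the conventional ranges:
    <70 km shallow, 70-300 km intermediate, >300 km deep."""
    if depth_km < 70:
        return "shallow-focus"
    elif depth_km <= 300:
        return "intermediate-depth"
    else:
        return "deep-focus"

print(classify_focus(30))   # shallow-focus
print(classify_focus(150))  # intermediate-depth
print(classify_focus(500))  # deep-focus
```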
Earthquakes and volcanic activity
Earthquakes often occur in volcanic regions, caused there both by tectonic faults and by the movement of magma in volcanoes. Such earthquakes can serve as an early warning of volcanic eruptions, as during the Mount St. Helens eruption of 1980. Earthquake swarms can serve as markers for the location of magma flowing through volcanoes. These swarms can be recorded by seismometers and tiltmeters (devices that measure ground slope) and used to predict imminent eruptions.
A tectonic earthquake begins by an initial rupture at a point on the fault surface, a process known as nucleation. The scale of the nucleation zone is uncertain, with some evidence, such as the rupture dimensions of the smallest earthquakes, suggesting that it is smaller than 100 m while other evidence, such as a slow component revealed by low-frequency spectra of some earthquakes, suggest that it is larger. The possibility that the nucleation involves some sort of preparation process is supported by the observation that about 40% of earthquakes are preceded by foreshocks. Once the rupture has initiated it begins to propagate along the fault surface. The mechanics of this process are poorly understood, partly because it is difficult to recreate the high sliding velocities in a laboratory. Also the effects of strong ground motion make it very difficult to record information close to a nucleation zone.
Rupture propagation is generally modeled using a fracture mechanics approach, likening the rupture to a propagating mixed mode shear crack. The rupture velocity is a function of the fracture energy in the volume around the crack tip, increasing with decreasing fracture energy. The velocity of rupture propagation is orders of magnitude faster than the displacement velocity across the fault. Earthquake ruptures typically propagate at velocities that are in the range 70–90% of the S-wave velocity and this is independent of earthquake size. A small subset of earthquake ruptures appear to have propagated at speeds greater than the S-wave velocity. These supershear earthquakes have all been observed during large strike-slip events. The unusually wide zone of coseismic damage caused by the 2001 Kunlun earthquake has been attributed to the effects of the sonic boom developed in such earthquakes. Some earthquake ruptures travel at unusually low velocities and are referred to as slow earthquakes. A particularly dangerous form of slow earthquake is the tsunami earthquake, observed where the relatively low felt intensities, caused by the slow propagation speed of some great earthquakes, fail to alert the population of the neighbouring coast, as in the 1896 Meiji-Sanriku earthquake.
Most earthquakes form part of a sequence, related to each other in terms of location and time. Most earthquake clusters consist of small tremors that cause little to no damage, but there is a theory that earthquakes can recur in a regular pattern.
An aftershock is an earthquake that occurs after a previous earthquake, the mainshock. An aftershock is in the same region of the main shock but always of a smaller magnitude. If an aftershock is larger than the main shock, the aftershock is redesignated as the main shock and the original main shock is redesignated as a foreshock. Aftershocks are formed as the crust around the displaced fault plane adjusts to the effects of the main shock.
Earthquake swarms are sequences of earthquakes striking in a specific area within a short period of time. They differ from earthquakes followed by a series of aftershocks in that no single earthquake in the sequence is obviously the main shock, and therefore none has a notably higher magnitude than the others. An example of an earthquake swarm is the 2004 activity at Yellowstone National Park. In August 2012, a swarm of earthquakes shook Southern California's Imperial Valley, showing the most recorded activity in the area since the 1970s.
Sometimes a series of earthquakes occur in a sort of earthquake storm, where the earthquakes strike a fault in clusters, each triggered by the shaking or stress redistribution of the previous earthquakes. Similar to aftershocks but on adjacent segments of fault, these storms occur over the course of years, and with some of the later earthquakes as damaging as the early ones. Such a pattern was observed in the sequence of about a dozen earthquakes that struck the North Anatolian Fault in Turkey in the 20th century and has been inferred for older anomalous clusters of large earthquakes in the Middle East.
Size and frequency of occurrence
It is estimated that around 500,000 earthquakes occur each year, detectable with current instrumentation. About 100,000 of these can be felt. Minor earthquakes occur nearly constantly around the world in places like California and Alaska in the U.S., as well as in Mexico, Guatemala, Chile, Peru, Indonesia, Iran, Pakistan, the Azores in Portugal, Turkey, New Zealand, Greece, Italy, India and Japan, but earthquakes can occur almost anywhere, including New York City, London, and Australia. Larger earthquakes occur less frequently, the relationship being exponential; for example, roughly ten times as many earthquakes larger than magnitude 4 occur in a particular time period than earthquakes larger than magnitude 5. In the (low seismicity) United Kingdom, for example, it has been calculated that the average recurrences are: an earthquake of 3.7–4.6 every year, an earthquake of 4.7–5.5 every 10 years, and an earthquake of 5.6 or larger every 100 years. This is an example of the Gutenberg–Richter law.
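The exponential relationship described above is the Gutenberg–Richter law, log10(N) = a − b·M. A minimal sketch; the a and b values here are illustrative, not fitted to any real catalogue, though b is close to 1 in most regions, which yields the "roughly ten times as many" rule:

```python
# Gutenberg-Richter law: log10(N) = a - b*M.
# a and b below are illustrative placeholder values.
a, b = 6.7, 1.0

def annual_count_above(magnitude: float) -> float:
    """Expected yearly number of earthquakes of at least this magnitude."""
    return 10 ** (a - b * magnitude)

# With b = 1, each unit drop in magnitude means ~10x more earthquakes.
ratio = annual_count_above(4.0) / annual_count_above(5.0)
print(round(ratio, 6))  # 10.0
```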
The number of seismic stations has increased from about 350 in 1931 to many thousands today. As a result, many more earthquakes are reported than in the past, but this is because of the vast improvement in instrumentation, rather than an increase in the number of earthquakes. The United States Geological Survey estimates that, since 1900, there have been an average of 18 major earthquakes (magnitude 7.0–7.9) and one great earthquake (magnitude 8.0 or greater) per year, and that this average has been relatively stable. In recent years, the number of major earthquakes per year has decreased, though this is probably a statistical fluctuation rather than a systematic trend. More detailed statistics on the size and frequency of earthquakes are available from the United States Geological Survey (USGS). A recent increase in the number of major earthquakes has been noted, which could be explained by a cyclical pattern of periods of intense tectonic activity interspersed with longer periods of low-intensity activity. However, accurate recordings of earthquakes only began in the early 1900s, so it is too early to categorically state that this is the case.
Most of the world's earthquakes (90%, and 81% of the largest) take place in the 40,000 km long, horseshoe-shaped zone called the circum-Pacific seismic belt, known as the Pacific Ring of Fire, which for the most part bounds the Pacific Plate. Massive earthquakes tend to occur along other plate boundaries, too, such as along the Himalayan Mountains.
With the rapid growth of mega-cities such as Mexico City, Tokyo and Tehran, in areas of high seismic risk, some seismologists are warning that a single quake may claim the lives of up to 3 million people.
While most earthquakes are caused by movement of the Earth's tectonic plates, human activity can also produce earthquakes. Four main activities contribute to this phenomenon: impounding large amounts of water behind a dam (and possibly constructing extremely heavy buildings), drilling and injecting liquid into wells, coal mining, and oil drilling. Perhaps the best known example is the 2008 Sichuan earthquake in China's Sichuan Province in May; this tremor resulted in 69,227 fatalities and is the 19th deadliest earthquake of all time. The Zipingpu Dam is believed to have fluctuated the pressure on the fault 1,650 feet (503 m) away; this pressure probably increased the power of the earthquake and accelerated the rate of movement of the fault. The greatest earthquake in Australia's history is also claimed to have been induced by human activity, through coal mining. The city of Newcastle was built over a large sector of coal mining areas. The earthquake has been reported to have been spawned by a fault that reactivated due to the millions of tonnes of rock removed in the mining process.
Measuring and locating earthquakes
Earthquakes can be recorded by seismometers up to great distances, because seismic waves travel through the whole of the Earth's interior. The absolute magnitude of a quake is conventionally reported using the moment magnitude scale (formerly the Richter scale, with magnitude 7 causing serious damage over large areas), whereas the felt intensity is reported using the modified Mercalli intensity scale (intensity II–XII).
Every tremor produces different types of seismic waves, which travel through rock with different velocities:
- Longitudinal P-waves (shock- or pressure waves)
- Transverse S-waves (both body waves)
- Surface waves — (Rayleigh and Love waves)
Propagation velocity of the seismic waves ranges from approx. 3 km/s up to 13 km/s, depending on the density and elasticity of the medium. In the Earth's interior the shock- or P waves travel much faster than the S waves (approx. relation 1.7 : 1). The differences in travel time from the epicentre to the observatory are a measure of the distance and can be used to image both sources of quakes and structures within the Earth. Also the depth of the hypocenter can be computed roughly.
In solid rock P-waves travel at about 6 to 7 km per second; the velocity increases within the deep mantle to ~13 km/s. The velocity of S-waves ranges from 2–3 km/s in light sediments and 4–5 km/s in the Earth's crust up to 7 km/s in the deep mantle. As a consequence, the first waves of a distant earthquake arrive at an observatory via the Earth's mantle.
On average, the distance in kilometers to the earthquake is the number of seconds between the P- and S-wave arrivals multiplied by 8. Slight deviations are caused by inhomogeneities of subsurface structure. By such analyses of seismograms the Earth's core was located in 1913 by Beno Gutenberg.
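The "times 8" rule of thumb falls straight out of the velocity figures quoted above; a minimal sketch using typical crustal values:

```python
# Rough epicentral distance from the S-minus-P arrival delay.
# With typical crustal velocities Vp ~ 6 km/s and Vs ~ 3.5 km/s,
# one second of S-P delay corresponds to 1/(1/Vs - 1/Vp) km.
vp, vs = 6.0, 3.5
factor = 1.0 / (1.0 / vs - 1.0 / vp)   # = 8.4 km per second of delay

def distance_km(sp_delay_seconds: float) -> float:
    """Approximate epicentral distance from the S-P delay in seconds."""
    return sp_delay_seconds * factor

print(round(distance_km(10), 1))  # 84.0 km for a 10-second delay
```

The factor of about 8.4 is where the "multiply the seconds by 8" rule comes from; real velocities vary with local geology, so the result is only a first estimate.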
Earthquakes are not only categorized by their magnitude but also by the place where they occur. The world is divided into 754 Flinn-Engdahl regions (F-E regions), which are based on political and geographical boundaries as well as seismic activity. More active zones are divided into smaller F-E regions whereas less active zones belong to larger F-E regions.
Standard reporting of earthquakes includes its magnitude, date and time of occurrence, geographic coordinates of its epicenter, depth of the epicenter, geographical region, distances to population centers, location uncertainty, a number of parameters that are included in USGS earthquake reports (number of stations reporting, number of observations, etc.), and a unique event ID.
Effects of earthquakes
The effects of earthquakes include, but are not limited to, the following:
Shaking and ground rupture
Shaking and ground rupture are the main effects created by earthquakes, principally resulting in more or less severe damage to buildings and other rigid structures. The severity of the local effects depends on the complex combination of the earthquake magnitude, the distance from the epicenter, and the local geological and geomorphological conditions, which may amplify or reduce wave propagation. The ground-shaking is measured by ground acceleration.
Specific local geological, geomorphological, and geostructural features can induce high levels of shaking on the ground surface even from low-intensity earthquakes. This effect is called site or local amplification. It is principally due to the transfer of the seismic motion from hard deep soils to soft superficial soils and to effects of seismic energy focalization owing to typical geometrical setting of the deposits.
Ground rupture is a visible breaking and displacement of the Earth's surface along the trace of the fault, which may be of the order of several metres in the case of major earthquakes. Ground rupture is a major risk for large engineering structures such as dams, bridges and nuclear power stations and requires careful mapping of existing faults to identify any which are likely to break the ground surface within the life of the structure.
Landslides and avalanches
Earthquakes, along with severe storms, volcanic activity, coastal wave attack, and wildfires, can produce slope instability leading to landslides, a major geological hazard. Landslide danger may persist while emergency personnel are attempting rescue.
Earthquakes can cause fires by damaging electrical power or gas lines. In the event of water mains rupturing and a loss of pressure, it may also become difficult to stop the spread of a fire once it has started. For example, more deaths in the 1906 San Francisco earthquake were caused by fire than by the earthquake itself.
Soil liquefaction occurs when, because of the shaking, water-saturated granular material (such as sand) temporarily loses its strength and transforms from a solid to a liquid. Soil liquefaction may cause rigid structures, like buildings and bridges, to tilt or sink into the liquefied deposits. For example, in the 1964 Alaska earthquake, soil liquefaction caused many buildings to sink into the ground, eventually collapsing upon themselves.
Tsunamis are long-wavelength, long-period sea waves produced by the sudden or abrupt movement of large volumes of water. In the open ocean the distance between wave crests can surpass 100 kilometers (62 mi), and the wave periods can vary from five minutes to one hour. Such tsunamis travel 600-800 kilometers per hour (373–497 miles per hour), depending on water depth. Large waves produced by an earthquake or a submarine landslide can overrun nearby coastal areas in a matter of minutes. Tsunamis can also travel thousands of kilometers across open ocean and wreak destruction on far shores hours after the earthquake that generated them.
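The dependence of tsunami speed on water depth follows from shallow-water wave theory: because a tsunami's wavelength far exceeds even the deep ocean's depth, its speed is approximately sqrt(g·d). A minimal sketch of this standard approximation:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed_kmh(depth_m: float) -> float:
    """Shallow-water wave speed c = sqrt(g * d), converted to km/h."""
    return math.sqrt(G * depth_m) * 3.6  # m/s -> km/h

# In a 4000 m deep ocean the speed is ~713 km/h, consistent with the
# 600-800 km/h range quoted for deep-water tsunamis.
print(round(tsunami_speed_kmh(4000)))
```

As the wave enters shallower coastal water it slows sharply, which compresses its energy and drives up the wave height.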
Ordinarily, subduction earthquakes under magnitude 7.5 on the Richter scale do not cause tsunamis, although some instances of this have been recorded. Most destructive tsunamis are caused by earthquakes of magnitude 7.5 or more.
A flood is an overflow of any amount of water that reaches land. Floods occur usually when the volume of water within a body of water, such as a river or lake, exceeds the total capacity of the formation, and as a result some of the water flows or sits outside of the normal perimeter of the body. However, floods may be secondary effects of earthquakes if dams are damaged. Earthquakes may also cause landslides that dam rivers; when such dams collapse, they cause floods.
The terrain below the Sarez Lake in Tajikistan is in danger of catastrophic flood if the landslide dam formed by the earthquake, known as the Usoi Dam, were to fail during a future earthquake. Impact projections suggest the flood could affect roughly 5 million people.
An earthquake may cause injury and loss of life, road and bridge damage, general property damage (which may or may not be covered by earthquake insurance), and collapse or destabilization (potentially leading to future collapse) of buildings. The aftermath may bring disease, lack of basic necessities, and higher insurance premiums.
One of the most devastating earthquakes in recorded history occurred on 23 January 1556 in the Shaanxi province, China, killing more than 830,000 people (see 1556 Shaanxi earthquake). Most of the population in the area at the time lived in yaodongs, artificial caves in loess cliffs, many of which collapsed during the catastrophe with great loss of life. The 1976 Tangshan earthquake, with a death toll estimated at between 240,000 and 655,000, is believed to be the deadliest earthquake of the 20th century.
The 1960 Chilean Earthquake is the largest earthquake that has been measured on a seismograph, reaching 9.5 magnitude on 22 May 1960. Its epicenter was near Cañete, Chile. The energy released was approximately twice that of the next most powerful earthquake, the Good Friday Earthquake, which was centered in Prince William Sound, Alaska. The ten largest recorded earthquakes have all been megathrust earthquakes; however, of these ten, only the 2004 Indian Ocean earthquake is simultaneously one of the deadliest earthquakes in history.
Earthquakes that caused the greatest loss of life, while powerful, were deadly because of their proximity to either heavily populated areas or the ocean, where earthquakes often create tsunamis that can devastate communities thousands of kilometers away. Regions most at risk for great loss of life include those where earthquakes are relatively rare but powerful, and poor regions with lax, unenforced, or nonexistent seismic building codes.
Many methods have been developed for predicting the time and place in which earthquakes will occur. Despite considerable research efforts by seismologists, scientifically reproducible predictions cannot yet be made to a specific day or month. However, for well-understood faults the probability that a segment may rupture during the next few decades can be estimated.
Earthquake warning systems have been developed that can provide regional notification of an earthquake in progress, but before the ground surface has begun to move, potentially allowing people within the system's range to seek shelter before the earthquake's impact is felt.
The objective of earthquake engineering is to foresee the impact of earthquakes on buildings and other structures and to design such structures to minimize the risk of damage. Existing structures can be modified by seismic retrofitting to improve their resistance to earthquakes. Earthquake insurance can provide building owners with financial protection against losses resulting from earthquakes.
Emergency management strategies can be employed by a government or organization to mitigate risks and prepare for consequences.
Ways to Survive an Earthquake
- Be prepared before, during and after an earthquake. Earthquakes do not last long, generally a few seconds to a minute; the 1989 San Francisco earthquake lasted only 15 seconds.
- Securing water heaters, major appliances and tall, heavy furniture to prevent them from toppling are prudent steps. So, too, are storing hazardous or flammable liquids, heavy objects and breakables on low shelves or in secure cabinets.
- If you're indoors, stay there. Get under, and hold onto, a desk or table, or stand against an interior wall. Stay clear of exterior walls, glass, heavy furniture, fireplaces and appliances. The kitchen is a particularly dangerous spot. If you're in an office building, stay away from windows and outside walls and do not use the elevator. Stay low and cover your head and neck with your hands and arms. Bracing yourself against a wall or heavy furniture usually works when weaker earthquakes strike.
- Cover your head and neck. Use your hands and arms. If you have any respiratory disease, make sure that you cover your head with a t-shirt or bandana, until all the debris and dust has settled. Inhaled dirty air is not good for your lungs.
- DO NOT stand in a doorway: an enduring earthquake image of California is a collapsed adobe home with the door frame as the only standing part, which gave rise to the belief that a doorway is the safest place to be during an earthquake. That is true only in an old, unreinforced adobe house or some older wood-frame houses. In modern houses, doorways are no stronger than any other part of the house, and a doorway does not protect you from the most likely source of injury: falling or flying objects. You also may not be able to brace yourself in the door during strong shaking, and you are more likely to be hurt by the door swinging wildly. You are safer under a table.
- Inspect your house for anything that might be in a dangerous condition. Glass fragments, the smell of gas, or damaged electrical appliances are examples of hazards.
- Do not move immediately. If it is safe to do so, stay where you are for a minute or two, until you are sure the shaking has stopped, then evacuate the building slowly and carefully.
- Practice the right thing to do; it could save your life. You will be more likely to react quickly when shaking begins if you have regularly practiced how to protect yourself. Drop, Cover, and Hold drills are a good way to do this.
- If you're outside, get into the open. Stay clear of buildings, power lines or anything else that could fall on you. Broken glass, even a small piece, can injure your feet, which is why you should wear sturdy shoes at such times.
- Be aware that items may fall out of cupboards or closets when the door is opened, and also that chimneys can be weakened and fall with a touch. Check for cracks and damage to the roof and foundation of your home.
- Things you'll need: a blanket, sturdy shoes, a dust mask to help filter contaminated air, plastic sheeting and duct tape to shelter in place, and basic hygiene supplies such as soap, feminine supplies and personal hygiene items.
From the lifetime of the Greek philosopher Anaxagoras in the 5th century BCE to the 14th century CE, earthquakes were usually attributed to "air (vapors) in the cavities of the Earth." Thales of Miletus, who lived from 625–547 BCE, was the only documented person who believed that earthquakes were caused by tension between the earth and water. Other theories existed, including the Greek philosopher Anaximenes's (585–526 BCE) belief that episodes of dryness and wetness caused seismic activity. The Greek philosopher Democritus (460–371 BCE) blamed water in general for earthquakes. Pliny the Elder called earthquakes "underground thunderstorms."
Earthquakes in culture
Mythology and religion
In Norse mythology, earthquakes were explained as the violent struggling of the god Loki. When Loki, god of mischief and strife, murdered Baldr, god of beauty and light, he was punished by being bound in a cave with a poisonous serpent placed above his head dripping venom. Loki's wife Sigyn stood by him with a bowl to catch the poison, but whenever she had to empty the bowl the poison dripped on Loki's face, forcing him to jerk his head away and thrash against his bonds, which caused the earth to tremble.
In Greek mythology, Poseidon was the cause and god of earthquakes. When he was in a bad mood, he struck the ground with a trident, causing earthquakes and other calamities. He also used earthquakes to punish and inflict fear upon people as revenge.
In Japanese mythology, Namazu (鯰) is a giant catfish who causes earthquakes. Namazu lives in the mud beneath the earth, and is guarded by the god Kashima who restrains the fish with a stone. When Kashima lets his guard fall, Namazu thrashes about, causing violent earthquakes.
In modern popular culture, the portrayal of earthquakes is shaped by the memory of great cities laid waste, such as Kobe in 1995 or San Francisco in 1906. Fictional earthquakes tend to strike suddenly and without warning. For this reason, stories about earthquakes generally begin with the disaster and focus on its immediate aftermath, as in Short Walk to Daylight (1972), The Ragged Edge (1968) or Aftershock: Earthquake in New York (1998). A notable example is Heinrich von Kleist's classic novella, The Earthquake in Chile, which describes the destruction of Santiago in 1647. Haruki Murakami's short fiction collection after the quake depicts the consequences of the Kobe earthquake of 1995.
The most popular single earthquake in fiction is the hypothetical "Big One" expected of California's San Andreas Fault someday, as depicted in the novels Richter 10 (1996) and Goodbye California (1977) among other works. Jacob M. Appel's widely anthologized short story, A Comparative Seismology, features a con artist who convinces an elderly woman that an apocalyptic earthquake is imminent.
Contemporary depictions of earthquakes in film vary in how they reflect human psychological reactions to the trauma that can be caused to directly afflicted families and their loved ones. Disaster mental health response research emphasizes the need to be aware of the different roles of loss of family and key community members, loss of home and familiar surroundings, and loss of essential supplies and services to maintain survival. Particularly for children, the clear availability of caregiving adults who are able to protect, nourish, and clothe them in the aftermath of the earthquake, and to help them make sense of what has befallen them, has been shown to be even more important to their emotional and physical health than the simple giving of provisions. As was observed after other disasters involving destruction and loss of life and their media depictions, such as the 2001 World Trade Center attacks or Hurricane Katrina (and as has recently been observed after the 2010 Haiti earthquake), it is also important not to pathologize the reactions to loss and displacement or disruption of governmental administration and services, but rather to validate these reactions, to support constructive problem-solving, and to reflect on how one might improve the conditions of those affected.
Floating-point literals
Floating point literals provide values that can be used where you need a float or double instance. There are three kinds of floating point literal:
- Simple decimal forms
- Scaled decimal forms
- Hexadecimal forms
(The JLS syntax rules combine the two decimal forms into a single form. We treat them separately for ease of explanation.)
There are distinct literal types for float and double literals, expressed using suffixes. The various forms use letters to express different things. These letters are case insensitive.
Simple decimal forms
The simplest form of floating point literal consists of one or more decimal digits and a decimal point (.), with an optional suffix (F or D). The optional suffix allows you to specify that the literal is a float (F) or double (D) value. The default (when no suffix is specified) is double.
0.0      // this denotes zero
.0       // this also denotes zero
0.       // this also denotes zero
3.14159  // this denotes Pi, accurate to (approximately!) 5 decimal places.
1.0F     // a `float` literal
1.0D     // a `double` literal. (`double` is the default if no suffix is given)
In fact, decimal digits followed by a suffix is also a floating point literal.
1F // means the same thing as 1.0F
The meaning of a decimal literal is the IEEE floating point number that is closest to the infinite precision mathematical real number denoted by the decimal floating point form. This conceptual value is converted to the IEEE binary floating point representation using round-to-nearest. (The precise semantics of decimal conversion are specified in the javadocs for Double.valueOf(String) and Float.valueOf(String), bearing in mind that there are differences in the number syntaxes.)
Scaled decimal forms
Scaled decimal forms consist of a simple decimal form with an exponent part introduced by an E or e and followed by a signed integer. The exponent part is shorthand for multiplying the decimal form by a power of ten, as shown in the examples below. There is also an optional suffix to distinguish float and double literals. Here are some examples:
1.0E1    // this means 1.0 x 10^1 ... or 10.0 (double)
1E-1D    // this means 1.0 x 10^(-1) ... or 0.1 (double)
1.0e10f  // this means 1.0 x 10^(10) ... or 10000000000.0 (float)
The size of a literal is limited by the representation (float or double). It is a compilation error if the scale factor results in a value that is too large or too small.
Hexadecimal forms
Starting with Java 6, it is possible to express floating point literals in hexadecimal. The hexadecimal form has an analogous syntax to the simple and scaled decimal forms, with the following differences:
- Every hexadecimal floating point literal starts with a zero (0) and then an x or X.
- The digits of the number (but not the exponent part!) also include the hexadecimal digits a through f and their uppercase equivalents.
- The exponent is mandatory, and is introduced by the letter p (or P) instead of an e or E. The exponent represents a scaling factor that is a power of 2 instead of a power of 10.
Here are some examples:
0x0.0p0f   // this is zero expressed in hexadecimal form (`float`)
0xff.0p19  // this is 255.0 x 2^19 (`double`)
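As an aside (not part of the Java specification), the same digits-times-power-of-two interpretation can be checked with Python's built-in float.fromhex, which accepts an equivalent hexadecimal notation:

```python
# Hexadecimal floating point: hex digits scaled by a power of two after "p".
# Python's float.fromhex interprets these the same way as the Java literals above.
zero = float.fromhex('0x0.0p0')   # 0 x 2^0
val = float.fromhex('0xff.0p19')  # 255.0 x 2^19

print(zero)  # 0.0
print(val)   # 133693440.0
```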
Advice: since hexadecimal floating-point forms are unfamiliar to most Java programmers, it is advisable to use them sparingly.
Starting with Java 7, underscores are permitted within the digit strings in all three forms of floating point literal. This applies to the “exponent” parts as well. See Using underscores to improve readability.
It is a compilation error if a floating point literal denotes a number that is too large or too small to represent in the selected representation; i.e. if the number would overflow to +INF or -INF, or underflow to 0.0. However, it is legal for a literal to represent a non-zero denormalized number.
The floating point literal syntax does not provide literal representations for IEEE 754 special values such as the INF and NaN values. If you need to express them in source code, the recommended way is to use the constants defined by the Float and Double wrapper classes; e.g. Double.NaN, Double.POSITIVE_INFINITY, and Double.NEGATIVE_INFINITY.
An Artificial Neural Network, or Neural Network, is modeled after the human brain. A human has a mind to think and to perform tasks in a particular condition, but how can a machine do that? For this purpose the artificial brain, called a neural network, was designed. Just as the human brain has neurons for passing information, a neural network has nodes to perform that task. Nodes are mathematical functions.
A neural network is based on the structure and functions of biological neural networks. A neural network itself changes, or learns, based on input and output. The information that flows through the network affects the structure of the artificial neural network, because the network learns and improves from it.
Dr. Robert Hecht-Nielsen explains the neural network as:
“The computing system made up of several simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”
Components of an Artificial Neural Network
Neurons are similar to biological neurons. Artificial neurons are nothing but activation functions, which have a "switch on" characteristic when performing a classification task.
We can say that when the input is higher than a certain value, the output should change state, e.g., from 0 to 1 or from -1 to 1. The sigmoid function is a commonly used activation function in artificial neural networks:
F(z) = 1 / (1 + exp(−z))
Biological neurons are connected in hierarchical networks, with the output of some neurons being the input to others. These networks are represented as connected layers of nodes. Each node carries multiple weighted inputs, applies its activation function to the summation of these inputs, and generates an output.
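A single artificial neuron as described above (a weighted sum of the inputs plus a bias, passed through the sigmoid activation) can be sketched in a few lines of Python; the weights below are made up for illustration:

```python
import math

def sigmoid(z):
    """Sigmoid activation: F(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through the activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

print(sigmoid(0.0))                           # 0.5 -- the "switch" is exactly half on
print(neuron([1.0, 2.0], [0.5, -0.25], 0.0))  # weighted sum is 0.0, so output is 0.5
```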
In a neural network, we predict the output (y) based on the given input (x). We create a model, e.g. y = mx + c, which helps us predict the output. When we train the model, it finds the appropriate values of the constants m and c itself.
The constant c is the bias. Bias helps a model fit best for the given data; we can say bias gives the model the freedom to perform at its best.
Algorithms are required in a neural network. Biological neurons have self-understanding and working capability, but how will an artificial neuron work the same way? For this, it is necessary to train our artificial neural network, and many algorithms are used for this purpose, each with a different way of working.
Five algorithms commonly used in training an ANN are:
- Gradient Descent
- Newton’s Method
- Conjugate Gradient
- Quasi Newton’s
- Levenberg Marquardt
Types of Artificial Neural Network
A neural network works similarly to the human nervous system. There are several types of neural network, whose implementations are based on the set of parameters and mathematical operations required for determining the output.
1. Feedforward Neural Network (Artificial Neuron)
An FNN is the purest form of ANN, in which input and data travel in only one direction. Data flows only in a forward direction; that's why it is known as the Feedforward Neural Network. The data passes through the input nodes and exits from the output nodes. The nodes are not connected cyclically, and an FNN doesn't need to have a hidden layer: it doesn't require multiple layers and may have a single layer.
It has a front-propagating wave that is achieved by using a classifying activation function. Unlike other types of neural network, a plain FNN does not use backpropagation. In an FNN, the sum of the products of the inputs and weights is calculated and then fed to the output. Technologies such as face recognition and computer vision use FNNs.
2. Radial Basis Function Neural Network
An RBFNN finds the distance of a point from a center and considers it to make the function work smoothly. There are two layers in the RBF neural network. In the inner layer, the features are combined with the radial basis function, and the resulting outputs are used in the next step of the computation. Measures other than the Euclidean distance can also be used.
Radial Basis Function
- We define a receptor t.
- Confronted maps are drawn around the receptor.
- For RBF, Gaussian functions are generally used, so we can define the radial distance r = ||x − t||.
Radial function: φ(r) = exp(−r² / 2σ²), where σ > 0
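The Gaussian radial basis function defined above can be written directly in Python; the receptor t and width sigma below are illustrative values:

```python
import math

def gaussian_rbf(x, t, sigma=1.0):
    """phi(r) = exp(-r^2 / (2 sigma^2)), where r is the Euclidean distance ||x - t||."""
    r = math.sqrt(sum((xi - ti) ** 2 for xi, ti in zip(x, t)))
    return math.exp(-r ** 2 / (2.0 * sigma ** 2))

print(gaussian_rbf([0.0, 0.0], [0.0, 0.0]))  # 1.0 -- the response peaks at the receptor
```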
Neural networks of this kind are used in power restoration systems. In the present era, power systems have increased in size and complexity, and both factors increase the risk of major power outages. Power needs to be restored as quickly and reliably as possible after a blackout.
3. Multilayer Perceptron
A Multilayer Perceptron has three or more layers. Data that cannot be separated linearly is classified with the help of this network. This network is fully connected, meaning every single node is connected to all the nodes in the next layer. A nonlinear activation function is used in a Multilayer Perceptron. Its input and output layer nodes are connected as a directed graph. It is a deep learning method, so it uses backpropagation for training the network. It is extensively applied in speech recognition and machine translation technologies.
4. Convolution Neural Network
Convolutional Neural Networks play a vital role in image classification and image recognition; face recognition, object detection, etc. are some areas where CNNs are widely used. A CNN is similar to an FNN in that learnable weights and biases are available in its neurons.
A CNN takes an image as input, which is classified and processed under a certain category such as dog, cat, lion, tiger, etc. The computer sees an image as an array of pixels that depends on the resolution of the picture: based on the image resolution, it will see h * w * d, where h = height, w = width, and d = depth (the number of channels). For example, an RGB image might be a 6 * 6 * 3 array, while a grayscale image might be a 4 * 4 * 1 array.
In a CNN, each input image passes through a sequence of convolution layers along with pooling layers, fully connected layers, and filters (also known as kernels). It can apply the Soft-max function to classify an object with probabilistic values between 0 and 1.
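The Soft-max step mentioned above turns the final layer's raw class scores into probabilities between 0 and 1 that sum to one; a minimal sketch:

```python
import math

def softmax(scores):
    """Convert raw class scores into probabilities that sum to 1."""
    shifted = [s - max(scores) for s in scores]  # shift for numerical stability
    exps = [math.exp(s) for s in shifted]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # e.g. scores for "dog", "cat", "lion"
print(probs)  # the highest score receives the highest probability
```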
5. Recurrent Neural Network
Recurrent Neural Network is based on prediction. In this neural network, the output of a particular layer is saved and fed back to the input. It will help to predict the outcome of the layer. In Recurrent Neural Network, the first layer is formed in the same way as FNN’s layer, and in the subsequent layer, the recurrent neural network process begins.
In other neural networks, all the inputs and outputs are independent of each other, but in some cases, such as predicting the next word of a sentence, the output depends on the previous words, so the network must remember them. RNN is popular for its main and most important feature, the hidden state, which remembers information about a sequence.
RNN has a memory to store the result after calculation. RNN uses the same parameters on each input to perform the same task on all the hidden layers or inputs to produce the output. Unlike other neural networks, RNN parameter complexity is less.
6. Modular Neural Network
In a Modular Neural Network, several different networks function independently. In an MNN the task is divided into sub-tasks performed by several networks; during the computational process, the networks do not communicate directly with each other.
All the networks work independently towards achieving the output. Combined networks are more powerful than a single flat, unrestricted network. An intermediary takes the output of each network and processes them to produce the final output.
Training of a Neural Network
Training Strategy is the process which is used to carry out the learning process. For obtaining minimum possible loss, we apply a training strategy to the neural network. It is done when we search for a set of parameters that fit the neural network to the dataset.
There are two different concepts in training of a neural network.
- Loss Index
- Optimization Algorithms
Loss Index is one of the important concepts which plays a vital role in the training of a neural network. Loss index defines a task which is required to do by a neural network. A neural network is required to learn the measure of the quality of the representation. The loss index provides this measure.
We have to choose two different terms, i.e., the error term and the regularization term, to set up a loss index.
In the loss expression, the error is the most important term; it measures how well a neural network fits the training instances in the dataset. There are several error measures used in neural networks.
- Mean squared error
Its main task is to calculate the average squared error between targets in the dataset and the output from the neural network.
- Normalized Squared Error
The NSE's main task is to divide the squared error between the targets in the dataset and the output from the neural network by a normalization coefficient. If the NSE has a value of unity, the neural network is predicting the data "on the mean", while a value of zero indicates a perfect prediction of the data.
- Weighted Squared Error
In binary classification application, WSE is used with unbalance targets. Its main task is to give different weight to positive and negative instances when the numbers of positives and negatives are very different.
- Minkowski Error
Over the training instances, it computes the sum of the differences between the targets and the outputs, with each difference elevated to an exponent that varies between 1 and 2. This exponent is called the Minkowski parameter, and its default value is 1.5.
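The mean squared error and Minkowski error described above can be sketched as plain Python functions (illustrative; real libraries vectorize these computations):

```python
def mean_squared_error(targets, outputs):
    """Average squared difference between the targets and the network outputs."""
    n = len(targets)
    return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / n

def minkowski_error(targets, outputs, p=1.5):
    """Sum of |target - output| raised to the Minkowski parameter p (1 <= p <= 2)."""
    return sum(abs(t - o) ** p for t, o in zip(targets, outputs))

print(mean_squared_error([1.0, 2.0], [1.0, 4.0]))  # (0 + 4) / 2 = 2.0
```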
- Regularization Term
A solution is said to be regular when small changes in the input variables lead to small changes in the output.
- L1 Regularization
This method contains the sum of the absolute values of all the parameters in the neural network.
- L2 Regularization
This method contains the squared sum of all the parameters in the neural network.
The loss function of the network depends on the adaptive parameters. These parameters are grouped into a single n-dimensional weight vector w.
Let's see a diagram that describes the loss function f(w).
In the above diagram, the minimum of the loss function occurs at the point w*. At any point A we can evaluate the first and second derivatives of the loss function. The elements of the first derivatives, grouped in the gradient vector, can be written as
∇f(w) = ∂f/∂wᵢ, for i = 0, 1, …, n.
Similarly, the elements of the second derivatives, grouped in the Hessian matrix, can be written as
Hᵢⱼ f(w) = ∂²f/∂wᵢ∂wⱼ, for i, j = 0, 1, …, n.
The optimization algorithm is the procedure which is used to carry out the learning process in a neural network. It is also known as the optimizer. There are different types of the optimization algorithm. Each algorithm has different characteristics and performance in terms of speed, precision, and memory requirements.
The gradient descent algorithm, also known as the steepest descent algorithm, is the simplest training algorithm: it requires only information from the gradient vector, making it a first-order method. It iterates as
w(i+1) = w(i) − η(i)·g(i), for i = 0, 1, …
where g(i) is the gradient vector and η(i) is the training rate.
Activity diagram of the training process is given below.
GD algorithm is recommended when we have a big neural network along with thousands of parameters. The reason behind this is GD stores the gradient vector, not a Hessian matrix.
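A one-parameter sketch of the gradient descent update w <- w - rate * gradient, applied to the illustrative loss f(w) = (w - 3)^2 whose gradient is 2(w - 3):

```python
def gradient_descent(grad, w0, rate=0.1, steps=100):
    """Repeatedly step against the gradient: w <- w - rate * grad(w)."""
    w = w0
    for _ in range(steps):
        w = w - rate * grad(w)
    return w

# f(w) = (w - 3)^2 has gradient 2 * (w - 3) and its minimum at w = 3.
w_min = gradient_descent(lambda w: 2.0 * (w - 3.0), w0=0.0)
print(round(w_min, 6))  # 3.0
```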
Newton’s method is a second-order algorithm. It makes use of the Hessian matrix. Its main task is to find better training directions by using the second derivatives of the loss function.
Newton’s method iterates as follows.
w(i+1) = w(i) − H(i)⁻¹·g(i), for i = 0, 1, …
Here, H(i)-1.g(i) is known as Newton’s step. The change for parameters may move toward a maximum rather than a minimum. Below is the diagram of the training of a neural network with Newton’s method. The improvement of the parameter is made by obtaining the training direction and a suitable training rate.
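For a one-dimensional loss, the Newton step above reduces to dividing the gradient by the second derivative; applied to the illustrative quadratic loss f(w) = (w - 3)^2, a single step reaches the minimum:

```python
def newton_step(w, grad, hess):
    """One Newton iteration: w <- w - H(w)^-1 * g(w) (scalar case)."""
    return w - grad(w) / hess(w)

# f(w) = (w - 3)^2: gradient 2(w - 3), second derivative (Hessian) 2.
w = newton_step(0.0, lambda w: 2.0 * (w - 3.0), lambda w: 2.0)
print(w)  # 3.0 -- a quadratic is minimized in a single Newton step
```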
Conjugate gradient works in between gradient descent and Newton’s method. Conjugate gradient avoids the information requirements associated with evaluation, inversion of the Hessian matrix, and storage as required by Newton’s method.
In the CG algorithm, searching is done in a conjugate direction, which gives faster convergence rather than gradient descent direction. The training is done in a conjugate direction in concern with the Hessian matrix. The improvement of the parameter is made by computing the conjugate training direction and then suitable training rate in that direction.
Applications of Newton’s method are very expensive in terms of computation. To evaluate the Hessian matrix, it requires many operations to do. For resolving this drawback, Quasi-Newton Method was developed.
It is also known as a variable matrix method. At each iteration of an algorithm, it builds up an approximation to the inverse hessian rather than calculating the hessian directly. Information on the first derivative of the loss function is used to compute approximation.
The improvement of the parameter is made by obtaining a Quasi-Newton training direction and then finds a satisfactory training rate.
Levenberg-Marquardt is also known as the damped least-squares method. This algorithm is designed to work specifically with loss functions that take the form of a sum of squared errors. It does not compute the Hessian matrix; instead it works with the Jacobian matrix and the gradient vector.
In Levenberg-Marquardt, the first step is to find the loss, the gradient, and the Hessian approximation, and then the damping parameter is adjusted.
Advantages and Disadvantages of Artificial Neural Network
Advantages of ANN
- It stores the information on the entire network rather than the database.
- After training, an ANN may produce a result even from incomplete information.
- Even if one or more cell of ANN is corrupted, the output still gets generated.
- ANN has distributed memory that helps to generate the desired output.
- ANN can make a machine learnable.
- ANN has a parallel processing capability, which means it can perform more than one task at the same time.
Disadvantages of ANN
- It requires processors with parallel processing power, in accordance with its structure.
- Unexplained behavior of the network is the main problem of ANN: it gives no clue as to why or how it produced a given solution.
- There are no specific rules for determining the structure of an ANN.
- There is no way to know in advance how long training will take; the network is simply trained down to an acceptable error value.
- Problems must be translated into numerical values before they can be shown to the network, which can be difficult.
In this tutorial, we covered only the basic concepts of neural networks. There are many more techniques and algorithms beyond backpropagation. Neural networks work well in image processing and classification.
Currently, very deep research is being done on neural networks. Once you have sufficient knowledge of the basic concepts and algorithms, you may want to explore reinforcement learning, deep learning, and related topics.
In physics, angular momentum (rarely, moment of momentum or rotational momentum) is the rotational equivalent of linear momentum. It is an important quantity in physics because it is a conserved quantity—the total angular momentum of a closed system remains constant.
This gyroscope remains upright while spinning due to the conservation of its angular momentum.
In SI base units: kg⋅m²⋅s⁻¹. Common expressions: L = Iω = r × p.
In three dimensions, the angular momentum for a point particle is a pseudovector r × p, the cross product of the particle's position vector r (relative to some origin) and its momentum vector; the latter is p = mv in Newtonian mechanics. This definition can be applied to each point in continua like solids or fluids, or physical fields. Unlike momentum, angular momentum does depend on where the origin is chosen, since the particle's position is measured from it.
Just like for angular velocity, there are two special types of angular momentum: the spin angular momentum and the orbital angular momentum. The spin angular momentum of an object is defined as the angular momentum about its centre of mass coordinate. The orbital angular momentum of an object about a chosen origin is defined as the angular momentum of the centre of mass about the origin. The total angular momentum of an object is the sum of the spin and orbital angular momenta. The orbital angular momentum vector of a particle is always parallel and directly proportional to the orbital angular velocity vector ω of the particle, where the constant of proportionality depends on both the mass of the particle and its distance from origin. However, the spin angular momentum of the object is proportional but not always parallel to the spin angular velocity Ω, making the constant of proportionality a second-rank tensor rather than a scalar.
Angular momentum is additive; the total angular momentum of any composite system is the (pseudo) vector sum of the angular momenta of its constituent parts. For a continuous rigid body, the total angular momentum is the volume integral of angular momentum density (i.e. angular momentum per unit volume in the limit as volume shrinks to zero) over the entire body.
Torque can be defined as the rate of change of angular momentum, analogous to force. The net external torque on any system is always equal to the total torque on the system; in other words, the sum of all internal torques of any system is always 0 (this is the rotational analogue of Newton's Third Law). Therefore, for a closed system (where there is no net external torque), the total torque on the system must be 0, which means that the total angular momentum of the system is constant. The conservation of angular momentum helps explain many observed phenomena, for example the increase in rotational speed of a spinning figure skater as the skater's arms are contracted, the high rotational rates of neutron stars, the Coriolis effect, and the precession of gyroscopes. In general, conservation does limit the possible motion of a system, but does not uniquely determine what the exact motion is.
In quantum mechanics, angular momentum (like other quantities) is expressed as an operator, and its one-dimensional projections have quantized eigenvalues. Angular momentum is subject to the Heisenberg uncertainty principle, implying that at any time, only one projection (also called "component") can be measured with definite precision; the other two then remain uncertain. Because of this, it turns out that the notion of a quantum particle literally "spinning" about an axis does not exist. Nevertheless, elementary particles still possess a spin angular momentum, but this angular momentum does not correspond to spinning motion in the ordinary sense.
- 1 In classical mechanics
- 1.1 Definition
- 1.2 Discussion
- 1.3 Conservation of angular momentum
- 1.4 Angular momentum in orbital mechanics
- 1.5 Solid bodies
- 1.6 Collection of particles
- 2 Angular momentum (modern definition)
- 3 In quantum mechanics
- 4 In electrodynamics
- 5 In optics
- 6 History
- 7 See also
- 8 Footnotes
- 9 References
- 10 External links
In classical mechanics
Orbital angular momentum in two dimensions
Angular momentum is a vector quantity (more precisely, a pseudovector) that represents the product of a body's rotational inertia and rotational velocity (in radians/sec) about a particular axis. However, if the particle's trajectory lies in a single plane, it is sufficient to discard the vector nature of angular momentum, and treat it as a scalar (more precisely, a pseudoscalar). Angular momentum can be considered a rotational analog of linear momentum. Thus, where linear momentum p is proportional to mass m and linear speed v, p = mv, angular momentum L is proportional to moment of inertia I and angular speed ω measured in radians per second, L = Iω.
Unlike mass, which depends only on the amount of matter, moment of inertia also depends on the position of the axis of rotation and the shape of the matter. Unlike linear speed, which does not depend upon the choice of origin, angular velocity is always measured with respect to a fixed origin. Therefore, strictly speaking, L should be referred to as the angular momentum relative to that center.
Because I = mr² for a single particle and ω = v/r for circular motion, angular momentum can be expanded, L = mr²(v/r), and reduced to,
L = mrv,
the product of the radius of rotation r and the linear momentum of the particle p = mv. For motion that is not circular, only the component of the motion perpendicular to the radius vector contributes, so that,
L = rmv⊥,
where v⊥ is the perpendicular component of the motion. Expanding, rearranging, and reducing, angular momentum can also be expressed,
L = r⊥mv,
where r⊥ is the length of the moment arm, a line dropped perpendicularly from the origin onto the path of the particle. It is this definition, (length of moment arm)×(linear momentum), to which the term moment of momentum refers.
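As a quick numeric sanity check, the equivalent scalar forms above (L = Iω and L = r·mv for circular motion, where the moment arm equals the radius) can be compared directly; the values below are arbitrary illustrative choices.

```python
# Numeric check of the equivalent scalar forms for uniform circular
# motion: L = I*omega with I = m*r**2 agrees with L = r*(m*v) when
# v = r*omega. The numbers are arbitrary illustrative values.

m = 2.0        # mass, kg
r = 0.5        # radius of the circular path, m
omega = 4.0    # angular speed, rad/s

v = r * omega                  # tangential (linear) speed
I = m * r ** 2                 # moment of inertia of a point mass

L_from_inertia = I * omega     # L = I*omega
L_from_momentum = r * (m * v)  # L = (moment arm) x (linear momentum)
```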
Scalar angular momentum from Lagrangian mechanics
Another approach is to define angular momentum as the conjugate momentum (also called canonical momentum) of the angular coordinate φ expressed in the Lagrangian of the mechanical system. Consider a mechanical system with a mass m constrained to move in a circle of radius a in the absence of any external force field. The kinetic energy of the system is
T = ½ma²(dφ/dt)².
And the potential energy is
U = 0.
Then the Lagrangian is
ℒ(φ, dφ/dt) = T − U = ½ma²(dφ/dt)².
The generalized momentum "canonically conjugate to" the coordinate φ is defined by
p_φ = ∂ℒ/∂(dφ/dt) = ma²(dφ/dt).
Orbital angular momentum in three dimensions
To completely define orbital angular momentum in three dimensions, it is required to know the rate at which the position vector sweeps out angle, the direction perpendicular to the instantaneous plane of angular displacement, and the mass involved, as well as how this mass is distributed in space. By retaining this vector nature of angular momentum, the general nature of the equations is also retained, and can describe any sort of three-dimensional motion about the center of rotation – circular, linear, or otherwise. In vector notation, the orbital angular momentum of a point particle in motion about the origin can be expressed as:
L = Iω,
where I = mr² is the moment of inertia for a point mass and ω = (r × v)/r² is the orbital angular velocity of the particle about the origin.
This can be expanded, L = mr²((r × v)/r²), reduced, and by the rules of vector algebra, rearranged:
L = m(r × v) = r × mv = r × p,
which is the cross product of the position vector r and the linear momentum p = mv of the particle. By the definition of the cross product, the vector L is perpendicular to both r and p. It is directed perpendicular to the plane of angular displacement, as indicated by the right-hand rule – so that the angular velocity is seen as counter-clockwise from the head of the vector. Conversely, the vector L defines the plane in which r and p lie.
- where v⊥ is the perpendicular component of the motion, as above.
The two-dimensional scalar equations of the previous section can thus be given direction:
L = r⊥mv û = rmv⊥ û,
and L = rmv û for circular motion, where all of the motion is perpendicular to the radius r, and û is the unit vector perpendicular to the plane of angular displacement.
Angular momentum can be described as the rotational analog of linear momentum. Like linear momentum it involves elements of mass and displacement. Unlike linear momentum it also involves elements of position and shape.
Many problems in physics involve matter in motion about some certain point in space, be it in actual rotation about it, or simply moving past it, where it is desired to know what effect the moving matter has on the point—can it exert energy upon it or perform work about it? Energy, the ability to do work, can be stored in matter by setting it in motion—a combination of its inertia and its displacement. Inertia is measured by its mass, and displacement by its velocity. Their product,
p = mv,
is the matter's momentum. Referring this momentum to a central point introduces a complication: the momentum is not applied to the point directly. For instance, a particle of matter at the outer edge of a wheel is, in effect, at the end of a lever of the same length as the wheel's radius, its momentum turning the lever about the center point. This imaginary lever is known as the moment arm. It has the effect of multiplying the momentum's effort in proportion to its length, an effect known as a moment. Hence, the particle's momentum referred to a particular point,
L = rmv, is the angular momentum, sometimes called, as here, the moment of momentum of the particle versus that particular center point. The equation combines a moment (a mass m turning moment arm r) with a linear (straight-line equivalent) speed v. Linear speed referred to the central point is simply the product of the distance r and the angular speed ω versus the point: v = rω, another moment. Hence, angular momentum contains a double moment: L = rmrω. Simplifying slightly, L = r²mω; the quantity r²m is the particle's moment of inertia, sometimes called the second moment of mass. It is a measure of rotational inertia.
Because moment of inertia is a crucial part of the spin angular momentum, the latter necessarily includes all of the complications of the former, which is calculated by multiplying elementary bits of the mass by the squares of their distances from the center of rotation. Therefore, the total moment of inertia, and the angular momentum, is a complex function of the configuration of the matter about the center of rotation and the orientation of the rotation for the various bits.
For a rigid body, for instance a wheel or an asteroid, the orientation of rotation is simply the position of the rotation axis versus the matter of the body. It may or may not pass through the center of mass, or it may lie completely outside of the body. For the same body, angular momentum may take a different value for every possible axis about which rotation may take place. It reaches a minimum when the axis passes through the center of mass.
For a collection of objects revolving about a center, for instance all of the bodies of the Solar System, the orientations may be somewhat organized, as is the Solar System, with most of the bodies' axes lying close to the system's axis. Their orientations may also be completely random.
In brief, the more mass and the farther it is from the center of rotation (the longer the moment arm), the greater the moment of inertia, and therefore the greater the angular momentum for a given angular velocity. In many cases the moment of inertia, and hence the angular momentum, can be simplified by,
I = k²m,
- where k is the radius of gyration, the distance from the axis at which the entire mass may be considered as concentrated.
Similarly, for a point mass the moment of inertia is,
I = r²m,
- where r is the radius of the point mass from the center of rotation,
and for any collection of particles as the sum,
I = Σᵢ rᵢ²mᵢ.
Angular momentum's dependence on position and shape is reflected in its units versus linear momentum: kg⋅m2/s, N⋅m⋅s, or J⋅s for angular momentum versus kg⋅m/s or N⋅s for linear momentum. When calculating angular momentum as the product of the moment of inertia times the angular velocity, the angular velocity must be expressed in radians per second, where the radian assumes the dimensionless value of unity. (When performing dimensional analysis, it may be productive to use orientational analysis which treats radians as a base unit, but this is outside the scope of the International system of units). Angular momentum's units can be interpreted as torque⋅time or as energy⋅time per angle. An object with angular momentum of L N⋅m⋅s can be reduced to zero rotation (all of the rotational energy can be transferred out of it) by an angular impulse of L N⋅m⋅s or equivalently, by torque or work of L N⋅m for one second, or energy of L J for one second.
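The torque⋅time interpretation at the end of the paragraph can be made concrete with a small worked example (arbitrary illustrative values): a constant opposing torque applied for a given time removes exactly torque × time of angular momentum.

```python
# Illustration of angular momentum units as torque x time: a body with
# L = 6.0 kg*m^2/s (equivalently N*m*s) is brought to rest by a constant
# opposing torque of 3.0 N*m applied for 2.0 s. Values are arbitrary.

L0 = 6.0          # initial angular momentum, N*m*s
torque = 3.0      # constant braking torque, N*m
t = 2.0           # duration of application, s

angular_impulse = torque * t      # N*m*s, the same units as L
L_final = L0 - angular_impulse    # angular momentum removed by the impulse
```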
The plane perpendicular to the axis of angular momentum and passing through the center of mass is sometimes called the invariable plane, because the direction of the axis remains fixed if only the interactions of the bodies within the system, free from outside influences, are considered. One such plane is the invariable plane of the Solar System.
Angular momentum and torque
Newton's second law of motion can be expressed mathematically,
F = dp/dt,
which means that the torque (i.e. the time derivative of the angular momentum) is
τ = dL/dt.
Because the moment of inertia is I = mr², it follows that L = Iω, and dL/dt = I(dω/dt) + (dI/dt)ω, which, when the moment of inertia is constant, reduces to
τ = Iα.
This is the rotational analog of Newton's Second Law. Note that the torque is not necessarily proportional or parallel to the angular acceleration (as one might expect). The reason for this is that the moment of inertia of a particle can change with time, something that cannot occur for ordinary mass.
Conservation of angular momentum
A rotational analog of Newton's third law of motion might be written, "In a closed system, no torque can be exerted on any matter without the exertion on some other matter of an equal and opposite torque." Hence, angular momentum can be exchanged between objects in a closed system, but total angular momentum before and after an exchange remains constant (is conserved).
Seen another way, a rotational analogue of Newton's first law of motion might be written, "A rigid body continues in a state of uniform rotation unless acted by an external influence." Thus with no external influence to act upon it, the original angular momentum of the system remains constant.
The conservation of angular momentum is used in analyzing central force motion. If the net force on some body is directed always toward some point, the center, then there is no torque on the body with respect to the center, as all of the force is directed along the radius vector, and none is perpendicular to the radius. Mathematically, torque τ = r × F = 0, because in this case r and F are parallel vectors. Therefore, the angular momentum of the body about the center is constant. This is the case with gravitational attraction in the orbits of planets and satellites, where the gravitational force is always directed toward the primary body and orbiting bodies conserve angular momentum by exchanging distance and velocity as they move about the primary. Central force motion is also used in the analysis of the Bohr model of the atom.
For a planet, angular momentum is distributed between the spin of the planet and its revolution in its orbit, and these are often exchanged by various mechanisms. The conservation of angular momentum in the Earth–Moon system results in the transfer of angular momentum from Earth to Moon, due to tidal torque the Moon exerts on the Earth. This in turn results in the slowing down of the rotation rate of Earth, at about 65.7 nanoseconds per day, and in gradual increase of the radius of Moon's orbit, at about 3.82 centimeters per year.
The conservation of angular momentum explains the angular acceleration of an ice skater as she brings her arms and legs close to the vertical axis of rotation. By bringing part of the mass of her body closer to the axis, she decreases her body's moment of inertia. Because angular momentum is the product of moment of inertia and angular velocity, if the angular momentum remains constant (is conserved), then the angular velocity (rotational speed) of the skater must increase.
The same phenomenon results in extremely fast spin of compact stars (like white dwarfs, neutron stars and black holes) when they are formed out of much larger and slower rotating stars. Decrease in the size of an object n times results in increase of its angular velocity by the factor of n2.
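The n² scaling stated above follows directly from conservation of L = Iω with I ∝ r² for a point-mass model; a short numeric sketch with arbitrary values:

```python
# Sketch of the spin-up described above: for a point-mass model with
# I = m*r**2, conserving L = I*omega while shrinking the radius by a
# factor n increases the angular velocity by n**2. Values are arbitrary.

m = 1.0
r1, omega1 = 10.0, 1.0
L = m * r1 ** 2 * omega1        # conserved angular momentum

n = 5.0
r2 = r1 / n                     # size decreased n times
omega2 = L / (m * r2 ** 2)      # new angular velocity from conservation of L
```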
Conservation is not always a full explanation for the dynamics of a system but is a key constraint. For example, a spinning top is subject to gravitational torque making it lean over and change the angular momentum about the nutation axis, but neglecting friction at the point of spinning contact, it has a conserved angular momentum about its spinning axis, and another about its precession axis. Also, in any planetary system, the planets, star(s), comets, and asteroids can all move in numerous complicated ways, but only so that the angular momentum of the system is conserved.
Noether's theorem states that every conservation law is associated with a symmetry (invariant) of the underlying physics. The symmetry associated with conservation of angular momentum is rotational invariance. The fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved.
Angular momentum in orbital mechanics
The angular momentum divided by the mass, h = L/m = r × v, is called specific angular momentum. Note that mass is often unimportant in orbital mechanics calculations, because motion is defined by gravity. The primary body of the system is often so much larger than any bodies in motion about it that the smaller bodies have a negligible gravitational effect on it; it is, in effect, stationary. All bodies are apparently attracted by its gravity in the same way, regardless of mass, and therefore all move approximately the same way under the same conditions.
For a continuous mass distribution with density function ρ(r), a differential volume element dV with position vector r within the mass has a mass element dm = ρ(r)dV. Therefore, the infinitesimal angular momentum of this element is:
dL = r × v dm = ρ(r)(r × v) dV.
In the derivation which follows, integrals similar to this can replace the sums for the case of continuous mass.
Collection of particles
Center of mass
For a collection of particles in motion about an arbitrary origin, it is informative to develop the equation of angular momentum by resolving their motion into components about their own center of mass and about the origin. Given,
- mᵢ is the mass of particle i,
- Rᵢ is the position vector of particle i vs the origin,
- Vᵢ is the velocity of particle i vs the origin,
- R is the position vector of the center of mass vs the origin,
- V is the velocity of the center of mass vs the origin,
- rᵢ is the position vector of particle i vs the center of mass,
- vᵢ is the velocity of particle i vs the center of mass,
The total mass of the particles is simply their sum, M = Σᵢ mᵢ.
The position vector of the center of mass is defined by, MR = Σᵢ mᵢRᵢ.
The total angular momentum of the collection of particles is the sum of the angular momentum of each particle, L = Σᵢ Rᵢ × mᵢVᵢ.
It can be shown that (see sidebar), Rᵢ = R + rᵢ and Vᵢ = V + vᵢ, so that
L = Σᵢ (R + rᵢ) × mᵢ(V + vᵢ) = Σᵢ R × mᵢV + Σᵢ R × mᵢvᵢ + Σᵢ rᵢ × mᵢV + Σᵢ rᵢ × mᵢvᵢ.
By the definition of the center of mass, Σᵢ mᵢrᵢ = 0, and similarly for Σᵢ mᵢvᵢ;
therefore the second and third terms vanish,
L = Σᵢ R × mᵢV + Σᵢ rᵢ × mᵢvᵢ.
The first term can be rearranged,
Σᵢ R × mᵢV = R × MV,
and total angular momentum for the collection of particles is finally,
L = R × MV + Σᵢ rᵢ × mᵢvᵢ.
The first term is the angular momentum of the center of mass relative to the origin. Similar to Single particle, below, it is the angular momentum of one particle of mass M at the center of mass moving with velocity V. The second term is the angular momentum of the particles moving relative to the center of mass, similar to Fixed center of mass, below. The result is general—the motion of the particles is not restricted to rotation or revolution about the origin or center of mass. The particles need not be individual masses, but can be elements of a continuous distribution, such as a solid body.
Rearranging equation (2) by vector identities, multiplying both terms by "one", and grouping appropriately,
In the case of a single particle moving about the arbitrary origin,
Fixed center of mass
For the case of the center of mass fixed in space with respect to the origin,
Angular momentum (modern definition)
In modern (20th century) theoretical physics, angular momentum (not including any intrinsic angular momentum – see below) is described using a different formalism, instead of a classical pseudovector. In this formalism, angular momentum is the 2-form Noether charge associated with rotational invariance. As a result, angular momentum is not conserved for general curved spacetimes, unless it happens to be asymptotically rotationally invariant.
In classical mechanics, the angular momentum of a particle can be reinterpreted as a plane element: L = r ∧ p,
in which the exterior product ∧ replaces the cross product × (these products have similar characteristics but are nonequivalent). This has the advantage of a clearer geometric interpretation as a plane element, defined from the x and p vectors, and the expression is true in any number of dimensions (two or higher). In Cartesian coordinates:
L_xy = xp_y − yp_x, L_yz = yp_z − zp_y, L_zx = zp_x − xp_z,
or more compactly in index notation:
L_ij = x_i p_j − x_j p_i.
The angular velocity can also be defined as an antisymmetric second order tensor, with components ω_ij. The relation between the two antisymmetric tensors is given by the moment of inertia, which must now be a fourth order tensor: L_ij = I_ijkl ω_kl.
Again, this equation in L and ω as tensors is true in any number of dimensions. This equation also appears in the geometric algebra formalism, in which L and ω are bivectors, and the moment of inertia is a mapping between them.
In each of the above cases, for a system of particles, the total angular momentum is just the sum of the individual particle angular momenta, and the centre of mass is for the system.
In quantum mechanics
Angular momentum in quantum mechanics differs in many profound respects from angular momentum in classical mechanics. In relativistic quantum mechanics, it differs even more, in which the above relativistic definition becomes a tensorial operator.
Spin, orbital, and total angular momentum
The classical definition of angular momentum as L = r × p can be carried over to quantum mechanics, by reinterpreting r as the quantum position operator and p as the quantum momentum operator. L is then an operator, specifically called the orbital angular momentum operator. The components of the angular momentum operator satisfy the commutation relations of the Lie algebra so(3). Indeed, these operators are precisely the infinitesimal action of the rotation group on the quantum Hilbert space. (See also the discussion below of the angular momentum operators as the generators of rotations.)
However, in quantum physics, there is another type of angular momentum, called spin angular momentum, represented by the spin operator S. Almost all elementary particles have spin. Spin is often depicted as a particle literally spinning around an axis, but this is a misleading and inaccurate picture: spin is an intrinsic property of a particle, unrelated to any sort of motion in space and fundamentally different from orbital angular momentum. All elementary particles have a characteristic spin, for example electrons have "spin 1/2" (this actually means "spin ħ/2") while photons have "spin 1" (this actually means "spin ħ").
Finally, there is total angular momentum J, which combines both the spin and orbital angular momentum of all particles and fields. (For one particle, J = L + S.) Conservation of angular momentum applies to J, but not to L or S; for example, the spin–orbit interaction allows angular momentum to transfer back and forth between L and S, with the total remaining constant. Electrons and photons need not have integer-based values for total angular momentum, but can also have fractional values.
In quantum mechanics, angular momentum is quantized – that is, it cannot vary continuously, but only in "quantum leaps" between certain allowed values. For any system, the following restrictions on measurement results apply, where ħ is the reduced Planck constant and n̂ is any Euclidean vector (direction) such as x, y, or z:
|If you measure…||The result can be…|
|L_n̂||…, −2ħ, −ħ, 0, ħ, 2ħ, …|
|S_n̂ or J_n̂||…, −(3/2)ħ, −ħ, −(1/2)ħ, 0, (1/2)ħ, ħ, (3/2)ħ, …|
|L²||ħ²ℓ(ℓ + 1), where ℓ = 0, 1, 2, …|
|S² or J²||ħ²j(j + 1), where j = 0, 1/2, 1, 3/2, …|
(There are additional restrictions as well, see angular momentum operator for details.)
The reduced Planck constant is tiny by everyday standards, about 10−34 J s, and therefore this quantization does not noticeably affect the angular momentum of macroscopic objects. However, it is very important in the microscopic world. For example, the structure of electron shells and subshells in chemistry is significantly affected by the quantization of angular momentum.
In the definition L = r × p, six operators are involved: the position operators r_x, r_y, r_z, and the momentum operators p_x, p_y, p_z. However, the Heisenberg uncertainty principle tells us that it is not possible for all six of these quantities to be known simultaneously with arbitrary precision. Therefore, there are limits to what can be known or measured about a particle's angular momentum. It turns out that the best that one can do is to simultaneously measure both the angular momentum vector's magnitude and its component along one axis.
The uncertainty is closely related to the fact that different components of an angular momentum operator do not commute, for example L_x L_y ≠ L_y L_x. (For the precise commutation relations, see angular momentum operator.)
Total angular momentum as generator of rotations
As mentioned above, orbital angular momentum L is defined as in classical mechanics: L = r × p, but total angular momentum J is defined in a different, more basic way: J is defined as the "generator of rotations". More specifically, J is defined so that the operator
R(n̂, φ) ≡ exp(−(i/ħ)φ J·n̂) is the rotation operator that takes any system and rotates it by angle φ about the axis n̂. (The "exp" in the formula refers to the operator exponential.) To put this the other way around, whatever our quantum Hilbert space is, we expect that the rotation group SO(3) will act on it. There is then an associated action of the Lie algebra so(3) of SO(3); the operators describing the action of so(3) on our Hilbert space are the (total) angular momentum operators.
The relationship between the angular momentum operator and the rotation operators is the same as the relationship between Lie algebras and Lie groups in mathematics. The close relationship between angular momentum and rotations is reflected in Noether's theorem that proves that angular momentum is conserved whenever the laws of physics are rotationally invariant.
When describing the motion of a charged particle in an electromagnetic field, the canonical momentum P (derived from the Lagrangian for this system) is not gauge invariant. As a consequence, the canonical angular momentum L = r × P is not gauge invariant either. Instead, the momentum that is physical, the so-called kinetic momentum (used throughout this article), is (in SI units)
p = mv = P − qA,
where q is the electric charge of the particle and A is the magnetic vector potential of the electromagnetic field.
The interplay with quantum mechanics is discussed further in the article on canonical commutation relations.
The angular momentum density vector is given by a vector product as in classical mechanics:
The above identities are valid locally, i.e. at each point in space at a given moment t.
- A top, whose parts by their cohesion are perpetually drawn aside from rectilinear motions, does not cease its rotation, otherwise than as it is retarded by the air. The greater bodies of the planets and comets, meeting with less resistance in more free spaces, preserve their motions both progressive and circular for a much longer time.
He did not further investigate angular momentum directly in the Principia, saying:
- From such kind of reflexions also sometimes arise the circular motions of bodies about their own centres. But these are cases which I do not consider in what follows; and it would be too tedious to demonstrate every particular that relates to this subject.
The Law of Areas
As a planet orbits the Sun, the line between the Sun and the planet sweeps out equal areas in equal intervals of time. This had been known since Kepler expounded his second law of planetary motion. Newton derived a unique geometric proof, and went on to show that the attractive force of the Sun's gravity was the cause of all of Kepler's laws.
During the first interval of time, an object is in motion from point A to point B. Undisturbed, it would continue to point c during the second interval. When the object arrives at B, it receives an impulse directed toward point S. The impulse gives it a small added velocity toward S, such that if this were its only velocity, it would move from B to V during the second interval. By the rules of velocity composition, these two velocities add, and point C is found by construction of parallelogram BcCV. Thus the object's path is deflected by the impulse so that it arrives at point C at the end of the second interval. Because the triangles SBc and SBC have the same base SB and the same height Bc or VC, they have the same area. By symmetry, triangle SBc also has the same area as triangle SAB, therefore the object has swept out equal areas SAB and SBC in equal times.
At point C, the object receives another impulse toward S, again deflecting its path during the third interval from d to D. Thus it continues to E and beyond, the triangles SAB, SBc, SBC, SCd, SCD, SDe, SDE all having the same area. Allowing the time intervals to become ever smaller, the path ABCDE approaches indefinitely close to a continuous curve.
Note that because this derivation is geometric, and no specific force is applied, it proves a more general law than Kepler's second law of planetary motion. It shows that the Law of Areas applies to any central force, attractive or repulsive, continuous or non-continuous, or zero.
Conservation of angular momentum in the Law of Areas
The proportionality of angular momentum to the area swept out by a moving object can be understood by realizing that the bases of the triangles, that is, the lines from S to the object, are equivalent to the radius r, and that the heights of the triangles are proportional to the perpendicular component of velocity v⊥. Hence, if the area swept per unit time is constant, then by the triangular area formula 1/2(base)(height), the product (base)(height) and therefore the product rv⊥ are constant: if r and the base length are decreased, v⊥ and height must increase proportionally. Mass is constant, therefore angular momentum rmv⊥ is conserved by this exchange of distance and velocity.
In the case of triangle SBC, area is equal to 1/2(SB)(VC). Wherever C is eventually located due to the impulse applied at B, the product (SB)(VC), and therefore rmv⊥, remain constant. Similarly so for each of the triangles.
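The conservation of rv⊥, and hence of the swept area, can also be verified numerically. The sketch below integrates motion under a central inverse-square force with a kick-drift-kick (leapfrog) scheme; each sub-step changes v parallel to r or r parallel to v, so the scalar x·v_y − y·v_x is conserved to rounding error. Initial conditions and step size are arbitrary illustrative choices.

```python
# Numeric check of the Law of Areas: integrate motion under a central
# inverse-square force (directed toward the origin S) and verify that
# the specific angular momentum, x*vy - y*vx, stays constant, so that
# equal areas are swept in equal times.

def accel(x, y, mu=1.0):
    r3 = (x * x + y * y) ** 1.5
    return -mu * x / r3, -mu * y / r3   # central force: always along -r

x, y = 1.0, 0.0
vx, vy = 0.0, 1.2          # gives a bound, non-circular (elliptic) orbit
dt = 1e-4

h0 = x * vy - y * vx        # specific angular momentum at the start
for _ in range(20000):      # kick-drift-kick (leapfrog) integration
    ax, ay = accel(x, y)
    vx, vy = vx + 0.5 * dt * ax, vy + 0.5 * dt * ay   # half kick (along r)
    x, y = x + dt * vx, y + dt * vy                   # drift (along v)
    ax, ay = accel(x, y)
    vx, vy = vx + 0.5 * dt * ax, vy + 0.5 * dt * ay   # half kick (along r)

h1 = x * vy - y * vx        # specific angular momentum at the end
```

Because the kicks change velocity parallel to the radius vector and the drift changes position parallel to the velocity, neither sub-step contributes to r × v, mirroring Newton's triangle argument above.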
Leonhard Euler, Daniel Bernoulli, and Patrick d'Arcy all understood angular momentum in terms of conservation of areal velocity, a result of their analysis of Kepler's second law of planetary motion. It is unlikely that they realized the implications for ordinary rotating matter.
Bernoulli wrote in a 1744 letter of a "moment of rotational motion", possibly the first conception of angular momentum as we now understand it.
Louis Poinsot in 1803 began representing rotations as a line segment perpendicular to the rotation, and elaborated on the "conservation of moments".
William J. M. Rankine's 1858 Manual of Applied Mechanics defined angular momentum in the modern sense for the first time:
- ...a line whose length is proportional to the magnitude of the angular momentum, and whose direction is perpendicular to the plane of motion of the body and of the fixed point, and such, that when the motion of the body is viewed from the extremity of the line, the radius-vector of the body seems to have right-handed rotation.
In an 1872 edition of the same book, Rankine stated that "The term angular momentum was introduced by Mr. Hayward," probably referring to R.B. Hayward's article On a Direct Method of estimating Velocities, Accelerations, and all similar Quantities with respect to Axes moveable in any manner in Space with Applications, which was introduced in 1856, and published in 1864. Rankine was mistaken, as numerous publications feature the term starting in the late 18th to early 19th centuries. However, Hayward's article apparently was the first use of the term and the concept seen by much of the English-speaking world. Before this, angular momentum was typically referred to as "momentum of rotation" in English.
- Absolute angular momentum
- Angular momentum coupling
- Angular momentum of light
- Angular momentum diagrams (quantum mechanics)
- Clebsch–Gordan coefficients
- Linear-rotational analogs
- Orders of magnitude (angular momentum)
- Pauli–Lubanski pseudovector
- Relative angular momentum
- Relativistic angular momentum
- Rigid rotor
- Rotational energy
- Specific relative angular momentum
- de Podesta, Michael (2002). Understanding the Properties of Matter (2nd, illustrated, revised ed.). CRC Press. p. 29. ISBN 978-0-415-25788-6. Extract of page 29
- Wilson, E. B. (1915). Linear Momentum, Kinetic Energy and Angular Momentum. The American Mathematical Monthly. XXII. Ginn and Co., Boston, in cooperation with University of Chicago, et al. p. 190 – via Google books.
- Worthington, Arthur M. (1906). Dynamics of Rotation. Longmans, Green and Co., London. p. 21 – via Google books.
- Taylor, John R. (2005). Classical Mechanics. University Science Books, Mill Valley, CA. p. 90. ISBN 978-1-891389-22-1.
- Dadourian, H. M. (1913). Analytical Mechanics for Students of Physics and Engineering. D. Van Nostrand Company, New York. p. 266 – via Google books.
- Watson, W. (1912). General Physics. Longmans, Green and Co., New York. p. 33 – via Google books.
- Barker, George F. (1893). Physics: Advanced Course (4th ed.). Henry Holt and Company, New York. p. 66 – via Google Books.
- Barker, George F. (1893). Physics: Advanced Course (4th ed.). Henry Holt and Company, New York. pp. 67–68 – via Google Books.
- Oberg, Erik; et al. (2000). Machinery's Handbook (26th ed.). Industrial Press, Inc., New York. p. 143. ISBN 978-0-8311-2625-4.
- Watson, W. (1912). General Physics. Longmans, Green and Co., New York. p. 34 – via Google books.
- Kent, William (1916). The Mechanical Engineers' Pocket Book (9th ed.). John Wiley and Sons, Inc., New York. p. 517 – via Google books.
- Oberg, Erik; et al. (2000). Machinery's Handbook (26th ed.). Industrial Press, Inc., New York. p. 146. ISBN 978-0-8311-2625-4.
- Oberg, Erik; et al. (2000). Machinery's Handbook (26th ed.). Industrial Press, Inc., New York. pp. 161–162. ISBN 978-0-8311-2625-4.
- Kent, William (1916). The Mechanical Engineers' Pocket Book (9th ed.). John Wiley and Sons, Inc., New York. p. 527 – via Google books.
- Battin, Richard H. (1999). An Introduction to the Mathematics and Methods of Astrodynamics, Revised Edition. American Institute of Aeronautics and Astronautics, Inc. ISBN 978-1-56347-342-5., p. 97
- Rankine, W. J. M. (1872). A Manual of Applied Mechanics (6th ed.). Charles Griffin and Company, London. p. 507 – via Google books.
- Crew, Henry (1908). The Principles of Mechanics: For Students of Physics and Engineering. Longmans, Green, and Company, New York. p. 88 – via Google books.
- Worthington, Arthur M. (1906). Dynamics of Rotation. Longmans, Green and Co., London. p. 82 – via Google books.
- Worthington, Arthur M. (1906). Dynamics of Rotation. Longmans, Green and Co., London. p. 11 – via Google books.
- Stephenson, F. R.; Morrison, L. V.; Whitrow, G. J. (1984). "Long-term changes in the rotation of the earth – 700 B.C. to A.D. 1980". Philosophical Transactions of the Royal Society. 313 (1524): 67. Bibcode:1984RSPTA.313...47S. doi:10.1098/rsta.1984.0082. +2.40 ms/century divided by 36525 days.
- Dickey, J. O.; et al. (1994). "Lunar Laser Ranging: A Continuing Legacy of the Apollo Program" (PDF). Science. 265 (5171): 482–90, see 486. Bibcode:1994Sci...265..482D. doi:10.1126/science.265.5171.482. PMID 17781305.
- Landau, L. D.; Lifshitz, E. M. (1995). The classical theory of fields. Course of Theoretical Physics. Oxford, Butterworth–Heinemann. ISBN 978-0-7506-2768-9.
- Battin, Richard H. (1999). An Introduction to the Mathematics and Methods of Astrodynamics, Revised Edition. American Institute of Aeronautics and Astronautics, Inc. p. 115. ISBN 978-1-56347-342-5.
- Wilson, E. B. (1915). Linear Momentum, Kinetic Energy and Angular Momentum. The American Mathematical Monthly. XXII. Ginn and Co., Boston, in cooperation with University of Chicago, et al. p. 188, equation (3) – via Google books.
- Wilson, E. B. (1915). Linear Momentum, Kinetic Energy and Angular Momentum. The American Mathematical Monthly. XXII. Ginn and Co., Boston, in cooperation with University of Chicago, et al. p. 191, Theorem 8 – via Google books.
- Synge and Schild, Tensor calculus, Dover publications, 1978 edition, p. 161. ISBN 978-0-486-63612-2.
- R.P. Feynman; R.B. Leighton; M. Sands (1964). Feynman's Lectures on Physics (volume 2). Addison–Wesley. pp. 31–7. ISBN 978-0-201-02117-2.
- Hall 2013 Section 17.3
- Ballantine, K. E.; Donegan, J. F.; Eastham, P. R. (2016). "There are many ways to spin a photon: Half-quantization of a total optical angular momentum". Science Advances. 2 (4): e1501748. Bibcode:2016SciA....2E1748B. doi:10.1126/sciadv.1501748. PMC 5565928. PMID 28861467.
- Littlejohn, Robert (2011). "Lecture notes on rotations in quantum mechanics" (PDF). Physics 221B Spring 2011. Retrieved 13 Jan 2012.
- Okulov, A Yu (2008). "Angular momentum of photons and phase conjugation". Journal of Physics B: Atomic, Molecular and Optical Physics. 41 (10): 101001. arXiv:0801.2675. Bibcode:2008JPhB...41j1001O. doi:10.1088/0953-4075/41/10/101001.
- Okulov, A.Y. (2008). "Optical and Sound Helical structures in a Mandelstam – Brillouin mirror". JETP Letters (in Russian). 88 (8): 561–566. Bibcode:2008JETPL..88..487O. doi:10.1134/s0021364008200046.
- Newton, Isaac (1803). "Axioms; or Laws of Motion, Law I". The Mathematical Principles of Natural Philosophy. Andrew Motte, translator. H. D. Symonds, London. p. 322 – via Google books.
- Newton, Axioms; or Laws of Motion, Corollary III
- see Borrelli, Arianna (2011). "Angular momentum between physics and mathematics" (PDF). for an excellent and detailed summary of the concept of angular momentum through history.
- Bruce, Ian (2008). "Euler : Mechanica Vol. 1".
- "Euler's Correspondence with Daniel Bernoulli, Bernoulli to Euler, 04 February, 1744" (PDF). The Euler Archive.
- Rankine, W. J. M. (1872). A Manual of Applied Mechanics (6th ed.). Charles Griffin and Company, London. p. 506 – via Google books.
- Hayward, Robert B. (1864). "On a Direct Method of estimating Velocities, Accelerations, and all similar Quantities with respect to Axes moveable in any manner in Space with Applications". Transactions of the Cambridge Philosophical Society. 10: 1. Bibcode:1864TCaPS..10....1H.
- see, for instance, Gompertz, Benjamin (1818). "On Pendulums vibrating between Cheeks". The Journal of Science and the Arts. III (V): 17 – via Google books.; Herapath, John (1847). Mathematical Physics. Whittaker and Co., London. p. 56 – via Google books.
- see, for instance, Landen, John (1785). "Of the Rotatory Motion of a Body of any Form whatever". Philosophical Transactions. LXXV (I): 311–332. doi:10.1098/rstl.1785.0016.
- Cohen-Tannoudji, Claude; Diu, Bernard; Laloë, Franck (2006). Quantum Mechanics (2 volume set ed.). John Wiley & Sons. ISBN 978-0-471-56952-7.
- Condon, E. U.; Shortley, G. H. (1935). "Especially Chapter 3". The Theory of Atomic Spectra. Cambridge University Press. ISBN 978-0-521-09209-8.
- Edmonds, A. R. (1957). Angular Momentum in Quantum Mechanics. Princeton University Press. ISBN 978-0-691-07912-7.
- Hall, Brian C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, 267, Springer, ISBN 978-0-387-40122-5.
- Jackson, John David (1998). Classical Electrodynamics (3rd ed.). John Wiley & Sons. ISBN 978-0-471-30932-1.
- Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 978-0-534-40842-8.
- Thompson, William J. (1994). Angular Momentum: An Illustrated Guide to Rotational Symmetries for Physical Systems. Wiley. ISBN 978-0-471-55264-2.
- Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 978-0-7167-0809-4.
- Feynman, R.; Leighton, R.; Sands, M. (September 2013). "19–4 Rotational kinetic energy". The Feynman Lectures on Physics (online ed.). The Feynman Lectures Website.
Right angle (geometry)
In Euclidean geometry, a right angle, symbolized by the L-shaped figure ∟, is created when two straight lines meet perpendicularly at 90 degrees to each other; the perpendicular bisects the straight angle of the line into two equal right angles.
The right angle is demonstrated:
Given a line DC with point B lying on it, project a line from B through point A, and take B as the vertex of angle ABC. If the angle ABC equals the angle ABD, then angle ABC is a right angle, and so is angle ABD.
The plus sign, +, consists of two such lines, and so the four angles at its heart are all right angles.
4th grade (Eureka Math/EngageNY)
Sal relates equivalent decimals and fractions written in word form.
Want to join the conversation?
- How do you multiply fractions?
- To multiply two fractions, you would take their numerators (the number on top of the line) and multiply them as you would any other numbers. Then, take the denominators (the numbers on the bottom of the line) and multiply them together. Your answer should be the product of the numerators above the product of the denominators.
- Why is there no ones place in decimals?
- The decimal place value tells you the denominator of the fraction that the digit represents. For example:
0.1 = 1 tenth = 1/10
0.01 = 1 hundredth = 1/100
So if you had a oneth place, what would you get? You would have 1/1 = 1 (a whole number). You're back to the ones place which is on the left side of the decimal.
Hope this helps.
- How long can a decimal go to the right of the decimal point?
- What would you need decimals for in life?
- Decimals are most often used with money. $1.23 That means that when you are buying something you will probably see a decimal. It is also good to know how to convert fractions to decimals, as they appear cleaner and can be used more easily in some cases. Instead of 1 1/2, we could simply write 1.5. I hope this helps! If you have any questions be sure to let me know.
- Why do fractions have to be more exact? I like decimals more. :(
- Some fractions convert to decimals that have infinitely many digits. For those fractions, using a finite number of decimal digits gives an approximation. For example, if we want to express 2 out of 3 equal parts, the exact answer is 2/3, but something like 0.667 is an approximation. The decimal for 2/3 has infinitely many digits.
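The point about approximation can be checked directly; here is a quick sketch in Python (not part of the original lesson) comparing the exact fraction with a finite decimal:

```python
from fractions import Fraction

# "2 out of 3 equal parts" is exactly the fraction 2/3 ...
two_thirds = Fraction(2, 3)

# ... but any finite run of decimal digits is only an approximation.
approx = round(2 / 3, 3)                  # 0.667

print(two_thirds)                         # 2/3 (exact)
print(approx)                             # 0.667 (approximate)
print(float(two_thirds) == approx)        # False: the decimal is not exact
```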
- [Instructor] We are told to write seven hundredths as a fraction and a decimal. Why don't you get some paper and a pencil out and see if you can do that before we do it together. All right, so let's do it first as a fraction. So what is going to be the denominator of our fraction if they're saying seven hundredths? And the way I'm saying it is a little bit of a hint. Seven hundredths, oh, I think you got the picture. We're dealing with hundredths. So our denominator is going to be 100. And then how many hundredths do we have? Well we have seven of them. I'll do that in a different color just to be clear. We have seven of those hundredths. So there you have it, seven hundredths. That's this expressed as a fraction. Now what about as a decimal? Well, we could think about our decimal places. If let's say that this is the ones place, and I'm just putting a little blank here. So this is the ones place, and you have a decimal right over here. And then this would be the tenths place. And then this would be the hundredths place. Well we have, we want to represent seven hundredths. So let me be clear, this right over here is ones. This is tenths, and this is hundredths. I like saying, it's unusually fun to say that, hundredths. All right, ones, tenths, hundredths. So how many ones do we have here? Well we have no ones, we have zero ones. How many tenths do we have here? Well we have no tenths. How many hundredths do we have? Well we have seven hundredths. Okay now it's getting annoying. We have seven hundredths. So you can write it that way as well. And if I wanted to just write it a little bit cleaner, I could just write no ones, no tenths, and seven hundredths. I said it the last time like a normal person. Let's do another example. Here, we're told select the written form of each number. And so they, on the left right over here, you have different representations here. 
We have things written as a decimal, a fraction, another decimal, and then we want to say hey, which of these are represented in words or a combination of numbers and words up here. So pause this video and have a go at this as well. Okay, so this first number right over here, we have no ones, and then as we go one space to the right of the decimal, this is the tenths place. And it's clear we have four of those tenths. So this right over here is four tenths. So that is this choice right here, so we would, I'll shade it in, if you're doing this on Khan Academy, you would just click there and it would fill in. So what about this one? Well this one, we would read, you have four out of ten, or four tenths. So this again would be four tenths. So we would shade that one in. Now what's going on over here? We have no ones, we have no tenths. But we have four hundredths. I said it again, it's too much fun. So we have four hundredths. So that's this column. So we would fill that one in, and we're done.
Strike and dip
Strike and dip refer to the orientation or attitude of a geologic feature. The strike line of a bed, fault, or other planar feature is a line representing the intersection of that feature with a horizontal plane. On a geologic map, this is represented with a short straight line segment oriented parallel to the strike line. Strike (or strike angle) can be given either as a quadrant compass bearing of the strike line in terms of east or west of true north or south (N25°E for example), or as a single three-digit number representing the azimuth, where the lower of the two possible azimuths is usually given (the example of N25°E would simply be 025), or as the azimuth number followed by the degree sign (the example of N25°E would be 025°).
The dip gives the steepest angle of descent of a tilted bed or feature relative to a horizontal plane, and is given by the number (0°-90°) as well as a letter (N,S,E,W) with rough direction in which the bed is dipping downwards. One technique is to always take the strike so the dip is 90° to the right of the strike, in which case the redundant letter following the dip angle is omitted (right hand rule, or RHR). The map symbol is a short line attached and at right angles to the strike symbol pointing in the direction which the planar surface is dipping down. The angle of dip is generally included on a geologic map without the degree sign. Beds that are dipping vertically are shown with the dip symbol on both sides of the strike, and beds that are flat are shown like the vertical beds, but with a circle around them. Both vertical and flat beds do not have a number written with them.
Another way of representing strike and dip is by dip and dip direction. The dip direction is the azimuth of the direction of the dip as projected onto the horizontal (like the trend of a linear feature in trend and plunge measurements), which is 90° off the strike angle. For example, a bed dipping 30° to the south would have an east-west strike (and would be written 090°/30° S using strike and dip), but would be written as 30/180 using the dip and dip direction method.
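Assuming the right-hand-rule convention described above, where the bed dips 90° clockwise from the strike azimuth, the conversion between the two notations can be sketched in Python (the function name is illustrative, not from any standard library):

```python
def strike_dip_to_dip_direction(strike, dip):
    """Convert a right-hand-rule strike/dip pair to dip/dip-direction.

    Under the RHR convention the bed dips 90 degrees clockwise from
    the strike azimuth, so the dip direction is strike + 90 (mod 360).
    """
    return dip, (strike + 90) % 360

# The example from the text: an east-west strike (090) dipping 30 degrees
# to the south is written 30/180 in dip / dip-direction notation.
print(strike_dip_to_dip_direction(90, 30))  # (30, 180)
```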
Strike and dip are determined in the field with a compass and clinometer, or a combination of the two, such as a Brunton compass, named after D.W. Brunton, a Colorado miner. Compass-clinometers which measure dip and dip direction in a single operation are often called "stratum" or "Klar" compasses, after a German professor. Smartphone apps are also now available that make use of the internal accelerometer to provide orientation measurements. Combined with the GPS functionality of such devices, this allows readings to be recorded and later downloaded onto a map.
Any planar feature can be described by strike and dip. This includes sedimentary bedding, faults and fractures, cuestas, igneous dikes and sills, metamorphic foliation and any other planar feature in the Earth. Linear features are measured with very similar methods, where "plunge" is the dip angle and "trend" is analogous to the dip direction value.
Apparent dip is the name of any dip measured in a vertical plane that is not perpendicular to the strike line. True dip can be calculated from apparent dip using trigonometry if you know the strike. Geologic cross sections use apparent dip when they are drawn at some angle not perpendicular to strike.
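The trigonometric relation referred to here is usually written tan(apparent dip) = tan(true dip) × sin(β), where β is the angle between the cross-section line and the strike line; a minimal sketch:

```python
import math

def apparent_dip(true_dip_deg, beta_deg):
    """Apparent dip (degrees) seen in a vertical section whose trace
    makes angle beta_deg with the strike line:
    tan(apparent) = tan(true) * sin(beta)."""
    t = math.tan(math.radians(true_dip_deg))
    return math.degrees(math.atan(t * math.sin(math.radians(beta_deg))))

# A section perpendicular to strike (beta = 90) shows the true dip ...
print(round(apparent_dip(30, 90), 1))  # 30.0
# ... while a section parallel to strike (beta = 0) shows no dip at all.
print(round(apparent_dip(30, 0), 1))   # 0.0
```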
- Weng Y.-H., Sun F.-S. & Grigsby J.D. (2012). "GeoTools: An android phone application in geology". Computers & Geosciences 44: 24–30. doi:10.1016/j.cageo.2012.02.027.
- Compton, Robert R. (1985). Geology in the Field. New York: J. Wiley and Sons. ISBN 978-0-471-82902-7. OCLC 301031779.
- Lahee, Frederic Henry (1961) . Field Geology (6th ed.). New York: McGraw-Hill. OCLC 500832981.
- Tarbuck, Edward J.; Lutgens, Frederick K. (2008). Earth: An Introduction to Physical Geology (9th ed.). Upper Saddle River, N.J.: Pearson Prentice Hall. ISBN 0-13-156684-9. OCLC 70408067.
- "Digital Cartographic Standard for Geologic Map Symbolization". FGDC Geological Data Subcommittee. USGS. August 2006. Retrieved 20 March 2010.
The Industrial Revolution describes the period between 1750 and 1850, in which tremendous changes, characterized by developments in textiles and iron, were realized. The revolution was spearheaded by Britain. Modern historians refer to these changes as the first Industrial Revolution (Clark, 2007). The second revolution was characterized by steel, electronics and automobiles and was spearheaded by Germany (Clark, 2007). The Industrial Revolution was a period filled with drastic social and economic changes. The transformation from hand-made tools and goods to machine-manufactured products changed not only the economy, but also the lives of the workers. The first changes began in Great Britain in the 1780s and spread across Europe and North America by the 19th century, leaving a profound effect on the entire world. The Industrial Revolution affected every aspect of human society, including the nature of work, child labor, and the health conditions of the workers.
Agriculture was the dominant occupation before the Industrial Revolution. Sébastien Le Prestre de Vauban listed many typical jobs including “…mowing, harvesting, threshing, woodcutting, working the soil and the vineyards, clearing land, ditching, carrying soil to vineyards or elsewhere, laboring for builders and several other tasks…” (Wiesner 152) in his tax-reform proposal. This document shows that life as a farmer consisted of purely manual labor. Although these jobs were arduous and demanding, the typical agricultural worker was only employed for half the year, according to Vauban. Agriculture was a task-based working system in which work was completed by a certain deadline. As long as the tasks were completed on time, the hours spent working were not tightly regimented. With the dawn of the Industrial Revolution, workers moved from the fields to the factories.
The Industrial Revolution had a great impact on human rights and conditions, and it also resulted in significant technological advancements, but it can be categorically stated that the technological advancements during the Industrial Revolution were paramount when compared to the revolution’s impact upon human rights and conditions. Agriculture was the mainstay of livelihood before the era of the Industrial Revolution. Most people owned farmland, and workers were employed to work on the farms. Although the Industrial Revolution brought about significant economic development throughout Europe, there were also considerable social and cultural changes (Snooks, 2002). The Industrial Revolution had a tremendous transformative effect on the middle class, which initially comprised industrialists and businessmen and grew into a class of noble and gentry people. There were increased employment opportunities for ordinary working people in the industries, but under strict working conditions: their work was monitored and controlled by machines, hence the long hours of work (Clark, 2007).
The Industrial Revolution led to urbanization, since many people relocated to the cities to look for employment in factories such as the water-powered silk mill and the cotton spinning mill. This was characterized by dense, cramped housing and poor living conditions. New laws were introduced governing child labor, public health and working conditions for ordinary workers, to prevent exploitation (Snooks, 2002). As much as there were positive effects of the Industrial Revolution such as urbanization, there was also a negative reaction to it among anti-technologists such as the Luddites (Clark, 2007). There was a change in culture, since new cities grew rapidly, affecting families and peer groups; for instance, peer groups could be an influence toward drugs.

Economic Changes
During the first Industrial Revolution there was an unprecedented economic transformation; there was a tremendous, sustained increase in population growth. This led to considerable expansion of commercial activities in Europe (Snooks, 2002). Steam power was invented and used to provide power in the factories, for mining, and for transport. It replaced human labor, introduced machines that could mine at depth, increased production in the industries, and provided a fast means of transport to the markets. The textile industry was changed by new machines such as the spinning jenny, allowing for much higher production at lower costs and in less time (Jacob, 1997). The Industrial Revolution also brought about better transport systems, such as the canals and then the railways. These provided a quick, better means of transporting raw materials from the mines and finished products to the market, enabling trade expansion. There was also much development in the metal and chemical industries, which provided better working conditions for workers (Clark, 2007). The development of all-metal machine tools enabled the manufacture of more production machines for industries. These spread all over Western Europe and North America and then to the rest of the world.
Causes of the Industrial Revolution
The Industrial Revolution came about due to several inventions, and the scientific revolution allowed for new discoveries and technology. The resources required for industrialization were readily available, which helped it to occur. There was a culture of hard work, developing ideas and risk-taking that prepared Europe for the Industrial Revolution. The availability of a large amount of capital that Europe was ready to use for investment also contributed (Clark, 2007). The end of feudalism changed economic relationships across the European continent, which encouraged industrialization. A large population that could supply an industrial workforce was available.
As much as Western Europe tried to do away with capitalism, the Industrial Revolution contributed to the creation of a true capitalist system. There was a wide spread of investments, stock markets, and business corporations. Britain was the main advocate for the Industrial Revolution due to its agricultural revolution; the British kings lost power and the landholders gained power (Clark, 2007). There is no doubt that the Industrial Revolution was one of the most influential time periods of human history. It was almost solely responsible for propelling society into the modern economies that we still have in place today. The technological advances of this time are what allowed for the mass production of goods and services, which allowed for trade to be conducted on a much larger scale. Additionally, the average family saw an increase in its income, because an unskilled worker could find work in one of the many new factories that were opened to produce the goods needed for the business world. Unfortunately, the workers of the time were usually taken advantage of because of their apparent lack of skill and the abundance of workers available for the same jobs. This created a work environment especially negative for women, who would be treated unfairly in the workplace and would receive less pay for the work that they did. Prompted by the oppression that many in the working class felt, literary works were put out to inspire the workers to take back their freedoms. Karl Marx advocated for a revolution of the working class over the management that held them back. Bakunin advocated for the overthrow of the government to bring society to a natural state of harmony, and the Pope pushed for a united workers' front in which the government protected its citizens from being oppressed in the workplace. Over the course of the Industrial Revolution the worker saw vast changes, which ultimately led to the improved economic times we have now.
Clark, G. (2007). A Farewell to Alms: A Brief Economic History of the World. Princeton: Princeton University Press.

Jacob, M. (1997). Scientific Culture and the Making of the Industrial West. Oxford: Oxford University Press.

Snooks, G. D. (2002). Was the Industrial Revolution Necessary? London: Routledge.
The European Industrial Revolution was a time of drastic change. In England it became a transformation from hand tools and hand made items to machined and mass-produced goods. The growth of factories replaced the cottage industries and spawned the development of cities. Growing cities and factories led to changes in transportation, labor, and working conditions. These changes generally helped workers lives, even though initially there were more negatives than positives.
Before the Industrial Revolution, England’s economy was based on its cottage industry. Workers would buy raw materials from merchants, take them back to their cottages (hence the name), and produce the goods in their homes. This industry was efficient, but the workers’ productivity was low. Subsequently, goods were high in price and exclusive to wealthy people. The Industrial Revolution meant factories could mass-produce items at much lower costs than the cottage industries, making goods more affordable to consumers.
With the invention of the steam engine, a shift from rural waterwheels to steam engines as an industrial power source facilitated the emergence of factories and industrial cities. Factories started the process of urbanization by causing people to leave rural sectors and move to the cities looking for a better life. The increase in population in the cities caused overcrowding and pollution, and thus became a breeding ground for communicable diseases. Cities had a snowballing effect, developing new businesses, and new and improved transportation systems evolved.
The developments in transportation played an important role in industrialization. Growing cities necessitated investments to be made in improving infrastructure, including roads, bridges, and canals. This paved the way for industrialization which needed an efficient system to transport mass amounts of goods from factories to markets. As the sale of goods increased, factories’ production needed to increase causing problems for the factory worker.
Factories changed the meaning of labor. Even if the hours worked were roughly the same in the factory or in the cottage, factory wage earners lost control over the pace and methods of their work. Constant supervision was also a novel experience, at least for the head of the household. The head of the household, usually the father of the family, was the supervisor in the family run cottage. In the factories he lost his supervision power and was just a worker. Consequently men avoided factory work in the early nineteenth century.
Most cottage working families chose to stay in their homes. As factory production grew, home workers saw their earnings shrink. The next generation of cottage workers would find the choice tipped much more heavily toward factories.
Life was drastically changed during the Industrial Revolution. Factory workers were living in germ-infested, crowded and very unhealthful conditions, much like their place of work. Children and women labored in harsh conditions, working long hours with little pay. Much of the British working class was worse off. Real wages stagnated while workers sacrificed freedom, health and family. Government involvement was needed to change these conditions, and laws such as the Factory Act (1833) were passed to improve working conditions. The Industrial Revolution changed Europe forever, and its social and economic changes helped guide other countries through their growth and industrialization processes.
Cross, Gary; Szostak, Rick. Technology and American Society. New Jersey: Prentice Hall Inc., 1995.

Willner, Mark; Martin, Mary; Weiner, Jerry; More, David; Hero, George. Let's Review: Global Studies. Barron’s Educational Series Inc., 1994.

Greenberg, Marc. Lectures at The College of Aeronautics. Windsor Locks, Ct., 1999.
What happens… in CIRCLE TIME?
A HELP SHEET FOR PARENTS
Your child may have come home from school talking about ‘circle time’. This article explains what circle time is and why it is an important part of your child’s learning.
We all know, from our own experience, that there are some things you can learn from a book – but some things you can’t. Understanding feelings, respecting other people’s views, developing confidence and learning to trust each other are all good examples. To learn these things we need to think, talk about them with other people and work in a way that helps them to develop.
All parents want their children to be able to do these things, so how do teachers get on with trying to teach them? One method that teachers in Coxheath Primary use is called ‘circle time’. This isn’t a new idea, but more and more schools are using it as they see how valuable it is in teaching personal and social skills to children.
What is circle time?
‘Circle time’ is a term used by many teachers to describe a time when the whole class meets together, sitting in a circle, either on the floor or on chairs. But it isn’t as simple as that. It’s a carefully planned time in which children can develop a wide range of skills and attitudes such as confidence, self-esteem, talking and listening. It’s particularly useful for:
- developing trust
- helping a class to ‘gel’
- working on problems such as bullying
- developing children’s awareness of their responsibilities towards others and towards themselves
- exploring new ideas
- developing moral values
- helping children to feel they ‘belong’
- making children feel special
- having fun
Circle time can help children to enjoy learning. It also helps children with their friendships and strengthens the relationship between the teacher and the class. This in turn improves everyone’s experience of school and helps children to get the most out of their school day.
How long does it last?
Circle time may last from about 20 to 45 minutes, depending on the age of the children and the reason why the teacher is using it. This time is just as valuable as the daily numeracy and literacy hours, but it deals with different things and different ways of learning. Teachers and schools who use circle time choose to do so because they believe it is the best way to teach the things they use it for.
Circle time has its own rules and children come to think of it as a ‘special’ time, which they feel good about. These rules make sure that children feel confident during circle time and not nervous about having to say or do something. For example, children know they will have a time when they can say something without anyone else interrupting. To help children remember this, the class might pass an object round the circle and the rule is that only the person holding the object may speak. Nobody has to speak but the teacher makes sure that everyone has an equal chance to do so.
What’s so special about a circle?
Sitting in a circle is important for two reasons. First, it is practical because everyone can clearly see and hear everyone else. Second, there is no front or back, no beginning or end, no ‘best’ or ‘worst’ position – everyone is in an equally good place to take part in the activities, including the teacher. Children see this as ‘fair’ and it helps teachers to work on the idea of equal respect for everyone, an attitude that is developed through circle time.
What kinds of activities happen in circle time?
Sometimes the activities are good fun and sometimes they are more serious.
The activities in a typical circle time include:
- short ‘rounds’ where children complete a phrase such as ‘Something I’ve really enjoyed this week is…’
- debates on issues such as animal rights and children’s TV
- discussions about good and not so good things going on in the classroom
- activities to sort out a problem such as bullying
- games to develop trust and help children to work together
- games to improve children's listening skills, drama, sharing ideas, celebrating good things that have happened, opportunities for children to share thoughts and feelings in a carefully managed way
- trying different ways to relax.
The list is almost endless! One thing circle time does is provide an opportunity for smiling – which has to be good for learning.
Are you looking for innovative ways to make math more engaging for your students or children? Why not take the classroom outdoors and into the garden? Math is all around us, and the garden provides a rich environment for fun, practical, hands-on learning experiences. Gardening gets kids outdoors and offers a great opportunity to teach measurements, addition, and other mathematical principles.
“According to national research, garden-based learning delivers,” writes Maaike Baker in Civil Eats. “REAL School Gardens, a nonprofit organization that trains teachers and creates garden learning environments for schools across the country, has seen a 12 to 15 percent increase in standardized test score pass rates in their schools. In addition, 94 percent of teachers reported an increase in student engagement in the garden and the classroom.” That is an immense leap in proficiency (and a great introduction to healthy eating, too!).
From counting seeds to measuring plant growth, here are eight creative math garden activities:
Seed counting and sorting
Start by having your students or children count and sort seeds before planting. This activity helps reinforce counting skills and introduces the concept of grouping and categorizing objects based on similarities and differences. You can also incorporate basic addition and subtraction by asking questions like, “If we plant five rows of carrots with ten seeds in each row, how many seeds do we need in total?”
Measuring plant spacing
Teach measurement concepts by having students measure the distance between plants as they plant them in the garden. They can use rulers or measuring tapes to determine the appropriate spacing for different types of plants. This activity reinforces the importance of precision and accuracy in measurement. It also teaches problem-solving and will help determine how many plants are needed!
Estimating and measuring the garden area
Encourage students to estimate the area of the garden bed and then measure it using non-standard units such as footsteps or hand spans. This activity introduces the concept of area and helps develop estimation skills. You can also extend this activity by calculating the perimeter of the garden bed.
Tracking plant growth
Set up a growth chart to track the height of plants over time. Have students measure and record the height of each plant at regular intervals using measuring tapes or rulers. This fun activity not only reinforces measurement skills but also introduces the concept of data collection and graphing. Incorporate questions like, “How long are your bean sprouts? Who grew the biggest zucchini?”
Harvesting and weighing produce
When it’s time to harvest fruits and vegetables from the garden, involve students in the process of weighing and recording the produce. They can use kitchen scales or balance scales to determine the weight of each item. This activity provides practical experience with measuring weight and reinforces the connection between math skills and real-world applications.
Calculating watering needs
Introduce the concept of volume by having students calculate the amount of water needed to irrigate the garden. They can measure the capacity of watering cans or hoses and calculate how much water is required based on the size of the garden bed and the water needs of different plants. This activity helps develop problem-solving skills and an understanding of mathematical relationships. Another fun idea: Leave a vase or jar outside to collect rainwater. Use your rain gauge to introduce everything from counting to basic fluid measurements.
Planning garden layouts
Have students design and plan garden layouts using graph paper. They can use scale drawings to represent different elements of the garden, such as plant beds, pathways, and structures. This activity integrates basic geometry concepts like shapes, angles, and symmetry, while also fostering creativity and spatial reasoning skills.
Another important consideration when laying out your garden is how much each plant likes the sun. Some plants thrive in full sunlight, whereas others will wilt and die in the summer heat. Have your little ones help you determine where the sun is coming from and plan which plants go where using height and angles.
Budgeting for garden supplies
Teach financial literacy skills by involving students in budgeting for garden supplies. Have them research the cost of seeds, soil, tools, and other materials needed for the garden, and then create a budget to manage expenses. This activity helps develop math skills related to addition, subtraction, multiplication, and division, as well as critical thinking skills related to decision-making and resource allocation.
Many kids are visual learners. Using graphs to track weekly growth, harvest, and rainwater is fun and a great way to introduce basic math skills.
Not every garden learning opportunity comes straight from Mother Nature. You can use the garden as a setting for different games, such as hide-and-seek sticks. To play this game, write up numbers on popsicle sticks and push them into the soil, hiding the number underground. Each kid collects sticks and counts how many points they got.
Have your little one use their newfound math skills to help put together a picnic lunch, which you can eat surrounded by your flowers and veggies in the garden itself!
By incorporating math into garden activities, you can make learning more meaningful and engaging for children while also fostering a deeper appreciation for the natural world. So, roll up your sleeves, grab your gardening tools, and get ready to explore the fascinating intersection of math and gardening!
This tutorial covers hypothesis testing. Hypothesis testing is a very common procedure in making inferences. First off, what's a hypothesis? This might be a word that you've heard in common language or in a science class. When talking about statistics, a hypothesis is a claim about a population parameter.
So the key about a hypothesis is that you're making a claim about the population. So that's the key. It's only a claim. It's not something we know to be true. It's something we believe or are trying to prove is true, and it's about the population. An example could be a college president claiming that 75% of the athletes graduate in four years.
So the population they're talking about there is the athletes, and the claim they're making is that 75% of them graduate in four years. Now, a hypothesis test considers the probability that a sample statistic equal to or more extreme than the one observed could come from a population with a particular parameter.
So our example will help to make sense of that. It says, we take a sample of 30 student athletes and find that 18 graduated in four years. So we would want to test whether or not-- for that sample, how likely is it that it comes from this population, or that it could come from a population where 75% graduate in four years?
When doing a hypothesis test, first we set up a null hypothesis. The null hypothesis is the starting assumption. It's the default. It's the no-change position, and it's a claim about a particular value. So for example, in the last one, our null hypothesis is that the population proportion is 75%. So the population proportion equals 0.75.
Now, this here is how we write the null hypothesis. The H is telling us it's a hypothesis, and the 0 in subscript is telling us that it's the null hypothesis. The reason that's important is because there's another kind of hypothesis, the alternative hypothesis.
The alternative hypothesis is the other one. It's a claim that the population differs from the value claimed in the null. So whatever happens in the null, the alternative is saying that it's different. Now, you can say that it's less than, greater than, or just that it's not the same, that it's not equal to. So for example, we want to say that our alternative hypothesis is that the population proportion is not 75%. So we say P does not equal 0.75.
If we had reason to believe that the population proportion was less than 75%, we could have written our alternative hypothesis as the proportion being less than 0.75. So again, it's about a particular value. And you use the same kind of notation. This H with a subscript of a is telling us it's a hypothesis-- the alternative hypothesis.
One important thing to note is that the null hypothesis is always written as an equality. While the alternative hypothesis can be written as an inequality, the null hypothesis is always, always, always an equality. Now, when you're doing hypothesis testing, you're going to end up using statistics, and particular forms of statistics. We'll see more about that in other tutorials.
But when you're doing those statistics, the only thing you can do is reject or fail to reject the null hypothesis. You cannot accept the alternative hypothesis, because you're only testing the null one-- you're only testing whether the data are consistent with the claim that 75% of that population graduate in four years.
You can reject that claim if your test shows it is unlikely, but you cannot accept the alternative hypothesis. That's one common error that people make, so beware of it: when you're drawing your conclusions during a hypothesis test, you're only rejecting or failing to reject the null. You're not accepting the alternative. This has been your tutorial on hypothesis testing.
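The graduation example above (18 graduates in a sample of 30, against a claimed population proportion of 0.75) can be worked numerically. The sketch below uses a one-proportion z-test with the normal approximation; the tutorial does not name a specific test, so the choice of test and the function name are illustrative assumptions:

```python
import math

def one_proportion_z_test(successes, n, p0):
    """Two-sided one-proportion z-test of H0: p = p0
    (normal approximation to the binomial)."""
    p_hat = successes / n                    # sample proportion
    se = math.sqrt(p0 * (1 - p0) / n)        # standard error under H0
    z = (p_hat - p0) / se
    # Two-sided p-value from the standard normal tails:
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

z, p = one_proportion_z_test(18, 30, 0.75)
print(f"z = {z:.3f}, p-value = {p:.3f}")
```

Here z comes out near -1.9 and the p-value near 0.06, so at the usual 0.05 level we would fail to reject the null hypothesis. Notice the conclusion is "fail to reject," not "accept the alternative."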
THERMO Spoken Here! ~ J. Pohl © (A2740~1/15) -- (A2980 - 1.11 Uniform Motion: BODY)
Typically, HS students are taught Newton's Laws of Motion by use of an algebra-based ("ab") approach. The idea is to simplify Newton's Laws so they are easier to understand. However, to formulate and write his Laws, Newton invented vectors and calculus. This was not to be fancy; Newton knew mathematics more powerful than algebra was needed. Now, more than 300 years after Newton, HS physics still teaches his Laws "in first gear," unencumbered by vectors or calculus.
Most physics texts present Newton's idea, his 2nd Law of Motion, as the words "force equals mass times acceleration." Students learn the idea, saying the words like a mantra: "f equals ma." To proceed, texts assign symbols to force, mass and acceleration, then represent the idea mathematically as an algebraic equation. Mathematics is the means whereby symbolic physical entities (force, mass, et al.) are made quantitative. The idea is abstract; application of it will follow.
The words of the second law and its mantra are abstract, just beginnings of the idea. Texts are alike to this point. Text for text, the words and form of the equation are the same, but the equations have different notations. Inspection of three texts yields equation forms that are algebraically equal but differently notated.
HS text forms of Newton's 2nd Law of Motion.
Suppose we write Newton's 2nd Law in its HS algebra-based form, then change it to use vectors (origin, basis, unit vectors...), then modify it to include concepts of calculus (difference, limit, derivative...) -- changing backward, step by step, toward what Newton wrote in 1687. What would the original 2nd Law, before the ink was dry, look like? Also, with this task might we find a superior notational form for the algebra-based 2nd Law? If so, what is it?
Newton's writings regarding the Laws of Motion express his emphasis. Preceding his laws he wrote two axioms, and he specified a physical model for analysis.
Axiom I: "...quantity of matter:" Newton accepted, axiomatically, the existence of matter and further specified that matter could be made quantitative. Conclusion: Existence and quantifiability of mass is the beginning idea. A second idea is "what mass," or the "system."
Axiom I: Quantity of matter is mass. The identification of a "quantity" (of matter) constitutes identification of a "system." The symbol "m" is used to denote the mass of a system.
System modeled as BODY: The subject (system) of Newton's Laws is a "model" of physical reality, the "BODY." This simplification idealizes the mass of something physically real as mass located "at a point."
Axiom II: "...quantity of motion" Newton defined "quantity of motion" to be a measurable, scalar times vector, mass times velocity.
BODY: ... real mass assumed to exist at a point.
Axiom II: "...quantity of motion" is today called "momentum." Momentum is a vector-calculus idea. The momentum of a mass requires special specification. Momentum for a BODY is written as:
Newton's Three Laws: The names, "First, Second..." cause one to rank the ideas.
Newton's First Law: in his statement... "Every BODY..." Body is a model of physical reality. While any real amount of mass (a body, as we might say) occupies space (has a volume), Newton's perspective of mass was that it had no volume. The subject of the 2nd Law is an amount of physical reality which has a mass. To generalize, let's call that selected mass the system. Newton used the very simplest model of system - the BODY: all mass located at a point.
The 2nd Law addresses an aspect of that mass: its momentum (m_BODY V_BODY). By his second axiom, Newton stated that "quantity of motion" (or momentum, as called today) was the property of motion of a BODY.
The 2nd Law of Motion (with momentum, "mV," as the independent variable) is a first-order differential equation. Newton did not use mathematics "to be fancy."
In most HS physics texts, Newton's idea of motion, his "momentum" (the vector entity, the mass times velocity product), is replaced by the product mass times acceleration.
Many physics-text-forms of Newton's Second Law have F or f written left-of-equality. In the paragraph immediately beneath the equation it is stated that F and f do not represent a force. Rather F or f represent a vector sum of all forces applicable. Some texts use the notation Fnet which is a vector sum of all forces applicable.
The mantra taught is "force equals mass times acceleration."
Most applications of Newton's 2'nd Law involve more than one force. There is a common mathematical notation to designate an equation term as being the "discrete sum of its occurrences." That notation is to prefix the term with the Greek upper-case letter sigma, Σ. When force, F, is prefixed with "ΣF," the meaning is clear: "this term is the sum of all relevant forces - be sure to identify and sum the forces."
We now return to equations (1) and rearrange as Equation (2)(below left):
(2) The commonly used F is in fact a ΣF. (3) It is motion of a “BODY” we observe. Vectors (position, velocity, force) are written in the “0XYZ” vector space.
We choose to make Equation (2) more specific, to become Equation (3). Forces and acceleration are not algebraic entities; they are vectors. The distinction, what is a scalar versus what is a vector is important. Vector entities are written (here) with an over-arrow (and sometimes with an over-bar). To specify a vector, a vector space, origin, coordinate axes, and a unit vector basis, must be defined. Students are familiar with Cartesian coordinates (0XYZ). Around 1850, Sir William Hamilton invented the unit vector triple, I, J and K. Since equations contain thought, equations with vectors should identify their space. (To identify space is a necessary skill for those who program video-games or the actions of robots).
The product of a BODY's mass and velocity, "mV," is called its "momentum."
Finally, about Equation (3): the system of Newton's Laws was a collection of matter he modeled as a BODY (the simplest approximation of matter). Specifically, the mass and acceleration of Equation (3) are those of the BODY. We place that subscript behind those terms, then move them left-of-equality for reasons explained below.
System (with its system states) is left-of-equality. The actions of forces (right-of-equality) might change system states:
Newton invented calculus not for fun but to define velocity and to define acceleration, the A of f = ma.
The “left” term is identically the same as the “right” term. The equation simply identifies a “shorter” way of writing things.
Acceleration is a characteristic of the motion of something (we call it a BODY) in space (we use the Cartesian space, 0XYZ). Acceleration equals the derivative of the velocity of the something in the space (written above left).
Below Left: the mass of a BODY is multiplied by the acceleration of that BODY. Acceleration is the derivative of velocity. Therefore (as shown at left) mA = m dV/dt. Below Right: the mass term (a constant) is brought inside of the derivative ("d/dt"), showing that mA = d(mV)/dt, which is our preferred form of mA.
Our next step is simply to substitute the extreme right side of (6) into the left side of (4).
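The equations numbered (4), (6) and (7) were figures in the original page. Reconstructed from the surrounding text (the subscript B marks BODY quantities, and the equation numbers follow the text's own references), the substitution reads:

```latex
m_{B}\,\vec{A}_{B} = \sum\vec{F} \tag{4}

m_{B}\,\vec{A}_{B} = m_{B}\,\frac{d\vec{V}_{B}}{dt}
                   = \frac{d\left(m_{B}\vec{V}_{B}\right)}{dt} \tag{6}

\frac{d\left(m_{B}\vec{V}_{B}\right)}{dt} = \sum\vec{F} \tag{7}
```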
The above form of Newton's Second Law of Motion contains momentum, his axiomatic "quantity of motion," explicitly. Forces are physical "constructs," or reasons for change of momentum. Notice that when ΣF = 0, Equation (7) becomes Newton's First Law of Motion.
Newton's perspective, the "subjects of his Laws," is made clear upon understanding the axioms he set down as basis. The first axiom established the existence of a "quantity of matter." No proof is required in an axiomatic method. Mass (a measurable scalar property of matter) is possessed by a body and is of importance in its motion. A second axiom Newton called "quantity of motion." Today we call that quantity momentum. Momentum is less easy to quantify than mass: it is the product of a mass and its velocity. Newton was obliged to use vectors to quantify position (relative position, he realized) as a prerequisite idea. Space needed to be quantified - a vector space. Change of position in time begat velocity, which required vector calculus: velocity is the derivative of the vector position. This momentum is the principal and second idea of his laws of motion.
The form of Newton's Laws of Motion selected for this writing is arranged in accord with Newton's axiomatic approach. Calculus is made explicit; the derivative of body momentum is placed prominently, left of equality. Subscripts on the derivative identify the system (the BODY, for now). Superscripts on the derivative identify the vector space. Finally, ΣF replaces F right of equality in the differential equation, where non-homogeneous terms belong. Vectors... all properly notated. Phew! We also need the initial conditions written near the differential equation.
Mathematical notation says this better than words.
In use, Newton's equation is accompanied by a sketch of the physical situation. The sketch below is over-complete. In any application such completeness is rarely needed. However, the sketch puts a picture to all of the ideas brought together.
There is a highly compelling reason students should use the differential equation form of Newton's Laws of Motion. In later engineering study you will find the mass equation, momentum equation and energy equation for a body have the same form. All three physical statements are first order differential equations (with their respective initial conditions). The skills and understanding gained in solving any of them are the same skills needed to solve and understand the others. All three equations take the same perspective - system.
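To make the "first-order differential equation with initial condition" point concrete, here is a minimal numerical sketch of the momentum form d(mV)/dt = ΣF for a single BODY. The scenario (a one-dimensional falling body with linear drag) and every numeric value are illustrative assumptions of this sketch, not from the article:

```python
# Minimal sketch: integrate the momentum form d(mV)/dt = sum(F)
# for a 1-D falling BODY with weight and linear drag, explicit Euler.
# The scenario and all parameter values are illustrative assumptions.

def simulate_fall(m=2.0, g=9.81, c=0.5, dt=0.001, t_end=20.0):
    """Velocity (m/s, positive down) after t_end seconds, starting from rest."""
    p = 0.0                          # momentum m*V; initial condition V0 = 0
    for _ in range(int(t_end / dt)):
        v = p / m                    # recover velocity from momentum
        f_sum = m * g - c * v        # sum of forces: weight minus drag
        p += f_sum * dt              # dp = (sum F) * dt
    return p / m

v_final = simulate_fall()
print(round(v_final, 2))             # approaches terminal velocity m*g/c = 39.24
```

As the drag term grows, ΣF approaches zero and the momentum stops changing: the First Law emerging as the special case of the Second, exactly as noted above for Equation (7).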
By physics, the center equation (which includes the idea "f = ma") would be written "f = ma." The left and right equations are engineering rate form expressions that account for changes of mass and energy of a BODY as system, respectively.
Further Reason: In closing, even if Newton did not use the formulation of Equation (7), we should use it. It does everything "f = ma" does and it expresses the mathematical meaning of Newton's Laws better. Since 1687, science and engineering have used this form more and more -- for example, to describe expansion of the universe. The mean distance "l" between conserved cosmological particles is increasing with time. The mathematical statement of the rate of increase of this mean distance is written as:
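The equation referred to here was an image in the original. Assuming the author means the standard Hubble relation for the mean separation of comoving particles (an assumption on my part, since only the words survive), it would read:

```latex
\frac{dl}{dt} = H(t)\,l, \qquad l(t_0) = l_0
```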
This also, is a first-order differential equation!
Newton studied Cosmology and he used the rate form.
For further proof of the power of the first-order differential equation, read about the mathematics of the Lotka-Volterra predator-prey equations.
Momentum also is "what matter has" until it changes. Far better (than "f = ma" ) for engineering is the system property perspective of momentum, written (for a BODY) as before:
In this writing, "BODY" (uppercase) is used rather than "body" to emphasize that Newton studied the former, which is a very special, isolated perspective of the latter. Newton used mathematical and physical analysis to study physical reality. He sought to discover the methods and perspectives of study that would reveal the secrets of nature. His approach, which we use today, might well be called "Newton's Analytic Method." In any analysis, his very first step was a complete mental and mathematical identification and extraction of the body from its space, to become a BODY for analysis.
While there are three laws, by strict usage Newton's Second Law is sufficient. The First Law is true, but the statement of the Second Law for the case of zero net force is equivalent (there is reason to view the First Law as a special case of the Second Law). Newton's Third Law is different altogether. It does not apply to a single BODY; rather it applies as a condition or rule regarding the inter-dependence of forces within systems of two or more BODIES. Thus, in application, Newton's Laws of Motion are competently represented by his Second Law of Motion (2nd Law). In high school, "force equals mass times acceleration" is the 2nd Law mantra.
The idea "force" was created (prior to Newton) as a means or "stratagem of use" in understanding motion. Two types were understood: those "acting at the surface" and those "acting from a distance" over the mass of the BODY, called gravity. Force is a "construct."
As a recent and excellent example see Richard Fitzpatrick's Newtonian Dynamics, page 9 (briefly paraphrased here).
In 1687 Newton published his famous Three Laws of Motion. Every two or so years thereafter, some scholar, physicist or engineer has published an article or text chapter in which "What Newton Meant," is explained. That debate continues. A recent, representative writing (which I paraphrase here) is Richard Fitzpatrick's Newtonian Dynamics.
In presenting Newton's 2nd Law to students, HS physics avoids the vectors and calculus Newton invented and used. Most physics texts reduce Newton's understanding of motion to an algebraic equation. Three equational notations used in physics texts are:
Premise presently unwritten!
In chemistry, a coordination complex consists of a central atom or ion, usually metallic, called the coordination centre, and a surrounding array of bound molecules or ions, known in turn as ligands or complexing agents. Many metal-containing compounds, especially those of transition metals, are coordination complexes. A coordination complex whose centre is a metal atom is called a metal complex. Coordination complexes are so pervasive that their structures and reactions are described in many ways, sometimes confusingly. The atom within a ligand that is bonded to the central metal atom or ion is called the donor atom. In a typical complex, a metal ion is bonded to several donor atoms, which can be the same or different. A polydentate ligand is a molecule or ion that bonds to the central atom through several of the ligand's atoms; these complexes are called chelate complexes. The central atom or ion, together with all ligands, comprises the coordination sphere; the central atom or ion and the donor atoms comprise the first coordination sphere.
Coordination refers to the "coordinate covalent bonds" between the central atom and the ligands. Originally, a complex implied a reversible association of molecules, atoms, or ions through such weak chemical bonds; as applied to coordination chemistry, this meaning has evolved. Some metal complexes are formed irreversibly, and many are bound together by bonds that are quite strong. The number of donor atoms attached to the central atom or ion is called the coordination number. The most common coordination numbers are 2, 4 and 6. A hydrated ion is one kind of complex ion, a species formed between a central metal ion and one or more surrounding ligands (molecules or ions that contain at least one lone pair of electrons). If all the ligands are monodentate, the number of donor atoms equals the number of ligands. For example, the cobalt hexahydrate ion, or hexaaquacobalt(II) ion [Co(H2O)6]2+, is a hydrated-complex ion that consists of six water molecules attached to a metal ion Co2+. The oxidation state and the coordination number reflect the number of bonds formed between the metal ion and the ligands in the complex ion.
However, the coordination number of [Pt(en)2]2+ (where en is the bidentate ligand ethylenediamine) is 4, since its two bidentate ligands contain four donor atoms in total. Any donor atom will give a pair of electrons. There are some donor groups which can offer more than one pair of electrons; such ligands are called polydentate. In some cases an atom or a group offers a pair of electrons to two similar or different central metal atoms or acceptors, by division of the electron pair, into a three-center two-electron bond; these are called bridging ligands. Coordination complexes have been known since the beginning of modern chemistry. Early well-known coordination complexes include dyes such as Prussian blue. Their properties were first well understood in the late 1800s, following the 1869 work of Christian Wilhelm Blomstrand. Blomstrand developed what became known as the complex ion chain theory; the theory claimed that the reason coordination complexes form is because, in solution, ions would be bound via ammonia chains. He compared this effect to the way carbon chains link in hydrocarbons. Following this theory, Danish scientist Sophus Mads Jørgensen made improvements to it.
In his version of the theory, Jørgensen claimed that when a molecule dissociates in a solution there were two possible outcomes: the ions would bind via the ammonia chains Blomstrand had described, or the ions would bind directly to the metal. It was not until 1893 that the most accepted version of the theory today was published, by Alfred Werner. Werner's work included two important changes to the Blomstrand theory. The first was that Werner described the two different ion possibilities in terms of location in the coordination sphere. He claimed that if the ions were to form a chain, this would occur outside of the coordination sphere, while the ions that bound directly to the metal would do so within the coordination sphere. In one of his most important discoveries, however, Werner disproved the majority of the chain theory. Werner was able to discover the spatial arrangements of the ligands that were involved in the formation of the complex hexacoordinate cobalt. His theory allows one to understand the difference between a coordinated ligand and a charge-balancing ion in a compound, for example the chloride ion in the cobaltammine chlorides, and to explain many of the previously inexplicable isomers.
In 1914, Werner first resolved the coordination complex called hexol into optical isomers, overthrowing the theory that only carbon compounds could possess chirality. The ions or molecules surrounding the central atom are called ligands. Ligands are bound to the central atom by a coordinate covalent bond and are said to be coordinated to the atom. There are also organic ligands such as alkenes whose pi bonds can coordinate to empty metal orbitals. An example is ethene in the complex known as Zeise's salt, K+[PtCl3(C2H4)]−. In coordination chemistry, a structure is first described by its coordination number, the number of ligands attached to the metal. One can count the ligands attached, but sometimes the counting can become ambiguous. Coordination numbers are normally between two and nine, but large numbers of ligands are not uncommon for the lanthanides and actinides; the number of bonds
Organic chemistry is a subdiscipline of chemistry that studies the structure and reactions of organic compounds, which contain carbon in covalent bonding. Study of structure determines their chemical formula. Study of properties includes physical and chemical properties, and evaluation of chemical reactivity to understand their behavior. The study of organic reactions includes the chemical synthesis of natural products and polymers, and study of individual organic molecules in the laboratory and via theoretical study. The range of chemicals studied in organic chemistry includes hydrocarbons as well as compounds based on carbon but containing other elements, such as oxygen, sulfur and the halogens. Organometallic chemistry is the study of compounds containing carbon–metal bonds. In addition, contemporary research focuses on organic chemistry involving other organometallics, including the lanthanides, but especially the transition metals zinc, palladium, cobalt and chromium. Organic compounds constitute the majority of known chemicals.
The bonding patterns of carbon, with its valence of four—formal single, double, and triple bonds, plus structures with delocalized electrons—make the array of organic compounds structurally diverse, and their range of applications enormous. They form the basis of, or are constituents of, many commercial products including pharmaceuticals; the study of organic chemistry overlaps organometallic chemistry and biochemistry, but also medicinal chemistry, polymer chemistry, and materials science. Before the nineteenth century, chemists believed that compounds obtained from living organisms were endowed with a vital force that distinguished them from inorganic compounds. According to the concept of vitalism, organic matter was endowed with a "vital force". During the first half of the nineteenth century, some of the first systematic studies of organic compounds were reported. Around 1816 Michel Chevreul started a study of soaps made from various fats and alkalis; he separated the different acids. Since these were all individual compounds, he demonstrated that it was possible to make a chemical change in various fats, producing new compounds, without "vital force".
In 1828 Friedrich Wöhler produced the organic chemical urea, a constituent of urine, from inorganic starting materials, in what is now called the Wöhler synthesis. Although Wöhler himself was cautious about claiming he had disproved vitalism, this was the first time a substance thought to be organic was synthesized in the laboratory without biological starting materials; the event is now accepted as indeed disproving the doctrine of vitalism. In 1856 William Henry Perkin, while trying to manufacture quinine accidentally produced the organic dye now known as Perkin's mauve, his discovery, made known through its financial success increased interest in organic chemistry. A crucial breakthrough for organic chemistry was the concept of chemical structure, developed independently in 1858 by both Friedrich August Kekulé and Archibald Scott Couper. Both researchers suggested that tetravalent carbon atoms could link to each other to form a carbon lattice, that the detailed patterns of atomic bonding could be discerned by skillful interpretations of appropriate chemical reactions.
The era of the pharmaceutical industry began in the last decade of the 19th century when the manufacturing of acetylsalicylic acid—more commonly referred to as aspirin—in Germany was started by Bayer. By 1910 Paul Ehrlich and his laboratory group began developing arsenic-based arsphenamine as the first effective medicinal treatment of syphilis, thereby initiating the medical practice of chemotherapy. Ehrlich popularized the concepts of "magic bullet" drugs and of systematically improving drug therapies; his laboratory made decisive contributions to developing antiserum for diphtheria and standardizing therapeutic serums. Early examples of organic reactions and applications were found because of a combination of luck and preparation for unexpected observations; the latter half of the 19th century, however, witnessed systematic studies of organic compounds. The development of synthetic indigo is illustrative: the production of indigo from plant sources dropped from 19,000 tons in 1897 to 1,000 tons by 1914 thanks to the synthetic methods developed by Adolf von Baeyer.
In 2002, 17,000 tons of synthetic indigo were produced from petrochemicals. In the early part of the 20th century, polymers and enzymes were shown to be large organic molecules, and petroleum was shown to be of biological origin; the multiple-step synthesis of complex organic compounds is called total synthesis. Total synthesis of complex natural compounds increased in complexity, to targets such as terpineol. For example, cholesterol-related compounds have opened ways to synthesize complex human hormones and their modified derivatives. Since the start of the 20th century, the complexity of total syntheses has been increased to include molecules of high complexity such as lysergic acid and vitamin B12; the discovery of petroleum and the development of the petrochemical industry spurred the development of organic chemistry. Converting individual petroleum compounds into different types of compounds by various chemical processes led to organic reactions enabling a broad range of
A molecule is an electrically neutral group of two or more atoms held together by chemical bonds. Molecules are distinguished from ions by their lack of electrical charge. However, in quantum physics, organic chemistry, and biochemistry, the term molecule is used less strictly, also being applied to polyatomic ions. In the kinetic theory of gases, the term molecule is used for any gaseous particle regardless of its composition. According to this definition, noble gas atoms are considered molecules as they are monatomic molecules. A molecule may be homonuclear, that is, it may consist of atoms of one chemical element, as with oxygen (O2). Atoms and complexes connected by non-covalent interactions, such as hydrogen bonds or ionic bonds, are not considered single molecules. Molecules as components of matter are common in organic substances; they make up most of the oceans and atmosphere. However, the majority of familiar solid substances on Earth, including most of the minerals that make up the crust and core of the Earth, contain many chemical bonds but are not made of identifiable molecules.
No typical molecule can be defined for ionic crystals and covalent crystals, although these are composed of repeating unit cells that extend either in a plane or three-dimensionally. The theme of repeated unit-cellular-structure holds for most condensed phases with metallic bonding, which means that solid metals are not made of molecules. In glasses, atoms may be held together by chemical bonds with no presence of any definable molecule, nor any of the regularity of repeating units that characterizes crystals; the science of molecules is called molecular chemistry or molecular physics, depending on whether the focus is on chemistry or physics. Molecular chemistry deals with the laws governing the interaction between molecules that results in the formation and breakage of chemical bonds, while molecular physics deals with the laws governing their structure and properties. In practice, this distinction is vague. In molecular sciences, a molecule consists of a stable system composed of two or more atoms.
Polyatomic ions may sometimes be usefully thought of as electrically charged molecules. The term unstable molecule is used for reactive species, i.e. short-lived assemblies of electrons and nuclei, such as radicals, molecular ions, Rydberg molecules, transition states, van der Waals complexes, or systems of colliding atoms as in a Bose–Einstein condensate. According to Merriam-Webster and the Online Etymology Dictionary, the word "molecule" derives from the Latin "moles", or small unit of mass. Molecule – "extremely minute particle", from French molécule, from New Latin molecula, diminutive of Latin moles "mass, barrier". The meaning was vague at first, and the definition of the molecule has evolved. Earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties; this definition breaks down since many substances in ordinary experience, such as rocks and metals, are composed of large crystalline networks of chemically bonded atoms or ions but are not made of discrete molecules.
Molecules are held together by either covalent bonding or ionic bonding. Several types of non-metal elements exist only as molecules in the environment. For example, hydrogen only exists as the hydrogen molecule. A molecule of a compound is made out of two or more elements. A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms; these electron pairs are termed shared pairs or bonding pairs, and the stable balance of attractive and repulsive forces between atoms, when they share electrons, is termed covalent bonding. Ionic bonding is a type of chemical bond that involves the electrostatic attraction between oppositely charged ions, and is the primary interaction occurring in ionic compounds; the ions are atoms that have lost one or more electrons and atoms that have gained one or more electrons. This transfer of electrons is termed electrovalence, in contrast to covalence. In the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be of a more complicated nature, e.g. molecular ions like NH4+ or SO42−. An ionic bond involves the transfer of electrons from a metal to a non-metal so that both atoms obtain a full valence shell.
Most molecules are far too small to be seen with the naked eye. DNA, a macromolecule, can reach macroscopic sizes, as can molecules of many polymers. Molecules used as building blocks for organic synthesis have a dimension of a few angstroms (Å) to several dozen Å, or around one billionth of a meter. Single molecules cannot usually be observed by light, but small molecules and the outlines of individual atoms may be traced in some circumstances by use of an atomic force microscope; some of the largest molecules are supermolecules. The smallest molecule is diatomic hydrogen, with a bond length of 0.74 Å. Effective molecular radius is the size a molecule displays in solution; the table of permselectivity for different substances contains examples. The chemical formula for a molecule uses one line of chemical element symbols and sometimes al
A hydrogen bond is an electrostatic force of attraction between a hydrogen atom, covalently bound to a more electronegative atom or group, such as the second-row elements nitrogen, oxygen, or fluorine—the hydrogen bond donor—and another electronegative atom bearing a lone pair of electrons—the hydrogen bond acceptor. Such an interacting system is denoted Dn–H···Ac, where the solid line denotes a covalent bond and the dotted line indicates the hydrogen bond. There is general agreement that there is a minor covalent component to hydrogen bonding for moderate to strong hydrogen bonds, although the importance of covalency in hydrogen bonding is debated. At the opposite end of the scale, there is no clear boundary between a weak hydrogen bond and a van der Waals interaction. Weaker hydrogen bonds are known for hydrogen atoms bound to elements such as chlorine; the hydrogen bond is responsible for many of the anomalous physical and chemical properties of compounds of N, O, and F. Hydrogen bonds can be intermolecular or intramolecular.
Depending on the nature of the donor and acceptor atoms which constitute the bond, their geometry, environment, the energy of a hydrogen bond can vary between 1 and 40 kcal/mol. This makes them somewhat stronger than a van der Waals interaction, weaker than covalent or ionic bonds; this type of bond can occur in inorganic molecules such as water and in organic molecules like DNA and proteins. Intermolecular hydrogen bonding is responsible for the high boiling point of water compared to the other group 16 hydrides that have much weaker hydrogen bonds. Intramolecular hydrogen bonding is responsible for the secondary and tertiary structures of proteins and nucleic acids, it plays an important role in the structure of polymers, both synthetic and natural. It was recognized that there are many examples of weaker hydrogen bonding involving donor Dn other than N, O, or F and/or acceptor Ac with close to or the same electronegativity as hydrogen. Though they are quite weak, they are ubiquitous and are recognized as important control elements in receptor-ligand interactions in medicinal chemistry or intra-/intermolecular interactions in materials sciences.
Thus, there is a trend of gradual broadening of the definition of hydrogen bonding. In 2011, an IUPAC Task Group recommended a modern evidence-based definition of hydrogen bonding, published in the IUPAC journal Pure and Applied Chemistry; this definition specifies: The hydrogen bond is an attractive interaction between a hydrogen atom from a molecule or a molecular fragment X–H in which X is more electronegative than H, and an atom or a group of atoms in the same or a different molecule, in which there is evidence of bond formation. Most introductory textbooks still restrict the definition of hydrogen bond to the "classical" type of hydrogen bond characterized in the opening paragraph. A hydrogen atom attached to a relatively electronegative atom is the hydrogen bond donor. C−H bonds only participate in hydrogen bonding when the carbon atom is bound to electronegative substituents, as is the case in chloroform, CHCl3. In a hydrogen bond, the electronegative atom not covalently attached to the hydrogen is named the proton acceptor, whereas the one covalently bound to the hydrogen is named the proton donor.
In the donor molecule, the H center is protic; the acceptor, bearing the lone pair, acts as a Lewis base. Hydrogen bonds are represented as a Dn–H···Ac system. Liquids that display hydrogen bonding are called associated liquids; the hydrogen bond is often described as an electrostatic dipole–dipole interaction. However, it has some features of covalent bonding: it is directional and strong, produces interatomic distances shorter than the sum of the van der Waals radii, and involves a limited number of interaction partners, which can be interpreted as a type of valence; these covalent features are more substantial when acceptors bind hydrogens from more electronegative donors. Hydrogen bonds can vary in strength from weak to strong. Typical enthalpies in vapor include: F−H···:F (illustrated uniquely by HF2−, bifluoride); O−H···:N (illustrated by water–ammonia); O−H···:O (illustrated by water–water and alcohol–alcohol); N−H···:N (illustrated by ammonia–ammonia); N−H···:O (illustrated by water–amide); and HO−H···:OH3+ (water–hydronium). The strength of intermolecular hydrogen bonds is most often evaluated by measurements of equilibria between molecules containing donor and/or acceptor units, most often in solution.
The strength of intramolecular hydrogen bonds can be studied with equilibria between conformers with and without hydrogen bonds. The most important method for the identification of hydrogen bonds in complicated molecules is crystallography, and sometimes NMR spectroscopy. Structural details, in particular distances between donor and acceptor which are smaller than the sum of the van der Waals radii, can be taken as indication of the hydrogen bond strength. One scheme gives the following somewhat arbitrary classification: those that are 15 to 40 kcal/mol, 5 to 15 kcal/mol, and >0 to 5 kcal/mol are considered strong, moder
A drug is any substance that, when inhaled, smoked, absorbed via a patch on the skin, or dissolved under the tongue, causes a physiological change in the body. In pharmacology, a drug is a chemical substance of known structure, other than a nutrient or an essential dietary ingredient, which, when administered to a living organism, produces a biological effect. A pharmaceutical drug, also called a medication or medicine, is a chemical substance used to treat, prevent, or diagnose a disease or to promote well-being. Traditionally drugs were obtained through extraction from medicinal plants, but more recently also by organic synthesis. Pharmaceutical drugs may be used for a limited duration, or on a regular basis for chronic disorders. Pharmaceutical drugs are classified into drug classes—groups of related drugs that have similar chemical structures, the same mechanism of action, a related mode of action, and that are used to treat the same disease; the Anatomical Therapeutic Chemical Classification System, the most widely used drug classification system, assigns each drug a unique ATC code, an alphanumeric code that assigns it to specific drug classes within the ATC system.
Another major classification system is the Biopharmaceutics Classification System. This classifies drugs according to their permeability or absorption properties. Psychoactive drugs are chemical substances that affect the function of the central nervous system, altering perception, mood or consciousness; they include alcohol, a depressant, and the stimulants nicotine and caffeine. These three are the most widely consumed psychoactive drugs worldwide and are considered recreational drugs since they are used for pleasure rather than medicinal purposes. Other recreational drugs include hallucinogens and amphetamines, and some of these are used in spiritual or religious settings; some drugs can cause addiction and all drugs can have side effects. Excessive use of stimulants can promote stimulant psychosis. Many recreational drugs are illicit, and international treaties such as the Single Convention on Narcotic Drugs exist for the purpose of their prohibition. In English, the noun "drug" is thought to originate from Old French "drogue", possibly deriving from "droge-vate" in Middle Dutch, meaning "dry barrels", referring to medicinal plants preserved in them.
The transitive verb "to drug" arose later and invokes the psychoactive rather than medicinal properties of a substance. A medication or medicine is a drug taken to cure or ameliorate any symptoms of an illness or medical condition; the use may be as preventive medicine that has future benefits but does not treat any existing or pre-existing diseases or symptoms. Dispensing of medication is regulated by governments into three categories—over-the-counter medications, which are available in pharmacies and supermarkets without special restrictions; behind-the-counter medicines, dispensed by a pharmacist without needing a doctor's prescription; and prescription-only medicines. In the United Kingdom, behind-the-counter medicines are called pharmacy medicines, which can only be sold in registered pharmacies by or under the supervision of a pharmacist; these medications are designated by the letter P on the label. The range of medicines available without a prescription varies from country to country. Medications are produced by pharmaceutical companies and are often patented to give the developer exclusive rights to produce them; those that are not patented are called generic drugs since they can be produced by other companies without restrictions or licenses from the patent holder.
Pharmaceutical drugs are categorised into drug classes. A group of drugs will share a similar chemical structure, or have the same mechanism of action, the same related mode of action, or target the same illness or related illnesses; the Anatomical Therapeutic Chemical Classification System, the most widely used drug classification system, assigns each drug a unique ATC code, an alphanumeric code that assigns it to specific drug classes within the ATC system. Another major classification system is the Biopharmaceutics Classification System; this groups drugs according to their permeability or absorption properties. Some religions, particularly ethnic religions, are based on the use of certain drugs, known as entheogens, which are mostly hallucinogens—psychedelics, dissociatives, or deliriants. Some drugs used as entheogens include kava, which can act as a stimulant, a sedative, a euphoriant and an anesthetic. The roots of the kava plant are used to produce a drink, consumed throughout the cultures of the Pacific Ocean. Some shamans from different cultures use entheogens, defined as "generating the divine within", to achieve religious ecstasy.
Amazonian shamans use ayahuasca, a hallucinogenic brew, for this purpose. Mazatec shamans have a long and continuous tradition of religious use of Salvia divinorum, a psychoactive plant; its use is to facilitate visionary states of consciousness during spiritual healing sessions. Silene undulata is used as an entheogen; its root is traditionally used to induce vivid lucid dreams during the initiation process of shamans, classifying it as a naturally occurring oneirogen similar to the more well-known dream herb Calea ternifolia. Peyote, a small spineless cactus, has been a
A clathrate is a chemical substance consisting of a lattice that traps or contains molecules. The word clathrate is derived from the Latin clatratus, meaning with a lattice. Traditionally, clathrate compounds are polymeric and envelop the guest molecule, but in modern usage clathrates include host–guest complexes and inclusion compounds. According to IUPAC, clathrates are "Inclusion compounds in which the guest molecule is in a cage formed by the host molecule or by a lattice of host molecules." Traditionally clathrate compounds refer to polymeric hosts containing molecular guests. More recently, the term refers to many molecular hosts, including calixarenes and cyclodextrins and some inorganic polymers such as zeolites; the natural silica clathrate mineral chibaite was described from Japan. Many clathrates are derived from organic hydrogen-bonded frameworks; these frameworks are prepared from molecules that "self-associate" by multiple hydrogen-bonding interactions. The most famous clathrates are methane clathrates, where the hydrogen-bonded framework is contributed by water and the guest molecules are methane.
Large amounts of methane frozen in this form exist both in permafrost formations and under the ocean sea-bed. Other hydrogen-bonded networks are derived from hydroquinone and thiourea. A much studied host molecule is Dianin's compound. Hofmann compounds are coordination polymers with the formula Ni(CN)4·Ni(NH3)2; these materials crystallize with small aromatic guests, and this selectivity has been exploited commercially for the separation of these hydrocarbons. Metal organic frameworks also form clathrates. Photolytically-sensitive caged compounds have been examined as containers for releasing a drug or reagent. Clathrate hydrates were discovered in 1810 by Humphry Davy. Clathrates were studied by P. Pfeiffer in 1927, and in 1930 E. Hertel defined "molecular compounds" as substances decomposed into individual components following the mass action law in solution or gas state. In 1945, H. M. Powell named them clathrates. Inclusion compounds are molecular, whereas clathrates are polymeric. Intercalation compounds are not 3-dimensional, unlike clathrate compounds.
International Union of Pure and Applied Chemistry
The International Union of Pure and Applied Chemistry is an international federation of National Adhering Organizations that represents chemists in individual countries. It is a member of the International Council for Science. IUPAC is registered in Zürich and the administrative office, known as the "IUPAC Secretariat", is in Research Triangle Park, North Carolina, United States; this administrative office is headed by IUPAC's executive director Lynn Soby. IUPAC was established in 1919 as the successor of the International Congress of Applied Chemistry for the advancement of chemistry, its members, the National Adhering Organizations, can be national chemistry societies, national academies of sciences, or other bodies representing chemists. There are fifty-four National Adhering Organizations and three Associate National Adhering Organizations. IUPAC's Inter-divisional Committee on Nomenclature and Symbols is the recognized world authority in developing standards for the naming of the chemical elements and compounds.
Since its creation, IUPAC has been run by many different committees with different responsibilities. These committees run different projects which include standardizing nomenclature, finding ways to bring chemistry to the world, publishing works. IUPAC is best known for its works standardizing nomenclature in chemistry and other fields of science, but IUPAC has publications in many fields including chemistry and physics; some important work IUPAC has done in these fields includes standardizing nucleotide base sequence code names. IUPAC is known for standardizing the atomic weights of the elements through one of its oldest standing committees, the Commission on Isotopic Abundances and Atomic Weights; the need for an international standard for chemistry was first addressed in 1860 by a committee headed by German scientist Friedrich August Kekulé von Stradonitz. This committee was the first international conference to create an international naming system for organic compounds; the ideas that were formulated in that conference evolved into the official IUPAC nomenclature of organic chemistry.
IUPAC stands as a legacy of this meeting, making it one of the most important historical international collaborations of chemistry societies. Since this time, IUPAC has been the official organization held with the responsibility of updating and maintaining official organic nomenclature. IUPAC as such was established in 1919. One notable country excluded from this early IUPAC was Germany. Germany's exclusion was a result of prejudice towards Germans by the Allied powers after World War I. Germany was admitted into IUPAC during 1929. However, Nazi Germany was removed from IUPAC during World War II. During World War II, IUPAC was affiliated with the Allied powers but had little involvement during the war effort itself. After the war, East and West Germany were readmitted to IUPAC. Since World War II, IUPAC has been focused on standardizing nomenclature and methods in science without interruption. In 2016, IUPAC denounced the use of chlorine as a chemical weapon; the organization pointed out its concerns in a letter to Ahmet Üzümcü, the director of the Organisation for the Prohibition of Chemical Weapons, in regards to the practice of utilizing chlorine for weapon usage in Syria, among other locations.
The letter stated, "Our organizations deplore the use of chlorine in this manner. The indiscriminate attacks carried out by a member state of the Chemical Weapons Convention, is of concern to chemical scientists and engineers around the globe and we stand ready to support your mission of implementing the CWC." According to the CWC, "the use, distribution, development or storage of any chemical weapons is forbidden by any of the 192 state party signatories." IUPAC is governed by several committees. The committees are as follows: Bureau, CHEMRAWN Committee, Committee on Chemistry Education, Committee on Chemistry and Industry, Committee on Printed and Electronic Publications, Evaluation Committee, Executive Committee, Finance Committee, Interdivisional Committee on Terminology and Symbols, Project Committee, Pure and Applied Chemistry Editorial Advisory Board; each committee is made up of members of different National Adhering Organizations from different countries. The steering committee hierarchy for IUPAC is as follows: All committees have an allotted budget to which they must adhere.
Any committee may start a project. If a project's spending becomes too much for a committee to continue funding, it must take the issue to the Project Committee; the Project Committee then either grants further funding or decides on an external funding plan. The Bureau and Executive Committee oversee operations of the other committees. The IUPAC committee has a long history of naming organic and inorganic compounds. IUPAC nomenclature is developed so that any compound can be named under one set of standardized rules to avoid duplicate names; the first publication on IUPAC nomenclature of organic compounds was A Guide to IUPAC Nomenclature of Organic Compounds in 1900, which contained information from the International Congress of Applied Chemistry. IUPAC organic nomenclature has three basic parts: the substituents, the carbon chain length, and the chemical ending; the substituents are any functional groups attached to the main carbon chain. The main carbon chain is the longest possible continuous chain; the chemical ending denotes what type of compound it is. For example, the ending ane denotes a single bonded carbon chain, as in "hexane".
Another example of IUPAC organic no |
SQL Server FORMAT Function
By: Daniel Calbimonte
The FORMAT function is used to provide various output formats for values like numbers, dates, time, money.
FORMAT(expression, formatPattern, [Culture])
- expression - This is the value, number or expression that we want to set in a different format.
- formatPattern - This is the format that is going to be used by the expression or value.
- Culture - This is related to the format conventions of each country and language. Microsoft stores information about the date, number and currency formats used by different countries and languages, which can be selected with this parameter. The Culture specifies the language and the country. For example, the culture for English from the USA is en-us, French from France is fr-fr, and French from Belgium is fr-be. For a complete list, refer to Microsoft's documentation of supported cultures.
Simple FORMAT Example
Below is a simple example of using FORMAT to display numbers with 3 decimals.
SELECT FORMAT(345,'###.000') as format
FORMAT Example with Decimals, Scientific Notation and Hexadecimal
The next example shows how to work with different number of decimals, scientific notation and hexadecimal.
SELECT FORMAT(200,'N1', 'en-US') onedecimal,         -- 1 decimal
       FORMAT(200,'N2', 'en-US') twodecimals,        -- 2 decimals
       FORMAT(200,'E2', 'en-US') scientificNotation, -- scientific notation
       FORMAT(200,'X', 'en-US') hexadecimal          -- hexadecimal
FORMAT Numeric Values Using Cultural Parameter
In the USA the decimal point is used, while in other cultures, such as Belgium, the comma is used for decimals. Here is an example.
SELECT FORMAT(200,'N2', 'en-US') [English US],
       FORMAT(200,'N2', 'fr-be') [French Belgium]
FORMAT Currency with Cultural Parameter
In SQL Server, it is possible to work with currencies of different countries. The following example shows the currencies used in USA, France and Russia.
SELECT FORMAT(200,'C', 'en-US') [English US],
       FORMAT(200,'C', 'fr-fr') [French France],
       FORMAT(200,'C', 'ru-ru') [Russian Russia]
Using FORMAT for Custom Date Formats
The following example shows how to set a custom format for the current date. yyyy is for the year, MM for the month, dd for the day, hh for the hour (12-hour clock; use HH for the 24-hour clock), mm for minutes, ss for seconds and tt for AM/PM.
SELECT FORMAT(GETDATE(),'yyyy-MM-dd hh:mm:ss tt') as format
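The same pattern letters can be combined in other ways. As a small sketch beyond the original examples (the column aliases here are just illustrative), dddd and MMMM spell out the day and month names, and HH gives the 24-hour clock:

```sql
SELECT FORMAT(GETDATE(),'dddd, MMMM dd, yyyy') as longdate, -- e.g. Monday, January 05, 2015
       FORMAT(GETDATE(),'HH:mm:ss') as time24               -- 24-hour clock
```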
FORMAT Dates with Cultural Parameter
The next example will show the date format in USA, France and Russia.
SELECT FORMAT(GETDATE(),'d', 'en-US') [English US],
       FORMAT(GETDATE(),'d', 'fr-fr') [French France],
       FORMAT(GETDATE(),'d', 'ru-ru') [Russian Russia]
FORMAT Dates from Table Column
Here is an example of how this could be used by formatting data from a column in a table.
SELECT FORMAT(OrderDate,'d', 'en-US') [OrderDate English US],
       FORMAT(TotalDue,'C', 'en-US') [TotalDue English US]
FROM [Sales].[SalesOrderHeader]
Here is a sample of the output.
Note: The FORMAT function relies on the Common Language Runtime (CLR), and performance comparisons against other approaches (the CONVERT function, the CAST function, etc.) have shown noticeable differences, with FORMAT being much slower.
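For comparison, a similar ISO-style date output can be produced with the faster CONVERT function; this sketch uses the documented style codes 23 (yyyy-MM-dd) and 120 (yyyy-MM-dd hh:mi:ss):

```sql
SELECT CONVERT(varchar(10), GETDATE(), 23)  as iso_date,    -- yyyy-MM-dd
       CONVERT(varchar(19), GETDATE(), 120) as iso_datetime -- yyyy-MM-dd hh:mi:ss (24h)
```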
- Format SQL Server Dates with FORMAT Function
- Date and Time Conversions Using SQL Server
- Format numbers in SQL Server |
It was approximately 86 years ago that an astronomer named Fritz Zwicky observed that the universe seemed to have missing mass. Something had to be in the place of the missing mass, spread between cosmic bodies and supplying the powerful forces that lock them in position. The elusive substance is known today as dark matter.
Dark matter remains an important topic for a large number of scientists, and it continues to pose interesting dilemmas. It is thought that up to 80% of the matter in the universe is dark matter, but no one has managed to spot it during lab tests.
The presence of dark matter is signaled by the powerful gravitational pull that it generates. Some researchers argue that a new hypothetical particle that is known under the name of X17 could be used to uncover more data about dark matter.
X17 was discovered in 2015 by a team of Hungarian researchers. The team argued that the detection of the new particle is an important step forward and that it could explain how dark matter particles interact with each other.
The discovery was quite controversial, but the team continued the research and published a new paper that contains surprising details about the particle. It is thought that X17 is a subatomic particle that can carry energy and force, with a mass of 17 million electron volts (hence the name).
At this point, four fundamental forces are known: gravitational, electromagnetic, strong and weak; these direct how particles interact with each other. X17 could force physicists to rewrite the Standard Model, which does not take into account the interaction between particles of dark matter.
Many researchers have started to use devices like the Large Underground Xenon Detector, the Large Hadron Collider, or the DAMA/LIBRA experiment in an attempt to track down dark matter particles.
It remains to be seen if X17 is real and useful for future research.
Concept: The question tests on basic Coordinate Geometry
The equation of the line is given by y = mx + c, where m is the slope and c is the y intercept.
(1) Since the line is moving downwards, the slope is negative. This rules out options A and C.
(2) The line cuts the negative y axis, therefore the y intercept has to be negative. This rules out B.
Now looking at the figure carefully, we see that the x intercept is a little bit more than the y intercept in terms of the distance from the origin, or they are almost equal.
From Option D, putting x = 0, the y intercept = -2/3 = -0.67.
Putting y = 0, the x intercept = -2/9 = -0.22
The difference is too large, so check Option E.
From Option E, putting x = 0, the y intercept = -3.
Putting y = 0, the x intercept = -9/2 = -4.5. These intercepts are almost equal in distance from the origin, matching the figure, so Option E is the answer.
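The intercept check above can be scripted. The answer options are not quoted here, so the equations below are inferred from the intercepts worked out in the explanation (Option D: y = -3x - 2/3, Option E: y = -(2/3)x - 3):

```python
# Equations inferred from the worked intercepts above (hypothetical options):
# Option D: y = -3x - 2/3    Option E: y = -(2/3)x - 3
def intercepts(m, c):
    """Return (x_intercept, y_intercept) of the line y = m*x + c."""
    return (-c / m, c)

x_d, y_d = intercepts(-3, -2/3)    # Option D
x_e, y_e = intercepts(-2/3, -3)    # Option E

print(round(x_d, 2), round(y_d, 2))   # -0.22 -0.67
print(round(x_e, 2), round(y_e, 2))   # -4.5 -3
```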
The inside of a star is a noisy place to be, as many stars hum songs to themselves. We can’t hear these songs directly because the sound waves cannot escape the star, but they do create a visible effect on the surface.
Sound waves continuously bouncing around inside a star cause it to swell and contract, and these movements cause changes in the temperature at the surface, which can be detected as variations in the brightness of a star.
All stars have a pattern of brightness that changes over time, known as a light curve, but if there are numerous sound waves at once this pattern can become crowded and difficult to analyse.
Read more about stars:
A mathematical technique known as a Fourier transform can be used to pluck the individual frequencies out of the light curve.
By measuring characteristics such as the spacing between frequencies, it is possible to learn a lot about the star such as its mass, size, and age.
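A minimal sketch of the idea in Python, using a toy light curve with two injected oscillation modes (frequencies chosen purely for illustration, not real Kepler data) and a plain discrete Fourier transform to pluck them back out:

```python
import cmath
import math

# A toy light curve: two oscillation modes at 3 and 7 cycles per window.
n = 256
light_curve = [math.sin(2 * math.pi * 3 * t / n)
               + 0.5 * math.sin(2 * math.pi * 7 * t / n)
               for t in range(n)]

def dft_amplitude(samples, k):
    """Amplitude of the mode at integer frequency k (discrete Fourier transform)."""
    m = len(samples)
    s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / m) for t in range(m))
    return abs(s) * 2 / m

# The two injected frequencies stand out as peaks in the spectrum.
peaks = [k for k in range(1, n // 2) if dft_amplitude(light_curve, k) > 0.1]
print(peaks)  # [3, 7]
```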
The Kepler Space Telescope stared at the same patch of sky for four years, which is crucial for detecting some of the ‘quieter’ sound waves.
While the primary aim of the Kepler mission was to search for exoplanets, the lengthy observations needed to find planets are also perfect for picking out the subtle frequencies in hundreds of pulsating Sun-like stars.
“This is opening the possibility to perform detailed studies of the evolution and internal structure of stars like the Sun”, says Bill Chaplin of the University of Birmingham.
These pulsations are much easier to detect in red giant stars than those less advanced in the stellar life cycle because the periods of the sound waves in red giants are much longer.
Younger stars have periods of only a few minutes, whereas a sound wave in a red giant will take hours to oscillate back and forth.
The interiors of red giants change dramatically as they evolve further. Having exhausted their supply of hydrogen in the core, red giants burn hydrogen in a shell around a dead core.
The core eventually reignites, this time burning helium. In both cases the red giant looks the same on the surface – but Kepler showed that the asteroseismic signatures are very different.
Distinguishing the two populations of red giants is a leap forward in our understanding of stellar evolution.
Delta Scuti variables and Gamma Doradus variables are types of main sequence star that have distinctive pulsation periods.
Both classes are hotter than the Sun, and typically Delta Scuti stars have higher temperatures than Gamma Doradus stars.
The different temperature ranges mean that these stars sit within their own region on the Hertzsprung-Russell diagram (see below), which is widely used to compare the temperature and brightness of stars in order to track their evolution.
Theory predicts that there should be a handful of ‘hybrid’ stars in the overlap area between the Delta Scuti and Gamma Doradus classes, and that these hybrids will show both types of pulsations.
It therefore came as a surprise when Kepler revealed hundreds of hybrid stars that were littered across both the Delta Scuti and Gamma Doradus regions of the Hertzsprung-Russell diagram.
In addition, some stars that should show pulsations remain mysteriously quiet.
Another curious discovery made by Kepler was unusual double periods in RR Lyrae variables.
These typically have pulsation periods of around half a day, but some of them also exhibit a longer period in which the overall shape of the light curve changes on timescales of tens to hundreds of days, known as the Blazhko effect.
Kepler data revealed that the sound waves in some of these stars can vary on short timescales or on long timescales, and alternate between the two, an effect known as ‘period doubling’.
Period doubling is known to occur in other types of variable stars, but was never seen in RR Lyrae stars prior to Kepler.
Strangely, period doubling only occurs in stars that also exhibit the Blazhko effect, indicating a connection between the two types of periods.
The Kepler mission played a vital role in these discoveries and the field of asteroseismology, but in 2013 two of the telescope’s reaction wheels failed and it looked like the mission was at an end.
However, it was rebooted as K2, which observed different fields for around 80 days each. Asteroseismology needs lengthy observations for the utmost precision. The K2 Galactic Archaeology Programme surveyed red giants over a large proportion of the Galaxy.
“In essence what we do is to use the stars as probes of the Galaxy’s structure and we use the stellar ages as the clock to obtain a picture of how the Milky Way evolved over its 13 billion-year history,” explains Dennis Stello of the University of Sydney, who analyses K2 data as part of his work in the university’s Stellar Oscillations Group.
Stars like our Sun can be studied, and the necessity of observing brighter stars means that information will be available on those stars from other ground-based observational methods, allowing for better characterisation.
Spinning faster on the inside
Main sequence stars have two different types of sound waves – those that reverberate in the outer layers of the star, and those that are restricted to the core of the star.
Analysing core sound waves would reveal the innermost workings of a star, but it is currently impossible to detect these waves in stars like our Sun.
As a main sequence star evolves into a red giant, the changing density in the core allows the sound waves to drift upwards and interact with the waves in the outer layers, making them visible to astronomers.
Normally, a sound wave that has been extracted from the light curve will show a single peak at a certain frequency.
However, if the star is rotating then this single peak can get split into several peaks. By measuring the splitting of the frequency of the wave, it is possible to measure how fast the star is spinning.
With the Kepler red giant data, rotation splitting can be measured for both the core and for the surface.
Comparing the two data sets has shown that some red giant stars rotate much faster on the inside.
Stars like our Sun
Studying oscillations in our Sun, known as helioseismology, began in 1962, and having decades of data on the Sun has revealed much about our star.
For example, some pulsation frequencies vary with the 11-year activity cycle.
Helioseismology has paved the way for asteroseismology. While the disc of the Sun can be resolved, observing the Sun as if it were a distant point source has ensured that the knowledge gained through helioseismology can be exploited when looking at other Sun-like stars.
The Kepler mission measured oscillations in over 500 Sun-like stars, deciphering parameters such as mass, radius and age.
Before Kepler, only around 20 stars had measured pulsations. The stellar parameters from asteroseismology are generally measured to unprecedented precision, which is of particular importance for stars that host exoplanets, and thus tie in nicely with the main objective of the Kepler mission.
Without knowing the details of the host star, it is impossible to pinpoint the planet’s properties. For instance, knowing the mass and radius of the planet will help to reveal if it is comprised of dense iron or porous rock.
Dr Amanda Doyle is an astrophysicist based at the University of Warwick.
Beacon Lesson Plan Library
Pi Day (March 14)
Santa Rosa District Schools
Students will determine the value of PI by measuring the circumference and diameter of circular objects such as soup cans, Oreo cookies, etc.
The student selects and uses strategies to understand words and text, and to make and confirm inferences from what is read, including interpreting diagrams, graphs, and statistical illustrations.
The student locates, gathers, analyzes, and evaluates written information for a variety of purposes, including research projects, real-world tasks, and self-improvement.
Understands that numbers can be represented in a variety of equivalent forms using integers, fractions, decimals, and percents, scientific notation, exponents, radicals, absolute value, or logarithms.
Adds, subtracts, multiplies, and divides real numbers, including square roots and exponents using appropriate methods of computing (mental mathematics, paper-and-pencil, calculator).
Uses concrete and graphic models to derive formulas for finding perimeter, area, surface area, circumference, and volume of two- and three- dimensional shapes including rectangular solids, cylinders, cones and pyramids.
Selects and uses direct (measured) or indirect (not measured) methods of measurement as appropriate.
-PI worksheet (See attached file.)
-Circular objects: Oreo cookie, soup can, and drinking cup
-Ruler (centimeter or inch)
-Marker (mark string)
1. Run off a PI worksheet for each student
2. One centimeter or inch ruler for each group
3. Roll of kite string
4. Marker for each group
5. Calculator for each group
6. Three circular objects to measure for each group
This lesson is similar to one done by the math department at Washington-Marion High School in Louisiana on March 14th of every year.
This activity can be used after discussing how to determine Circumference of circular objects or on March 14 of each year (3-14).
1. Break students into groups of two.
2. One student in each group will get required materials (worksheet, circular objects, ruler, string and marker).
One student will be responsible for measuring objects and the other will be responsible for recording results.
3. Students measure the circumference and diameter of each object using the string and record results on the worksheet. (Note: mark the string and then measure the string against the ruler.)
4. Students will then calculate the value of PI by dividing the diameter into the circumference (C/d) and record answer on the worksheet.
5. Compare the calculated value of PI with 3.14 (standard approximated value of PI).
6. Each group will write down some reasons why they think their calculated value of PI may differ from 3.14, (standard approximated value of PI).
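Steps 4 and 5 can be checked with a short script. The measurements below are hypothetical stand-ins for what a group might record in step 3; real groups will use their own values:

```python
# Hypothetical string measurements (in cm) for the three objects in step 3.
measurements = {
    "Oreo cookie":  (14.2, 4.5),   # (circumference, diameter)
    "soup can":     (21.0, 6.6),
    "drinking cup": (25.5, 8.2),
}

for name, (c, d) in measurements.items():
    pi_estimate = c / d              # step 4: PI = C / d
    difference = pi_estimate - 3.14  # step 5: compare with 3.14
    print(f"{name}: PI = {pi_estimate:.3f}, difference = {difference:+.3f}")
```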
The worksheet should contain the following information:
-Name of object measured.
-Measurement of Circumference of object (in correct units).
-Measurement of Diameter of object (in correct units).
-Calculation of PI as a fraction and a decimal.
-Difference of calculated PI and 3.14 (standard approximated value of PI).
-Reasons why calculated value of PI may differ from 3.14 (standard approximated value of PI).
1. This lesson could easily be condensed or expanded as far as the amount and type of objects used.
2. Measurements in different units may also be used. Just remember that larger units of measurement will generally produce a greater difference from the standard approximated value of PI.
When did the Earth actually form? There are a number of ways to approach that question. We can use radioactive dating to look at material that has fallen to Earth after remaining largely undisturbed since the formation of our Solar System, or we can obtain a date for the oldest materials we've found on Earth. But those methods simply provide an upper and lower limit; the Earth formed some time after the smaller material in the Solar System, while the earliest materials on Earth would have been produced some time after its formation.
Now, researchers have provided a new date for the formation of the Earth, based on the last time the planet was entirely molten—an event that was triggered by a collision with a body that ultimately created the Moon. The data is calculated using what we know about the early Solar System combined with the debris that fell to Earth after the big collision.
The early Solar System was a violent place, as small particles aggregated into bodies that then grew larger by collisions. These collisions eventually produced planetesimals the size of large asteroids, which merged to form the current collection of planets. So there was no clear start to what would ultimately become the Earth, but there was a clear end to the primary process of its formation.
That end came when the proto-Earth was smacked by a Mars-sized body. The resulting collision would have left the Earth a magma ocean, blown away its atmosphere and any volatile liquids on its surface, and put enough debris in orbit to form the Moon. In effect, it acted like a reset button for the timing of the Earth's formation, remixing all its components so that the raw material for radioactive dating—the stable maintenance of isotope differences caused by radioactive decay—was eliminated. Everything started afresh.
Figuring out the timing of that collision is important if we want to understand the conditions on the early Earth and in the early Solar System in general. The new study uses a clever way of providing an estimate, based on the fact that the last major collision in the Earth's history didn't mean that the Earth escaped further bombardment from space.
The logic is pretty clever. In the wake of the Moon-forming collision, the entire Earth was molten, which allowed the iron to sink to the core. A number of heavy elements that have an affinity for iron sunk to the core with it. This would have left the surface of the Earth completely stripped of these metals. (These elements, which include gold and platinum, are generically referred to as siderophiles.) But it's possible to mine this material from the crust.
The elements are there because the Earth's supply was partly refreshed by the arrival of new material in collisions with other planetesimals and smaller asteroids. (The most famous asteroid, the one that killed the dinosaurs, was first identified through the presence of a layer of another siderophile, iridium, which it carried to Earth.) If we total how much of these elements are in the crust and compare that number to the composition of asteroids in our Solar System, we can get an estimate of how much mass must have struck the Earth after the Moon-forming collision.
How do you go from that mass to a date? The authors recognized that over time, these bodies were lost through collisions with Earth and the other planets. Thus, early in the Solar System's history, there was an ever-shrinking stock of planetesimals that could have delivered material to Earth. To find out the timing of when the stock was depleted, the researchers relied on models of Solar System formation.
In fact, they relied on two different types of models: one that keeps the giant outer planets in their current locations and a second class in which Jupiter and Saturn engage in what's called a "Grand Tack," moving inward early in the Solar System's history before receding back out to their current locations. The Grand Tack simulations were more likely to produce a set of rocky inner planets that look like our Solar System, but in both cases, the planetesimals got depleted pretty rapidly.
Their vanishing act sets limits on when the Moon-forming collision could have happened and, thus, when the Earth could have formed. Too early, and the Earth would have seen lots of additional collisions and had its crust loaded with the elements that are now rare. Too late, and there wouldn't be enough around to give us any significant quantities. Consequently, the authors calculate that there's only a 0.1 percent chance that the Moon-forming collision took place prior to 40 million years after material started condensing in our Solar System. Instead, they place the likely date at 95 million years, with a margin of error of 30 million years on either side.
The authors suggest that we might want to revisit some of the results that were generated using isotope data—a few of these put the collision at 30 million years, and understanding why they got the number wrong may provide some details of the mechanics of the collision itself.
They also think that the reloading of the Earth with rare metals tells us something about the distribution of mass during the formation of the Solar System, which can constrain our models of planet formation. Right now, those models don't do a very good job of creating many of the tightly packed systems that we're currently discovering, so any improvements to them would be a positive step.
Confederation Congress 1781-1789
CONFEDERATION. The era 1781–1789 takes its name from the Articles of Confederation, the first constitution of the new United States, ratified by the Second Continental Congress on 1 March 1781. This decade has sometimes been described as an era in which America experienced disastrously weak government under an inept Confederation Congress, an unstable economy that brought the nation to the brink of depression, and a society torn by violence and class conflict; in sum, a decade when the new republic threatened to unravel completely.
On the surface things did look bleak. But overt problems notwithstanding, the new nation made great strides in important ways. While national leadership was wanting during the Confederation period, there remained a strong center of political stability in most states. Both within the Confederation Congress and without, a healthy debate continued in the wake of the Revolution between Federalists, who pressed for a strong central government, and Antifederalists, who stressed preservation of individual liberties protected by strong state sovereignty. This political division culminated in the Constitutional Convention of 1787, the elections that followed of the first constitutional government, and the promulgation of the Bill of Rights in the form of the first ten amendments to the Constitution.
The 1780s also saw a rebirth of American merchant trade as the Confederation Congress established diplomatic relations and forged commercial ties with continental Europe and its Caribbean colonies. Agriculture benefited from the start of a dynamic westward expansion into the Ohio Valley, and with passage of the Northwest Ordinance of 1787, the Confederation Congress established the framework for further westward movement through its organization of the Northwest Territory, thus providing the blueprint for systematic transition from territory to statehood down to the present. The ordinance did more: it prohibited slavery in the new territory, which marked the first time any federal action was taken restricting the advance of the "peculiar institution," a vital precedent often invoked in the next century.
Overall, though, this progress was masked by political conflict—not only between Federalists and Antifederalists but between tidewater merchant interests and western agrarians—and by economic instability brought on by the lack of a national currency and the confusion generated by a muddle of state currencies. These problems were mostly a continuation of conflicts dating back to early in the colonial period, problems the Confederation Congress was too weak to cope with.
Political and Social Unrest
The currency mess created by thirteen fully sovereign states working at cross purposes was a problem that symbolized for ordinary people and legislators alike the need to somehow modify and weaken state sovereignty without sacrificing individual liberties in the process. The economic dislocation caused by the absence of federal authority, and the growing rift between large and small states over a host of economic and trade issues, drove the desire to reform the Articles that characterized much of the politics of the decade. This problem played out as well within many states. A tidewater/piedmont (eastern seaboard versus backcountry) schism in many states played powerfully into the economic instability of the era. In New Jersey, North Carolina, Rhode Island, and Massachusetts, for example, violence erupted as paper-money factions (usually debtor farmers and unskilled labor) fought a virtual class war against tidewater merchants, lawyers, and the landowning elite in an attempt to address the crisis that an absence of usable currency created for farmers and wage workers.
Shays's Rebellion, on the western frontier of Massachusetts in the heart of the Berkshire Mountains, was the worst of these confrontations. In 1786 frontier farmers in Stockbridge took the law into their own hands, in what quickly became a symbol across the nation of widespread class-oriented social unrest. The rebels, led by former Continental army captain Daniel Shays, were suppressed by eastern Massachusetts militia driven by well-to-do merchants from the eastern seaboard of the state. This social unrest, repeated elsewhere in America, generated enormous support in the new nation for a revision of the Articles of Confederation. In 1787 a convention initially called only to reform the Articles matured into a full-blown movement to scrap them and start anew in developing a workable government framework for the infant republic.
The debates at the Constitutional Convention of 1787 encapsulated the experience of the Confederation era. It was as if the decade formed a period of trial and error as Americans, divided politically into Federalist and Anti-federalist camps, moved toward a resolution that preserved both the order that a stable nation required to function in a world of nations and the liberty uniquely espoused by the founders, hard won in the Revolutionary War. The Constitution was very much a product of both the conflicts and successes of the Confederation. The Constitution embodied the enduring principles of representative government so central to the ideology and content of the Articles, and it uniformly incorporated all the legislative, diplomatic, and expansionist successes of the 1780s. More than anything else, it accommodated Antifederalist demands that state sovereignty be preserved even as the federal government was imbued with a new sovereign power of its own. The key notion that sovereignty could be divided was a revolutionary republican idea born entirely of the Confederation experience. Fears of executive autocracy and restoration of the monarchy experienced by colonial America were assuaged by severe checks on presidential power. Representative self-government as a basic operating principle was vested in a House of Representatives that looked very much like the old Confederation Congress. Elite fears of mob rule, with Shays's Rebellion and its like elsewhere in the 1780s, were met by the creation of the U.S. Senate as an upper house (building on a colonial model), and power over the military vested in the president. These were accommodations made possible only by the reality of experience endured in the decade beginning at the end of the American Revolution.
These accommodations framed by the Constitution of 1787 were tested in the final chapter of the Confederation era, the ratifying election campaigns in the states in 1788. In these separate polls each state was asked to elect delegates to a ratifying convention that would establish the Constitution drafted the year before as the law of the land. All the issues raised by the experiences of the 1780s, as well as the ideological conflicts between Federalists and Antifederalists, were played out in these ratifying elections, as the Confederation era drew to a close.
The nine states needed to ratify the Constitution were co-opted by the promises made by the victorious Federalist delegates to the ratifying conventions, who promised a Bill of Rights to meet Antifederalist fears of tyrannical authority vested in a strong central government. Critical as were the issues of that decade, tumultuous as were the politics, uneven as the economy turned out to be, the Confederation era of the 1780s stands as the gateway to the permanent establishment of the democratic republic most Americans wanted at the time of the American Revolution.
Borden, Morton. The Antifederalist Papers. East Lansing: Michigan State University Press, 1965.
Jensen, Merrill, and Robert A. Becker. The Documentary History of the First Federal Elections, 1788–1790. Madison: University of Wisconsin Press, 1976–1989.
Kenyon, Cecelia M., ed. The Antifederalists. Boston: Northeastern University Press, 1966.
Land Ordinance of 1785
LAND ORDINANCE OF 1785
The Land Ordinance of 1785 was the second of three land ordinances passed by the Confederation Congress after the Revolutionary War (1775–1783). The three ordinances, which included the Ordinance of 1784 and the Northwest Ordinance (1787), were meant to manage the lands of the Old Northwest, ceded by Great Britain at the end of the Revolution. The Treaty of Paris (1783), which established normal diplomatic relations between England and the former colonies after the Revolution, turned the area that is now the states of Ohio, Indiana, Illinois, Michigan, and Wisconsin over to the new U.S. government. In 1784 a committee led by Thomas Jefferson drew up legislation to provide for future statehood for settlers already in the area. The following year, in the Land Ordinance of 1785, Jefferson's committee established the way in which the territory would be measured and divided for sale.
The new nation was governed for the most part by the states. The relationship between the states and with the central government was defined by the Articles of Confederation. The central government was the Confederation Congress, a holdover from the Second Continental Congress which had been convened in the spring of 1775 and had coordinated the revolutionary war effort. The Articles of Confederation, ratified by the states in 1781, summarized the existing relationship between the Congress and the states.
It was an indication of the distrust with which the American people viewed central authority that the Articles of Confederation did not allow the Congress to tax either the states or individuals. As a way of keeping the nation solvent, the states that claimed western lands from the terms of their colonial charters gave up those lands to the Confederation government. The Confederation government expected to use these lands as a way of meeting governmental expenses. In order to attract land buyers, the Congress declared that these lands would be made into new states, which would enter the Union on an equal basis with the original thirteen colonies. This declaration made possible the creation of the modern United States.
The Land Ordinance created the pattern along which American public land would be divided and sold until the passage of the Homestead Act in 1862. The Ordinance of 1785 ruled that the western lands north of the Ohio River would be divided by surveyors into a square grid. Each square (called a township) measured six by six miles and was subdivided into thirty-six one-mile-square sections. Each section (measuring 640 acres) could then be further divided, usually into half, quarter, eighth, or sixteenth-section lots of 320, 160, 80, or 40 acres. Certain sections had restrictions placed on their sales; for instance, money from the sixteenth section of every township was to be set aside to fund public schools in the township. The first territorial survey took place in what is now southeastern Ohio, and it measured land that stretched westward from Little Beaver Creek to the Tuscarawas River and southward to the Ohio River. A total of about 91 townships were created (although some of them were fractional and did not contain a full 36 sections), with about 3,276 sections comprising 2,096,640 acres of land ready for development by U.S. farmers.
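The survey arithmetic above can be verified directly; this short Python check reproduces the section, township, and acreage figures quoted in the paragraph:

```python
SECTION_ACRES = 640           # one section is one square mile
SECTIONS_PER_TOWNSHIP = 36    # a township is a 6 x 6 mile grid

townships = 91
sections = townships * SECTIONS_PER_TOWNSHIP
print(sections)                  # 3276 sections
print(sections * SECTION_ACRES)  # 2096640 acres, matching the figure above

# The standard half-, quarter-, eighth-, and sixteenth-section lots:
print([SECTION_ACRES // n for n in (2, 4, 8, 16)])  # [320, 160, 80, 40]
```

In practice some townships were fractional, so the acreage quoted in the text is an upper bound rather than an exact tally.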
Although the Land Ordinance of 1785 was conceived as a way to divide the western territory more evenly than had been the case before the Revolution, in practice it was less than fair. Congress thought that land sales in the territory would help it meet its big debts left over from the war. As a result, land sales were aimed at wealthy purchasers rather than the poorest farmers, who were most in need of land on which to settle. Until 1841 the government also required that public land be offered at auction where syndicates of land speculators usually snatched it up before it could be sold to private individuals. Congress set the minimum amount of land that could be purchased at one section—640 acres—and the purchase price at one dollar per acre. Small purchases on credit were not allowed. The $640 minimum purchase placed the cost of western lands far outside the budget of most U.S. citizens. Most of the lands went instead to wealthy land speculators, who were also given the option of buying on credit. The speculators bought lands from the government, divided them up, and then resold them to small farmers at a profit.
An interesting sidelight to the Ordinances was the way that they steered the political culture of the nation. The third Ordinance, passed in 1787, stipulated that future inhabitants would be guaranteed a "bill of rights" guaranteeing freedom of religion and the right to a jury trial. It also prohibited slavery north of the Ohio River, although this applied to the future and did not contemplate the freeing of slaves that were already held in the Old Northwest Territory. The ordinance also contained provisions for the return of escaped slaves.
Gates, Paul W. History of Public Land Law Development. Washington, DC: Wm. W. Gaunt and Sons, 1968.
Morris, Richard B. The Forging of the Union, 1781– 1789. New York: Harper and Row, 1987.
Onuf, Peter S. Statehood and Union: The Northwest Ordinance. Bloomington, IN: Indiana University Press, 1987.
con·fed·er·a·tion / kənˌfedəˈrāshən/ • n. an organization that consists of a number of parties or groups united in an alliance or league. ∎ a more or less permanent union of countries with some or most political power vested in a central authority: Canada became a confederation in 1867. ∎ the action of confederating or the state of being confederated.
A union of states in which each member state retains some independent control over internal and external affairs. Thus, for international purposes, there are separate states, not just one state. A federation, in contrast, is a union of states in which external affairs are controlled by a unified, central government.
Grade 5 - Mathematics
7.18 Circle Facts - 2
Circles are simple closed curves which divide the plane into an interior and exterior.
Circle is a closed figure and not a polygon.
A circle can be drawn with the help of a compass.
All the points on a circle are at the same distance from the center of the circle.
A line from the center to any point on the circle is called radius.
Diameter is twice the radius.
The circumference of a circle means the length of the circle.
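The facts above can be illustrated with a short Python check; the radius of 5 is an arbitrary example value:

```python
import math

radius = 5.0                          # arbitrary example radius
diameter = 2 * radius                 # diameter is twice the radius
circumference = math.pi * diameter    # the "length" of the circle

print(diameter)                 # 10.0
print(round(circumference, 2))  # 31.42
```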
Answer the following. Draw a circle with a compass and write all its properties.
Question 1: Circle is a closed figure and is not a ________
Question 2: All the points on a circle are at the same distance from the
points on the circle
center of the circle
points outside the circle
Question 3: You can draw a circle with
Question 4: The radius of a circle is always
same as diameter
half the diameter
twice the diameter
Question 5: In the circle shown, what are the segments of equal length?
DT, TR and DS
DT, DR and DS
DR, DR and DS
Question 6: A line segment whose end points are the center of the circle and a point on the circle is called
Spiral galaxies such as the Milky Way have very thin discs, in which the major fraction of their stars are found. These discs are limited in size, so that beyond a certain radius there are very few stars left.
Until now, we were not aware of stars in the disc of our Galaxy at distances from the centre greater than twice that of the Sun. This meant that our own star was apparently orbiting at about half the galactic radius. However, we now know that there are stars considerably further out, at more than three times this distance, and it is probable that some lie at more than four times the distance of the Sun from the Galactic centre.
“The disc of our Galaxy is huge, around 200 thousand light years in diameter” says Martín López-Corredoira, a researcher at the IAC and the first author of the article recently published in the journal Astronomy & Astrophysics and whose authors come from both the IAC and the NAOC.
In broad terms we can think of galaxies like the Milky Way as being composed of a rotating disc, which includes spiral arms, and a halo, spherical in shape, which surrounds it. This piece of research has compared the abundances of metals (heavy elements) in the stars of the Galactic plane with those of the halo, to find that there is a mixture of disc and halo stars out to the large distances indicated.
The researchers came to these conclusions after making a statistical analysis of survey data from APOGEE and LAMOST, two projects which obtain spectra of stars to extract information about their velocities and their chemical compositions. “Using the metallicities of the stars in the catalogues from the high quality spectral atlases of APOGEE and LAMOST, and with the distances at which the objects are situated, we have shown that there is an appreciable fraction of stars with higher metallicity, characteristic of disc stars, further out than the previously assumed limit on the radius of the Galaxy disc” explains Carlos Allende, a researcher at the IAC and a co-author of this publication.
Francisco Garzón, an IAC researcher who is another of the authors of the article explains that “We have not used models, which sometimes give us only the answers for which they were designed, but we have employed only the statistics of a large number of objects. The results are therefore free from a priori assumptions, apart from a few basic and well established ones.”
Clouds of snow blanket the Red Planet’s poles during the dead of a Martian winter. However, unlike Earth’s water-based snow, the particles on Mars are actually frozen crystals of carbon dioxide.
Most of the Martian atmosphere is composed of carbon dioxide, and in the winter, the poles get so cold – cold enough to freeze alcohol – that the gas condenses, forming tiny particles of snow.
A team of MIT researchers recently managed to calculate the size of snow particles in clouds at both Martian poles from data gathered by orbiting spacecraft. The team determined that snow particles in the south are slightly smaller than snow in the north — but particles at both poles are about the size of a red blood cell. Interestingly enough, the buildup is about 50 percent larger at Mars’ south pole than its north pole.
“These are very fine particles, not big flakes,” explained MIT Professor Kerri Cahoy. “If the carbon dioxide particles were eventually to fall and settle on the Martian surface, you would probably see it as a fog, because they’re so small.”
Over the course of a Martian year (a protracted 687 days, versus Earth’s 365), the researchers observed that as it gets colder and darker from fall to winter, snow clouds expand from the planet’s poles toward its equator. The snow reaches halfway to the equator before shrinking back toward the poles as winter turns to spring, much like on Earth.
“For the first time, using only spacecraft data, we really revealed this phenomenon on Mars,” said MIT graduate student Renyu Hu.
To determine an accurate picture of carbon dioxide condensation on Mars, Hu analyzed an immense amount of data, including temperature and pressure profiles taken by the Mars Reconnaissance Orbiter (MRO) every 30 seconds over the course of five Martian years (more than nine years on Earth). The researchers then reviewed the data to see where and when conditions would allow carbon dioxide cloud particles to form.
The team also sifted through measurements from the spacecraft’s laser altimeter, which measured the topography of the planet by sending laser pulses to the surface, then timing how long it took for the beams to bounce back. Every once in a while, the instrument picked up a strange signal when the beam bounced back faster than anticipated, reflecting off an anomalously high point above the planet’s surface. Scientists figured these laser beams had encountered clouds in the atmosphere.
Hu analyzed the cloud returns, searching for additional evidence to confirm carbon dioxide condensation. He looked at every case where a cloud was detected, then tried to match the laser altimeter data with concurrent data on local temperature and pressure. In 11 instances, the laser altimeter detected clouds when temperature and pressure conditions were ripe for carbon dioxide to condense. Hu subsequently analyzed the opacity of each cloud – the amount of light reflected – and determined the density of carbon dioxide in each cloud.
To estimate the total mass of carbon dioxide snow deposited at both poles, Hu used earlier measurements of seasonal variations in the Martian gravitational field. As snow piles up at Mars’ poles each winter, the planet’s gravitational field changes by a tiny amount. By analyzing the gravitational difference through the seasons, the researchers determined the total mass of snow at the north and south poles.
Using the total mass, Hu calculated the number of snow particles in a given volume of snow cover, and from that determined the size of the particles. In the north, particles of condensed carbon dioxide ranged from 8 to 22 microns, while particles in the south were a smaller 4 to 13 microns.
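The mass-to-particle-count step can be illustrated with a back-of-envelope sketch. This is not the researchers' actual method, just an illustration: it assumes uniform spherical particles of solid carbon dioxide, with an assumed density of roughly 1600 kg/m^3, and hypothetical input figures.

```python
import math

def particle_count(total_mass_kg, particle_radius_m, ice_density=1600.0):
    """Estimate how many particles make up a snow deposit, assuming
    uniform spherical particles of solid CO2 (the ~1600 kg/m^3 density
    is an assumed round figure)."""
    particle_volume = (4.0 / 3.0) * math.pi * particle_radius_m ** 3
    particle_mass = ice_density * particle_volume
    return total_mass_kg / particle_mass

# Hypothetical numbers: 1e15 kg of deposited snow, 10-micron particles
n = particle_count(1e15, 10e-6)
```

Because the count scales with the inverse cube of the radius, halving the assumed particle size raises the count eightfold, which is why pinning down particle size matters when interpreting the gravity data.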
“It’s neat to think that we’ve had spacecraft on or around Mars for over 10 years, and we have all these great datasets,” noted Cahoy. “If you put different pieces of them together, you can learn something new just from the data.”
According to Hu, knowing the size of carbon dioxide snow cloud particles on Mars may help researchers understand the properties and behavior of dust in the planet’s atmosphere. For snow to form, carbon dioxide requires something around which to condense — for instance, a small silicate or dust particle.
“What kinds of dust do you need to have this kind of condensation?” Hu asks. “Do you need tiny dust particles? Do you need a water coating around that dust to facilitate cloud formation?”
Just as snow on Earth affects the way heat is distributed around the planet, Hu says snow particles on Mars may have a similar effect, reflecting sunlight in various ways, depending on the size of each particle.
“They could be completely different in their contribution to the energy budget of the planet… These datasets could be used to study many problems,” he added.
A rocket (from Italian rocchetto "bobbin") is a missile, spacecraft, aircraft or other vehicle that obtains thrust from a rocket engine. Rocket engine exhaust is formed entirely from propellant carried within the rocket before use. Rocket engines work by action and reaction and push rockets forward simply by expelling their exhaust in the opposite direction at high speed, and can therefore work in the vacuum of space.
In fact, rockets work more efficiently in space than in an atmosphere. Multi-stage rockets are capable of attaining escape velocity from Earth and therefore can achieve unlimited maximum altitude. Compared with airbreathing engines, rockets are lightweight and powerful and capable of generating large accelerations. To control their flight, rockets rely on momentum, airfoils, auxiliary reaction engines, gimballed thrust, momentum wheels, deflection of the exhaust stream, propellant flow, spin, and/or gravity.
Rockets for military and recreational uses date back to at least 13th century China. Significant scientific, interplanetary and industrial use did not occur until the 20th century, when rocketry was the enabling technology for the Space Age, including setting foot on the moon. Rockets are now used for fireworks, weaponry, ejection seats, launch vehicles for artificial satellites, human spaceflight, and space exploration.
Chemical rockets are the most common type of high power rocket, typically creating a high speed exhaust by the combustion of fuel with an oxidizer. The stored propellant can be a simple pressurized gas or a single liquid fuel that disassociates in the presence of a catalyst (monopropellants), two liquids that spontaneously react on contact (hypergolic propellants), two liquids that must be ignited to react, a solid combination of fuel with oxidizer (solid fuel), or solid fuel with liquid oxidizer (hybrid propellant system). Chemical rockets store a large amount of energy in an easily released form, and can be very dangerous. However, careful design, testing, construction and use minimizes risks.
The first gunpowder-powered rockets were developed in Song China, by the 13th century. The Chinese rocket technology was adopted by the Mongols and the invention was spread via the Mongol invasions to the Near East and Europe in the mid 13th century. Medieval and early modern rockets were used militarily as incendiary weapons in sieges.
An early Chinese text to mention the use of rockets was the Huolongjing, written by the Chinese artillery officer Jiao Yu in the mid-14th century. Between 1270 and 1280, Hasan al-Rammah wrote al-furusiyyah wa al-manasib al-harbiyya (The Book of Military Horsemanship and Ingenious War Devices), which included 107 gunpowder recipes, 22 of which are for rockets. In Europe, Konrad Kyeser described rockets in his military treatise Bellifortis around 1405.
The name Rocket comes from the Italian rocchetta, meaning "bobbin" or "little spindle", given due to the similarity in shape to the bobbin or spool used to hold the thread to be fed to a spinning wheel. The Italian term was adopted into German in the mid 16th century by Leonhard Fronsperger and Conrad Haas, and by the early 17th century into English. Artis Magnae Artilleriae pars prima, an important early modern work on rocket artillery, by Kazimierz Siemienowicz, was first printed in Amsterdam in 1650.
The first iron-cased rockets were developed in the late 18th century in the Kingdom of Mysore, adopted and improved as the Congreve rocket and used in the Napoleonic Wars. The first mathematical treatment of the dynamics of rocket propulsion is due to William Moore (1813). In 1815, Alexander Dmitrievich Zasyadko constructed rocket-launching platforms, which allowed rockets to be fired in salvos (6 rockets at a time), and gun-laying devices. William Hale in 1844 greatly increased the accuracy of rocket artillery. The Congreve rocket was further improved by Edward Mounier Boxer in 1865.
Konstantin Tsiolkovsky (1903) first speculated on the possibility of manned spaceflight with rocket technology. Robert Goddard in 1920 published proposed improvements to rocket technology in A Method of Reaching Extreme Altitudes. In 1923, Hermann Oberth (1894–1989) published Die Rakete zu den Planetenräumen ("The Rocket into Planetary Space")
Modern rockets originated when Goddard attached a supersonic (de Laval) nozzle to the combustion chamber of a liquid-fueled rocket engine. These nozzles turn the hot gas from the combustion chamber into a cooler, hypersonic, highly directed jet of gas, more than doubling the thrust and raising the engine efficiency from 2% to 64%. Use of liquid propellants instead of gunpowder greatly improved the effectiveness of rocket artillery in World War II, and opened up the possibility of manned spaceflight after 1945.
In 1943, production of the V-2 rocket began in Germany. In parallel with the guided missile programme, rockets were also used on aircraft, either for assisting horizontal take-off (RATO), vertical take-off (Bachem Ba 349 "Natter") or for powering them (Me 163, see list of World War II guided missiles of Germany). The Allies' rocket programs were less sophisticated, relying mostly on unguided missiles like the Soviet Katyusha rocket. The Americans captured a large number of German rocket scientists, including Wernher von Braun, and brought them to the United States as part of Operation Paperclip. After the war, rockets were used to study high-altitude conditions, by radio telemetry of temperature and pressure of the atmosphere, detection of cosmic rays, and further research; notably the Bell X-1, the first manned vehicle to break the sound barrier. Independently, in the Soviet Union's space program research continued under the leadership of the chief designer Sergei Korolev.
During the Cold War, rockets became extremely important militarily as modern intercontinental ballistic missiles (ICBMs). The 1960s saw rapid development of rocket technology, particularly in the Soviet Union (Vostok, Soyuz, Proton) and in the United States (e.g. the X-15). Rockets were now used for space exploration, with the American manned programs Project Mercury, Project Gemini and later the Apollo programme culminating in 1969 with the first manned landing on the moon via the Saturn V.
- Vehicle configurations
- tiny models such as balloon rockets, water rockets, skyrockets or small solid rockets that can be purchased at a hobby store
- space rockets such as the enormous Saturn V used for the Apollo program
- rocket cars
- rocket bike
- rocket-powered aircraft (including rocket-assisted takeoff of conventional aircraft, RATO)
- rocket sleds
- rocket trains
- rocket torpedoes
- rocket-powered jet packs
- rapid escape systems such as ejection seats and launch escape systems
- space probes
A rocket design can be as simple as a cardboard tube filled with black powder, but to make an efficient, accurate rocket or missile involves overcoming a number of difficult problems. The main difficulties include cooling the combustion chamber, pumping the fuel (in the case of a liquid fuel), and controlling and correcting the direction of motion.
Rockets consist of a propellant, a place to put propellant (such as a propellant tank), and a nozzle. They may also have one or more rocket engines, directional stabilization device(s) (such as fins, vernier engines or engine gimbals for thrust vectoring, gyroscopes) and a structure (typically monocoque) to hold these components together. Rockets intended for high speed atmospheric use also have an aerodynamic fairing such as a nose cone, which usually holds the payload.
As well as these components, rockets can have any number of other components, such as wings (rocketplanes), parachutes, wheels (rocket cars), even, in a sense, a person (rocket belt). Vehicles frequently possess navigation systems and guidance systems that typically use satellite navigation and inertial navigation systems.
Rocket engines employ the principle of jet propulsion. The rocket engines powering rockets come in a great variety of different types; a comprehensive list can be found in rocket engine. Most current rockets are chemically powered rockets (usually internal combustion engines, but some employ a decomposing monopropellant) that emit a hot exhaust gas. A rocket engine can use gas propellants, solid propellant, liquid propellant, or a hybrid mixture of both solid and liquid. Some rockets use heat or pressure that is supplied from a source other than the chemical reaction of propellant(s), such as steam rockets, solar thermal rockets, nuclear thermal rocket engines or simple pressurized rockets such as water rocket or cold gas thrusters. With combustive propellants a chemical reaction is initiated between the fuel and the oxidizer in the combustion chamber, and the resultant hot gases accelerate out of a rocket engine nozzle (or nozzles) at the rearward-facing end of the rocket. The acceleration of these gases through the engine exerts force ("thrust") on the combustion chamber and nozzle, propelling the vehicle (according to Newton's Third Law). This actually happens because the force (pressure times area) on the combustion chamber wall is unbalanced by the nozzle opening; this is not the case in any other direction. The shape of the nozzle also generates force by directing the exhaust gas along the axis of the rocket.
Rocket propellant is mass that is stored, usually in some form of propellant tank or casing, prior to being used as the propulsive mass that is ejected from a rocket engine in the form of a fluid jet to produce thrust. For chemical rockets often the propellants are a fuel such as liquid hydrogen or kerosene burned with an oxidizer such as liquid oxygen or nitric acid to produce large volumes of very hot gas. The oxidiser is either kept separate and mixed in the combustion chamber, or comes premixed, as with solid rockets.
Sometimes the propellant is not burned but still undergoes a chemical reaction, and can be a 'monopropellant' such as hydrazine, nitrous oxide or hydrogen peroxide that can be catalytically decomposed to hot gas.
For smaller, low performance rockets such as attitude control thrusters where high performance is less necessary, a pressurised fluid is used as propellant that simply escapes the spacecraft through a propelling nozzle.
Rockets or other similar reaction devices carrying their own propellant must be used when there is no other substance (land, water, or air) or force (gravity, magnetism, light) that a vehicle may usefully employ for propulsion, such as in space. In these circumstances, it is necessary to carry all the propellant to be used.
However, they are also useful in other situations:
Some military weapons use rockets to propel warheads to their targets. A rocket and its payload together are generally referred to as a missile when the weapon has a guidance system (not all missiles use rocket engines, some use other engines such as jets) or as a rocket if it is unguided. Anti-tank and anti-aircraft missiles use rocket engines to engage targets at high speed at a range of several miles, while intercontinental ballistic missiles can be used to deliver multiple nuclear warheads from thousands of miles, and anti-ballistic missiles try to stop them. Rockets have also been tested for reconnaissance, such as the Ping-Pong rocket, which was launched to surveil enemy targets; however, reconnaissance rockets have never come into wide use in the military.
Science and research
Larger rockets are normally launched from a launch pad that provides stable support until a few seconds after ignition. Due to their high exhaust velocity—2,500 to 4,500 m/s (9,000 to 16,200 km/h; 5,600 to 10,100 mph)—rockets are particularly useful when very high speeds are required, such as orbital speed at approximately 7,800 m/s (28,000 km/h; 17,000 mph). Spacecraft delivered into orbital trajectories become artificial satellites, which are used for many commercial purposes. Indeed, rockets remain the only way to launch spacecraft into orbit and beyond. They are also used to rapidly accelerate spacecraft when they change orbits or de-orbit for landing. Also, a rocket may be used to soften a hard parachute landing immediately before touchdown (see retrorocket).
Some crewed rockets, notably the Saturn V and Soyuz, have launch escape systems. This is a small, usually solid rocket that is capable of pulling the crewed capsule away from the main vehicle towards safety at a moment's notice. These types of systems have been operated several times, both in testing and in flight, and operated correctly each time.
This was the case when the Safety Assurance System (Soviet nomenclature) successfully pulled away the L3 capsule during three of the four failed launches of the Soviet moon rocket, the N1 (vehicles 3L, 5L and 7L). In all three cases the capsule, albeit unmanned, was saved from destruction. Only the three aforementioned N1 rockets had functional Safety Assurance Systems; the remaining vehicle, 6L, had dummy upper stages and therefore no escape system, giving the N1 booster a 100% success rate for egress from a failed launch.
Hobby, sport, and entertainment
Hobbyists build and fly a wide variety of model rockets. Many companies produce model rocket kits and parts, but due to their inherent simplicity some hobbyists have been known to make rockets out of almost anything. Rockets are also used in some types of consumer and professional fireworks. A water-powered rocket is a type of model rocket using water as its reaction mass. The pressure vessel (the engine of the rocket) is usually a used plastic soft drink bottle. The water is forced out by a pressurized gas, typically compressed air. It is an example of Newton's third law of motion.
The scale of amateur rocketry can range from a small rocket launched in one's own backyard to rockets that have reached space. Amateur rocketry is split into three categories: low power, mid power, and high power.
Australia, Austria, Canada, Germany, New Zealand, Switzerland, the U.K., and the U.S. have high power rocket associations which provide certifications to their members to fly different rocket motor sizes. While joining these organizations is not a requirement, they often provide insurance and flight waivers for their members.
Rocket exhaust generates a significant amount of acoustic energy. As the supersonic exhaust collides with the ambient air, shock waves are formed. The sound intensity from these shock waves depends on the size of the rocket as well as the exhaust velocity. The sound intensity of large, high performance rockets could potentially kill at close range.
The Space Shuttle generates 180 dB of noise around its base. To combat this, NASA developed a sound suppression system which can flow water at rates up to 900,000 gallons per minute (57 m3/s) onto the launch pad. The water reduces the noise level from 180 dB down to 142 dB (the design requirement is 145 dB). Without the sound suppression system, acoustic waves reflect off of the launch pad towards the rocket, vibrating the sensitive payload and crew. These acoustic waves can be so severe that they can destroy the rocket.
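The quoted decibel figures can be put in perspective with a standard conversion: sound intensity is logarithmic, so every 10 dB of reduction is a factor of ten in intensity. A minimal sketch using only the numbers quoted above:

```python
def intensity_ratio(db_drop):
    """Convert a decibel reduction to a linear sound-intensity ratio
    (10 dB = one factor of ten)."""
    return 10 ** (db_drop / 10)

# The sound suppression system takes 180 dB down to 142 dB, a 38 dB drop
ratio = intensity_ratio(180 - 142)
```

A 38 dB reduction therefore corresponds to cutting the acoustic intensity by a factor of several thousand.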
Noise is generally most intense when a rocket is close to the ground, since the noise from the engines radiates up away from the jet, as well as reflecting off the ground. This noise can be reduced somewhat by flame trenches with roofs, by water injection around the jet and by deflecting the jet at an angle.
For crewed rockets various methods are used to reduce the sound intensity for the passengers, and typically the placement of the astronauts far away from the rocket engines helps significantly. For the passengers and crew, when a vehicle goes supersonic the sound cuts off as the sound waves are no longer able to keep up with the vehicle.
The effect of the combustion of propellant in the rocket engine is to increase the velocity of the resulting gases to very high speeds, hence producing a thrust. Initially, the gases of combustion are sent in every direction, but only those that produce a net thrust have any effect. The ideal direction of motion of the exhaust is in the direction so as to cause thrust. At the top end of the combustion chamber the hot, energetic gas fluid cannot move forward, and so it pushes upward against the top of the rocket engine's combustion chamber. As the combustion gases approach the exit of the combustion chamber, they increase in speed. The effect of the convergent part of the rocket engine nozzle on the high-pressure fluid of combustion gases is to cause the gases to accelerate to high speed. The higher the speed of the gases, the lower the pressure of the gas (Bernoulli's principle or conservation of energy) acting on that part of the combustion chamber. In a properly designed engine, the flow will reach Mach 1 at the throat of the nozzle, and beyond the throat the speed of the flow increases further. There, a bell-shaped expansion part of the engine allows the expanding gases to push against that part of the rocket engine, so the bell part of the nozzle gives additional thrust. Simply expressed, for every action there is an equal and opposite reaction, according to Newton's third law, with the result that the exiting gases produce a reaction force on the rocket, causing it to accelerate.
In a closed chamber, the pressures are equal in each direction and no acceleration occurs. If an opening is provided in the bottom of the chamber then the pressure is no longer acting on the missing section. This opening permits the exhaust to escape. The remaining pressures give a resultant thrust on the side opposite the opening, and these pressures are what push the rocket along.
The shape of the nozzle is important. Consider a balloon propelled by air coming out of a tapering nozzle. In such a case the combination of air pressure and viscous friction is such that the nozzle does not push the balloon but is pulled by it. Using a convergent/divergent nozzle gives more force since the exhaust also presses on it as it expands outwards, roughly doubling the total force. If propellant gas is continuously added to the chamber then these pressures can be maintained for as long as propellant remains. Note that in the case of liquid propellant engines, the pumps moving the propellant into the combustion chamber must maintain a pressure larger than that of the combustion chamber, typically on the order of 100 atmospheres.
As a side effect, these pressures on the rocket also act on the exhaust in the opposite direction and accelerate this exhaust to very high speeds (according to Newton's Third Law). From the principle of conservation of momentum the speed of the exhaust of a rocket determines how much momentum increase is created for a given amount of propellant. This is called the rocket's specific impulse. Because a rocket, propellant and exhaust in flight, without any external perturbations, may be considered as a closed system, the total momentum is always constant. Therefore, the faster the net speed of the exhaust in one direction, the greater the speed of the rocket can achieve in the opposite direction. This is especially true since the rocket body's mass is typically far lower than the final total exhaust mass.
Forces on a rocket in flight
Flying rockets are primarily affected by the following:
- Thrust from the engine(s)
- Gravity from celestial bodies
- Drag if moving in atmosphere
- Lift; usually relatively small effect except for rocket-powered aircraft
Rockets that must travel through the air are usually tall and thin as this shape gives a high ballistic coefficient and minimizes drag losses.
In addition, the inertia and centrifugal pseudo-force can be significant due to the path of the rocket around the center of a celestial body; when high enough speeds in the right direction and altitude are achieved a stable orbit or escape velocity is obtained.
These forces, with a stabilizing tail (the empennage) present will, unless deliberate control efforts are made, naturally cause the vehicle to follow a roughly parabolic trajectory termed a gravity turn, and this trajectory is often used at least during the initial part of a launch. (This is true even if the rocket engine is mounted at the nose.) Vehicles can thus maintain low or even zero angle of attack, which minimizes transverse stress on the launch vehicle, permitting a weaker, and hence lighter, launch vehicle.
Drag is a force opposite to the direction of the rocket's motion. This decreases acceleration of the vehicle and produces structural loads. Deceleration force for fast-moving rockets are calculated using the drag equation.
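The drag equation referred to above is F_d = 1/2 * rho * v^2 * C_d * A. A minimal sketch, with illustrative values that are not from the text (sea-level air density, an assumed drag coefficient of 0.5 and a 1 m^2 reference area):

```python
def drag_force(rho, v, c_d, area):
    """Drag equation: F_d = 0.5 * rho * v^2 * C_d * A,
    with density in kg/m^3, speed in m/s and area in m^2."""
    return 0.5 * rho * v ** 2 * c_d * area

# Illustrative: sea-level air, 300 m/s, C_d = 0.5, A = 1 m^2
f = drag_force(1.225, 300.0, 0.5, 1.0)
```

The v^2 dependence is why drag loads peak at Max Q rather than simply growing with speed: past that point the thinning atmosphere wins out over the rising velocity.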
Drag can be minimised by an aerodynamic nose cone and by using a shape with a high ballistic coefficient (the "classic" rocket shape—long and thin), and by keeping the rocket's angle of attack as low as possible.
During a rocket launch, as the vehicle speed increases, and the atmosphere thins, there is a point of maximum aerodynamic drag called Max Q. This determines the minimum aerodynamic strength of the vehicle, as the rocket must avoid buckling under these forces.
A typical rocket engine can handle a significant fraction of its own mass in propellant each second, with the propellant leaving the nozzle at several kilometres per second. This means that the thrust-to-weight ratio of a rocket engine, and often the entire vehicle can be very high, in extreme cases over 100. This compares with other jet propulsion engines that can exceed 5 for some of the better engines.
It can be shown that the net thrust of a rocket is:

F_n = m_dot * v_e

where:
- m_dot = propellant flow (kg/s or lb/s)
- v_e = the effective exhaust velocity (m/s or ft/s)
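The net-thrust relation, thrust equal to propellant mass flow times effective exhaust velocity, can be sketched directly. The figures below are illustrative, not from the text:

```python
def net_thrust(mass_flow, v_exhaust):
    """Net thrust F = m_dot * v_e: propellant mass flow (kg/s)
    times effective exhaust velocity (m/s), giving newtons."""
    return mass_flow * v_exhaust

# Illustrative: 250 kg/s of propellant at 3000 m/s effective exhaust velocity
thrust_n = net_thrust(250.0, 3000.0)  # 750 kN
```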
The effective exhaust velocity is more or less the speed the exhaust leaves the vehicle, and in the vacuum of space, the effective exhaust velocity is often equal to the actual average exhaust speed along the thrust axis. However, the effective exhaust velocity allows for various losses, and notably, is reduced when operated within an atmosphere.
The rate of propellant flow through a rocket engine is often deliberately varied over a flight, to provide a way to control the thrust and thus the airspeed of the vehicle. This, for example, allows minimization of aerodynamic losses and can limit the increase of g-forces due to the reduction in propellant load.
Impulse is defined as a force acting on an object over time, which in the absence of opposing forces (gravity and aerodynamic drag) changes the momentum (mass times velocity) of the object. As such, it is the best performance-class (payload mass and terminal velocity capability) indicator of a rocket, rather than takeoff thrust, mass, or "power". The total impulse of a rocket (stage) burning its propellant is:

I = ∫ F dt

When there is fixed thrust, this is simply:

I = F * t
The total impulse of a multi-stage rocket is the sum of the impulses of the individual stages.
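For the fixed-thrust case, each stage's total impulse is just thrust times burn time, and the vehicle total is the sum over stages. A minimal sketch with hypothetical stage figures:

```python
def stage_impulse(thrust_n, burn_time_s):
    """Total impulse of one stage at fixed thrust: I = F * t (N*s)."""
    return thrust_n * burn_time_s

def total_impulse(stages):
    """Total impulse of a multi-stage rocket: the sum of the
    impulses of the individual (thrust, burn-time) stages."""
    return sum(stage_impulse(f, t) for f, t in stages)

# Hypothetical two-stage vehicle: 750 kN for 120 s, then 100 kN for 300 s
i_total = total_impulse([(750e3, 120.0), (100e3, 300.0)])
```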
As can be seen from the thrust equation, the effective speed of the exhaust controls the amount of thrust produced from a particular quantity of fuel burnt per second.
An equivalent measure, the net impulse per unit weight of propellant expelled, is called specific impulse, I_sp, and this is one of the most important figures that describes a rocket's performance. It is defined such that it is related to the effective exhaust velocity by:

v_e = g_0 · I_sp

where:
- I_sp has units of seconds
- g_0 is the acceleration due to gravity at the surface of the Earth

Thus, the greater the specific impulse, the greater the net thrust and performance of the engine. I_sp is determined by measurement while testing the engine. In practice the effective exhaust velocities of rockets vary but can be extremely high, ~4500 m/s, about 15 times the sea-level speed of sound in air.
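The relation v_e = g_0 · I_sp is easy to check numerically; the ~4500 m/s figure above corresponds to a specific impulse of roughly 460 seconds:

```python
g0 = 9.80665  # standard gravity, m/s^2

def isp_from_ve(v_e):
    """Specific impulse in seconds from effective exhaust velocity (m/s)."""
    return v_e / g0

print(f"{isp_from_ve(4500.0):.0f} s")  # 459 s
```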
Delta-v (rocket equation)

The delta-v capacity of a rocket is the theoretical total change in velocity that a rocket can achieve without any external interference (without air drag, gravity, or other forces). It is given by the Tsiolkovsky rocket equation:

Δv = v_e · ln(m_0 / m_f)

where:
- m_0 is the initial total mass, including propellant, in kg (or lb)
- m_f is the final total mass in kg (or lb)
- v_e is the effective exhaust velocity in m/s (or ft/s)
- Δv is the delta-v in m/s (or ft/s)

When launched from the Earth, practical delta-v's for single rockets carrying payloads can be a few km/s. Some theoretical designs have rockets with delta-v's over 9 km/s.

The required delta-v can also be calculated for a particular manoeuvre; for example, the delta-v to launch from the surface of the Earth to low Earth orbit is about 9.7 km/s, which leaves the vehicle with a sideways speed of about 7.8 km/s at an altitude of around 200 km. In this manoeuvre about 1.9 km/s is lost to air drag, gravity drag and gaining altitude.

The ratio m_0 / m_f is sometimes called the mass ratio.
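The rocket equation, Δv = v_e · ln(m_0 / m_f), can be sketched directly (the figures below are illustrative, not from any specific vehicle):

```python
import math

def delta_v(v_e, m0, mf):
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0 / mf)."""
    return v_e * math.log(m0 / mf)

# Illustrative: 4,400 m/s exhaust velocity, mass ratio of 8
print(f"{delta_v(4400.0, 8.0, 1.0):.0f} m/s")  # 9150 m/s
```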
Almost all of a launch vehicle's mass consists of propellant. Mass ratio is, for any 'burn', the ratio between the rocket's initial mass and its final mass. Everything else being equal, a high mass ratio is desirable for good performance, since it indicates that the rocket is lightweight and hence performs better, for essentially the same reasons that low weight is desirable in sports cars.
Rockets as a group have the highest thrust-to-weight ratio of any type of engine, and this helps vehicles achieve high mass ratios, which improves the performance of flights. The higher the ratio, the less engine mass needs to be carried. This permits the carrying of even more propellant, enormously improving the delta-v. Alternatively, some rockets, such as those for rescue scenarios or racing, carry relatively little propellant and payload and thus need only a lightweight structure, and instead achieve high accelerations. For example, the Soyuz escape system can produce 20 g.
Achievable mass ratios are highly dependent on many factors such as propellant type, the design of engine the vehicle uses, structural safety margins and construction techniques.
The highest mass ratios are generally achieved with liquid rockets, and these types are usually used for orbital launch vehicles, a situation which calls for a high delta-v. Liquid propellants generally have densities similar to water (with the notable exceptions of liquid hydrogen and liquid methane), and these types are able to use lightweight, low pressure tanks and typically run high-performance turbopumps to force the propellant into the combustion chamber.
Some notable mass fractions are found in the following table (some aircraft are included for comparison purposes):
| Vehicle | Takeoff mass | Final mass | Mass ratio | Mass fraction |
| --- | --- | --- | --- | --- |
| Ariane 5 (vehicle + payload) | 746,000 kg (~1,645,000 lb) | 2,700 kg + 16,000 kg (~6,000 lb + ~35,300 lb) | 39.9 | 0.975 |
| Titan 23G first stage | 117,020 kg (258,000 lb) | 4,760 kg (10,500 lb) | 24.6 | 0.959 |
| Saturn V | 3,038,500 kg (~6,700,000 lb) | 13,300 kg + 118,000 kg (~29,320 lb + ~260,150 lb) | 23.1 | 0.957 |
| Space Shuttle (vehicle + payload) | 2,040,000 kg (~4,500,000 lb) | 104,000 kg + 28,800 kg (~230,000 lb + ~63,500 lb) | 15.4 | 0.935 |
| Saturn 1B (stage only) | 448,648 kg (989,100 lb) | 41,594 kg (91,700 lb) | 10.7 | 0.907 |
| Virgin Atlantic GlobalFlyer | 10,024.39 kg (22,100 lb) | 1,678.3 kg (3,700 lb) | 6.0 | 0.83 |
| V-2 | 13,000 kg (~28,660 lb) (12.8 t) | | 3.85 | 0.74 |
| X-15 | 15,420 kg (34,000 lb) | 6,620 kg (14,600 lb) | 2.3 | 0.57 |
| Concorde | ~181,000 kg (400,000 lb) | | 2 | 0.5 |
| Boeing 747 | ~363,000 kg (800,000 lb) | | 2 | 0.5 |
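The mass-ratio and mass-fraction columns can be reproduced from the mass figures; taking the Ariane 5 row as an example:

```python
m0 = 746_000.0           # takeoff mass, kg
mf = 2_700.0 + 16_000.0  # final mass: empty vehicle + payload, kg

mass_ratio = m0 / mf
mass_fraction = (m0 - mf) / m0
print(f"ratio {mass_ratio:.1f}, fraction {mass_fraction:.3f}")
# ratio 39.9, fraction 0.975
```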
Thus far, the required velocity (delta-v) to achieve orbit has been unattainable by any single rocket, because the propellant, tankage, structure, guidance, valves, engines and so on take a minimum percentage of take-off mass that is too great for the propellant the rocket can carry to deliver that delta-v. Since single-stage-to-orbit has so far not proven achievable, orbital rockets always have more than one stage.
For example, the first stage of the Saturn V, carrying the weight of the upper stages, was able to achieve a mass ratio of about 10, and achieved a specific impulse of 263 seconds. This gives a delta-v of around 5.9 km/s whereas around 9.4 km/s delta-v is needed to achieve orbit with all losses allowed for.
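The quoted ~5.9 km/s figure follows directly from the rocket equation using the stage's specific impulse and mass ratio:

```python
import math

g0 = 9.80665       # standard gravity, m/s^2
isp = 263.0        # s, Saturn V first stage (from the text)
mass_ratio = 10.0  # approximate first-stage mass ratio (from the text)

dv = isp * g0 * math.log(mass_ratio)
print(f"{dv / 1000:.1f} km/s")  # 5.9 km/s
```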
This problem is frequently solved by staging: the rocket sheds excess weight (usually empty tankage and the associated engines) during launch. Staging is either serial, where each stage ignites after the previous stage has fallen away, or parallel, where stages burn together and then detach when they burn out.

The maximum speed that can be achieved with staging is theoretically limited only by the speed of light. However, the payload that can be carried goes down geometrically with each extra stage needed, while the additional delta-v for each stage is simply additive.
Acceleration and thrust-to-weight ratio
From Newton's second law, the acceleration, a, of a vehicle is simply:

a = F_n / m

where m is the instantaneous mass of the vehicle and F_n is the net force acting on the rocket (mostly thrust, but air drag and other forces can play a part).
As the remaining propellant decreases, rocket vehicles become lighter and their acceleration tends to increase until the propellant is exhausted. This means that much of the speed change occurs towards the end of the burn when the vehicle is much lighter. However, the thrust can be throttled to offset or vary this if needed. Discontinuities in acceleration also occur when stages burn out, often starting at a lower acceleration with each new stage firing.
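A minimal sketch of this effect, with illustrative numbers (constant, unthrottled thrust and a 5:1 full-to-empty mass range):

```python
def acceleration(thrust, m):
    """Newton's second law: a = F / m (net force over instantaneous mass)."""
    return thrust / m

thrust = 1_000_000.0  # N, held constant (no throttling)
m_full, m_empty = 100_000.0, 20_000.0

a_start = acceleration(thrust, m_full)
a_end = acceleration(thrust, m_empty)
print(a_start, a_end)  # 10.0 50.0 -- acceleration rises as propellant burns off
```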
Peak accelerations can be increased by designing the vehicle with a reduced mass, usually achieved by a reduction in the fuel load and tankage and associated structures, but obviously this reduces range, delta-v and burn time. Still, for some applications that rockets are used for, a high peak acceleration applied for just a short time is highly desirable.
The minimal mass of a vehicle consists of a rocket engine with minimal fuel and the structure to carry it. In that case the thrust-to-weight ratio[nb 3] of the rocket engine limits the maximum acceleration that can be designed. It turns out that rocket engines generally have truly excellent thrust-to-weight ratios (137 for the NK-33 engine; some solid rockets exceed 1000), and nearly all really high-g vehicles employ or have employed rockets.
The high accelerations that rockets naturally possess means that rocket vehicles are often capable of vertical takeoff, and in some cases, with suitable guidance and control of the engines, also vertical landing. For these operations to be done it is necessary for a vehicle's engines to provide more than the local gravitational acceleration.
Rocket launch vehicles take-off with a great deal of flames, noise and drama, and it might seem obvious that they are grievously inefficient. However, while they are far from perfect, their energy efficiency is not as bad as might be supposed.
The energy density of a typical rocket propellant is often around one-third that of conventional hydrocarbon fuels; the bulk of the mass is (often relatively inexpensive) oxidizer. Nevertheless, at take-off the rocket has a great deal of energy stored in the fuel and oxidizer within the vehicle. It is of course desirable that as much of the energy of the propellant as possible end up as kinetic or potential energy of the body of the rocket.
In a chemical propulsion device, the engine efficiency is simply the ratio of the kinetic power of the exhaust gases to the power available from the chemical reaction:

η_c = (kinetic power of the jet) / (chemical power available)

100% efficiency within the engine (engine efficiency η_c = 1) would mean that all the heat energy of the combustion products is converted into kinetic energy of the jet. This is not possible, but the near-adiabatic, high-expansion-ratio nozzles that can be used with rockets come surprisingly close: when the nozzle expands the gas, the gas is cooled and accelerated, and an energy efficiency of up to 70% can be achieved. Most of the rest is heat energy in the exhaust that is not recovered. The high efficiency is a consequence of the fact that rocket combustion can be performed at very high temperatures, with the gas finally released at much lower temperatures, giving good Carnot efficiency.

However, engine efficiency is not the whole story. In common with the other jet-based engines, but particularly in rockets due to their high and typically fixed exhaust speeds, rocket vehicles are extremely inefficient at low speeds irrespective of the engine efficiency. The problem is that at low speeds, the exhaust carries away a huge amount of kinetic energy rearward. This phenomenon is termed propulsive efficiency (η_p).
However, as speeds rise, the resultant exhaust speed goes down, and the overall vehicle energetic efficiency rises, reaching a peak of around 100% of the engine efficiency when the vehicle is travelling exactly at the same speed that the exhaust is emitted. In this case the exhaust would ideally stop dead in space behind the moving vehicle, taking away zero energy, and from conservation of energy, all the energy would end up in the vehicle. The efficiency then drops off again at even higher speeds as the exhaust ends up travelling forwards, trailing behind the vehicle.
From these principles it can be shown that the propulsive efficiency for a rocket moving at speed u with an exhaust velocity c is:

η_p = 2 (u/c) / (1 + (u/c)²)

And the overall (instantaneous) energy efficiency is:

η = η_c · η_p
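The peak-at-matched-speeds behaviour described above is easy to confirm numerically, using the standard propulsive-efficiency expression η_p = 2(u/c) / (1 + (u/c)²); this formula is a reconstruction, since the original symbols were lost in this copy, and the worked figures in the following paragraph may include further loss terms:

```python
def propulsive_efficiency(u, c):
    """eta_p = 2*(u/c) / (1 + (u/c)**2); peaks at 1.0 when u == c."""
    r = u / c
    return 2 * r / (1 + r * r)

# Peak: vehicle speed equals exhaust speed
print(propulsive_efficiency(3000.0, 3000.0))  # 1.0
# Far below exhaust speed, most jet energy is wasted rearward
print(round(propulsive_efficiency(300.0, 3000.0), 3))  # 0.198
```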
For example, from the equation, with an of 0.7, a rocket flying at Mach 0.85 (which most aircraft cruise at) with an exhaust velocity of Mach 10, would have a predicted overall energy efficiency of 5.9%, whereas a conventional, modern, air-breathing jet engine achieves closer to 35% efficiency. Thus a rocket would need about 6x more energy; and allowing for the specific energy of rocket propellant being around one third that of conventional air fuel, roughly 18x more mass of propellant would need to be carried for the same journey. This is why rockets are rarely if ever used for general aviation.
Since the energy ultimately comes from fuel, these considerations mean that rockets are mainly useful when a very high speed is required, such as ICBMs or orbital launch. For example, NASA's space shuttle fires its engines for around 8.5 minutes, consuming 1,000 tonnes of solid propellant (containing 16% aluminium) and an additional 2,000,000 litres of liquid propellant (106,261 kg of liquid hydrogen fuel) to lift the 100,000 kg vehicle (including the 25,000 kg payload) to an altitude of 111 km and an orbital velocity of 30,000 km/h. At this altitude and velocity, the vehicle has a kinetic energy of about 3 TJ and a potential energy of roughly 200 GJ. Given the initial energy of 20 TJ,[nb 4] the Space Shuttle is about 16% energy efficient at launching the orbiter.
Thus jet engines, with a better match between speed and jet exhaust speed (such as turbofans—in spite of their worse )—dominate for subsonic and supersonic atmospheric use, while rockets work best at hypersonic speeds. On the other hand, rockets serve in many short-range relatively low speed military applications where their low-speed inefficiency is outweighed by their extremely high thrust and hence high accelerations.
One subtle feature of rockets relates to energy. A rocket stage, while carrying a given load, is capable of giving a particular delta-v. This delta-v means that the speed increases (or decreases) by a particular amount, independent of the initial speed. However, because kinetic energy is a square law on speed, this means that the faster the rocket is travelling before the burn the more orbital energy it gains or loses.
This fact is used in interplanetary travel. It means that the amount of delta-v to reach other planets, over and above that to reach escape velocity can be much less if the delta-v is applied when the rocket is travelling at high speeds, close to the Earth or other planetary surface; whereas waiting until the rocket has slowed at altitude multiplies up the effort required to achieve the desired trajectory.
Safety, reliability and accidents
The reliability of rockets, as for all physical systems, is dependent on the quality of engineering design and construction.
Because of the enormous chemical energy in rocket propellants (greater energy by weight than explosives, but lower than gasoline), the consequences of accidents can be severe. Most space missions have some issues. In 1986, following the Space Shuttle Challenger disaster, American physicist Richard Feynman, who served on the Rogers Commission, estimated that the chance of an unsafe condition for a launch of the Shuttle was very roughly 1%; more recently the historical per-person-flight risk in orbital spaceflight has been calculated to be around 2% to 4%.
Costs and economics
The costs of rockets can be roughly divided into propellant costs, the costs of obtaining and/or producing the 'dry mass' of the rocket, and the costs of any required support equipment and facilities.
Most of the takeoff mass of a rocket is normally propellant. However, propellant is seldom more than a few times more expensive than gasoline per kilogram (as of 2009 gasoline was about $1/kg [$0.45/lb] or less), and although substantial amounts are needed, for all but the very cheapest rockets the propellant costs are usually comparatively small, though not completely negligible. With liquid oxygen costing $0.15 per kilogram ($0.068/lb) and liquid hydrogen $2.20/kg ($1.00/lb), the Space Shuttle in 2009 had a liquid-propellant expense of approximately $1.4 million per launch, against roughly $450 million of other expenses (with 40% of the mass of its propellants being the liquids in the external fuel tank and 60% the solids in the SRBs).
Even though a rocket's non-propellant dry mass is often only between 5% and 20% of total mass, this cost nevertheless dominates. For hardware with the performance used in orbital launch vehicles, expenses of $2,000–$10,000+ per kilogram of dry weight are common, arising primarily from engineering, fabrication, and testing; raw materials typically amount to around 2% of total expense. For most rockets except reusable ones (such as the Shuttle's engines), the engines need not function more than a few minutes, which simplifies design.
Extreme performance requirements for rockets reaching orbit correlate with high cost, including intensive quality control to ensure reliability despite the limited safety factors allowable for weight reasons. Components produced in small numbers, if not individually machined, can prevent amortization of R&D and facility costs over mass production to the degree seen in more pedestrian manufacturing. Amongst liquid-fueled rockets, complexity can be influenced by how much hardware must be lightweight: pressure-fed engines can have a part count two orders of magnitude lower than pump-fed engines, but the higher tank pressures they require add weight, so as a consequence they are most often used only in small maneuvering thrusters.
To change the preceding factors for orbital launch vehicles, proposed methods have included mass-producing simple rockets in large quantities or on a large scale, developing reusable rockets meant to fly very frequently to amortize their up-front expense over many payloads, or reducing rocket performance requirements by constructing a hypothetical non-rocket spacelaunch system for part of the velocity to orbit (or all of it, though most such methods still involve some rocket use).
The costs of support equipment, range costs and launch pads generally scale up with the size of the rocket, but vary less with launch rate, and so may be considered to be approximately a fixed cost.
Rockets in applications other than launch to orbit (such as military rockets and rocket-assisted take off), commonly not needing comparable performance and sometimes mass-produced, are often relatively inexpensive.
- Chronology of Pakistan's rocket tests
- List of rockets
- Timeline of rocket and missile technology
- Timeline of spaceflight
- Astrodynamics—the study of spaceflight trajectories
- Pendulum rocket fallacy—an instability of rockets
- Rocket garden—a place for viewing unlaunched rockets
- Rocket launch
- Rocket launch site
- Variable-mass system—the form of Newton's second law used for describing rocket motion
Propulsion and Propellant
- Ammonium Perchlorate Composite Propellant—Most common solid rocket propellant
- Bipropellant rocket—two-part liquid or gaseous fuelled rocket
- Hot Water rocket—powered by boiling water
- Pulsed Rocket Motors—solid rocket that burns in segments
- Spacecraft propulsion—describes many different propulsion systems for spacecraft
- Tripropellant rocket—variable propellant mixes can improve performance
Recreational Pyrotechnic Rocketry
- Bottle rocket—small firework type rocket often launched from bottles
- Skyrocket—fireworks that typically explode at apogee
- Air-to-ground rockets
- Fire Arrow—one of the earliest types of rocket
- Katyusha rocket launcher—rack mounted rocket
- Rocket-propelled grenade—military use of rockets
- Shin Ki Chon—Korean variation of the Chinese fire arrow
- VA-111 Shkval—Russian rocket-propelled supercavitation torpedo
Rockets for Research
- Rocket plane—winged aircraft powered by rockets
- Rocket sled—used for high speeds along ground
- Sounding rocket—suborbital rocket used for atmospheric and other research
- English rocket, first attested in 1566 (OED), adopted from the Italian term, given due to the similarity in shape to the bobbin or spool used to hold the thread to be fed to a spinning wheel. The modern Italian term is razzo.
- The confusion is illustrated in http://science.howstuffworks.com/rocket.htm; “If you have ever seen a big fire hose spraying water, you may have noticed that it takes a lot of strength to hold the hose (sometimes you will see two or three firefighters holding the hose). The hose is acting like a rocket engine. The hose is throwing water in one direction, and the firefighters are using their strength and weight to counteract the reaction. If they were to let go of the hose, it would thrash around with tremendous force. If the firefighters were all standing on skateboards, the hose would propel them backward at great speed!”
- “thrust-to-weight ratio F/Wg is a dimensionless parameter that is identical to the acceleration of the rocket propulsion system (expressed in multiples of g0) ... in a gravity-free vacuum”
- The energy density is 31 MJ/kg for aluminum and 143 MJ/kg for liquid hydrogen; this means that the vehicle consumes around 5 TJ of solid propellant and 15 TJ of hydrogen fuel.
- FAA Office of Commercial Space Transportation
- National Aeronautics and Space Administration (NASA)
- National Association of Rocketry (USA)
- Tripoli Rocketry Association
- Asoc. Coheteria Experimental y Modelista de Argentina
- United Kingdom Rocketry Association
- IMR - German/Austrian/Swiss Rocketry Association
- Canadian Association of Rocketry
- Indian Space Research Organisation
This chapter covers how to read and save an image with Pillow.

Reading and writing images with the Pillow library is very simple, using the functions of the PIL.Image module. Image.open() has the following signature:

Image.open(fp, mode='r')
fp − A filename (string), pathlib.Path object or a file object. The file object must implement read(), seek() and tell() methods and be opened in binary mode.
mode − It’s an optional argument, if given, must be ‘r’.
Return value − An Image object.
Error − If the file cannot be found, or the image cannot be opened and identified.
Following is a very simple example, where we are going to open an image of any format (We are using .jpg), display it in a window and then save it (default location) with another file format (.png).
```python
from PIL import Image

image = Image.open('beach1.jpg')
image.show()
image.save('beach1.bmp')

image1 = Image.open('beach1.bmp')
image1.show()
```
In the above example, we import the Image module from the PIL library and then call the Image.open() function to read an image from disk, which returns an Image object. It determines the type of file automatically by looking at the file content. For reading, the open() function accepts a filename (string), a path object or an image (file) object.
So, by using the open() function, we are actually reading the image. Image.open() reads the image and gets all the relevant information from it.
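Once opened (or created), an Image object exposes that information as attributes. A small self-contained sketch; the image here is created in memory with Image.new() rather than read from disk, so no sample file is needed:

```python
from PIL import Image

# Create a small image in memory so the example needs no input file
img = Image.new('RGB', (64, 48), color='steelblue')

print(img.size)    # (64, 48) -- (width, height)
print(img.mode)    # RGB
print(img.format)  # None for images created in memory;
                   # 'JPEG', 'PNG', etc. for files read with Image.open()
```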
If you save the above program as Example.py and execute it, it displays the original (.jpg) and the resaved (.bmp) image using the standard image display utility, as follows −
Resaved image (.bmp)
The save() function writes an image to file. Like the open() function used for reading, the save() function accepts a filename, a path object or a file object that has been opened for writing.
Image.save(fp, format=None, **params)
fp − A filename (string), pathlib.Path object or file object.
format − Optional format override. If omitted, the format to use is determined from the filename extension. If a file object was used instead of a filename, this parameter should always be used.
params − Extra parameters to the image writer.
Return value − None
KeyError − If the output format could not be determined from the file name. Use the format option to solve this.
IOError − If the file could not be written. The file may have been created, and may contain partial data.
In short, the above syntax will save the image under the given filename. If no format is specified, it is inferred from the filename extension. To provide additional instructions to the writer, we use keyword options.

In the earlier example, the file extension determines the type of image saved; the code above creates a .bmp file in our current working directory.
You can also explicitly specify the file type as a second parameter −
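For instance, when saving to a file object there is no filename extension to inspect, so the format must be passed explicitly. A small sketch using an in-memory buffer instead of a disk file:

```python
from io import BytesIO
from PIL import Image

img = Image.new('RGB', (8, 8))

# Saving to a file object: the extension can't be inspected,
# so the format must be given explicitly
buf = BytesIO()
img.save(buf, format='PNG')

buf.seek(0)
print(Image.open(buf).format)  # PNG
```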
What was the East India Company?
The East India Company was probably the most powerful corporation in history. At its height, it dominated global trade between Europe, South Asia and the Far East, fought numerous wars using its own army and navy, and conquered and colonised modern day India, Pakistan, Bangladesh and Burma.
From its foundation in 1600 the Company was granted a monopoly on British trade with the East, and the products it brought back soon began appearing in British homes. During the eighteenth century, cottons, indigo, porcelain, tea, and silks imported by the Company became incredibly popular. On the back of such lucrative trade, many Britons became wealthy, while even those of modest middle-class means benefited from share ownership.
During the seventeenth and early eighteenth centuries the Company trafficked enslaved people taken from Africa across the Indian Ocean to work on plantations in India and Indonesia.
By the late eighteenth century, however, customers and rivals increasingly complained of unfair pricing and practices. Eventually, in 1813, the government acted to strip the Company of its privileges and open its markets to free trade. As a result, it was left with a monopoly on only a few key products, principally tea, which it purchased from China.
As the Chinese authorities would only accept silver in exchange, the Company looked to more profitable methods of exchange and increasingly relied on smuggling opium. This resulted in a terrible human cost and two ‘Opium Wars’ between Britain and China.
From its earliest days the Company had sought to protect its presence in India through the building of fortified outposts and field armies. Pushed on by threats from other European powers and local rulers, plus the individual ambition of several key figures, the Company’s military machine expanded to fight numerous wars.
With the victories at Plassey (1757) and Buxar (1764) the Company forced the defeated Mughal emperor, Shah Alam II, to surrender the territories of Bengal, Bihar and Orissa. From this point onwards the East India Company became rulers, able to create laws and levy taxes. For almost a century after this the Company waged war until it had conquered the entirety of the Subcontinent and ruled over a population exceeding 200 million.
As both merchant and ruler, the Company maintained high prices and charged its tax collectors with extracting as much revenue from the Indian population as possible, which opponents argued reduced many to subsistence poverty. The Company’s policies and inaction also directly contributed to the ill effects of crop failures, such as in Bengal during 1769-70, where the resulting famine killed somewhere between 1 and 4 million people.
As foreign conquerors, the Company felt itself in perpetual danger of rebellion and overthrow and was particularly wary of interfering in Indian religions. That said, the Company could be highly interventionist, particularly in terms of law and order, where it sought to maintain an image of unassailable power and was not beyond brutal retribution.
From the early nineteenth century, tensions grew with the introduction of Christian missionaries and the Company’s move to mould Indian society more in the image of Britain. Combined with continued misrule and unrest in the army, this exploded into violence in 1857.
The rebellion which broke out that year spelt the end of the East India Company, and after a bloody campaign of suppression, the British government transferred India to its direct control.
Starting his career as a junior mercantile agent, Robert Clive rose to become Governor of Bengal and Commander-in-Chief of the East India Company's army. Despite no formal military training Clive successfully led numerous campaigns, the most famous being the Battle of Plassey. It was also Clive who extracted territorial control for the Company as a result. For himself, he took a vast fortune in gold, silver and jewels from the treasury of the defeated Siraj ud-Daulah as spoils of war.
Our places and collections with East India Company connections
Claremont Landscape Garden
On his return from India, Robert Clive bought the Claremont estate and decided to have the house demolished and a new Palladian mansion and landscape garden constructed. The task was given to two giants of eighteenth century design, Henry Holland and Lancelot 'Capability' Brown. Having spent a huge sum of money, Clive was not destined to enjoy his new estate, as he died the year of its completion in 1774.
Powis Castle and Garden
Robert's eldest son, Edward Clive, inherited his father's fortune but was in want of status and a wife. Henrietta Herbert, eldest daughter of the Earl of Powis, had a prestigious name, however the family was in serious debt. Their marriage in 1784 was therefore a welcome occasion for both. The couple would travel to India together during Edward's posting as Governor of Madras and return home with a large collection of Indian objects. These they installed at Powis Castle and now make up the core collection of the Clive Museum.
Britain’s increasing involvement in India had an influence reaching far beyond those who travelled East. Alongside the domestic commodities, Indian and ‘Oriental’ architecture, interiors and objects also proliferated. Individual objects, either imported or created in Britain in the style of Asian goods, found their way into most homes. The Lower India Room at Penrhyn Castle, created in the 1830s, is a stunning example of this trend for all things ‘Oriental’.
Francis Sykes was Robert Clive’s right hand man in Bengal and crucial to the dramatic rise of the East India Company. Having had a senior Company position and his own trading network, he returned to Britain an extremely wealthy man. Sykes purchased himself a place in Parliament and bought the Basildon Park estate, where he commissioned an impressive Palladian mansion to be built.
In many ways Sykes was the archetypal “nabob”, the 18th century name for a brash and newly rich East India Company employee, denounced for the scale of his fortune and the manner of its acquisition. Unlike Clive, who proudly displayed his Indian collections around his many houses, Sykes appears to have furnished his home with few Indian objects, preferring instead to buy Chinese porcelain imported through Company connections.
A tiger head finial from the throne of Tipu Sultan
Tipu Sultan, ruler of Mysore, was one of the Company's fiercest and most implacable enemies. After several wars, he was eventually killed during the British conquest of Seringapatam in 1799, by the forces of expansionist Governor-General, Lord Wellesley, and his brother, the future Duke of Wellington. The richest kingdom in South India therefore fell into British hands and Tipu's palaces were pillaged. This finial, made of gold and set with rubies, diamonds and emeralds, was broken from Tipu's throne and given by Wellesley to Henrietta Clive (née Herbert), and is now on display in the Clive Museum at Powis Castle.
Tipu Sultan's tent
This tent was used by Tipu Sultan as his headquarters in the field while he made progresses about his territories, and was captured by the Company following his defeat. Made of cotton chintz and patterned with acanthus leaves and flowers, it was amongst the Indian relics acquired by Edward Clive while he was Governor of Madras. At Powis it was used as a marquee for garden parties held by the family, and is now on partial display.
Puja performed at a Temple of Shiva
This oil painting depicts a puja, a Hindu prayer ritual, in this case performed in worship of Shiva. The painting, part of a series and dated 1804, is by British artist Thomas Daniell who spent 10 years in India painting various subjects for the British market. The seated figure is likely a high caste Indian, who is attended by a man servant with a fan. Such images grew in popularity alongside British rule, and were part of a general rage for all things Indian, led by the Prince Regent at the turn of the nineteenth century. This example and others in the series were bought by Richard Colt Hoare for the family home at Stourhead.
This article contains contributions from Kieran Hazzard from the University of Oxford who specialises in Britain and India during the eighteenth and nineteenth centuries. His research particularly covers the encounter between British Radicalism and India, and the role of material objects in fashioning the Raj both in reality and in story. His wider interests include constitutional thought, the consumption of history, print culture, and the East India Company. Kieran is a contributor to the Trusted Source project.
Find out more about our Trusted Source articles, which were created in partnership with the University of Oxford, and explore topics related to the special places in our care.
In England, several sites of lost medieval villages can be found at National Trust places. Learn more about these abandoned villages.
Explore a selection from more than half a million books and manuscripts in the collections we care for. Libraries Curator Tim Pye takes a closer look at some of the most significant works.
The places and collections in our care are rich with different sporting cultures. Find celebratory sports-themed sculptures and paintings, sportswear through the ages and historic sporting equipment.
Matrix Laboratory or Matlab is software used to analyse and design systems and other products. It can efficiently plot various types of lines and graphs, as per need and preference. Graphs are a primary method of representing data visually. Many kinds of graphs can be made in Matlab, such as line plots, bubble and scatter charts, data distribution plots, discrete data plots, geographic plots, polar plots, contour plots, volume visualisations, and many more.
In this article, we will see the different functions and methods to plot equations in MATLAB.
Also read: How to plot multiple lines in Matlab?
In Matlab, ezplot is one of the many available functions, though it is no longer the recommended choice (fplot is preferred in recent releases). ezplot plots a symbolic expression, function or equation. For a univariate function, it plots over the default range [-2*pi, 2*pi].
Here, syms is a function used to create symbolic scalar variables, matrix variables or functions.
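A minimal sketch of the above, assuming the Symbolic Math Toolbox (which provides syms) is installed:

```matlab
% Plot a univariate symbolic expression with ezplot.
syms x
ezplot(sin(x))          % plotted over the default range [-2*pi, 2*pi]
title('ezplot of sin(x)')
```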
The function fplot plots the symbolic expression or function within a default range of [-5, 5].
The default range can, however, be varied depending on the requirements.
If the default range is unfavourable, fplot can be given an explicit range for the graph.
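For example, an anonymous function can be passed to fplot with and without an explicit range (a sketch; the function plotted here is just an illustration):

```matlab
% Default range [-5, 5]:
fplot(@(x) x.^2 - 3)

% Overriding the default range with a two-element vector:
fplot(@(x) x.^2 - 3, [-10, 10])
```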
Also read: How to inverse matrix in Matlab?
2-D Line Plot in Matlab is created using the basic inbuilt function plot(x, y), where x and y are both vectors, and the graph is y versus the corresponding values of x. Within this function, by adding further arguments, line colour, line width, and many other characteristics can be set on the graph. Consider the example below.
The graph above is a sine graph. Note that the graph has not been modified to change colour, line width or line style. Consider the change in the syntax below with a change in colour, line width, and line style.
The graph here is green, as indicated by the ‘g’. The line width is 4, written after the keyword ‘LineWidth’, and the line style is dash-dot (-.), given after the keyword ‘LineStyle’.
Adding colour to the graph or changing its width/style can improve the graph’s readability and make it more visible. Adding colour also makes it easier to add and understand the legends in multiple line graphs.
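The styled sine plot described above can be sketched as follows (the sample spacing is an arbitrary choice):

```matlab
% A 2-D line plot of sin(x), styled green, width 4, dash-dot.
x = linspace(0, 2*pi, 200);   % 200 sample points over one period
y = sin(x);
plot(x, y, 'g', 'LineWidth', 4, 'LineStyle', '-.')
xlabel('x'), ylabel('sin(x)')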
The fplot3 function plots a 3D parametric curve of the functions or equations x(t), y(t), and z(t) (the related plot3 function plots numeric coordinate vectors instead). The default range for the fplot3 function is [-5, 5]. However, the default range of the plot can be varied using a square bracket with the range specified in it.
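As a sketch, a helix can be drawn with fplot3 (the function-based counterpart of plot3 for parametric curves); the curve chosen here is only an illustration:

```matlab
% 3-D parametric curve: a helix, over the default range t in [-5, 5].
fplot3(@(t) cos(t), @(t) sin(t), @(t) t)

% Specify a custom range for t with a bracketed vector:
fplot3(@(t) cos(t), @(t) sin(t), @(t) t, [0, 4*pi])
```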
Also read: How to make a table in MATLAB?
The function ezpolar plots a function in polar coordinates. A polar curve r = f(theta) is drawn over the default domain of 0 to 2*pi. The default domain can be varied by adding a square bracket with the range in the ezpolar call.
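A short sketch, assuming the Symbolic Math Toolbox for syms (the cardioid is just an example curve):

```matlab
% Polar plot of r = 1 + cos(theta) over the default domain [0, 2*pi].
syms t
ezpolar(1 + cos(t))

% Restrict the domain with a bracketed range:
ezpolar(1 + cos(t), [0, pi])
```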
The function fsurf plots 3D surfaces. It creates a surface plot of the symbolic functions over a default range [-5, 5]. The default range can be changed based on the user’s requirements simply by adding the range in a square bracket alongside the function.
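For instance (a sketch, assuming the Symbolic Math Toolbox; the surface is an arbitrary example):

```matlab
% Surface plot of a symbolic function of two variables.
syms x y
fsurf(sin(x) * cos(y))                 % default range [-5, 5] in x and y
fsurf(sin(x) * cos(y), [-2*pi, 2*pi])  % custom range
```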
Also read: How does Shazam work?
The function fcontour is used in Matlab to plot contours. It plots the contour lines of symbolic variables x, y. The default range for the function is [-5, 5]. However, this range can be changed depending on the requirements by adding a square bracket with the range specified within the function call.
The colorbar function can be used to show the colour scale along with the plot. While it is not necessary, it helps make the graph more readable.
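Putting the two together, a minimal sketch (the paraboloid is an arbitrary example function):

```matlab
% Contour lines of a symbolic function over a custom range.
syms x y
fcontour(x^2 + y^2, [-3, 3])
colorbar   % show the colour scale alongside the plot
```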
The fmesh function plots a 3D mesh plot. It creates a mesh plot of the symbolic variables in a default range of [-5, 5]. This range can be varied by using a square bracket along with the function description.
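A brief sketch of fmesh, again assuming the Symbolic Math Toolbox:

```matlab
% Mesh plot of a symbolic function of two variables.
syms x y
fmesh(sin(x) + cos(y))                 % default range [-5, 5]
fmesh(sin(x) + cos(y), [-2*pi, 2*pi])  % custom range
```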
A creative nerd, TT player, an avid reader, and an engineering student.
Explain The Term "Optical Fibres" And How They Work On Total Internal Reflection
Optical fiber (or "fiber optic") refers to the medium and the technology associated with the transmission of information as light pulses along a glass or plastic strand or fiber. Optical fiber carries much more information than conventional copper wire, is in general not subject to electromagnetic interference, and signals need to be retransmitted far less often. Most telephone company long-distance lines are now made of optical fiber.
Total Internal Reflection:
When light is incident from a denser to a rarer medium, the ray deviates away from the normal. For a certain angle of incidence (i = ic) the angle of refraction becomes 90°. This angle of incidence is called the critical angle. Any ray with an incident angle greater than this critical angle will be completely reflected back into the denser medium. This phenomenon is called Total Internal Reflection.
Optical fibers are made of glass. For fiber optic communication, plain glass fibers are not enough; they have to be specially built for total internal reflection (TIR). To achieve TIR, the optical fiber is given an outer layer with a lower refractive index, called the cladding, while the central part of the glass fibre is called the core.
It is a flexible, transparent fiber made of glass (silica) or plastic, slightly thicker than a human hair.
How they work on "Total Internal Reflection" :-
When light traveling in an optically dense medium hits a boundary at a steep angle (larger than the critical angle for the boundary), the light will be completely reflected. This is called total internal reflection. This effect is used in optical fibers to confine light in the core. Light travels through the fiber core, bouncing back and forth off the boundary between the core and cladding. Because the light must strike the boundary with an angle greater than the critical angle, only light that enters the fiber within a certain range of angles can travel down the fiber without leaking out. This range of angles is called the acceptance cone of the fiber. The size of this acceptance cone is a function of the refractive index difference between the fiber's core and cladding.
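The relationships above can be put in numbers. A short MATLAB-style sketch, assuming typical (hypothetical) indices for a silica fibre of n1 = 1.48 for the core and n2 = 1.46 for the cladding:

```matlab
% Critical angle and acceptance cone for an assumed step-index fibre.
n1 = 1.48;                % core refractive index (assumed)
n2 = 1.46;                % cladding refractive index (assumed)

ic = asind(n2 / n1);      % critical angle in degrees: sin(ic) = n2/n1
NA = sqrt(n1^2 - n2^2);   % numerical aperture of the fibre
theta_max = asind(NA);    % half-angle of the acceptance cone, degrees
```

With these indices the critical angle comes out around 80°, so only light striking the core-cladding boundary at a grazing angle is confined, and the acceptance cone half-angle is roughly 14°.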
Visit http://en.wikipedia.org/wiki/Optical_fibres for more information.
Using Formulas and Functions in Microsoft Excel 2013
- Creating simple cell formulas
- Assigning names to groups of cells
- Using names in formulas
- Creating a formula that references values in an Excel table
- Creating formulas that reference cells in other workbooks
- Changing links to different workbooks
- Analyzing data by using the Quick Analysis lens
- Summing a group of cells without using a formula
- Creating a summary formula
- Summing with subtotals and grand totals
- Exploring the Excel function library
- Using the IF function
- Checking formula references
- Debugging your formulas
Exploring the Excel function library
Excel offers hundreds of different functions. You can use Excel functions to determine mortgage payments, perform scientific calculations, or find the square root of a number. The best way to become familiar with the functions available in Excel is to display the Insert Function dialog box and move through the listed functions, clicking the ones that look interesting. When you click a function, its description appears at the bottom of the dialog box.
Another way to get information about a function is to view the ScreenTip that appears next to the function. If you double-click a cell with a function, a ScreenTip with the function’s structure and expected values appears below it. Clicking an element of the structure points to the cell or cells providing that value.
List functions available from the Excel library
Click the Insert Function button on the formula bar.
Display the drop-down list, and click the function category that you want to view.
Click the function that you want to examine.
Click Cancel to close the Function Arguments dialog box.
Use function ScreenTips
Double-click a cell that contains a formula.
In the ScreenTip, click the function name to open the Help file entry for the function.
Click the Close button to close the Help window.
Click an argument to select the cells to which it refers.
The joint NASA/ESA Cassini-Huygens mission revealed some amazing things about Saturn and its system of moons. In the thirteen years that it spent studying the system – before it plunged into Saturn’s atmosphere on September 15th, 2017 – it delivered some of the most compelling evidence to date that extra-terrestrial life could exist in the outer Solar System. And years later, scientists are still poring over the data it gathered.
For instance, a team of German scientists recently examined data gathered by the Cassini orbiter around Enceladus’ southern polar region, where plume activity regularly sends jets of icy particles into space. What they found was evidence of organic signatures that could be the building blocks for amino acids, the very thing that life is made of! This latest evidence shows that life really could exist beneath Enceladus’ icy crust.
As soon as the Cassini-Huygens mission arrived in the Saturn system in 2004, it began to send back a number of startling discoveries. One of the biggest was the discovery of plume activity around the southern polar region of Saturn’s moon Enceladus, which appears to be the result of geothermal activity and an ocean in the moon’s interior. This naturally gave rise to a debate about whether or not this interior ocean could support life.
Since then, multiple studies have been conducted to get a better idea of just how likely it is that life exists inside Enceladus. The latest comes from the University of Washington’s Department of Earth and Space Sciences (ESS), which shows that concentrations of carbon dioxide, hydrogen and methane in Enceladus’ interior ocean (as well as its pH levels) are more conducive to life than previously thought.
Since the 1970s, when the Voyager probes captured images of Europa’s icy surface, scientists have suspected that life could exist in interior oceans of moons in the outer Solar System. Since then, other evidence has emerged that has bolstered this theory, ranging from icy plumes on Europa and Enceladus, interior models of hydrothermal activity, and even the groundbreaking discovery of complex organic molecules in Enceladus’ plumes.
However, in some locations in the outer Solar System, conditions are very cold and water is only able to exist in liquid form because of the presence of toxic antifreeze chemicals. However, according to a new study by an international team of researchers, it is possible that bacteria could survive in these briny environments. This is good news for those hoping to find evidence of life in extreme environments of the Solar System.
Basically, on bodies like Ceres, Callisto, Triton, and Pluto – which are either far from the Sun or do not have interior heating mechanisms – interior oceans are believed to exist because of the presence of certain chemicals and salts (such as ammonia). These “antifreeze” compounds ensure that their oceans have lower freezing points, but create an environment that would be too cold and toxic to life as we know it.
For the sake of their study, the team sought to determine if microbes could indeed survive in these environments by conducting tests with Planococcus halocryophilus, a bacteria found in the Arctic permafrost. They then subjected this bacteria to solutions of sodium, magnesium and calcium chloride as well as perchlorate, a chemical compound that was found by the Phoenix lander on Mars.
They then subjected the solutions to temperatures ranging from +25°C to -30°C through multiple freeze and thaw cycles. What they found was that the bacteria’s survival rates depended on the solution and temperatures involved. For instance, bacteria suspended in chloride-containing (saline) samples had better chances of survival compared to those in perchlorate-containing samples – though survival rates increased the more the temperatures were lowered.
For instance, the team found that bacteria in a sodium chloride (NaCl) solution died within two weeks at room temperature. But when temperatures were lowered to 4 °C (39 °F), survivability began to increase and almost all the bacteria survived by the time temperatures reached -15 °C (5 °F). Meanwhile, bacteria in the magnesium and calcium-chloride solutions had high survival rates at –30 °C (-22 °F).
The results also varied for the three saline solvents depending on the temperature. Bacteria in calcium chloride (CaCl2) had significantly lower survival rates than those in sodium chloride (NaCl) and magnesium chloride (MgCl2) between 4 and 25 °C (39 and 77 °F), but lower temperatures boosted survival in all three. The survival rates in perchlorate solution were far lower than in other solutions.
However, this was generally in solutions where perchlorate constituted 50% of the mass of the total solution (which was necessary for the water to remain liquid at lower temperatures), which would be significantly toxic. At concentrations of 10%, the bacteria were still able to grow. This is semi-good news for Mars, where the soil contains less than one weight percent of perchlorate.
However, Heinz also pointed out that salt concentrations in soil are different than those in a solution. Still, this could still be good news where Mars is concerned, since temperatures and precipitation levels there are very similar to parts of Earth – the Atacama Desert and parts of Antarctica. The fact that bacteria can survive such environments on Earth indicates they could survive on Mars too.
In general, the research indicated that colder temperatures boost microbial survivability, but this depends on the type of microbe and the composition of the chemical solution. As Heinz told Astrobiology Magazine:
“[A]ll reactions, including those that kill cells, are slower at lower temperatures, but bacterial survivability didn’t increase much at lower temperatures in the perchlorate solution, whereas lower temperatures in calcium chloride solutions yielded a marked increase in survivability.”
The team also found that bacteria did better in saltier solutions when it came to freezing and thawing cycles. In the end, the results indicate that survivability all comes down to a careful balance. Whereas lower concentrations of chemical salts meant that bacteria could survive and even grow, the temperatures at which water would remain in a liquid state would be reduced. It also indicated that salty solutions improve bacteria survival rates when it comes to freezing and thawing cycles.
Of course, the team emphasized that just because bacteria can subsist in certain conditions doesn’t mean they will thrive there. As Theresa Fisher, a PhD student at Arizona State University’s School of Earth and Space Exploration and a co-author on the study, explained:
“Survival versus growth is a really important distinction, but life still manages to surprise us. Some bacteria can not only survive in low temperatures, but require them to metabolize and thrive. We should try to be unbiased in assuming what’s necessary for an organism to thrive, not just survive.”
As such, Heinz and his colleagues are currently working on another study to determine how different concentrations of salts across different temperatures affect bacterial propagation. In the meantime, this study and others like it are able to provide some unique insight into the possibilities for extraterrestrial life by placing constraints on the kinds of conditions that they can survive and grow in.
These studies also help when it comes to the search for extraterrestrial life, since knowing where life can exist allows us to focus our search efforts. In the coming years, missions to Europa, Enceladus, Titan and other locations in the Solar System will be looking for biosignatures that indicate the presence of life on or within these bodies. Knowing that life can survive in cold, briny environments opens up additional possibilities.
For decades, ever since the Pioneer and Voyager missions passed through the outer Solar System, scientists have speculated that life might exist within icy bodies like Jupiter’s moon Europa. However, thanks to the Cassini mission, scientists now believe that other moons in the outer Solar System – such as Saturn’s moon Enceladus – could possibly harbor life as well.
For instance, Cassini observed plume activity coming from Enceladus’ southern polar region that indicated the presence of hydrothermal activity inside. What’s more, these plumes contained organic molecules and hydrated minerals, which are potential indications of life. To see if life could thrive inside this moon, a team of scientists conducted a test where strains of Earth bacteria were subjected to conditions similar to what is found inside Enceladus.
For the sake of their study, the team chose to work with three strains of methanogenic archaea, among them Methanothermococcus okinawensis. These microorganisms thrive in low-oxygen environments, consume chemical products known to exist on Enceladus – such as carbon dioxide (CO2) and molecular hydrogen (H2) – and emit methane (CH4) as a metabolic byproduct. As they state:
“To investigate growth of methanogens under Enceladus-like conditions, three thermophilic and methanogenic strains, Methanothermococcus okinawensis (65 °C), Methanothermobacter marburgensis (65 °C), and Methanococcus villosus (80 °C), all able to fix carbon and gain energy through the reduction of CO2 with H2 to form CH4, were investigated regarding growth and biological CH4 production under different headspace gas compositions…”
These strains were selected because of their ability to grow in a temperature range that is characteristic of the vicinity around hydrothermal vents, in a chemically defined medium, and at low partial pressures of molecular hydrogen. This is consistent with what has been observed in Enceladus’ plumes and what is believed to exist within the moon’s interior.
These types of archaea can still be found on Earth today, lingering in deep-sea fissures and around hydrothermal vents. In particular, the strain of M. okinawensis has been determined to exist in only one location around the deep-sea hydrothermal vent field at Iheya Ridge in the Okinawa Trough near Japan. Since this vent is located at a depth of 972 m (3189 ft) below sea level, this suggests that this strain has a tolerance toward high pressure.
For many years, scientists have suspected that Earth’s hydrothermal vents played a vital role in the emergence of life, and that similar vents could exist within the interior of moons like Europa, Ganymede, Titan, Enceladus, and other bodies in the outer Solar System. As a result, the research team believed that methanogenic archaea could also exist within these bodies.
After subjecting the strains to Enceladus-like temperature, pressure and chemical conditions in a laboratory environment, they found that one of the three strains was able to flourish and produce methane. The strain even managed to survive after the team introduced harsh chemicals that are present on Enceladus, and which are known to inhibit the growth of microbes. As they conclude in their study:
“In this study, we show that the methanogenic strain M. okinawensis is able to propagate and/or to produce CH4 under putative Enceladus-like conditions. M. okinawensis was cultivated under high-pressure (up to 50 bar) conditions in defined growth medium and gas phase, including several potential inhibitors that were detected in Enceladus’ plume.”
From this, they determined that some of the methane found in Enceladus’ plumes could have been produced by methanogenic microbes. As Simon Rittmann, a microbiologist at the University of Vienna and lead author of the study, explained in an interview with The Verge. “It’s likely this organism could be living on other planetary bodies,” he said. “And it could be really interesting to investigate in future missions.”
In the coming decades, NASA and other space agencies plan to send multiple mission to the Jupiter and Saturn systems to investigate their “ocean worlds” for potential signs of life. In the case of Enceladus, this will most likely involve a lander that will set down around the southern polar region and collect samples from the surface to determine the presence of biosignatures.
Alternately, an orbiter mission may be developed that will fly through Enceladus’ plumes and collect bioreadings directly from the moon’s ejecta, thus picking up where Cassini left off. Whatever form the mission takes, the discoveries are expected to be a major breakthrough. At long last, we may finally have proof that Earth is not the only place in the Solar System where life can exist.
Be sure to check out John Michael Godier’s video titled “Enceladus and the Conditions for Life” as well:
When the Cassini mission arrived in the Saturn system in 2004, it discovered something rather unexpected in Enceladus’ southern hemisphere. From hundreds of fissures located in the polar region, plumes of water and organic molecules were spotted periodically spewing forth. This was the first indication that Saturn’s moon may have an interior ocean caused by hydrothermal activity near the core-mantle boundary.
According to a new study based on Cassini data, which it obtained before diving into Saturn’s atmosphere on September 15th, this activity may have been going on for some time. In fact, the study team concluded that if the moon’s core is porous enough, it could have generated enough heat to maintain an interior ocean for billions of years. This study is the most encouraging indication yet that the interior of Enceladus could support life.
Prior to the Cassini mission’s many flybys of Enceladus, scientists believed this moon’s surface was composed of solid ice. It was only after noticing the plume activity that they came to realize that it had water jets that extended all the way down to a warm-water ocean in its interior. From the data obtained by Cassini, scientists were even able to make educated guesses of where this internal ocean lay.
All told, Enceladus is a relatively small moon, measuring some 500 km (311 mi) in diameter. Based on gravity measurements performed by Cassini, its interior ocean is believed to lie beneath an icy outer surface at depths of 20 to 25 km (12.4 to 15.5 mi). However, this surface ice thins to about 1 to 5 km (0.6 to 3.1 mi) over the southern polar region, where the jets of water and icy particles jet through fissures.
Based on the way Enceladus orbits Saturn with a certain wobble (aka. libration), scientists have been able to make estimates of the ocean’s depth, which they place at 26 to 31 km (16 to 19 mi). All of this surrounds a core which is believed to be composed of silicate minerals and metal, but which is also porous. Despite all these findings, the source of the interior heat has remained something of an open question.
This mechanism would have to be active when the moon formed billions of years ago and is still active today (as evidenced by the current plume activity). As Dr. Choblet explained in an ESA press statement:
“Where Enceladus gets the sustained power to remain active has always been a bit of mystery, but we’ve now considered in greater detail how the structure and composition of the moon’s rocky core could play a key role in generating the necessary energy.”
For years, scientists have speculated that tidal forces caused by Saturn’s gravitational influence are responsible for Enceladus’ internal heating. The way Saturn pushes and pulls the moon as it follows an elliptical path around the planet is also believed to be what causes Enceladus’ icy shell to deform, causing the fissures around the southern polar region. These same mechanisms are believed to be what is responsible for Europa’s interior warm-water ocean.
However, the energy produced by tidal friction in the ice is too weak to counterbalance the heat loss seen from the ocean. At the rate Enceladus’ ocean is losing energy to space, the entire moon would freeze solid within 30 million years. Similarly, the natural decay of radioactive elements within the core (which has been suggested for other moons as well) is also about 100 times too weak to explain Enceladus’ interior heat and plume activity.
To address this, Dr. Choblet and his team conducted simulations of Enceladus’ core to determine what kind of conditions could allow for tidal heating over billions of years. As they state in their study:
“In absence of direct constraints on the mechanical properties of Enceladus’ core, we consider a wide range of parameters to characterize the rate of tidal friction and the efficiency of water transport by porous flow. The unconsolidated core of Enceladus can be viewed as a highly granular/fragmented material, in which tidal deformation is likely to be associated with intergranular friction during fragment rearrangements.”
What they found was that in order for the Cassini observations to be borne out, Enceladus’ core would need to be made of unconsolidated, easily deformable, porous rock. This core could be easily permeated by liquid water, which would seep into the core and gradually heated through tidal friction between sliding rock fragments. Once this water was sufficiently heated, it would rise upwards because of temperature differences with its surroundings.
This process ultimately transfers heat to the interior ocean in narrow plumes which rise to meet Enceladus’ icy shell. Once there, the heat melts the surface ice, forming fissures through which jets reach into space, spewing water, ice particles and hydrated minerals that replenish Saturn’s E-Ring. All of this is consistent with the observations made by Cassini, and is sustainable from a geophysical point of view.
In other words, this study is able to show that action in Enceladus’ core could produce the necessary heating to maintain a global ocean and produce plume activity. Since this action is a result of the core’s structure and tidal interaction with Saturn, it is perfectly logical that it has been taking place for billions of years. So beyond providing the first coherent explanation for Enceladus’ plume activity, this study is also a strong indication of habitability.
As scientists have come to understand, life takes a long time to get going. On Earth, it is estimated that the first microorganisms arose after 500 million years, and hydrothermal vents are believed to have played a key role in that process. It took another 2.5 billion years for the first multi-cellular life to evolve, and land-based plants and animals have only been around for the past 500 million years.
Knowing that a moon like Enceladus – which has the necessary chemistry to support life – has also had the necessary energy for billions of years is therefore very encouraging. One can only imagine what we will find once future missions begin inspecting its plumes more closely!
Ever since the Cassini mission entered the Saturn system and began studying its moons, Enceladus has become a major source of interest. Once the probe detected plumes of water and organic molecules erupting from the moon’s southern polar region, scientists began to speculate that Enceladus may possess a warm-water ocean in its interior – much like Jupiter’s moon Europa and other bodies in our Solar System.
In the future, NASA hopes to send another mission to this system to further explore these plumes and the interior of Enceladus. This mission will likely include a new instrument that was recently announced by NASA, known as the Submillimeter Enceladus Life Fundamentals Instrument (SELFI). This instrument, which was proposed by a team from the NASA Goddard Space Flight Center, recently received support for further development.
Prior to the Cassini mission, scientists thought that the surface of Enceladus was frozen solid. However, Cassini data revealed a slight wobble in the moon’s orbit that suggested the presence of an interior ocean. Much like Europa, this is caused by tidal forces that cause flexing in the core, which generates enough heat to hold liquid water in the interior. Around the southern pole, this results in the ice cracking open and forming fissures.
The Cassini mission also discovered plumes emanating from about 100 different fissures which continuously spew icy particles, water vapor, carbon dioxide, methane, and other gases into space. To study these more closely, NASA has been developing some ambitious instruments that will rely on millimeter-wave or radio frequency (RF) waves to determine their composition and learn more about Enceladus’ interior ocean.
According to SELFI Principal Investigator Gordon Chin, SELFI represents a significant improvement over existing submillimeter-wavelength devices. Once deployed, it will measure traces of chemicals in the plumes of water and icy particles that periodically emanate from Enceladus’ southern fissures, also known as “Tiger Stripes“. In addition to revealing the chemical composition of the ocean, this instrument will also indicate its potential for supporting life.
On Earth, hydrothermal vents are home to thriving ecosystems, and are even suspected to be the place where life first emerged. This is why scientists are so eager to study hydrothermal activity on moons like Enceladus, since these could represent the most likely places to find extra-terrestrial life in our Solar System. As Chin indicated in a NASA press statement:
“Submillimeter wavelengths, which are in the range of very high-frequency radio, give us a way to measure the quantity of many different kinds of molecules in a cold gas. We can scan through all the plumes to see what’s coming out from Enceladus. Water vapor and other molecules can reveal some of the ocean’s chemistry and guide a spacecraft onto the best path to fly through the plumes to make other measurements directly.”
Molecules like water, carbon dioxide and other elements broadcast specific radio frequencies, which submillimeter spectrometers are sensitive to. The spectral lines are very discrete, and the intensity at which they broadcast can be used to quantify their existence. In other words, instruments like SELFI will not only be able to determine the chemical composition of Enceladus’ interior ocean, but also the abundance of those chemicals.
For decades, spectrometers have been used in space sciences to measure the chemical compositions of planets, stars, comets and other targets. Most recently, scientists have been attempting to obtain spectra from distant planets in order to determine the chemical compositions of their atmospheres. This is crucial when it comes to finding potentially-habitable exoplanets, since water vapor, nitrogen and oxygen gas are all required for life as we know it.
Performing scans in the submillimeter band is a relatively new process, though, since submillimeter-sensitive instruments are complex and difficult to build. But with the help of NASA research-and-development funding, Chin and his colleagues are increasing the instrument’s sensitivity using an amplifier that will boost the signal to around 557 GHz. This will allow SELFI to detect even minute traces of water and gases coming from the surface of Enceladus.
Other improvements include a more energy-efficient and flexible radio frequency data-processing system, as well as a sophisticated digital spectrometer for the RF signal. This latter improvement will employ high-speed programmable circuitry to convert RF data into digital signals that can be analyzed to measure gas quantities, temperatures, and velocities from Enceladus’ plumes.
These enhancements will allow SELFI to simultaneously detect and analyze 13 different types of molecules, which include various isotopes of water, methanol, ammonia, ozone, hydrogen peroxide, sulfur dioxide, and sodium chloride (aka. salt). Beyond Enceladus, Chin believes the team can sufficiently improve the instrument for proposed future missions. “SELFI is really new,” he said. “This is one of the most ambitious submillimeter instruments ever built.”
For instance, in recent years, scientists have spotted plume activity coming from the surface of Europa. Here too, this activity is believed to be the result of geothermal activity, which sends warm water plumes from the moon’s interior ocean to the surface. Already, NASA hopes to examine these plumes and those on Enceladus using the James Webb Space Telescope, which will be deploying in 2019.
Another possibility would be to equip the proposed Europa Clipper – which is set to launch between 2022 and 2025 – with an instrument like SELFI. The instrument package for this probe already calls for a spectrometer, but an improved submillimeter-wave and RF device could allow for a more detailed look at Europa’s plumes. This data could in turn resolve the decades-old debate as to whether or not Europa’s interior is capable of supporting life.
In the coming decades, one of the greatest priorities of space exploration is to investigate the Solar System’s “Ocean Worlds” for signs of life. To see this through, NASA and other space agencies are busy developing the necessary tools to sniff out all the chemical and biological indicators. Within a decade, with any luck, we might just find that life on Earth is not the exception, but part of a larger norm.
In October of 2018, the James Webb Space Telescope (JWST) will be launched into orbit. As part of NASA’s Next Generation Space Telescope program, the JWST will spend the coming years studying every phase of cosmic history. This will involve probing the first light of the Universe (caused by the Big Bang), the first galaxies to form, and extra-solar planets in nearby star systems.
In addition to all of that, the JWST will also be dedicated to studying our Solar System. As NASA recently announced, the telescope will use its infrared capabilities to study two “Ocean Worlds” in our Solar System – Jupiter’s moon Europa and Saturn’s moon Enceladus. In so doing, it will add to observations previously made by NASA’s Galileo and Cassini orbiters and help guide future missions to these icy moons.
The moons were chosen by scientists who helped to develop the telescope (aka. guaranteed time observers) and are therefore given the privilege of being among the first to use it. Europa and Enceladus were added to the telescope’s list of targets since one of the primary goals of the telescope is to study the origins of life in the Universe. In addition to looking for habitable exoplanets, NASA also wants to study objects within our own Solar System.
One of the main focuses will be on the plumes of water that have been observed breaking through the icy surfaces of Enceladus and Europa. Since 2005, scientists have known that Enceladus has plumes that periodically erupt from its southern polar region, spewing water and organic chemicals that replenish Saturn’s E-Ring. It has since been discovered that these plumes reach all the way down into the interior ocean that exists beneath Enceladus’ icy surface.
In 2012, astronomers using the Hubble Space Telescope detected similar plumes coming from Europa. These plumes were spotted coming from the moon’s southern hemisphere, and were estimated to reach up to 200 km (125 miles) into space. Subsequent studies indicated that these plumes were intermittent, and presumably rained water and organic materials from the interior back onto the surface.
These observations were especially intriguing since they bolstered the case for Europa and Enceladus having interior, warm-water oceans that could harbor life. These oceans are believed to be the result of geological activity in the interior that is caused by tidal flexing. Based on the evidence gathered by the Galileo and Cassini orbiters, scientists have theorized that these surface plumes are the result of these same geological processes.
The presence of this activity could also mean that these moons have hydrothermal vents located at their core-mantle boundaries. On Earth, hydrothermal vents (located on the ocean floor) are believed to have played a major role in the emergence of life. As such, their existence on other bodies within the Solar System is viewed as a possible indication of extra-terrestrial life.
The effort to study these “Ocean Worlds” will be led by Geronimo Villanueva, a planetary scientist at NASA’s Goddard Space Flight Center. As he explained in a recent NASA press statement, he and his team will be addressing certain fundamental questions:
“Are they made of water ice? Is hot water vapor being released? What is the temperature of the active regions and the emitted water? Webb telescope’s measurements will allow us to address these questions with unprecedented accuracy and precision.”
Villanueva’s team is part of a larger effort to study the Solar System, which is being led by Heidi Hammel – the executive VP of the Association of Universities for Research in Astronomy (AURA). As she described the JWST’s “Ocean World” campaign to Universe Today via email:
“We will be seeking signatures of plume activity on these ocean worlds as well as active spots. With the near-infrared camera of NIRCAM, we will have just enough spatial resolution to distinguish general regions of the moons that could be “active” (creating plumes). We will also use spectroscopy (examining specific colors of light) to sense the presence of water, methane and several other organic species in plume material.”
For Enceladus, the team will analyze the molecular composition of its plumes and perform a broad analysis of its surface features. Due to its small size, high-resolution imaging of the surface will not be possible, but this should not be a problem since the Cassini orbiter already mapped much of its surface terrain. All told, Cassini has spent the past 13 years studying the Saturn system and will conclude the “Grand Finale” phase of its mission this September 15th.
These surveys, it is hoped, will find evidence of organic signatures in the plumes, such as methane, ethanol and ethane. To be fair, there are no guarantees that the JWST’s observations will coincide with plumes coming from these moons, or that the emissions will have enough organic molecules in them to be detectable. Moreover, these indicators could also be caused by geological processes.
Nevertheless, the JWST is sure to provide evidence that will allow scientists to better characterize the active regions of these moons. It is also anticipated that it will be able to pinpoint locations that will be of interest for future missions, such as NASA’s Europa Clipper mission. Consisting of an orbiter and lander, this mission – which is expected to launch sometime in the 2020s – will attempt to determine if Europa is habitable.
As Dr. Hammel explained, the study of these two “Ocean Moons” is also intended to advance our understanding about the origins of life in the Universe:
“These two ocean moons are thought to provide environments that may harbor water-based life as we know it. At this point, the issue of life elsewhere is completely unknown, though there is much speculation. JWST can move us closer to understanding these potentially habitable environments, complementing robotic spacecraft missions that are currently in development (Europa Clipper) and may be planned for the future. At the same time, JWST will be examining the far more distant potentially habitable environments of planets around other stars. These two lines of exploration – local and distant – allow us to make significant advances in the search for life elsewhere.”
Once deployed, the JWST will be the most powerful space telescope ever built, relying on eighteen segmented mirrors and a suite of instruments to study the infrared Universe. While it is not meant to replace the Hubble Space Telescope, it is in many ways the natural heir to this historic mission. And it is certainly expected to expand on many of Hubble’s greatest discoveries, not the least of which are here in the Solar System.
Be sure to check out this video on the kinds of spectrographic data the JWST will provide in the coming years, courtesy of NASA:
Geographic coordinate system
A geographic coordinate system is a coordinate system that enables every location on Earth to be specified by a set of numbers, letters or symbols. The coordinates are often chosen such that one of the numbers represents a vertical position and two or three of the numbers represent a horizontal position; alternatively, a geographic position may be expressed in a combined three-dimensional Cartesian vector. A common choice of coordinates is latitude, longitude and elevation. To specify a location on a plane requires a map projection.
The invention of a geographic coordinate system is generally credited to Eratosthenes of Cyrene, who composed his now-lost Geography at the Library of Alexandria in the 3rd century BC. A century later, Hipparchus of Nicaea improved on this system by determining latitude from stellar measurements rather than solar altitude and determining longitude by timings of lunar eclipses, rather than dead reckoning. In the 1st or 2nd century, Marinus of Tyre compiled an extensive gazetteer and mathematically-plotted world map using coordinates measured east from a prime meridian at the westernmost known land, designated the Fortunate Isles, off the coast of western Africa around the Canary or Cape Verde Islands, and measured north or south of the island of Rhodes off Asia Minor. Ptolemy credited him with the full adoption of longitude and latitude, rather than measuring latitude in terms of the length of the midsummer day.
Ptolemy's 2nd-century Geography used the same prime meridian but measured latitude from the Equator instead. After their work was translated into Arabic in the 9th century, Al-Khwārizmī's Book of the Description of the Earth corrected Marinus' and Ptolemy's errors regarding the length of the Mediterranean Sea, causing medieval Arabic cartography to use a prime meridian around 10° east of Ptolemy's line. Mathematical cartography resumed in Europe following Maximus Planudes' recovery of Ptolemy's text a little before 1300; the text was translated into Latin at Florence by Jacobus Angelus around 1407.
In 1884, the United States hosted the International Meridian Conference, attended by representatives from twenty-five nations. Twenty-two of them agreed to adopt the longitude of the Royal Observatory in Greenwich, England as the zero-reference line. The Dominican Republic voted against the motion, while France and Brazil abstained. France adopted Greenwich Mean Time in place of local determinations by the Paris Observatory in 1911.
In order to be unambiguous about the direction of "vertical" and the "horizontal" surface above which they are measuring, map-makers choose a reference ellipsoid with a given origin and orientation that best fits their need for the area to be mapped. They then choose the most appropriate mapping of the spherical coordinate system onto that ellipsoid, called a terrestrial reference system or geodetic datum.
Datums may be global, meaning that they represent the whole Earth, or they may be local, meaning that they represent an ellipsoid best-fit to only a portion of the Earth. Points on the Earth's surface move relative to each other due to continental plate motion, subsidence, and diurnal Earth tidal movement caused by the Moon and the Sun. This daily movement can be as much as a meter. Continental movement can be up to 10 cm a year, or 10 m in a century. A weather system high-pressure area can cause a sinking of 5 mm. Scandinavia is rising by 1 cm a year as a result of the melting of the ice sheets of the last ice age, but neighboring Scotland is rising by only 0.2 cm. These changes are insignificant if a local datum is used, but are statistically significant if a global datum is used.
Examples of global datums include World Geodetic System (WGS 84, also known as EPSG:4326 ), the default datum used for the Global Positioning System, and the International Terrestrial Reference Frame (ITRF), used for estimating continental drift and crustal deformation. The distance to Earth's center can be used both for very deep positions and for positions in space.
Local datums chosen by a national cartographical organization include the North American Datum, the European ED50, and the British OSGB36. Given a location, the datum provides the latitude and longitude. In the United Kingdom there are three common latitude, longitude, and height systems in use. WGS 84 differs at Greenwich from OSGB36, the datum used on published maps, by approximately 112 m. The military system ED50, used by NATO, differs by about 120 m to 180 m.
The latitude and longitude on a map made against a local datum may not be the same as one obtained from a GPS receiver. Converting coordinates from one datum to another requires a datum transformation such as a Helmert transformation, although in certain situations a simple translation may be sufficient.
In popular GIS software, data projected in latitude/longitude is often represented as a Geographic Coordinate System. For example, data in latitude/longitude if the datum is the North American Datum of 1983 is denoted by 'GCS North American 1983'.
Latitude and longitude
The "latitude" (abbreviation: Lat., φ, or phi) of a point on Earth's surface is the angle between the equatorial plane and the straight line that passes through that point and through (or close to) the center of the Earth. Lines joining points of the same latitude trace circles on the surface of Earth called parallels, as they are parallel to the Equator and to each other. The North Pole is 90° N; the South Pole is 90° S. The 0° parallel of latitude is designated the Equator, the fundamental plane of all geographic coordinate systems. The Equator divides the globe into Northern and Southern Hemispheres.
The "longitude" (abbreviation: Long., λ, or lambda) of a point on Earth's surface is the angle east or west of a reference meridian to another meridian that passes through that point. All meridians are halves of great ellipses (often called great circles), which converge at the North and South Poles. The meridian of the British Royal Observatory in Greenwich, in southeast London, England, is the international prime meridian, although some organizations—such as the French Institut Géographique National—continue to use other meridians for internal purposes. The prime meridian determines the proper Eastern and Western Hemispheres, although maps often divide these hemispheres further west in order to keep the Old World on a single side. The antipodal meridian of Greenwich is both 180°W and 180°E. This is not to be conflated with the International Date Line, which diverges from it in several places for political and convenience reasons, including between far eastern Russia and the far western Aleutian Islands.
The combination of these two components specifies the position of any location on the surface of Earth, without consideration of altitude or depth. The grid formed by lines of latitude and longitude is known as a "graticule". The origin/zero point of this system is located in the Gulf of Guinea about 625 km (390 mi) south of Tema, Ghana.
Length of a degree
On the GRS80 or WGS84 spheroid at sea level at the Equator, one latitudinal second measures 30.715 meters, one latitudinal minute is 1843 meters and one latitudinal degree is 110.6 kilometers. The circles of longitude, meridians, meet at the geographical poles, with the west–east width of a second naturally decreasing as latitude increases. On the Equator at sea level, one longitudinal second measures 30.92 meters, a longitudinal minute is 1855 meters and a longitudinal degree is 111.3 kilometers. At 30° a longitudinal second is 26.76 meters, at Greenwich (51°28′38″N) 19.22 meters, and at 60° it is 15.42 meters.
On the WGS84 spheroid, the length in meters of a degree of latitude at latitude φ (that is, the number of meters you would have to travel along a north–south line to move 1 degree in latitude, when at latitude φ) is about

111132.92 − 559.82 cos 2φ + 1.175 cos 4φ − 0.0023 cos 6φ

The returned measure of meters per degree latitude varies continuously with latitude.

Similarly, the length in meters of a degree of longitude can be calculated as

111412.84 cos φ − 93.5 cos 3φ + 0.118 cos 5φ

(Those coefficients can be improved, but as they stand the distance they give is correct within a centimeter.)

The formulae both return units of meters per degree.
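The degree lengths quoted in this section can be computed from the standard WGS84 series expansions. A sketch in plain Python, using the commonly quoted coefficients, reproduces the equatorial figures of roughly 110.6 km per degree of latitude and 111.3 km per degree of longitude:

```python
import math

def lat_degree_length(phi_deg):
    """Meters per degree of latitude on WGS84 (series expansion)."""
    p = math.radians(phi_deg)
    return (111132.92 - 559.82 * math.cos(2 * p)
            + 1.175 * math.cos(4 * p) - 0.0023 * math.cos(6 * p))

def lon_degree_length(phi_deg):
    """Meters per degree of longitude on WGS84 (series expansion)."""
    p = math.radians(phi_deg)
    return (111412.84 * math.cos(p) - 93.5 * math.cos(3 * p)
            + 0.118 * math.cos(5 * p))

print(lat_degree_length(0.0))   # about 110.57 km
print(lon_degree_length(60.0))  # about 55.80 km, matching Saint Petersburg
```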
An alternative method to estimate the length of a longitudinal degree at latitude φ is to assume a spherical Earth (to get the width per minute and second, divide by 60 and 3600, respectively):

(π/180) × 6,367,449 × cos φ

where Earth's average meridional radius is 6,367,449 m. Since the Earth is an oblate spheroid, not spherical, that result can be off by several tenths of a percent; a better approximation of a longitudinal degree at latitude φ is

(π/180) × a cos β

where Earth's equatorial radius a equals 6,378,137 m and tan β = (b/a) tan φ; for the GRS80 and WGS84 spheroids, b/a calculates to be 0.99664719. (β is known as the reduced (or parametric) latitude). Aside from rounding, this is the exact distance along a parallel of latitude; getting the distance along the shortest route will be more work, but those two distances are always within 0.6 meter of each other if the two points are one degree of longitude apart.
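A sketch comparing the two approximations — the spherical-Earth estimate and the reduced-latitude form — using the constants given in the text (function names are illustrative):

```python
import math

A = 6378137.0          # WGS84 equatorial radius, meters
B_OVER_A = 0.99664719  # polar-to-equatorial axis ratio, as given in the text

def lon_degree_spherical(phi_deg, radius=6367449.0):
    """Spherical-Earth estimate of one degree of longitude, in meters."""
    return math.pi / 180.0 * radius * math.cos(math.radians(phi_deg))

def lon_degree_parametric(phi_deg):
    """Better estimate using the reduced (parametric) latitude beta."""
    beta = math.atan(B_OVER_A * math.tan(math.radians(phi_deg)))
    return math.pi / 180.0 * A * math.cos(beta)

# The two estimates differ by a few tenths of a percent at most latitudes.
print(lon_degree_spherical(45.0), lon_degree_parametric(45.0))
```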
Longitudinal length equivalents at selected latitudes:
|Latitude||City||Degree||Minute||Second||±0.0001°|
|60°||Saint Petersburg||55.80 km||0.930 km||15.50 m||5.58 m|
|51° 28′ 38″ N||Greenwich||69.47 km||1.158 km||19.30 m||6.95 m|
|45°||Bordeaux||78.85 km||1.31 km||21.90 m||7.89 m|
|30°||New Orleans||96.49 km||1.61 km||26.80 m||9.65 m|
|0°||Quito||111.3 km||1.855 km||30.92 m||11.13 m|
To establish the position of a geographic location on a map, a map projection is used to convert geodetic coordinates to plane coordinates on a map; it projects the datum ellipsoidal coordinates and height onto a flat surface of a map. The datum, along with a map projection applied to a grid of reference locations, establishes a grid system for plotting locations. Common map projections in current use include the Universal Transverse Mercator (UTM), the Military Grid Reference System (MGRS), the United States National Grid (USNG), the Global Area Reference System (GARS) and the World Geographic Reference System (GEOREF). Coordinates on a map are usually in terms of northing N and easting E offsets relative to a specified origin.
Map projection formulas depend on the geometry of the projection as well as parameters dependent on the particular location at which the map is projected. The set of parameters can vary based on the type of projection and the conventions chosen for it. For the transverse Mercator projection used in UTM, the associated parameters are the latitude and longitude of the natural origin, the false northing and false easting, and an overall scale factor. Given the parameters associated with a particular location or grid, the projection formulas for the transverse Mercator are a complex mix of algebraic and trigonometric functions.
UTM and UPS systems
The Universal Transverse Mercator (UTM) and Universal Polar Stereographic (UPS) coordinate systems both use a metric-based Cartesian grid laid out on a conformally projected surface to locate positions on the surface of the Earth. The UTM system is not a single map projection but a series of sixty, each covering 6-degree bands of longitude. The UPS system is used for the polar regions, which are not covered by the UTM system.
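Because the UTM layout is a regular grid of sixty 6-degree bands, the zone number follows directly from longitude. A sketch (this ignores the special zone boundaries around Norway and Svalbard):

```python
def utm_zone(lon_deg):
    """UTM longitude zone number (1-60) for a longitude in degrees.

    Zone 1 spans 180 W to 174 W; zones increase eastward every 6 degrees.
    The Norway/Svalbard exceptions to the regular grid are not handled.
    """
    return int((lon_deg + 180.0) // 6) % 60 + 1

print(utm_zone(0.0))    # Greenwich falls at the zone 30/31 boundary
print(utm_zone(-75.0))  # the US east coast around this longitude is zone 18
```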
Stereographic coordinate system
During medieval times, the stereographic coordinate system was used for navigation purposes. It was superseded by the latitude-longitude system and, although no longer used in navigation, is still used in modern times to describe crystallographic orientations in the fields of crystallography, mineralogy and materials science.
Vertical coordinates include height and depth.
3D Cartesian coordinates
Every point that is expressed in ellipsoidal coordinates can be expressed as a rectilinear x, y, z (Cartesian) coordinate. Cartesian coordinates simplify many mathematical calculations. The Cartesian systems of different datums are not equivalent.
The Earth-centered Earth-fixed (also known as the ECEF, ECF, or conventional terrestrial coordinate system) rotates with the Earth and has its origin at the center of the Earth.
The conventional right-handed coordinate system puts:
- The origin at the center of mass of the Earth, a point close to the Earth's center of figure
- The Z axis on the line between the North and South Poles, with positive values increasing northward (but does not exactly coincide with the Earth's rotational axis)
- The X and Y axes in the plane of the Equator
- The X axis extending from 180 degrees longitude at the Equator (negative) to 0 degrees longitude (prime meridian) at the Equator (positive)
- The Y axis extending from 90 degrees west longitude at the Equator (negative) to 90 degrees east longitude at the Equator (positive)
An example is the NGS data for a brass disk near Donner Summit, in California. Given the dimensions of the ellipsoid, the conversion from lat/lon/height-above-ellipsoid coordinates to X-Y-Z is straightforward—calculate the X-Y-Z for the given lat-lon on the surface of the ellipsoid and add the X-Y-Z vector that is perpendicular to the ellipsoid there and has length equal to the point's height above the ellipsoid. The reverse conversion is harder: given X-Y-Z we can immediately get longitude, but no closed formula for latitude and height exists. See "Geodetic system." Using Bowring's formula (published in Survey Review in 1976), the first iteration gives latitude correct to within 10⁻¹¹ degree as long as the point is within 10,000 meters above or 5,000 meters below the ellipsoid.
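The straightforward forward conversion described here can be sketched as follows, using the WGS84 constants (the function name is illustrative):

```python
import math

A = 6378137.0            # WGS84 semi-major axis, meters
F = 1.0 / 298.257223563  # WGS84 flattening
E2 = F * (2.0 - F)       # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert latitude, longitude (degrees) and height above the
    ellipsoid (meters) to Earth-centered Earth-fixed X, Y, Z (meters)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Prime-vertical radius of curvature at this latitude
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

print(geodetic_to_ecef(0.0, 0.0, 0.0))   # a point on the Equator at Greenwich
```

As the text notes, the reverse direction has no closed form for latitude and height and is normally done iteratively (e.g. with Bowring's formula).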
Local tangent plane
A local tangent plane can be defined based on the vertical and horizontal dimensions. The vertical coordinate can point either up or down. There are two kinds of conventions for the frames:
- East, North, up (ENU), used in geography
- North, East, down (NED), used specially in aerospace
In many targeting and tracking applications the local ENU Cartesian coordinate system is far more intuitive and practical than ECEF or geodetic coordinates. The local ENU coordinates are formed from a plane tangent to the Earth's surface fixed to a specific location and hence it is sometimes known as a local tangent or local geodetic plane. By convention the east axis is labeled x, the north y and the up z.
In an airplane, most objects of interest are below the aircraft, so it is sensible to define down as a positive number. The NED coordinates allow this as an alternative to the ENU. By convention, the north axis is labeled x′, the east y′ and the down z′. To avoid confusion between x and x′, etc., in this article we will restrict the local coordinate frame to ENU.
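The ENU frame can be sketched as a rotation applied to an ECEF difference vector — an illustrative implementation, not taken from the source:

```python
import math

def ecef_to_enu_matrix(lat_deg, lon_deg):
    """Rows of the rotation taking an ECEF difference vector into local
    East, North, Up components at the given geodetic latitude/longitude."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    sl, cl = math.sin(lat), math.cos(lat)
    so, co = math.sin(lon), math.cos(lon)
    return [[-so,       co,       0.0],   # East
            [-sl * co, -sl * so,  cl],    # North
            [cl * co,   cl * so,  sl]]    # Up

def ecef_delta_to_enu(delta, lat_deg, lon_deg):
    """Rotate an ECEF delta (dx, dy, dz) into (east, north, up)."""
    m = ecef_to_enu_matrix(lat_deg, lon_deg)
    return [sum(row[i] * delta[i] for i in range(3)) for row in m]

# At lat 0, lon 0: ECEF +z points north, ECEF +x points straight up.
print(ecef_delta_to_enu([0.0, 0.0, 1.0], 0.0, 0.0))
```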
On other celestial bodies
Similar coordinate systems are defined for other celestial bodies such as:
- In specialized works, "geographic coordinates" are distinguished from other similar coordinate systems, such as geocentric coordinates and geodetic coordinates. See, for example, Sean E. Urban and P. Kenneth Seidelmann, Explanatory Supplement to the Astronomical Almanac, 3rd ed., (Mill Valley CA: University Science Books, 2013) pp. 20–23.
- The pair had accurate absolute distances within the Mediterranean but underestimated the circumference of the Earth, causing their degree measurements to overstate its length west from Rhodes or Alexandria, respectively.
- WGS 84 is the default datum used in most GPS equipment, but other datums can be selected.
- Alternative versions of latitude and longitude include geocentric coordinates, which measure with respect to Earth's center; geodetic coordinates, which model Earth as an ellipsoid; and geographic coordinates, which measure with respect to a plumb line at the location for which coordinates are given.
|Wikidata has the property:|Media related to Geographic coordinate system at Wikimedia Commons |
Ancient Greek Mathematicians: Thales, Pythagoras, Euclid, and Archimedes
“Geometry is knowledge of the eternally existent,” (“Sacred Mathematics”). This quotation by Plato, an Ancient Greek philosopher, demonstrates the importance of geometry to the foundations of the universe. Geometry encompasses every aspect of life including architecture, physics, and biology. Teachers around the globe instruct the basics of geometry to teen-aged students every day, yet these self-evident ideas were not always simple. It took the collaboration of many great minds to formulate the mathematical conclusions so easily comprehensible today. Ancient Greece’s thriving civilization allowed great thinkers such as Thales, Pythagoras, Euclid, and Archimedes to flourish through discovery and innovation. Because of the considerable time period, these mathematicians belong to one of two categories: the early mathematicians (700-400 BCE) and the later mathematicians (300-200 BCE). Thales and Pythagoras are early mathematicians, while Euclid and Archimedes are later mathematicians. Their discoveries provided a better understanding of geometry and developed our principal understanding of the world around us, thus providing invaluable contributions to the field of mathematics, especially in geometry.
Thales: The Father of Greek Mathematics
One of the earliest great Greek mathematicians was Thales. Thales (624-560 BCE) was born in Miletus, but resided in Egypt for a portion of his life. He returned to Miletus later in life and began to introduce and shape his knowledge of astronomy and mathematics in Greece (Allman 7). As an astronomer, he was famed for accurately predicting the solar eclipse of May 28, 585 BC. But evidence points to this prediction being a fluke, as astronomy at the time was not advanced enough to make such a prediction (Symonds and Scott 2).
Mathematically, however, his contributions are more reputable. Historians believe that Thales introduced the concept of geometry to Greece (“Thales”). Through his use of logical reasoning and his view of geometrical figures as mere ideas rather than physical representations, Thales drew five conclusions about geometry. In circular geometry, Thales proved that the diameter of a circle perfectly bisects the circle, and that an angle inscribed in a semicircle is invariably a right angle. In trigonometry (the geometry of triangles) he discovered that an isosceles triangle’s base angles are equal. Today, architects still rely on this principle to ensure that steeples and spires on buildings are level. He also proved that two triangles sharing two congruent angles and a congruent corresponding side are themselves congruent. Later, artists used this proof in paintings to ensure symmetry, particularly in modern works. Lastly, he proved that when two straight lines intersect, the opposite angles between the two lines equal each other (Symonds and Scott 1), which is crucial to predicting trajectory in physics.
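Thales' semicircle result is easy to check numerically. The sketch below is a hypothetical verification (not anything from the original sources): it places the diameter's endpoints at (-1, 0) and (1, 0) on the unit circle and measures the inscribed angle at an arbitrary point P on the circle.

```python
import math

# Thales' theorem: any angle inscribed in a semicircle is a right angle.
# A and B are the endpoints of a diameter of the unit circle; P is any
# other point on the circle, parameterized by the angle theta.
def inscribed_angle_deg(theta):
    """Angle APB in degrees, for P = (cos theta, sin theta)."""
    ax, ay = -1.0, 0.0
    bx, by = 1.0, 0.0
    px, py = math.cos(theta), math.sin(theta)
    # Vectors from P toward each end of the diameter.
    v1 = (ax - px, ay - py)
    v2 = (bx - px, by - py)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# The angle comes out as 90 degrees no matter where P sits on the arc.
for theta in (0.3, 1.0, 2.5):
    assert abs(inscribed_angle_deg(theta) - 90.0) < 1e-9
```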
His version of geometry was abstract for the time period, as “Thales insisted that geometric statements be established by deductive reasoning rather than by trial and error” (Greenberg 6). He focused on the relationships of the parts of a figure to determine the properties of the remaining pieces of the figure (Allman 7). Through his discoveries, Thales influenced his successors and aided in their discoveries. But, he also applied them practically to Grecian life. The theorems he formed on congruent triangles and their corresponding parts and angles allowed him to more accurately calculate distances, which ultimately aided in sea navigation (Wilson 80), crucial due to this being their main mode of transportation. Thales’ ideas also founded the geometry of lines, “which has ever since been the principal part of geometry,” (Allman 15). Through his development of this principal part of geometry and his exposure of these ideas to the Greeks, Thales greatly impacted the overall development of mathematics. Historians acknowledged these contributions by naming Thales as one of the Seven Wise Men of Greece (“Thales”).
Pythagoras: The Father of Trigonometry
Living from 569-500 BCE, Pythagoras, too, found an interest in mathematics and astronomy as he studied under one of Thales’ pupils, Anaximander. Through his years of research and study of mathematics, Pythagoras attracted a community of followers in his home of Crotona (Wilson 80). Known as the Pythagoreans, scholars credit them with discovering that the sum of the angles of a triangle equals two right angles, or 180 degrees, and with discovering the existence of irrational numbers. Another notable accomplishment is the construction of the five regular solids: the tetrahedron, the hexahedron (cube), the octahedron, the dodecahedron, and the icosahedron (Polyhedron). Later, scientists found these solids to represent the atomic shapes of compounds. Today, students and educators alike most recognize Pythagoras for the Pythagorean theorem, in which “the square of the hypotenuse of a right angled triangle is equal to the sum of the squares of the other two sides” (Symonds and Scott 3), or a^2 + b^2 = c^2. This theorem developed the basic principle of trigonometry, which is the basis of physics.
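Both Pythagorean results mentioned above are simple to verify in code. Here is a small sketch (the function name is my own) checking the a^2 + b^2 = c^2 relation on classic triples, plus the 180-degree angle sum on approximate angles of the 3-4-5 triangle:

```python
# The Pythagorean theorem: for a right triangle with legs a, b and
# hypotenuse c, a^2 + b^2 = c^2.
def is_pythagorean(a, b, c):
    return a * a + b * b == c * c

assert is_pythagorean(3, 4, 5)
assert is_pythagorean(5, 12, 13)
assert not is_pythagorean(2, 3, 4)

# The Pythagorean discovery that a triangle's angles sum to two right
# angles (180 degrees), using approximate angles of the 3-4-5 triangle:
angles = (90.0, 53.13, 36.87)
assert abs(sum(angles) - 180.0) < 1e-9
```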
Eventually, however, the Pythagoreans focused particularly on abstract rather than concrete problems (Symonds and Scott 3). Rather than treating measurable, concrete quantities as numbers, the “Pythagorean worldview was based on the idea that the universe consists of an infinite number of negligibly small indivisible particles” (Naziev 175). This group believed that the objects around them (water, rocks, materials) were all constructed of microscopic single units, later discovered to be atoms. It is from this assertion that Pythagoras coined his slogan, “All is number.” Through this sentiment, he implies that everything in the universe can be explained, organized, and predicted using numbers and mathematics (M. B. 47).
Euclid: The Father of Geometry
Euclid, the first well-known mathematician from Alexandria, lived from 325-265 BCE (Wilson 96). Euclid attended a Platonic school, where he found his passion for mathematics and logic (Greenberg 7). He is most well known for the collection of his plane and solid geometry studies: his book Elements. Influenced by Thales’ geometrical beliefs, Euclid wrote his Elements to serve as an example of deductive reasoning in practice, starting with “initial axioms and deduc[ing] new propositions in a logical and systematic order” (Wilson 96). Consisting of thirteen books covering topics from arithmetic, plane and solid geometry, and number theory, its groundbreaking content and overall influence catapulted this work to become one of the greatest textbooks in history, reportedly second only to the Bible in copies sold. And, because of the success of this title, experts recorded Euclid as the most widely read author in history (Greenberg 7), in addition to one of the greatest mathematicians of all time (Symonds and Scott 4).
The first four volumes of Elements focus on the Pythagoreans and some of their discoveries (Greenberg 7). The fifth volume is said to be the “finest discovery of Greek mathematics” as it explains geometry as dependent on recognizing proportions, and the sixth volume applies these proportions to plane geometry. Volumes seven through nine focus on number theory, while volume ten deals with irrational numbers. Lastly, the eleventh through thirteenth volumes focus on three-dimensional geometry (Symonds and Scott 4). Some examples of the content in these volumes include the five postulates in volume one. Euclid writes,
Let the following be postulated: 1. To draw a straight line from any point to any point. 2. To produce a finite straight line continuously in a straight line. 3. To describe a circle with any centre and distance. 4. That all right angles are equal to one another. 5. That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines … meet on that side on which are the angles less than the two right angles. (Euclid 2) Through these postulates, Euclid focuses his propositions, through which he makes discoveries including measurements of angles in constructions, bisections, and proportional lengths (Euclid 2-36).
Euclid’s discoveries span across multiple principles of geometry. These discoveries serve crucial roles in construction and architecture, helping build and produce buildings with structural integrity while aiding the workers in accurately estimating the amount of material needed to complete a project. Not only do these aid in construction, but his discoveries apply greatly to engineering and physics. Through his focus on angles, he created the basis for future scientists to predict trajectory and to aid athletes in optimizing their performance.
Archimedes: The Father of Mathematics
Born in 287 BC, Archimedes of Syracuse, on the island of Sicily, studied mathematics and, because of his discoveries, scholars consider him to be one of the top-ranking mathematicians of all time (Symonds and Scott 4-5). Archimedes continued some of the work of the Pythagoreans and Euclid as he recorded the thirteen semi-regular solids. Scientists later discovered that these solids serve as depictions of some crystalline structures. Influenced by Euclid’s Elements, he also found the surface areas and volumes of spheres and cylinders, used to determine the amount of a substance in a can or the amount of air needed to expand a balloon to a specific size (Wilson 96). Following this discovery, Archimedes detailed his ability to find these properties, saying, “These properties were all along naturally inherent already in their figures referred to, but they were unknown to those who were before our time engaged in the study of geometry, because none of them realized that there exists symmetry between these figures” (Dijksterhuis 142), just as Euclid determined in the fifth volume of Elements. Archimedes found these surface areas and volumes by calculating the ratios of these solids to circles and to each other. For instance, the surface area of a sphere is 4πr^2 units while the area of a circle is πr^2 units, leaving the surface area of a sphere and the area of a circle in a perfect 4:1 ratio. Similarly, the volume of a cylinder is πr^2·h units while the area of a circle is πr^2 units, meaning these two properties are also directly proportional.
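The 4:1 and height ratios described above can be confirmed numerically; the radius and height below are arbitrary illustrative values, not anything from Archimedes' own work.

```python
import math

# Archimedes' ratios: a sphere's surface area is exactly 4 times the
# area of its great circle, and a cylinder's volume is the circle's
# area scaled by the height.
r, h = 2.5, 7.0
sphere_surface = 4 * math.pi * r**2
circle_area = math.pi * r**2
cylinder_volume = math.pi * r**2 * h

assert abs(sphere_surface / circle_area - 4.0) < 1e-12
assert abs(cylinder_volume / circle_area - h) < 1e-12
```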
While recognizing the proportionality of these components, Archimedes also decided to focus on “the ratio of the circumference of a circle to its diameter”: pi. By drawing polygons with numerous sides inscribed in a circle and calculating the perimeter of said polygons, he was able to more accurately compute the value of pi. He found pi to lie between 3 10/71 and 3 1/7, the most precise prediction for his time period. (Symonds and Scott 5). This estimation for pi allowed a more accurate calculation of the area of a circle and volumes of spheres and cylinders, which can be applied to the calculation of sound waves to better understand pitch for music.
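The polygon method described above can be sketched in a few lines. This is only a numeric illustration of the idea: `math.sin` and `math.tan` stand in for the chord and tangent lengths Archimedes computed geometrically by hand.

```python
import math

# Archimedes' approach: the perimeter of a regular n-gon inscribed in a
# circle of diameter 1 is a lower bound for pi; a circumscribed n-gon
# gives an upper bound. The bounds tighten as n grows.
def pi_bounds(n):
    inscribed = n * math.sin(math.pi / n)      # lower bound
    circumscribed = n * math.tan(math.pi / n)  # upper bound
    return inscribed, circumscribed

lo, hi = pi_bounds(96)  # Archimedes worked with a 96-sided polygon
assert lo < math.pi < hi
# His published bounds, 3 10/71 < pi < 3 1/7, also hold:
assert 3 + 10 / 71 < math.pi < 3 + 1 / 7
```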
In computing these components of circular objects, Archimedes effectively perfected a method of integration (Symonds and Scott 5), now commonplace in calculus, a branch of mathematics that was not formally invented until much later. Archimedes was far ahead of his time in discovering this method, and it allowed him and future mathematicians to calculate areas and volumes for various shapes.
The Ancient Greek mathematicians contributed to mathematics more than they could have predicted. Many of these people found interest in the field through their studies of prior mathematicians, and capitalized on prior discoveries to draw their own conclusions. This group of people were some of the first to study principles that were abstract and did not require physical tests to prove; rather, they relied on deductive reasoning to develop their theorems. This practice set the precedent for all future scientific and mathematical discoveries. The Ancient Greek mathematicians influenced not only the mathematics of their times or the mathematics of the future, but the overall process of all further scientific discoveries and experiments, thus proving to be invaluable assets to both the field of mathematics and scientific thought as a whole.
Pythagorean Theorem: the Heart of Mathematics
Have you ever wondered what the Pythagorean theorem is, where it came from, or who came up with it? I am going to start by telling you the history of the Pythagorean theorem. In this part of the essay you will learn the timeline of the Pythagorean theorem and its importance in history. Despite being around for over 4,000 years, its importance and uses are almost limitless. Its effects are everywhere and affect everyone and everything, living and nonliving alike. Although those effects seem hidden, they are right in front of our eyes, and they are simple to understand once you learn about them. Even the math is simple, and it is versatile enough to be combined with many other formulas to extend its solving capabilities. Not only is the math simple, but the history of this theorem is interesting: it was found by the Babylonians, rediscovered by Pythagoras, and passed on to his followers. Since then it has been proven over 300 times, and new proofs are still being found; the theorem keeps evolving and being applied in ever more complex ways. None of this would make sense without an explanation, though, and an explanation is due. What is the history of this theorem? How does the mathematics work? How is it used today? These questions can not only be answered but explained, proven, and shown. Even now there is much that is unexplained about the Pythagorean theorem, and this essay will only scratch the surface of the whole system that it represents. The theorem is easily stated, but understanding it and its history is difficult. Its origins are unknown and most likely will never be found, but as time goes on our understanding of it, and our uses for it, will only grow greater and more sophisticated.
The Pythagorean theorem is one of the oldest mathematical theorems known. It dates back to 1900-1600 BCE but was later studied in depth by Pythagoras, a mathematician and philosopher who founded a brotherhood that contributed to the development of mathematics and whose principles and philosophies inspired and influenced Plato and Aristotle. Despite everything he did for mathematics, none of his own writings survive; his teachings were passed down by word of mouth through his followers, and as many know, word of mouth travels and changes with interpretation. Yet even while traveling and changing through his followers’ interpretations, the Pythagorean theorem has stayed the same without fail. Its use can be seen everywhere in history. It appears in architecture through the ages, and it has been proved many times throughout history, with an estimated 367 different proofs. Not only has this theorem been used for scaling buildings, it is used for supports in bridges. A more ancient type of architecture that used this theorem to calculate supports is the aqueduct: every support pillar has three support triangles to help keep the weight from crumbling the pillar. Triangles are important in architecture because they equally distribute pressure and weight. This was useful because of the lack of materials and supplies available; it allowed fewer materials to be used while buildings stayed sturdier under their own weight.
Even before the Pythagorean theorem, triangles were being used in buildings for support, but it was not until Pythagoras’s discovery that buildings became more and more complex. Look at the Colosseum: it is nearly 2,000 years old and is still standing strong. Even buildings such as the Leaning Tower of Pisa still stand despite their awkward and unusual state. For over 4,000 years the theorem has been seen in architecture and taught by scholars in every age; even as children we learn about the uses and traits of right triangles. Strange properties of these triangles were starting to be questioned by the Chinese in the first century, before it was realized they could be solved by the theorem. One such problem was finding the length of a side of a right triangle when given only a combination of two angles, two sides, or both. Today we fold that into the theorem, but that was not its original purpose. The theorem grew out of the observation that triangles, and right triangles in particular, can be fit into any regular shape, most often squares and rectangles, and even some irregular shapes, as can be seen in the game of tangrams. There is not much more to say about the history of the theorem. Most ancient examples survive only in buildings and on tablets that were used as teaching examples, and much of the knowledge died when Pythagoras, his followers, and the philosophers who came after them died. Even though he was not the one who created it, he perfected it and taught the world the mysteries of this amazing and versatile theorem. As mathematics evolves and becomes more advanced over time, more will surely be discovered, and through all of those advancements this theorem will always be helpful.
What about how this theorem works? How does a theorem provide this much information, and when can you use it? What are its limits, and how does it benefit the user? In this part of the essay you will learn how the theorem works mathematically and why it makes physical sense. With most math it is hard to make the ideas physically tangible, but this theorem can be demonstrated outside of writing.
When it comes to the Pythagorean theorem, it can be hard to tell when to use it and when not to. It’s as easy as ABC: a^2 + b^2 = c^2 is the equation. This equation only works with right triangles and can be used to find any of the sides of the triangle. Depending on the side you are trying to find, the equation can change from addition to subtraction. A is usually the shortest side, or the side lying flat along the bottom; b is the second longest, or the side that goes up; c is the hypotenuse, the longest side, and usually the side you are looking for. If you know two of the sides of the triangle, you can always find the third. With only one side known, the formula can still be used, but it leaves a variety of possible answers for the other two sides unless you use another formula that uses the right angle, or any of the other angles, to help calculate the lengths. That extra formula is not part of the Pythagorean theorem, but it can be used to help find all the sides of the triangle, because the Pythagorean theorem is not all-powerful: it cannot find the hypotenuse without two sides already known. With help from other formulas and equations, this theorem can cover a vast range of material. You can also physically show how this equation works. First, make a square on each of the three sides; the combined area of the two smaller squares will equal the area of the largest square. This is an exact way of finding unknown sides of a right triangle, and it can also give a rough estimate of the unknown sides of other types of triangle, but it is only always correct when used for right triangles.
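The "squares on the sides" demonstration in the paragraph above translates directly into code; the 6-8-10 triangle here is an arbitrary example (a 3-4-5 triple scaled by two).

```python
# Build a square on each side of a right triangle; the two smaller
# squares together cover the same area as the square on the hypotenuse.
def square_area(side):
    return side * side

a, b, c = 6, 8, 10  # a right triangle
assert square_area(a) + square_area(b) == square_area(c)

# Solving for an unknown side just rearranges a^2 + b^2 = c^2:
hypotenuse = (a**2 + b**2) ** 0.5  # known legs -> hypotenuse
leg = (c**2 - a**2) ** 0.5         # known hypotenuse and one leg -> other leg
assert hypotenuse == 10.0
assert leg == 8.0
```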
Although the theorem is extremely accurate and always correct when used in the right way, it has limits. First, it only works with right triangles: used on an equilateral triangle, for example, c would come out far larger than it really is. It works with exactly one type of triangle, and although it can be used to estimate others, it will not be as accurate as it is for right triangles. Beyond that, there are few disadvantages to this theorem. Used in the correct way it comes out right, it is easy to learn, and you can learn other equations to further its uses. Every day we see how the Pythagorean theorem is used. It is everywhere around us, in both natural and man-made objects, and we benefit from its effects every day and every second of our lives.
In architecture, triangles are the most important part because they are used for equal weight distribution. They are usually right triangles, placed between wall beams, used in roofs, used as supports under a bridge, and even in the cables on top of a bridge. When it comes to supports and weight, the triangle is the shape that most successfully holds buildings together. Occasionally, you’ll see a board holding up a fence. Imagine the ground is side a and the fence is side b; then side c, the board touching both the ground and the fence, is the hypotenuse. If you knew the height of the fence and the distance along the ground from the fence to the foot of the board, you would be able to calculate a highly accurate estimate of the board’s length. This is just one common example of how people use the Pythagorean theorem. The theorem is a very common practice in our lives; it is used even when you want your picture frames to stand up. It appears in our daily lives as well as our jobs. It is everywhere, you just must look for it or know what you’re looking for.
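The fence-brace scenario above can be put into numbers. The 3 ft and 4 ft figures below are hypothetical, chosen only to make the arithmetic clean.

```python
import math

# The board propping up a fence: the ground distance is side a, the
# fence height is side b, and the board itself is the hypotenuse c.
ground = 3.0   # side a, in feet (made-up value)
fence = 4.0    # side b, in feet (made-up value)
brace = math.hypot(ground, fence)  # c = sqrt(a^2 + b^2)
assert brace == 5.0  # the board must be 5 ft long
```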
The Pythagorean theorem also appears in natural objects, though unlike in man-made objects it is harder for us to see. You would have to go out of your way to find examples, but common ones exist. In gemstones, triangles form naturally because of the way they are created to support weight, and the same pattern can be seen in the cuts of modern diamonds. Even in the smallest jewelry and toys you can see triangles at work. In common toys such as Legos, the supports are right triangles, which helps make the bricks extremely durable, able to withstand the pressure of children’s (and adults’) feet.
Even with it in plain sight, people still question how the Pythagorean theorem is used in everyday life. It is not used directly unless it is being taught or appears in homework; in our heads we do not usually do this math all the time. Instead, we use the effects of the theorem. It shapes our daily lives, from simple objects like toys and furniture to more complex objects like buildings and bridges. It is not that we apply the theorem ourselves; it is that the theorem was used to create the items we rely on, making them more stable and durable. Jobs such as architect, carpenter, metal worker, or welder use this theorem constantly at work, but it sees little direct use in many other jobs.
The Pythagorean theorem serves a big purpose in our day-to-day living. There are many real-world uses for it, and it is part of our daily lives even without our knowing it. The Pythagorean theorem is a big component of our lives, and I have learned to appreciate it a little more after writing this paper. Not only have I learned how to use it, but once it is learned and you know how it is used, you are able to see its effects everywhere.
If you are looking for guidelines on finding the density of an object from the knowledge of its total volume and mass, this article will be an insightful read. Here I shall present the density calculation formula and illustrate its usage with examples. Also provided is a density calculator which you will definitely find useful.
Matter manifests itself in various forms to create the world around us. Every object which is observed in your environment, comes with its unique set of physical and chemical properties. One of the most important physical properties of an object is its density.
What is Density?
Before I present you with the formula for calculation of density of any object, let me define this physical property precisely. Density is the amount of matter (or mass) that occupies the unit volume of any object. So the value of density will tell you how densely matter is stuffed in each unit volume of any object. This object need not be a solid. It may be a gas or a liquid too.
If you were to divide any object into small pieces of unit volume, the density would be the mass of each small piece. Everything is made up of molecules, which in turn are made up of atoms. So every object is created from the aggregation of atoms and molecules. Depending on how densely the atoms and molecules bond together during this aggregation, the density of objects varies. In the following lines, I will provide you with a generic formula for the calculation of density.
Formula For Density Calculation
So density is basically a ratio of the mass of the object to its volume. Here is the formula for density calculation.
Density = Total Mass / Total Volume
Since the SI unit for measurement of mass is the kilogram (kg) and the unit for length is the meter (m), the unit for measurement of density is kg/m³.
Finding the Density of an Object
After a look at the above formula, figuring out the density calculation is simple enough. All you have to do is substitute the mass in kilograms and the volume in cubic meters to get the density value in kg/m³. To be able to calculate volume, you need to know the appropriate volume formulas for various geometric objects. To know the mass, you can simply weigh the object on a scale. Let me illustrate the calculation method through an example.
Problem: The mass of a wooden cube, which is 2 meters in length, 2 meters in breadth and 2 meters in height is 16 Kg. Find the density of the wooden cube.
Solution: Volume of the Wooden Cube = 2 m x 2 m x 2 m = 8 m³
Mass of the object = 16 kg
Therefore, Density of the Wooden Cube = 16 kg / 8 m³ = 2 kg/m³.
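The worked example above reduces to a single division; here is a minimal sketch (the function name is my own) reproducing the wooden-cube calculation:

```python
# Density = mass / volume, in SI units of kg/m^3.
def density(mass_kg, volume_m3):
    return mass_kg / volume_m3

side = 2.0          # cube edge length in meters
volume = side ** 3  # 2 x 2 x 2 = 8 m^3
mass = 16.0         # kg
assert density(mass, volume) == 2.0  # kg/m^3, matching the worked example
```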
Here is a density calculator which will give you the density of any object, if you provide information about the volume and mass of the object. Just enter the mass value of the object in kilograms and then enter the volume value in cubic meters, to get the density value in kg/m³.
Once you know the mass and the exact volume of the object, calculating density is just a matter of carrying out a single division operation. Using the density calculator provided above, you can easily calculate the density of any object. Just make sure that you enter the values in the right units, which are kilograms for mass and cubic meters for volume.
Ordered Pairs and Linear Systems
In this ordered pairs and linear systems worksheet, students solve two problems involving ordered pairs and solve linear systems by graphing.
See similar resources:
Have Some Pizza: Introduction to Systems of Linear Equations (Lesson Planet)
Frugal shoppers, linear systems are your friends! Armed with information about the rates different pizza places charge for their pizzas and deliveries, learners write equations to represent each restaurant. After analyzing the graphs,...
8th Math CCSS: Designed
Going to the Game (systems) (Lesson Planet)
Seven real-life math problems require the solving of simultaneous pairs of linear equations. Although a similar approach is taken for most of the problems, the topic of sports stadium concessions should keep the interest level high. As a...
7th - 9th Math CCSS: Designed
Introduction to Solving Linear Systems (Lesson Planet)
Word problems offer class members an opportunity to learn the concept of solving linear systems using graphs. Individuals choose a problem based upon preferences, break into groups to discuss solution methods and whether there is...
8th - 11th Math CCSS: Adaptable
Solving Systems of Linear Equations (Lesson Planet)
Solving systems of equations underpins much of advanced algebra, especially linear algebra. Developing an intuition for the kinds and descriptions of solutions is key for success in those later courses. This intuition is exactly what...
8th - 9th Math CCSS: Adaptable
Ordered Pair Solutions of Equations (Lesson Planet)
Assign this video as a homework refresher on the concept of ordered pairs as solutions to linear equations. Viewers will observe how Sal uses the line on the graph to determine if the given ordered pair is a solution to the given equation.
1 min 9th - 11th Math
Graphing Systems of Equations (Lesson Planet)
With a graph and two linear equations, Sal explains how to graph systems of equations. He uses a table to pick points, completes the equations, and plots the lines on the graph. This video would be appropriate as a refresher or for more...
7 mins 9th - 11th Math
Application of Linear Systems (Lesson Planet)
Let the learners take the driving wheel! The class solves systems of linear equations and applies the concepts of systems to solve a real-world situation about parking cars and buses. They then use calculators to create a visual of their...
9th - 12th Math CCSS: Designed
Graphs of Linear Systems: Star Lines – Lesson Planet
Let the class stars shine. Using four given linear equations, scholars create a graph of a star. The pupils use the interactive to graph the linear systems and determine the points of intersection. Through the activity, classmates...
8th - 10th Math CCSS: Designed
Model a Real-Life Situation Using a System of Linear Equations – Lesson Planet
Ah, the dreaded systems of equations word problem. Probably one of the best ways to connect algebra to the real-world, here is a video that goes through the steps on how to detect key words and create two equations that model the scenario.
7 mins 8th - 10th Math CCSS: Designed
Higher / Intermediate 2 Computing – Systems (Mr Climie), Part 1
Data Representation: What you will learn here is how a computer that uses an odd number system called binary can still store all kinds of data, such as numbers, text, sound and graphics.
Representing Numbers: How We Change From Binary to Decimal
Why Computers Use Binary: Even if there is a slight drop in voltage, it will still be detected as a 1. There are only four rules for addition in binary, compared with 100 in decimal: [0+0=0; 0+1=1; 1+0=1; 1+1=10].
Binary to Decimal: We can change from our number system, decimal, to the computer's number system, binary.
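To illustrate the conversion this slide describes (a sketch of my own, not part of the original deck), Python's built-ins can move a number between decimal and binary, and the same result can be reached by hand by summing powers of 2:

```python
# Decimal to binary and back, using Python built-ins.
n = 13
binary = format(n, "b")      # "1101"
print(binary)
print(int(binary, 2))        # back to 13

# The same conversion by hand: each binary digit is a power of 2,
# counting from the rightmost (least significant) digit.
digits = "1101"
value = sum(int(d) * 2 ** i for i, d in enumerate(reversed(digits)))
print(value)                 # 13
```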
Two's Complement: Two's complement is the most popular method of working with negative binary numbers. (Do examples.) (H)
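As one worked example of the kind the slide calls for (my own sketch, not from the deck), the two's-complement bit pattern of a signed integer can be obtained by masking it to the desired width:

```python
def twos_complement(value, bits=8):
    """Return the two's-complement bit pattern of a signed integer."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value out of range for the given width")
    # Masking with 2**bits - 1 wraps negative values into their
    # two's-complement representation.
    return format(value & ((1 << bits) - 1), f"0{bits}b")

print(twos_complement(5))    # 00000101
print(twos_complement(-5))   # 11111011
```

Note that -5 is the bitwise inversion of 5 plus one, which is exactly the "invert and add 1" rule usually taught for two's complement.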
Text Representation: In this section, you will find out how text is stored on computers and how we can guarantee that what you type is exactly what is printed out or sent to someone else.
Coding Text: When we type text into the computer, a numeric code is used to store it as a number.
ASCII: This was developed in the 1960s. Because peripherals were made by many different manufacturers, a common code was needed; the American Standard Code for Information Interchange (ASCII) became that common code.
ASCII Control Codes: Control codes can be used for device control, such as cursor movement, page eject, or changing colours.
Character Set: This is the set of characters that can be displayed. If a different language is being used, then a different character set may be used. Examples: the Latin set for English, Cyrillic for Russian, or a Chinese character set.
Problems With ASCII: It is American. There is no code for the £, and there are no codes for European languages and the characters they use.
Unicode: This was designed to replace ASCII and to have a unique numeric code for every written language. (H)
Advantage of Unicode: The advantage is guaranteed correct communication between countries and between peripherals. (H)
ASCII or Unicode: ASCII files are smaller than Unicode files; Unicode has every possible character. ASCII is an 8-bit code; Unicode is a 16-bit code. (H)
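The size difference the slide mentions can be checked directly. This sketch (my own, not from the deck) encodes the same text once as 8-bit ASCII and once as 16-bit UTF-16; strictly, modern Unicode has grown beyond 16 bits, but the 2:1 ratio holds for the simple characters the slide has in mind:

```python
text = "Hello"
ascii_bytes = text.encode("ascii")       # 1 byte per character
utf16_bytes = text.encode("utf-16-be")   # 2 bytes per character

print(len(ascii_bytes))   # 5
print(len(utf16_bytes))   # 10
```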
Representing Graphics: In this section you will find out how graphics are stored on computers. There are a couple of methods in use, and we discuss them both.
Graphics Representation: The computer uses two methods to store a graphic. Bit-mapped: records every single dot, or pixel, used, and stores the pixels directly in the computer's memory. Vector: records how the diagram is made up by recording how we would draw the lines and shapes used in the diagram. (H)
Bit-Mapped Graphics: The screen is drawn using dots called pixels. Each pixel is connected directly to a set (map) of memory locations.
Bit-Mapped Graphics and Colour: If we use colour, we need more than one bit of memory to record each pixel. This is called BMP graphics. A drawback is the very large file size that some pictures can produce; an advantage is that we can edit individual pixels.
Bit Depth or Number of Colours: 2^n is the number of colours we can display when we use n bits to store the colours, sometimes described as n-bit depth. 8-bit depth, or 8-bit colour, gives 256 colours; 24-bit colour is sometimes called true colour. (H)
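The 2^n rule is easy to confirm for the bit depths the slide lists (a minimal sketch of my own, not from the deck):

```python
def colours(bit_depth):
    """Number of distinct colours available with the given bits per pixel."""
    return 2 ** bit_depth

print(colours(1))    # 2 (black and white)
print(colours(8))    # 256
print(colours(24))   # 16777216 ("true colour")
```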
Vector Graphics: This records 'how' to draw the shape by recording the 'attributes' of the shape. A big advantage is that it takes up very little space to record the picture. (H)
Example – Circle: A circle is drawn knowing the centre co-ordinates, the radius, and the colour of the circumference, e.g. Circle 1: 200, 400, 15, red. (H)
Graphic File Formats: In this section you will find out the different ways graphics programs save their files. No single program or format is the best; they all have different advantages, disadvantages and uses.
Graphic File Formats: Use the Internet to research the following graphic file formats: BMP, JPEG, GIF and TIFF. For each, find what it stands for, an advantage, a disadvantage, and a likely use. (H)
BMP (Bit-Mapped Pictures): The standard format for Windows-based computers. It is a resolution-dependent file format: a picture drawn on an 800x600-pixel screen will look poor on a 1280x768-pixel screen. A drawback is the large file size; an advantage is being able to work at the individual pixel level. (H)
JPEG: JPEG is a format that uses compression, which can lose data from an image. If you save the compressed file again and again using the compression feature, you will lose detail over time. JPEG files have the extension .jpeg or .jpg. A major advantage is the file size, which is usually far smaller than BMP. (H)
GIF (Graphics Interchange Format): Its resolution is low; it is designed for screen-only use, making it unsuitable for printing purposes. Animated GIF images are the most common method of creating a moving banner or animation for the web. (H)
TIFF (Tagged Image File Format): TIFF is a platform-independent format specifically designed for scanned images and use in DTP. It uses bit-mapped images. There are different versions in use, so it is not commonly used. (H)
Need for Compression: Graphics files can be very large. Compression is used to make the file size smaller. This may mean the picture loses some quality, but it becomes a more manageable size.
Memory Used by a Picture: In this section you will find out, by calculation, how much storage is required to store pictures.
Memory Used Calculation: We can calculate how much memory a picture will take up when stored. There are two methods, depending on whether the picture is displayed on a screen or stored as a photograph.
Photograph: When we scan a photo, we need to know the scan resolution in dots per inch (dpi) and the number of colours, or bit depth. We can calculate the memory required to hold it as: area of photo x dpi x dpi x bit depth.
Monitor Screen: A computer screen uses resolution and bit depth only. We need to know the screen size in pixels and the bit depth used: width in pixels x height in pixels x bit depth.
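The two formulas above can be combined into a short calculator. The example figures (a 6x4-inch photo at 300 dpi, an 800x600 screen at 8-bit depth) are my own illustrations, not values from the slides:

```python
def photo_memory_bits(width_in, height_in, dpi, bit_depth):
    """Scanned photo: area of photo x dpi x dpi x bit depth."""
    return width_in * height_in * dpi * dpi * bit_depth

def screen_memory_bits(width_px, height_px, bit_depth):
    """Screen image: width in pixels x height in pixels x bit depth."""
    return width_px * height_px * bit_depth

# 6x4-inch photo scanned at 300 dpi in 24-bit colour:
bits = photo_memory_bits(6, 4, 300, 24)
print(bits / 8 / 1024 / 1024)   # about 6.18 MiB

# 800x600 screen at 8-bit colour depth:
print(screen_memory_bits(800, 600, 8) / 8 / 1024)   # 468.75 KiB
```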
ALU (Arithmetic Logic Unit): This is the part of the CPU where data is processed and manipulated. The processing consists of arithmetical operations and logical comparisons. It also includes registers to temporarily store the results of calculations.
Arithmetic Unit: Most computer calculations involve adding, so a specialist arithmetic unit is part of the CPU. 3 times 4, or 3x4, is 3 added together 4 times.
Logic Unit: These are special circuits designed to work out comparisons such as A < B, Age > 12, or (Seats = 'yes') and (number < 6).
Control Unit: This is the part of the CPU that manages the execution of instructions. It fetches each instruction in sequence, decodes the instruction, synchronises the commands, and then executes the command. This is done by sending out control signals to the other parts of the computer.
Registers: These hold data being processed, instructions being executed, and addresses to be accessed.
Internal Buses: In this section you will find out how the various chips that make up the main section, or motherboard, of the computer communicate among themselves.
What Is a Computer Bus? A bus is a set of physical connections used by hardware components in order to communicate with one another. (H)
The Computer Buses: There are three: the address bus, the data bus and the control bus. (H)
Address Bus: This transports the memory addresses which the processor wants to access in order to read or write data. The size of the address bus determines how many memory locations can be addressed: a bus of width n can address 2^n locations. It is a unidirectional bus. (H)
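The 2^n relationship for the address bus mirrors the bit-depth rule earlier in the deck; this quick sketch (mine, not the deck's) shows two classic bus widths:

```python
def addressable_locations(bus_width_bits):
    """How many distinct memory locations an n-line address bus can select."""
    return 2 ** bus_width_bits

print(addressable_locations(16))  # 65536 (64 KiB if each location is one byte)
print(addressable_locations(32))  # 4294967296 (4 GiB)
```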
Data Bus: The data bus is used to transfer data either way between the memory and the CPU. It is a bidirectional bus. (H)
Control Bus: It sends signals to other parts of the computer to synchronise their tasks, and it also transmits response signals from the hardware. It is a bidirectional bus. (H)
Example Control Lines: Reset, to return a device to its original state; Interrupt, meaning the processor has to stop what it is doing and deal with a new, more important task; Read, to initiate the transfer of data from memory to the processor; Write, to initiate the transfer of data from the processor to memory. (H)
Fetch-Execute Cycle: This is the main task a computer does. It gets an instruction stored in the memory of the computer, loads it into the CPU, then carries out the command. This is called the fetch-execute cycle, as it is repeated many times per second.
Fetch-Execute Cycle: Set up the address bus; enable the read line; transfer the data using the data bus; decode the command; execute the command. (H)
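The repeated fetch-decode-execute loop can be mimicked in a few lines. This toy simulator is purely illustrative: the instruction set (LOAD/ADD/HALT) and memory layout are invented for the sketch and do not describe any real processor:

```python
# Memory holds (opcode, operand) pairs; the loop below mirrors the
# fetch, decode and execute steps listed on the slide.
memory = [("LOAD", 7), ("ADD", 3), ("HALT", 0)]

acc = 0        # accumulator register
pc = 0         # program counter: address of the next instruction
running = True

while running:
    opcode, operand = memory[pc]   # fetch: address bus set up, read line enabled
    pc += 1
    if opcode == "LOAD":           # decode, then execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        running = False

print(acc)  # 10
```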
Data Storage: In this section you will get more details of how a computer stores the data it is using, along with a common method used to speed up access to that data.
Data Storage: Computers make use of various styles of storage, including main memory, cache, registers and backing storage.
Main Memory: There are two styles of memory in use: RAM and ROM. Random access memory (RAM) is a type of computer storage whose contents can be accessed in any order. Read-only memory (ROM) is memory whose contents can be accessed and read but cannot be easily changed.
How Fast Is That Computer? In this section you will find out how we can compare computers in terms of overall speed. We use terms such as powerful and fast, but how do we compare them properly?
Clock Speed: A computer's system clock resides on the motherboard. It sends out a signal that keeps all the other computer components in sync. Every action in the computer is timed by these clock cycles and takes a certain number of cycles to perform.
Clock Speed as System Performance: This is only a basic measure. We can only compare processors made by the same company, and even then they must be from the same 'family' of processors.
MIPS (Millions of Instructions Per Second): This measures how many simple instructions can be performed by the CPU in one second. MIPS measures CPU performance only, not overall system performance.
FLOPS (Floating-Point Operations Per Second): This is similar to MIPS, but it uses real numbers with fractional parts in the calculations. It is still in use for modern computers.
How Do We Compare Computer Systems? MIPS and FLOPS look at the processor on its own, but that does not take into account that some disc drives are much faster than others, so we need to be careful how we measure a computer's speed. For a full comparison we use application tests, which exercise many different programs and aspects of the whole system.
Application-Based Tests: A test that serves as a standard by which computer systems may be compared. It takes into account the complete computer system as well as the software. Also known as benchmarks.
More Influences: There are further ways of influencing how fast a computer works, built into the way the computer is designed. On the next page you will learn some of the ways this can be done.
Factors That Affect System Performance: Data bus width (the wider the better); use of cache memory (on board the CPU is better); rate of data transfer to and from peripherals (modern interfaces are much faster; USB2 is 40 times faster than USB1).
Other Methods of Increasing System Speed: Increasing clock speeds; new parallel processors; increasing memory (2 or 3 GB); greater backing storage capacity (1 TB available).
What Is a Peripheral? Peripherals may be internal or external. Examples include printers, monitors, disk drives, scanners and so on. You will find out how we compensate for the difference in speed between the fast computer and the slow peripheral.
Practical Work: The following few slides indicate the types of peripheral you are going to research. Write down the details as you find them. Note: we want typical information, not specific details of an actual peripheral.
1. Typical characteristics of: keyboard, mouse, microphone, touchpad.
2. Typical characteristics of: digital camera, scanner, webcam.
3. Typical characteristics of: monitor, LCD panel, inkjet printer, laser printer, loudspeaker.
4. Typical characteristics of: hard disc drive, magnetic tape drive, CD-ROM/CD-R/CD-RW, DVD-ROM/DVD-R/DVD-RW.
Choice of Peripherals: A peripheral is hardware that is added to a computer in order to expand its abilities. You can take into account resolution, capacity, speed of data transfer, compatibility and cost.
Compensating for Speed: Peripherals are slow in comparison to the computer, so there are methods employed to compensate for these speed differences: buffers and spoolers.
Buffers: An amount of RAM on the peripheral, used for temporary storage of data that is waiting to be sent to a device, typically a printer. Buffers compensate for differences in the rate of flow of data between components of a computer system.
Spoolers: A method by which a disc drive can store data and feed it gradually to a printer, which operates more slowly than the computer. Spooling is more commonly used on a network.
Interfaces: Interfaces are more than just a connector; the wrong choice can slow the computer system considerably, while the right choice can speed it up. Most of the time you do not realise the interface is there, as it should be.
Purpose of an Interface: Interfaces are used to pass data between pieces of equipment that handle data in different styles but are connected together by that interface.
Functions of an Interface: Buffering, data format conversion, voltage conversion, protocol conversion, and handling of status signals.
Buffering: This is when a section of RAM is set aside to store the data being transferred. The interface waits until a complete block of data has been assembled, transfers that block in one go, and then starts building a new block.
Digital-to-Analogue Converter: This converts digital signals into analogue signals.
Parallel to Serial: Parallel – all the bits of a byte are sent at once in a 'row', with one cable per bit being sent. Serial – one bit is sent at a time, one after the other, down the same cable.
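The difference between the two transfer styles can be pictured with a single byte (a sketch of my own, not from the deck): parallel presents all eight bits at once on separate lines, while serial sends the same bits one at a time down one line:

```python
byte = 0b10110010

# Parallel: conceptually, all 8 bits appear at once, one per wire.
parallel_lines = [(byte >> i) & 1 for i in range(8)]      # one entry per wire

# Serial: the same bits leave one at a time down a single line, MSB first.
serial_stream = [(byte >> i) & 1 for i in range(7, -1, -1)]
print(serial_stream)   # [1, 0, 1, 1, 0, 0, 1, 0]
```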
Voltage Conversion: This may transform the mains AC into the DC the computer and its peripherals need, for example converting the 240-volt mains supply into the 5 volts the computer uses.
Interface Protocol: A formal description of the rules and formats used to allow the computer and peripheral to work properly together.
Status Registers: There will be a special register on the interface that can communicate with the CPU. The status register could indicate, for example, that the printer is out of paper, the disc drive is not ready, or the internet connection cannot be made.
Wireless: This is becoming increasingly common. There are two styles: Bluetooth and Wi-Fi.
Wi-Fi: It can let you use a printer in another room of the house; you can use peripherals up to quite a long distance away (30 m). You can share a Wi-Fi link with other users or make it secure.
Bluetooth: This is also wireless but is much less powerful. It is designed to allow peripherals of any kind to communicate with others. It only works over about 10 metres, and it is much slower than Wi-Fi.
Computer Types: A loose classification of computer types is embedded, palmtop, laptop, desktop and mainframe.
Embedded: These are incorporated into other devices rather than being stand-alone computers. Examples include digital cameras, mobile phones, music players and almost any kind of industrial or domestic control system.
Palmtop: The name for pocket computers of small size, low weight and long battery life. Their disadvantages, compared to a PC, are reduced functionality due to smaller memory and a small screen. They are intended more for time planning, listing addresses and notes, and are being replaced by smartphones such as the iPhone and similar devices.
Laptop: A laptop is a computer characterised by mobility. Its components are similar to a desktop's, except miniaturised and made for low power consumption. It is now often called a portable or notebook.
Desktop: Designed to be used by one person at a time, this is the most commonly used style of computer. It is easily upgraded and extended.
Mainframe: A physically large computer which has an extensive amount of memory and disk space and is able to perform several different tasks simultaneously. It can have hundreds or even thousands of users connected to it.
Computer Task: Can you compare the types of computer on type of processor, speed of processor, size of main memory, backing storage, and input and output devices?
Part A. Use the Model mode in the simulation to identify the geometries for ozone, phosphate ion, and argon fluorohydride (for which the Lewis structures are depicted, and the resonance forms can be ignored).
Drag the labels to the respective targets
The shapes of molecules depend on the number of electron groups that surround a central atom. For molecules in which all electrons around the central atom are participating in bonding, the molecular geometry is the same as the electron geometry, and the molecular shapes are linear, trigonal planar, tetrahedral, trigonal bipyramidal, and octahedral. However, nonbonded electrons, which wouldn't be observed in the molecular geometry, affect the overall distribution of electron groups; therefore, the molecular and electron geometries will be different when nonbonding electrons are present.
This can be exemplified in the case of ammonia, NH3. Ammonia has three bonding groups, but it will not exhibit trigonal planar geometry because the lone pair of electrons exerts a repulsive force. There are four electron groups (three bonding groups and one nonbonding group) in ammonia, which means that the electron geometry will be tetrahedral. When only the molecular structure is examined, the lone pair is not seen, and the molecular geometry will adopt a more pyramidal structure that can be seen in the image below (trigonal pyramidal).
The valence-shell electron-pair repulsion (VSEPR) model encompasses the geometries that result from the various interactions that occur between electron groups (also called electron domains) and the relative repulsive forces exerted by each type of electron group (lone pair, single bond, double bond, and triple bond). VSEPR models also predict both the electron and molecular geometries, but not all reference charts may indicate bond angles.
What is Hyperinflation?
Hyperinflation occurs in a country’s economy when the prices of goods and services rise in excess of 50% per month.
Table of Contents
Causes of Hyperinflation in Economics
In economics, the term “hyperinflation” is defined as a period in which the prices of all goods and services in a particular country rise dramatically.
If a country’s economy is in a state of hyperinflation, the central government (or the applicable governing party) has essentially lost control of the economy’s rate of inflation.
The main cause of hyperinflation is a disproportionate rise in the money supply that far exceeds the expectations of consumers, companies, economists, and the government.
The significant uptick in the money supply, when not supported by enough growth in the economy, can cause seemingly exponential growth in inflation.
Hyperinflation is frequently preceded by the central government printing a substantial amount of money in an attempt to increase the current level of economic activity.
The drawback to the government flooding the economy with cash is that the sudden increase in the amount of money in circulation results in the country’s currency declining in value, and thus causing a rise in the price of goods and services.
Usually, these negative consequences of the central government printing more money are not apparent to everyday consumers until the printing is either gradually pulled back or halted.
Effects of Hyperinflation
If hyperinflation is present in a country’s economy, one notable change in consumer behavior is the increased hoarding of goods, i.e. stockpiling of everyday essentials.
When the outlook on the economy is negative, consumers understandably increase their near-term spending to accumulate required goods in anticipation of a long-term decline in overall spending (and a major economic collapse).
Over time, goods become more expensive, more businesses close, and daily goods become scarce as the government struggles to fix an economy that is falling apart.
Often, consumers will lose their life savings from currency devaluation, where the country’s currency of exchange loses a significant percentage of its original value.
In addition, banks and other institutional lenders will end up in bankruptcy from the value of their loans becoming near worthless, reducing the amount of credit available in the country and the amount of money in circulation.
To make matters even worse, consumers eventually will stop depositing their money at financial institutions, placing even more downward pressure on banks and lenders.
A country’s currency during a period of hyperinflation plummets in value, particularly overseas in foreign markets, and domestic importers also produce less revenue (and profits) as the cost of foreign goods becomes too high for their business models to be sustainable.
From the perspective of foreign countries, the collapsing value of the country’s currency makes exports more affordable — but these beneficial savings are at the expense of the country experiencing hyperinflation.
Hyperinflation is characterized by increased prices, a devalued currency, more bankruptcies, less purchasing power among consumers, and shortages in goods like food.
Inflation vs. Hyperinflation — Key Differences
Inflation describes periods when the prices of goods and services rise, resulting in less consumer spending and a reduction in purchasing power.
In contrast, hyperinflation describes a time of “extreme” inflation that was not managed effectively by the central government and is now deemed excessive and uncontrollable.
- Inflation → The concept of inflation refers to a noticeable rise in the price of goods and services, which the central government can (and should) take measures to taper such price increases.
- Hyperinflation → In contrast, hyperinflation results from poor fiscal policies and unwise actions that the central government takes post-inflation.
Hyperinflation in the U.S. Economy?
Most economists define hyperinflation as inflation running at a rate of more than 50% per month. The level of inflation observed in the U.S. in 2022 is nowhere near this threshold, i.e. the effects of hyperinflation are multitudes worse than "normal course" inflation.
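To put the 50%-per-month threshold in perspective, compounding it over a year (a back-of-the-envelope sketch, not a figure from the article) gives an annual inflation rate of roughly 13,000%:

```python
monthly_rate = 0.50  # the common 50%-per-month hyperinflation threshold

# Compound the monthly rate into an annual figure: (1 + r)^12 - 1
annual_rate = (1 + monthly_rate) ** 12 - 1
print(f"{annual_rate:.1%}")   # about 12874.6%
```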
In the U.S., the Federal Reserve aims to maintain an inflation rate of around 2% over the long term, although the latest reported figures have been closer to 8.5%.
The spike in the U.S. inflation rate was caused by the low-interest rate environment that lasted for decades, with rates lowered even further because of the COVID-19 pandemic in 2020.
But now that the economy is gradually recovering, the Fed is attempting to mitigate inflation risk by increasing interest rates and reducing spending (and we’ll see how these monetary policies pan out in the coming years).
Hyperinflation Example — Venezuela Economy
A real-world example of a country suffering from hyperinflation is Venezuela, which initially began with double-digit inflation in the early 1980s after a span of socioeconomic and geopolitical conflict.
The issues that caused the rise in inflation in the first place have continued to weigh on the country's economy to the present day, despite claims by economists at the end of 2021 that Venezuela is technically no longer in a state of hyperinflation.
While Venezuela broke out of one of its longest streaks of hyperinflation in 2021 — i.e. the country’s inflation rate was reported to be sub-50% for the first time in quite a while — the economy is by no means recovered and stable today.
In fact, many consumers in the country still struggle to afford necessities such as food.
The payment infrastructure in Venezuela collapsed, only somewhat recovering more recently after the central government adjusted the denomination of its currency and implemented a gradual reduction in money printing and government spending in order to cut the fiscal deficit more effectively.
Currently, more than half of the transactions completed in Venezuela are denoted in U.S. dollars, coinciding with the increased usage of digital apps such as Zelle and PayPal.
Venezuela Annual Inflation Rate (Source: Steve Hanke, Johns Hopkins University)
This activity is part of the Climate Change Challenge unit.
1. Support students as they read to define key meteorological terms.
- Go outside briefly, or open classroom windows and ask students to quickly brainstorm everything they can see, hear, smell, or feel to describe the weather at this moment in time.
- Next, challenge students to identify the six variables used by professionals to describe weather conditions. List ways that weather channels or apps describe current weather conditions in a Think-Pair-Share. Celebrate student identification of any of the six weather variables from prior knowledge:
- Organize students into small groups associated with each weather variable listed above, aside from temperature (which will be used as a model variable at the end of this step). Assign each group to read and annotate the encyclopedic entry associated with their variable.
- Prompt students in their groups to begin Part C of the Extreme Weather Model Builder handout by defining their weather variables using knowledge from the article.
- Use temperature to model the collaborative definition process: request volunteers to share and record definitions on the board to evaluate and edit as a class. Assign all students to record accurate consensus definitions for each term to complete the table in Part C of the Extreme Weather Model Builder.
2. Introduce the class weather station, and gather and graph initial weather data with students.
- Introduce the class weather station (the Setup section contains guidance on creating this very simple station). This will be used to gather data on temperature and precipitation (and other variables, if desired) throughout the Extreme Weather lesson.
- Model how to collect initial temperature data using a thermometer. Incorporate this data point onto a class temperature point/line graph that will last for the next three days (this and the following two activities, Weather, Meet Climate, and Now and Then).
- Prompt students to check the temperature graph for critical elements discussed in the Global Trends activity: title, axis labels, and key.
- Have students continue to work in the same Weather Data Collection groups from Step 1.
- Assign groups to collect data for additional variables. Depending on the complexity of your class weather station, you may wish to direct students on how to collect this data directly or use the National Weather Service 3-Day Weather Observation History to collect the data digitally. Project the National Weather Service site as you enter your zip code in the upper left corner of the page, then click on the "3 Day History" link at right-center. The graphs and chart that appear give hourly data, of which students only need the most recent (first) entry. In the table, the key weather variables appear as follows:
- Temperature, with units in degrees Fahrenheit (F)
- Humidity, with units in percent (%)
- Wind; read only the first number, which refers to the constant wind speed, with units in miles per hour (mph)
- Atmospheric pressure, with units in inches (in)
- Precipitation, with units in inches per 24 hours (in)
- Cloud cover (not measured here); looking at the sky through a window, students can make a rough estimate of cloud cover, in bins of 0–25%, 25–50%, 50–75%, and 75–100%.
- Orient students to this chart, and help the groups identify the most recent (first) entry for their variable.
- Assign students to incorporate this data onto the first day of a point/line graph, mirroring the one that you created for temperature.
3. Prompt students to revise their extreme weather event models with additional research.
- Reconvene students in their extreme weather groups from the Weather Interconnections activity. Prompt groups to revisit their initial weather models (Part B) and key meteorological terms (Part C) from their Extreme Weather Model Builder.
- Have students rewatch the appropriate extreme weather video, Extreme Weather: Drought (3:01), Hurricanes 101 (2:42), or Tornadoes 101 (3:01), to practice identifying how these weather variables influence extreme weather, and have them complete Part D of the Extreme Weather Model Builder for at least four weather variables.
- Using this information, have students create a revised model of their extreme weather event in Part E of the Extreme Weather Model Builder. This new model should contain:
- A visual representation of the extreme weather event.
- Labels to identify at least four weather variables.
- Arrows to show interactions between the variables.
- Plus and minus signs to show relationships between the variables.
4. Lead a discussion of the factors that are common or unique to extreme weather events.
- Ask groups to choose one member’s revised weather model (Part E) to post in a visible location in the classroom.
- Organize a gallery walk in which students visit other groups’ revised weather models sequentially, preparing through small group discussion to answer two questions:
- What do these extreme weather events have in common? (Listen for responses such as, ‘all weather events involve temperature and moving air’ or ‘hurricanes and tornadoes both involve high winds and they spin.’)
- What is unique to each extreme weather event? (Listen for responses such as, ‘each kind of event has different variables that are more important’ or ‘drought comes with very low precipitation, which is different from tornadoes and hurricanes.’)
- Reconvene the class, soliciting volunteer contributions. Direct students to record what they think are the most meaningful similarities and differences between the different extreme weather events in Part F of the Extreme Weather Model Builder.
Informally assess students’ understanding of the weather variables contributing to extreme weather events, as well as similarities between these events, by examining Parts B-F of their Extreme Weather Model Builder.
Extending the Learning
Step 2: Students may wish to explore additional elements of the National Weather Service website, including the "Active Alerts" and "Rivers, Lakes, and Rainfall" tabs. Both tabs are relevant to extreme weather events occurring currently. You may wish to consider having students gather and graph additional information. This can be done either daily if data collection during this and the following two activities span a weekend, or by incorporating more hourly information from the three-day forecast.
Subjects & Disciplines
- Read to define key weather variables.
- Collect and graph current, local data on these key weather variables.
- Revise a model of an extreme weather event to incorporate the roles of, and interactions between, key weather variables.
- Project-based learning
- Lab procedures
- 21st Century Student Outcomes
- 21st Century Themes
Critical Thinking Skills
Science and Engineering Practices
- Developing and using models
- Obtaining, evaluating, and communicating information
Connections to National Standards, Principles, and Practices
Common Core State Standards for English Language Arts & Literacy
- CCSS.ELA-LITERACY.RST.6-8.4: Determine the meaning of symbols, key terms, and other domain-specific words and phrases as they are used in a specific scientific or technical context relevant to grades 6-8 texts and topics.
Next Generation Science Standards
- Crosscutting Concept 2: Cause and Effect: Cause and effect relationships may be used to predict phenomena in natural or designed systems.
- MS-ESS2-5: Collect data to provide evidence for how the motions and complex interactions of air masses results in changes in weather conditions.
- Science and Engineering Practice 2: Developing and using models
What You’ll Need
Materials You Provide
- Anemometer (optional)
- Barometer (optional)
- Clear-walled, straight-sided vessel, such as a glass beaker
- Hygrometer (optional)
- Internet Access: Required
- Tech Setup: 1 computer per pair, Monitor/screen, Projector
- Computer lab
- Large-group instruction
- Large-group learning
- Small-group learning
- Small-group work
Weather describes the state of the atmosphere at a specific place and a short span of time. Six key variables contribute to weather: temperature, precipitation, pressure, wind, humidity, and cloudiness. Scientists and forecasters precisely measure these variables with tools such as a thermometer, barometer, and anemometer. These variables combine to influence what we feel when we walk outside, but also determine other important aspects of our lives, such as the ability of our food to grow in a given season.
Extreme weather events include hurricanes, tornadoes, and droughts. Each of these extreme weather events has the capacity to powerfully influence the lives of humans, and can sometimes even be deadly. Extreme weather events involve the same set of variables as other types of weather. For example, hurricanes depend on temperature and humidity—they thrive on warm, moist air. Droughts occur when precipitation is very low. Although the conditions leading to the formation of tornadoes are slightly less clear, these storms seem related to differing temperatures in colliding air masses.
atmospheric pressure: force per unit area exerted by the mass of the atmosphere as gravity pulls it to Earth.
climate: all weather conditions for a given location over a period of time.
cloudiness: amount of sky covered with clouds.
drought: period of greatly reduced precipitation.
humidity: amount of water vapor in the air.
hurricane: tropical storm with wind speeds of at least 119 kilometers (74 miles) per hour. Hurricanes are the same thing as typhoons, but usually located in the Atlantic Ocean region.
precipitation: all forms in which water falls to Earth from the atmosphere.
temperature: degree of hotness or coldness measured by a thermometer with a numerical scale.
tornado: a violently rotating column of air that forms at the bottom of a cloud and touches the ground.
weather: state of the atmosphere, including temperature, atmospheric pressure, wind, humidity, precipitation, and cloudiness.
wind: movement of air (from a high pressure zone to a low pressure zone) caused by the uneven heating of the Earth by the sun.
These worksheets demonstrate the skills needed to solve an equation when variables are found on both sides of the equal sign. To solve these, we will learn to group and isolate the variables. We will start you off with two of the same variable in an equation; you will need to combine those like terms and then proceed from there. As we get more advanced, we will look at problems that involve two completely different variables and how to solve for each. Most of these problems follow the format of combining like terms and performing any needed operations. This will seem very difficult at first but will become a breeze in time.
Learn how to solve an equation that has a variable on each side of the equal sign, such as x + 33.24 = 9 - 2x. This lesson will walk through it. In these types of equations, both sides of the equation have a term with the variable. To isolate the variable, we need to get all the variable terms to one side and the constant terms to the other side. Next, we combine like terms and then isolate the variable by multiplying or dividing.
Follow the steps to write an equation from the statement given, then solve for x. Example: When a number is multiplied by 2 and 1 is subtracted from the result, the result is the same as when that number is multiplied by 9 and subtracted from 120.
Solve for each equation. Check by substituting your solution to the equation. This is an example of the types of problems that you will find here: x + 9 = 2x + 4
For each of the 10 problems write an equation and solve. These problems will give you a written math statement that will need to be converted to an equation and then you can complete them.
You will be given 10 problems like this: -15 + 12x = 5x + 7 - 3x. You can solve them any number of ways.
For each, write an equation and solve. You will have math statements that you will need to convert to equations that have variables on both sides of the equal sign. Example: When you multiply a number by 12 and add 18 the result is the same as the product of the number and 3.
How to Solve Equations with Variables on Both Sides?
Until now, we have solved equations that contain one unknown variable on one side only. We will now look at what to do when the variable appears on both sides of the equation. In this series of problems, we are working with a single variable that is found on either side of an equation. The first step is to move all the non-variable terms of the equation to one side. Then you can collect the variable terms on the other side. Once you combine the variable terms, it gets pretty easy to solve. Students will work on solving for variables on each side of an algebraic equation using grouping, isolating, and balancing. The worksheets also provide several methods to check the answers. You can always plug the value you found back into the equation itself. If the equation turns out to be true, you are right. If the equation is false, it is time to go back to the drawing board.
Have you ever seen letters in your math problems? In algebra, we often encounter variables in our mathematical equations. As we go higher in grade level, more variables start appearing, sometimes on both sides of the equal sign.
How do you solve equations with variables on both sides? Is there a trick to it? What should you know when solving these questions? We will answer all these questions for you.
What are Variables?
You might have encountered a few letters in your math equations. Those letters are called variables. Variables are letters that represent a specific unknown value. Your goal is to find the unknown value of the variable that satisfies the equation.
For example, in the equation 3x = 15, x is the variable. There are 26 letters in the English alphabet, and any of them can be used as a variable, so you may see many different letters standing in for unknowns in math equations.
What are Equations?
An equation is a statement between two expressions showing that the two expressions are equal. The expressions can consist of variables and numbers.
In simple terms, an equation is any statement with an equal sign and two expressions on both sides of the equal sign.
An algebraic equation is considered solved when you have isolated and found a value for the variable. This value should verify the equation. To solve such an equation, you must follow a simple rule: whatever you do on one side of the equation, you must also do it on the other.
Another thing to remember when solving such a question is the PEMDAS rule, which gives the order of operations: Parentheses, Exponents, Multiplication or Division, then Addition or Subtraction.
The equivalent BODMAS rule states the order as Brackets, Orders (powers and roots), Division, Multiplication, Addition, then Subtraction.
Steps to Follow
Step 1: The first thing you need to do is identify the variable. This is the variable whose value you need to find.
Step 2: Then, remove the variable terms from one side. You can do that by adding, subtracting, multiplying, or dividing the same value on both sides of the equation.
Step 3: Remove the constants from one side of the equation, following the same method. Tip: cancel out the constant on the same side as your variable. You need to isolate your variable on one side.
Step 4: If there is a coefficient on the variable, remove it by dividing both sides by that coefficient.
Step 5: Confirm your answer. You can confirm your solution by putting the value of the variable you got into the original equation. If both sides are equal, your value is correct.
Example: Solve 10a - 12 = 7a + 15
We identify our variable as a and need to remove the variable terms from one side.
To do that, we subtract 7a from both sides:
10a - 12 – 7a = 7a + 15 – 7a
3a - 12 = 15
Now, we remove the constant from the left side by adding 12 to both sides:
3a - 12 + 12 = 15 + 12
3a = 27
Now, we remove the coefficient by dividing both sides by 3:
3a/3 = 27/3
a = 9
To confirm our answer, we substitute the value of a we got into the original equation:
10a – 12 = 7a + 15
10(9) – 12 = 7(9) + 15
90 - 12 = 63 + 15
78 = 78
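The five steps above can be sketched as a short program. This is a minimal sketch, assuming the equation is linear and written in the form a·x + b = c·x + d; the function name `solve_linear` is our own choice here, not something from the worksheets.

```python
# A linear equation a*x + b = c*x + d can be solved by moving variable terms
# to one side and constants to the other, exactly as in the steps above.
def solve_linear(a, b, c, d):
    """Solve a*x + b = c*x + d for x (assumes a != c)."""
    # Step 2: subtract c*x from both sides -> (a - c)*x + b = d
    # Step 3: subtract b from both sides   -> (a - c)*x = d - b
    # Step 4: divide by the coefficient    -> x = (d - b) / (a - c)
    return (d - b) / (a - c)

# The worked example above: 10a - 12 = 7a + 15
x = solve_linear(10, -12, 7, 15)
print(x)  # 9.0

# Step 5: confirm by substituting back into the original equation.
assert 10*x - 12 == 7*x + 15
```

The same helper solves the earlier sample problem -15 + 12x = 5x + 7 - 3x once its like terms are combined to 12x - 15 = 2x + 7.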
If you know the trick, solving equations with variables on both sides is easy. But the most important thing to do is to practice. Once you’ve practiced enough, you will never forget how to solve them.
An eigenvector of a matrix A is a nonzero vector X such that when A is multiplied with X, the direction of the resulting vector remains the same as that of X.
Mathematically, the above statement can be represented as:
AX = λX
where A is any square matrix, λ is an eigenvalue, and X is an eigenvector corresponding to that eigenvalue.
Here, we can see that AX is parallel to X, so X is an eigenvector.
Method to find the eigenvalues and eigenvectors of any square matrix A
We know that,
AX = λX
=> AX – λX = 0
=> (A – λI) X = 0 …..(1)
The above system has a nonzero solution X only if (A – λI) is singular. That means,
|A – λI| = 0 …..(2)
Equation (2) is known as the characteristic equation of the matrix.
The roots of the characteristic equation are the eigenvalues of the matrix A.
Now, to find the eigenvectors, we simply substitute each eigenvalue into (1) and solve it by Gaussian elimination: convert the matrix (A – λI) to row echelon form and solve the resulting linear system of equations.
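As a numerical sketch of this procedure, NumPy's `numpy.linalg.eig` computes the eigenvalues (the roots of the characteristic equation) and the corresponding eigenvectors directly; the 2×2 matrix below is an arbitrary example chosen for illustration, not one from the text.

```python
# Compute eigenvalues and eigenvectors numerically instead of solving
# |A - λI| = 0 by hand.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Characteristic equation: λ^2 - 7λ + 10 = 0, so the eigenvalues are 5 and 2.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # eigenvalues 5 and 2 (order may vary)

# Verify A X = λ X for each eigenpair (eigenvectors are the columns):
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)
```

Each returned eigenvector is normalized to unit length; any nonzero scalar multiple of it is an equally valid eigenvector.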
Some important properties of eigenvalues
Eigenvalues of real symmetric and Hermitian matrices are real.
Eigenvalues of real skew-symmetric and skew-Hermitian matrices are either purely imaginary or zero.
Eigenvalues of unitary and orthogonal matrices are of unit modulus, |λ| = 1.
If λ1, λ2, …, λn are the eigenvalues of A, then kλ1, kλ2, …, kλn are the eigenvalues of kA.
If λ1, λ2, …, λn are the eigenvalues of A, then 1/λ1, 1/λ2, …, 1/λn are the eigenvalues of A^(-1).
If λ1, λ2, …, λn are the eigenvalues of A, then λ1^k, λ2^k, …, λn^k are the eigenvalues of A^k.
Eigenvalues of A = eigenvalues of A^T (the transpose of A).
Sum of eigenvalues = trace of A (the sum of the diagonal elements of A).
Product of eigenvalues = |A| (the determinant of A).
Maximum number of distinct eigenvalues of A = size of A.
If A and B are two square matrices of the same order, then the eigenvalues of AB = the eigenvalues of BA.
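Several of these properties can be spot-checked numerically. The sketch below uses NumPy and an arbitrarily chosen symmetric 2×2 matrix; it is an illustration, not a proof.

```python
# Spot-check eigenvalue properties for a small example matrix.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
eig_A = np.linalg.eigvals(A)

# Sum of eigenvalues = trace of A.
assert np.isclose(eig_A.sum(), np.trace(A))
# Product of eigenvalues = |A| (the determinant).
assert np.isclose(eig_A.prod(), np.linalg.det(A))
# Eigenvalues of A equal the eigenvalues of A^T.
assert np.allclose(np.sort(eig_A), np.sort(np.linalg.eigvals(A.T)))
# Eigenvalues of kA are k times the eigenvalues of A.
k = 3.0
assert np.allclose(np.sort(np.linalg.eigvals(k * A)), np.sort(k * eig_A))
# Eigenvalues of A^(-1) are the reciprocals of the eigenvalues of A.
assert np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A))),
                   np.sort(1.0 / eig_A))
```

The sorts are needed because NumPy does not guarantee any particular ordering of the returned eigenvalues.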
This article has been contributed by Saurabh Sharma.
Summary: Students learn about sound and sound energy as they gather evidence that sound travels in waves. Teams work through five activity stations that provide different perspectives on how sound can be seen and felt. At one station, students observe oobleck (a shear-thickening fluid made of cornstarch and water) “dance” on a speaker as it interacts with sound waves (see Figure 1). At another station, the water or grain inside a petri dish placed on a speaker moves and makes patterns, giving students a visual understanding of the wave properties of sound. At another station, students use objects of various materials and shapes (such as Styrofoam, paper, cardboard, foil) to amplify or distort the sound output of a homemade speaker (made from another TeachEngineering activity). At another station, students complete practice problems, drawing waves of varying amplitude and frequency. And at another station, they experiment with string (and guitar wire and stringed instruments, if available) to investigate how string tightness influences the plucked sound generated, and relate this sound to high/low frequency. A worksheet guides them through the five stations. Some or all of the stations may be included, depending on class size, resources and available instructors/aides, and this activity is ideal for an engineering family event.
Engineers must understand how sound behaves in order to design sound equipment such as stereos, speakers, phones and radios. In order to design hearing aids and implants for hearing-impaired people, biomedical engineers also require an excellent understanding of sound energy and how our brains perceive sound. In this activity, students learn about sound waves and sound energy. In the discussion following the activity, students relate their knowledge of sound waves to engineering applications.
It is recommended that students complete the Yogurt Cup Speakers activity before conducting this activity so that they have their own homemade speakers to test, as well as a basic understanding of speakers.
After this activity, students should be able to:
- Provide evidence that sound is a wave.
- Define amplitude, wavelength and frequency.
- Explain how sound energy travels as waves, interacts with the eardrum and is perceived as sound.
More Curriculum Like This
Students learn about glaucoma—its causes, how it affects individuals and how biomedical engineers can identify factors that trigger or cause this eye disease, specifically the increase of pressure in the eye. Students sketch their own designs for a pressure-measuring eye device, prepare them to cond...
Students learn about the types of seismic waves produced by earthquakes and how they move the Earth. Students learn how engineers build shake tables that simulate the ground motions of the Earth caused by seismic waves in order to test the seismic performance of buildings.
Students learn about sound with an introduction to the concept of frequency and how it applies to musical sounds.
Students measure the wavelength of sounds and learn basic vocabulary associated with waves. As a class, they brainstorm the difference between two tuning forks and the sounds they produce. Then they come up with a way to measure that difference. Using a pipe in a graduated cylinder filled with water...
Each TeachEngineering lesson or activity is correlated to one or more K-12 science,
technology, engineering or math (STEM) educational standards.
All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN),
a project of D2L (www.achievementstandards.org).
In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics;
within type by subtype, then by grade, etc.
- Energy comes in different forms. (Grades 3 - 5)
- Technological innovation often results when ideas, knowledge, or skills are shared within a technology, among technologies, or across other fields. (Grades 9 - 12)
Following is a list of materials needed for an instructor-led demonstration and the five activity stations.
Teacher Class Demo
- clear plastic tub, ~1 x 1 ft (~30 x 30 cm) in size, although almost any size plastic tub will work to drop a pebble or marble into it to demonstrate waves
- access to water
- pebble or marble
- roll/stack of paper towels
Station 1: Oobleck Dance
- 0.4 cup (96 ml) water
- 0.6 cups (144 ml) cornstarch
- plastic spoon
- roll/stack of paper towels
- disposable bowl for mixing oobleck, such as a 20-oz (~590-ml) paper bowl; most store-bought disposable paper or foam bowls work well
- stereo with good bass output, such as the AXESS MSBT3907 2.1 Mini Entertainment System for $52 at Amazon; suitable stereos are often available at second-hand and thrift stores for low cost
- full-range or subwoofer speakers; often available at second-hand and thrift stores; note that the speaker will not be reusable after the activity since it will be covered with the oobleck mixture (see Figure 1); the previously mentioned example mini entertainment system from Amazon includes speakers
- (optional) plastic bag, to put over the speaker to help keep it clean, although this dampens the signal, resulting in less visual impact
- 1 lawn-and-leaf bag or plastic sheeting to cover the table (placed under the speaker and stereo), such as from this box of 18 39-gallon-size, extra strong trash bags for $7.63 at Amazon
- laptop or other music source device capable of playing low-frequency sound input; either downloaded or streaming, such as the 30-60-hertz bass tests on YouTube
- auxiliary cord, to connect music device and stereo, such as the 6-foot Fosmon 3.5 mm to RCA stereo audio Y adapter cable for $4 at Amazon; often available at second-hand and thrift stores
Station 2: Sound Visualization
- stereo or DVD player*; excellent options are often available at second-hand and thrift stores such as Goodwill for a few dollars; choose a stereo or DVD player intended for home entertainment systems; see note below; most phones and computers do not have good enough output for adequate sound wave visualization
- speaker; most sizes work for the activity, but larger speakers (larger than small computer speakers) provide better visualization; make sure the speakers have a connection type that is compatible with your stereo or DVD player
- CD with kid-friendly music
- roll/stack of paper towels
- 2 cups (480 ml) of uncooked long grain rice or bulgur wheat
- 1 cup (240 ml) of water
- 2 large plastic petri dishes with lids, such as 150-mm size, although smaller petri dishes or plastic plates also work fairly well; petri dishes with lids better contain any mess
Station 3: Testing Homemade Speakers
- yogurt cup speaker; either make one on your own, or have groups use the ones they created during the Yogurt Cup Speakers activity
- stereo or DVD player*; often available at second-hand and thrift stores such as Goodwill for a few dollars; see note below
- CD with kid-friendly music
- various objects of different materials and shapes, such as Styrofoam, paper and plastic plates, cups and/or bowls; cardboard; foil; students will use these materials to amplify or distort the yogurt cup speaker sound output
- 12 inches (30 cm) of masking tape
*Note: It is possible to use one stereo or DVD player for two stations (Sound Visualization and Testing Homemade Speakers) by using the left output for one station and the right output for the other station. But, if available, use separate stereos/DVD players, which also helps to minimize crowding at those stations. If separate systems are used, it is easier for the Sound Visualization station to use a stereo or DVD player with plugs for the speaker cables. However, for the Testing Homemade Speakers station, it is easier to use a system with spring-loaded terminals (circled in red in Figure 2).
Station 4: Practice Problems
- Seeing Sound Worksheet, one per student; this worksheet guides students through all five stations, including providing the four practice problems for this station
Station 5: How Do Stringed Instruments Make Sound?
- 6 ft (183 cm) of cotton string, cut into 2-foot (61-cm) lengths
- roll/stack of paper towels
- 1 petri dish (with lid), filled halfway with water; a 150-mm diameter plastic petri dish works well
- (optional) 1 ft (~30 cm) guitar wire
- (optional) guitar or other stringed instrument
How do we hear? (Ask the class for ideas.) When I speak, you can hear what I say, correct? Why is that? Well, let’s work backwards. When something around you makes a noise, your brain receives an electrical signal from your eardrum that you heard a sound (show Figure 3 or sketch it on the classroom board). Your eardrum sent that signal to your brain because the sound “tapped” on your eardrum, causing it to vibrate. But what caused your eardrum, many feet from the sound that came from my mouth, to start moving when I spoke? (Ask the class to share their ideas.)
Well, let’s visualize the concept. When I began to talk, a sound wave was created. (If possible, draw a simple illustration on the board showing waves, representing sound, coming from a mouth.) To understand what that means, let’s think of where we have seen waves before. Picture the waves you have seen on water: the ocean, a lake, a pool or even a puddle. What about when a rock hits the surface of water? (Demonstrate this by dropping a pebble or marble into a tub of water.)
When the rock hits the water, the water has to move out of the way of the rock. This disturbance around the rock moves in all directions as a wave of movement. In general, waves are caused when something starts to vibrate (a disturbance) and then causes the air, liquid or water around it to also vibrate (be disturbed).
Gently place your hand on your throat. Now speak or hum. Can you feel the vocal cords in your throat vibrating? When I start speaking, my vocal cords vibrate and make the air around them vibrate as well. This vibrating air has places with higher and lower pressure (identify these wave characteristics on the classroom board, as shown on Figure 4); the high pressure forms the crests of waves and the low pressure forms the troughs.
This sound wave travels through air and eventually reaches your eardrum—a membrane in the ear canal. The high pressure part of the sound wave taps more on your eardrum than the low pressure part. This pattern is transmitted to your brain, which interprets it as sound.
So although sound travels as a wave, you cannot see it in the air. It would be great to have proof of this in order to understand it better! How could we find evidence that sound is a wave? (Ask students for suggestions and discuss as a class.)
In today’s activity, we will have many opportunities to gain evidence that sound is a wave and to practice describing waves in terms of their amplitude, wavelength and frequency.
The purpose of today’s activity is to learn about sound and sound energy. Who else might need to understand sound? (Listen to student ideas.) Look around you—do you see any devices in the room that make noise or sounds? (Possible examples: Computers, tablets, stereos, phones, public address speakers, printers, fire alarms, TVs, school bells.) Who designed those devices? That’s right, engineers. Can you think of other inventions that would require engineers to have a good understanding of sound and sound energy? (Listen to student ideas.) There are many! For example, engineers design hearing aids for people who are deaf. Engineers also make ultrasound devices that doctors use to see organs and bones inside the human body. In order to design all these devices so they work as intended, engineers need to know a lot about sound energy and the way it travels as waves.
amplitude: The height of a wave.
frequency: The number of waves passing a point in a certain time; related to the inverse of wavelength.
hertz: Unit of frequency (1/s).
oobleck: A cornstarch and water mixture that is a shear-thickening fluid, which means that it spreads out like a fluid when at rest and firms up like a solid when subjected to a force. Oobleck is an inexpensive and non-toxic example of a non-Newtonian fluid. If placed on a large subwoofer at sufficiently high volume, it thickens and forms standing waves in response to low-frequency sound waves from the speaker.
pitch: (music) The highness or lowness of a sound.
wavelength: The distance between two peaks or troughs of a wave.
Waves are all around us. Light travels as waves and so does sound. What is different about a sound wave from a low sound compared to the wave from a high sound? What is different about a loud sound compared with a quiet sound? It is all explained by the shape of the wave (see Figure 5).
The height of a wave is its amplitude. What do you think of when you hear the word amplitude? Usually we think of volume. A loud sound wave has large amplitude, and a quiet sound wave has smaller amplitude. Wavelength and frequency are also important wave characteristics that determine what a sound wave sounds like. The wavelength is the distance before the wave repeats itself. You can measure wavelength between two crests or two troughs. (Ask students to use a ruler to measure the wavelength and amplitude of a wave drawn on the classroom board.)
Another way we describe waves is by their frequencies. The frequency is how fast a wave is waving or, how many wavelengths pass a certain point in a given time. Waves travel the same speed through air whether they have small or long wavelengths, high or low frequency. (Point to a drawn wave on the classroom board.) Let’s say we ask how many wavelengths will pass this point in 1 second. If a wave has very long wavelengths, only a few wave crests (or troughs) pass this point over that time period. So that wave has low frequency. If a wave has very small wavelengths, then many wave crests (or troughs) would pass this point in 1 second. So, this second wave has high frequency.
High- and low-frequency waves sound different. High-frequency sounds have a high pitch and low-frequency sounds have a low pitch. Low-frequency sounds that are still high enough for us to hear are in the range of 20-60 hertz; the lower the frequency, the lower the pitch of the sound (and longer the wavelength). Sound waves are how sound energy travels. Waves with higher frequency and amplitude have a larger amount of energy, which can be transferred to other objects (such as our eardrums).
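The inverse relationship between frequency and wavelength can be made concrete with a short calculation. This is an illustrative sketch, assuming sound travels at roughly 343 m/s in room-temperature air; the `wavelength_m` helper is our own, not part of the activity materials.

```python
# Wavelength and frequency are inversely related for sound in air:
# wavelength = speed of sound / frequency.
SPEED_OF_SOUND = 343.0  # m/s, approximate speed of sound in air at room temperature

def wavelength_m(frequency_hz):
    """Return the wavelength in meters of a sound wave of the given frequency."""
    return SPEED_OF_SOUND / frequency_hz

# Low-frequency (low-pitch) sounds have long wavelengths...
print(wavelength_m(20))    # 17.15 meters (near the lowest audible pitch)
# ...while high-frequency (high-pitch) sounds have short wavelengths.
print(wavelength_m(1000))  # 0.343 meters
```

This is why the oobleck and sound-visualization stations call for low-frequency (20-60 hertz) tones: their long, high-energy pulses are slow enough to produce visible motion.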
Before the Activity
- Gather materials and make copies of the Seeing Sound Worksheet, which guides students through all five stations.
- The activity is flexible in that you may run all or just some of the five stations, depending on class size, resources and available adults for supervision. If it is not possible to set up all activity stations, modify the worksheet to exclude unused stations. Since groups of four students are ideal at each workstation, for large class sizes, you may want to make some duplicate stations (or have students work on other assignment tasks) so that you have the same number of groups as stations without having to increase the group size too much.
- Set up each activity station. Label each with a visible number. Refer to the station setup notes, below, including goals and procedures. Refer to the worksheet for station-specific questions to guide activity exploration.
- If students complete the Yogurt Cup Speakers activity, have them save their speakers (or just save several of the more well-made speakers) for use at the Testing Homemade Speakers station. Otherwise, create a yogurt cup speaker on your own to use at that station.
Station 1: Oobleck Dance
Background: Oobleck is a cornstarch and water mixture that is a shear-thickening fluid, which means that it spreads out like a fluid when at rest and firms up like a solid when subjected to a force. In this activity, the oobleck “dances” on the speaker as songs play because more intense sound pulses make the oobleck briefly behave more like a solid and take on interesting shapes, and then during less intense moments in the music, the oobleck relaxes. Oobleck responds best to low-frequency (30-60 hertz), loud sounds.
- This station is best managed by an adult to monitor the optimum selection of sound frequencies and speaker volume, and to manage/minimize the oobleck mess.
- The oobleck tends to dry out as it dances on the speaker, so periodically add water to it. Its state of hydration influences which (low) frequencies it responds to best. When you notice oobleck start to crumble and flake, add a small amount of water to the original mixture to restore its properties.
- To understand that when a sound wave interacts with oobleck, the oobleck stiffens temporarily.
o If the sound wave has a high frequency (short wavelength), the oobleck does not have time to relax between pulses; it stays stiff.
o If the oobleck meets a low-frequency sound, it has time to stiffen and then relax.
- To see standing wave patterns in the oobleck—when left on a repeating low-frequency tone.
- Cover the desk/table for this station with plastic.
- Remove the outer mesh that covers the speaker cone. If desired, protect the speaker by placing a plastic bag over it to help keep it clean, although this dampens the signal, resulting in less visual impact.
- Use an auxiliary cord to connect the laptop or other music source to the stereo.
- Oobleck requires a low-frequency tone in order to respond with interesting shapes (30-60-hertz tones), so choose a range of low-frequency sounds to play sequentially, ranging from 20-100 hertz, to be able to see differences in oobleck response to frequency. Find low-frequency tones online using search terms such as “30 hertz bass test” or “subwoofer test.”
Creation of Oobleck
- 1 cup of oobleck is sufficient for the entire class to complete this activity.
- Using a plastic spoon, mix 1 part water (0.4 cups) to 1.5 parts (0.6 cups) cornstarch.
Procedures—Make Oobleck Dance!
- Place a spoonful of oobleck on the speaker cone (see Figure 1). Expect the oobleck to first stiffen to show a standing wave pattern as it responds to a repeating low-frequency tone.
- If poked with a spoon or finger, the oobleck may creep and crawl into animal-like shapes. Advise students to use caution since the speaker cone is fragile and easily broken with a plastic spoon or finger.
- Direct students to experiment with the speaker volume to explore how varying the amplitude alters the oobleck response.
Station 2: Sound Visualization
- To gain a visual understanding of the wave properties of sound. Although the movement of sound through air cannot be seen, when water (or grains) are placed on a petri dish (or plate) and positioned on top of a speaker, the resulting patterns provide some visual evidence.
- To see the effects of amplitude changes. When the volume is increased, the wave amplitude increases and the sound energy can be enough to make drops of water (or grains) leap.
- To see the effects of frequency (wavelength) changes. The patterns displayed by the water (or grains) change, depending on the sound frequency (and thus, wavelength).
- Attach a speaker to the stereo.
- Pour water in a large petri dish so it is about half full. Cover it with the petri dish lid. Place the petri dish on top of the speaker.
- Prepare a second, covered petri dish with grains inside.
- Start the kid-friendly music and let the sound-visualization begin.
- Direct students to adjust the volume and observe the water (or grains). What do they notice?
Station 3: Testing Homemade Speakers
- To use the student-created homemade speakers to aid in sound visualization. Doing this further cements their understanding of how speakers generate sound through vibration.
- To give students a chance to optimize a speaker’s output.
- To use assorted materials to amplify or distort the sound output of the homemade speaker.
- For each team, have an adult connect its yogurt cup speaker’s magnet wire to the stereo.
- Using masking tape, secure the magnet wire to the table to minimize the likelihood of wires pulling out from the stereo during the activity.
- Near the speaker, place a box of various objects of different materials and shapes.
- Place two pieces of masking tape across the stereo volume knob to prevent students from adjusting the volume.
Direct students to experiment with the supplied materials to amplify or distort the sound output of the yogurt cup speaker. Do not permit them to adjust the stereo volume. If students need prompting, suggest they hold up different materials close to and on the speaker, turn them different ways and/or combine them to see what happens. (See the Seeing Sound Worksheet Answer Key for examples of what students might discover.)
Station 4: Practice Problems
Learning Goals: To cement what students know about wave characteristics: frequency, wavelength, amplitude.
Setup: None required beyond labeling the station location.
Procedure: Direct students to complete the “Station 4: Practice Problems” on the worksheet, in which they draw waves of varying amplitude and frequency.
Station 5: How Do Stringed Instruments Make Sound?
- To use strings (and possibly stringed instruments) to explore the generation of sound.
- To further expand students’ understanding of wave amplitude, wavelength and frequency.
- Provide several 2-foot lengths of string.
- If available, also provide guitar wire and/or a guitar or other stringed instrument.
- Provide a petri dish and lid, half-filled with water.
- Guided by the worksheet questions, students experiment with the string and investigate how string tightness influences the sound produced, and relate this sound to high/low frequency.
- Students pluck the string on top of a petri dish of water, looking for visual evidence that sound is a wave.
- Have them do the same or similar experiments with the guitar wire and stringed instruments.
With the Students—Overall Procedure
- Ask students the pre-activity questions, as provided in the Assessment section. Then present to the class the Introduction/Motivation and Background sections, which include a quick class demonstration. (10 minutes)
- As a class, briefly preview each station. Discuss the learning goals and instructions on what to do at each station. (10 minutes)
- Divide the class into groups of four (ideally).
- Assign each group a different numbered station at which to begin the activity.
- Depending on class size, consider cycling the stations every 5-8 minutes. (25-30 minutes)
- If necessary, give students some extra time at activity end to finish answering the worksheet questions. (5 minutes)
- Close with a class discussion to relate the activity back to the real world of engineering and the design work of engineers who must understand sound waves. Refer to the suggested questions in the Assessment section.
Make sure that students do not adjust the volume at the Testing Homemade Speakers station. If the volume is too loud on a powerful speaker it has the potential to cause the magnet wire to smoke. Also, if wires become unplugged between the speaker and the stereo, students must ask an adult to plug them back in correctly (with power turned off).
- Choose stereos with adequate bass output as well as a woofer or subwoofer speaker.
- If possible, place the oobleck station in an area without carpet to make any cleanup easier. Cover the table hosting this station with plastic to contain any mess.
- Keep paper towels at the stations with oobleck, water or grain, and encourage students to clean up messes immediately.
- Do not dispose of oobleck in a sink because it may clog the drain. Instead, dispose of it in the trash.
Class Discussion/Questions: Ask students the following questions. Review their answers to gain an understanding of their base knowledge of sound.
- If I have a low-pitch sound that is very quiet, what can we say about its wavelength and amplitude? (Answer: The frequency is low, meaning the wavelength is long, and the amplitude is small. Draw this on the classroom board.)
- If I have a second sound that is also quiet, but high-pitch, what does this wave look like? (Answer: The frequency is high and the amplitude is small. Draw this on the board.)
- If I have a third wave that is high-pitch and loud, what does this wave look like? (Answer: The frequency is high and the amplitude is large. Draw this on the board.)
Activity Embedded Assessment
Worksheet: Have students use the Seeing Sound Worksheet to guide them through the five stations, individually answering the questions as they go. Review their answers to gauge their engagement and comprehension.
Class Discussion/Questions: At activity end, lead a class discussion so students can review and share what they learned at the five stations. Also ask them the following questions to relate the activity to real-world engineering applications that require an understanding of sound waves. Students' answers reveal their depth of understanding. Ask the students:
- Why do engineers need to know about sound waves? (Possible answers: To design sound and music equipment such as stereos, speakers, amplifiers, phones and radios as well as hearing aids and implants for hearing-impaired people. Also public address systems, alarms and warning systems. Design of buildings and spaces to minimize or optimize sound such as libraries, recording studios and concert halls. Engineers design radar and sonar systems that use sound waves, too.)
- Waves are all around us and are not only produced by sound energy. What other kinds of energy travel as waves? (Guide the discussion towards the wider realization that waves are very common; light travels as waves, and other forms of non-visible electromagnetic radiation such as radio waves, microwaves and UV waves are all around us. We cannot hear ultrasound waves, but engineers use them to create medical diagnostic tools such as ultrasound machines, ultrasonic equipment to clean jewelry, and instruments to detect invisible flaws and cracks in metal structures such as airplane parts and pipes.)
Assign student teams to use an app such as Explain Everything™ Classic to create video presentations about sound waves (see https://itunes.apple.com/us/app/explain-everything-interactive/id431493086?mt=8). Have them film the "sound wave evidence" observed at the stations in this activity and then edit the videos to include audio and/or visual scientific explanations. Have students share their finished videos with the class or with other classes at school or during an engineering-focused family event.
This activity is also well-suited for an engineering family event. Consider completing this activity in class, and then at the family event have students be the facilitators (with adult support) at each station to explain the underlying science and related engineering relevance to their families.
- For lower grades, use one well-made yogurt cup speaker at the Testing Homemade Speakers station instead of having each group test its own.
- For higher grades, require students to create and then test their own speakers, as described in the Yogurt Cup Speakers activity.
Nave, Carl R. Transverse and Longitudinal Waves. 2012. HyperPhysics, Department of Physics and Astronomy, Georgia State University, Atlanta, GA. Accessed March 2016. http://hyperphysics.phy-astr.gsu.edu/hbase/sound/tralon.html
Science Mission Directorate. “Anatomy of an Electromagnetic Wave.” Last updated August 13, 2014. Mission: Science. National Aeronautics and Space Administration. Accessed March 2016. http://missionscience.nasa.gov/ems/02_anatomy.html
Copyright© 2011 by Regents of the University of Colorado
Supporting Program: Integrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder
The contents of these digital library curricula were developed by the Integrated Teaching and Learning Program under National Science Foundation GK-12 grant no. DGE 0946502. However, these contents do not necessarily represent the policies of the National Science Foundation, and you should not assume endorsement by the federal government.
Last modified: May 4, 2017
Comparison of instruction set architectures
An instruction set architecture (ISA) is an abstract model of a computer, also referred to as architecture or computer architecture. A realization of an ISA is called an implementation. An ISA permits multiple implementations that may vary in performance, physical size, and monetary cost (among other things). Because the ISA serves as the interface between software and hardware, software that has been written for an ISA can run on different implementations of the same ISA. This has made it easy to achieve binary compatibility between different generations of computers and has enabled the development of computer families. Both of these developments have helped to lower the cost of computers and to increase their applicability. For these reasons, the ISA is one of the most important abstractions in computing today.
An ISA defines everything a machine language programmer needs to know in order to program a computer. What an ISA defines differs between ISAs; in general, ISAs define the supported data types, what state there is (such as the main memory and registers) and their semantics (such as the memory consistency and addressing modes), the instruction set (the set of machine instructions that comprises a computer's machine language), and the input/output model.
Computer architectures are often described as n-bit architectures. Today n is usually 8, 16, 32, or 64, but other sizes have been used. This is actually a strong simplification: a computer architecture often has a few more or less "natural" data sizes in the instruction set, but the hardware implementation of these may be very different. Many architectures have instructions operating on half and/or twice the size of the processor's major internal datapath; examples include the 8080, the Z80, and the MC68000. On this type of implementation, an operation of twice the width typically also takes around twice as many clock cycles (which is not the case on high-performance implementations). On the 68000, for instance, this means 8 instead of 4 clock ticks, and this particular chip may be described as a 32-bit architecture with a 16-bit implementation. The external data bus width is often not useful for determining the width of the architecture; the NS32008, NS32016, and NS32032 were basically the same 32-bit chip with different external data buses. The NS32764 had a 64-bit bus but used 32-bit registers.
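To make the datapath point concrete, here is a small illustrative sketch (in Python, not modeling any particular chip) of how a 16-bit implementation of a 32-bit architecture carries out a 32-bit addition: the ALU makes two 16-bit passes with a carry between them, which is why the wider operation takes roughly twice as many cycles.

```python
# Hypothetical sketch: a 32-bit add performed on a 16-bit ALU in two
# passes, the way a 16-bit implementation of a 32-bit architecture
# (such as the 68000) effectively operates.
MASK16 = 0xFFFF

def add32_on_16bit_alu(a, b):
    lo = (a & MASK16) + (b & MASK16)        # pass 1: add the low halves
    carry = lo >> 16                        # carry out of the low half
    hi = (a >> 16) + (b >> 16) + carry      # pass 2: high halves plus carry
    return ((hi & MASK16) << 16) | (lo & MASK16)

print(hex(add32_on_16bit_alu(0x0001FFFF, 0x00000001)))  # 0x20000
```

The result also wraps modulo 2^32, just as a real 32-bit register would.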
The width of addresses may or may not be different from the width of data. Early 32-bit microprocessors often had a 24-bit address, as did the System/360 processors.
The number of operands is one of the factors that may give an indication about the performance of the instruction set. A three-operand architecture will allow
A := B + C
to be computed in one instruction.
A two-operand architecture will allow
A := A + B
to be computed in one instruction, so two instructions will need to be executed to simulate a single three-operand instruction:
A := B
A := A + C
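As an illustrative sketch (a hypothetical register machine, not any real ISA), the two-operand sequence can be simulated in a few lines of Python:

```python
# Hypothetical two-operand machine: the destination is always also the
# first source, so A := B + C requires a move followed by an add.
regs = {"A": 0, "B": 2, "C": 3}

def mov(dst, src):
    regs[dst] = regs[src]               # dst := src

def add(dst, src):
    regs[dst] = regs[dst] + regs[src]   # dst := dst + src

mov("A", "B")     # A := B
add("A", "C")     # A := A + C
print(regs["A"])  # 5
```

A three-operand machine would compute the same result with a single add taking two sources and a distinct destination.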
An architecture may use "big" or "little" endianness, or both, or be configurable to use either. Little endian processors order bytes in memory with the least significant byte of a multi-byte value in the lowest-numbered memory location. Big endian architectures instead order them with the most significant byte at the lowest-numbered address. The x86 architecture as well as several 8-bit architectures are little endian. Most RISC architectures (SPARC, Power, PowerPC, MIPS) were originally big endian (ARM was little endian), but many (including ARM) are now configurable.
Endianness only applies to processors that allow individual addressing of units of data (such as bytes) that are smaller than the basic addressable machine word.
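The difference between the two byte orders is easy to see by packing the same 32-bit value both ways; this sketch uses Python's standard struct module:

```python
# The same 32-bit value laid out in memory under each byte order.
import struct

value = 0x0A0B0C0D

little = struct.pack("<I", value)   # least significant byte first
big = struct.pack(">I", value)      # most significant byte first

print(little.hex())   # 0d0c0b0a
print(big.hex())      # 0a0b0c0d

# The two layouts are byte-for-byte reverses of each other.
assert little == big[::-1]
```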
Usually the number of registers is a power of two, e.g. 8, 16, or 32. In some cases a hardwired-to-zero pseudo-register is included as "part" of an architecture's register file, mostly to simplify indexing modes. This table counts only the integer registers usable by general instructions at any moment. Architectures always include special-purpose registers such as the program counter (PC); those are not counted unless mentioned. Note that some architectures, such as SPARC, have register windows; for those architectures, the count below indicates how many registers are available within a register window. Non-architected registers used for register renaming are also not counted.
Note that "load-store", a common type of architecture, is a synonym for "Register Register" below: no instructions access memory except dedicated load (to register) and store (from register) instructions, with the possible exception of atomic memory operations for locking.
The table below compares basic information about the instruction sets implemented in various CPU architectures:
| Architecture | Bits | Year | Operands | Design | Type | Registers | Instruction encoding | Branch evaluation | Endianness | Extensions | Open / royalty-free |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 6502 | 8 | 1975 | 1 | Register-Memory | CISC | 3 | Variable (8- to 32-bit) | Condition register | Little | | |
| 6809 | 8 | 1978 | 1 | Register-Memory | CISC | 9 | Variable (8- to 32-bit) | Condition register | Big | | |
| 680x0 | 32 | 1979 | 2 | Register-Memory | CISC | 8 data + 8 address | Variable | Condition register | Big | | |
| 8080 | 8 | 1974 | 2 | Register-Memory | CISC | 8 | Variable (8 to 24 bits) | Condition register | Little | | |
| 8051 | 32 (8→32) | 1977? | 1 | Register-Register | CISC | 32 (in 4 banks of 8) | Variable (8-bit to 128 bytes) | Compare and branch | Little | | |
| x86 | 16, 32, 64 | 1978 | 2 | Register-Memory | CISC | 8 (16 in 64-bit mode) | Variable (1 to 15 bytes) | Condition code | Little | x87, IA-32, MMX, 3DNow!, SSE, SSE2, PAE, x86-64, SSE3, SSSE3, SSE4, BMI, AVX, AES, FMA, XOP, F16C | |
| Alpha | 64 | 1992 | 3 | Register-Register | RISC | 32 (including "zero") | Fixed (32-bit) | Condition register | Bi | ? | No |
| ARC | 16/32 (ARCv2) | 1996 | 3 | Register-Register | RISC | 16 or 32 including SP; user can increase to 60 | Variable (16- and 32-bit) | Compare and branch | Bi | APEX user-defined instructions | |
| ARM (A32) | 32 | ? | 3 | Register-Register | RISC | 16 (including PC) | Fixed (32-bit) | Condition code | Bi | NEON, Jazelle | |
| Thumb (T32) | 32 | ? | 3 | Register-Register | RISC | 16 (including PC) | Thumb: Fixed (16-bit); Thumb-2: Variable (16- and 32-bit) | Condition code | Bi | NEON, Jazelle | |
| A64 | 64 | 2011 (ARMv8-A) | 3 | Register-Register | RISC | 32 (including the stack pointer/"zero" register) | Fixed (32-bit) | Condition code | Bi | None: all ARMv7 extensions are non-optional | |
| AVR | 8 | 1997 | 2 | Register-Register | RISC | 32 (16 on "reduced architecture") | Variable (mostly 16-bit; four instructions are 32-bit) | Condition register, skip on an I/O bit, compare and skip | Little | | |
| AVR32 | 32 (Rev 2) | 2006 | 2–3 | ? | RISC | 15 | Variable | ? | Big | Java Virtual Machine | |
| Blackfin | 32 | 2000 | 3 | Register-Register | RISC | 2 accumulators, 8 data, 8 pointer, 4 index, 4 buffer registers | Variable (16- or 32-bit) | Condition code | Little | | |
| CDC 6000 | 60 | 1964 | 3 | Register-Memory | RISC | 24 (8 18-bit address, 8 18-bit index, 8 60-bit operand registers) | Variable (15-, 30-, and 60-bit) | Compare and branch | n/a | Compare/Move Unit, additional Peripheral Processing Units | |
| Crusoe | 32 | 2000 | 1 | Register-Register | VLIW | ? | Variable (64- or 128-bit in native mode, 15 bytes in x86 emulation) | Condition code | Little | | |
| Elbrus 2000 | 64 | 2014 (Elbrus-4S) | 1 | Register-Register | VLIW | 8–64 | 64 | Condition code | Little | Just-in-time dynamic translation: x87, IA-32, MMX, SSE, SSE2, x86-64, SSE3, AVX | |
| eSi-RISC | 16/32 | 2009 | 3 | Register-Register | RISC | 8–72 | Variable (16- or 32-bit) | Compare and branch, and condition register | ? | | |
| Itanium (IA-64) | 64 | 2001 | ? | Register-Register | EPIC | 128 | Fixed (128-bit bundles with a 5-bit template tag and 3 instructions, each 41 bits long) | ? | ? | Intel Virtualization Technology | No / No |
| M32R | 32 | 1997 | 3 | Register-Register | RISC | 16 | Variable (16- or 32-bit) | Condition register | Bi | | |
| Mico32 | 32 | 2006 | 3 | Register-Register | RISC | 32 | Fixed (32-bit) | Compare and branch | Big | User-defined instructions | Yes / Yes |
| MIPS | 64 (32→64) | 1981 (Release 6) | 1–3 | Register-Register | RISC | 4–32 (including "zero") | Fixed (32-bit) | Condition register | Bi | MDMX, MIPS-3D | Yes / Yes |
| MMIX | 64 | 1999 | 3 | Register-Register | RISC | 256 | Fixed (32-bit) | ? | Big | ? | Yes / Yes |
| NS320xx | 32 | 1982 | 5 | Memory-Memory | CISC | 8 | Variable, Huffman-coded, up to 23 bytes long | Condition code | Little | BitBlt instructions | |
| OpenRISC | 32, 64 | 2010 (v1.3) | 3 | Register-Register | RISC | 16 or 32 | Fixed | ? | ? | ? | Yes / Yes |
| PA-RISC | 64 (32→64) | 1986 (v2.0) | 3 | Register-Register | RISC | 32 | Fixed (32-bit) | Compare and branch | Big → Bi | MAX | No |
| PDP-8 | 12 | 1966 | ? | Register-Memory | CISC | 1 accumulator, 1 multiplier quotient register | Fixed (12-bit) | Condition register, test and branch | ? | EAE (Extended Arithmetic Element) | |
| PDP-11 | 16 | 1970 | 3 | Memory-Memory | CISC | 8 (includes the stack pointer, though any register can act as one) | Fixed (16-bit) | Condition code | Little | Floating Point, Commercial Instruction Set | |
| POWER, PowerPC, Power ISA | 32/64 (32→64) | 1990 (v3.0B) | 3 | Register-Register | RISC | 32 | Fixed (32-bit), Variable | Condition code | Big/Bi | AltiVec, APU, VSX, Cell | Yes / Yes |
| RISC-V | 32, 64, 128 | 2010 (v2.2) | 3 | Register-Register | RISC | 32 (including "zero") | Variable | Compare and branch | Little | ? | Yes / Yes |
| RX | 64/32/16 | 2000 | 3 | Memory-Memory | CISC | 4 integer + 4 address | Variable | Compare and branch | Little | | No |
| SPARC | 64 (32→64) | 1985 (OSA2017) | 3 | Register-Register | RISC | 32 (including "zero") | Fixed (32-bit) | Condition code | Big → Bi | VIS | Yes / Yes |
| SuperH (SH) | 32 | 1994 | 2 | Register-Register | RISC | 16 | Fixed (16- or 32-bit), Variable | Condition code | ? | | |
| System/360 → z/Architecture | 64 (32→64) | 1964 | 2 (most); 3 (FMA, distinct operands); 4 (some vector instructions) | ? | CISC | 16 | Variable (16-, 32-, or 48-bit) | Condition code, compare and branch | Big | | No / No |
| Transputer | 32 (4→64) | 1987 | 1 | Stack machine | MISC | 3 (as stack) | Variable (8 to 120 bytes) | Compare and branch | Little | | |
| VAX | 32 | 1977 | 6 | Memory-Memory | CISC | 16 | Variable | Compare and branch | Little | | |
| Z80 | 8 | 1976 | 2 | Register-Memory | CISC | 17 | Variable (8 to 32 bits) | Condition register | Little | | |
- Central processing unit (CPU)
- CPU design
- Comparison of CPU microarchitectures
- Instruction set
- Benchmark (computing)
- da Cruz, Frank (October 18, 2004). "The IBM Naval Ordnance Research Calculator". Columbia University Computing History. Retrieved January 28, 2019.
- "Russian Virtual Computer Museum – Hall of Fame – Nikolay Petrovich Brusentsov".
- Trogemann, Georg; Nitussov, Alexander Y.; Ernst, Wolfgang (2001). Computing in Russia: the history of computer devices and information technology revealed. Vieweg+Teubner Verlag. pp. 19, 55, 57, 91, 104–107. ISBN 978-3-528-05757-2.
- The LEA (8086 & later) and IMUL-immediate (80186 & later) instructions accept three operands; most other instructions of the base integer ISA accept no more than two operands.
- ARMv8 Technology Preview
- "ARM goes 64-bit with new ARMv8 chip architecture". Retrieved 26 May 2012.
- "AVR32 Architecture Document" (PDF). Atmel. Retrieved 2008-06-15.
- "Blackfin manual" (PDF). analog.com.
- "Blackfin Processor Architecture Overview". Analog Devices. Retrieved 2009-05-10.
- "Blackfin memory architecture". Analog Devices. Archived from the original on 2011-06-16. Retrieved 2009-12-18.
- Since memory is an array of 60-bit words with no means to access sub-units, big endian vs. little endian makes no sense. The optional CMU unit uses big endian semantics.
- "Crusoe Exposed: Transmeta TM5xxx Architecture 2". Real World Technologies.
- Alexander Klaiber (January 2000). "The Technology Behind Crusoe Processors" (PDF). Transmeta Corporation. Retrieved December 6, 2013.
- "LatticeMico32 Architecture". Lattice Semiconductor. Archived from the original on 23 June 2010.
- "LatticeMico32 Open Source Licensing". Lattice Semiconductor. Archived from the original on 20 June 2010.
- MIPS64 Architecture for Programmers: Release 6
- MIPS32 Architecture for Programmers: Release 6
- MIPS Open
- OpenRISC Architecture Revisions
- "PDP-8 Users Handbook" (PDF). bitsavers.org. 2019-02-16.
- "Power ISA Version 3.0". openpowerfoundation.org. 2016-11-30. Retrieved 2017-01-06.
- "RISC-V ISA Specifications". Retrieved 17 June 2019.
- Oracle SPARC Processor Documentation
- SPARC Architecture License
Here we'll look at controlling the position and size of an element.
Box Element Positioning
So far, we know how to position elements either horizontally or vertically inside a box. We will often need more control over the position and size of elements within the box. For this, we first need to look at how a box works.
The position of an element is determined by the layout style of its container. For example, the position of a button in a horizontal box is to the right of the previous button, if any. The size of an element is determined by two factors, the size that the element wants to be and the size you specify. The size that an element wants to be is determined by what is in the element. For example, a button's width is determined by the amount of text inside the button.
An element will generally be as large as it needs to be to hold its contents, and no larger. Some elements, such as textboxes, have a default size, which will be used. A box will be large enough to hold the elements inside the box. A horizontal box with three buttons in it will be as wide as the three buttons, plus a small amount of padding.
In the image below, the first two buttons have been given a suitable size to hold their text. The third button is larger because it contains more content. The width of the box containing the buttons is the total width of the buttons plus the padding between them. The height of the buttons is a suitable size to hold the text.
You may need to have more control over the size of an element in a window. There are a number of features that allow you to control the size of an element. The quick way is to simply add the width and height attributes on an element, much like you might do on an HTML img tag. An example is shown below (Example 3.2.1):
<button label="OK" width="100" height="40"/>
However, it is not recommended that you do this. It is not very portable and may not fit in with some themes. A better way is to use style properties, which work similarly to style sheets in HTML. The following CSS properties can be used.
- width: This specifies the width of the element.
- height: This specifies the height of the element.
By setting either of the two properties, the element will be created with that width and height. If you specify only one size property, the other is calculated as needed. The size of these style properties should be specified as a number followed by a unit.
The sizes are fairly easy to calculate for non-flexible elements. They simply obey their specified widths and heights, and if the size wasn't specified, the element's default size is just large enough to fit the contents. For flexible elements, the calculation is slightly trickier.
Flexible elements are those that have a flex attribute set to a value greater than 0. Recall that flexible elements grow and shrink to fit the available space. Their default size is still calculated the same as for inflexible elements. The following example demonstrates this (Example 3.2.2):
<window orient="horizontal" xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"> <hbox> <button label="Yes" flex="1"/> <button label="No"/> <button label="I really don't know one way or the other"/> </hbox> </window>
The window will initially appear as in the image earlier. The first two buttons will be sized at a suitable default width and the third button will be larger because it has a longer label. The first button has been made flexible, and all three elements have been placed inside a box. The width of the box will be set to the initial total width of all three buttons (around 430 pixels in the image).
If you increase the width of the window, elements are checked to see whether they are flexible to fill the blank space that would appear. The button is the only flexible element, but it will not grow wider. This is because the box that the button is inside is not flexible. An inflexible element never changes size even when space is available, so the button can't grow either. Thus, the button won't get wider.
The solution is to make the box flexible also. Then, when you make the window wider, extra space will be available, so the box will grow to fill the extra space. Because the box is larger, more extra space will be created inside it, and the flexible button inside it will grow to fit the available space. This process repeats for as many nested boxes as necessary.
Setting Minimum and Maximum Sizes
You may want to allow an element to be flexible but constrain its size so that it cannot be larger than a certain size. Or, you may want to set a minimum size. You can set this by using four attributes:
- minwidth: This specifies the minimum width that the element can be.
- minheight: This specifies the minimum height that the element can be.
- maxwidth: This specifies the maximum width that the element can be.
- maxheight: This specifies the maximum height that the element can be.
The values are always measured in pixels. You can also use the corresponding CSS properties, min-width, min-height, max-width and max-height.
These properties are only useful for flexible elements. By setting a maximum height, for example, a stretchy button will only grow to a certain maximum height. You will still be able to resize the window beyond that point but the button will stop growing in size. The box the button is inside will also continue to grow, unless you set a maximum height on the box also.
If two buttons are equally flexible, normally both will share the amount of extra space. If one button has a maximum width, the second will still continue to grow and take all of the remaining space.
If a box has a maximum width or height, the children cannot grow larger than that maximum size. If a box has a minimum width or height, the children cannot shrink smaller than that minimum size. Here are some examples of setting widths and heights:
<button label="1" style="width: 100px;"/> <button label="2" style="width: 100em; height: 10px;"/> <button label="3" flex="1" style="min-width: 50px;"/> <button label="4" flex="1" style="min-height: 2ex; max-width: 100px"/> <textbox flex="1" style="max-width: 10em;"/> <description style="max-width: 50px">This is some boring but simple wrapping text.</description>
Example 1: the first button will be displayed with a width of 100 pixels (px means pixels). You need to add the unit or the width will be ignored.
Example 2: the second button will be displayed with a height of ten pixels and a width of 100 ems (an em is the size of a character in the current font).
Example 3: the third button is flexible so it will grow based on the size of the box the button is in. However, the button will never shrink to be less than 50 pixels. Other flexible components such as spacers will absorb the remaining space, breaking the flex ratio.
Example 4: the fourth button is flexible and will never have a height that is smaller than 2 ex (an ex is usually the height of the letter x in the current font) or wider than 100 pixels.
Example 5: the text input is flexible but will never grow to be larger than 10 ems. You will often want to use ems when specifying sizes with text in them. This unit is useful for textboxes so that the font can change and the textboxes would always be a suitable size, even if the font is very large.
Example 6: the description element is constrained to have a maximum width of 50 pixels. The text inside will wrap to the next line, after fifty pixels.
Let's add some of these styles to the find files dialog. We'll make it so that the textbox will resize to fit the entire window.
<textbox id="find-text" flex="1" style="min-width: 15em;"/>
Here, the text input has been made flexible. This way, it will grow if the user changes the size of the dialog. This is useful if the user wants to enter a long string of text. Also, a minimum width of 15 ems has been set so that the text box will always show at least 15 characters. If the user resizes the dialog to be very small, the text input will not shrink past 15 ems. It will be drawn as if it extends past the edge of the window. Notice in the image below that the text input has grown to extend to the full size of the window.
Let's say you have a box with two child elements, both of which are not flexible, but the box is flexible. For example (Example 3.2.3):
<box flex="1"> <button label="Happy"/> <button label="Sad"/> </box>
If you resize the window, the box will stretch to fit the window size. The buttons are not flexible, so they will not change their widths. The result is extra space that will appear on the right side of the window, inside the box. You may wish, however, for the extra space to appear on the left side instead, so that the buttons stay right aligned in the window.
You could accomplish this by placing a spacer inside the box, but that gets messy when you have to do it numerous times. A better way is to use an additional attribute, pack, on the box. This attribute indicates how to pack the child elements inside the box. For horizontally oriented boxes, it controls the horizontal positioning of the children. For vertically oriented boxes, it controls the vertical positioning of the children. You can use the following values:
- start: This positions elements at the left edge for horizontal boxes and at the top edge for vertical boxes. This is the default value.
- center: This centers the child elements in the box.
- end: This positions elements at the right edge for horizontal boxes and at the bottom edge for vertical boxes.
The pack attribute applies to the box containing the elements to be packed, not to the elements themselves.
We can change the earlier example to center the elements as follows (Example 3.2.4):
<box flex="1" pack="center"> <button label="Happy"/> <button label="Sad"/> </box>
Now, when the window is resized, the buttons center themselves horizontally. Compare this behavior to that of the previous example.
If you resize the window in the Happy-Sad example above horizontally, the box will grow in width. If you resize the window vertically, however, you will notice that the buttons grow in height. This is because flexibility is assumed by default in the other direction.
You can control this behavior with the align attribute. For horizontal boxes, it controls the position of the children vertically. For vertical boxes, it controls the position of the children horizontally. The possible values are similar to those of pack:
- start: This aligns elements along the top edge for horizontal boxes and along the left edge for vertical boxes.
- center: This centers the child elements in the box.
- end: This aligns elements along the bottom edge for horizontal boxes and along the right edge for vertical boxes.
- baseline: This aligns the elements so that the text lines up. This is only useful for horizontal boxes.
- stretch: This value, the default, causes the elements to grow to fit the size of the box, much like a flexible element, but in the opposite direction.
As with the pack attribute, the align attribute applies to the box containing the elements to be aligned, not to the elements themselves.
For example, the first box below will have its children stretch, because that is the default. The second box has an align attribute, so its children will be centered (Example 3.2.5):
<?xml version="1.0"?>
<?xml-stylesheet href="chrome://global/skin/" type="text/css"?>

<window id="yesno" title="Question" orient="horizontal"
        xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <hbox>
    <button label="Yes"/>
    <button label="No"/>
  </hbox>
  <hbox align="center">
    <button label="Maybe"/>
    <button label="Perhaps"/>
  </hbox>
</window>
You can also use the style properties -moz-box-pack and -moz-box-align instead of specifying attributes.
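As a sketch of the CSS-based alternative, the centering above could be expressed through style properties instead of attributes; the exact box contents here are illustrative assumptions:

```xml
<!-- Equivalent to pack="center" align="center", expressed as style properties. -->
<hbox style="-moz-box-pack: center; -moz-box-align: center;" flex="1">
  <button label="Maybe"/>
  <button label="Perhaps"/>
</hbox>
```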
Cropping Text and Buttons
You could potentially create a button element that contains a label that is larger than the maximum width of the button. Of course, a solution would be to increase the size of the button. However, buttons (and other elements with a label) have a special attribute called crop that allows you to specify how the text may be cropped if it is too big.
If the text is cropped, an ellipsis (...) will appear on the button where the text was removed. Four values are valid:
- left: The text is cropped on its left side.
- right: The text is cropped on its right side.
- center: The text is cropped in the middle.
- none: The text is not cropped. This is the default value.
This attribute is really only useful when a dialog has been designed to be useful at any size. The crop attribute can also be used with other elements that use the label attribute for labels. The following shows this attribute in use (Example 3.2.6):
<button label="Push Me Please!" crop="right" flex="1"/>
Notice how the right side of the button's text is cropped when the window is made smaller.
Next, a summary and some additional details of the box model are described.
Megalodon (Carcharocles megalodon), meaning "big tooth", is an extinct species of shark that lived approximately 23 to 3.6 million years ago (mya), during the Early Miocene to the Pliocene. It was formerly thought to be a member of the family Lamnidae, and a close relative of the great white shark (Carcharodon carcharias). However, presently there is near unanimous consensus that it belongs to the extinct family Otodontidae, which diverged from the ancestry of the great white shark during the Early Cretaceous. Its genus placement is still debated, authors placing it in either Carcharocles, Megaselachus, Otodus, or Procarcharodon. This is because transitional fossils have been found showing that Megalodon is the final chronospecies of a lineage of giant sharks originally of the genus Otodus which evolved during the Paleocene.
Model of megalodon jaws at the American Museum of Natural History
While regarded as one of the largest and most powerful predators to have ever lived, megalodon is known from fragmentary remains, and its appearance and maximum size are uncertain. Scientists differ on whether it would have more closely resembled a stockier version of the great white shark, the basking shark (Cetorhinus maximus), or the sand tiger shark (Carcharias taurus). Most estimates of megalodon's size extrapolate from teeth, with maximum length estimates up to 18 meters (59 ft) and average length estimates of 10.5 meters (34 ft). Estimates suggest their large jaws could exert a bite force of up to 110,000 to 180,000 newtons (25,000 to 40,000 lbf). Their teeth were thick and robust, built for grabbing prey and breaking bone.
Megalodon probably had a major impact on the structure of marine communities. The fossil record indicates that it had a cosmopolitan distribution. It probably targeted large prey, such as whales, seals, and sea turtles. Juveniles inhabited warm coastal waters and fed on fish and small whales. Unlike the great white, which attacks prey from the soft underside, megalodon probably used its strong jaws to break through the chest cavity and puncture the heart and lungs of its prey.
The animal faced competition from whale-eating cetaceans, such as Livyatan and other macroraptorial sperm whales, and possibly smaller ancestral killer whales. As the shark preferred warmer waters, it is thought that oceanic cooling associated with the onset of the ice ages, coupled with the lowering of sea levels and resulting loss of suitable nursery areas, may have also contributed to its decline. A reduction in the diversity of baleen whales and a shift in their distribution toward polar regions may have reduced megalodon's primary food source. More recently, evidence has come forward that competition from the modern great white shark may have also contributed to the extinction of megalodon, coupled with range fragmentation resulting in a gradual, asynchronous extinction as a result of cooling oceans around 3.6-4 million years ago, far earlier than previously assumed. The extinction of the shark appeared to affect other animals; for example, the size of baleen whales increased significantly after the shark had disappeared.
According to Renaissance accounts, gigantic triangular fossil teeth often found embedded in rocky formations were once believed to be the petrified tongues, or glossopetrae, of dragons and snakes. This interpretation was corrected in 1667 by Danish naturalist Nicolas Steno, who recognized them as shark teeth, and famously produced a depiction of a shark's head bearing such teeth. He described his findings in the book The Head of a Shark Dissected, which also contained an illustration of a megalodon tooth.
Swiss naturalist Louis Agassiz gave this shark its initial scientific name, Carcharodon megalodon, in his 1843 work Recherches sur les poissons fossiles, based on tooth remains. English paleontologist Edward Charlesworth in his 1837 paper used the name Carcharias megalodon, while citing Agassiz as the author, indicating that Agassiz described the species prior to 1843. English paleontologist Charles Davies Sherborn in 1928 listed an 1835 series of articles by Agassiz as the first scientific description of the shark. The specific name megalodon translates to "big tooth", from Ancient Greek μέγας (megas), "big, mighty", and οδόντος (odontos), "tooth". The teeth of megalodon are morphologically similar to those of the great white shark (Carcharodon carcharias), and on the basis of this observation, Agassiz assigned megalodon to the genus Carcharodon. The shark is also often informally dubbed the "giant white shark", the "megatooth shark", the "big tooth shark", or "Meg".:4
There was one apparent description of the shark in 1881 classifying it as Selache manzonii.
Relationship between megalodon and other sharks, including the great white shark (Carcharodon carcharias)
While the earliest megalodon remains have been reported from the Late Oligocene, around 28 million years ago (mya), there is disagreement as to when it appeared, with dates ranging to as young as 16 mya. It has been thought that megalodon became extinct around the end of the Pliocene, about 2.6 mya; claims of Pleistocene megalodon teeth, younger than 2.6 million years old, are considered unreliable. A more recent assessment moves the extinction date back to earlier in the Pliocene, 3.6 mya.
Megalodon is now considered to be a member of the family Otodontidae, genus Carcharocles, as opposed to its previous classification into Lamnidae, genus Carcharodon. Megalodon's classification into Carcharodon was due to dental similarity with the great white shark, but most authors currently believe that this is due to convergent evolution. In this model, the great white shark is more closely related to the extinct broad-toothed mako (Isurus hastalis) than to megalodon, as evidenced by more similar dentition in those two sharks; megalodon teeth have much finer serrations than great white shark teeth. The great white shark is more closely related to the mako shark (Isurus spp.), with a common ancestor around 4 mya. Proponents of the former model, wherein megalodon and the great white shark are more closely related, argue that the differences between their dentition are minute and obscure.:23–25
The genus Carcharocles currently contains four species: C. auriculatus, C. angustidens, C. chubutensis, and C. megalodon.:30–31 The evolution of this lineage is characterized by the increase of serrations, the widening of the crown, the development of a more triangular shape, and the disappearance of the lateral cusps.:28–31 The evolution in tooth morphology reflects a shift in predation tactics from a tearing-grasping bite to a cutting bite, likely reflecting a shift in prey choice from fish to cetaceans. Lateral cusplets were finally lost in a gradual process that took roughly 12 million years during the transition between C. chubutensis and C. megalodon. The genus was proposed by D. S. Jordan and H. Hannibal in 1923 to contain C. auriculatus. In the 1980s, megalodon was assigned to Carcharocles.:30 Before this, in 1960, the genus Procarcharodon was erected by French ichthyologist Edgard Casier, which included those four sharks and was considered separate from the great white shark. It is now considered a junior synonym of Carcharocles.:30 The genus Palaeocarcharodon was erected alongside Procarcharodon to represent the beginning of the lineage, and, in the model wherein megalodon and the great white shark are closely related, their last common ancestor. It is believed to be an evolutionary dead-end and unrelated to the Carcharocles sharks by authors who reject that model.:70
Another model of the evolution of this genus, also proposed by Casier in 1960, is that the direct ancestor of the Carcharocles is the shark Otodus obliquus, which lived from the Paleocene through the Miocene epochs, 60 mya to 13 mya. The genus Otodus is ultimately derived from Cretolamna, a shark from the Cretaceous period. In this model, O. obliquus evolved into O. aksuaticus, which evolved into C. auriculatus, and then into C. angustidens, and then into C. chubutensis, and then finally into C. megalodon.
Another model of the evolution of Carcharocles, proposed in 2001 by paleontologist Michael Benton, is that the three other species are actually a single species of shark that gradually changed over time between the Paleocene and the Pliocene, making it a chronospecies.:17 Some authors suggest that C. auriculatus, C. angustidens, and C. chubutensis should be classified as a single species in the genus Otodus, leaving C. megalodon the sole member of Carcharocles.
The genus Carcharocles may be invalid, and the shark may actually belong in the genus Otodus, making it Otodus megalodon. A 1974 study on Paleogene sharks by Henri Cappetta erected the subgenus Megaselachus, classifying the shark as Otodus (Megaselachus) megalodon, along with O. (M.) chubutensis. A 2006 review of Chondrichthyes elevated Megaselachus to genus, and classified the sharks as Megaselachus megalodon and M. chubutensis. The discovery of fossils assigned to the genus Megalolamna in 2016 led to a re-evaluation of Otodus, which concluded that it is paraphyletic, that is, it consists of a last common ancestor but it does not include all of its descendants. The inclusion of the Carcharocles sharks in Otodus would make it monophyletic, with the sister clade being Megalolamna.
One interpretation on how megalodon appeared was that it was a robust-looking shark, and may have had a similar build to the great white shark. The jaws may have been blunter and wider than the great white, and the fins would have also been similar in shape, though thicker due to its size. It may have had a pig-eyed appearance, in that it had small, deep-set eyes.:64–65
Another interpretation is that megalodon bore a similarity to the whale shark (Rhincodon typus) or the basking shark (Cetorhinus maximus). The tail fin would have been crescent-shaped, the anal fin and second dorsal fin would have been small, and there would have been a caudal keel present on either side of the tail fin (on the caudal peduncle). This build is common in other large aquatic animals, such as whales, tuna, and other sharks, in order to reduce drag while swimming. The head shape can vary between species as most of the drag-reducing adaptations are toward the tail-end of the animal.:35–36
Since Carcharocles is derived from Otodus, and the two had teeth that bear a close similarity to those of the sand tiger shark (Carcharias taurus), megalodon may have had a build more similar to the sand tiger shark than to other sharks. This is unlikely since the sand tiger shark is a carangiform swimmer which requires faster movement of the tail for propulsion through the water than the great white shark, a thunniform swimmer.:35–36
Due to fragmentary remains, there have been many contradictory size estimates for megalodon, as they can only be drawn from fossil teeth and vertebrae.:87 Also because of this, the great white shark is the basis of its reconstruction and size estimation,:57 as it is regarded as the best analogue to megalodon. Using length estimates extrapolated from 544 teeth found throughout geological time and geography, including adults and juveniles, a 2015 study estimated an average length of 10.5 meters (34 ft). In comparison, the maximum recorded size of the great white shark is 6.1 meters (20 ft), and the whale shark (the largest living fish) can reach 18.8 m (62 ft). It is possible that different populations of megalodon around the globe had different body sizes and behaviors due to different ecological pressures. If it did attain a size of over 16 meters (52 ft), it would have been the largest known fish that has ever lived, surpassing the Jurassic fish Leedsichthys.
Mature male megalodon may have had a body mass of 12.6 to 33.9 metric tons (13.9 to 37.4 short tons), and mature females may have been 27.4 to 59.4 metric tons (30.2 to 65.5 short tons), given that males could range in length from 10.5 to 14.3 meters (34 to 47 ft) and females 13.3 to 17 meters (44 to 56 ft).:61 A 2015 study linking shark size and typical swimming speed estimated that megalodon would have typically swum at 18 kilometers per hour (11 mph)–given that its body mass was typically 48 metric tons (53 short tons)–which is consistent with other aquatic creatures of its size, such as the fin whale (Balaenoptera physalus) which typically cruises at speeds of 14.5 to 21.5 km/h (9.0 to 13.4 mph).
Its large size may have been due to climatic factors and the abundance of large prey items, and it may have also been influenced by the evolution of regional endothermy (mesothermy) which would have increased its metabolic rate and swimming speed. Since the otodontid sharks are considered to have been ectotherms, and megalodon was a close relative to them, megalodon may have also been ectothermic. Contrary to this, the largest contemporary ectothermic sharks, such as the whale shark, are filter feeders, implying some metabolic constraints with a predatory lifestyle. That is to say, it is unlikely that megalodon was ectothermic.
Gordon Hubbell from Gainesville, Florida, possesses an upper anterior megalodon tooth whose maximum height is 18.4 centimeters (7.25 in), one of the largest known tooth specimens from the shark. In addition, a 2.7-by-3.4-meter (9 by 11 ft) megalodon jaw reconstruction developed by fossil hunter Vito Bertucci contains a tooth whose maximum height is reportedly over 18 centimeters (7 in).
The first attempt to reconstruct the jaw of megalodon was made by Bashford Dean in 1909, displayed at the American Museum of Natural History. From the dimensions of this jaw reconstruction, it was hypothesized that megalodon could have approached 30 meters (98 ft) in length. Dean had overestimated the size of the cartilage on both jaws, causing it to be too tall.
In 1973, John E. Randall, an ichthyologist, used the enamel height (the vertical distance of the blade from the base of the enamel portion of the tooth to its tip) to measure the length of the shark, yielding a maximum length of about 13 meters (43 ft). However, tooth enamel height does not necessarily increase in proportion to the animal's total length.:99
In 1996, shark researchers Michael D. Gottfried, Leonard Compagno, and S. Curtis Bowman proposed a linear relationship between a shark's total length and the height of the largest upper anterior tooth. The proposed relationship is: total length in meters = 0.096 × [UA maximum height (mm)] − 0.22.:60 They asserted that C. megalodon could have reached a maximum of 20.3 meters (67 ft) in total length.
In 2002, shark researcher Clifford Jeremiah proposed that total length was proportional to the root width of an upper anterior tooth. He claimed that for every 1 centimeter (0.39 in) of root width, there are approximately 1.4 meters (4.6 ft) of shark length. Jeremiah pointed out that the jaw perimeter of a shark is directly proportional to its total length, with the width of the roots of the largest teeth being a tool for estimating jaw perimeter. The largest tooth in Jeremiah's possession had a root width of about 12 centimeters (4.7 in), which yielded 16.5 meters (54 ft) in total length.:88
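The two tooth-based regressions above amount to simple linear formulas. A minimal sketch in Python (the function names are mine; the coefficients are the ones quoted in this section):

```python
def length_from_tooth_height(ua_height_mm):
    """Gottfried et al. (1996): total length (m) from upper anterior tooth height (mm)."""
    return 0.096 * ua_height_mm - 0.22

def length_from_root_width(root_width_cm):
    """Jeremiah (2002): roughly 1.4 m of total length per 1 cm of upper anterior root width."""
    return 1.4 * root_width_cm

# Hubbell's 18.4 cm (184 mm) tooth under the Gottfried regression:
print(round(length_from_tooth_height(184), 1))  # ~17.4 m
# A root width of 12 cm under Jeremiah's rule of thumb:
print(round(length_from_root_width(12), 1))     # ~16.8 m
```

Note that 1.4 × 12 gives 16.8 m, slightly above the 16.5 m quoted for Jeremiah's largest tooth, which is consistent with its root width being described only as "about" 12 centimeters.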
In 2002, paleontologist Kenshu Shimada of DePaul University proposed a linear relationship between tooth crown height and total length after conducting anatomical analysis of several specimens, allowing any sized tooth to be used. Shimada stated that the previously proposed methods were based on a less-reliable evaluation of the dental homology between megalodon and the great white shark, and that the growth rate between the crown and root is not isometric, which he considered in his model. Using this model, the upper anterior tooth possessed by Gottfried and colleagues corresponded to a total length of 15 meters (49 ft). Among several specimens found in the Gatún Formation of Panama, one upper lateral tooth was used by other researchers to obtain a total length estimate of 17.9 meters (59 ft) using this method.
In 2019, Shimada revisited the size of megalodon and discouraged using non-anterior teeth for estimations, noting that the exact position of isolated non-anterior teeth is difficult to identify. Shimada stated that the maximum total length estimates, based on upper anterior teeth that are available in museums, are 14.2 and 15.3 meters (47 and 50 ft), depending on the estimation method used.
Teeth and bite force
The most common fossils of megalodon are its teeth. Diagnostic characteristics include a triangular shape, robust structure, large size, fine serrations, a lack of lateral denticles, and a visible V-shaped neck (where the root meets the crown).:55 The tooth met the jaw at a steep angle, similar to the great white shark. The tooth was anchored by connective tissue fibers, and the roughness of the base may have added to mechanical strength. The lingual side of the tooth, the part facing the tongue, was convex; and the labial side, the other side of the tooth, was slightly convex or flat. The anterior teeth were almost perpendicular to the jaw and symmetrical, whereas the posterior teeth were slanted and asymmetrical.
Megalodon teeth can measure over 180 millimeters (7.1 in) in slant height (diagonal length) and are the largest of any known shark species.:33 In 1989, a nearly complete set of megalodon teeth was discovered in Saitama, Japan. Another nearly complete associated megalodon dentition was excavated from the Yorktown Formation in the United States, and served as the basis of a jaw reconstruction of megalodon at the National Museum of Natural History (USNM). Based on these discoveries, an artificial dental formula was put together for megalodon in 1996.:55
Megalodon had four kinds of teeth in its jaws: anterior, intermediate, lateral, and posterior. Megalodon's intermediate tooth technically appears to be an upper anterior and is termed "A3" because it is fairly symmetrical and does not point mesially (toward the midline of the jaws where the left and right jaws meet). Megalodon had a very robust dentition,:20–21 and had over 250 teeth in its jaws, spanning 5 rows.:iv It is possible that large megalodon individuals had jaws spanning roughly 2 meters (6.6 ft) across.:129 The teeth were also serrated, which would have improved efficiency in cutting through flesh or bone.:1 The shark may have been able to open its mouth to a 75° angle, though a reconstruction at the USNM approximates a 100° angle.:63
In 2008, a team of scientists led by S. Wroe conducted an experiment to determine the bite force of the great white shark, using a 2.5-meter (8.2 ft) long specimen, and then isometrically scaled the results for its maximum size and the conservative minimum and maximum body mass of megalodon. They placed the bite force of the latter between 108,514 to 182,201 newtons (24,395 to 40,960 lbf) in a posterior bite, compared to the 18,216 newtons (4,095 lbf) bite force for the largest confirmed great white shark, and 7,400 newtons (1,700 lbf) for the placoderm fish Dunkleosteus. In addition, Wroe and colleagues pointed out that sharks shake sideways while feeding, amplifying the force generated, which would probably have caused the total force experienced by prey to be higher than the estimate.
Megalodon is represented in the fossil record by teeth, vertebral centra, and coprolites.:57 As with all sharks, the skeleton of megalodon was formed of cartilage rather than bone; consequently most fossil specimens are poorly preserved. To support its large dentition, the jaws of megalodon would have been more massive, stouter, and more strongly developed than those of the great white, which possesses a comparatively gracile dentition. Its chondrocranium, the cartilaginous skull, would have had a blockier and more robust appearance than that of the great white. Its fins were proportional to its larger size.:64–65
Some fossil vertebrae have been found. The most notable example is a partially preserved vertebral column of a single specimen, excavated in the Antwerp Basin, Belgium, in 1926. It comprises 150 vertebral centra, with the centra ranging from 55 millimeters (2.2 in) to 155 millimeters (6 in) in diameter. The shark's vertebrae may have gotten much bigger, and scrutiny of the specimen revealed that it had a higher vertebral count than specimens of any known shark, possibly over 200 centra; only the great white approached it.:63–65 Another partially preserved vertebral column of a megalodon was excavated from the Gram Formation in Denmark in 1983, which comprises 20 vertebral centra, with the centra ranging from 100 millimeters (4 in) to 230 millimeters (9 in) in diameter.
The coprolite remains of megalodon are spiral-shaped, indicating that the shark may have had a spiral valve, a corkscrew-shaped portion of the lower intestines, similar to extant lamniform sharks. Miocene coprolite remains were discovered in Beaufort County, South Carolina, with one measuring 14 cm (5.5 in).
Gottfried and colleagues reconstructed the entire skeleton of megalodon, which was later put on display at the Calvert Marine Museum in the United States and the Iziko South African Museum.:56 This reconstruction is 11.3 meters (37 ft) long and represents a mature male,:61 based on the ontogenetic changes a great white shark experiences over the course of its life.:65
Range and habitat
Megalodon had a cosmopolitan distribution; its fossils have been excavated from many parts of the world, including Europe, Africa, the Americas, and Australia.:67 It most commonly occurred in subtropical to temperate latitudes.:78 It has been found at latitudes up to 55° N; its inferred tolerated temperature range was 1–24 °C (34–75 °F).[note 1] It arguably had the capacity to endure such low temperatures due to mesothermy, the physiological capability of large sharks to conserve metabolic heat by maintaining a higher body temperature than the surrounding water.
Megalodon inhabited a wide range of marine environments (i.e., shallow coastal waters, areas of coastal upwelling, swampy coastal lagoons, sandy littorals, and offshore deep water environments), and exhibited a transient lifestyle. Adult megalodon were not abundant in shallow water environments, and mostly inhabited offshore areas. Megalodon may have moved between coastal and oceanic waters, particularly in different stages of its life cycle.:33
Fossil remains show a trend for specimens to be larger on average in the southern hemisphere than in the northern, with mean lengths of 11.6 and 9.6 meters (38 and 31 ft), respectively; and also larger in the Pacific than the Atlantic, with mean lengths of 10.9 and 9.5 meters (36 and 31 ft) respectively. They do not suggest any trend of changing body size with absolute latitude, or of change in size over time (although the Carcharocles lineage in general is thought to display a trend of increasing size over time). The overall modal length has been estimated at 10.5 meters (34 ft), with the length distribution skewed towards larger individuals, suggesting an ecological or competitive advantage for larger body size.
Locations of fossils
Though sharks are generally opportunistic feeders, megalodon's great size, high-speed swimming capability, and powerful jaws, coupled with an impressive feeding apparatus, made it an apex predator capable of consuming a broad spectrum of animals. It was probably one of the most powerful predators to have existed.:71–75 A study focusing on calcium isotopes of extinct and extant elasmobranch sharks and rays revealed that megalodon fed at a higher trophic level than the contemporaneous great white shark. That is to say it was higher up in the food chain.
Fossil evidence indicates that megalodon preyed upon many cetacean species, such as dolphins, small whales, cetotheres, squalodontids (shark toothed dolphins), sperm whales, bowhead whales, and rorquals. In addition to this, they also targeted seals, sirenians, and sea turtles. The shark was an opportunist and piscivorous, and it would have also gone after smaller fish and other sharks. Many whale bones have been found with deep gashes most likely made by their teeth.:75 Various excavations have revealed megalodon teeth lying close to the chewed remains of whales,:75 and sometimes in direct association with them.
The feeding ecology of megalodon appears to have varied with age and between sites, like the modern great white. It is plausible that the adult megalodon population off the coast of Peru targeted primarily cetothere whales 2.5 to 7 meters (8.2 to 23 ft) in length and other prey smaller than itself, rather than large whales in the same size class as themselves. Meanwhile, juveniles likely had a diet that consisted more of fish.
Megalodon faced a highly competitive environment. Its position at the top of the food chain probably had a significant impact on the structuring of marine communities. Fossil evidence indicates a correlation between megalodon and the emergence and diversification of cetaceans and other marine mammals.:78 Juvenile megalodon preferred habitats where small cetaceans were abundant, and adult megalodon preferred habitats where large cetaceans were abundant. Such preferences may have developed shortly after they appeared in the Oligocene.:74–75
Megalodon was contemporaneous with whale-eating toothed whales (particularly macroraptorial sperm whales and squalodontids), which were also probably among the era's apex predators, and provided competition. Some attained gigantic sizes, such as Livyatan, estimated at 13.5 to 17.5 meters (44 to 57 ft). Fossilized teeth of an undetermined species of such physeteroids from Lee Creek Mine, North Carolina, indicate it had a maximum body length of 8–10 m and a maximum lifespan of about 25 years. This is very different from similarly sized modern killer whales that live to 65 years, suggesting that unlike the latter, which are apex predators, these physeteroids were subject to predation from larger species such as megalodon or Livyatan. By the Late Miocene, around 11 mya, macroraptorials experienced a significant decline in abundance and diversity. Other species may have filled this niche in the Pliocene, such as the fossil killer whale Orcinus citoniensis, which may have been a pack predator and targeted prey larger than itself, but this inference is disputed, and it was probably a generalist predator rather than a marine mammal specialist.
Megalodon may have subjected contemporaneous white sharks to competitive exclusion, as the fossil records indicate that other shark species avoided regions it inhabited by mainly keeping to the colder waters of the time.:77 In areas where their ranges seemed to have overlapped, such as in Pliocene Baja California, it is possible that megalodon and the great white shark occupied the area at different times of the year while following different migratory prey.:77 Megalodon probably also had a tendency for cannibalism, much like contemporary sharks.
Sharks often employ complex hunting strategies to engage large prey animals. Great white shark hunting strategies may be similar to how megalodon hunted its large prey. However, megalodon bite marks on whale fossils suggest that it employed different hunting strategies against large prey than the great white shark.
One particular specimen–the remains of a 9-meter (30 ft) long undescribed Miocene baleen whale–provided the first opportunity to quantitatively analyze its attack behavior. Unlike great whites which target the underbelly of their prey, megalodon probably targeted the heart and lungs, with their thick teeth adapted for biting through tough bone, as indicated by bite marks inflicted to the rib cage and other tough bony areas on whale remains. Furthermore, attack patterns could differ for prey of different sizes. Fossil remains of some small cetaceans, for example cetotheres, suggest that they were rammed with great force from below before being killed and eaten, based on compression fractures.
During the Pliocene, larger cetaceans appeared. Megalodon apparently further refined its hunting strategies to cope with these large whales. Numerous fossilized flipper bones and tail vertebrae of large whales from the Pliocene have been found with megalodon bite marks, which suggests that megalodon would immobilize a large whale before killing and feeding on it.
Megalodon, like contemporaneous sharks, made use of nursery areas to birth their young in, specifically warm-water coastal environments with large amounts of food and protection from predators. Nursery sites were identified in the Gatún Formation of Panama, the Calvert Formation of Maryland, Banco de Concepción in the Canary Islands, and the Bone Valley Formation of Florida. Given that all extant lamniform sharks give birth to live young, this is believed to have been true of megalodon also. Infant megalodons were around 3.5 meters (11 ft) at their smallest,:61 and the pups were vulnerable to predation by other shark species, such as the great hammerhead shark (Sphyrna mokarran) and the snaggletooth shark (Hemipristis serra). Their dietary preferences display an ontogenetic shift::65 Young megalodon commonly preyed on fish, sea turtles, dugongs,:129 and small cetaceans; mature megalodon moved to off-shore areas and consumed large cetaceans.:74–75
An exceptional case in the fossil record suggests that juvenile megalodon may have occasionally attacked much larger balaenopterid whales. Three tooth marks, apparently from a 4-to-7-meter (13 to 23 ft) long Pliocene shark, were found on a rib from an ancestral blue or humpback whale; the rib showed evidence of subsequent healing, and the bite is suspected to have been inflicted by a juvenile megalodon.
The Earth experienced a number of changes during the time period megalodon existed which affected marine life. A cooling trend starting in the Oligocene 35 mya ultimately led to glaciation at the poles. Geological events changed currents and precipitation; among these were the closure of the Central American Seaway and changes in the Tethys Ocean, contributing to the cooling of the oceans. The stalling of the Gulf Stream prevented nutrient-rich water from reaching major marine ecosystems, which may have negatively affected megalodon's food sources. The largest fluctuation of sea levels in the Cenozoic era occurred in the Plio-Pleistocene, between around 5 million and 12 thousand years ago, due to the expansion of glaciers at the poles; this negatively impacted coastal environments and may have contributed to megalodon's extinction along with those of several other marine megafaunal species. These oceanographic changes, in particular the sea level drops, may have restricted many of the suitable shallow warm-water nursery sites for megalodon, hindering reproduction. Nursery areas are pivotal for the survival of many shark species, in part because they protect juveniles from predation.
As its range did not apparently extend into colder waters, megalodon may not have been able to retain a significant amount of metabolic heat, so its range was restricted to shrinking warmer waters. Fossil evidence confirms the absence of megalodon in regions around the world where water temperatures had significantly declined during the Pliocene. However, an analysis of the distribution of megalodon over time suggests that temperature change did not play a direct role in its extinction. Its distribution during the Miocene and Pliocene did not correlate with warming and cooling trends; while abundance and distribution declined during the Pliocene, megalodon did show a capacity to inhabit colder latitudes. It was found in locations with a mean temperature ranging from 12 to 27 °C (54 to 81 °F), with a total range of 1 to 33 °C (34 to 91 °F), indicating that the global extent of suitable habitat should not have been greatly affected by the temperature changes that occurred. This is consistent with evidence that it was a mesotherm.
Marine mammals attained their greatest diversity during the Miocene; baleen whales, for example, had over 20 recognized Miocene genera compared to only six extant genera. Such diversity presented an ideal setting to support a super-predator such as megalodon. By the end of the Miocene, many species of mysticetes had gone extinct; surviving species may have been faster swimmers and thus more elusive prey. Furthermore, after the closure of the Central American Seaway, tropical whales decreased in diversity and abundance. The extinction of megalodon correlates with the decline of many small mysticete lineages, and it is possible that it was quite dependent on them as a food source. Additionally, a marine megafauna extinction during the Pliocene was discovered to have eliminated 36% of all large marine species, including 55% of marine mammals, 35% of seabirds, 9% of sharks, and 43% of sea turtles. The extinction was selective for endotherms and mesotherms relative to poikilotherms, implying causation by a decreased food supply and thus consistent with megalodon being mesothermic. Megalodon may have been too large to sustain itself on the declining marine food resources. The cooling of the oceans during the Pliocene might have restricted the access of megalodon to the polar regions, depriving it of the large whales which had migrated there.
Competition from other predators of marine mammals, such as macropredatory sperm whales which appeared in the Miocene, and killer whales and great white sharks in the Pliocene, may have also contributed to the decline and extinction of megalodon. Fossil records indicate that the new whale-eating cetaceans commonly occurred at high latitudes during the Pliocene, indicating that they could cope with the increasingly prevalent cold water temperatures; but they also occurred in the tropics (e.g., Orcinus sp. in South Africa). The largest macropredatory sperm whales such as Livyatan are best known from the Miocene, but persisted into the Pliocene, while others, such as Hoplocetus and Scaldicetus, survived until the early Pleistocene. These may have occupied a niche similar to that of orcas before eventually being replaced by them. Recent evidence and more accurate dating methods suggest that C. megalodon may have died out earlier than surmised; fossils examined in North Pacific deposits imply the sharks became extinct around 3.6–4 million years ago. This is hypothesized to have been due both to cooling surface temperatures, resulting in range fragmentation for C. megalodon, and to competition for prey with the newly evolved modern great white shark. Many of the species that served as megalodon's prey survived for significantly longer, contrary to a previous theory that all were swept away by a single marine mass extinction.
The extinction of megalodon set the stage for further changes in marine communities. The average body size of baleen whales increased significantly after its disappearance, although possibly due to other, climate-related, causes. Conversely, the increase in baleen whale size may have contributed to the extinction of megalodon, as megalodon may have preferred to go after smaller whales; bite marks on large whale species may have come from scavenging sharks. Megalodon may have simply become coextinct with smaller whale species, such as Piscobalaena nana. The extinction of megalodon had a positive impact on other apex predators of the time, such as the great white shark, which in some cases spread to regions where megalodon became absent.
In popular culture
Megalodon has been portrayed in several works of fiction, including films and novels, and continues to be a popular subject for fiction involving sea monsters. Three individual megalodon, two adults and one juvenile, were portrayed in BBC's 2003 TV documentary series Sea Monsters, where it is defined as a "hazard" of the era. The History Channel's Jurassic Fight Club portrays a megalodon attacking a Brygmophyseter sperm whale in Japan. Several films depict megalodon, such as Shark Attack 3: Megalodon and the Mega Shark series (for instance Mega Shark Versus Giant Octopus and Mega Shark Versus Crocosaurus). The shark appears in the 2017 video game Ark: Survival Evolved. Some stories, such as Jim Shepard's Tedford and the Megalodon, portray a rediscovery of the shark. Steve Alten's Meg: A Novel of Deep Terror portrays the shark as having preyed on dinosaurs, with its prologue and cover artwork depicting megalodon killing a Tyrannosaurus in the sea. The sequels to the book also star megalodon: The Trench, Meg: Primal Waters, Meg: Hell's Aquarium, Meg: Nightstalkers, Meg: Generations, and Meg: Origins, and a film adaptation entitled The Meg was released on 10 August 2018.
Animal Planet's pseudo-documentary Mermaids: The Body Found included an encounter 1.6 mya between a pod of mermaids and a megalodon. Later, in August 2013, the Discovery Channel opened its annual Shark Week series with another film for television, Megalodon: The Monster Shark Lives, a controversial docufiction about the creature that presented alleged evidence in order to suggest that megalodon was still alive. This program received criticism for being completely fictional; for example, all of the supposed scientists depicted were paid actors. In 2014, Discovery re-aired The Monster Shark Lives, along with a new one-hour program, Megalodon: The New Evidence, and an additional fictionalized program entitled Shark of Darkness: Wrath of Submarine, resulting in further backlash from media sources and the scientific community.
Supposedly fresh megalodon teeth, such as those recovered by HMS Challenger in 1873 and erroneously dated to around 11,000 to 24,000 years old, are probably teeth that were well preserved by a thick mineral-crust precipitate of manganese dioxide, which lowered their decomposition rate and let them retain a white color during fossilization. Fossil megalodon teeth can vary in color from off-white to dark browns and greys, and some fossil teeth may have been redeposited into a younger stratum. Claims that megalodon could remain elusive in the depths, similar to the megamouth shark discovered in 1976, are unlikely to be true, as the shark lived in warm coastal waters and probably could not survive in the cold and nutrient-poor deep-sea environment.
- Carbonated bioapatite from a megalodon tooth (of unknown source location) dated to 5.75 ± 0.9 Ma in age has been analyzed for isotope ratios of oxygen (18O/16O) and carbon (13C/12C), using a carbonate clumped-isotope thermometer methodology to yield an estimate of the ambient temperature in that individual's environment of 19 ± 4 °C.
Speed of sound
Sound measurements and their usual symbols:
- Sound pressure: p, SPL
- Particle velocity: v, SVL
- Sound intensity: I, SIL
- Sound power: P, SWL
- Sound energy density: w
- Sound exposure: E, SEL
- Speed of sound: c
The speed of sound is the distance travelled per unit time by a sound wave as it propagates through an elastic medium. In dry air at 20 °C (68 °F), the speed of sound is 343.2 metres per second (1,126 ft/s; 1,236 km/h; 768 mph; 667 kn), or one kilometre in 2.914 s and one mile in 4.689 s.
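The quoted travel times follow directly from dividing distance by speed; a minimal Python sketch using the figures above (the constant name and function are illustrative, not from any standard library):

```python
# Time for sound to cover a distance in dry air at 20 °C (343.2 m/s).
SPEED_OF_SOUND = 343.2  # m/s

def travel_time(distance_m: float) -> float:
    """Seconds for a sound wave to travel distance_m metres."""
    return distance_m / SPEED_OF_SOUND

print(round(travel_time(1000.0), 3))    # 1 km -> 2.914 s
print(round(travel_time(1609.344), 3))  # 1 statute mile -> 4.689 s
```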
The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior.
In common everyday speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: sound travels most slowly in gases; it travels faster in liquids; and faster still in solids. For example (as noted above), sound travels at 343.2 m/s in air; it travels at 1,484 m/s in water (4.3 times as fast as in air); and at 5,120 m/s in iron. In an exceptionally stiff material such as diamond, sound travels at 12,000 m/s, which is around the maximum speed that sound will travel under normal conditions.
Sound waves in solids are composed of compression waves (just as in gases and liquids), but there is also a different type of sound wave called a shear wave, which occurs only in solids. These different types of waves in solids usually travel at different speeds, as exhibited in seismology. The speed of a compression sound wave in solids is determined by the medium's compressibility, shear modulus and density. The speed of shear waves is determined only by the solid material's shear modulus and density.
In fluid dynamics, the speed of sound in a fluid medium (gas or liquid) is used as a relative measure for the speed of an object moving through the medium. The ratio of the speed of an object to the speed of sound in the fluid is called the object's Mach number. Objects moving at speeds greater than Mach 1 are said to be traveling at supersonic speeds.
- 1 History
- 2 Basic concept
- 3 Equations
- 4 Dependence on the properties of the medium
- 5 Altitude variation and implications for atmospheric acoustics
- 6 Practical formula for dry air
- 7 Details
- 8 Effect of frequency and gas composition
- 9 Mach number
- 10 Experimental methods
- 11 Non-gaseous media
- 12 Gradients
- 13 See also
- 14 References
- 15 External links
Sir Isaac Newton computed the speed of sound in air as 979 feet per second (298 m/s), which is too low by about 15%; he had neglected the effect of fluctuating temperature, an omission later rectified by Laplace.
During the 17th century, there were several attempts to measure the speed of sound accurately, including attempts by Marin Mersenne in 1630 (1,380 Parisian feet per second), Pierre Gassendi in 1635 (1,473 Parisian feet per second) and Robert Boyle (1,125 Parisian feet per second).
In 1709, the Reverend William Derham, Rector of Upminster, published a more accurate measure of the speed of sound, at 1,072 Parisian feet per second. Derham used a telescope from the tower of the church of St Laurence, Upminster to observe the flash of a distant shotgun being fired, and then measured the time until he heard the gunshot with a half second pendulum. Measurements were made of gunshots from a number of local landmarks, including North Ockendon church. The distance was known by triangulation, and thus the speed at which the sound had travelled was calculated.
The transmission of sound can be illustrated by using a model consisting of an array of balls interconnected by springs. For real material the balls represent molecules and the springs represent the bonds between them. Sound passes through the model by compressing and expanding the springs, transmitting energy to neighbouring balls, which transmit energy to their springs, and so on. The speed of sound through the model depends on the stiffness of the springs, and the mass of the balls. As long as the spacing of the balls remains constant, stiffer springs transmit energy more quickly, and more massive balls transmit energy more slowly. Effects like dispersion and reflection can also be understood using this model.
In a real material, the stiffness of the springs is called the elastic modulus, and the mass corresponds to the density. All other things being equal (ceteris paribus), sound will travel more slowly in spongy materials, and faster in stiffer ones. For instance, sound will travel 1.59 times faster in nickel than in bronze, due to the greater stiffness of nickel at about the same density. Similarly, sound travels about 1.41 times faster in light hydrogen (protium) gas than in heavy hydrogen (deuterium) gas, since deuterium has similar properties but twice the density. At the same time, "compression-type" sound will travel faster in solids than in liquids, and faster in liquids than in gases, because the solids are more difficult to compress than liquids, while liquids in turn are more difficult to compress than gases.
Some textbooks mistakenly state that the speed of sound increases with increasing density. This is usually illustrated by presenting data for three materials, such as air, water and steel, which also have vastly different compressibilities which more than make up for the density differences. An illustrative example of the two effects is that sound travels only 4.3 times faster in water than air, despite enormous differences in compressibility of the two media. The reason is that the larger density of water, which works to slow sound in water relative to air, nearly makes up for the compressibility differences in the two media.
Compression and shear waves
In a gas or liquid, sound consists of compression waves. In solids, waves propagate as two different types. A longitudinal wave is associated with compression and decompression in the direction of travel, and is the same process in gases and liquids, with an analogous compression-type wave in solids. Only compression waves are supported in gases and liquids. An additional type of wave, the transverse wave, also called a shear wave, occurs only in solids because only solids support elastic deformations. It is due to elastic deformation of the medium perpendicular to the direction of wave travel; the direction of shear-deformation is called the "polarization" of this type of wave. In general, transverse waves occur as a pair of orthogonal polarizations.
These different waves (compression waves and the different polarizations of shear waves) may have different speeds at the same frequency. Therefore, they arrive at an observer at different times, an extreme example being an earthquake, where sharp compression waves arrive first, and rocking transverse waves seconds later.
The speed of a compression wave in fluid is determined by the medium's compressibility and density. In solids, the compression waves are analogous to those in fluids, depending on compressibility, density, and the additional factor of shear modulus. The speed of shear waves, which can occur only in solids, is determined simply by the solid material's shear modulus and density.
The speed of sound in mathematical notation is conventionally represented by c, from the Latin celeritas meaning "velocity".
In general, the speed of sound c is given by the Newton–Laplace equation:

c = √(Ks/ρ)

where:
- Ks is a coefficient of stiffness, the isentropic bulk modulus (or the modulus of bulk elasticity for gases);
- ρ is the density.
Thus the speed of sound increases with the stiffness (the resistance of an elastic body to deformation by an applied force) of the material, and decreases with the density. For ideal gases, the bulk modulus K is simply the gas pressure multiplied by the dimensionless adiabatic index, which is about 1.4 for air under normal conditions of pressure and temperature. More generally, for an arbitrary equation of state,

c = √((∂p/∂ρ)s)

where:
- p is the pressure;
- ρ is the density, and the derivative is taken isentropically, that is, at constant entropy s.
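As a quick numerical check of the Newton–Laplace relation, the minimal Python sketch below recovers a speed close to the 1,484 m/s quoted for water above; the bulk modulus and density used are representative handbook values assumed here for illustration:

```python
import math

def speed_of_sound(K, rho):
    """Newton-Laplace equation: c = sqrt(K_s / rho), with the isentropic
    bulk modulus K in pascals and the density rho in kg/m^3."""
    return math.sqrt(K / rho)

# Representative handbook values for water at 20 degC (assumed for
# illustration): bulk modulus ~2.2 GPa, density ~998 kg/m^3.
c_water = speed_of_sound(2.2e9, 998)
print(round(c_water))  # close to the ~1,484 m/s quoted for water
```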
In a non-dispersive medium, the speed of sound is independent of sound frequency, so the speeds of energy transport and sound propagation are the same for all frequencies. Air, a mixture of oxygen and nitrogen, constitutes a non-dispersive medium. However, air does contain a small amount of CO2, which is a dispersive medium, and this causes dispersion in air at ultrasonic frequencies (> 28 kHz).
In a dispersive medium, the speed of sound is a function of sound frequency, through the dispersion relation. Each frequency component propagates at its own speed, called the phase velocity, while the energy of the disturbance propagates at the group velocity. The same phenomenon occurs with light waves; see optical dispersion for a description.
Dependence on the properties of the medium
The speed of sound is variable and depends on the properties of the substance through which the wave is travelling. In solids, the speed of transverse (or shear) waves depends on the shear deformation under shear stress (called the shear modulus), and the density of the medium. Longitudinal (or compression) waves in solids depend on the same two factors with the addition of a dependence on compressibility.
In fluids, only the medium's compressibility and density are the important factors, since fluids do not transmit shear stresses. In heterogeneous fluids, such as a liquid filled with gas bubbles, the density of the liquid and the compressibility of the gas affect the speed of sound in an additive manner, as demonstrated in the hot chocolate effect.
In gases, adiabatic compressibility is directly related to pressure through the heat capacity ratio (adiabatic index), while pressure and density are inversely related to the temperature and molecular weight, thus making only the completely independent properties of temperature and molecular structure important (heat capacity ratio may be determined by temperature and molecular structure, but simple molecular weight is not sufficient to determine it).
In low molecular weight gases such as helium, sound propagates faster as compared to heavier gases such as xenon. For monatomic gases, the speed of sound is about 75% of the mean speed that the atoms move in that gas.
For a given ideal gas the molecular composition is fixed, and thus the speed of sound depends only on its temperature. At a constant temperature, the gas pressure has no effect on the speed of sound, since the density will increase, and since pressure and density (also proportional to pressure) have equal but opposite effects on the speed of sound, and the two contributions cancel out exactly. In a similar way, compression waves in solids depend both on compressibility and density—just as in liquids—but in gases the density contributes to the compressibility in such a way that some part of each attribute factors out, leaving only a dependence on temperature, molecular weight, and heat capacity ratio which can be independently derived from temperature and molecular composition (see derivations below). Thus, for a single given gas (assuming the molecular weight does not change) and over a small temperature range (for which the heat capacity is relatively constant), the speed of sound becomes dependent on only the temperature of the gas.
In the non-ideal gas behavior regime, for which the van der Waals gas equation would be used, the proportionality is not exact, and there is a slight dependence of sound velocity on the gas pressure.
Humidity has a small but measurable effect on the speed of sound (causing it to increase by about 0.1%–0.6%), because oxygen and nitrogen molecules of the air are replaced by lighter molecules of water. This is a simple mixing effect.
Altitude variation and implications for atmospheric acoustics
In the Earth's atmosphere, the chief factor affecting the speed of sound is the temperature. For a given ideal gas with constant heat capacity and composition, the speed of sound is dependent solely upon temperature; see Details below. In such an ideal case, the effects of decreased density and decreased pressure of altitude cancel each other out, save for the residual effect of temperature.
Since temperature (and thus the speed of sound) decreases with increasing altitude up to 11 km, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. The decrease of the speed of sound with height is referred to as a negative sound speed gradient.
However, there are variations in this trend above 11 km. In particular, in the stratosphere above about 20 km, the speed of sound increases with height, due to an increase in temperature from heating within the ozone layer. This produces a positive speed of sound gradient in this region. Still another region of positive gradient occurs at very high altitudes, in the aptly-named thermosphere above 90 km.
Practical formula for dry air
The approximate speed of sound in dry (0% humidity) air, in metres per second, at temperatures near 0 °C, can be calculated from

cair = (331.3 + 0.606 · θ) m/s

where θ is the temperature in degrees Celsius (°C).

This equation is derived from the first two terms of the Taylor expansion of the following more accurate equation:

cair = 331.3 √(1 + θ/273.15) m/s

Dividing the first part, and multiplying the second part, on the right hand side, by √273.15 gives the exactly equivalent form

cair = 20.05 √(θ + 273.15) m/s
The value of 331.3 m/s, which represents the speed at 0 °C (or 273.15 K), is based on theoretical (and some measured) values of the heat capacity ratio, γ, as well as on the fact that at 1 atm real air is very well described by the ideal gas approximation. Commonly found values for the speed of sound at 0 °C may vary from 331.2 to 331.6 m/s due to the assumptions made when it is calculated. If ideal gas γ is assumed to be 7/5 = 1.4 exactly, the 0 °C speed is calculated (see section below) to be 331.3 m/s, the coefficient used above.
This equation is correct to a much wider temperature range, but still depends on the approximation of heat capacity ratio being independent of temperature, and for this reason will fail, particularly at higher temperatures. It gives good predictions in relatively dry, cold, low pressure conditions, such as the Earth's stratosphere. The equation fails at extremely low pressures and short wavelengths, due to dependence on the assumption that the wavelength of the sound in the gas is much longer than the average mean free path between gas molecule collisions. A derivation of these equations will be given in the following section.
A graph comparing results of the two equations is at right, using the slightly different value of 331.5 m/s for the speed of sound at 0 °C.
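The two dry-air formulas can also be compared numerically; this short Python sketch implements both and shows how closely the two-term Taylor approximation tracks the more accurate square-root form over everyday temperatures:

```python
import math

def c_air_taylor(theta):
    """Two-term Taylor approximation: c = 331.3 + 0.606 * theta, in m/s."""
    return 331.3 + 0.606 * theta

def c_air_exact(theta):
    """More accurate form: c = 331.3 * sqrt(1 + theta / 273.15), in m/s."""
    return 331.3 * math.sqrt(1 + theta / 273.15)

# The approximation tracks the exact form closely at everyday temperatures:
for theta in (-25, 0, 20, 35):
    print(theta, round(c_air_taylor(theta), 1), round(c_air_exact(theta), 1))
```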
Speed of sound in ideal gases and air
For an ideal gas, K (the bulk modulus in equations above, equivalent to C, the coefficient of stiffness in solids) is given by

K = γ · p

thus, from the Newton–Laplace equation above, the speed of sound in an ideal gas is given by

c = √(γ · p/ρ)

where:
- γ is the adiabatic index, also known as the isentropic expansion factor. It is the ratio of the specific heat of a gas at constant pressure to that of a gas at constant volume (γ = Cp/Cv), and arises because a classical sound wave induces an adiabatic compression, in which the heat of the compression does not have enough time to escape the pressure pulse, and thus contributes to the pressure induced by the compression;
- p is the pressure;
- ρ is the density.
Using the ideal gas law to replace p with nRT/V, and replacing ρ with nM/V, the equation for an ideal gas becomes

cideal = √(γ · R · T/M) = √(γ · k · T/m)

where:
- cideal is the speed of sound in an ideal gas;
- R (approximately 8.3145 J · mol−1 · K−1) is the molar gas constant (universal gas constant);
- k is the Boltzmann constant;
- γ (gamma) is the adiabatic index. At room temperature, where thermal energy is fully partitioned into rotation (rotations are fully excited) but quantum effects prevent excitation of vibrational modes, the value is 7/5 = 1.400 for diatomic molecules, according to kinetic theory. Gamma is actually experimentally measured over a range from 1.3991 to 1.403 at 0 °C, for air. Gamma is exactly 5/3 = 1.6667 for monatomic gases such as noble gases;
- T is the absolute temperature;
- M is the molar mass of the gas. The mean molar mass for dry air is about 0.0289645 kg/mol;
- n is the number of moles;
- m is the mass of a single molecule.
This equation applies only when the sound wave is a small perturbation on the ambient condition, and the certain other noted conditions are fulfilled, as noted below. Calculated values for cair have been found to vary slightly from experimentally determined values.
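The ideal-gas expression can be evaluated directly with the constants listed above; this minimal Python sketch (using the diatomic value γ = 1.4 for dry air) reproduces the familiar 331.3 m/s and 343.2 m/s figures:

```python
import math

R = 8.3145         # molar gas constant, J/(mol*K)
M_AIR = 0.0289645  # mean molar mass of dry air, kg/mol
GAMMA = 1.4        # adiabatic index of a diatomic ideal gas

def c_ideal(T):
    """Speed of sound in an ideal gas: c = sqrt(gamma*R*T/M), T in kelvin."""
    return math.sqrt(GAMMA * R * T / M_AIR)

print(round(c_ideal(273.15), 1))  # about 331.3 m/s at 0 degC
print(round(c_ideal(293.15), 1))  # about 343.2 m/s at 20 degC
```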
Newton famously considered the speed of sound before most of the development of thermodynamics and so incorrectly used isothermal calculations instead of adiabatic. His result was missing the factor of γ but was otherwise correct.
Numerical substitution of the above values gives the ideal gas approximation of sound velocity for gases, which is accurate at relatively low gas pressures and densities (for air, this includes standard Earth sea-level conditions). Also, for diatomic gases the use of γ = 1.4000 requires that the gas exists in a temperature range high enough that rotational heat capacity is fully excited (i.e., molecular rotation is fully used as a heat energy "partition" or reservoir); but at the same time the temperature must be low enough that molecular vibrational modes contribute no heat capacity (i.e., insignificant heat goes into vibration, as all vibrational quantum modes above the minimum-energy-mode, have energies too high to be populated by a significant number of molecules at this temperature). For air, these conditions are fulfilled at room temperature, and also temperatures considerably below room temperature (see tables below). See the section on gases in specific heat capacity for a more complete discussion of this phenomenon.
For air, we use a simplified symbol

cair = cideal

Additionally, if temperatures in degrees Celsius (°C) are to be used to calculate air speed in the region near 273 kelvin, then the Celsius temperature θ = T − 273.15 may be used. Then, for dry air,

cair = √(γ · R · (θ + 273.15)/M)

where θ (theta) is the temperature in degrees Celsius (°C).

Making the following numerical substitutions: R = 8.3145 J · mol−1 · K−1 for the molar gas constant, M = 0.0289645 kg/mol for the mean molar mass of air, and the ideal diatomic gas value of γ = 1.4000, gives

cair = 331.3 √(1 + θ/273.15) m/s

Using the first two terms of the Taylor expansion:

cair ≈ (331.3 + 0.606 · θ) m/s
The derivation includes the first two equations given in the Practical formula for dry air section above.
Effects due to wind shear
The speed of sound varies with temperature. Since temperature and sound velocity normally decrease with increasing altitude, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. Wind shear of 4 m/(s · km) can produce refraction equal to a typical temperature lapse rate of 7.5 °C/km. Higher values of wind gradient will refract sound downward toward the surface in the downwind direction, eliminating the acoustic shadow on the downwind side. This will increase the audibility of sounds downwind. This downwind refraction effect occurs because there is a wind gradient; the sound is not being carried along by the wind.
For sound propagation, the exponential variation of wind speed with height can be defined as follows:

U(h) = U(0) hζ
dU/dH(h) = ζ U(h)/h

where:
- U(h) is the speed of the wind at height h;
- ζ is the exponential coefficient based on ground surface roughness, typically between 0.08 and 0.52;
- dU/dH(h) is the expected wind gradient at height h.
In the 1862 American Civil War Battle of Iuka, an acoustic shadow, believed to have been enhanced by a northeast wind, kept two divisions of Union soldiers out of the battle, because they could not hear the sounds of battle only 10 km (six miles) downwind.
In the standard atmosphere:
- T0 is 273.15 K (= 0 °C = 32 °F), giving a theoretical value of 331.3 m/s (= 1086.9 ft/s = 1193 km/h = 741.1 mph = 644.0 kn). Values ranging from 331.3 to 331.6 m/s may be found in reference literature, however;
- T20 is 293.15 K (= 20 °C = 68 °F), giving a value of 343.2 m/s (= 1126.0 ft/s = 1236 km/h = 767.8 mph = 667.2 kn);
- T25 is 298.15 K (= 25 °C = 77 °F), giving a value of 346.1 m/s (= 1135.6 ft/s = 1246 km/h = 774.3 mph = 672.8 kn).
In fact, assuming an ideal gas, the speed of sound c depends on temperature only, not on the pressure or density (since these change in lockstep for a given temperature and cancel out). Air is almost an ideal gas. The temperature of the air varies with altitude, giving the following variations in the speed of sound using the standard atmosphere—actual conditions may vary.
Given normal atmospheric conditions, the temperature, and thus speed of sound, varies with altitude:
|Altitude|Temperature|m/s|km/h|mph|kn|
|Sea level|15 °C (59 °F)|340|1,225|761|661|
|11,000 m–20,000 m (cruising altitude of commercial jets, and first supersonic flight)|−57 °C (−70 °F)|295|1,062|660|573|
|29,000 m (flight of X-43A)|−48 °C (−53 °F)|301|1,083|673|585|
Effect of frequency and gas composition
General physical considerations
The medium in which a sound wave is travelling does not always respond adiabatically, and as a result the speed of sound can vary with frequency.
The limitations of the concept of speed of sound due to extreme attenuation are also of concern. The attenuation which exists at sea level for high frequencies applies to successively lower frequencies as atmospheric pressure decreases, or as the mean free path increases. For this reason, the concept of speed of sound (except for frequencies approaching zero) progressively loses its range of applicability at high altitudes. The standard equations for the speed of sound apply with reasonable accuracy only to situations in which the wavelength of the soundwave is considerably longer than the mean free path of molecules in a gas.
The molecular composition of the gas contributes both as the mass (M) of the molecules, and their heat capacities, and so both have an influence on speed of sound. In general, at the same molecular mass, monatomic gases have slightly higher speed of sound (over 9% higher) because they have a higher γ (5/3 = 1.66…) than diatomics do (7/5 = 1.4). Thus, at the same molecular mass, the speed of sound of a monatomic gas goes up by a factor of

√(γmono/γdiatomic) = √((5/3)/(7/5)) = √(25/21) = 1.091
This gives the 9% difference, and would be a typical ratio for speeds of sound at room temperature in helium vs. deuterium, each with a molecular weight of 4. Sound travels faster in helium than deuterium because adiabatic compression heats helium more, since the helium molecules can store heat energy from compression only in translation, but not rotation. Thus helium molecules (monatomic molecules) travel faster in a sound wave and transmit sound faster. (Sound travels at about 70% of the mean molecular speed in gases; the figure is 75% in monatomic gases and 68% in diatomic gases).
Note that in this example we have assumed that temperature is low enough that heat capacities are not influenced by molecular vibration (see heat capacity). However, vibrational modes simply cause gammas which decrease toward 1, since vibration modes in a polyatomic gas gives the gas additional ways to store heat which do not affect temperature, and thus do not affect molecular velocity and sound velocity. Thus, the effect of higher temperatures and vibrational heat capacity acts to increase the difference between the speed of sound in monatomic vs. polyatomic molecules, with the speed remaining greater in monatomics.
Practical application to air
By far the most important factor influencing the speed of sound in air is temperature. The speed is proportional to the square root of the absolute temperature, giving an increase of about 0.6 m/s per degree Celsius. For this reason, the pitch of a musical wind instrument increases as its temperature increases.
The speed of sound is raised by humidity but decreased by carbon dioxide. The difference between 0% and 100% humidity is about 1.5 m/s at standard pressure and temperature, but the size of the humidity effect increases dramatically with temperature. The carbon dioxide content of air is not fixed, due to both carbon pollution and human breath (e.g., in the air blown through wind instruments).
The dependence on frequency and pressure are normally insignificant in practical applications. In dry air, the speed of sound increases by about 0.1 m/s as the frequency rises from 10 Hz to 100 Hz. For audible frequencies above 100 Hz it is relatively constant. Standard values of the speed of sound are quoted in the limit of low frequencies, where the wavelength is large compared to the mean free path.
Mach number, a useful quantity in aerodynamics, is the ratio of air speed to the local speed of sound. At altitude, for reasons explained, Mach number is a function of temperature. Aircraft flight instruments, however, operate using pressure differential to compute Mach number, not temperature. The assumption is that a particular pressure represents a particular altitude and, therefore, a standard temperature. Aircraft flight instruments need to operate this way because the stagnation pressure sensed by a Pitot tube is dependent on altitude as well as speed.
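A Mach number is simply the ratio described above; the airspeeds in this Python sketch are hypothetical, chosen to illustrate why the same true airspeed can be supersonic at altitude yet subsonic at sea level:

```python
def mach_number(v, c):
    """Mach number: ratio of the object's speed to the local speed of sound."""
    return v / c

# Hypothetical aircraft at 295 m/s true airspeed near 11,000 m, where the
# local speed of sound is about 295 m/s (see the altitude table above):
print(round(mach_number(295.0, 295.0), 2))  # Mach 1.0 at altitude
# The same airspeed at sea level, where c is about 340 m/s, is subsonic:
print(round(mach_number(295.0, 340.0), 2))  # about Mach 0.87
```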
A range of different methods exist for the measurement of sound in air.
The earliest reasonably accurate estimate of the speed of sound in air was made by William Derham, and acknowledged by Isaac Newton. Derham had a telescope at the top of the tower of the Church of St Laurence in Upminster, England. On a calm day, a synchronized pocket watch would be given to an assistant who would fire a shotgun at a pre-determined time from a conspicuous point some miles away, across the countryside. This could be confirmed by telescope. He then measured the interval between seeing gunsmoke and arrival of the sound using a half-second pendulum. The distance from where the gun was fired was found by triangulation, and simple division (distance/time) provided velocity. Lastly, by making many observations, using a range of different distances, the inaccuracy of the half-second pendulum could be averaged out, giving his final estimate of the speed of sound. Modern stopwatches enable this method to be used today over distances as short as 200–400 meters, and not needing something as loud as a shotgun.
Single-shot timing methods
If a sound source and two microphones are arranged in a straight line, with the sound source at one end, then the following can be measured:
- The distance between the microphones (x), called microphone basis.
- The time of arrival between the signals (delay) reaching the different microphones (t).
Then v = x/t.
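The two-microphone calculation is a single division; in this Python sketch the microphone spacing and delay are hypothetical bench values:

```python
def sound_speed_two_mics(x, t):
    """Estimate the speed of sound from the microphone basis x (metres)
    and the measured arrival delay t (seconds): v = x / t."""
    if t <= 0:
        raise ValueError("delay must be positive")
    return x / t

# Hypothetical bench measurement: microphones 1.715 m apart, 5 ms delay.
print(sound_speed_two_mics(1.715, 0.005))  # about 343 m/s
```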
Kundt's tube is an example of an experiment which can be used to measure the speed of sound in a small volume. It has the advantage of being able to measure the speed of sound in any gas. This method uses a powder to make the nodes and antinodes visible to the human eye. This is an example of a compact experimental setup.
A tuning fork can be held near the mouth of a long pipe which is dipping into a barrel of water. In this system, the pipe can be brought to resonance if the length of the air column in the pipe is equal to (1 + 2n)λ/4, where n is an integer. As the antinodal point for the pipe at the open end is slightly outside the mouth of the pipe, it is best to find two or more points of resonance and then measure half a wavelength between these.
Here, v = fλ.
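The resonance-tube calculation can be sketched as follows; the fork frequency and resonance lengths are hypothetical, and differencing two resonance lengths cancels the end correction mentioned above:

```python
def speed_from_resonances(f, L1, L2):
    """Speed of sound from two adjacent resonance lengths of the air column.

    Adjacent resonances are half a wavelength apart, so lambda = 2 * (L2 - L1)
    and v = f * lambda; differencing two lengths cancels the end correction
    at the open mouth of the pipe.
    """
    return f * 2.0 * (L2 - L1)

# Hypothetical run: 512 Hz tuning fork, resonances at 16.2 cm and 49.7 cm.
print(round(speed_from_resonances(512.0, 0.162, 0.497), 1))  # about 343 m/s
```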
High-precision measurements in air
The effect from impurities can be significant when making high-precision measurements. Chemical desiccants can be used to dry the air, but will in turn contaminate the sample. The air can be dried cryogenically, but this has the effect of removing the carbon dioxide as well; therefore many high-precision measurements are performed with air free of carbon dioxide rather than with natural air. A 2002 review found that a 1963 measurement by Smith and Harlow using a cylindrical resonator gave "the most probable value of the standard speed of sound to date." The experiment was done with air from which the carbon dioxide had been removed, but the result was then corrected for this effect so as to be applicable to real air. The experiments were done at 30 °C but corrected for temperature in order to report them at 0 °C. The result was 331.45 ± 0.01 m/s for dry air at STP, for frequencies from 93 Hz to 1,500 Hz.
Speed of sound in solids
In a solid, there is a non-zero stiffness both for volumetric deformations and shear deformations. Hence, it is possible to generate sound waves with different velocities dependent on the deformation mode. Sound waves generating volumetric deformations (compression) and shear deformations (shearing) are called pressure waves (longitudinal waves) and shear waves (transverse waves), respectively. In earthquakes, the corresponding seismic waves are called P-waves (primary waves) and S-waves (secondary waves), respectively. The sound velocities of these two types of waves propagating in a homogeneous 3-dimensional solid are respectively given by

csolid,p = √((K + (4/3)G)/ρ) = √(E(1 − ν)/(ρ(1 + ν)(1 − 2ν)))
csolid,s = √(G/ρ)

where:
- K is the bulk modulus of the elastic materials;
- G is the shear modulus of the elastic materials;
- E is the Young's modulus;
- ρ is the density;
- ν is Poisson's ratio.
The last quantity is not an independent one, as E = 3K(1 − 2ν). Note that the speed of pressure waves depends both on the pressure and shear resistance properties of the material, while the speed of shear waves depends on the shear properties only.
Typically, pressure waves travel faster in materials than do shear waves, and in earthquakes this is the reason that the onset of an earthquake is often preceded by a quick upward-downward shock, before arrival of waves that produce a side-to-side motion. For example, for a typical steel alloy, K = 170 GPa, G = 80 GPa and ρ = 7,700 kg/m3, yielding a compressional speed csolid,p of 6,000 m/s. This is in reasonable agreement with csolid,p measured experimentally at 5,930 m/s for a (possibly different) type of steel. The shear speed csolid,s is estimated at 3,200 m/s using the same numbers.
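The steel-alloy example above can be reproduced numerically; this Python sketch uses the K, G and ρ values quoted in the text:

```python
import math

def c_pressure(K, G, rho):
    """P-wave speed in a homogeneous 3-D solid: sqrt((K + 4G/3) / rho)."""
    return math.sqrt((K + 4.0 * G / 3.0) / rho)

def c_shear(G, rho):
    """S-wave speed: sqrt(G / rho)."""
    return math.sqrt(G / rho)

# Typical steel alloy values from the text: K = 170 GPa, G = 80 GPa,
# rho = 7,700 kg/m^3.
print(round(c_pressure(170e9, 80e9, 7700)))  # about 6,000 m/s
print(round(c_shear(80e9, 7700)))            # about 3,200 m/s
```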
The speed of sound for pressure waves in stiff materials such as metals is sometimes given for "long rods" of the material in question, in which the speed is easier to measure. In rods whose diameter is shorter than a wavelength, the speed of pure pressure waves may be simplified and is given by:

csolid = √(E/ρ)
where E is the Young's modulus. This is similar to the expression for shear waves, save that Young's modulus replaces the shear modulus. This speed of sound for pressure waves in long rods will always be slightly less than the same speed in homogeneous 3-dimensional solids, and the ratio of the speeds in the two different types of objects depends on Poisson's ratio for the material.
Speed of sound in liquids
In a fluid the only non-zero stiffness is to volumetric deformation (a fluid does not sustain shear forces).
Hence the speed of sound in a fluid is given by

cfluid = √(K/ρ)
where K is the bulk modulus of the fluid.
In fresh water, sound travels at about 1481 m/s at 20 °C. See Technical Guides - Speed of Sound in Pure Water for an online calculator. Applications of underwater sound can be found in sonar, acoustic communication and acoustical oceanography. See Discovery of Sound in the Sea for other examples of the uses of sound in the ocean (by both man and other animals).
In salt water that is free of air bubbles or suspended sediment, sound travels at about 1500 m/s (1500.235 m/s at 1000 kilopascals, 10 °C and 3% salinity by one method). The speed of sound in seawater depends on pressure (hence depth), temperature (a change of 1 °C ~ 4 m/s), and salinity (a change of 1‰ ~ 1 m/s), and empirical equations have been derived to accurately calculate the speed of sound from these variables. Other factors affecting the speed of sound are minor. Since temperature decreases with depth while pressure and generally salinity increase, the profile of the speed of sound with depth generally shows a characteristic curve which decreases to a minimum at a depth of several hundred meters, then increases again with increasing depth (right). For more information see Dushaw et al.
A simple empirical equation for the speed of sound in sea water with reasonable accuracy for the world's oceans is due to Mackenzie:

c(T, S, z) = a1 + a2T + a3T2 + a4T3 + a5(S − 35) + a6z + a7z2 + a8T(S − 35) + a9Tz3
- T is the temperature in degrees Celsius;
- S is the salinity in parts per thousand;
- z is the depth in meters.
The constants a1, a2, …, a9 are
a1 = 1448.96, a2 = 4.591, a3 = −5.304 × 10⁻², a4 = 2.374 × 10⁻⁴, a5 = 1.340, a6 = 1.630 × 10⁻², a7 = 1.675 × 10⁻⁷, a8 = −1.025 × 10⁻², a9 = −7.139 × 10⁻¹³,
with check value 1550.744 m/s for T = 25 °C, S = 35 parts per thousand, z = 1,000 m. This equation has a standard error of 0.070 m/s for salinity between 25 and 40 ppt. See Technical Guides - Speed of Sound in Sea-Water for an online calculator.
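The Mackenzie formula lends itself to a direct implementation. The coefficient values in this sketch are those published in Mackenzie (1981), as cited in the references; the function name is my own. The stated check value reproduces exactly:

```python
def mackenzie(T, S, z):
    """Mackenzie (1981) empirical speed of sound in seawater, in m/s.

    T: temperature in degrees Celsius
    S: salinity in parts per thousand
    z: depth in metres
    """
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35) + 1.630e-2 * z + 1.675e-7 * z**2
            - 1.025e-2 * T * (S - 35) - 7.139e-13 * T * z**3)

# Check value from the text: T = 25 degC, S = 35 ppt, z = 1000 m.
print(round(mackenzie(25, 35, 1000), 3))  # 1550.744 m/s
```

Reproducing the published check value is a quick way to catch a mistyped coefficient before using such an empirical fit.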
Other equations for the speed of sound in sea water are accurate over a wide range of conditions, but are far more complicated, e.g., that by V. A. Del Grosso and the Chen-Millero-Li Equation.
Speed of sound in plasma
The speed of sound in a plasma, for the common case that the electrons are hotter than the ions, is given by
c_s = √(γZkT_e/m_i) = 9.79 × 10⁵ √(γZT_e/μ) cm/s (with T_e in eV),
where
- mi is the ion mass;
- μ is the ratio of ion mass to proton mass μ = mi/mp;
- Te is the electron temperature;
- Z is the charge state;
- k is the Boltzmann constant;
- γ is the adiabatic index.
In contrast to a gas, the pressure and the density are provided by separate species, the pressure by the electrons and the density by the ions. The two are coupled through a fluctuating electric field.
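The ion acoustic speed c_s = √(γZkT_e/m_i), built from the quantities listed above, can be sketched in SI units. The example plasma parameters (hydrogen, T_e = 10⁵ K, Z = 1, γ = 1) are illustrative assumptions:

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
m_p = 1.67262192e-27  # proton mass, kg

def ion_sound_speed(Te_kelvin, Z=1, mu=1.0, gamma=1.0):
    """Ion acoustic speed c_s = sqrt(gamma * Z * k * Te / m_i), in m/s.

    Te_kelvin: electron temperature in K; Z: charge state;
    mu: ratio of ion mass to proton mass; gamma: adiabatic index.
    """
    m_i = mu * m_p
    return math.sqrt(gamma * Z * k_B * Te_kelvin / m_i)

# Illustrative (assumed) case: hydrogen plasma at Te = 1e5 K.
print(round(ion_sound_speed(1e5)))  # roughly 2.9e4 m/s
```

Note that the electron temperature, not the ion temperature, sets the scale, reflecting the text's point that pressure is provided by the electrons while inertia comes from the ions.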
When sound spreads out evenly in all directions in three dimensions, the intensity drops in proportion to the inverse square of the distance. However, in the ocean there is a layer called the 'deep sound channel' or SOFAR channel which can confine sound waves at a particular depth.
In the SOFAR channel, the speed of sound is lower than that in the layers above and below. Just as light waves will refract towards a region of higher index, sound waves will refract towards a region where their speed is reduced. The result is that sound gets confined in the layer, much the way light can be confined in a sheet of glass or optical fiber. Thus, the sound is confined in essentially two dimensions. In two dimensions the intensity drops in proportion to only the inverse of the distance. This allows waves to travel much further before being undetectably faint.
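The geometric difference between three- and two-dimensional spreading is conventionally expressed as a transmission loss in decibels: inverse-square (spherical) spreading gives 20·log10(r) dB and inverse-distance (cylindrical) spreading gives 10·log10(r) dB, relative to an assumed 1 m reference distance. A minimal sketch:

```python
import math

def spherical_loss_db(r):
    """Transmission loss for 3-D (inverse-square) spreading, re 1 m."""
    return 20 * math.log10(r)

def cylindrical_loss_db(r):
    """Transmission loss for 2-D (inverse-distance) spreading, re 1 m."""
    return 10 * math.log10(r)

# At 1 km, sound trapped in a channel (cylindrical spreading) has lost
# half as many decibels as sound spreading freely in three dimensions.
print(spherical_loss_db(1000.0))    # 60.0 dB
print(cylindrical_loss_db(1000.0))  # 30.0 dB
```

A 30 dB difference is a factor of 1000 in intensity, which is why SOFAR-channel sound can be detected at ranges of thousands of kilometres.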
- Acoustoelastic effect
- Elastic wave
- Second sound
- Sonic boom
- Sound barrier
- Underwater acoustics
- Speed of Sound
- "The Speed of Sound". mathpages.com. Retrieved 3 May 2015.
- Bannon, Mike; Kaputa, Frank. "The Newton–Laplace Equation and Speed of Sound". Thermal Jackets. Retrieved 3 May 2015.
- Murdin, Paul (Dec 25, 2008). Full Meridian of Glory: Perilous Adventures in the Competition to Measure the Earth. Springer Science & Business Media. pp. 35–36. ISBN 9780387755342.
- Fox, Tony (2003). Essex Journal. Essex Arch & Hist Soc. pp. 12–16.
- Dean, E. A. (August 1979). Atmospheric Effects on the Speed of Sound, Technical report of Defense Technical Information Center
- Everest, F. (2001). The Master Handbook of Acoustics. New York: McGraw-Hill. pp. 262–263. ISBN 0-07-136097-2.
- "CODATA Value: molar gas constant". Physics.nist.gov. Retrieved 24 October 2010.
- U.S. Standard Atmosphere, 1976, U.S. Government Printing Office, Washington, D.C., 1976.
- Uman, Martin (1984). Lightning. New York: Dover Publications. ISBN 0-486-64575-4.
- Volland, Hans (1995). Handbook of Atmospheric Electrodynamics. Boca Raton: CRC Press. p. 22. ISBN 0-8493-8647-0.
- Singal, S. (2005). Noise Pollution and Control Strategy. Oxford: Alpha Science International. p. 7. ISBN 1-84265-237-0.
It may be seen that refraction effects occur only because there is a wind gradient; they are not a result of sound being convected along by the wind.
- Bies, David (2004). Engineering Noise Control, Theory and Practice. London: Spon Press. p. 235. ISBN 0-415-26713-7.
As wind speed generally increases with altitude, wind blowing towards the listener from the source will refract sound waves downwards, resulting in increased noise levels.
- Cornwall, Sir (1996). Grant as Military Commander. New York: Barnes & Noble. p. 92. ISBN 1-56619-913-1.
- Cozens, Peter (2006). The Darkest Days of the War: the Battles of Iuka and Corinth. Chapel Hill: The University of North Carolina Press. ISBN 0-8078-5783-1.
- A B Wood, A Textbook of Sound (Bell, London, 1946)
- "Speed of Sound in Air". Phy.mtu.edu. Retrieved 13 June 2014.
- Nemiroff, R.; Bonnell, J., eds. (19 August 2007). "A Sonic Boom". Astronomy Picture of the Day. NASA. Retrieved 24 October 2010.
- Zuckerwar, Handbook of the speed of sound in real gases, p. 52
- L. E. Kinsler et al. (2000), Fundamentals of acoustics, 4th Ed., John Wiley and sons Inc., New York, USA
- J. Krautkrämer and H. Krautkrämer (1990), Ultrasonic testing of materials, 4th fully revised edition, Springer-Verlag, Berlin, Germany, p. 497
- "Speed of Sound in Water at Temperatures between 32–212 °F (0–100 °C) — imperial and SI units". The Engineering Toolbox.
- Wong, George S. K.; Zhu, Shi-ming (1995). "Speed of sound in seawater as a function of salinity, temperature, and pressure". The Journal of the Acoustical Society of America. 97 (3): 1732. doi:10.1121/1.413048.
- APL-UW TR 9407 High-Frequency Ocean Environmental Acoustic Models Handbook, pp. I1-I2.
- Robinson, Stephen (22 Sep 2005). "Technical Guides - Speed of Sound in Sea-Water". National Physical Laboratory. Retrieved 7 December 2016.
- "How Fast Does Sound Travel?". Discovery of Sound in the Sea. University of Rhode Island. Retrieved 30 November 2010.
- Dushaw, Brian D.; Worcester, P. F.; Cornuelle, B. D.; Howe, B. M. (1993). "On Equations for the Speed of Sound in Seawater". Journal of the Acoustical Society of America. 93 (1): 255–275. Bibcode:1993ASAJ...93..255D. doi:10.1121/1.405660.
- Kenneth V., Mackenzie (1981). "Discussion of sea-water sound-speed determinations". Journal of the Acoustical Society of America. 70 (3): 801–806. Bibcode:1981ASAJ...70..801M. doi:10.1121/1.386919.
- Del Grosso, V. A. (1974). "New equation for speed of sound in natural waters (with comparisons to other equations)". Journal of the Acoustical Society of America. 56 (4): 1084–1091. Bibcode:1974ASAJ...56.1084D. doi:10.1121/1.1903388.
- Meinen, Christopher S.; Watts, D. Randolph (1997). "Further Evidence that the Sound-Speed Algorithm of Del Grosso Is More Accurate Than that of Chen and Millero". Journal of the Acoustical Society of America. 102 (4): 2058–2062. Bibcode:1997ASAJ..102.2058M. doi:10.1121/1.419655.
- Calculation: Speed of Sound in Air and the Temperature
- Speed of sound: Temperature Matters, Not Air Pressure
- Properties of the U.S. Standard Atmosphere 1976
- The Speed of Sound
- How to Measure the Speed of Sound in a Laboratory
- Teaching Resource for 14-16 Years on Sound Including Speed of Sound
- Technical Guides. Speed of Sound in Pure Water
- Technical Guides. Speed of Sound in Sea-Water
- Did Sound Once Travel at Light Speed?
- Acoustic Properties of Various Materials Including the Speed of Sound
From Uncyclopedia, the content-free encyclopedia
Please note that as Proof has been shot dead, all information below has been rendered obsolete.
Methods of proof
There are several methods of proof which are commonly used:
Proof by Revenge
"2+2=5" "no it doesn't" "REVENGE!"
Proof by Adding a Constant
2 = 1 if we add a constant C such that 2 = 1 + C.
Proof by Multiplying by the Additive Identity
Multiply both expressions by zero, e.g.,
- 1 = 2
- 1 × 0 = 2 × 0
- 0 = 0
Since the final statement is true, so is the first.
See also Proof by Pedantry.
Proof by Altering (or Destroying) the Original Premise (or Evidence)
- A = 1 and B = 1 and A + B = 3
- [Long list of confusing statements]
- [Somewhere down the line, stating B = 2 and covering up the previous definition]
- [Long list of other statements]
- A + B = 3
Works best over long period of time.
Proof by Analogy
Draw a poor analogy. Say you have two cows. But one is a bull. After the proper gestation period, 1 + 1 = 3.
Proof by Anti-proof
If there is proof that has yet to be accounted for in your opponent's argument, then it is wholly discreditable and thus proof of your own concept. It also works if you claim to be unable to comprehend their proof. Example:
- I can't see how a flagellum can evolve by itself, therefore the theory of evolution is incorrect, therefore someone must have put them together, therefore convert now!
Note: This generally works equally well in both directions:
- I can't see how someone could have put a flagellum together, therefore the theory of Creation is incorrect, therefore it must have evolved by itself, therefore Let's Party!
Proof by August
Since August is such a good time of year, no one will disagree with a proof published then, and therefore it is true. Of course, the converse is also true, i.e., January is crap, and all the logic in the world will not prove your statement then.
Proof by Assumption
An offshoot of Proof by Induction, one may assume the result is true. Therefore, it is true.
Proof by Axiom
Assert an axiom A such that the proposition P you are trying to prove is true. Thus, any statement S contradicting P is false, so P is true. Q.E.D.
Proof by Belief
"I believe assertion A to hold, therefore it does. Q.E.D."
Proof by Bijection
This is a method of proof made famous by P. T. Johnstone. Start with a completely irrelevant fact. Construct a bijection from the irrelevant fact to the thing you are trying to prove. Talk about rings for a few minutes, but make sure you keep their meaning a secret. When the audience are all confused, write Q.E.D. and call it trivial. Example:
- To prove the Chinese Remainder Theorem, observe that if p divides q, we have a well-defined function. Z/qZ → Z/qZ is a bijection. Since f is a homomorphism of rings, φ(mn) = φ(m) × φ(n) whenever (n, m) = 1. Using IEP on the hyperfield, there is a unique integer x, modulo mn, satisfying x = a (mod m) and x = b (mod n). Thus, Q.E.D., and we can see it is trivial.
Proof by B.O.
This method is a fruitful attack on a wide range of problems: don't have a shower for several weeks and play lots of sports.
Proof by Calling the Other Guy an Idiot
"I used to respect his views, but by stating this opinion, he has now proven himself to be an idiot." Q.E.D.
Proof by Arbitration
Oftentimes in mathematics, it is useful to create arbitrary "Where in the hell did that come from?" type theorems which are designed to make the reader become so confused that the proof passes as sound reasoning.
Proof by Faulty Logic
Math professors and logicians sometimes rely on their own intuition to prove important mathematical theorems. The following is an especially important theorem which opened up the multi-disciplinary field of YouTube.
- Let k and l be the two infinities: namely, the negative infinity and the positive infinity. Then, there exists a real number c, such that k and l cease to exist. Such a c is zero. We conclude that the zero infinity exists and is in between the positive and negative infinities. This theorem opens up many important ideas. For example, primitive logic would dictate that the square root of infinity, r, is a number less than r.
"I proved, therefore I am proof." – Isaac Newton, 1678, American Idol.
Proof by Canada
Like other proofs, but replace Q.E.D. with Z.E.D. Best when submitted with a bowl of Kraft Dinner.
Proof by Cantona
Conduct the proof in a confident manner in which you are convinced in what you are saying is correct, but which is absolute bollocks – and try to involve seagulls in some way. Example:
- If sin x < x … for all x > 0 … and when … [pause to have a sip of water] … the fisherman … throws sardines off the back of the trawler … and x > 0 … then … you can expect the seagulls to follow … and so sin x = 0 for all x.
Proof by Cases
AN ARGUMENT MADE IN CAPITAL LETTERS IS CORRECT. THEREFORE, SIMPLY RESTATE THE PROPOSITION YOU ARE TRYING TO PROVE IN CAPITAL LETTERS, AND IT WILL BE CORRECT!!!!!1 (USE TYPOS AND EXCLAMATION MARKS FOR ESPECIALLY DIFFICULT PROOFS)
Proof by Chocolate
By writing what seems to be an extensive proof and then smearing chocolate to stain the most crucial parts, the reader will assume that the proof is correct so as not to appear to be a fool.
Proof by Complexity
Remember: something is not true once its proof has been verified; it is true as long as it has not been disproved. For this reason, the best strategy is to limit as much as possible the number of people with the needed competence to understand your proof.
Be sure to include very complex elements in your proof. Infinite numbers of dimensions, hypercomplex numbers, indeterminate forms, graphs, references to very old books/movies/bands that almost nobody knows, quantum physics, modal logic, and chess opening theory are to be included in the thesis. Make sentences in Latin, Ancient Greek, Sanskrit, Ithkuil, and invent languages.
Refer to the Cumbersome Notation to make it more complex.
Again, the goal: nobody must understand, and this way, nobody can disprove you.
Proof by (a Broad) Consensus
If enough people believe something to be true, then it must be so. For even more emphatic proof, one can use the similar Proof by a Broad Consensus.
Proof by Contradiction (reductio ad absurdum)
- Assume the opposite: "not p".
- Bla, bla, bla …
- … which leads to "not p" being false, which contradicts assumption (1).
- Therefore p. Q.E.D.
Whatever you say in (2) and (3), (4) will make p true.
Useful to support other proofs.
Proof by Coolness (ad coolidum)
Let C be the coolness function
- C(2 + 2 = 4) < C(2 + 2 = 5)
- Therefore, 2 + 2 = 5
Let ACB be A claims B.
- C(Y) > C(X)
- Therefore Q unless there is Z, C(Z) > C(Y) and (ZC¬Q)
Let H be the previous demonstration, N nothingness, and M me.
- C(M) < C(N)
- Therefore ¬H
and all this is false since nothingness is cooler.
Let J be previous counter-argument and K be HVJ.
- Substitute K for H in J
- Therefore ¬K
- ¬K implies ¬J and ¬H
- ¬J implies H
- Therefore H and ¬H
- Therefore non-contradiction is false ad coolidum
- Therefore C(Aristotle) < C(M)
Proof by Cumbersome Notation
Best done with access to at least four alphabets and special symbols. Matrices, Tensors, Lie algebra and the Kronecker-Weyl Theorem are also well-suited.
Proof by Default
Proof by Definition
Define something as such that the problem falls into grade one math, e.g., "I am, therefore I am".
Proof by Delegation
"The general result is left as an exercise to the reader."
Proof by Dessert
The proof is in the pudding.
Philosophers consider this to be the tastiest possible proof.
Proof by Diagram
Reducing problems to diagrams with lots of arrows. Particularly common in category theory.
Proof by Disability
Proof conducted by the principle of not drawing attention to somebody's disability – like a speech impediment for oral proofs, or a severed tendon in the arm for written proofs.
Proof by Disgust
State two alternatives and explain how one is disgusting. The other is therefore obviously right and true.
- Do we come from God or from monkeys? Monkeys are disgusting. Ergo, God made Adam.
- Is euthanasia right or wrong? Dogs get euthanasia. Dogs smell and lick their butts. Ergo, euthanasia is wrong.
- Is cannibalism right or wrong? Eeew, blood! Ergo, cannibalism is wrong.
Proof by Dissent
If there is a consensus on a topic, and you disagree, then you are right because people are stupid. See global warming sceptics, creationist, tobacco companies, etc., for application of this proof.
Proof by Distraction
Be sure to provide some distraction while you go on with your proof, e.g., some third-party announces, a fire alarm (a fake one would do, too) or the end of the universe. You could also exclaim, "Look! A distraction!", meanwhile pointing towards the nearest brick wall. Be sure to wipe the blackboard before the distraction is presumably over so you have the whole board for your final conclusion.
Don't be intimidated if the distraction takes longer than planned – simply head over to the next proof.
An example is given below.
- Look behind you!
- … and proves the existence of an answer for 2 + 2.
- Look! A three-headed monkey over there!
- … leaves 5 as the only result of 2 + 2.
- Therefore 2 + 2 = 5. Q.E.D.
This is related to the classic Proof by "Look, a naked woman!"
Proof by Elephant
Q: For all rectangles, prove diagonals are bisectors. A: None: there is an elephant in the way!
Proof by Engineer's Induction
- See also: Proof by Induction.
Suppose P(n) is a statement.
- Prove true for P(1).
- Prove true for P(2).
- Prove true for P(3).
- Therefore P(n) is true for all n.
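In the spirit of the method, engineer's induction can even be automated. The sketch below (Python chosen arbitrarily) "proves" that Euler's polynomial n² + n + 41 is always prime by verifying the first three cases, a claim that quietly collapses at n = 40:

```python
def is_prime(m):
    """Trial-division primality test, adequate for small m."""
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m**0.5) + 1))

def euler(n):
    """Euler's famous prime-generating polynomial."""
    return n * n + n + 41

# Engineer's induction: P(1), P(2), P(3) hold, therefore P(n) for all n.
assert all(is_prime(euler(n)) for n in (1, 2, 3))
print("Q.E.D.")

# The honest reader may check n = 40: 40*40 + 40 + 41 == 41*41.
print(is_prime(euler(40)))  # False
```

Three cases, one Q.E.D., and only the spoilsports run the last line.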
Proof by Exhaustion
This method of proof requires all possible values of the expression to be evaluated. Due to the infinite length of the proof, it can be used to prove almost anything, since the reader will either get bored whilst reading and skip to the conclusion or get hopelessly lost and thus be convinced that the proof is concrete.
Proof by Eyeballing
Quantities that look similar are indeed the same. Often drawing random pictures will aid with this process.
Corollary: If it looks like a duck and acts like a duck, then it must be a duck.
Proof by Flutterby Effect
Proofs to the contrary that you can (and do) vigorously and emphatically ignore therefore you don't know about, don't exist. Ergo, they can't and don't apply.
Corollary: If it looks like a duck, acts like a duck and quacks like a duck, but I didn't see it (and hey, did you know my Mom ruptured my eardrums), then it's maybe … an aardvark?
Proof by Gun
A special case of Proof by Intimidation: "I have a gun and you don't. I'm right, you're wrong. Repeat after me: Q.E.D."
Proof by Global Warming
If it doesn't contribute to Global Warming, it is null and void.
Proof by God
Also a special case of Proof by Intimidation: "Don't question my religion, or you are supremely insensitive and God will smite you." Similar to Proof by Religion, but sanctioned by George W. Bush.
Proof by Hand Waving
- See main article: Hand waving.
Commonly used in calculus, hand waving dispenses with the pointless notion that a proof need be rigorous.
Proof by Hitler Analogy
The opposite of Proof by Wikipedia. If Hitler said,
- "I like cute kittens."
then – automatically – cute kittens are evil, and liking them proves that you caused everything that's wrong in the world for the last 50 years.
Simple Proof by Hubris
I exist, therefore I am correct.
Proof by Hypnosis
Try to relate your proof to simple harmonic motion in some way and then convince people to look at a swinging pendulum.
Proof by Imitation
Make a ridiculous imitation of your opponent in a debate. Arguments cannot be seriously considered when the one who proposes them was laughed at a moment before.
Make sure to use puppets and high-pitched voices, and also have the puppet repeat "I am a X", replacing X with any minority that the audience might disregard: gay, lawyer, atheist, creationist, zoophile, paedophile … the choice is yours!
Proof by Immediate Danger
Having a fluorescent green gas gently seep into the room through the air vents will probably be beneficial to your proof.
Proof by Impartiality
If you, Y, disagree with X on issue I, you can invariably prove yourself right by the following procedure:
- Get on TV with X.
- Open with an ad hominem attack on X and then follow up by saying that God hates X for X's position on I.
- When X attempts to talk, interrupt him very loudly, and turn down his microphone.
- Remind your audience that you are impartial where I is concerned, while X is an unwitting servant of Conspiracy Z, e.g., the Liberal Media, and that therefore X is wrong. Then also remind your audience that I is binary, and since your position on I is different from X's, it must be right.
- That sometimes fails to prove the result on the first attempt, but by repeatedly attacking figures X1, X2, …, Xn – and by proving furthermore (possibly using Proof by Engineer's Induction) that Xn is wrong implies Xn+1 is wrong, and by demonstrating that you cannot be an Xi because your stance on I differs due to a change in position i, demonstrating that while the set of Xi's is countable, the set containing you is uncountable by the diagonal argument, and from there one can apply Proof by Consensus, as your set is infinitely bigger – you can prove yourself right.
A noted master of the technique is Bill O'Reilly.
Proof by Induction
Proof by Induction claims that
t ∝ p, where p is the number of pages used to contain the proof and t is the time required to prove something, relative to the trivial case.
For the common, but special case of generalising the proof,
t ∝ pn, where p is the number of pages used to contain the proof, n is the number of things which are being proved and t is the time required to prove something, relative to the trivial case.
The actual method of constructing the proof is irrelevant.
Proof by Intimidation
One of the principal methods used to prove mathematical statements. Remember, even if your achievements have nothing to do with the topic, you're still right. Also, if you spell even slightly better, make less typos, or use better grammar, you've got even more proof. The exact statement of proof by intimidation is given below.
Suppose a mathematician F is at a position n in the following hierarchy:
- Fields Medal winner
- Tenured Professor
- Non-tenured professor
- Graduate Student
- Undergraduate Student
If a second mathematician G is at any position p such that p < n, then any statement S given to F by G is true.
Alternatively: Theorem 3.6. All zeros of the Riemann Zeta function lie on the critical line (have a real component of 1/2).
Proof: "… trivial …"
Proof by Irrelevant References
A proof that is backed up by citations that may or may not contain a proof of the assertion. This includes references to documents that don't exist. (Cf. Schott, Wiggenmeyer & Pratt, Annals of Veterinary Medicine and Modern Domestic Plumbing, vol. 164, Jul 1983.)
Proof by Jack Bauer
If Jack Bauer says something is true, then it is. No ifs, ands, or buts about it. End of discussion.
This is why, for example, torture is good.
Proof by Lecturer
It's true because my lecturer said it was true. Q.E.D.
Proof by Liar
If a liar says that he is a liar, he lies, because liars always lie; so he is not a liar.
Simple, ain't it?
Proof by Kim G. S. Øyhus' Inference
- and .
- Ergo, .
- Therefore I'm right and you're wrong.
Proof by LSD
Wow! That is sooo real, man!
Proof by Margin Too Small
"I have discovered a truly marvelous proof of this, which this margin is too narrow to contain."
Proof by Mathematical Interpretive Dance
Proof by Misunderstanding
"2 is equal to 3 for sufficiently large values of 2."
Proof by Mockery
Let the other party state his claim in detail, wait while he lists and explains all his arguments, and then, at any moment, explode in laughter and ask, "No, are you serious? That must be a joke. You can't really think that, do you?" Then leave the debate in laughter and shout, "If you all want to listen to this parody of an argument, I shan't prevent you!"
Proof by Narcotics Abuse
Spike the drinks/food of all people attending with psychoactive or hallucinogenic chemicals.
Proof by Obama
Yes, we can.
Proof by Obfuscation
A long, plotless sequence of true and/or meaningless syntactically related statements.
Proof by Omission
Make it easier on yourself by leaving it up to the reader. After all, if you can figure it out, surely they can. Examples:
- The reader may easily supply the details.
- The other 253 cases are analogous.
- The proof is left as an exercise for the reader.
- The proof is left as an exercise for the marker (guaranteed to work in an exam).
Proof by Ostention
- 2 + 2 = 5
Proof by Outside the Scope
All the non-trivial parts of the proof are left out, stating that proving them is outside the scope of the book.
Proof by Overwhelming Errors
A proof in which there are so many errors that the reader can't tell whether the conclusion is proved or not, and so is forced to accept the claims of the writer. Most elegant when the number of errors is even, thus leaving open the possibility that all the errors exactly cancel each other out.
Proof by Ødemarksism
- See also: Proof by Consensus.
- The majority thinks P.
- Therefore P is true (and dissenters should be silenced in order to reduce conflict from diversity).
The silencing of dissenters can be made easier with convincing arguments.
Proof by Penis Size
My dick's much bigger than yours, so I'm right.
Corollary: You don't have a penis, so I'm right.
Proof by Pornography
Include pornographic pictures or videos in the proof – preferably playing a porno flick exactly to the side of where you are conducting the proof. Works best if you pretend to be oblivious to the porn yourself and act as if nothing is unusual.
Proof by Process of Elimination
so 2 + 2 = 5
Proof by Promiscuity
I get laid much more than you, so I'm right.
Proof by Proving
Well proven is the proof that all proofs need not be unproven in order to be proven to be proofs. But where is the real proof of this? A proof, after all, cannot be a good proof until it has been proven. Right?
Proof by Question
If you are asking me to prove something, it must be true. So why bother asking?
Proof by Realization
A form of proof where something is proved by realizing that it is true. Therefore, the proof holds.
Proof by Reduction
Show that the theorem you are attempting to prove is equivalent to the trivial problem of not getting laid. Particularly useful in axiomatic set theory.
Proof by Reduction to the Wrong Problem
Why prove this theorem when you can show it's identical to some other, already proven problem? Plus a few additional steps, of course …
- Example: "To prove the four colour theorem, we reduce it to the halting problem."
Proof by Religion
Related to Proof by Belief, this method of attacking a problem involves the principle of mathematical freedom of expression by asserting that the proof is part of your religion, and then accusing all dissenters of religiously persecuting you, due to their stupidity of not accepting your obviously correct and logical proof. See also Proof by God.
Proof by Repetition
AKA the Socratic method.
If you say something is true enough times, then it is true. Repeatedly asserting something to be true makes it so. To repeat many times and at length the veracity of a given proposition adds to the general conviction that such a proposition might come to be truthful. Also, if you say something is true enough times, then it is true. Let n be the times any given proposition p was stated, preferably in different forms and ways, but not necessarily so. Then it comes to pass that the higher n comes to be, the more truth-content t it possesses. Recency bias and fear of ostracism will make people believe almost anything that is said enough times. If something has been said to be true again and again, it must definitely be true, beyond any shadow of doubt. The very fact that something is stated endlessly is enough for any reasonable person to believe it. And, finally, if you say something is true enough times, then it is true. Q.E.D.
Exactly how many times one needs to repeat the statement for it to be true, is debated widely in academic circles. Generally, the point is reached when those around die through boredom.
- E.g., let A = B. Since A = B, and B = A, and A = B, and A = B, and A = B, and B = A, and A = B, and A = B, then A = B.
Proof by Restriction
If you prove your claim for one case, and make sure to restrict yourself to this one, you thus avoid any case that could compromise you. You can hope that people won't notice the omission.
Example: Prove the four-color theorem.
- Take a map of only one region. Only 1 color is needed to color it, and 1 ≤ 4. End of the proof.
If someone questions the completeness of the proof, other methods of proof can be used.
Proof by the Rovdistic Principle
- See also: Proof by Belief.
- I like to think that 2 + 2 = 5.
- Therefore, 2 + 2 = 5. Q.E.D.
Proof by Russian Reversal
In Soviet Russia, proof gives YOU!
Proof by Self-evidence
Claim something and tell how self-evident it is: you are right!
Proof by Semantics
Proof by semantics is simple to perform and best demonstrated by example. Using this method, I will prove the famous Riemann Hypothesis as follows:
We seek to prove that the Riemann function defined off of the critical line has no non-trivial zeroes. It is known that all non-trivial zeroes lie in the region with 0 < Re(z) < 1, so we need not concern ourselves with numbers with negative real parts. The Riemann zeta function is defined for Re(z) > 1 by sum over k of 1/k^z, which can be written 1 + sum over k from 2 of 1/k^z.
Consider the group (C, +). There is a trivial action theta from this group to itself by addition. Hence, by applying theta and using the fact that it is trivial, we can conclude that sum (1/k^z) over k from 2 is the identity element 0. Hence, the Riemann zeta function for Re(z) > 0 is simply the constant function 1. This has an obvious analytic continuation to Re(z) > 0 minus the critical line, namely that zeta(z) = 1 for all z in the domain.
Hence, zeta(z) is not equal to zero anywhere with Re(z) > 0 and Re(z) not equal to 1/2. Q.E.D.
Observe how we used the power of the homonyms "trivial" meaning ease of proof and "trivial" as in "the trivial action" to produce a brief and elegant proof of a classical mathematical problem.
Proof by Semitics
If it happened to the Jews and has been confirmed by the state of Israel, then it must be true.
Proof by Staring
- x^2 − 1 = (x + 1)(x − 1)
This becomes obvious after you stare at it for a while and the symbols all blur together.
Proof by Substitution
One may substitute any arbitrary value for any variable to prove something. Example:
- Assume that 2 = P.
- Substitute 3 for P.
- Therefore, 2 = 3. Q.E.D.
Proof by Superior IQ
- See also: Proof by Intimidation.
If your IQ is greater than that of the other person in the argument, you are right and what you say is proven.
Proof by Surprise
The proof is accomplished by stating completely random and arbitrary facts that have nothing to do with the topic at hand, and then using these facts to mysteriously conclude the proof by appealing to the Axiom of Surprise. The best-known user of this style of proof is Walter Rudin in Principles of Mathematical Analysis. To quote an example:
Theorem: If p > 0 and α is real, then lim (n → ∞) of n^α/(1 + p)^n = 0.
Proof: Let k be an integer such that k > α, k > 0. For n > 2k, (1 + p)^n > (n choose k) p^k > n^k p^k/(2^k k!). Hence, 0 < n^α/(1 + p)^n < (2^k k!/p^k) n^(α − k). Since α − k < 0, n^(α − k) → 0. Q.E.D.
Walter Rudin, Principles of Mathematical Analysis, 3rd Edition, p. 58, middle.
Proof by Tarantino
Proof by Tension
Try to up the tension in the room by throwing in phrases like "I found my wife cheating on me … with another woman", or "I wonder if anybody would care if I slit my wrists tomorrow". The more awkward the situation you can make, the better.
Proof by TeX
Proof by … Then a Miracle Happens
Similar to Proof by Hand Waving, but without the need to wave your hand.
Example: Prove that .
- … then a miracle happens.
- . Q.E.D.
Proof by Time Travel
Someone comes from the future and tells you statement A is false. As time travel is impossible, this is a double negation, and therefore the statement is true. Q.E.D.
Proof by Triviality
The Proof of this theorem/result is obvious, and hence left as an exercise for the reader.
Proof by Uncyclopedia
Uncyclopedia is the greatest storehouse of human knowledge that has ever existed. Therefore, citing any fact, quote or reference from Uncyclopedia will let your readers know that you are no intellectual lightweight. Because of Uncyclopedia's steadfast adherence to accuracy, any proof with an Uncyclopedia reference will defeat any and all detractors.
(Hint: In any proof, limit your use of Oscar Wilde quotes to a maximum of five.)
Proof by Volume
If you shout something really, really loud often enough, it will be accepted as true.
Also, if the proof takes up several volumes, then any reader will get bored and go do something more fun, like math.
Proof by War
My guns are much bigger than yours, therefore I'm right.
See also Proof by Penis Size.
Proof by Wolfram Alpha
If Wolfram Alpha says it is true, then it is true.
Proof by Wikipedia
If the Wikipedia website states that something is true, it must be true. Therefore, to use this proof method, simply edit Wikipedia so that it says whatever you are trying to prove is true, then cite Wikipedia for your proof.
Proof by Yoda
If stated the proof by Yoda is, then true it must be.
Proof by Your Mom
You don't believe me? Well, your mom believed me last night!
Proof by Actually Trying and Doing It the Honest W– *gunshot*
Let this be a lesson to you do-gooders.
Proof by Reading the Symbols Carefully
Proving the contrapositive theorem: Let (p→q), (¬q→¬p) be true statements. (p→q) if (and only if) (¬q→¬p).
The symbols → may also mean shooting and ¬ may also represent a gun. The symbols would then be read as this:
If statement p shoots statement q, then statement q possibly did not shoot statement p at all, because statement q is a n00b player for pointing the gun in the opposite direction of statement p.
Also, if statement q didn't shoot statement p on the right direction in time (due to n00biness), p would then shoot q.
Oh! I get it now. The power of symbol reading made the theorem make sense. Therefore, the theorem is true.
The Ultimate Proof
However, despite all of these methods of proof, there is only one way of ensuring not only that you are 100% correct, but 1000 million per cent correct, and that everyone, no matter how strong or how argumentative they may be, will invariably agree with you. That, my friends, is being a girl. "I'm a girl, so there", is the line that all men dread, and no reply has been discovered which doesn't result in a slap/dumping/strop being thrown/brick being thrown/death being caused. Guys, when approached by this form of proof, must destroy all evidence of it and hide all elements of its existence.
Some other terms one may come across when working with proofs:
A method of proof attempted at 3:00 A.M. the day a problem set is due, which generally seems to produce far better results at that time than when looked at in the light of day.
Q.E.D. stands for "Quebec's Electrical Distributor", commonly known as Hydro Quebec. It is commonly used to indicate where the author has given up on the proof and moved onto the next problem.
Can be substituted for the phrase "So there, you bastard!" when you need the extra bit of proof.
When handling or working with proofs, one should always wear protective gloves (preferably made of LaTeX).
The Burden of Proof
In recent years, proofs have gotten extremely heavy (see Proof by Volume, second entry). As a result, in some circles, the process of providing actual proof has been replaced by a practice known as the Burden of Proof. A piece of luggage of some kind is placed in a clear area, weighted down with lead weights approximating the hypothetical weight of the proof in question. The person who was asked to provide proof is then asked to lift this so-called "burden of proof". If he cannot, then he loses his balance and the burden of proof falls on him, which means that he has made the fatal mistake of daring to mention God on an Internet message board.
In computer science, the term threaded code refers to a compiler implementation technique where the generated code has a form that essentially consists entirely of calls to subroutines. The code may be processed by an interpreter, or may simply be a sequence of machine code call instructions.
Threaded code has better code density than code generated by alternative code generation techniques and alternative calling conventions, at the expense of slightly slower execution speed. However, a program small enough to fit fully in a computer processor's cache may run faster than a larger program that suffers many cache misses.
Threaded code is best known as the implementation technique used in the Forth programming language, in many implementations of BASIC, in some implementations of COBOL, in early versions of B, and in other languages for small minicomputers and amateur radio satellites.
The common way to make computer programs is to 'translate' a computer program written in some symbolic language to machine code using a compiler. The code is typically fast but nonportable since the binary code is designed for a specific computer hardware platform. A different approach uses a virtual machine instruction set — that has no particular target hardware. An interpreter executes it on each new target hardware.
Early computers had relatively little memory. For example, most Data General Nova, IBM 1130, and many of the first Apple II computers had only 4 K words of RAM installed. Consequently a lot of time was spent trying to find ways to reduce the size of programs so they would fit in the memory available. At the same time, computers were relatively slow, so simple interpretation was very noticeably slower than executing machine code.
Instead of writing out every step of an operation in every part of the program where it was needed, programmers saved memory by writing each step of such operations once (see "Don't repeat yourself") and placing it in a subroutine.
This process — code refactoring — is used today, although for different reasons. The top-level application in these programs may consist of nothing but subroutine calls. Many of these subroutines, in turn, also consist of nothing but lower level subroutine calls.
Mainframes and some early microprocessors such as the RCA 1802 required several instructions to call a subroutine. In the top-level application and in many subroutines, that sequence is constantly repeated, only the subroutine address changing from one call to the next. Using memory to store the same instructions repeatedly is wasteful.
To save space, programmers squeezed that series of subroutine calls into a list containing only contiguous addresses of the sub-routines, and used a tiny "interpreter" to call each subroutine in turn.
This is identical to the way other programmers squeezed a series of jumps in a branch table, dispatch table, or virtual method table into a list containing only the destination addresses, and used a small selector to branch to the selected destination. In threaded code and these other techniques, the program becomes a list of entry points to the actual code to be executed.
Over the years, programmers have created many variations on that "interpreter" or "small selector". The particular address in the list of addresses may be extracted using an index, general purpose register or pointer. The addresses may be direct or indirect, contiguous or non-contiguous (linked by pointers), relative or absolute, resolved at compile time or dynamically built. No one variation is "best".
To save space, programmers squeezed the lists of subroutine calls into simple lists of subroutine addresses, and used a small loop to call each subroutine in turn. For example:
start:
    ip = &thread
top:
    jump *ip++
thread:
    &pushA
    &pushB
    &add
    ...
pushA:
    *sp++ = A
    jump top
pushB:
    *sp++ = B
    jump top
add:
    *sp++ = *--sp + *--sp
    jump top
In this case, decoding the bytecodes is performed once, during program compilation or program load, so it is not repeated each time an instruction is executed. This can save much time and space when decode and dispatch overhead is large compared to the execution cost.
Note, however, that the addresses in the thread (&pushA, &pushB, etc.) are two or more bytes, compared to one byte, typically, for the instructions of a decode-and-dispatch interpreter (described later). In general, instructions for a decode-and-dispatch interpreter may be any size. For example, a decode-and-dispatch interpreter that simulates an Intel Pentium decodes instructions that range from 1 to 16 bytes. However, bytecoded systems typically choose 1-byte codes for the most common operations. Thus, the thread often has a higher space cost than bytecodes. In most uses, the reduction in decode cost outweighs the increase in space cost.
Note also that while bytecodes are nominally machine-independent, the format and value of the pointers in threads generally depend on the target machine which is executing the interpreter. Thus, an interpreter might load a portable bytecode program, decode the bytecodes to generate platform-dependent threaded code, then execute threaded code without further reference to the bytecodes.
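As a rough illustration, the central-loop dispatch described above can be sketched in Python. Here Python function references stand in for machine addresses, and the handler names and the constants A = 3 and B = 4 are purely illustrative, not taken from any particular system:

```python
# Hypothetical sketch of a threaded interpreter with one central dispatch loop.
# The "thread" is just a list of handler references; the while loop plays the
# role of the "top:" label that jumps through each address in turn.
A, B = 3, 4          # illustrative operand values
stack = []

def push_a():        # handler: push constant A
    stack.append(A)

def push_b():        # handler: push constant B
    stack.append(B)

def add():           # handler: pop two values, push their sum
    stack.append(stack.pop() + stack.pop())

thread = [push_a, push_b, add]   # the "list of addresses"

ip = 0                           # instruction pointer into the thread
while ip < len(thread):          # the tiny central interpreter ("top:")
    handler = thread[ip]
    ip += 1
    handler()                    # analogue of "jump *ip++"

result = stack[-1]               # A + B
```

Running the thread leaves the sum of A and B on the stack, mirroring the "push A, push B, add" sequence used throughout the article.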
The loop is simple, so it is duplicated in each handler, removing jump top from the list of machine instructions needed to execute each interpreter instruction. For example:
start:
    ip = &thread
    jump *ip++
thread:
    &pushA
    &pushB
    &add
    ...
pushA:
    *sp++ = A
    jump *ip++
pushB:
    *sp++ = B
    jump *ip++
add:
    *sp++ = *--sp + *--sp
    jump *ip++
This is called direct threaded code (DTC). Although the technique is older, the first widely circulated use of the term "threaded code" is probably Bell's article "Threaded Code" from 1973.
Charles H. Moore invented a more compact notation in 1970 for his Forth virtual machine: indirect threaded code (ITC). Originally, Moore invented this because it was easy and fast on Nova minicomputers, which have an indirection bit in every address. He said (in published remarks, Byte Magazine's Forth Issue) that he found it so convenient that he propagated it into all later Forth designs.
Some Forth compilers compile Forth programs into direct-threaded code, while others make indirect-threaded code. The programs act the same either way.
Practically all executable threaded code uses one or another of these methods for invoking subroutines (each method is called a "threading model").
Addresses in the thread are the addresses of machine language. This form is simple, but may have overheads because the thread consists only of machine addresses, so all further parameters must be loaded indirectly from memory. Some Forth systems produce direct-threaded code. On many machines direct-threading is faster than subroutine threading (see reference below).
An example of a stack machine might execute the sequence "push A, push B, add". That might be translated to the following thread and routines, where ip is initialized to the address of thread:

thread:
    &pushA
    &pushB
    &add
    ...
pushA:
    *sp++ = A
    jump *ip++
pushB:
    *sp++ = B
    jump *ip++
add:
    addend = *--sp
    *sp = *sp + addend
    jump *ip++
Alternatively, operands may be included in the thread. This can remove some indirection needed above, but makes the thread larger:
thread:
    &push
    &A
    &push
    &B
    &add
push:
    *sp++ = *ip++
    jump *ip++
add:
    addend = *--sp
    *sp = *sp + addend
    jump *ip++
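This inline-operand variant can be sketched in Python as follows. The operand values 3 and 4 and the handler names are illustrative; the key point is that push consumes its operand from the thread itself, advancing the instruction pointer past it:

```python
# Hypothetical sketch of direct threading with operands stored inline in the
# thread, as in the &push &A &push &B &add example above.
stack = []
ip = 0

def push():              # reads its operand from the next thread slot
    global ip
    stack.append(thread[ip])
    ip += 1              # skip over the operand ("*ip++")

def add():
    stack.append(stack.pop() + stack.pop())

# thread: &push &A &push &B &add, with A = 3 and B = 4 inline
thread = [push, 3, push, 4, add]

while ip < len(thread):  # dispatch loop standing in for "jump *ip++"
    op = thread[ip]
    ip += 1
    op()
```

Note how the thread is five slots long instead of three: the extra slots are the operands, which is exactly the size cost the text describes.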
Indirect threading uses pointers to locations that in turn point to machine code. The indirect pointer may be followed by operands which are stored in the indirect "block" rather than storing them repeatedly in the thread. Thus, indirect code is often more compact than direct-threaded code, but the indirection also typically makes it slower, though still usually faster than bytecode interpreters. Where the handler operands include both values and types, the space savings over direct-threaded code may be significant. Older FORTH systems typically produce indirect-threaded code.
For example, if the goal is to execute "push A, push B, add", the following might be used. Here, ip is initialized to the address of thread, each code fragment (push, add) is found by double-indirecting through ip, and operands to each code fragment are found in the first-level indirection following the address of the fragment.
thread:
    &i_pushA
    &i_pushB
    &i_add
i_pushA:
    &push
    &A
i_pushB:
    &push
    &B
i_add:
    &add
push:
    *sp++ = *(*ip + 1)
    jump *(*ip++)
add:
    addend = *--sp
    *sp = *sp + addend
    jump *(*ip++)
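A minimal Python sketch of this indirect scheme follows. Lists stand in for the indirect "blocks": each block's first slot is the handler and any later slots are its operands, so the interpreter reaches the code by double indirection (thread, then block, then handler). All names and values are illustrative:

```python
# Hypothetical sketch of indirect threading: the thread holds references to
# blocks; each block's first slot is the handler, later slots are operands.
stack = []

def push(block):             # operand lives in the block, not the thread
    stack.append(block[1])

def add(block):              # takes the block for uniformity; needs no operand
    stack.append(stack.pop() + stack.pop())

i_push_a = [push, 3]         # indirect block: handler + operand (A = 3)
i_push_b = [push, 4]         # indirect block: handler + operand (B = 4)
i_add    = [add]             # indirect block: handler only

thread = [i_push_a, i_push_b, i_add]

for block in thread:         # double indirection: thread -> block -> handler
    block[0](block)
```

Because the operands live in the blocks rather than the thread, a block such as i_push_a could be shared by every occurrence of "push 3" in a program, which is the compactness advantage described above.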
So-called "subroutine-threaded code" (also "call-threaded code") consists of a series of machine-language "call" instructions (or addresses of functions to "call", as opposed to direct threading's use of "jump"). Early compilers for ALGOL, Fortran, Cobol and some Forth systems often produced subroutine-threaded code. The code in many of these systems operated on a last-in-first-out (LIFO) stack of operands, which had well-developed compiler theory. Most modern processors have special hardware support for subroutine "call" and "return" instructions, so the overhead of one extra machine instruction per dispatch is somewhat diminished. Anton Ertl has stated "that, in contrast to popular myths, subroutine threading is usually slower than direct threading." However, Ertl's most recent tests show that subroutine threading is faster than direct threading in 15 out of 25 test cases. Ertl's most recent tests show that direct threading is the fastest threading model on Xeon, Opteron, and Athlon processors; indirect threading is the fastest threading model on Pentium M processors; and subroutine threading is the fastest threading model on Pentium 4, Pentium III, and PPC processors.
As an example of call threading "push A, push B, add":
thread:
    call pushA
    call pushB
    call add
pushA:
    *sp++ = A
    ret
pushB:
    *sp++ = B
    ret
add:
    addend = *--sp
    *sp = *sp + addend
    ret
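In a high-level language the subroutine-threaded form degenerates into ordinary straight-line calls, since the host language's call/return mechanism replaces the interpreter. A hedged Python sketch (handler names and values illustrative):

```python
# Hypothetical sketch of subroutine threading: the compiled "thread" is just
# a straight-line sequence of calls, with return handled by the host language
# rather than by an interpreter loop.
stack = []

def push_a(): stack.append(3)    # A = 3 (illustrative)
def push_b(): stack.append(4)    # B = 4 (illustrative)
def add():    stack.append(stack.pop() + stack.pop())

def thread():        # each line corresponds to one "call" instruction
    push_a()
    push_b()
    add()

thread()
```

There is no instruction pointer variable at all here: dispatch overhead is exactly one native call per operation, which is why its relative speed depends so heavily on the processor's call/return hardware, as Ertl's measurements indicate.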
Token threaded code uses lists of 8 or 12-bit indexes to a table of pointers. Token threaded code is notably compact, without much special effort by a programmer. It is usually half to three-fourths the size of other threaded-codes, which are themselves a quarter to an eighth the size of compiled code. The table's pointers can either be indirect or direct. Some Forth compilers produce token threaded code. Some programmers consider the "p-code" generated by some Pascal compilers, as well as the bytecodes used by .NET, Java, BASIC and some C compilers, to be token-threading.
A common approach historically is bytecode, which uses 8-bit opcodes and, often, a stack-based virtual machine. A typical interpreter is known as a "decode and dispatch interpreter", and follows the form
bytecode:
    0 /*pushA*/
    1 /*pushB*/
    2 /*add*/
top:
    i = decode(vpc++)
    addr = table[i]
    jump *addr
pushA:
    *sp++ = A
    jump top
pushB:
    *sp++ = B
    jump top
add:
    addend = *--sp
    *sp = *sp + addend
    jump top
If the virtual machine uses only byte-size instructions, decode() is simply a fetch from bytecode, but often there are commonly used 1-byte instructions plus some less-common multibyte instructions, in which case decode() is more complex. The decoding of single-byte opcodes can be handled very simply and efficiently by a branch table, using the opcode directly as an index.
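For the simple byte-size case, the decode-and-dispatch structure can be sketched in Python, with a handler table indexed directly by opcode. The opcode assignments and handler names are illustrative:

```python
# Hypothetical sketch of a decode-and-dispatch bytecode interpreter: each
# opcode indexes a table of handlers, mirroring the bytecode example above.
stack = []

def push_a(): stack.append(3)    # opcode 0, A = 3 (illustrative)
def push_b(): stack.append(4)    # opcode 1, B = 4 (illustrative)
def add():    stack.append(stack.pop() + stack.pop())   # opcode 2

table = [push_a, push_b, add]    # branch table: opcode -> handler
bytecode = bytes([0, 1, 2])      # 0 /*pushA*/ 1 /*pushB*/ 2 /*add*/

for opcode in bytecode:          # the "top:" decode loop
    table[opcode]()              # dispatch through the table
```

The decode step here is trivial (a byte fetch plus a table index), but it happens on every instruction executed, which is the per-dispatch overhead that threaded code eliminates by decoding once up front.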
For instructions where the individual operations are simple, such as "push" and "add", the overhead involved in deciding what to execute is larger than the cost of actually executing it, so such interpreters are often much slower than machine code. However for more complex ("compound") instructions, the overhead percentage is proportionally less significant.
Huffman threaded code consists of lists of tokens stored as Huffman codes. A Huffman code is a variable length bit string used to identify a unique token. A Huffman-threaded interpreter locates subroutines using an index table or tree of pointers that can be navigated by the Huffman code. Huffman threaded code is one of the most compact representations known for a computer program. Basically the index and codes are organized by measuring the frequency that each subroutine occurs in the code. Frequent calls are given the shortest codes. Operations with approximately equal frequencies are given codes with nearly equal bit-lengths. Most Huffman-threaded systems have been implemented as direct-threaded Forth systems, and used to pack large amounts of slow-running code into small, cheap microcontrollers. Most published uses have been in toys, calculators, and watches. The bit-oriented tokenized code used in PBASIC can be seen as a kind of Huffman threaded code.
Lesser-used threading
- String threading, where operations are identified by strings, usually looked-up by a hash table. This was used in Charles H. Moore's earliest Forth implementations and in the University of Illinois's experimental hardware-interpreted computer language. It is also used in Bashforth.
The examples above show no branches. For all interpreters, a branch changes the thread pointer (ip above). As an example, a conditional branch taken when the top-of-stack value is zero might be encoded as follows. Note that &thread is the location to jump to, not the address of a handler, and so it must be skipped (ip++) whether or not the branch is taken.
thread:
    ...
    &brz
    &thread
    ...
brz:
    tmp = *ip++
    if (*sp++ == 0)
        ip = tmp
    jump *ip++
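The brz handler can be sketched in Python with list indices standing in for thread addresses. The thread layout and the branch target index 4 are illustrative; the essential detail is that brz consumes the target slot from the thread whether or not the branch is taken:

```python
# Hypothetical sketch of the brz (branch-if-zero) handler. The branch target
# sits inline in the thread and must be skipped even when not taken.
stack = []
ip = 0

def push(v):                   # handler factory for an illustrative push
    def f():
        stack.append(v)
    return f

def brz():                     # branch to thread[ip] if top of stack is zero
    global ip
    target = thread[ip]        # fetch branch target ("tmp = *ip++")
    ip += 1                    # skip it whether or not we branch
    if stack.pop() == 0:
        ip = target

# thread: push 0, brz -> index 4 (skipping push 99), push 1
thread = [push(0), brz, 4, push(99), push(1)]

while ip < len(thread):        # the usual dispatch loop
    op = thread[ip]
    ip += 1
    op()
```

With a zero on the stack the branch is taken, push 99 is skipped, and only 1 ends up on the stack.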
Separating the data and return stacks in a machine eliminates a great deal of stack management code, substantially reducing the size of the threaded code. The dual-stack principle was originated three times independently: for Burroughs large systems, Forth and PostScript, and is used in some Java virtual machines.
A threaded virtual machine typically maintains a few dedicated registers:
- ip or i (instruction pointer) of the virtual machine (not to be confused with the program counter of the underlying hardware implementing the VM)
- w (work pointer)
- rp or r (return stack pointer)
- sp or s (parameter stack pointer for passing parameters between words)
Often, threaded virtual machines such as implementations of Forth have a simple virtual machine at heart, consisting of three primitives:
- next
- nest, also called docol
- unnest, or semi_s (;s)
In an indirect-threaded virtual machine, the one given here, the operations are:
next:
    *ip++ -> w
    jump **w++
nest:
    ip -> *rp++
    w -> ip
    next
unnest:
    *--rp -> ip
    next
This is perhaps the simplest and fastest interpreter or virtual machine.
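These three primitives can be sketched in Python, with Python callables standing in for machine-code primitives, lists for colon definitions, and an EXIT sentinel for unnest. The word definitions and names are illustrative, not from any actual Forth system:

```python
# Hypothetical sketch of the next/nest/unnest inner interpreter.
# next  = the fetch at the top of the loop
# nest  = entering a sub-thread (a list), saving the caller on rstack
# unnest = the EXIT sentinel, which restores the caller's thread
EXIT = object()

def run(thread, stack):
    rstack = []                    # return stack (rp)
    cur, ip = thread, 0            # current thread and instruction pointer
    while True:
        w = cur[ip]                # "next": fetch the next word
        ip += 1
        if w is EXIT:              # "unnest": return to the caller's thread
            if not rstack:
                return
            cur, ip = rstack.pop()
        elif callable(w):          # primitive: execute code directly
            w(stack)
        else:                      # "nest" (docol): enter a sub-thread
            rstack.append((cur, ip))
            cur, ip = w, 0

# illustrative colon definitions
double = [lambda s: s.append(s.pop() * 2), EXIT]   # : double  2 * ;
prog   = [lambda s: s.append(5), double, EXIT]     # : prog  5 double ;

stack = []
run(prog, stack)
```

Running prog pushes 5, nests into double (which doubles it), and unnests twice, leaving 10 on the data stack while the return stack ends empty.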
- Continuation-passing style, which replaces the global variable ip with a function parameter
- Tail recursion
- Just-in-time compilation
- Branch table
- "Speed of various interpreter dispatch techniques V2".
- Dennis M. Ritchie. "The Development of the C Language". 1993. quote: "The B compiler on the PDP-7 did not generate machine instructions, but instead 'threaded code' ..."
- Bell, James R. (1973). "Threaded code". Communications of the ACM 16 (6): 370–372. doi:10.1145/362248.362270.
- Ertl, Anton. "What is Threaded Code?".
- Anton Ertl's explanatory page What is Threaded Code? describes different threading techniques and provides further references.
- The Development of the C Language by Dennis M. Ritchie describes B (a precursor of C) as implemented using "threaded code".
- Thinking Forth Project includes the seminal (but out of print) book Thinking Forth by Leo Brodie published in 1984.
- Starting FORTH online version of the book Starting FORTH by Leo Brodie published in 1981.
- Brad Rodriguez's Moving FORTH: Part 1: Design Decisions in the Forth Kernel covers threading techniques in depth.
- History of general purpose CPUs
- GCC extensions. Labels as Values
- Brief overview on the threaded languages, System and User RPL, used on the HP calculators (such as the HP48) by Joe Horn
The elbow is a complex joint formed by the articulation of three bones: the humerus, radius, and ulna. The elbow joint allows the arm to bend and straighten through about 180 degrees and assists in lifting or moving objects.
The bones of the elbow are supported by
- Ligaments and tendons
- Blood vessels
Bones and Joints of the elbow joint:
The elbow joint is formed at the junction of three bones:
- The Humerus (upper arm bone) forms the upper portion of the joint. The lower end of the humerus divides into two bony protrusions, known as the medial and lateral epicondyles, which can be felt on either side of the elbow joint.
- The Ulna is the larger bone of the forearm located on the inner surface of the joint. The curved shape of the ulna articulates with the humerus.
- The Radius is the smaller bone of the forearm, situated on the outer surface of the joint. The head of the radius is circular and hollowed, which allows it to move against the humerus. The connection between the ulna and radius helps the forearm to rotate.
The elbow consists of three joints from articulation of the three bones namely:
- Humero-ulnar joint is formed between the humerus and ulna and allows flexion and extension of the arm.
- Humero-radial joint is formed between the radius and humerus, and allows movements like flexion, extension, supination and pronation.
- Radio-ulnar joint is formed between ulna and radius bones, and allows rotation of the lower arm.
Articular cartilage lines the articulating regions of the humerus, radius and ulna. It is a thin, tough, flexible, and slippery surface that acts as a shock absorber and cushion to reduce friction between the bones. The cartilage is lubricated by synovial fluid, which further enables the smooth movement of the bones.
Muscles of the Elbow Joint
There are several muscles extending across the elbow joint that help in various movements. These include the following:
- Biceps brachii: upper arm muscle enabling flexion of the arm
- Triceps brachii: muscle in the back of the upper arm that extends the arm and fixes the elbow during fine movements
- Brachialis: upper arm muscle beneath the biceps which flexes the elbow towards the body
- Brachioradialis: forearm muscle that flexes, straightens and pulls the arm at the elbow
- Pronator teres: this muscle extends from the humeral head, across the elbow, and towards the ulna, and helps to turn the palm facing backward
- Extensor carpi radialis brevis: forearm muscle that helps in movement of the hand
- Extensor digitorum: forearm muscle that helps in movement of the fingers
Elbow joint ligaments and tendons:
The elbow joint is supported by ligaments and tendons, which provide stability to the joint.
Ligaments are a group of firm tissues that connect bones to other bones. The most important ligaments of the elbow joint are the:
- Medial or ulnar collateral ligament: comprised of triangular bands of tissue on the inner side of the elbow joint.
- Lateral or radial collateral ligament: a thin band of tissue on the outer side of the elbow joint.
Together, the medial and lateral ligaments are the main source of stability and hold the humerus and ulna tightly in place during movement of the arm.
- Annular ligament: a band of fibers that surrounds the radial head and holds the ulna and radius tightly in place during movement of the arm.
The ligaments around a joint combine to form a joint capsule that contains synovial fluid.
Any injury to these ligaments can lead to instability of the elbow joint.
Tendons are bands of connective tissue fibers that connect muscle to bone. The various tendons which surround the elbow joint include:
- Biceps tendon: attaches the biceps muscle to the radius, allowing the elbow to bend
- Triceps tendon: attaches the triceps muscle to the ulna, allowing the elbow to straighten
Nerves of the elbow joint:
The main nerves of the elbow joint are the ulnar, radial and median nerves. These nerves transfer signals from the brain to the muscles that aid in elbow movements. They also carry the sensory signals like touch, pain, and temperature back to the brain.
Any injury or damage to these nerves causes pain, weakness or joint instability.
Arteries are blood vessels that carry oxygen-rich blood from the heart to the hand. The main artery of the elbow is the brachial artery, which travels across the inside of the elbow and divides below the elbow into two smaller branches, the ulnar and radial arteries.
Math Worksheets Addition Worksheet: Adding 2-Digit Addends
In this addition instructional activity, students illustrate their understanding of the concept by responding to 20 problems adding 2-digit numbers and providing the correct computation.
Smiling at Two Digit Multiplication!
How do I solve a two-digit multiplication problem? Your class tackles this question by walking through problem-solving methods. They first investigate and apply traditional multiplication methods, and then compare those with...
3rd - 4th Math CCSS: Adaptable
Number & Operations: Multi-Digit Multiplication
A set of 14 lessons on multiplication would make a great learning experience for your fourth grade learners. After completing a pre-assessment, kids work through lessons that focus on multiples of 10, double-digit multiplication, and...
4th Math CCSS: Adaptable
PowerPoint: Long Division with 2-Digit Divisor
This long division PowerPoint presents a mnemonic device in which the first initial of generic family members' names corresponds with the first initial of the proper steps to follow when solving a long division problem with a two-digit...
4th - 6th Math CCSS: Adaptable
FL.MAFS.1.OA. OPERATIONS AND ALGEBRAIC THINKING
MAFS.1.OA.1. Represent and solve problems involving addition and subtraction.
MAFS.1.OA.1.1. Use addition and subtraction within 20 to solve word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem (Students are not required to independently read the word problems.)
MAFS.1.OA.2. Understand and apply properties of operations and the relationship between addition and subtraction.
MAFS.1.OA.2.4. Understand subtraction as an unknown-addend problem. For example, subtract 10 – 8 by finding the number that makes 10 when added to 8.
MAFS.1.OA.3. Add and subtract within 20.
MAFS.1.OA.3.5. Relate counting to addition and subtraction (e.g., by counting on 2 to add 2).
MAFS.1.OA.3.6. Add and subtract within 20, demonstrating fluency for addition and subtraction within 10. Use strategies such as counting on; making ten (e.g., 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14); decomposing a number leading to a ten (e.g., 13 – 4 = 13 – 3 – 1 = 10 – 1 = 9); using the relationship between addition and subtraction (e.g., knowing that 8 + 4 = 12, one knows 12 – 8 = 4); and creating equivalent but easier or known sums (e.g., adding 6 + 7 by creating the known equivalent 6 + 6 + 1 = 12 + 1 = 13).
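The "making ten" strategy described in this standard can be expressed as a small calculation. The following Python sketch (function name illustrative, not part of the standard) shows the decomposition for sums of two single-digit addends:

```python
# Hypothetical sketch of the "making ten" addition strategy:
# to add a + b, move just enough from b to bring a up to 10,
# e.g. 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14.
def make_ten_add(a, b):
    to_ten = 10 - a          # how much of b is needed to complete a ten
    leftover = b - to_ten    # what remains of b after making the ten
    return 10 + leftover     # the ten plus the leftover ones
```

Arithmetically this always equals a + b; the decomposition is only pedagogically meaningful when the sum crosses ten, which is the case the standard targets.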
FL.MAFS.1.NBT. NUMBER AND OPERATIONS IN BASE TEN
MAFS.1.NBT.3. Use place value understanding and properties of operations to add and subtract. (Additional Cluster)
MAFS.1.NBT.3.4. Add within 100, including adding a two-digit number and a one-digit number, and adding a two-digit number and a multiple of 10, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used. Understand that in adding two-digit numbers, one adds tens and tens, ones and ones; and sometimes it is necessary to compose a ten.
How do the area and perimeter differ?
Rectangles and squares: calculate perimeter and area
Perimeter of rectangles
A second quantity that can be calculated for figures is the perimeter. In the case of rectangles and squares, this is also very easy: you simply add up all the side lengths. For the rectangle shown above, this gives:
$U = 8cm + 4cm + 8cm + 4cm = 24cm$ (where $U$ denotes the perimeter)
The perimeter $U$ of a rectangle results from adding the side lengths:
$U = 2 \cdot (a + b)$
Area and perimeter for squares
These calculations work the same way with squares, except that all lengths are equal.
The area $A$ of a square is likewise calculated as the product of the side lengths:
$A = a \cdot a = a^2$
The following applies to the perimeter $U$ of a square: $U = 4 \cdot a$
The simple calculations for area and perimeter of rectangles will be used again and again in the following examples. In the case of more complicated shapes, the usual procedure is to break the figure down into rectangles in order to get a result as easily as possible.
Calculate rectangle: example
Now let's do an example calculation. First try to determine the solution yourself and then take a look at it here.
Now calculate the perimeter ($U$) and the area ($A$) of a rectangle with side lengths $a = 6cm$ and $b = 3cm$.
Let's start by calculating the perimeter of our rectangle. According to the formula that you have already come to know, you calculate the perimeter with $U = 2 \cdot (a + b)$. Inserting our values for $a$ (6 cm) and $b$ (3 cm) into this formula gives $U = 2 \cdot (6cm + 3cm)$. If we do the math, we get $2 \cdot 9cm$, which is $18cm$. The perimeter is $U = 18cm$.
Next we want to deal with the calculation of the area. We take the well-known formula $A = a \cdot b$ and insert our values here as well: $A = 6cm \cdot 3cm$. You get the area $A = 18cm^2$. Did you come up with the same solutions?
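The worked example can also be checked with a short program. This Python sketch (function names illustrative) applies the two formulas to the side lengths 6 cm and 3 cm:

```python
# Check of the rectangle example: a = 6 cm, b = 3 cm.
def perimeter(a, b):
    return 2 * (a + b)       # U = 2 * (a + b)

def area(a, b):
    return a * b             # A = a * b

U = perimeter(6, 3)          # perimeter in cm
A = area(6, 3)               # area in cm^2
```

Both come out to 18, matching the values computed by hand above (18 cm and 18 cm², with the units differing).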
Test and deepen your new knowledge in the exercises! Good luck with it!
Like many other languages, English has wide variation in pronunciation, both historically and from dialect to dialect. In general, however, the regional dialects of English share a largely similar (but not identical) phonological system. Among other things, most dialects have vowel reduction in unstressed syllables and a complex set of phonological features that distinguish fortis and lenis consonants (stops, affricates, and fricatives).
Phonological analysis of English often concentrates on or uses, as a reference point, one or more of the prestige or standard accents, such as Received Pronunciation for England, General American for the United States, and General Australian for Australia. Nevertheless, many other dialects of English are spoken, which have developed independently from these standardized accents, particularly regional dialects. Information about these standardized accents functions only as a limited guide to all of English phonology, which one can later expand upon once one becomes more familiar with some of the many other dialects of English that are spoken.
A phoneme of a language or dialect is an abstraction of a speech sound or of a group of different sounds which are all perceived to have the same function by speakers of that particular language or dialect. For example, the English word through consists of three phonemes: the initial "th" sound, the "r" sound, and a vowel sound. The phonemes in this and many other English words do not always correspond directly to the letters used to spell them (English orthography is not as strongly phonemic as that of many other languages).
The number and distribution of phonemes in English vary from dialect to dialect, and also depend on the interpretation of the individual researcher. The number of consonant phonemes is generally put at 24 (or slightly more). The number of vowels is subject to greater variation; in the system presented on this page there are 20–25 vowel phonemes in Received Pronunciation, 14–16 in General American and 19–20 in Australian English. The pronunciation keys used in dictionaries generally contain a slightly greater number of symbols than this, to take account of certain sounds used in foreign words and certain noticeable distinctions that may not be—strictly speaking—phonemic.
The following table shows the 24 consonant phonemes found in most dialects of English, in addition to /x/, whose distribution is more limited. Fortis consonants are always voiceless, aspirated in syllable onset (except in clusters beginning with /s/), and sometimes also glottalized to an extent in syllable coda (most likely to occur with /t/, see T-glottalization), while lenis consonants are always unaspirated and un-glottalized, and generally partially or fully voiced. The alveolars are usually apical, i.e. pronounced with the tip of the tongue touching or approaching the roof of the mouth, though some speakers produce them laminally, i.e. with the blade of the tongue.
- Most varieties of English have syllabic consonants in some words, principally [l̩, m̩, n̩], for example at the end of bottle, rhythm and button. In such cases, no phonetic vowel is pronounced between the last two consonants, and the last consonant forms a syllable on its own. Syllabic consonants are generally transcribed with a vertical line under the consonant letter, so that phonetic transcription of bottle would be [ˈbɒtl̩], [ˈbɑɾl̩], or [ˈbɔɾl̩] in RP, GA, and Australian respectively, and for button [ˈbʌʔn̩]. In theory, such consonants could be analyzed as individual phonemes. However, this would add several extra consonant phonemes to the inventory for English, and phonologists prefer to identify syllabic nasals and liquids phonemically as /əC/. Thus button is phonemically /ˈbʌtən/ or /ˈbɐtən/ and bottle is phonemically /ˈbɒtəl/, /ˈbɑtəl/, or /ˈbɔtəl/.
- /θ, ð/ are realized as stops in accents affected by th-stopping, such as Hiberno-English and the New York accent. They are merged with /f, v/ in accents affected by th-fronting, such as some varieties of Cockney and African American Vernacular English. See Pronunciation of English ⟨th⟩.
- The voiceless velar fricative /x/ is mainly used in Hiberno-, Scottish, South African and Welsh English; words with /x/ in Scottish accents tend to be pronounced with /k/ in other dialects. The velar fricative sometimes appears in recent loanwords such as chutzpah. Under the influence of Welsh and Afrikaans, the actual phonetic realization of /x/ in Welsh English and White South African English is uvular [χ], rather than velar [x]. Dialects do not necessarily agree on the exact words in which /x/ appears; for instance, in Welsh English it appears in loanwords from Welsh (such as Amlwch /ˈæmlʊx/), whereas in White South African English it appears only in loanwords from Afrikaans or Xhosa (such as gogga /ˈxɒxə/ 'insect').
- This phoneme is conventionally transcribed with the basic Latin letter ⟨r⟩ (the IPA symbol for the alveolar trill), even though its pronunciation is usually a postalveolar approximant [ɹ̠]. The trill does exist but it is rare, found only in Scottish dialects and some varieties of Welsh and South African English. See Pronunciation of English /r/.
- The sound at the beginning of huge in most British accents is a voiceless palatal fricative [ç], but this is analysed phonemically as the consonant cluster /hj/ so that huge is transcribed /hjuːdʒ/. As with /hw/, this does not mean that speakers pronounce [h] followed by [j]; the phonemic transcription /hj/ is simply a convenient way of representing the single sound [ç]. The yod-dropping found in Norfolk dialect means that the traditional Norfolk pronunciation of huge is [hʊudʒ] and not [çuːdʒ].
- In some conservative accents in Scotland, Ireland, the southern United States, and New England, the digraph ⟨wh⟩ in words like which and whine represents a voiceless w sound [ʍ], a voiceless labiovelar fricative or approximant, which contrasts with the voiced w of witch and wine. In most dialects, this sound is lost, and is pronounced as a voiced w (the wine–whine merger). Phonemically this sound may be analysed as a consonant cluster /hw/, rather than as a separate phoneme */ʍ/, so which and whine are transcribed phonemically as /hwɪtʃ/ and /hwaɪn/. This does not mean that such speakers actually pronounce [h] followed by [w]: this phonemic transcription /hw/ is simply a convenient way of representing a single sound [ʍ] when such dialects are not analysed as having an extra phoneme.
The following table shows typical examples of the occurrence of the above consonant phonemes in words.
- The pronunciation of /l/ varies by dialect:
- Received Pronunciation has two main allophones of /l/: the clear or plain [l], and the dark or velarized [ɫ]. The clear variant is used before vowels when they are in the same syllable, and the dark variant when the /l/ precedes a consonant or is in syllable-final position before silence.
- In South Wales, Ireland, and the Caribbean, /l/ is often clear in all positions, and in North Wales, Scotland, Australia, and New Zealand it is always dark.
- In General American and Canada, /l/ is generally dark, but to varying degrees: before stressed vowels it is neutral or only slightly velarized. In southern U.S. accents it is noticeably clear between vowels, and in some other positions.
- In urban accents of Southern England, as well as New Zealand and some parts of the United States, /l/ can be pronounced as an approximant or semivowel ([w], [o], [ʊ]) at the end of a syllable (l-vocalization).
- Depending on dialect, /r/ has at least the following allophones in varieties of English around the world (see Pronunciation of English /r/):
- postalveolar approximant [ɹ̠] (the most common realization of the /r/ phoneme, occurring in most dialects, RP and General American included)
- retroflex approximant [ɻ] (occurs in most Irish dialects and some American dialects)
- labiodental approximant [ʋ] (occurs in south-east England and some London accents; known as r-labialization)
- alveolar flap [ɾ] (occurs in most Scottish, Welsh and some South African dialects, some conservative dialects in England and Ireland; not to be confused with flapping of /t/ and /d/)
- alveolar trill [r] (occurs in some very conservative Scottish dialects and some South African and Welsh accents)
- voiced uvular fricative [ʁ] (occurs in northern Northumbria, largely disappeared; known as the Northumbrian burr)
- In most dialects /r/ is labialized [ɹ̠ʷ] in many positions, as in reed [ɹ̠ʷiːd] and tree [t̠ɹ̠̊ʷiː]; in the latter case, the /t/ may be slightly labialized as well.
- In some rhotic accents, such as General American, /r/ when not followed by a vowel is realized as an r-coloring of the preceding vowel or its coda: nurse [ˈnɚs], butter [ˈbʌtɚ].
- The distinctions between the nasals are neutralized in some environments. For example, before a final /p/, /t/ or /k/ there is nearly always only one nasal sound that can appear in each case: [m], [n] or [ŋ] respectively (as in the words limp, lint, link – note that the n of link is pronounced [ŋ]). This effect can even occur across syllable or word boundaries, particularly in stressed syllables: synchrony is pronounced [ˈsɪŋkɹəni] whereas synchronic may be pronounced either as [sɪŋˈkɹɒnɪk] or as [sɪnˈkɹɒnɪk]. For other possible syllable-final combinations, see § Coda in the Phonotactics section below.
In most dialects, the fortis stops and affricate /p, t, tʃ, k/ have various different allophones, and are distinguished from the lenis stops and affricate /b, d, dʒ, ɡ/ by several phonetic features.
- The allophones of the fortes /p, t, tʃ, k/ include:
- aspirated [pʰ, tʰ, kʰ] when they occur in the onset of a stressed syllable, as in potato. In clusters involving a following liquid, the aspiration typically manifests as the devoicing of this liquid. These sounds are unaspirated [p, t, k] after /s/ within the same syllable, as in stan, span, scan, and at the ends of syllables, as in mat, map, mac. The voiceless fricatives are always unaspirated; a notable exception is found in English-speaking areas of Wales, where they are often aspirated.
- In many accents of English, fortis stops /p, t, k, tʃ/ are glottalized in some positions. This may be heard either as a glottal stop preceding the oral closure ("pre-glottalization" or "glottal reinforcement") or as a substitution of the glottal stop [ʔ] for the oral stop (glottal replacement). /tʃ/ can only be pre-glottalized. Pre-glottalization normally occurs in British and American English when the fortis consonant phoneme is followed by another consonant or when the consonant is in final position. Thus football and catching are often pronounced [ˈfʊʔtbɔːl] and [ˈkæʔtʃɪŋ], respectively. Glottal replacement often happens in cases such as those just given, so that football is frequently pronounced [ˈfʊʔbɔːl]. In addition, however, glottal replacement is increasingly common in British English when /t/ occurs between vowels if the preceding vowel is stressed; thus getting better is often pronounced by younger speakers as [ˈɡeʔɪŋ ˌbeʔə]. Such t-glottalization also occurs in many British regional accents, including Cockney, where it can also occur at the end of words, and where /p/ and /k/ are sometimes treated the same way.
- Among stops, both fortes and lenes:
- May have no audible release [p̚, b̚, t̚, d̚, k̚, ɡ̚] in the word-final position. These allophones are more common in North America than Great Britain.
- Always have a 'masked release' before another plosive or affricate (as in rubbed [ˈrʌˑb̚d̥]), i.e. the release of the first stop is made after the closure of the second stop. This also applies when the following stop is homorganic (articulated in the same place), as in top player. A notable exception to this is Welsh English, where stops are usually released in this environment.
- The affricates /tʃ, dʒ/ have a mandatory fricative release in all environments.
- Very often in the United States and Canada, and less frequently in Australia and New Zealand, both /t/ and /d/ can be pronounced as a voiced flap [ɾ] in certain positions: when they come between a preceding stressed vowel (possibly with intervening /r/) and precede an unstressed vowel or syllabic /l/. Examples include water, bottle, petal, peddle (the last two words sound alike when flapped). The flap may even appear at word boundaries, as in put it on. When the combination /nt/ appears in such positions, some American speakers pronounce it as a nasalized flap that may become indistinguishable from /n/, so winter [ˈwɪɾ̃ɚ] may be pronounced similarly or identically to winner [ˈwɪnɚ].[A]
- Yod-coalescence is a process that palatalizes the clusters /dj/, /tj/, /sj/ and /zj/ into [dʒ], [tʃ], [ʃ] and [ʒ] respectively, frequently occurring with clusters that would be considered to span a syllable boundary.
- Yod-coalescence in stressed syllables, such as in tune and dune, occurs in Australian, Cockney, Estuary English, Hiberno-English (some speakers), Newfoundland English, South African English, and to a certain extent in New Zealand English and Scottish English (many speakers). This can lead to additional homophony; for instance, dew and due come to be pronounced the same as Jew.
- In certain varieties—such as Australian English, South African English, and New Zealand English—/sj/ and /zj/ in stressed syllables can coalesce into [ʃ] and [ʒ], respectively. In Australian English, for example, assume is pronounced [əˈʃʉːm] by some speakers. Furthermore, some British, Canadian, American, New Zealand and Australian speakers may change the /s/ sound to /ʃ/ before /tr/, so that a word containing the cluster ⟨str⟩, such as strewn, would be pronounced [ʃtruːn].
- The postalveolar consonants /tʃ, dʒ, ʃ, ʒ/ are also often slightly labialized: [tʃʷ dʒʷ ʃʷ ʒʷ].
English has a particularly large number of vowel phonemes, and in addition the vowels of English differ considerably between dialects. Because of this, corresponding vowels may be transcribed with various symbols depending on the dialect under consideration. When considering English as a whole, lexical sets are often used, each named by a word containing the vowel or vowels in question. For example, the LOT set consists of words which, like lot, have /ɒ/ in Received Pronunciation and /ɑ/ in General American. The "LOT vowel" then refers to the vowel that appears in those words in whichever dialect is being considered, or (at a greater level of abstraction) to a diaphoneme, which represents this interdialectal correspondence. A commonly used system of lexical sets, devised by John C. Wells, is presented below; for each set, the corresponding phonemes are given for RP and General American, using the notation that will be used on this page.
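The idea of a lexical set as an interdialectal correspondence can be sketched as a small lookup structure. The following is an illustrative snippet (the data structure and function names are my own, not any established linguistic library); the RP and General American values for the few sets shown are those given in this article.

```python
# A few of Wells's lexical sets mapped to their RP and General American
# vowel phonemes, as described in the surrounding text. This is a toy
# illustration of the "diaphoneme" idea, not a complete inventory.
LEXICAL_SETS = {
    # set name: (RP phoneme, General American phoneme)
    "LOT":  ("ɒ",  "ɑ"),
    "TRAP": ("æ",  "æ"),
    "BATH": ("ɑː", "æ"),
    "GOAT": ("əʊ", "oʊ"),
}

def vowel_for(lexical_set: str, dialect: str) -> str:
    """Look up the vowel used for a lexical set in a given dialect."""
    rp, ga = LEXICAL_SETS[lexical_set]
    return rp if dialect == "RP" else ga

print(vowel_for("LOT", "RP"))   # ɒ
print(vowel_for("LOT", "GA"))   # ɑ
```

A dictionary keyed by set name captures the key property of lexical sets: the set membership of a word is constant across dialects, while the phoneme realizing it varies.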
For a table that shows the pronunciations of these vowels in a wider range of English dialects, see IPA chart for English dialects.
The following tables show the vowel phonemes of three standard varieties of English. The notation system used here for Received Pronunciation (RP) is fairly standard; the others less so. The feature descriptions given here (front, close, etc.) are abstracted somewhat; the actual pronunciations of these vowels are somewhat more accurately conveyed by the IPA symbols used (see Vowel for a chart indicating the meanings of these symbols; though note also the points listed below the following tables).
- Received Pronunciation: diphthongs /eɪ aɪ ɔɪ aʊ əʊ ɪə ʊə/; triphthongs /(eɪə aɪə ɔɪə aʊə əʊə)/
- General American: diphthongs /aɪ ɔɪ aʊ/
- Australian English: diphthongs /æɪ ɑɪ oɪ æɔ əʉ ɪə (ʊə)/
- The modern RP vowels /uː/, /ɔː/, /ɒ/, /ʌ/ and /aɪ/ are very similar to the corresponding Australian phonemes /ʉː/, /oː/, /ɔ/, /ɐ/ and /ɑɪ/. The difference between them lies mostly in transcription (the way they are transcribed in RP is more conservative).
- Although the notation /ʌ/ is used for the vowel of STRUT in RP, the actual pronunciation is closer to a near-open central unrounded vowel [ɐ]. The symbol ⟨ʌ⟩ continues to be used for reasons of tradition (it was historically a back vowel) and because it is still back in other varieties.
- Although the notation /eɪ oʊ/ are used for the vowels of FACE and GOAT respectively in General American, they are analysed as phonemic monophthongs and frequently transcribed as /e o/ in the literature.
- General American lacks a truly contrastive NURSE vowel, so pairs like forward vs. foreword (distinguished in RP as /ˈfɔːwəd/ and /ˈfɔːwɜːd/, respectively) are most typically homophonous as [ˈfɔɹwɚd]. Also, [ʌ] (stressed) and [ə] (unstressed) may be considered allophones of a single phoneme in General American.
- Many North American speakers do not distinguish /ɔ/ from /ɑ/ and merge them into /ɑ/, except before /r/ (see cot–caught merger).
The differences between these tables can be explained as follows:
- General American lacks a phoneme corresponding to RP /ɒ/ (LOT, CLOTH), instead using /ɑ/ in the LOT words and generally /ɔ/ in the CLOTH words. In a few North American accents, namely in Eastern New England (Boston), Western Pennsylvania (Pittsburgh), and to some degree in Pacific Northwest (Seattle, Portland) and Eastern Canadian English, LOT words do not have the vowel of PALM (the father–bother merger has not occurred) but instead merge with CLOTH/THOUGHT.
- RP transcriptions use /e/ rather than /ɛ/ largely for convenience and historical tradition; it does not necessarily represent a different sound from the General American phoneme, although the RP vowel may be described as somewhat less open than the American one.
- The different notations used for the vowel of GOAT in RP and General American (/əʊ/ and /oʊ/) reflect a difference in the most common phonetic realizations of that vowel.
- The triphthongs given in the RP table are usually regarded as sequences of two phonemes (a diphthong plus /ə/); however, in RP, these sequences frequently undergo smoothing into single diphthongs or even monophthongs.
- The different notations used here for some of the Australian vowels reflect the phonetic realization of those vowels in Australian: a central [ʉː] rather than [uː] in GOOSE, a more closed [e] rather than [ɛ] in DRESS, an open-mid [ɔ] rather than traditional RP's [ɒ] in LOT and CLOTH, NORTH and FORCE (here the difference lies almost only in transcription rather than pronunciation), an opener [ɐ] rather than somewhat closer [ʌ] in STRUT, a fronted [ɐː] rather than [ɑː] in CALM and START, and somewhat different pronunciations of most of the diphthongs. Note that central [ʉː] in GOOSE and open-mid [ɔ] in LOT are possible realizations in modern RP; in the case of the latter vowel, it is even more common than the traditional open [ɒ].
- The difference between RP /ɔː/ and Australian /oː/ lies only in transcription, as both of them are realized as close-mid [oː].
- Both Australian /eː/ and RP /eə/ are long monophthongs, the difference between them lies in tongue height: Australian /eː/ is close-mid [eː], whereas the corresponding RP vowel is open-mid [ɛː].
- Australian has the bad–lad split, with distinctive short and long variants in various words of the TRAP set: a long phoneme /æː/ in words like bad contrasts with a short /æ/ in words like lad. (A similar split is found in the accents of some speakers in southern England.)
- The vowel /ʊə/ is often omitted from descriptions of Australian, as for most speakers it has split into the long monophthong /oː/ (e.g. poor, sure) or the sequence /ʉːə/ (e.g. cure, lure).
Other points to be noted are these:
- The vowel /æ/ is coming to be pronounced more open (approaching [a]) by many modern RP speakers. In American speech, however, there is a tendency for it to become more closed, tenser and even diphthongized (to something like [eə]), particularly in certain environments, such as before a nasal consonant. Some American accents, for example those of New York City, Philadelphia and Baltimore, make a marginal phonemic distinction between /æ/ and /eə/, although the two occur largely in mutually exclusive environments. See /æ/ raising.
- A significant number of words (the BATH group) have /æ/ in General American, but /ɑː/ in RP. The pronunciation varies between /æ/ and /ɐː/ in Australia, with speakers from South Australia using /ɐː/ more extensively than speakers from other regions.
- In General American and Canadian (which are rhotic accents, where /r/ is pronounced in positions where it does not precede a vowel), many of the vowels can be r-colored by way of realization of a following /r/. This is often transcribed phonetically using a vowel symbol with an added retroflexion diacritic [ ˞ ]; thus the symbol [ɚ] has been created for an r-colored schwa (sometimes called schwar) as in LETTER, and the vowel of START can be modified to make [ɑ˞] so that the word start may be transcribed [stɑ˞t]. Alternatively, the START vowel might be written [stɑɚt] to indicate an r-colored offglide. The vowel of NURSE is generally always r-colored in these dialects, and this can be written [ɚ] (or as a syllabic [ɹ̩]).
- In modern RP and other dialects, many words from the CURE group are coming to be pronounced by an increasing number of speakers with the NORTH vowel (so sure is often pronounced like shore). Also the RP vowels /ɛə/ and /ʊə/ may be monophthongized to [ɛː] and [oː] respectively.
- The vowels of FLEECE and GOOSE are commonly pronounced as narrow diphthongs, approaching [ɪi] and [ʊu], in RP. Near-RP speakers may have particularly marked diphthongization of the type [əi] and [əu ~ əʉ], respectively. In General American, the pronunciation varies between a monophthong and a diphthong.
Allophones of vowels
Listed here are some of the significant cases of allophony of vowels found within standard English dialects.
- Vowels are shortened when followed in a syllable by a voiceless (fortis) consonant. This is known as pre-fortis clipping. Thus in the following word pairs the first item has a shortened vowel while the second has a normal length vowel: 'right' /raɪt/ – 'ride' /raɪd/; 'face' /feɪs/ – 'phase' /feɪz/; 'advice' /ədvaɪs/ – 'advise' /ədvaɪz/.
- In many accents of English, tense vowels undergo breaking before /l/, resulting in pronunciations like [pʰiəɫ] for peel, [pʰuəɫ] for pool, [pʰeəɫ] for pail, and [pʰoəɫ] for pole.
- In RP, the vowel /əʊ/ may be pronounced more back, as [ɒʊ], before syllable-final /l/, as in goal. In Australian English the vowel /əʉ/ is similarly backed to [ɔʊ] before /l/. A similar phenomenon may occur in Southern American English.
- The vowel /ə/ is often pronounced [ɐ] in open syllables.
- The PRICE and MOUTH diphthongs may be pronounced with a less open starting point when followed by a voiceless consonant; this is chiefly a feature of Canadian speech (Canadian raising), but is also found in parts of the United States. Thus writer may be distinguished from rider even when flapping causes the /t/ and /d/ to be pronounced identically.
Unstressed syllables in English may contain almost any vowel, but in practice vowels in stressed and unstressed syllables tend to use different inventories of phonemes. In particular, long vowels are used less often in unstressed syllables than stressed syllables. Additionally there are certain sounds—characterized by central position and weakness—that are particularly often found as the nuclei of unstressed syllables. These include:
- schwa, [ə], as in COMMA and (in non-rhotic dialects) LETTER (COMMA–LETTER merger); also in many other positions such as about, photograph, paddock, etc. This sound is essentially restricted to unstressed syllables exclusively. In the approach presented here it is identified as a phoneme /ə/, although other analyses do not have a separate phoneme for schwa and regard it as a reduction or neutralization of other vowels in syllables with the lowest degree of stress.
- r-colored schwa, [ɚ], as in LETTER in General American and some other rhotic dialects, which can be identified with the underlying sequence /ər/.
- syllabic consonants: [l̩] as in bottle, [n̩] as in button, [m̩] as in rhythm. These may be phonemized either as a plain consonant or as a schwa followed by a consonant; for example button may be represented as /ˈbʌtn̩/ or /ˈbʌtən/ (see above under Consonants).
- [ɨ̞], as in roses and making. This can be identified with the phoneme /ɪ/, although in unstressed syllables it may be pronounced more centrally, and for some speakers (particularly in Australian and New Zealand and some American English) it is merged with /ə/ in these syllables (weak vowel merger). Among speakers who retain the distinction there are many cases where free variation between /ɪ/ and /ə/ is found, as in the second syllable of typical. (The OED has recently adopted the symbol ⟨ᵻ⟩ to indicate such cases.)
- [ʉ̞], as in argument, today, for which similar considerations apply as in the case of [ɨ̞]. (The symbol ⟨ᵿ⟩ is sometimes used in these cases, similarly to ⟨ᵻ⟩.) Some speakers may also have a rounded schwa, [ɵ], used in words like omission [ɵˈmɪʃən].
- [i], as in happy, coffee, in many dialects (others have [ɪ] in this position). The phonemic status of this [i] is not easy to establish. Some authors consider it to correspond phonemically with a close front vowel that is neither the vowel of KIT nor that of FLEECE; it occurs chiefly in contexts where the contrast between these vowels is neutralized, implying that it represents an archiphoneme, which may be written /i/. Many speakers, however, do have a contrast in pairs of words like studied and studded or taxis and taxes; the contrast may be [i] vs. [ɪ], [ɪ] vs. [ə] or [i] vs. [ə], hence some authors consider that the happY-vowel should be identified phonemically either with the vowel of KIT or that of FLEECE, depending on speaker. See also happy-tensing.
- [u], as in influence, to each. This is the back rounded counterpart to [i] described above; its phonemic status is treated in the same works as cited there.
Vowel reduction in unstressed syllables is a significant feature of English. Syllables of the types listed above often correspond to a syllable containing a different vowel ("full vowel") used in other forms of the same morpheme where that syllable is stressed. For example, the first o in photograph, being stressed, is pronounced with the GOAT vowel, but in photography, where it is unstressed, it is reduced to schwa. Also, certain common words (a, an, of, for, etc.) are pronounced with a schwa when they are unstressed, although they have different vowels when they are in a stressed position (see Weak and strong forms in English).
Some unstressed syllables, however, retain full (unreduced) vowels, i.e. vowels other than those listed above. Examples are the /æ/ in ambition and the /aɪ/ in finite. Some phonologists regard such syllables as not being fully unstressed (they may describe them as having tertiary stress); some dictionaries have marked such syllables as having secondary stress. However linguists such as Ladefoged and Bolinger (1986) regard this as a difference purely of vowel quality and not of stress, and thus argue that vowel reduction itself is phonemic in English. Examples of words where vowel reduction seems to be distinctive for some speakers include chickaree vs. chicory (the latter has the reduced vowel of HAPPY, whereas the former has the FLEECE vowel without reduction), and Pharaoh vs. farrow (both have the GOAT vowel, but in the latter word it may reduce to [ɵ]).
Lexical stress is phonemic in English. For example, the noun increase and the verb increase are distinguished by the positioning of the stress on the first syllable in the former, and on the second syllable in the latter. (See initial-stress-derived noun.) Stressed syllables in English are louder than non-stressed syllables, as well as being longer and having a higher pitch.
In traditional approaches, in any English word consisting of more than one syllable, each syllable is ascribed one of three degrees of stress: primary, secondary or unstressed. Ordinarily, in each such word there will be exactly one syllable with primary stress, possibly one syllable having secondary stress, and the remainder are unstressed. For example, the word amazing has primary stress on the second syllable, while the first and third syllables are unstressed, whereas the word organization has primary stress on the fourth syllable, secondary stress on the first, and the second, third and fifth unstressed. This is often shown in pronunciation keys using the IPA symbols for primary and secondary stress (which are ˈ and ˌ respectively), placed before the syllables to which they apply. The two words just given may therefore be represented (in RP) as /əˈmeɪzɪŋ/ and /ˌɔːɡənaɪˈzeɪʃən/.
Some analysts identify an additional level of stress (tertiary stress). This is generally ascribed to syllables that are pronounced with less force than those with secondary stress, but nonetheless contain a "full" or "unreduced" vowel (vowels that are considered to be reduced are listed under English phonology § Unstressed syllables above). Hence the third syllable of organization, if pronounced with /aɪ/ as shown above (rather than being reduced to /ɪ/ or /ə/), might be said to have tertiary stress. (The precise identification of secondary and tertiary stress differs between analyses; dictionaries do not generally show tertiary stress, although some have taken the approach of marking all syllables with unreduced vowels as having at least secondary stress.)
In some analyses, then, the concept of lexical stress may become conflated with that of vowel reduction. An approach which attempts to separate these two is provided by Peter Ladefoged, who states that it is possible to describe English with only one degree of stress, as long as unstressed syllables are phonemically distinguished for vowel reduction. In this approach, the distinction between primary and secondary stress is regarded as a phonetic or prosodic detail rather than a phonemic feature – primary stress is seen as an example of the predictable "tonic" stress that falls on the final stressed syllable of a prosodic unit. For more details of this analysis, see Stress and vowel reduction in English.
For stress as a prosodic feature (emphasis of particular words within utterances), see § Prosodic stress below.
Phonotactics is the study of the sequences of phonemes that occur in languages and the sound structures that they form. In this study it is usual to represent consonants in general with the letter C and vowels with the letter V, so that a syllable such as 'be' is described as having CV structure. The IPA symbol used to show a division between syllables is the dot [.]. Syllabification is the process of dividing continuous speech into discrete syllables, a process in which the position of a syllable division is not always easy to decide upon.
Most languages of the world syllabify CVCV and CVCCV sequences as /CV.CV/ and /CVC.CV/ or /CV.CCV/, with consonants preferentially acting as the onset of a syllable containing the following vowel. According to one view, English is unusual in this regard, in that stressed syllables attract following consonants, so that ˈCVCV and ˈCVCCV syllabify as /ˈCVC.V/ and /ˈCVCC.V/, as long as the consonant cluster CC is a possible syllable coda; in addition, /r/ preferentially syllabifies with the preceding vowel even when both syllables are unstressed, so that CVrV occurs as /CVr.V/. This is the analysis used in the Longman Pronunciation Dictionary. However, this view is not widely accepted, as explained in the following section.
The syllable structure in English is (C)³V(C)⁵, with a near maximal example being strengths (/strɛŋkθs/, although it can be pronounced /strɛŋθs/).[B] From the phonetic point of view, the analysis of syllable structures is a complex task: because of widespread occurrences of articulatory overlap, English speakers rarely produce an audible release of individual consonants in consonant clusters. This coarticulation can lead to articulatory gestures that seem very much like deletions or complete assimilations. For example, hundred pounds may sound like [hʌndɹɪb paʊndz] and jumped back (in slow speech, [dʒʌmptbæk]) may sound like [dʒʌmpbæk], but X-ray and electropalatographic studies demonstrate that inaudible and possibly weakened contacts or lingual gestures may still be made. Thus the second /d/ in hundred pounds does not entirely assimilate to a labial place of articulation, rather the labial gesture co-occurs with the alveolar one; the "missing" [t] in jumped back may still be articulated, though not heard.
Division into syllables is a difficult area, and different theories have been proposed. A widely accepted approach is the maximal onset principle: this states that, subject to certain constraints, any consonants in between vowels should be assigned to the following syllable. Thus the word leaving should be divided /ˈliː.vɪŋ/ rather than */ˈliːv.ɪŋ/, and hasty is /ˈheɪ.sti/ rather than */ˈheɪs.ti/ or */ˈheɪst.i/. However, when such a division results in an onset cluster which is not allowed in English, the division must respect this. Thus if the word extra were divided */ˈɛ.kstrə/ the resulting onset of the second syllable would be /kstr/, a cluster which does not occur initially in English. The division /ˈɛk.strə/ is therefore preferred. If assigning a consonant or consonants to the following syllable would result in the preceding syllable ending in an unreduced short vowel, this is avoided. Thus the word comma (in RP) should be divided /ˈkɒm.ə/ and not */ˈkɒ.mə/, even though the latter division gives the maximal onset to the following syllable.
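The maximal onset principle lends itself to a simple procedural sketch: between two vowels, take the longest trailing run of consonants that forms a permissible onset and assign it to the following syllable. The snippet below is a toy illustration with a deliberately tiny, incomplete onset inventory (the function and variable names are my own); a real implementation would need the full phonotactic constraints discussed in this section, including the restriction against syllable-final unreduced short vowels.

```python
# Toy sketch of the maximal onset principle. LEGAL_ONSETS is a small,
# illustrative subset of permissible English onsets, not a full inventory.
LEGAL_ONSETS = {(), ("s",), ("t",), ("k",), ("v",), ("m",),
                ("s", "t"), ("s", "t", "r")}
VOWELS = {"iː", "ɪ", "e", "ɛ", "ə", "eɪ"}

def syllabify(phonemes):
    """Split a phoneme list into syllables, maximizing each onset."""
    vowel_idx = [i for i, p in enumerate(phonemes) if p in VOWELS]
    syllables, start = [], 0
    for v, next_v in zip(vowel_idx, vowel_idx[1:]):
        # consonants between this vowel and the next
        cluster = phonemes[v + 1:next_v]
        # take the longest suffix of the cluster that is a legal onset
        for split in range(len(cluster) + 1):
            if tuple(cluster[split:]) in LEGAL_ONSETS:
                break
        boundary = v + 1 + split
        syllables.append(phonemes[start:boundary])
        start = boundary
    syllables.append(phonemes[start:])
    return syllables

# 'hasty' /heɪsti/ divides as /ˈheɪ.sti/, not */ˈheɪs.ti/
print(syllabify(["h", "eɪ", "s", "t", "iː"]))
# → [['h', 'eɪ'], ['s', 't', 'iː']]
```

Running the same function on extra /ɛkstrə/ yields the division /ˈɛk.strə/ described above, because /kstr/ is not in the onset inventory but /str/ is.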
In some cases, no solution is completely satisfactory: for example, in British English (RP) the word hurry could be divided /ˈhʌ.ri/ or /ˈhʌr.i/, but the former would result in an analysis with a syllable-final /ʌ/ (which is held to be non-occurring) while the latter would result in a syllable final /r/ (which is said not to occur in this accent). Some phonologists have suggested a compromise analysis where the consonant in the middle belongs to both syllables, and is described as ambisyllabic. In this way, it is possible to suggest an analysis of hurry which comprises the syllables /hʌr/ and /ri/, the medial /r/ being ambisyllabic. Where the division coincides with a word boundary, or the boundary between elements of a compound word, it is not usual in the case of dictionaries to insist on the maximal onset principle in a way that divides words in a counter-intuitive way; thus the word hardware would be divided /ˈhɑː.dweə/ by the M.O.P., but dictionaries prefer the division /ˈhɑːd.weə/.
In the approach used by the Longman Pronunciation Dictionary, Wells claims that consonants syllabify with the preceding rather than following vowel when the preceding vowel is the nucleus of a more salient syllable, with stressed syllables being the most salient, reduced syllables the least, and full unstressed vowels ("secondary stress") intermediate. But there are lexical differences as well, frequently but not exclusively with compound words. For example, in dolphin and selfish, Wells argues that the stressed syllable ends in /lf/, but in shellfish, the /f/ belongs with the following syllable: /ˈdɒlf.ɪn, ˈself.ɪʃ/ → [ˈdɒlfɪ̈n, ˈselfɪ̈ʃ], but /ˈʃel.fɪʃ/ → [ˈʃelˑfɪʃ], where the /l/ is a little longer and the /ɪ/ is not reduced. Similarly, in toe-strap Wells argues that the second /t/ is a full plosive, as usual in syllable onset, whereas in toast-rack the second /t/ is in many dialects reduced to the unreleased allophone it takes in syllable codas, or even elided: /ˈtoʊ.stræp/, /ˈtoʊst.ræk/ → [ˈtoˑʊstɹæp, ˈtoʊs(t̚)ɹæk]; likewise nitrate /ˈnaɪ.treɪt/ → [ˈnaɪtɹ̥eɪt] with a voiceless /r/ (and for some people an affricated tr as in tree), vs night-rate /ˈnaɪt.reɪt/ → [ˈnaɪt̚ɹeɪt] with a voiced /r/. Cues of syllable boundaries include aspiration of syllable onsets and (in the US) flapping of coda /t, d/ (a tease /ə.ˈtiːz/ → [əˈtʰiːz] vs. at ease /æt.ˈiːz/ → [æɾˈiːz]), epenthetic stops like [t] in syllable codas (fence /ˈfens/ → [ˈfents] but inside /ɪn.ˈsaɪd/ → [ɪnˈsaɪd]), and r-colored vowels when the /r/ is in the coda vs. labialization when it is in the onset (key-ring /ˈkiː.rɪŋ/ → [ˈkiːɹʷɪŋ] but fearing /ˈfiːr.ɪŋ/ → [ˈfɪəɹɪŋ]).
The following can occur as the onset:
|All single consonant phonemes except /ŋ/||genre|
|Stop plus approximant other than /j/:||play, blood, clean, glove, prize, bring, tree,[a] dream,[a] crowd, green, twin, dwarf, Guam, quick, puissance|
|Voiceless fricative or /v/ plus approximant other than /j/:[b]||floor, sleep, thlipsis,[c] friend, three, shrimp, what,[d] swing, thwart, reservoir|
|Consonant plus /j/ (before /uː/ or its modified/reduced forms):[e]||pure, beautiful, tube,[e] during,[e] cute, argue, music, new,[e] few, view, thew,[e] suit,[e] Zeus,[e] huge, lurid[e]|
|/s/ plus voiceless stop:[f] /sp/, /st/, /sk/||speak, stop, skill|
|/s/ plus nasal other than /ŋ/:[f] /sm/, /sn/||smile, snow|
|/s/ plus voiceless fricative:[c] /sf/||sphere|
|/s/ plus voiceless stop plus approximant:[f]||split, sclera, spring, street, scream, square, smew, spew, student,[e] skewer|
|/s/ plus voiceless fricative plus approximant:[c] /sfr/||sphragistics|
- For certain speakers, /tr/ and /dr/ tend to affricate, so that tree resembles "chree", and dream resembles "jream". This is sometimes transcribed as [tʃr] and [dʒr] respectively, but the pronunciation varies and may, for example, be closer to [tʂ] and [dʐ] or with a fricative release similar in quality to the rhotic, i.e. [tɹ̝̊ɹ̥], [dɹ̝ɹ], or [tʂɻ], [dʐɻ].
- Some northern and insular Scottish dialects, particularly in the Shetlands, preserve onsets such as /ɡn/ (as in gnaw), /kn/ (as in knock), and /wr/ or /vr/ (as in write).
- Words beginning in unusual consonant clusters that originated in Latinized Greek loanwords tend to drop the first phoneme, as in */bd/, */fθ/, */ɡn/, */hr/, */kn/, */ks/, */kt/, */kθ/, */mn/, */pn/, */ps/, */pt/, */tm/, and */θm/, which have become /d/ (bdellium), /θ/ (phthisis), /n/ (gnome), /r/ (rhythm), /n/ (cnidoblast), /z/ (xylophone), /t/ (ctenophore), /θ/ (chthonic), /n/ (mnemonic), /n/ (pneumonia), /s/ (psychology), /t/ (pterodactyl), /m/ (tmesis), and /m/ (asthma). However, the onsets /sf/, /sfr/, /skl/, /sθ/, and /θl/ have remained intact.
- The onset /hw/ is simplified to /w/ in the majority of dialects (wine–whine merger).
- Clusters ending /j/ typically occur before /uː/ and before the CURE vowel (General American /ʊr/, RP /ʊə/); they may also come before the reduced form /ʊ/ (as in argument) or even /ər/ (in the American pronunciation of figure). There is an ongoing sound change (yod-dropping) by which /j/ as the final consonant in a cluster is being lost. In RP, words with /sj/ and /lj/ can usually be pronounced with or without this sound, e.g. [suːt] or [sjuːt]. For some speakers of English, including some British speakers, the sound change is more advanced and so, for example, General American does not contain the onsets /tj/, /dj/, /nj/, /θj/, /sj/, /stj/, /zj/, or /lj/. Words that would otherwise begin in these onsets drop the /j/: e.g. tube (/tub/), during (/ˈdʊrɪŋ/), new (/nu/), Thule (/ˈθuli/), suit (/sut/), student (/ˈstudənt/), Zeus (/zus/), lurid (/ˈlʊrɪd/). In some dialects, such as Welsh English, /j/ may occur in more combinations; for example in /tʃj/ (chew), /dʒj/ (Jew), /ʃj/ (sure), and /slj/ (slew).
- Many clusters beginning with /ʃ/ and paralleling native clusters beginning with /s/ are found initially in German and Yiddish loanwords, such as /ʃl/, /ʃp/, /ʃt/, /ʃm/, /ʃn/, /ʃpr/, /ʃtr/ (in words such as schlep, spiel, shtick, schmuck, schnapps, Shprintzen's, strudel). /ʃw/ is found initially in the Hebrew loanword schwa. Before /r/ however, the native cluster is /ʃr/. The opposite cluster /sr/ is found in loanwords such as Sri Lanka, but this can be nativized by changing it to /ʃr/.
Certain English onsets appear only in contractions: e.g. /zbl/ ('sblood), and /zw/ or /dzw/ ('swounds or 'dswounds). Some, such as /pʃ/ (pshaw), /fw/ (fwoosh), or /vr/ (vroom), can occur in interjections. An archaic voiceless fricative plus nasal exists, /fn/ (fnese), as does an archaic /snj/ (snew).
Several additional onsets occur in loan words (with varying degrees of anglicization) such as /bw/ (bwana), /mw/ (moiré), /nw/ (noire), /tsw/ (zwitterion), /zw/ (zwieback), /dv/ (Dvorak), /kv/ (kvetch), /ʃv/ (schvartze), /tv/ (Tver), /tsv/ (Zwickau), /kdʒ/ (Kjell), /kʃ/ (Kshatriya), /tl/ (Tlaloc), /vl/ (Vladimir), /zl/ (zloty), /tsk/ (Tskhinvali), /hm/ (Hmong), and /km/ (Khmer).
Some clusters of this type can be converted to regular English phonotactics by simplifying the cluster: e.g. /(d)z/ (dziggetai), /(h)r/ (Hrolf), /kr(w)/ (croissant), /(ŋ)w/ (Nguyen), /(p)f/ (pfennig), /(f)θ/ (phthalic), /(t)s/ (tsunami), /(ǃ)k/ (!kung), and /k(ǁ)/ (Xhosa).
Others can be replaced by native clusters differing only in voice: /zb ~ sp/ (sbirro), and /zɡr ~ skr/ (sgraffito).
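The tabulated onset patterns lend themselves to a rule-based sketch. The following Python fragment is purely illustrative: the function name `is_valid_onset` and the rule set are assumptions introduced here, and they deliberately model only a subset of the patterns above (fricative + approximant clusters and /j/-clusters, among others, are omitted).

```python
# Illustrative sketch only: a simplified validity check for a subset of
# the English onset patterns listed above. Phonemes are plain strings
# of IPA symbols; /ɡ/ is written "g" for convenience.

STOPS = set("pbtdkg")
VOICELESS_STOPS = set("ptk")
NASALS = {"m", "n", "ŋ"}
APPROXIMANTS = {"l", "r", "w", "j"}

def is_valid_onset(phonemes):
    """Return True if the tuple of phoneme strings matches one of the
    simplified onset patterns modelled here."""
    n = len(phonemes)
    if n == 0:
        return True                          # onsets are optional
    if n == 1:
        return phonemes[0] != "ŋ"            # any single consonant but /ŋ/
    if n == 2:
        first, second = phonemes
        if first in STOPS and second in APPROXIMANTS - {"j"}:
            return True                      # play, tree, quick
        if first == "s" and second in VOICELESS_STOPS:
            return True                      # speak, stop, skill
        if first == "s" and second in NASALS - {"ŋ"}:
            return True                      # smile, snow
        if first == "s" and second in {"f", "θ"}:
            return True                      # sphere
        return False
    if n == 3:
        first, second, third = phonemes
        return (first == "s"
                and second in VOICELESS_STOPS | {"f"}
                and third in APPROXIMANTS)   # split, street, sphragistics
    return False
```

For instance, `is_valid_onset(("s", "t", "r"))` accepts the onset of street, while `is_valid_onset(("ŋ",))` rejects syllable-initial /ŋ/.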
The following can occur as the nucleus:
- All vowel sounds
- /m/, /n/ and /l/ in certain situations (see below under word-level patterns)
- /r/ in rhotic varieties of English (e.g. General American) in certain situations (see below under word-level patterns)
Most (in theory, all) of the following except those that end with /s/, /z/, /ʃ/, /ʒ/, /tʃ/ or /dʒ/ can be extended with /s/ or /z/ representing the morpheme -s/-z. Similarly, most (in theory, all) of the following except those that end with /t/ or /d/ can be extended with /t/ or /d/ representing the morpheme -t/-d.
Wells (1990) argues that a variety of syllable codas are possible in English, even /ntr, ndr/ in words like entry /ˈɛntr.ɪ/ and sundry /ˈsʌndr.ɪ/, with /tr, dr/ being treated as affricates along the lines of /tʃ, dʒ/. He argues that the traditional assumption that pre-vocalic consonants form a syllable with the following vowel is due to the influence of languages like French and Latin, where syllable structure is CVC.CVC regardless of stress placement. Disregarding such contentious cases, which do not occur at the ends of words, the following sequences can occur as the coda:
|All single consonant phonemes except /h/, /w/, /j/ and, in non-rhotic varieties, /r/|
|Lateral approximant plus stop or affricate: /lp/, /lb/, /lt/, /ld/, /ltʃ/, /ldʒ/, /lk/||help, bulb, belt, hold, belch, indulge, milk|
|In rhotic varieties, /r/ plus stop or affricate: /rp/, /rb/, /rt/, /rd/, /rtʃ/, /rdʒ/, /rk/, /rɡ/||harp, orb, fort, beard, arch, large, mark, morgue|
|Lateral approximant + fricative: /lf/, /lv/, /lθ/, /ls/, /lz/, /lʃ/||golf, solve, wealth, else, bells, Welsh|
|In rhotic varieties, /r/ + fricative: /rf/, /rv/, /rθ/, /rs/, /rz/, /rʃ/||dwarf, carve, north, force, Mars, marsh|
|Lateral approximant + nasal: /lm/, /ln/||film, kiln|
|In rhotic varieties, /r/ + nasal or lateral: /rm/, /rn/, /rl/||arm, born, snarl|
|Nasal + homorganic stop or affricate: /mp/, /nt/, /nd/, /ntʃ/, /ndʒ/, /ŋk/||jump, tent, end, lunch, lounge, pink|
|Nasal + fricative: /mf/, /mθ/, /nθ/, /ns/, /nz/, /ŋθ/ in some varieties||triumph, warmth, month, prince, bronze, length|
|Voiceless fricative plus voiceless stop: /ft/, /sp/, /st/, /sk/||left, crisp, lost, ask|
|Two voiceless fricatives: /fθ/||fifth|
|Two voiceless stops: /pt/, /kt/||opt, act|
|Stop plus voiceless fricative: /pθ/, /ps/, /tθ/, /ts/, /dθ/, /ks/||depth, lapse, eighth, klutz, width, box|
|Lateral approximant + two consonants: /lpt/, /lps/, /lfθ/, /lts/, /lst/, /lkt/, /lks/||sculpt, alps, twelfth, waltz, whilst, mulct, calx|
|In rhotic varieties, /r/ + two consonants: /rmθ/, /rpt/, /rps/, /rts/, /rst/, /rkt/||warmth, excerpt, corpse, quartz, horst, infarct|
|Nasal + homorganic stop + stop or fricative: /mpt/, /mps/, /ndθ/, /ŋkt/, /ŋks/, /ŋkθ/ in some varieties||prompt, glimpse, thousandth, distinct, jinx, length|
|Three obstruents: /ksθ/, /kst/||sixth, next|
For some speakers, a fricative before /θ/ is elided so that these never appear phonetically: /fɪfθ/ becomes [fɪθ], /sɪksθ/ becomes [sɪkθ], /twɛlfθ/ becomes [twɛlθ].
- Syllables may consist of a single vowel, meaning that onset and coda are not mandatory.
- The consonant /ŋ/ does not occur in syllable-initial position.
- The consonant /h/ does not occur in syllable-final position.
- Onset clusters ending in /j/ are followed by /uː/ or its variants (see note 5 above).
- Long vowels and diphthongs are not found before /ŋ/, except for the mimetic words boing and oink, unassimilated foreign words such as Burmese aung and proper names such as Taung, and American-type pronunciations of words like strong (which have /ɔŋ/ or /ɑŋ/). The short vowels /ɛ, ʊ/ occur before /ŋ/ only in assimilated non-native words such as ginseng and Sung (name of dynasty), or non-finally in some dialects in words like strength and length.
- /ʊ/ is rare in syllable-initial position[C] (although in the northern half of England, [ʊ] is used for /ʌ/ and is common at the start of syllables).
- Stop + /w/ before /uː, ʊ, ʌ, aʊ/ (all presently or historically /u(ː)/) are excluded.
- Sequences of /s/ + C1 + V̆ + C1, where C1 is a consonant other than /t/ and V̆ is a short vowel, are virtually nonexistent.
- /ə/ does not occur in stressed syllables.
- /ʒ/ does not occur in word-initial position in native English words, although it can occur syllable-initially as in luxurious /lʌɡˈʒʊəriəs/, and at the start of borrowed words such as genre.
- /m/, /n/, /l/ and, in rhotic varieties, /r/ can be the syllable nucleus (i.e. a syllabic consonant) in an unstressed syllable following another consonant, especially /t/, /d/, /s/ or /z/. Such syllables are often analyzed phonemically as having an underlying /ə/ as the nucleus. See above under Consonants.
- The short vowels are checked vowels, in that they cannot occur without a coda in a word-final stressed syllable. (This does not apply to /ə/, which does not occur in stressed syllables at all.)
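Several of the word-level constraints above can be sketched in the same illustrative way. The fragment below is again hypothetical, not a standard linguistic tool: a syllable is represented as an (onset, nucleus, coda) triple, and only four of the listed constraints are checked.

```python
# Hypothetical sketch checking four of the word-level constraints
# above. A syllable is modelled as an onset tuple of phoneme strings,
# a nucleus vowel string, and a coda tuple of phoneme strings.

SHORT_CHECKED = {"ɪ", "ɛ", "æ", "ʌ", "ʊ", "ɒ"}   # the checked vowels

def violations(onset, nucleus, coda, stressed=False, word_final=False):
    """Return a list of the modelled constraints this syllable breaks."""
    problems = []
    if onset and onset[0] == "ŋ":
        problems.append("/ŋ/ does not occur syllable-initially")
    if coda and coda[-1] == "h":
        problems.append("/h/ does not occur syllable-finally")
    if stressed and nucleus == "ə":
        problems.append("/ə/ does not occur in stressed syllables")
    if stressed and word_final and not coda and nucleus in SHORT_CHECKED:
        problems.append("checked vowels need a coda in a final stressed syllable")
    return problems
```

Applied to stop /stɒp/ as a stressed word-final syllable, `violations(("s", "t"), "ɒ", ("p",), stressed=True, word_final=True)` returns an empty list, while dropping the coda triggers the checked-vowel constraint.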
The prosodic features of English – stress, rhythm, and intonation – can be described as follows.
Prosodic stress is extra stress given to words or syllables when they appear in certain positions in an utterance, or when they receive special emphasis.
According to Ladefoged's analysis (as referred to under Lexical stress § Notes above), English normally has prosodic stress on the final stressed syllable in an intonation unit. This is said to be the origin of the distinction traditionally made at the lexical level between primary and secondary stress: when a word like admiration (traditionally transcribed as something like /ˌædmɪˈreɪʃən/) is spoken in isolation, or at the end of a sentence, the syllable ra (the final stressed syllable) is pronounced with greater force than the syllable ad, although when the word is not pronounced with this final intonation there may be no difference between the levels of stress of these two syllables.
Prosodic stress can shift for various pragmatic functions, such as focus or contrast. For instance, in the dialogue Is it brunch tomorrow? No, it's dinner tomorrow, the extra stress shifts from the last stressed syllable of the sentence, tomorrow, to the last stressed syllable of the emphasized word, dinner.
Grammatical function words are usually prosodically unstressed, although they can acquire stress when emphasized (as in Did you find the cat? Well, I found a cat). Many English function words have distinct strong and weak pronunciations; for example, the word a in the last example is pronounced /eɪ/, while the more common unstressed a is pronounced /ə/. See Weak and strong forms in English.
English is claimed to be a stress-timed language. That is, stressed syllables tend to appear with a more or less regular rhythm, while non-stressed syllables are shortened to accommodate this. For example, in the sentence One make of car is better than another, the syllables one, make, car, bett- and -noth- will be stressed and relatively long, while the other syllables will be considerably shorter. The theory of stress-timing predicts that each of the three unstressed syllables between bett- and -noth- will be shorter than the single syllable of, which falls between make and car, because three syllables must fit into the same amount of time as that available for of. However, it should not be assumed that all varieties of English are stress-timed in this way. The English spoken in the West Indies, in Africa and in India are probably better characterized as syllable-timed, though the lack of an agreed scientific test for categorizing an accent or language as stress-timed or syllable-timed may lead one to doubt the value of such a characterization.
Phonological contrasts in intonation can be said to be found in three different and independent domains. In the work of Halliday the following names are proposed:
- Tonality for the distribution of continuous speech into tone groups.
- Tonicity for the placing of the principal accent on a particular syllable of a word, making it the tonic syllable. This is the domain also referred to as prosodic stress or sentence stress.
- Tone for the choice of pitch movement on the tonic syllable. (The use of the term tone in this sense should not be confused with the tone of tone languages, such as Chinese.)
These terms ("the Three Ts") have been used in more recent work, though they have been criticized for being difficult to remember. American systems such as ToBI also identify contrasts involving boundaries between intonation phrases (Halliday's tonality), placement of pitch accent (tonicity), and choice of tone or tones associated with the pitch accent (tone).
Example of phonological contrast involving placement of intonation unit boundaries (boundary marked by comma):
- Those who ran quickly, escaped. (the only people who escaped were those who ran quickly)
- Those who ran, quickly escaped. (the people who ran escaped quickly)
Example of phonological contrast involving placement of tonic syllable (marked by capital letters):
- I have plans to LEAVE. (= I am planning to leave)
- I have PLANS to leave. (= I have some drawings to leave)
Example of phonological contrast (British English) involving choice of tone (\ = falling tone, \/ = fall-rise tone)
- She didn't break the record because of the \ WIND. (= she did not break the record, because the wind held her up)
- She didn't break the record because of the \/ WIND. (= she did break the record, but not because of the wind)
There is typically a contrast involving tone between wh-questions and yes/no questions, the former having a falling tone (e.g. "Where did you \PUT it?") and the latter a rising tone (e.g. "Are you going /OUT?"), though studies of spontaneous speech have shown frequent exceptions to this rule. Tag questions asking for information are said to carry rising tones (e.g. "They are coming on Tuesday, /AREN'T they?") while those asking for confirmation have falling tone (e.g. "Your name's John, \ISN'T it.").
History of English pronunciation
The pronunciation system of English has undergone many changes throughout the history of the language, from the phonological system of Old English, to that of Middle English, through to that of the present day. Variation between dialects has always been significant. Former pronunciations of many words are reflected in their spellings, as English orthography has generally not kept pace with phonological changes since the Middle English period.
The English consonant system has been relatively stable over time, although a number of significant changes have occurred. Examples include the loss (in most dialects) of the [ç] and [x] sounds still reflected by the ⟨gh⟩ in words like night and taught, and the splitting of voiced and voiceless allophones of fricatives into separate phonemes (such as the two different phonemes represented by ⟨th⟩). There have also been many changes in consonant clusters, mostly reductions, for instance those that produced the usual modern pronunciations of such letter combinations as ⟨wr-⟩, ⟨kn-⟩ and ⟨wh-⟩.
The development of vowels has been much more complex. One of the most notable series of changes is that known as the Great Vowel Shift, which began around the late 14th century. Here the [iː] and [uː] in words like price and mouth became diphthongized, and other long vowels became higher: [eː] became [iː] (as in meet), [aː] became [eː] and later [eɪ] (as in name), [oː] became [uː] (as in goose), and [ɔː] became [oː] and later [oʊ] (in RP now [əʊ]; as in bone). These shifts are responsible for the modern pronunciations of many written vowel combinations, including those involving a silent final ⟨e⟩.
Many other changes in vowels have taken place over the centuries (see the separate articles on the low back, high back and high front vowels, short A, and diphthongs). These various changes mean that many words that formerly rhymed (and may be expected to rhyme based on their spelling) no longer do. For example, in Shakespeare's time, following the Great Vowel Shift, food, good and blood all had the vowel [uː], but in modern pronunciation good has been shortened to [ʊ], while blood has been shortened and lowered to [ʌ] in most accents. In other cases, words that were formerly distinct have come to be pronounced the same – examples of such mergers include meet–meat, pane–pain and toe–tow.
- Australian English phonology
- English orthography
- English pronunciation of Greek letters
- General American
- Non-native pronunciations of English
- Old English phonology
- Perception of English /r/ and /l/ by Japanese speakers
- Phonological development
- Phonological history of English vowels
- Phonological history of English consonants
- Pronunciation of English ⟨th⟩
- Received Pronunciation
- Regional accents of English
- Rhoticity in English
- R-colored vowel
- International Phonetic Alphabet chart for English dialects
- Category:Splits and mergers in English phonology
- "The t after n is often silent in American pronunciation. Instead of saying internet Americans will frequently say 'innernet.' This is fairly standard speech and is not considered overly casual or sloppy speech." Mojsin (2009), p. 36
- Five-consonant codas are rare, but one occurs in angsts /æŋksts/. See list of the longest English words with one syllable for further long syllables in English.
- The OED does not list any native words that begin with /ʊ/, apart from mimetic oof!, ugh!, oops!, and ook(y)
- Rogers (2014), p. 20.
- Roach (2009), pp. 100–1.
- Kreidler (2004), p. 84.
- Wells (1982), p. 55.
- Wells (1982), pp. 389, 619.
- Tench (1990), p. 132.
- Bowerman (2004), p. 939.
- Garrett, Coupland & Williams (2003).
- Bowerman (2004), p. 940.
- O'Connor, J.D. (1973). Phonetics (1st ed.). Penguin. p. 151. ISBN 978-0140136388.
- Roach (2009), p. 43.
- Gimson (2008), p. 230.
- McMahon (2002), p. 31.
- Giegerich (1992), p. 36.
- Ladefoged (2006), p. 68.
- Wells (1982), p. 490.
- Wells (1982), p. 550.
- Collins & Mees (1990), p. 91.
- Ladefoged (2001), p. 55.
- Celce-Murcia, Brinton & Goodwin (1996), pp. 62–67.
- Roach (2009), pp. 26–28.
- Wells (1982), p. 388.
- Gimson (2008), pp. 179–180.
- Wells (1982), p. 323.
- Celce-Murcia, Brinton & Goodwin (1996), p. 64.
- Gimson (2014), pp. 173–182.
- Gimson (2014), pp. 170 and 173–182.
- Gimson (2014), p. 190.
- Trudgill & Hannah 2002, p. 18
- Trudgill & Hannah 2002, p. 25
- Wyld (1936), cited in Wells (1982), p. 262.
- Bauer & Warren (2005), p. 596.
- Wells (1982), p. 207.
- Durian (2007).
- Hay (2008), p. 37.
- Wells (1982), pp. 140, 147, 299.
- Roach (2004), p. 242.
- Gimson (2014).
- Roca & Johnson (1999), p. 135.
- Wells (1982), pp. 121, 132.
- Wells (1982), pp. 473–474.
- Labov, Ash & Boberg (2006), pp. 13, 171–173.
- Woods (1993), pp. 170–171.
- Kiefte & Kay-Raining Bird (2010), pp. 63–64, 67.
- Wells (1982), p. 128.
- Gimson (2014), pp. 126, 133.
- Cox & Fletcher (2017), p. 65.
- Gimson (2014), p. 118.
- Cox & Palethorpe (2007).
- Wells (1982), p. 129.
- Roach (2004), p. 240.
- Collins & Mees (2013), p. 58.
- Gimson (2008), p. 132.
- Celce-Murcia, Brinton & Goodwin (1996), p. 66.
- Wells (1982), p. 149.
- Bolinger (1986), pp. 347–360.
- Lewis (1990).
- Kreidler (2004), pp. 82–3.
- McCully (2009), pp. 123–4.
- Roach (2009), pp. 66–8.
- Wells (2014), p. 53.
- Ladefoged (2006).
- Bolinger (1986), p. 351.
- Bolinger (1986), p. 348.
- Ladefoged (2006), §5.4.
- Ladefoged (1980), p. 83.
- Wells (1990), pp. 76–86.
- Zsiga (2003), p. 404.
- Browman & Goldstein (1990).
- Barry (1991).
- Barry (1992).
- Nolan (1992).
- Selkirk (1982).
- Giegerich (1992), p. 172.
- Harris (1994), p. 198.
- Gimson (2008), pp. 258–9.
- Giegerich (1992), pp. 167–70.
- Kreidler (2004), pp. 76–8.
- Wells (1990), p. ?.
- Read (1986), p. ?.
- Bradley (2006).
- Baković (2006).
- Blake (1992), p. 67.
- McColl Millar (2007), pp. 63–64.
- Clements & Keyser (1983), p. ?.
- Collins & Mees (2013), p. 138.
- Wells (1982), p. 644.
- Wells (1982), pp. 630–1.
- Roach (1982), pp. 73–9.
- Halliday (1967), pp. 18–24.
- Tench (1996).
- Wells (2006).
- Roach (2009), p. 144.
- Brown (1990), pp. 122–3.
- Cercignani (1975), pp. 513–8.
- Baković, Eric (2006), "The jug trade", Phonoloblog, archived from the original on 2008-09-05
- Barry, M (1991), "Temporal Modelling of Gestures in Articulatory Assimilation", Proceedings of the 12th International Congress of Phonetic Sciences, Aix-en-Provence
- Barry, M (1992), "Palatalisation, Assimilation and Gestural Weakening in Connected Speech", Speech Communication, pp. vol.11, 393–400
- Bauer, L.; Warren, P. (2005), "New Zealand English: phonology", in Schneider, Edgar Werner; Kortmann, Bernd (eds.), A Handbook of Varieties of English, Mouton De Gruyter
- Blake, Norman, ed. (1992), The Cambridge History of the English Language, 2, Cambridge University Press, ISBN 9781139055529
- Bolinger, Dwight (1986), Intonation and Its Parts: Melody in Spoken English, Stanford University Press, ISBN 0-8047-1241-7
- Bowerman, Sean (2004), "White South African English: phonology", in Schneider, Edgar W.; Burridge, Kate; Kortmann, Bernd; Mesthrie, Rajend; Upton, Clive (eds.), A handbook of varieties of English, 1: Phonology, Mouton de Gruyter, pp. 931–942, ISBN 3-11-017532-0
- Bradley, Travis (2006), "Prescription Jugs", Phonoloblog, archived from the original on 2008-09-05
- Browman, Catherine P.; Goldstein, Louis (1990), "Tiers in Articulatory Phonology, with Some Implications for Casual Speech", in Kingston, John C.; Beckman, Mary E. (eds.), Papers in Laboratory Phonology I: Between the Grammar and Physics of Speech, New York: Cambridge University Press, pp. 341–376
- Brown, G. (1990), Listening to Spoken English, Longman
- Celce-Murcia, M.; Brinton, D.; Goodwin, J. (1996), Teaching Pronunciation: A Reference for Teachers of English to Speakers of Other Languages, Cambridge University Press
- Cercignani, Fausto (1975), "English Rhymes and Pronunciation in the Mid-Seventeenth Century", English Studies, 56 (6): 513–518, doi:10.1080/00138387508597728
- Clements, G.N.; Keyser, S. (1983), CV Phonology: A Generative Theory of the Syllable, Cambridge, MA: MIT press
- Collins, Beverley; Mees, Inger M. (1990), "The phonetics of Cardiff English", in Coupland, Nikolas; Thomas, Alan Richard (eds.), English in Wales: Diversity, Conflict, and Change, Multilingual Matters, pp. 87–103, ISBN 9781853590313
- Collins, Beverley; Mees, Inger M. (2013) [First published 2003], Practical Phonetics and Phonology: A Resource Book for Students (3rd ed.), Routledge, ISBN 978-0-415-50650-2
- Cox, Felicity; Fletcher, Janet (2017), Australian English Pronunciation and Transcription, Cambridge University Press, ISBN 978-1-316-63926-9
- Cox, Felicity; Palethorpe, Sallyanne (2007). "Illustrations of the IPA: Australian English". Journal of the International Phonetic Association. 37 (3): 341–350. doi:10.1017/S0025100307003192.
- Durian, David (2007), "Getting [ʃ]tronger Every Day?: More on Urbanization and the Socio-geographic Diffusion of (str) in Columbus, OH", University of Pennsylvania Working Papers in Linguistics, 13 (2): 65–79
- Garrett, Peter; Coupland, Nikolas; Williams, Angie (2003), Investigating Language Attitudes: Social Meanings of Dialect, Ethnicity and Performance, University of Wales Press, ISBN 1783162082
- Giegerich, H. (1992), English Phonology: An Introduction, Cambridge: Cambridge University Press
- Gimson, A.C. (2008), Cruttenden, Alan (ed.), Pronunciation of English, Hodder
- Gimson, A.C. (2014), Cruttenden, Alan (ed.), Gimson's Pronunciation of English (8th ed.), Routledge, ISBN 9781444183092
- Halliday, M.A.K. (1967), Intonation and Grammar in British English, Mouton
- Harris, John (1994), English Sound Structure, Oxford: Blackwell
- Hay, Jennifer (2008), New Zealand English, Edinburgh University Press, ISBN 978-0-7486-3088-2
- Kiefte, Michael; Kay-Raining Bird, Elizabeth (2010), "Canadian Maritime English", in Schreier, Daniel; Trudgill, Peter; Schneider, Edgar W.; Williams, Jeffrey P. (eds.), The Lesser-Known Varieties of English: An Introduction, Cambridge University Press, pp. 59–71, ISBN 978-1-139-48741-2
- Kreidler, Charles (2004), The Pronunciation of English, Blackwell
- Ladefoged, Peter (1980), Preliminaries to linguistic phonetics, University of Chicago Press, ISBN 0-226-46787-2
- Ladefoged, Peter (2001), Vowels and Consonants, Blackwell, ISBN 0-631-21411-9
- Ladefoged, Peter (2006), A Course in Phonetics (5th ed.), Fort Worth: Harcourt College Publishers, ISBN 0-15-507319-2
- Labov, William; Ash, Sharon; Boberg, Charles (2006), The Atlas of North American English: Phonetics, Phonology and Sound Change, Walter de Gruyter, ISBN 978-3-11-020683-8
- Lewis, J. Windsor (1990), "Happy land Reconnoitred: The unstressed word-final -y vowel in General British pronunciation", in Ramsaran, Susan (ed.), Studies in the Pronunciation of English: A Commemorative Volume in Honour of A. C. Gimson, London: Routledge, pp. 159–167, ISBN 978-0-415-07180-2
- McColl Millar, Robert (2007), Northern and Insular Scots, Edinburgh University Press
- McCully, C. (2009), The Sound Structure of English, Cambridge: Cambridge University Press
- McMahon, A. (2002), An Introduction to English Phonology, Edinburgh
- Mojsin, Lisa (2009), Mastering the American Accent (1st ed.), Barron's Educational Series, ISBN 0764195824
- Nolan, Francis (1992), "The Descriptive Role of Segments: Evidence from Assimilation.", in Docherty, Gerard J.; Ladd, D. Robert (eds.), Papers in Laboratory Phonology II: Gesture, Segment, Prosody, New York: Cambridge University Press, pp. 261–280
- Read, Charles (1986), Children's Creative Spelling, Routledge, ISBN 0-7100-9802-2
- Roach, Peter (1982), "On the distinction between 'stress-timed' and 'syllable-timed' languages", in Crystal, David (ed.), Linguistic Controversies, Arnold
- Roach, Peter (2004), "British English: Received Pronunciation", Journal of the International Phonetic Association, 34 (2): 239–245, doi:10.1017/S0025100304001768
- Roach, Peter (2009), English Phonetics and Phonology: A Practical Course, 4th Ed., Cambridge: Cambridge University Press, ISBN 0-521-78613-4
- Roca, Iggy; Johnson, Wyn (1999), A Course in Phonology, Blackwell Publishing
- Rogers, Henry (2014), The Sounds of Language: An Introduction to Phonetics (2nd ed.), Routledge, ISBN 978-1-31787776-9
- Selkirk, E. (1982), "The Syllable", in van der Hulst, H.; Smith, N. (eds.), The Structure of Phonological Representations, Dordrecht: Foris
- Tench, Paul (1990), "The Pronunciation of English in Abercrave", in Coupland, Nikolas; Thomas, Alan Richard (eds.), English in Wales: Diversity, Conflict, and Change, Multilingual Matters Ltd., pp. 130–141, ISBN 1-85359-032-0
- Tench, P. (1996), The Intonation Systems of English, Cassell
- Trudgill, Peter; Hannah, Jean (2002), International English: A Guide to the Varieties of Standard English (4th ed.), London: Arnold
- Wells, John C. (1982), Accents of English, Volume 1: An Introduction (pp. i–xx, 1–278), Volume 2: The British Isles (pp. i–xx, 279–466), Volume 3: Beyond the British Isles (pp. i–xx, 467–674), Cambridge University Press, ISBN 0-521-29719-2, 0-521-28540-2, 0-521-28541-0
- Wells, John C. (1990), "Syllabification and allophony", in Ramsaran, Susan (ed.), Studies in the Pronunciation of English: A Commemorative Volume in Honour of A. C. Gimson, London: Routledge, pp. 76–86, ISBN 978-0-415-07180-2
- Wells, John C. (2006), English Intonation, Cambridge: Cambridge University Press
- Wells, John C. (2014), Sounds Interesting, Cambridge: Cambridge University Press
- Woods, Howard B. (1993), "A synchronic study of English spoken in Ottawa: Is Canadian English becoming more American?", in Clarke, Sandra (ed.), Focus on Canada, John Benjamins Publishing, pp. 151–178, ISBN 90-272-7681-1
- Wyld, H.C. (1936), A History of Modern Colloquial English, Blackwell
- Zsiga, Elizabeth (2003), "Articulatory Timing in a Second Language: Evidence from Russian and English", Studies in Second Language Acquisition, 25: 399–432, doi:10.1017/s0272263103000160
- Bacsfalvi, P. (2010). "Attaining the lingual components of /r/ with ultrasound for three adolescents with cochlear implants". Canadian Journal of Speech-Language Pathology and Audiology. 3 (34): 206–217.
- Ball, M.; Lowry, O.; McInnis, L. (2006). "Distributional and stylistic variation in /r/-misarticulations: A case study". Clinical Linguistics & Phonetics. 2–3 (20).
- Campbell, F.; Gick, B.; Wilson, I.; Vatikiotis-Bateson, E. (2010), "Spatial and Temporal Properties of Gestures in North American English /r/", Language and Speech, 53 (1): 49–69
- Cercignani, Fausto (1981), Shakespeare's Works and Elizabethan Pronunciation, Oxford: Clarendon Press
- Chomsky, Noam; Halle, Morris (1968), The Sound Pattern of English, New York: Harper & Row
- Crystal, David (1969), Prosodic Systems and Intonation in English, Cambridge: Cambridge University Press
- Dalcher Villafaña, C., Knight, R.A., Jones, M.J., (2008), “Cue Switching in the Perception of Approximants: Evidence from Two English Dialects”. University of Pennsylvania Working Papers in Linguistics, 14 (2): 63–64
- Espy-Wilson, C. (2004), “Articulatory Strategies, speech Acoustics and Variability”. From Sound to Sense June 11 – June 13 at MIT: 62–63
- Fudge, Erik C. (1984), English Word-stress, London: Allen and Unwin
- Gimson, A.C. (1962), An Introduction to the Pronunciation of English, London: Edward Arnold
- Hagiwara, R., Fosnot, S. M., & Alessi, D. M. (2002). “Acoustic phonetics in a clinical setting: A case study of /r/-distortion therapy with surgical intervention”. Clinical linguistics & phonetics, 16 (6): 425–441.
- Halliday, M.A.K. (1970), A Course in Spoken English: Intonation, London: Oxford University Press
- Hoff, Erika, (2009), Language Development. Scarborough, Ontario. Cengage Learning, 2005.
- Howard, S. (2007), “The interplay between articulation and prosody in children with impaired speech: Observations from electropalatographic and perceptual analysis”. International Journal of Speech-Language Pathology, 9 (1): 20–35.
- Kingdon, Roger (1958), The Groundwork of English Intonation, London: Longman
- Locke, John L., (1983), Phonological Acquisition and Change. New York, United States. Academic Press, 1983. Print.
- O'Connor, J. D.; Arnold, Gordon Frederick (1961), Intonation of Colloquial English, London: Longman
- Pike, Kenneth Lee (1945), The Intonation of American English, Ann Arbor: University of Michigan Press
- Sharf, D.J.; Benson, P.J. (1982), "Identification of synthesized /r-w/ continua for adult and child speakers", Journal of the Acoustical Society of America, 71 (4): 1008–1015.
- Trager, George L.; Smith, Henry Lee (1951), An Outline of English Structure, Norman, OK: Battenburg Press
- Wise, Claude Merton (1957), Applied Phonetics, Englewood Cliffs, NJ: Prentice-Hall.
- An American IPA chart with sounds and examples. All the sounds of American English (General American) with: consonants, simple vowels and diphthongs. The chart is interactive, click on the symbols and illustrations!
- Seeing Speech Accent Map
- Sounds of English (includes animations and descriptions)
- The sounds of English and the International Phonetic Alphabet (www.antimoon.com).
- The Chaos by Gerard Nolst Trenité.
- Chris Upwood on The Classic Concordance of Cacographic Chaos
History is the continuous record of public events and the study of nations. Mediaeval history bridges the classical and the modern: the period from the end of the Roman Empire, through Charlemagne's Holy Roman Empire, to the Renaissance. It neatly spans the European gaps in learning and stable government. Rome meant peace and security to Europeans, and men tried to recreate that happy golden age. (Regrettably for the Europeans, Charlemagne was followed by more Dark Ages.) In this definition, family interests lean towards the later mediaeval period. The following is intended to give some baseline against which to note family motives, decisions and events.
By far the most significant event during this period was the empowerment of the Roman Catholic Church. The concept of Christendom is important to the understanding of subsequent events. By the act of personally crowning Charlemagne as the Holy Roman Emperor in 800 Pope Leo III had gained temporal power. Since the Pope created Charlemagne Emperor, he made the Church's support conditional upon the emperor's defence of the Church. Thus the Holy Roman Empire was built to define the limits of European Christianity. The Saxons were won to Christianity because Charlemagne gave them a choice - convert or the sword! The Church had had five equal patriarchies, but by 700 there were only two left with significance: Rome and Constantinople. (Antioch, Alexandria and Jerusalem had faded badly.) Given Pope Leo's coup with Charlemagne, the remaining Greek Patriarch was doomed to lose an unequal power struggle to an increasingly powerful European Pope.
Charlemagne is important because he revived secure empire, learning, and law. The great thinkers and scholars were all churchmen and included St Thomas Aquinas, Alcuin of York, Peter Lombard, Duns Scotus, and William of Occam. Learning was an important issue, because European knowledge had only just survived and only amongst remote religious centres, for example with the Irish monks. The pagan Anglo-Saxon invasion of Britain undid much of Britain's Roman culture and Christianity, which later returned via Ireland and France. The Merovingian King Clovis and 3,000 Franks were baptised Christians in 496 at Reims. Because Charlemagne was a traditional Frank, on his death that empire was divided by Frankish custom amongst his three grandsons in the 843 'Partition of Verdun'. That partition ended centralised imperial civilisation, but more importantly it separated France from the Osterreich (Germany and modern Austria), with Italy and the Low Countries in the middle kingdom. The result has been historical European conflict.
However, despite this split of state, the church had been established as the heart of God's kingdom on earth. In recollection of earlier beliefs in pagan gods, men took religion seriously. Pope Gregory revived popular church music and is recalled in the Gregorian chant. Most Europeans did not understand it, but they accepted the Catholic Church as infallible and ceded literacy and education to the abbeys and monasteries for the next five hundred years. This breadth of power explains the success in raising the Crusades and in controlling European kings. In a sense, the same depth of religious passion explains historical Mackenzie-support of the Catholic Jacobites against the Protestant English.
Muhammed inspired rapid Arab expansion in the Middle East, Asia, North Africa, and Spain during the period 632 to 732, and the Moors set in motion a perceived need for Europe to defend itself. Charles Martel stopped the invasion in October 732 at the Battle of Tours, by defeating an army of the Umayyad Caliphate led by ‘Abdul Rahman Al Ghafiqi, Governor-general of al-Andalus. But the real European strong man was Charlemagne, a descendant of Martel, whose Christian empire became a bulwark against the Moors and Arabs.
Just prior to Charlemagne's coronation in 800 AD, the Chinese invention of printing had increased interest in paper making, which in turn spread to Europe. Alcuin of York organised an education system at Tours and initiated a Carolingian renaissance. In 910 the Abbey of Cluny was founded. Europe was filled with powerful forces of change; the Viking raids had just begun, the Moors and Arabs were challenging in the Mediterranean, and the Magyars would sweep into soft Europe from the east in another one hundred years. In 911 the Viking Rollo was granted the Duchy of Normandy and in 929 the Muslim Caliphate of Córdoba was created. After Charlemagne the Frankish empire declined; the Germans having proved their discipline, Otto I was crowned Holy Roman Emperor in Rome in 962 to fill the vacuum.
The eighth and ninth century barbarian raiders were after the spoils of the Carolingian and former Roman empires; wealthy abbeys and cities alike were vulnerable. Many churches held valuable gifts from former patrons, collected over the centuries: they were the immediate targets of pagan pillage. Since churches were usually populated by unarmed clergy, that added to their vulnerability. The Vikings and Mongols were more feared than the Muslim Arabs.
The Danes and Norwegian Vikings were expert slave traders. The Viking slave trade was driven by the development of Afghani silver mines by the Caliph's Muslims. The Arabs' new-found wealth was a perfect solution for Scandinavian fur-traders, who soon discovered a market for white slaves from Europe. The Danes, Norwegians and Swedes were known by a variety of names to their multi-lingual customers. Vikings were called Northmen, or Normans, in much of Europe; Varangians by the Greek Byzantines; and also Rhos/Rus. The name Rus, like the Finnish name for Sweden, is perhaps derived from an Old Norse term for men who row (róðr), as rowing was the main method of navigating the Russian rivers. Rus may also be linked to the Swedish province of Roslagen (Rus-law) or Roden, from which most Varangians came. Rus might then have the same origin as the Finnish and Estonian names for Swedes: Ruotsi and Rootsi.
Central authority crumbled and European society was changed forever. These attacks were repelled by the organised Byzantines in the Mediterranean and fought by the Germans and English, but French and Russian resistance evaporated. By the twelfth century feudalism was established to provide protection and freemen were widely replaced by bound serfs. Christian Europe had evolved by 1600 while Islam expanded through northern Africa and across the southern half of Asia to the edge of China and the Malay States.
With the German repulse of the Magyars in 955 a slow economic recovery began in Europe. During the period from 950 to 1100, the Christian churches, abbeys and monasteries gained tremendous wealth and became a political battleground for power. European monarchs and the Pope fought to control the bishops. German kings appointed bishops as instruments of government. This monarchical assumption of divine right derived from the Roman Emperors' adoption of Christianity as a state religion and their assumed dual functions of Emperor and Pontifex Maximus. A series of Church Synods was held to develop tactics and enable the Papal crusades.
The period was marked by great religious fervour and the construction of perhaps two thousand cathedrals and abbeys in one hundred years. Chartres was built by 15,000 people in one generation! England, Hungary, France, and the Slavic nations began to form during the second half of the tenth century. In this unsettled period, feudalism grew into the glue which held nations and alliances together against the powerful threatening forces of the raiders. In 969 Cairo was founded as a city, while Islam continued to expand. The Byzantine Empire was forced to act to defend itself against its neighbours and annexed Bulgaria in 1018 for one hundred and fifty years. In c1000, the Vikings colonised Greenland and tried to establish a colony in Newfoundland. (Viking maps were extraordinarily accurate and may have incorporated Chinese cartographic details. The disputed Vinland Map has been confirmed as having been made in 1420-1440 and is owned by Yale University.) In c950, Normans began to fight in Sicily and near Sorrento, Italy, and they began to plan their advance on England in 1002, with the marriage of King Ethelred II to Emma of Normandy. There was a Christian crisis in 1054 as the Latin and Greek churches struggled for power, which led to the later Crusades.
In 1066, William the Conqueror led his Normans in the conquest of England; he was a friend of our first direct British ancestor, Sir Other FitzOthoere, and in 1078 he built the Tower of London in his new capital. In 1030 a Norman was created Lord of Aversa, near Naples, Italy. Some of the south-eastern territory of Spoleto, namely Abruzzo, became part of the Norman Kingdom of Naples in the first half of the twelfth century. The Turks took Baghdad and defeated the Byzantines, and the Moors ruled in Spain. Gregory VII was elected Pope and saw the danger to Christianity both in the kings who fought each other and wanted to control his Church, and in the outside invaders. Without hesitation he chose to challenge the kings for power and control of the bishops. In 1096, responding to Pope Urban II's challenge, the Normans and French invaded Turkey and Syria and established the Crusader states. In c1100 the first major universities were established at Salerno, Bologna and Paris, and Chartres cathedral, completed in 1221, spread Gothic architecture.
In 1125, Germans pushed eastward and in 1154 Henry II became King of England and established the Angevin Empire in France and England. In 1171 Saladin conquered Egypt and then the Crusader kingdom in 1187. Religious rebellion was challenged in a French crusade against the Albigensian heresy of celibacy and dualism (derived from the Persian Mani, c250 AD). The Manichean concept travelled with the Moors to Spain and southern France, and non-spiritual aspects were deemed evil. The Albigensians were the early Protestants.
While the Crusades continued, Europeans began to develop poetry, Batu Khan's Mongols began to move into Europe, and Hülegü Khan conquered much of Asia. In 1215, the Magna Carta forced English King John to share power with his barons. This led directly to hereditary monarchies in Europe and the evolution of the feudal system, which gradually reserved felony law to the king. During the next fifty years, Thomas de Boulton was appointed sheriff of Lincolnshire in 1263, while the Mongols conquered China, Russia, Poland, Hungary, Bohemia and Persia, and Alexander Nievskiy fought off the German Teutonic Knights. Marco Polo visited Khubilai Khan's China, arriving in 1275; spectacles were invented in Italy; and Switzerland came into being.
The struggle for power between the Italian City States and the Pope was then at its height. Continuing the monarchical fight for power with the popes, King Philip IV of France hounded the wealthy Templars out of existence. Philip then established Clement V as Pope in a failed attempt to control him, while the Papacy moved to Avignon in 1309. In 1314, The Bruce defeated the English at Bannockburn and Ivan I began to re-establish Moscow and built the Kremlin. The English struggle in France began the Hundred Years War in 1337, and then the Black Death ravaged Europe, killing one-third of the population and remembered in the macabre children's rhyme:
Ring around the Rosie, a pocket full of posie
The Plantagenets of England claimed to be the rightful kings of both France and England. The Plantagenets, also known as the House of Anjou, had their roots in the French regions of Anjou and Normandy. French armies fought on both sides, with Burgundy and Aquitaine providing notable support for the Plantagenets. The War spanned 116 years, though active fighting occupied only about 81 of them. The English lost all but the area of Calais. The War devastated France as a land, but it also awakened French nationalism. The Hundred Years' War accelerated the evolution of France from a feudal monarchy to a centralised state.
Marco Polo had visited China one hundred years earlier and brought back fabulous stories. In 1386, the Poles and Lithuanians united and the Lithuanians converted to Christianity. Ten years later the Scandinavians were united under the Union of Kalmar. Tamerlane invaded India, the Ottomans gained the Balkans, and then Tamerlane defeated the Ottomans in Anatolia. Chaucer, the first great English poet, died in 1400. While the Chinese began to navigate the Indian Ocean, the Poles defeated the Teutonic Knights at the Battle of Tannenberg in 1410 and Henry V of England beat the French at the Battle of Agincourt. Jeanne d'Arc led the French revival in 1428, and Johannes Gutenberg printed the first book in 1445. England lost most of her continental lands in 1453, the same year that the Turks ended the Byzantine Empire. In 1478, Tsar Ivan III captured Novgorod and finally expelled the Mongols. This improved Baltic Hanseatic League trade which, despite poor European roads, led to economic and population growth. The principal European cities, except Paris, were then all in the south at Naples, Venice, Milan and Constantinople.
Monarchical recovery was under way in Spain: after El Cid, the new kingdoms of Castille and Aragon repulsed the Arabs at Córdoba in 1236 and Cádiz in 1262. In 1492 Granada fell, ending Muslim rule in Spain. This initiated the Inquisition, marked the Jewish expulsion, began Spanish attacks on North Africa and of course marked Columbus' daring voyage to America and heralded John Cabot's arrival in Nova Scotia. The Spanish and Portuguese divided the world between them in 1493, leading to the Spanish Americas and Portuguese colonies in Brazil, East Africa and Asia, while the Habsburgs struggled with France for European hegemony.
The evolution of a powerful France, the growth of the middle class and peace created a great wealth to exploit new learning and artistic techniques. The era exploded in a frenzy of construction and expanded into the Renaissance. By this time a quiet revolution in technology had been completed. During the fifteenth century Europeans profited from their cumulative inheritance from the Crusades, including knowledge of other peoples and customs, ideas and education, travel, art and construction, new wealth and new social classes. Perhaps most importantly Europeans gained a sense of national identity. These factors propelled thinking forward to challenge: "...Beyond this place there be dragons..." Why should there be a limit to the world? Why should man not sail west? Navigation skills and new technologies developed, fundamentally changing the world after 1500.
The Italian Renaissance began in 1500 encouraging artists like Michelangelo, Leonardo da Vinci and Raphael and thinkers like Machiavelli. Modern diplomacy evolved to aid in the development of political relationships amongst the Italian City States. The slave trade began to bring cheap labour to develop America and the Caribbean, where the natives were decimated by European disease. Ten years later Martin Luther initiated the reformation of the Roman Catholic Church and changed Western faith. While Magellan explored the Pacific, Ottoman Turks and Suleiman the Magnificent reached their peak of power in capturing Belgrade in 1521.
For the next three hundred years Europeans were secure and imposed European culture and values on the world. England, soon to be called Britain, was well placed to exploit the opportunities. In 1500, the Spanish settled Hispaniola and thereby created a slave labour market. The newly arrived Spaniards, and other Europeans, killed off 90% of the local natives, who died from their exposure to European diseases.
The largest and wealthiest country was Ming China with 100 million people, and Islam was the fastest-expanding religion. The area physically occupied by major civilisations was small, limited to plough cultivation; and three quarters of the world was inhabited by hunter-gatherers or hand cultivators. By 1800 all that had changed forever, with new European empires and Christianity spread (initially by the Portuguese and Spanish) around the globe. This European spread led to the migration of animals, like horses and sheep, and food. The plough was very productive and European food production increases led to modern population growth. Grain became a global staple; American tobacco, cotton and the rapidly growing sugar cane crops all led to world trade.
The age of exploration was rooted in new technologies and ideas growing out of the Renaissance. These included advances in cartography, navigation, firepower and shipbuilding. Many people wanted to find a route to Asia by sailing west from Europe. The most important development was the invention of first the carrack and then the caravel in Portugal. These vessels evolved from mediaeval European designs, a fruitful combination of Mediterranean and North Sea types with the addition of some Arabic elements. They were the first ships that could leave the relatively placid Mediterranean and sail safely on the open Atlantic.
It was not until the carrack and then the caravel were developed in Iberia that European thoughts turned to the fabled East. The European economy was dependent on gold and silver currency, but low domestic supplies had plunged much of Europe into a recession. Another factor was the centuries-long conflict between the Iberians and the Muslims to the south. The eastern trade routes were controlled by the Ottoman Empire after the Turks took Constantinople in 1453, and they barred Europeans from those trade routes. The ability to outflank the Muslim states of North Africa was seen as crucial to Iberian survival. At the same time, the Iberians learnt much from their Arab neighbours. The carrack and caravel both incorporated the Arab lateen sail that made ships far more manoeuvrable. It was also through the Arabs that Ancient Greek geography was rediscovered, for the first time giving European sailors some idea of the shape of Africa and Asia.
The first great wave of expeditions was launched by Portugal under Prince Henry the Navigator. Sailing out into the open Atlantic the Madeira Islands were discovered in 1419, and in 1427 the Azores, both becoming Portuguese colonies. Henry's main project was exploration of the West Coast of Africa. For centuries the only trade routes linking West Africa with the Mediterranean world were over the Sahara Desert. These routes bringing slaves and gold were controlled by the Muslims of North Africa, long rivals to Portugal and Spain. It was the Portuguese hope that the Islamic nations could be bypassed by trading directly with West Africa by sea.
A series of bold discoverers joined Christopher Columbus in exploring the world: Vasco da Gama, John Cabot, Jacques Cartier, Francis Drake, Ferdinand de Magellan, John Davis and James Cook amongst many others. Exploration in the New World led to more slavery to work the new lands and the cross migration of animals and plants. The South American potato was introduced to Europe in 1525, while the Europeans introduced new diseases to the natives, killing off tens of millions. Cortés and Pizarro conquered the Aztec and Inca Empires and caused a river of silver and gold to flow to Spain.
The Tudors of England broke feudalism and also, in 1534, Roman control of English Catholicism. John Calvin initiated a reform church in Geneva seven years later. Ivan IV expanded Russian Muscovy into the Volga River basin, a push accelerated one hundred and fifty years later by Peter the Great. Religious wars, lasting thirty-six years, began in France while tobacco was introduced to Europe. Spain broke Turkish sea power in the Mediterranean in 1571, and the Dutch revolted against Spanish rule and gained independence in 1609. The English, led by Sir Francis Drake, defeated the Spanish Armada in 1588, no doubt with Divine assistance, since the wind was an English ally.
As early as 1510 slaves were brought to America to replace the natives, as many as 90% of whom were killed by diseases brought by Spanish Conquistadors. The majority of the slaves came from West Africa, tied and branded like cattle, on a sea voyage that was truly traumatic. Ship holds were described as slaughterhouses and slaves might not see daylight for one month. They were destined to work until dead. Although the Arabs had initiated the African slave trade in the eighth century, this forced migration was terrifying as most Africans had never seen the sea, came from advanced inland political and social cultures, and were brutally handled. Tribal African chiefs sold most slaves and the Portuguese had an initial monopoly over the Atlantic slave trade. Some slaves believed the white men were cannibals. They were not far wrong!
In 1600, both the Dutch and English East India Companies formed and in 1607 the first settlers arrived in Jamestown, Virginia; the next year French colonists and Jesuit missionaries arrived in Québec. About 1610, a scientific revolution began in Europe with men like Kepler, Bacon, Galileo and Descartes. Shakespeare died in 1616, two years before the start of the Thirty Years War, which was ended by the Peace of Westphalia and the invention of the concepts of 'nation and state'.
The Puritans landed in Massachusetts in 1620, as newspapers began in Amsterdam. Five years later, the Dutch arrived in New Amsterdam, later called New York. Russia expanded to reach the Pacific in 1638 and four years later the English Civil War began. Dutch perfection in art was achieved with Rembrandt, Vermeer and Rubens, while Harvard was founded in 1636. The Taj Mahal and St Peter's were finished 20 years later.
Empires and Republics
In the 1650s, the Anglo-Dutch wars broke Dutch power; Poland lost the Ukraine to Russia, while Swedish power was at its height. Classical French culture began as the Sun King, Louis XIV, expanded French power while fighting with England and Austria and then a 1689 coalition of states. The Turks arrived at the doors of Vienna in 1683 and Sir Isaac Newton wrote his scientific Principia in 1687, the year before Constitutional Monarchy was instituted in England. With Polish help, the Habsburgs recovered Vienna and Hungary from the Turks. English and Scottish Union became law, Peter the Great founded St. Petersburg and defeated the Swedes, and the 1713 Treaty of Utrecht ended Marlborough's Wars of Spanish Succession, using Scots in the British army for the first time. All of this floated in Europe on the baroque music of Handel and Bach.
Renewal of fighting in Ohio in 1754 began the final hegemonic struggle between Britain and France, the Seven Years' War. This resulted in wide-spread British victories at Québec, in the Caribbean at the Windward Islands, and in India at Plassey and Pondicherry. The Treaty of Paris signalled the French defeat in 1763, removing a unifying threat to colonial America. The French Period of Enlightenment began in parallel. Captain James Cook explored the Pacific and the Ottoman Empire declined, leaving an insatiable Russia to annex the Crimea. Science advanced with Watt's steam engine, and Lavoisier's and Priestley's theories. Despite the advice of men like Sir William Johnson, British insensitivities and growing colonial independence led to the American Revolution and the War of Independence in 1775. Britain recognised American Independence in the Treaty of Paris of 1783 and six years later George Washington became the first President of the new United States of America. Unlike the old European states, this new state was based upon a constitution, which incorporated idealism and a series of fundamental 'Rights of Man'.
Publication of powerful theses, The Wealth of Nations by Adam Smith, Common Sense by Tom Paine, and Critique of Pure Reason by Immanuel Kant gave impetus to new ideas; and in 1789, the French Revolution began and Europe was changed forever. The feudal system was formally abolished and the rights of man enshrined as the French Republic was proclaimed in 1792. Napoleon seized power in 1799 and the Napoleonic Wars began in earnest in 1805 with his defeat of Austria and then Prussia, although Admiral Nelson won a reprieve for Britain at the 1805 Battle of Trafalgar.
With Napoleon defeated at Waterloo in 1815, the Congress of Vienna determined the future of France and the changed Europe. Castlereagh and Metternich proposed their separate agendas (to balance power in Europe, and to maintain the European monarchies). The Louisiana Purchase (not the current state boundaries, but essentially all of Western America) doubled America in size. This land was bought from the desperate French, who needed the money to prop up their state.
Pressed by the Napoleonic Wars, Britain ignored Tom Paine's advice of citizen rights, pushed parochial hegemony, snubbed American trade and made the tranquil Jefferson and the Americans cry for the War of 1812. The French and American republican experiences led to the beginning of the end of old European ways and the 1848 European Revolutions in Bismarck's Germany and Garibaldi's Italy - amongst others.
3 Modern Indonesia is the most populous Muslim country, with a 2008 population of ~235 million, of whom ~200 million are Muslims. The global total number of Muslims in 2008 was estimated at 1.6 billion.
4 The initial objection to the map's authenticity was a flawed assessment that the ink used was not historically available at the time. This has since been refuted. The parchment has been the basis of confirmation of the map's age. The remaining objection appears to rest on the necessary prior circumnavigation of Greenland. It has been confirmed that the Chinese mapped the Ellesmere Island and Greenland passage during Admiral Zhou Wen's circumnavigation of Greenland in c1423. See Gavin Menzies, 1421, pp. 345-357. It will be noted that while northern Greenland is today locked in ice, there has been confirmation that the summers of 1422-1428 were exceptionally warm, prior to the mini ice age. See Menzies, op. cit., pp. 479-480. The Chinese claimed access to the area and DNA evidence confirms historical Chinese inter-marriage with the local natives.
13 Columbus' 1492 voyage coincides with the Inquisition, perhaps most popular in Catholic Spain where the Jews were persecuted, and the Sephardic Jewish arrival in north Africa and Turkey. The astrolabe was a key example of technological development and Sieur de Champlain's astrolabe was discovered near Ottawa.
14 Sir Francis Drake may have reached Vancouver Island in 1579, inspiring Cook and Vancouver's later voyages. Cook is notable for the health of his crews, who were free from potentially fatal scurvy, or vitamin C deficiency. He maintained stern discipline, good nutrition and cleanliness. (See Christopher Bayly, Atlas of the British Empire, p. 59.) By 1795, British sailors were issued with oranges, lemons and limes, from which they earned the label 'Limeys'.
17 See C.V. Wedgwood, The Thirty Years War, p. 375. Charlemagne had created the Counts Palatine with a specific task to care for his palaces and castles used when the Holy Roman Emperor was travelling.
19 Hans J. Morgenthau, Politics Among Nations, the Struggle for Power and Peace, p. 192. He quotes verbatim Czar Nicholas II, who telegraphed his cousin George V of Great Britain and asked for help to "...maintain the balance of power..." He also quotes Prime Minister Churchill at pp. 196-197. Here, in 1936, Churchill explained to the Conservative Committee on Foreign Affairs: "...For four hundred years the foreign policy of England has been to oppose the strongest, most aggressive, most dominating power on the continent...[taking] no account of which nation... seeks the overlordship of Europe."
21 Russia's Czar Alexander I, the Austrian Prince Metternich and the English Lord Castlereagh were defeated in their attempt to reverse the republican liberal ideal and Benedetto Croce (amongst others) termed the Congress of Vienna era 'a victory over absolutism'. See Croce, History of Europe in the Nineteenth Century, p. 58. The title Czar is the Russian for Caesar.
Quantum computing is an exciting and rapidly-evolving field that has the potential to revolutionize the way we process and analyze data.
Unlike traditional computers, which use binary digits (bits) to represent information, quantum computers use quantum bits, or qubits. This allows them to perform certain types of computations much faster and more efficiently than classical computers.
One of the key features of quantum computing is the ability to harness the properties of quantum mechanics, such as superposition and entanglement, to perform operations.
In a classical computer, a bit can only be in one of two states: 0 or 1. In a quantum computer, a qubit can exist in a superposition of states, meaning it can be both 0 and 1 at the same time. This allows quantum computers to perform multiple calculations simultaneously, greatly increasing their computational power.
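The amplitude picture behind superposition can be sketched in a few lines of plain Python. This is an illustrative model only; the state representation and names below are our own, not from any quantum computing library:

```python
import math

# A qubit is a pair of complex amplitudes (a, b) over the basis states |0> and |1>,
# normalised so that |a|^2 + |b|^2 = 1.
ket_zero = (1 + 0j, 0 + 0j)                      # classical-like state |0>
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))      # equal superposition (|0> + |1>)/sqrt(2)

def measure_probs(state):
    """Probabilities of reading 0 or 1 when the qubit is measured."""
    a, b = state
    return abs(a) ** 2, abs(b) ** 2

print(measure_probs(ket_zero))  # (1.0, 0.0): always reads 0
print(measure_probs(plus))      # ≈ (0.5, 0.5): either outcome equally likely
```

Until measurement, both amplitudes evolve together, which is what lets a quantum algorithm act on many basis states at once.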
Another important property of quantum computing is entanglement, which allows qubits to be linked together in such a way that the state of one qubit is dependent on the state of another. This allows quantum computers to perform certain types of calculations, such as factorization, much faster than classical computers.
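The correlation entanglement produces can also be sketched in plain Python by writing a two-qubit state as four complex amplitudes over |00>, |01>, |10> and |11>. In the Bell state below, the amplitudes for |01> and |10> are zero, so whatever the first qubit measures, the second must match. (Again an illustrative sketch with names of our own choosing, not a library API.)

```python
import math

# Two-qubit state: amplitudes over the basis |00>, |01>, |10>, |11>.
bell = (1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2))  # (|00> + |11>)/sqrt(2)

def outcome_probs(state):
    """Map each two-bit measurement outcome to its probability."""
    return {format(i, "02b"): abs(amp) ** 2 for i, amp in enumerate(state)}

probs = outcome_probs(bell)
# Only the correlated outcomes survive: 00 and 11 each with probability ~1/2,
# while 01 and 10 have probability 0 -- measuring one qubit fixes the other.
print(probs)
```

The key point is that this state cannot be written as two independent one-qubit states; the qubits only have a joint description.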
Despite the enormous potential of quantum computing, it is still a relatively new field and there are many challenges that must be overcome before it can be fully realized.
One major challenge is the issue of decoherence, which occurs when the delicate quantum state of a qubit is disturbed by its environment. This can cause errors in the computation and limit the lifetime of a qubit.
Another challenge is the need for specialized hardware and software to run quantum algorithms. Currently, most quantum computations are run on specialized quantum processors, which are still quite expensive and not widely available.
Additionally, developing software that can effectively utilize the unique properties of quantum computing is still a work in progress.
Despite these challenges, there is a growing number of companies and research institutions that are investing in quantum computing.
Many believe that it has the potential to solve problems that are currently unsolvable by classical computers, such as simulating complex chemical reactions, breaking encryption codes, and solving optimization problems.
In conclusion, quantum computing is a cutting-edge technology that holds great promise for the future of computing.
While there are still many challenges that need to be overcome, the potential benefits of quantum computing make it an area worth exploring.
As the technology continues to evolve and improve, we can expect to see more and more applications of quantum computing in a variety of fields.
Japanese is a synthetic language with a regular agglutinative subject-object-verb (SOV) morphology, with both productive and fixed elements. In language typology, it has many features divergent from most European languages. Its phrases are exclusively head-final and compound sentences are exclusively left-branching. Many of the world's languages share these traits, but few European languages do. It is a topic-prominent language.
- 1 Some distinctive aspects of modern Japanese sentence structure
- 2 Sentences, phrases and words
- 3 Word classification
- 4 Nouns
- 5 Conjugable words
- 6 Other independent words
- 7 Ancillary words
- 7.1 Particles
- 7.1.1 Topic, theme, and subject: は wa and が ga
- 7.1.2 Objects, locatives, instrumentals: を o, で de, に ni, へ e
- 7.1.3 Quantity and extents: と to, も mo, か ka, や ya, から kara, まで made
- 7.1.4 Coordinating: と to, に ni, よ yo
- 7.1.5 Final: か ka, ね ne, よ yo and related
- 7.1.6 Compound particles
- 7.2 Auxiliary verbs
- 8 References
- 9 Bibliography
- 10 Further reading
- 11 External links
Some distinctive aspects of modern Japanese sentence structure
Word order: head final and left branching
The modern theory of constituent order ("word order"), usually attributed to Joseph Greenberg, identifies several kinds of phrases. Each one has a head and possibly a modifier. The head of a phrase either precedes its modifier (head initial) or follows it (head final). Some of these phrase types, with the head marked in boldface, are:
- genitive phrase, i.e., noun modified by another noun ("the cover of the book", "the book's cover");
- noun governed by an adposition ("on the table", "underneath the table");
- comparison ("[X is] bigger than Y", i.e., "compared to Y, X is big").
- noun modified by an adjective ("black cat").
Some languages are inconsistent in constituent order, having a mixture of head initial phrase types and head final phrase types. Looking at the preceding list, English for example is mostly head initial, but nouns follow the adjectives which modify them. Moreover, genitive phrases can be either head initial or head final in English. Japanese, by contrast, is the epitome of a head final language:
- genitive phrase: "猫の色" (neko no iro), cat GEN color = "the cat's (neko no) color (iro)";
- noun governed by an adposition: "日本に" (nihon ni), Japan in = "in Japan";
- comparison: "Yより大きい" (Y yori ookii), Y than big = "bigger than Y";
- noun modified by an adjective: "黒猫" (kuro neko) = "black cat".
Head finality in Japanese sentence structure carries over to the building of sentences using other sentences. In sentences that have other sentences as constituents, the subordinated sentences (relative clauses, for example), always precede what they refer to, since they are modifiers and what they modify has the syntactic status of phrasal head. Translating the phrase the man who was walking down the street into Japanese word order would be street down walking was man. (Note that Japanese has no articles, and the different word order obviates any need for the relative pronoun who.)
Head finality prevails also when sentences are coordinated instead of subordinated. In the world's languages, it is common to avoid repetition between coordinated clauses by optionally deleting a constituent common to the two parts, as in Bob bought his mother some flowers and his father a tie, where the second bought is omitted. In Japanese, such "gapping" must proceed in the reverse order: Bob mother for some flowers and father for tie bought. The reason for this is that in Japanese, sentences (other than occasional inverted sentences or sentences containing afterthoughts) always end in a verb (or other predicative words like adjectival verbs, adjectival nouns, auxiliary verbs)—the only exceptions being a few sentence-ending particles such as ka, ne, and yo. The particle ka turns a statement into a question, while the others express the speaker's attitude towards the statement.
Word class system
Japanese has five major lexical word classes:
- nouns
- verbal nouns (correspond to English gerunds like 'studying', 'jumping', which denote activities)
- nominal adjectives (names vary, also called na-adjectives or "adjectival nouns")
- verbs
- adjectives (so-called i-adjectives)
More broadly, there are two classes: uninflectable (nouns, including verbal nouns and adjectival nouns) and inflectable (verbs, with adjectives as defective verbs). To be precise, a verbal noun is simply a noun to which suru (する, "do") can be appended, while an adjectival noun is like a noun but uses -na (〜な) instead of -no (〜の) when acting attributively. Adjectives (i-adjectives) inflect identically to the negative form of verbs, which end in na-i (ない). Compare tabe-na-i (食べない, don't eat) → tabe-na-katta (食べなかった, didn't eat) and atsu-i (熱い, is hot) → atsu-katta (熱かった, was hot).
Some scholars, such as Eleanor Harz Jorden, refer to adjectives instead as adjectivals, since they are grammatically distinct from adjectives: they can predicate a sentence. That is, atsui (熱い) is glossed as "hot" when modifying a noun phrase, as in atsui gohan (熱いご飯, hot food), but as "is hot" when predicating, as in gohan wa atsui (ご飯は熱い, [the] food is hot).
The two inflected classes, verb and adjective, are closed classes, meaning they do not readily gain new members. Instead, new and borrowed verbs and adjectives are conjugated periphrastically as verbal noun + suru (e.g. benkyō suru (勉強する, do studying; study)) and adjectival noun + na. This differs from Indo-European languages, where verbs and adjectives are open classes, though analogous "do" constructions exist, including English "do a favor", "do the twist" or French "faire un footing" (do a "footing", go for a jog), and periphrastic constructions are common for other senses, like "try climbing" (verbal noun) or "try parkour" (noun). Other languages where verbs are a closed class include Basque: new Basque verbs are only formed periphrastically. Conversely, pronouns are closed classes in Western languages but open classes in Japanese and some other East Asian languages.
In a few cases new verbs are created by appending -ru (〜る) to a noun or using it to replace the end of a word. This is most often done with borrowed words, and results in a word written in a mixture of katakana (stem) and hiragana (inflectional ending), which is otherwise very rare. This is typically casual, with the most well-established example being sabo-ru (サボる, cut class; play hooky) (circa 1920), from sabotāju (サボタージュ, sabotage), with other common examples including memo-ru (メモる, write a memo), from memo (メモ, memo), and misu-ru (ミスる, make a mistake) from misu (ミス, mistake). In cases where the borrowed word already ends with a ru (ル), this may be punned to a ru (る), as in gugu-ru (ググる, to google), from Google (グーグル), and dabu-ru (ダブる, to double), from daburu (ダブル, double).
New adjectives are extremely rare; one example is kiiro-i (黄色い, yellow), from adjectival noun kiiro (黄色), and a more casual recent example is kimo-i (きもい, gross), by contraction of kimochi waru-i (気持ち悪い, bad-feeling). By contrast, in Old Japanese -shiki (〜しき) adjectives (precursors of present i-adjectives ending in -shi-i (〜しい), formerly a different word class) were open, as reflected in words like ita-ita-shi-i (痛々しい, pitiful), from the adjective ita-i (痛い, painful, hurt), and kō-gō-shi-i (神々しい, heavenly, sublime), from the noun kami (神, god) (with sound change). Japanese adjectives are unusual in being closed class but quite numerous – about 700 adjectives – while most languages with closed class adjectives have very few. Some believe this is due to a grammatical change of inflection from an aspect system to a tense system, with adjectives predating the change.
The conjugation of i-adjectives has similarities to the conjugation of verbs, unlike Western languages where inflection of adjectives, where it exists, is more likely to have similarities to the declension of nouns. Verbs and adjectives being closely related is unusual from the perspective of English, but is a common case across languages generally, and one may consider Japanese adjectives as a kind of stative verb.
Japanese vocabulary has a large layer of Chinese loanwords, nearly all of which go back more than one thousand years, yet virtually none of them are verbs or "i-adjectives" – they are all nouns, of which some are verbal nouns (suru) and some are adjectival nouns (na). In addition to the basic verbal noun + suru form, verbal nouns with a single-character root often experienced sound changes, such as -suru (〜する) → -zuru (〜ずる) → -jiru (〜じる), as in kin-jiru (禁じる, forbid), and some cases where the stem underwent sound change, as in tassuru (達する, reach), from tatsu (達).
Verbal nouns are uncontroversially nouns, having only minor syntactic differences to distinguish them from pure nouns like 'mountain'. There are some minor distinctions within verbal nouns, most notably that some primarily conjugate as -wo suru (〜をする) (with a particle), more like nouns, while others primarily conjugate as -suru (〜する), and others are common either way. For example, keiken wo suru (経験をする, to experience) is much more common than keiken suru (経験する), while kanben suru (勘弁する, to pardon) is much more common than kanben wo suru (勘弁をする). Nominal adjectives have more syntactic differences versus pure nouns, and traditionally were considered more separate, but they, too, are ultimately a subcategory of nouns.
There are a few minor word classes that are related to adjectival nouns, namely the taru adjectives and naru adjectives. Of these, naru adjectives are fossils of earlier forms of na adjectives (the nari adjectives of Old Japanese), and are typically classed separately, while taru adjectives are a parallel class (formerly tari adjectives in Late Old Japanese), but are typically classed with na adjectives.
Japanese as a topic-prominent language
In discourse pragmatics, the term topic refers to what a section of discourse is about. At the beginning of a section of discourse, the topic is usually unknown, in which case it is usually necessary to explicitly mention it. As the discourse carries on, the topic need not be the grammatical subject of each new sentence.
Starting with Middle Japanese, the grammar evolved so as to explicitly distinguish topics from nontopics. This is done by two distinct particles (short words which do not change form). Consider the following pair of sentences:
- taiyō ga noboru
- sun NONTOPIC rise
- taiyō wa noboru
- sun TOPIC rise
Both sentences translate as "the sun rises". In the first sentence the sun (太陽 taiyō) is not a discourse topic—not yet; in the second sentence it now is a discourse topic. In linguistics (specifically, in discourse pragmatics) a sentence such as the second one (with wa) is termed a presentational sentence because its function in the discourse is to present sun as a topic, to "broach it for discussion". Once a referent has been established as the topic of the current monolog or dialog, then in (formal) modern Japanese its marking will change from ga to wa. To better explain the difference, the translation of the second sentence can be enlarged to "As for the sun, it rises" or "Speaking of the sun, it rises"; these renderings reflect a discourse fragment in which "the sun" is being established as the topic of an extended discussion.
Liberal omission of the subject of a sentence
- nihon ni ikimashita
- Japan LATIVE go-POLITE-PERFECT
The sentence literally expresses "went to Japan". Subjects are mentioned when a topic is introduced, or in situations where an ambiguity might result from their omission. The preceding example sentence would most likely be uttered in the middle of a discourse, where who it is that "went to Japan" will be clear from what has already been said (or written).
Sentences, phrases and words
Text (文章 bunshō) is composed of sentences (文 bun), which are in turn composed of phrases (文節 bunsetsu), which are its smallest coherent components. Like Chinese and classical Korean, written Japanese does not typically demarcate words with spaces; its agglutinative nature further makes the concept of a word rather different from words in English. The reader identifies word divisions by semantic cues and a knowledge of phrase structure. Phrases have a single meaning-bearing word, followed by a string of suffixes, auxiliary verbs and particles to modify its meaning and designate its grammatical role. In the following example, phrases are indicated by vertical bars:
- taiyō ga | higashi no | sora ni | noboru
- sun SUBJECT | east POSSESSIVE | sky LOCATIVE | rise
- The sun rises in the eastern sky.
Some scholars romanize Japanese sentences by inserting spaces only at phrase boundaries (i.e., "taiyō-ga higashi-no sora-ni noboru"), treating an entire phrase as a single word. This represents an almost purely phonological conception of where one word ends and the next begins. There is some validity in taking this approach: phonologically, the postpositional particles merge with the structural word that precedes them, and within a phonological phrase, the pitch can have at most one fall. Usually, however, grammarians adopt a more conventional concept of word (単語 tango), one which invokes meaning and sentence structure.
In linguistics generally, words and affixes are often classified into two major word categories: lexical words, those that refer to the world outside of a discourse, and function words—also including fragments of words—which help to build the sentence in accordance with the grammar rules of the language. Lexical words include nouns, verbs, adjectives, adverbs, and sometimes prepositions and postpositions, while grammatical words or word parts include everything else. The native tradition in Japanese grammar scholarship seems to concur in this view of classification. This native Japanese tradition uses the terminology jiritsugo (自立語), "independent words", for words having lexical meaning, and fuzokugo (付属語), "ancillary words", for words having a grammatical function.
Classical Japanese had some auxiliary verbs (i.e., they were independent words) which have become grammaticized in modern Japanese as inflectional suffixes, such as the past tense suffix -ta (which might have developed as a contraction of -te ari).
Traditional scholarship proposes a system of word classes differing somewhat from the above-mentioned. The "independent" words have the following categories.
- katsuyōgo (活用語), word classes which have inflections
- dōshi (動詞), verbs,
- keiyōshi (形容詞), i-type adjectives.
- keiyōdōshi (形容動詞), na-type adjectives
- hikatsuyōgo (非活用語) or mukatsuyōgo (無活用語), word classes which do not have inflections
- meishi (名詞), nouns
- daimeishi (代名詞), pronouns
- fukushi (副詞), adverbs
- setsuzokushi (接続詞), conjunctions
- kandōshi (感動詞), interjections
- rentaishi (連体詞), prenominals
Ancillary words also divide into a nonconjugable class, containing grammatical particles (助詞 joshi) and counter words (助数詞 josūshi), and a conjugable class consisting of auxiliary verbs (助動詞 jodōshi). There is not wide agreement among linguists as to the English translations of the above terms.
Controversy over the characterization of nominal adjectives
Uehara (1998) observes that Japanese grammarians have disagreed as to the criteria that make some words "inflectional", katsuyō, and others not, in particular, the 形容動詞 keiyōdōshi – "na-adjectives" or "na-nominals". (It is not disputed that nouns like 'book' and 'mountain' are noninflectional and that verbs and i-adjectives are inflectional.) The claim that na-adjectives are inflectional rests on the claim that the syllable da 'is', usually regarded as a "copula verb", is really a suffix—an inflection. Thus hon 'book', combined with da, would generate a one-word sentence, honda 'it is a book', rather than a two-word sentence, hon da. However, numerous constructions seem to be incompatible with the suffixal copula claim.
- (1) Reduplication for emphasis
- Hora! Hon, hon! 'See, it is a book!'
- Hora! Kirei, kirei! 'See, it is pretty!'
- Hora! Furui, furui! 'See, it is old!' (the adjectival inflection -i cannot be left off)
- Hora! Iku, iku! 'See, it does go!' (the verbal inflection -u cannot be left off)
- (2) Questions. In Japanese, questions are formed by adding the particle ka (or in colloquial speech, just by changing the intonation of the sentence).
- Hon/kirei ka? 'Is it a book? ; Is it pretty?'
- Furu-i/Ik-u ka? 'Is it old? ; Does it go?' (the inflections cannot be left off)
- (3) Several auxiliary verbs, e.g., mitai, 'looks like it's'
- Hon mitai da; Kirei mitai da 'It seems to be a book; It seems to be pretty'
- Furu-i mitai da; Ik-u mitai da 'It seems to be old; It seems to go'
On the basis of such constructions, Uehara (1998) finds that the copula is indeed an independent word, and that regarding the parameters on which i-adjectives share the syntactic pattern of verbs, the nominal adjectives pattern with pure nouns instead.
Japanese has no grammatical gender, number, or articles (though the demonstrative その, sono, "that, those", is often translatable as "the"). Thus, specialists have agreed that Japanese nouns are noninflecting: 猫 neko can be translated as "cat", "cats", "a cat", "the cat", "some cats" and so forth, depending on context. However, as part of the extensive pair of grammatical systems that Japanese possesses for honorification (making discourse deferential to the addressee or even to a third party) and politeness, nouns too can be modified. Nouns take politeness prefixes (which have not been regarded as inflections): o- for native nouns, and go- for Sino-Japanese nouns. A few examples are given in the following table. In a few cases, there is suppletion, as with the first of the examples given below, 'rice'. (Note that while these prefixes are almost always in hiragana — that is, as お o- or ご go- — the kanji 御 is used for both o and go prefixes in formal writing.)
|meaning||plain||with politeness prefix|
|meal||飯 meshi||ご飯 go-han|
|money||金 kane||お金 o-kane|
|body||体 karada||お体 o-karada|
|word(s)||言葉 kotoba||お言葉 o-kotoba|
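The prefix rule described above can be sketched in code. This is a minimal illustrative sketch, not a real library: the names `NOUN_ORIGIN` and `polite` are invented for this example, each noun's etymology is tagged by hand (it cannot be computed from the romaji string), and suppletive cases like meshi → go-han are deliberately excluded.

```python
# Hypothetical sketch of the politeness-prefix rule: native nouns take o-,
# Sino-Japanese nouns take go-. Etymology must be supplied by hand.
NOUN_ORIGIN = {
    "kane": "native",    # 金, money     -> o-kane
    "karada": "native",  # 体, body      -> o-karada
    "kotoba": "native",  # 言葉, word(s) -> o-kotoba
}

def polite(noun: str) -> str:
    """Attach the politeness prefix appropriate to the noun's origin."""
    prefix = "o-" if NOUN_ORIGIN[noun] == "native" else "go-"
    return prefix + noun

print(polite("kane"))  # o-kane
```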
Lacking number, Japanese does not differentiate between count and mass nouns. (An English speaker learning Japanese would be well advised to treat Japanese nouns as mass nouns.) A small number of nouns have collectives formed by reduplication (possibly accompanied by voicing and related processes (rendaku)); for example: hito 'person' and hitobito 'people'. Reduplication is not productive. Words in Japanese referring to more than one of something are collectives, not plurals. Hitobito, for example, means "a lot of people" or "people in general". It is never used to mean "two people". A phrase like edo no hitobito would be taken to mean "the people of Edo", or "the population of Edo", not "two people from Edo" or even "a few people from Edo". Similarly, yamayama means "many mountains".
A limited number of nouns have collective forms that refer to groups of people. Examples include watashi-tachi, 'we'; anata-tachi, 'you (plural)'; bokura, 'we (less formal, more masculine)'. One uncommon personal noun, ware, 'I', or in some cases, 'you', has a much more common reduplicative collective form wareware 'we'.
The suffixes -tachi (達) and -ra (等) are by far the most common collectivizing suffixes. These are, again, not pluralizing suffixes: tarō-tachi does not mean "some number of people named Taro", but instead indicates the group including Taro. Depending on context, tarō-tachi might be translated into "Taro and his friends", "Taro and his siblings", "Taro and his family", or any other logical grouping that has Taro as the representative. Some words with collectives have become fixed phrases and (commonly) refer to one person. Specifically, kodomo 'child' and tomodachi 'friend' can be singular, even though -[t]omo and -[t]achi were originally collectivizing in these words; to unambiguously refer to groups of them, an additional collectivizing suffix is added: kodomotachi 'children' and tomodachitachi 'friends', though tomodachitachi is somewhat uncommon. Tachi is sometimes applied to inanimate objects, kuruma 'car' and kuruma-tachi, 'cars', for example, but this usage is colloquial and indicates a high level of anthropomorphisation and childlikeness, and is not more generally accepted as standard.
Grammatical cases in Japanese are marked by particles placed after the nouns. A distinctive feature of Japanese is the presence of two cases which are roughly equivalent to the nominative case in other languages: one representing the sentence topic, the other representing the subject. The most important case markers are the following:
- Nominative - が (ga) for subject, は (wa) for the topic
- Genitive - の (no)
- Dative - に (ni)
- Accusative - を (wo)
- Lative - へ (e), used for destination direction (like in "to some place")
- Ablative - から (kara), used for source direction (like in "from some place")
- Instrumental - で (de)
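Because case is marked by postposed particles, a plain SOV clause can be assembled mechanically: each noun is paired with its particle and the verb comes last. The following is a toy sketch under that assumption; the function names are invented for illustration, romaji is used throughout, and "wo" stands for the accusative を.

```python
# Illustrative sketch: build a simple SOV clause from (noun, case) pairs
# by appending each case particle to its noun and placing the verb last.
PARTICLES = {
    "topic": "wa", "subject": "ga", "genitive": "no", "dative": "ni",
    "accusative": "wo", "lative": "e", "ablative": "kara",
    "instrumental": "de",
}

def clause(arguments, verb):
    """arguments: list of (noun, case) pairs; the verb is clause-final."""
    marked = [f"{noun} {PARTICLES[case]}" for noun, case in arguments]
    return " ".join(marked + [verb])

# "(The) cat eats (the) fish" -> neko wa sakana wo taberu
print(clause([("neko", "topic"), ("sakana", "accusative")], "taberu"))
```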
|person||very informal||plain, informal||polite||respectful|
|first||俺 ore (male)||僕 boku (male), あたし atashi (female), 私 watashi (both)||私 watashi||私 watakushi|
|second||お前 omae (male)||君 kimi (male), あなた anata (female)|||
|third||あいつ aitsu (pejorative)||彼 kare (male), 彼女 kanojo (female), あの人 ano hito|||
Although many grammars and textbooks mention pronouns (代名詞 daimeishi), Japanese lacks true pronouns. (Daimeishi can be considered a subset of nouns.) Strictly speaking, pronouns do not take modifiers, but Japanese daimeishi do: 背の高い彼 se no takai kare (lit. tall he) is valid in Japanese. Also, unlike true pronouns, Japanese daimeishi are not closed-class: new daimeishi are introduced and old ones go out of use relatively quickly.
A large number of daimeishi referring to people are translated as pronouns in their most common uses. Examples: 彼 kare, (he); 彼女 kanojo, (she); 私 watashi, (I); see also the adjoining table or a longer list. Some of these "personal nouns" such as 己 onore, I (exceedingly humble), or 僕 boku, I (young male), also have second-person uses: おのれ onore in second-person is an extremely rude "you", and boku in second-person is a diminutive "you" used for young boys. Kare and kanojo also mean "boyfriend" and "girlfriend" respectively, and this usage of the words is possibly more common than the use as pronouns.
Like other subjects, Japanese deemphasizes personal daimeishi, which are seldom used. This is partly because Japanese sentences do not always require explicit subjects, and partly because names or titles are often used where pronouns would appear in a translation:
- Kinoshita-san wa, se ga takai desu ne.
- (addressing Mr. Kinoshita) "You're pretty tall, aren't you?"
- Semmu, asu Fukuoka-shi nishi-ku no Yamamoto-shōji no shachō ni atte itadakemasu ka?
- (addressing the managing director) "Would it be possible for you to meet the president of Yamamoto Trading Co. in West Ward, Fukuoka tomorrow?"
The possible referents of daimeishi are sometimes constrained depending on the order of occurrence. The following pair of examples from Bart Mathias illustrates one such constraint.
- Honda-kun ni atte, kare no hon o kaeshita (本田君に会って、彼の本を返した。)
- (I) met Honda and returned his book. ("His" here can refer to Honda.)
- Kare ni atte, Honda-kun no hon o kaeshita (彼に会って、本田君の本を返した。)
- (I) met him and returned Honda's book. (Here, "him" cannot refer to Honda.)
English has a reflexive form of each personal pronoun (himself, herself, itself, themselves, etc.); Japanese, in contrast, has one main reflexive daimeishi, namely jibun (自分), which can also mean 'I'. The uses of the reflexive (pro)nouns in the two languages are very different, as demonstrated by the following literal translations (*=impossible, ??=ambiguous):
|English||Japanese||note|
|History repeats itself.||*Rekishi wa jibun o kurikaesu. (*歴史は自分を繰り返す。)||the target of jibun must be animate|
|Hiroshi talked to Kenji about himself (=Hiroshi).||Hiroshi wa Kenji ni jibun no koto o hanashita. (ひろしは健司に自分のことを話した。)||there is no ambiguity in the translation as explained below|
|*Makoto expects that Shizuko will take good care of himself (=Makoto; note that Shizuko is female).||??Makoto wa Shizuko ga jibun o daiji ni suru koto o kitai shite iru. (??誠は静子が自分を大事にすることを期待している。); either "Makoto expects that Shizuko will take good care of him" or "Makoto expects that Shizuko will take good care of herself."||jibun can be in a different sentence or dependent clause, but its target is ambiguous|
If the sentence has more than one grammatical or semantic subject, then the target of jibun is the subject of the primary or most prominent action; thus in the following sentence jibun refers unambiguously to Shizuko (even though Makoto is the grammatical subject) because the primary action is Shizuko's reading.
- Makoto wa Shizuko ni jibun no uchi de hon o yomaseta.
- Makoto made Shizuko read book(s) in her house.
In practice the main action is not always discernible, in which case such sentences are ambiguous. The use of jibun in complex sentences follows non-trivial rules.
There are also equivalents to jibun such as mizukara. Other uses of the reflexive pronoun in English are covered by adverbs like hitorideni which is used in the sense of "by oneself". For example,
- kikai ga hitorideni ugokidashita
- "The machine started operating by itself."
Change in a verb's valency is not accomplished by use of reflexive pronouns (in this Japanese is like English but unlike many other European languages). Instead, separate (but usually related) intransitive verbs and transitive verbs are used. There is no longer any productive morphology to derive transitive verbs from intransitive ones, or vice versa.
|series||ko- (proximal)||so- (medial)||a- (distal)||do- (interrogative)|
|object||これ kore (this one)||それ sore (that one)||あれ are (that one over there)||どれ dore (which one?)|
|determiner||この kono ((of) this)||その sono ((of) that)||あの ano ((of) that over there)||どの dono ((of) which?)|
|kind||こんな konna (like this)||そんな sonna (like that)||あんな anna (like that over there)||どんな donna (what sort of?)|
|place||ここ koko (here)||そこ soko (there)||あそこ asoko¹ (over there)||どこ doko (where?)|
|direction||こちら kochira³ (this way)||そちら sochira³ (that way)||あちら achira³ (that way over there)||どちら dochira³ (which way?)|
|manner||こう kō² (in this manner)||そう sō² (in that manner)||ああ ā¹ (in that (other) manner)||どう dō² (how? in what manner?)|
|person||こいつ koitsu (this person)||そいつ soitsu (that person)||あいつ aitsu (that (other) person)||どいつ doitsu (who?)|
- 1. irregular formation
- 2. -ou is represented by -ō
- 3. colloquially contracted to -cchi
Demonstratives occur in the ko-, so-, and a- series. The ko- (proximal) series refers to things closer to the speaker than the hearer, the so- (medial) series for things closer to the hearer, and the a- (distal) series for things distant to both the speaker and the hearer. With do-, demonstratives turn into the corresponding interrogative form. Demonstratives can also be used to refer to people, for example
- Kochira wa Hayashi-san desu.
- "This is Mr. Hayashi."
Demonstratives limit, and therefore precede, nouns; thus この本 kono hon for "this/my book", and その本 sono hon for "that/your book".
When demonstratives are used to refer to things not visible to the speaker or the hearer, or to (abstract) concepts, they fulfill a related but different anaphoric role. The anaphoric distals are used for shared information between the speaker and the listener.
- A: Senjitsu, Sapporo ni itte kimashita.
- A: I visited Sapporo recently.
- B: Asoko (*Soko) wa itsu itte mo ii tokoro desu ne.
- B: Yeah, that's a great place to visit whenever you go.
Soko instead of asoko would imply that B doesn't share this knowledge about Sapporo, which is inconsistent with the meaning of the sentence. The anaphoric medials are used to refer to experience or knowledge that is not shared between the speaker and listener.
- Satō : Tanaka to iu hito ga kinō shinda n da tte...
- Sato: I heard that a man called Tanaka died yesterday...
- Mori: E', hontō?
- Mori: Oh, really?
- Satō : Dakara, sono (*ano) hito, Mori-san no mukashi no rinjin ja nakatta 'kke?
- Sato: It's why I asked... wasn't he an old neighbour of yours?
Again, ano is inappropriate here because Sato doesn't (didn't) know Tanaka personally. The proximal demonstratives do not have clear anaphoric uses. They can be used in situations where the distal series sound too disconnected:
- Ittai nan desu ka, kore (*are) wa?
- What on earth is this?
Prior to discussing the conjugable words, a brief note about stem forms. Conjugative suffixes and auxiliary verbs are attached to the stem forms of the affixee. In modern Japanese there are the following six stem forms.
Note that this order follows from the -a, -i, -u, -e, -o endings that these forms have in 五段 (5-row) verbs (according to the あ、い、う、え、お collation order of Japanese), where terminal and attributive forms are the same for verbs (hence only 5 surface forms), but differ for nominals, notably na-nominals.
- Irrealis form (未然形 mizenkei) -a (and -ō)
- is used for plain negative (of verbs), causative and passive constructions. The most common use of this form is with the -nai auxiliary that turns verbs into their negative (predicate) form. (See Verbs below.) The -ō version is used for volitional expression and formed by a euphonic change (音便 onbin).
- Continuative form (連用形 ren'yōkei) -i
- is used in a linking role (a kind of serial verb construction). This is the most productive stem form, taking on a variety of endings and auxiliaries, and can even occur independently in a sense similar to the -te ending. This form is also used to negate adjectives.
- Terminal form (終止形 shūshikei) -u
- is used at the ends of clauses in predicate positions. This form is also variously known as plain form (基本形 kihonkei) or dictionary form (辞書形 jishokei) – it is the form that verbs are listed under in a dictionary.
- Attributive form (連体形 rentaikei) -u
- is prefixed to nominals and is used to define or classify the noun, similar to a relative clause in English. In modern Japanese it is practically identical to the terminal form, except that verbs are generally not inflected for politeness; in old Japanese these forms differed. Further, na-nominals behave differently in terminal and attributive positions; see adjectives, below.
- Hypothetical form (仮定形 kateikei) -e
- is used for conditional and subjunctive forms, using the -ba ending.
- Imperative form (命令形 meireikei) -e
- is used to turn verbs into commands. Adjectives do not have an imperative stem form.
The application of conjugative suffixes to stem forms follow certain euphonic principles (音便 onbin), which are discussed below.
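The six stem forms described above follow the -a, -i, -u, -u, -e, -e vowel pattern for 五段 (godan) verbs. As an illustration only, the pattern can be sketched in code; the names `GODAN_ENDINGS` and `godan_stems` are invented for this example, and the real system also involves euphonic changes (onbin) that this toy version ignores.

```python
# Toy sketch: derive the six stem forms of a godan (Group 1) verb in romaji
# by attaching the characteristic vowel ending to the consonant-final root.
GODAN_ENDINGS = {
    "irrealis": "a",      # 未然形 mizenkei
    "continuative": "i",  # 連用形 ren'yōkei
    "terminal": "u",      # 終止形 shūshikei
    "attributive": "u",   # 連体形 rentaikei
    "hypothetical": "e",  # 仮定形 kateikei
    "imperative": "e",    # 命令形 meireikei
}

def godan_stems(root: str) -> dict:
    """Attach each ending to a consonant-final root, e.g. 'kak' for 書く."""
    return {form: root + vowel for form, vowel in GODAN_ENDINGS.items()}

stems = godan_stems("kak")    # 書く kaku, "to write"
print(stems["irrealis"])      # kaka (as in kaka-nai, "do not write")
print(stems["hypothetical"])  # kake (as in kake-ba, "if (one) writes")
```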
Verbs (動詞 dōshi) in Japanese are rigidly constrained to the ends of clauses in what is known as the predicate position. This means that the verb is always located at the end of a sentence.
- 猫は魚を食べる。 neko wa sakana wo taberu
- cat TOPIC fish OBJECT eat
- Cats eat fish.
The subject and objects of the verb are indicated by means of particles, and the grammatical functions of the verb — primarily tense and voice — are indicated by means of conjugation. When the subject and the dissertative topic coincide, the subject is often omitted; if the verb is intransitive, the entire sentence may consist of a single verb. Verbs have two tenses indicated by conjugation, past and nonpast. The semantic difference between present and future is not indicated by means of conjugation. Usually there is no ambiguity as context makes it clear whether the speaker is referring to the present or future. Voice and aspect are also indicated by means of conjugation, and possibly agglutinating auxiliary verbs. For example, the continuative aspect is formed by means of the continuative conjugation known as the gerundive or -te form, and the auxiliary verb iru "to be"; to illustrate, 見る miru ("to see") → 見ている mite iru ("to be seeing").
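For Group 2 (ichidan) verbs, the continuative aspect described above is formed very regularly: replace the final -ru of the dictionary form with -te and add the auxiliary iru. The sketch below, with an invented function name and romaji input, covers only this regular ichidan case; Group 1 (godan) verbs require euphonic changes (onbin) that it does not handle.

```python
# Toy sketch: form the progressive ("-te iru") of an ichidan verb
# by replacing the final -ru with -te and appending the auxiliary iru,
# as in miru -> mite iru ("to be seeing").
def ichidan_progressive(dictionary_form: str) -> str:
    if not dictionary_form.endswith("ru"):
        raise ValueError("expected an ichidan dictionary form ending in -ru")
    return dictionary_form[:-2] + "te iru"

print(ichidan_progressive("miru"))    # mite iru   (見ている, "to be seeing")
print(ichidan_progressive("taberu"))  # tabete iru (食べている, "to be eating")
```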
Verbs can be semantically classified based on certain conjugations.
- Stative verbs
- indicate existential properties, such as "to be" (いる iru), "to be able to do" (出来る dekiru), "to need" (要る iru), etc. These verbs generally do not have a continuative conjugation with -iru because they are semantically continuative already.
- Continual verbs
- conjugate with the auxiliary -iru to indicate the progressive aspect. Examples: "to eat" (食べる taberu), "to drink" (飲む nomu), "to think" (考える kangaeru). To illustrate the conjugation, 食べる taberu ("to eat") → 食べている tabete iru ("to be eating").
- Punctual verbs
- conjugate with -iru to indicate a repeated action, or a continuing state after some action. Example: 知る shiru ("to know") → 知っている shitte iru ("to be knowing"); 打つ utsu ("to hit") → 打っている utte iru ("to be hitting (repeatedly)").
- Non-volitional verb
- indicate uncontrollable action or emotion. These verbs generally have no volitional, imperative or potential conjugation. Examples: 好む konomu, "to like / to prefer" (emotive), 見える mieru, "to be visible" (non-emotive).
- Movement verbs
- indicate motion. Examples: 歩く aruku ("to walk"), 帰る kaeru ("to return"). In the continuative form (see below) they take the particle ni to indicate a purpose.
There are other possible classes, and a large amount of overlap between the classes.
Lexically, nearly every verb in Japanese is a member of exactly one of the following three regular conjugation groups (see also Japanese consonant and vowel verbs).
- Group 2a (上一段 kami ichidan, lit. upper 1-row group)
- verbs with a stem ending in i. The terminal stem form always rhymes with -iru. Examples: 見る miru ("to see"), 着る kiru ("to wear").
- Group 2b (下一段 shimo ichidan, lit. lower 1-row group)
- verbs with a stem ending in e. The terminal stem form always rhymes with -eru. Examples: 食べる taberu ("to eat"), くれる kureru ("to give" (to someone of lower or more intimate status)). (Note that some Group 1 verbs resemble Group 2b verbs, but their stems end in r, not e.)
- Group 1 (五段 godan, lit. 5-row group)
- verbs with a stem ending in a consonant. When this is r and the verb ends in -eru, it is not apparent from the terminal form whether the verb is Group 1 or Group 2b, e.g. 帰る kaeru ("to return"). If the stem ends in w, that sound appears only before the final a of the irrealis form.
The "row" in the above classification means a row in the gojūon table. "Upper 1-row" means the row that is one row above the center row (the u-row) i.e. i-row. "Lower 1-row" means the row that is one row below the center row (the u-row) i.e. e-row. "5-row" means the conjugation runs though all 5 rows of the gojūon table. A conjugation is fully described by identifying both the row and the column in the gojūon table. For example, 見る (miru, "to see") belongs to マ行上一段活用 (ma-column i-row conjugation), 食べる (taberu, "to eat") belongs to バ行下一段活用 (ba-column e-row conjugation), and 帰る (kaeru, "to return") belongs to ラ行五段活用(ra-column 5-row conjugation).
One should avoid confusing verbs in ラ行五段活用 (ra-column 5-row conjugation) with verbs in 上一段活用 (i-row conjugation) or 下一段活用 (e-row conjugation). For example, 切る (kiru, "to cut") belongs to ラ行五段活用 (ra-column 5-row conjugation), whereas its homophone 着る (kiru, "to wear") belongs to カ行上一段活用 (ka-column i-row conjugation). Likewise, 練る (neru, "to knead") belongs to ラ行五段活用 (ra-column 5-row conjugation), whereas its homophone 寝る (neru, "to sleep") belongs to ナ行下一段活用 (na-column e-row conjugation).
Historical note: classical Japanese had upper and lower 1- and 2-row groups and a 4-row group (上/下一段 kami/shimo ichidan, 上/下二段 kami/shimo nidan, and 四段 yodan). The nidan verbs became most of today's ichidan verbs (there were only a handful of kami ichidan verbs and only a single shimo ichidan verb in classical Japanese), while the yodan group, following the writing reform of 1946 that spelled Japanese as it is pronounced, naturally became the modern godan verbs. Since verbs have migrated across groups in the history of the language, conjugation of classical verbs is not predictable from a knowledge of modern Japanese alone.
Of the irregular classes, there are two:
- The sa-group, which has only one member, する (suru, "to do"). In Japanese grammars these words are classified as サ変 sa-hen, an abbreviation of サ行変格活用 (sa-gyō henkaku katsuyō, sa-row irregular conjugation).
- The ka-group, which also has only one member, 来る (kuru, "to come"). The Japanese name for this class is カ行変格活用 ka-gyō henkaku katsuyō, or simply カ変 ka-hen.
Classical Japanese had two further irregular classes: the na-group, which contained 死ぬ (shinu, "to die") and 往ぬ (inu, "to go", "to die"), and the ra-group, which included such verbs as あり ari, the equivalent of modern aru. There were also quite a number of extremely irregular verbs that cannot be classified.
The following table illustrates the stem forms of the above conjugation groups, with the root indicated with dots. For example, to find the hypothetical form of the group 1 verb 書く kaku, look in the second row to find its root, kak, then in the hypothetical row to get the ending -e, giving the stem form kake. When there are multiple possibilities, they are listed in the order of increasing rarity.
| stem form | 使う tsuka.u | 書く kak.u | 見る mi.ru | 食べる tabe.ru | する suru | 来る kuru |
|---|---|---|---|---|---|---|
| irrealis | 使わ tsuka.wa / 使お tsuka.o | 書か kak.a / 書こ kak.o | 見 mi. | 食べ tabe. | し shi / さ sa / せ se | 来 ko |
| continuative | 使い tsuka.i | 書き kak.i | 見 mi. | 食べ tabe. | し shi | 来 ki |
| terminal | 使う tsuka.u | 書く kak.u | 見る mi.ru | 食べる tabe.ru | する suru | 来る kuru |
| attributive | same as terminal form | | | | | |
| hypothetical | 使え tsuka.e | 書け kak.e | 見れ mi.re | 食べれ tabe.re | すれ sure | 来れ kure |
| imperative | 使え tsuka.e | 書け kak.e | 見ろ mi.ro / 見よ mi.yo | 食べろ tabe.ro / 食べよ tabe.yo | しろ shiro / せよ seyo | 来い koi |
- The -a and -o irrealis forms for Group 1 verbs were historically one, but since the post-WWII spelling reforms they have been written differently. In modern Japanese the -o form is used only for the volitional mood and the -a form is used in all other cases; see also the conjugation table below.
- The unexpected ending is due to the verb's root being tsukaw- but [w] only being pronounced before [a] in modern Japanese.
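The regular pattern behind the group 1 stems can be sketched in code. The following is a minimal romaji-only illustration, assuming the roots and form names above; the function and table names are my own, not any standard library:

```python
# Hypothetical sketch: deriving the stem forms of a group 1 (godan) verb
# by pairing its consonant root with the vowel of each gojūon row.
# Romaji only; names are illustrative.

STEM_VOWELS = {
    "irrealis-a": "a",    # 書か kaka- (negative, passive, causative)
    "irrealis-o": "o",    # 書こ kako- (volitional)
    "continuative": "i",  # 書き kaki-
    "terminal": "u",      # 書く kaku
    "hypothetical": "e",  # 書け kake- (also imperative)
}

def godan_stems(root):
    """Return the stem forms of a consonant-root (group 1) verb."""
    stems = {}
    for form, vowel in STEM_VOWELS.items():
        # A root-final w (as in tsukaw-) surfaces only before a.
        base = root if vowel == "a" or not root.endswith("w") else root[:-1]
        stems[form] = base + vowel
    return stems

print(godan_stems("kak"))     # kaku "to write": kaka, kako, kaki, kaku, kake
print(godan_stems("tsukaw"))  # tsukau "to use": the w is kept only before a
```

Applied to tsukaw-, this reproduces the table's tsukawa / tsukao / tsukai / tsukau / tsukae, including the disappearing w.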
The above are only the stem forms of the verbs; to these one must add various verb endings in order to get the fully conjugated verb. The following table lists the most common conjugations. Note that in some cases the form is different depending on the conjugation group of the verb. See Japanese verb conjugations for a full list.
| conjugation | formation rule | group 1 書く kaku | group 2a 見る miru | group 2b 食べる taberu | sa-group する suru | ka-group 来る kuru |
|---|---|---|---|---|---|---|
| polite | cont. + ます masu | 書き・ます kakimasu | 見・ます mimasu | 食べ・ます tabemasu | し・ます shimasu | 来ます kimasu |
| past | cont. + た ta² | 書い・た kaita | 見・た mita | 食べ・た tabeta | し・た shita | 来た kita |
| negative | irrealis + ない nai | 書か・ない kakanai | 見・ない minai | 食べ・ない tabenai | し・ない shinai | 来ない konai |
| past negative | irrealis + なかった nakatta | 書か・なかった kakanakatta | 見・なかった minakatta | 食べ・なかった tabenakatta | し・なかった shinakatta | 来なかった konakatta |
| -te form (gerundive) | cont. + て te² | 書いて kaite | 見て mite | 食べて tabete | して shite | 来て kite |
| provisional | hyp. + ば ba | 書け・ば kakeba | 見れ・ば mireba | 食べれ・ば tabereba | すれ・ば sureba | 来れば kureba |
| conditional | cont. + たら tara² | 書いたら kaitara | 見たら mitara | 食べたら tabetara | したら shitara | 来たら kitara |
| volitional | irrealis + う u (grp. 1); irrealis + よう yō (others) | 書こ・う kakō | 見・よう miyō | 食べ・よう tabeyō | し・よう shiyō | 来よう koyō |
| passive | irrealis + れる reru (grp. 1); irrealis + られる rareru (others) | 書か・れる kakareru | 見・られる mirareru | 食べ・られる taberareru | される sareru | 来られる korareru |
| causative | irrealis + せる seru (grp. 1); irrealis + させる saseru (others) | 書か・せる kakaseru | 見・させる misaseru | 食べ・させる tabesaseru | させる saseru | 来させる kosaseru |
| potential | hyp. + る ru (grp. 1); irrealis + られる rareru (others) | 書け・る kakeru | 見・られる mirareru | 食べ・られる taberareru | できる dekiru¹ | 来られる korareru |
- This is an entirely different verb; する suru has no potential form.
- These forms change depending on the final syllable of the verb's dictionary form (whether u, ku, gu, su, etc.). For details, see Euphonic changes, below, and the article Japanese verb conjugation.
The polite ending -masu conjugates as a group 1 verb, except that the negative imperfective and perfective forms are -masen and -masen deshita respectively, and certain conjugations are in practice rarely if ever used. The passive and potential endings -reru and -rareru, and the causative endings -seru and -saseru all conjugate as group 2b verbs. Multiple verbal endings can therefore agglutinate. For example, a common formation is the causative-passive ending, -sase-rareru.
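The agglutination described here is mechanical enough to sketch: a group 2b auxiliary attaches to an irrealis stem, and the result is itself a group 2b verb whose stems are formed by dropping the final -ru. A romaji-only illustration (the helper name is my own):

```python
# Sketch of auxiliary agglutination (romaji only; helper name is my own).
# Group 2b endings attach to the irrealis stem; the result is again a
# group 2b verb, so its own stems drop the final -ru.

def attach(irrealis_stem, ending):
    """Attach a group 2b auxiliary ending to an irrealis stem."""
    return irrealis_stem + ending

# taberu "to eat" is group 2b, so its irrealis stem is tabe-.
causative = attach("tabe", "saseru")             # tabesaseru "make (someone) eat"
caus_passive = attach(causative[:-2], "rareru")  # tabesaserareru "be made to eat"
past = caus_passive[:-2] + "ta"                  # past tense via cont. + ta

print(past)  # tabesaserareta, as in the example sentence below
```

Each step re-applies the same rule to the newly formed verb, which is exactly why multiple endings can stack.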
- Boku wa ane ni nattō o tabesaserareta.
- I was made to eat nattō by my (elder) sister.
As should be expected, the vast majority of theoretically possible combinations of conjugative endings are not semantically meaningful.
Transitive and intransitive verbs
Japanese has a large variety of related pairs of transitive verbs (that take a direct object) and intransitive verbs (that do not usually take a direct object), such as the transitive hajimeru (始める, someone or something begins an activity), and the intransitive hajimaru (始まる, an activity begins).
| transitive verb | intransitive verb |
|---|---|
| One thing acts out the transitive verb on another. | The intransitive verb passively happens without direct intervention. |
Note: Some intransitive verbs (usually verbs of motion) take what looks like a direct object, but it is not one. For example, hanareru (離れる, to leave):
- 私は 東京を 離れる。
- Watashi wa Tōkyō o hanareru.
- I leave Tokyo.
Adjectival verbs and nouns
Semantically speaking, words that denote attributes or properties are primarily distributed between two morphological classes (there are also a few other classes):
- adjectival verbs (conventionally called "i-adjectives") (形容詞 keiyōshi) – these have roots and conjugating stem forms, and are semantically and morphologically similar to stative verbs.
- adjectival nouns (conventionally called "na-adjectives") (形容動詞 keiyōdōshi, lit. "adjectival verb") – these are nouns that combine with the copula.
Unlike adjectives in languages like English, i-adjectives in Japanese inflect for aspect and mood, like verbs. Japanese adjectives do not have comparative or superlative inflections; comparatives and superlatives have to be marked periphrastically using adverbs like motto 'more' and ichiban 'most'.
Every adjective in Japanese can be used in an attributive position. Nearly every Japanese adjective can be used in a predicative position; this differs from English where there are many common adjectives such as "major", as in "a major question", that cannot be used in the predicate position (that is, *"The question is major" is not grammatical English). There are a few Japanese adjectives that cannot predicate, known as 連体詞 (rentaishi, attributives), which are derived from other word classes; examples include 大きな ōkina "big", 小さな chiisana "small", and おかしな okashina "strange" which are all stylistic na-type variants of normal i-type adjectives.
All i-adjectives except いい (ii, good) have regular conjugations; ii is irregular only in that it is a changed form of the regular adjective 良い yoi, permissible in the terminal and attributive forms. In all other forms it reverts to yoi.
| stem form | 安い yasu.i | 静か shizuka- |
|---|---|---|
| irrealis | 安かろ yasu.karo | 静かだろ shizuka-daro |
| continuative | 安く yasu.ku | 静かで shizuka-de |
| terminal | 安い yasu.i | 静かだ shizuka-da |
| attributive | 安い yasu.i | 静かな shizuka-na |
| hypothetical | 安けれ yasu.kere | 静かなら shizuka-nara |
| imperative | 安かれ yasu.kare | 静かなれ shizuka-nare |
- The attributive and terminal forms were formerly 安き .ki and 安し .shi, respectively; in modern Japanese these are used productively for stylistic reasons only, although many set phrases such as 名無し nanashi (anonymous) and よし yoshi (sometimes written yosh', general positive interjection) derive from them.
- The imperative form is extremely rare in modern Japanese, restricted to set patterns like 遅かれ早かれ osokare hayakare 'sooner or later', where they are treated as adverbial phrases. It is impossible for an imperative form to be in a predicate position.
Common conjugations of adjectives are enumerated below. ii is not treated separately, because all conjugation forms are identical to those of yoi.
| conjugation | 安い yasui, "cheap" | 静か shizuka, "quiet" |
|---|---|---|
| informal non-past | root + -i (used alone, without the copula): 安い yasui | root + copula だ da: 静かだ shizuka da |
| informal past | cont. + あった atta (u + a collapse): 安かった yasukatta | cont. + あった atta (e + a collapse): 静かだった shizuka datta |
| informal negative | cont. + (は)ない (wa) nai¹: 安く(は)ない yasuku (wa) nai | cont. + (は)ない (wa) nai: 静かで(は)ない shizuka de (wa) nai |
| informal past negative | cont. + (は)なかった (wa) nakatta¹: 安く(は)なかった yasuku (wa) nakatta | cont. + (は)なかった (wa) nakatta: 静かで(は)なかった shizuka de (wa) nakatta |
| polite non-past | root + -i + copula です desu: 安いです yasui desu | root + copula です desu: 静かです shizuka desu |
| polite negative | inf. cont. + ありません arimasen¹: 安くありません yasuku arimasen | inf. cont. + (は)ありません (wa) arimasen: 静かではありません shizuka de wa arimasen |
| polite negative (alternative) | inf. neg. non-past + copula です desu¹: 安くないです yasuku nai desu | inf. cont. + (は)ないです (wa) nai desu: 静かではないです shizuka de wa nai desu |
| polite past negative | inf. cont. + ありませんでした arimasen deshita: 安くありませんでした yasuku arimasen deshita | inf. cont. + (は)ありませんでした (wa) arimasen deshita: 静かではありませんでした shizuka de wa arimasen deshita |
| polite past negative (alternative) | inf. neg. past + copula です desu¹: 安くなかったです yasuku nakatta desu | inf. neg. past + なかったです nakatta desu¹: 静かではなかったです shizuka de wa nakatta desu |
| -te form | cont. + て te: 安くて yasukute | cont.: 静かで shizuka de |
| provisional | hyp. + ば ba: 安ければ yasukereba | hyp. (+ ば ba): 静かなら(ば) shizuka nara(ba) |
| conditional | inf. past + ら ra: 安かったら yasukattara | inf. past + ら ra: 静かだったら shizuka dattara |
| volitional² | irrealis + う u: 安かろう yasukarō, or root + だろう darō: 安いだろう yasui darō | root + だろう darō: 静かだろう shizuka darō |
| adverbial | cont.: 安く yasuku | root + に ni: 静かに shizuka ni |
| degree (-ness) | root + さ sa: 安さ yasusa | root + さ sa: 静かさ shizukasa |
- note that these are just forms of the i-type adjective ない nai
- since most adjectives describe non-volitional conditions, the volitional form is interpreted as "it is possible", if sensible. In some rare cases it is semi-volitional: 良かろう yokarō 'OK' (lit: let it be good) in response to a report or request.
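Because the regular i-adjective forms are simple concatenations on the root, they are easy to sketch. A romaji-only illustration of the informal forms (the function name is my own):

```python
# Minimal sketch of the informal conjugations of a regular i-adjective,
# built on its root (romaji only; function name is my own).

def i_adjective(root):
    """Return common informal forms of an i-adjective from its root."""
    return {
        "non-past": root + "i",               # yasui "is cheap"
        "past": root + "katta",               # cont. + atta, u + a collapse
        "negative": root + "ku nai",          # cont. + (wa) nai
        "past negative": root + "ku nakatta",
        "te-form": root + "kute",
        "provisional": root + "kereba",       # hyp. + ba
        "adverbial": root + "ku",
    }

forms = i_adjective("yasu")
print(forms["past"])      # yasukatta
print(forms["negative"])  # yasuku nai
```

Note that the negative forms are themselves built on the i-adjective nai, so they conjugate further by the same rules (nakatta is nai's own past).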
Adjectives too are governed by euphonic rules in certain cases, as noted in the section on it below. For the polite negatives of na-type adjectives, see also the section below on the copula だ da.
Copula (だ da)
The copula da behaves very much like a verb or an adjective in terms of conjugation.
Its basic forms are:

- だ da (informal)
- です desu (polite)
- でございます de gozaimasu (respectful)
- である de aru (formal, literary)

The continuative form is で de; before ない nai and in the other negative forms this becomes では de wa.
Note that there are no potential, causative, or passive forms of the copula, just as with adjectives.
The following are some examples.
- JON wa gakusei da
- John is a student.
- Ashita mo hare nara, PIKUNIKKU shiyō
- If tomorrow is clear too, let's have a picnic.
In continuative conjugations, では de wa is often contracted in speech to じゃ ja; for some kinds of informal speech ja is preferable to de wa, or is the only possibility.
| conjugation | register | formation rule | form |
|---|---|---|---|
| non-past | informal | | だ da |
| | polite | | です desu |
| | respectful | | でございます de gozaimasu |
| past | informal | cont. + あった atta | だった datta |
| | polite | | でした deshita |
| | respectful | | でございました de gozaimashita |
| negative | informal | cont. + はない wa nai | ではない de wa nai (じゃない ja nai) |
| | polite | cont. + はありません wa arimasen | ではありません de wa arimasen (じゃありません ja arimasen) |
| | respectful | cont. + はございません wa gozaimasen | ではございません de wa gozaimasen (じゃございません ja gozaimasen) |
| past negative | informal | cont. + はなかった wa nakatta | ではなかった de wa nakatta (じゃなかった ja nakatta) |
| | polite | cont. + はありませんでした wa arimasen deshita | ではありませんでした de wa arimasen deshita (じゃありませんでした ja arimasen deshita) |
| | respectful | cont. + はございませんでした wa gozaimasen deshita | ではございませんでした de wa gozaimasen deshita (じゃございませんでした ja gozaimasen deshita) |
| conditional | informal | hyp. + ば ba | ならば naraba |
| | polite | cont. + あれば areba | であれば de areba |
| volitional | informal | | だろう darō |
| | polite | | でしょう deshō |
| | respectful | | でございましょう de gozaimashō |
| -te form | informal | | で de |
| | polite | cont. + ありまして arimashite | でありまして de arimashite |
| | respectful | cont. + ございまして gozaimashite | でございまして de gozaimashite |
Euphonic changes (音便 onbin)
Historical sound change
| historical spelling | modern pronunciation |
|---|---|
| あ+う a + u, あ+ふ a + fu | おう ō |
| い+う i + u, い+ふ i + fu | ゆう yū |
| う+ふ u + fu | うう ū |
| え+う e + u, え+ふ e + fu | よう yō |
| お+ふ o + fu, お+ほ o + ho, お+を o + wo | おう ō |
| auxiliary verb む mu | ん n |
| medial or final は ha | わ wa |
| medial or final ひ hi, へ he, ほ ho | い i, え e, お o (via wi, we, wo; see below) |
| any ゐ wi, ゑ we, を wo | い i, え e, お o¹ |

- 1. usually not reflected in spelling
Modern pronunciation is a result of a long history of phonemic drift that can be traced back to written records of the thirteenth century, and possibly earlier. However, it was only in 1946 that the Japanese ministry of education modified existing kana usage to conform to the standard dialect (共通語 kyōtsūgo). All earlier texts used the archaic orthography, now referred to as historical kana usage. The adjoining table is a nearly exhaustive list of these spelling changes.
Note that palatalized morae yu, yo (ゆ、よ) combine with the initial consonant, if present, yielding a palatalized syllable. The most basic example of this is modern kyō (今日、きょう, today), which historically developed as kefu (けふ) → kyō (きょう), via the efu (えふ) → yō (よう) rule.
A few sound changes are not reflected in the spelling. Firstly, ou merged with oo, both being pronounced as a long ō. Secondly, the particles は and を are still written using historical kana usage, though these are pronounced as wa and o, rather than ha and wo, with the rare exception of 〜んを, which is pronounced as -n wo, as in sen'en wo itadakimasu (千円をいただきます, I humbly receive one thousand yen).
Among Japanese speakers, it is not generally understood that the historical kana spellings were, at one point, reflective of pronunciation. For example, the modern on'yomi reading yō (よう) for leaf (葉 yō) arose from historical efu (えふ). The latter was pronounced something like [ʲepu] by the Japanese at the time it was borrowed (compare Middle Chinese [jiɛp̚]). However, a modern reader of a classical text would still read this as [joː], the modern pronunciation.
As mentioned above, conjugations of some verbs and adjectives differ from the prescribed formation rules because of euphonic changes. Nearly all of these euphonic changes are themselves regular. For verbs, the exceptions are all in the ending of the continuative form of group 1 verbs when the following auxiliary starts with a t-sound, i.e. た ta, て te, たり tari, etc.
| continuative ending | changes to | example |
|---|---|---|
| い i, ち chi or り ri | っ (doubled consonant) | *買いて *kaite → 買って katte; *打ちて *uchite → 打って utte; *知りて *shirite → 知って shitte |
| び bi, み mi or に ni | ん (syllabic n), with the following t-sound voiced | *遊びて *asobite → 遊んで asonde; *住みて *sumite → 住んで sunde; *死にて *shinite → 死んで shinde |
| き ki | い i | *書きて *kakite → 書いて kaite |
| ぎ gi | い i, with the following t-sound voiced | *泳ぎて *oyogite → 泳いで oyoide |

- * denotes an impossible/ungrammatical form.
There is one other irregular change: 行く iku (to go), for which there is an exceptional continuative form: 行き iki + て te → 行って itte, 行き iki + た ta → 行った itta, etc.
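These euphonic rules, plus the lone iku exception, fit in a few lines of code. A romaji-only sketch (the function name is my own) that maps a group 1 continuative stem to its -te form:

```python
# Sketch of the onbin rules for the -te form of group 1 verbs (romaji
# only; function name is my own). Input is the continuative stem.

def te_form(cont):
    """Return the -te form of a group 1 verb from its continuative stem."""
    if cont == "iki":                 # 行く iku is the lone lexical exception
        return "itte"
    rules = (("chi", "tte"), ("shi", "shite"), ("ri", "tte"),
             ("bi", "nde"), ("mi", "nde"), ("ni", "nde"),
             ("ki", "ite"), ("gi", "ide"))
    for suffix, replacement in rules:
        if cont.endswith(suffix):
            return cont[:-len(suffix)] + replacement
    if cont.endswith("i"):            # w-root verbs: 買い kai → 買って katte
        return cont[:-1] + "tte"
    return cont + "te"

print(te_form("kaki"))   # kaite  (書きて → 書いて)
print(te_form("asobi"))  # asonde (遊びて → 遊んで)
print(te_form("iki"))    # itte   (the exceptional 行って)
```

Continuative stems in -shi (e.g. hanashi) fall through the rules unchanged apart from adding -te, matching the fact that s-stem verbs undergo no euphonic change.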
There are dialectal differences, which are also regular and generally occur in similar situations. For example, in Kansai dialect the -i + t- conjugations are instead changed to -ut-, as in omōta (思うた) instead of omotta (思った) as the perfective of omou (思う, to think). This can combine with the preceding vowel via the historical sound changes, as in shimōta (しもうた) (au → ō) instead of standard shimatta (しまった).
Polite forms of adjectives
The continuative form of proper adjectives, when followed by polite forms such as gozaru (御座る, be) or zonjiru (存じる, know, think), undergoes a transformation; this may be followed by historical sound changes, yielding a one-step or two-step sound change. Note that these verbs are almost invariably conjugated to polite -masu (〜ます) form, as gozaimasu (ございます) and zonjimasu (存じます) (note the irregular conjugation of gozaru, discussed below), and that these verbs are preceded by the continuative form – -ku (〜く) – of adjectives, rather than the terminal form – -i (〜い) – which is used before the more everyday desu (です, be).
The rule is -ku (〜く) → -u (〜う) (dropping the -k-), possibly also combining with the previous syllable according to the spelling reform chart, which may also undergo palatalization in the case of ゆ、よ (yu, yo).
Historically there were two classes of proper Old Japanese adjectives, -ku (〜く) and -shiku (〜しく) ("-ku adjective" means "not preceded by shi"). This distinction collapsed during the evolution of Late Middle Japanese adjectives, and both are now considered -i (〜い) adjectives. The sound change for -shii adjectives follows the same rule as for other -ii adjectives, notably that the preceding vowel also changes and the preceding mora undergoes palatalization, yielding -shiku (〜しく) → -shū (〜しゅう), though historically this was considered a separate but parallel rule.
| ending | changes to | example |
|---|---|---|
| 〜あく -aku | 〜おう -ō | *おはやくございます *ohayaku gozaimasu → おはようございます ohayō gozaimasu |
| 〜いく -iku | 〜ゆう -yū | *大きくございます *ōkiku gozaimasu → 大きゅうございます ōkyū gozaimasu |
| 〜うく -uku | 〜うう -ū | *寒くございます *samuku gozaimasu → 寒うございます samū gozaimasu |
| *〜えく *-eku | *〜よう *-yō | (not present) |
| 〜おく -oku | 〜おう -ō | *面白くございます *omoshiroku gozaimasu → 面白うございます omoshirō gozaimasu |
| 〜しく -shiku | 〜しゅう -shū | *涼しくございます *suzushiku gozaimasu → 涼しゅうございます suzushū gozaimasu |
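This sound change is regular enough to express as suffix rewriting. A romaji-only sketch (the function name is my own), with the -shiku case checked before the plain vowel cases:

```python
# Sketch of the -ku → -u change before gozaimasu (romaji only; function
# name is my own). -shiku must be matched before the plain -iku rule.

def polite_continuative(ku_form):
    """Apply the -ku → -u change, with vowel fusion, to a continuative form."""
    rules = (("shiku", "shū"), ("aku", "ō"), ("iku", "yū"),
             ("uku", "ū"), ("oku", "ō"))
    for suffix, replacement in rules:
        if ku_form.endswith(suffix):
            return ku_form[:-len(suffix)] + replacement
    return ku_form

print(polite_continuative("ohayaku") + " gozaimasu")  # ohayō gozaimasu
print(polite_continuative("samuku") + " gozaimasu")   # samū gozaimasu
```

The rule ordering mirrors the historical analysis above: -shiku adjectives followed a separate but parallel rule, so treating them first keeps the plain -iku rule from misfiring on them.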
Respectful verbs such as くださる kudasaru 'to give (to the speaker)', なさる nasaru 'to do', ござる gozaru 'to be', いらっしゃる irassharu 'to be/come/go', おっしゃる ossharu 'to say', etc. behave like group 1 verbs, except in the continuative and imperative forms.
| form | change | example |
|---|---|---|
| continuative | 〜り -ri changed to 〜い -i | *ござります *gozarimasu → ございます gozaimasu; *いらっしゃりませ *irassharimase → いらっしゃいませ irasshaimase |
| imperative | 〜れ -re changed to 〜い -i | *くだされ *kudasare → ください kudasai; *なされ *nasare → なさい nasai |
Contractions

In speech, common combinations of conjugation and auxiliary verbs are contracted in a fairly regular manner.
- 負けてしまう makete shimau 'lose' → 負けちゃう makechau / 負けちまう makechimau
- 死んでしまう shinde shimau 'die' → 死んじゃう shinjau / 死んじまう shinjimau
- 食べてはいけない tabete wa ikenai 'must not eat' → 食べちゃいけない tabecha ikenai
- 飲んではいけない nonde wa ikenai 'must not drink' → 飲んじゃいけない nonja ikenai
- 寝ている nete iru 'is sleeping' → 寝てる neteru
- しておく shite oku 'will do it so' → しとく shitoku
- 出て行け dete ike 'get out!' → 出てけ deteke
- 買ってあげる katte ageru 'buy something (for someone)' → 買ったげる kattageru
- 何しているの nani shite iru no 'what are you doing?' → 何してんの nani shitenno
- やりなさい yarinasai 'do it!' → やんなさい yannasai
- やるな yaruna 'don't do it!' → やんな yanna
There are occasional others, such as -aranai → -annai as in wakaranai (分からない, don't understand) → wakannai (分かんない) and tsumaranai (つまらない, boring) → tsumannai (つまんない) – these are considered quite casual and are more common among the younger generation.
Contractions differ by dialect, but behave similarly to the standard ones given above. For example, in Kansai dialect -te shimau (〜てしまう) → -temau (〜てまう).
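The standard contractions behave regularly enough to sketch as ordered string rewrites. A romaji-only illustration (the function name and rule list are my own and far from exhaustive):

```python
# Sketch of some common colloquial contractions (romaji only; names and
# rule list are my own, not exhaustive). Order matters: the "te shimau"
# rules must run before the bare "te wa" / "te iru" rules.

def contract(phrase):
    """Apply common speech contractions to a romanized phrase."""
    rules = (("te shimau", "chau"), ("de shimau", "jau"),
             ("te wa", "cha"), ("de wa", "ja"),
             ("te iru", "teru"), ("te oku", "toku"))
    for pattern, replacement in rules:
        phrase = phrase.replace(pattern, replacement)
    return phrase

print(contract("makete shimau"))     # makechau
print(contract("tabete wa ikenai"))  # tabecha ikenai
print(contract("nete iru"))          # neteru
```

A Kansai variant could be modeled by swapping in a different rule list, e.g. ("te shimau", "temau"), which is exactly how the dialectal contractions parallel the standard ones.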
Other independent words
Adverbs in Japanese are not as tightly integrated into the morphology as in many other languages. Indeed, adverbs are not an independent class of words, but rather a role played by other words. For example, every adjective in the continuative form can be used as an adverb; thus, 弱い yowai 'weak' (adj) → 弱く yowaku 'weakly' (adv). The primary distinguishing characteristic of adverbs is that they cannot occur in a predicate position, just as in English. The following classification of adverbs is not intended to be authoritative or exhaustive.
- Verbal adverbs
- are verbs in the continuative form with the particle ni. E.g. 見る miru 'to see' → 見に mi ni 'for the purpose of seeing', used for instance as: 見に行く mi ni iku, go to see (something).
- Adjectival adverbs
- are adjectives in the continuative form, as mentioned above.
- Nominal adverbs
- are grammatical nouns that function as adverbs. Example: 一番 ichiban 'most highly'.
- Sound symbolism
- are words that mimic sounds or concepts. Examples: きらきら kirakira 'sparklingly', ぽっくり pokkuri 'suddenly', するする surusuru 'smoothly (sliding)', etc.
Often, especially for sound symbolism, the particle to "as if" is used. See the article on Japanese sound symbolism.
Conjunctions and interjections
Examples of conjunctions: そして soshite 'and then', また mata 'and then/again', etc. Although called "conjunctions", these words are, as English translations show, actually a kind of adverbs.
Examples of interjections: はい (hai, yes/OK/uh), へえ (hee, wow!), いいえ (iie, no/no way), おい (oi, hey!), etc. This part of speech is not very different from that of English.
Particles

Particles in Japanese are postpositional: they immediately follow the component they modify. A full listing of particles is beyond the scope of this article, so only a few prominent particles are listed here. Keep in mind that the pronunciation and spelling differ for the particles wa (は), e (へ) and o (を): this article follows the Hepburn style of romanizing them according to pronunciation rather than spelling.
Topic, theme, and subject: は wa and が ga
The complex distinction between the so-called topic (は wa) and subject (が ga) particles has been the theme of many doctoral dissertations and scholarly disputes. The clause 象は鼻が長い zō-wa hana-ga nagai is well known for appearing to contain two subjects. It does not simply mean "the elephant's nose is long", as that can be translated as 象の鼻は長い zō-no hana-wa nagai. Rather, a more literal translation would be "(speaking of) the elephant, its nose is long".
Two major scholarly surveys of Japanese linguistics in English, (Shibatani 1990) and (Kuno 1973), clarify the distinction. To simplify matters, the referents of wa and ga in this section are called the topic and subject respectively, with the understanding that if either is absent, the grammatical topic and subject may coincide.
As an abstract and rough approximation, the difference between wa and ga is a matter of focus: wa gives focus to the action of the sentence, i.e., to the verb or adjective, whereas ga gives focus to the subject of the action. However, a more useful description must proceed by enumerating uses of these particles.
However, when learners are first introduced to the topic and subject markers wa and ga, they are usually told that the difference is simpler: the topic marker, wa, is used to declare or make a statement, while the subject marker, ga, is used to present new information or to ask for it.
The use of wa to introduce a new theme of discourse is directly linked to the notion of grammatical theme. Opinions differ on the structure of discourse theme, though it seems fairly uncontroversial to imagine a first-in-first-out hierarchy of themes threaded through the discourse. Human limitations, of course, restrict the scope and depth of themes, and later themes may cause earlier themes to expire. In these sorts of sentences, the standard translation into English uses constructs like "speaking of X" or "on the topic of X", though such translations tend to be bulky because they fail to use the thematic mechanisms of English. For lack of a comprehensive strategy, many teachers of Japanese emphasize the "speaking of X" pattern without sufficient warning.
- JON wa gakusei desu
- (On the topic of) John, (he) is a student.
A common linguistic joke shows the insufficiency of rote translation with the sentence 僕はウナギだ boku wa unagi da, which per the pattern would translate as "I am an eel." (or "(As of) me is eel"). Yet, in a restaurant this sentence can reasonably be used to say "My order is eel" (or "I would like to order an eel"), with no intended humour. This is because the sentence should be literally read, "As for me, it is an eel," with "it" referring to the speaker's order. The topic of the sentence is clearly not its subject.
Related to the role of wa in introducing themes is its use in contrasting the current topic and its aspects from other possible topics and their aspects. The suggestive pattern is "X, but..." or "as for X, ...".
- ame wa futte imasu ga...
- The rain is falling, but...
Because of its contrastive nature, the topic cannot be undefined.
- *dareka wa hon o yonde iru
- *Someone is reading the book.
In this use, ga is required: dareka ga hon o yonde iru ("someone is reading the book").
In practice, the distinction between thematic and contrastive wa is not that useful. Suffice it to say that there can be at most one thematic wa in a sentence, and it has to be the first wa if one exists, and the remaining was are contrastive. For completeness, the following sentence (due to Kuno) illustrates the difference.
- boku ga shitte iru hito wa daremo konakatta
- (1) Of all the people I know, none came.
- (2) (People came but), there weren't any of the people I know.
The first interpretation is the thematic wa, treating "the people I know" (boku ga shitte iru hito) as the theme of the predicate "none came" (dare mo konakatta). That is, if I know A, B, ..., Z, then none of the people who came were A, B, ..., Z. The second interpretation is the contrastive wa. If the likely attendees were A, B, ..., Z, and of them I know P, Q and R, then the sentence says that P, Q and R did not come. The sentence says nothing about A', B', ..., Z', all of whom I know, but none of whom were likely to come. (In practice the first interpretation is the likely one.)
Unlike wa, the subject particle ga nominates its referent as the sole satisfier of the predicate. This distinction is famously illustrated by the following pair of sentences.
- Jon-san wa gakusei desu
- John is a student. (There may be other students among the people we're talking about.)
- (Kono gurūpu no naka de) Jon ga gakusei desu
- (Of all the people we are talking about) it is John who is the student.
It may be useful to think of the distinction in terms of the question each statement could answer, e.g.:
- Jon-san no shigoto wa nan desu ka
- What is John's occupation?
for the first statement, versus
- Dochira no kata ga gakusei desu ka
- Which one (of them) is the student?
for the second.
Similarly, in a restaurant, if the waitress asks who has ordered the eels, the customer who ordered it can say
- Boku ga unagi da
- The eels are for me (not these other people).
For certain verbs, typically ga instead of o is used to mark what would be the direct object in English:
- Jon-san wa furansu-go ga dekiru
- John knows French.
These notions that would be thought of as actions, or "verbs" in English, e.g. 出来る (to be able to), ほしい (is/are desirable), 好きだ (is/are liked), 嫌いだ (is/are disliked), etc., are in fact simply adjectives and intransitive verbs whose subject is what would be a direct object in the English translation. The equivalent of the English subject is instead the topic in Japanese and thus marked by wa, reflecting the topic-prominent nature of Japanese grammar.
Objects, locatives, instrumentals: を o, で de, に ni, へ e
The direct object of transitive verbs is indicated by the object particle を o.
- Jon-san wa aoi sētā o kite iru
- John is wearing a blue sweater.
This particle can also mean "through" or "along" or "out of" when used with motion verbs.
- MEARI ga hosoi michi o aruite ita
- Mary was walking along a narrow road.
- kokkyō no nagai TONNERU o nukeru to yukiguni de atta
- The train came out of the long tunnel into the snow country.
The general instrumental particle is で de, which can be translated as "using" or "by":
- niku wa NAIFU de kiru koto
- Meat must be cut with a knife.
- densha de ikimashō
- Let's go by train.
This particle also has other uses: "at" (temporary location):
- machikado de sensei ni atta
- (I) met my teacher at the street corner.
- umi de oyogu no wa muzukashii
- Swimming in the sea is hard.
"With" or "in (the span of)":
- geki wa shujinkō no shi de owaru
- The play ends with the protagonist's death.
- ore wa nibyō de katsu
- I'll win in two seconds.
The general locative particle is に ni.
- Tōkyō ni ikimashō
- Let's go to Tokyo
In this function it is interchangeable with へ e. However, ni has additional uses: "at (prolonged)":
- watashi wa Ōtemachi itchōme 99 banchi ni sunde imasu
- I live at Ōtemachi itchōme 99 banchi.
- kōri wa mizu ni uku
- Ice floats on water.
"In (some year)", "at (some point in time)":
- haru no yūgure ni...
- On a spring eve...
Quantity and extents: と to, も mo, か ka, や ya, から kara, まで made
To conjoin nouns, と to is used.
- Kaban ni wa kyōkasho san-satsu to manga-bon go-satsu o irete imasu
- I have three textbooks and five comic books in the bag.
The additive particle も mo can be used to conjoin larger nominals and clauses.
- YOHAN wa DOITSU-jin da. BURIGETTA mo DOITSU-jin da
- Johann is a German. Brigitte is a German too.
- kare wa eiga SUTĀ de ari, seijika de mo aru
- He is a movie star and also a politician.
For an incomplete list of conjuncts, や ya is used.
- BORISU ya IBAN o yobe
- Call Boris, Ivan, etc.
When only one of the conjuncts is necessary, the disjunctive particle か ka is used.
- sushi ka sashimi ka, nanika chūmon shite ne
- Please order sushi or sashimi or something.
Quantities are listed between から kara 'from' and まで made 'to'.
- Kashi 92 do kara 96 do made no netsu wa shinpai suru mono de wa nai
- A temperature between 92 and 96 degrees Fahrenheit is not worrisome.
This pair can also be used to indicate time or space.
- asa ku-ji kara jūichi-ji made jugyō ga aru n da
- You see, I have classes between 9 a.m. and 11 a.m.
Because kara indicates starting point or origin, it has a related use as "because", analogously to English "since" (in the sense of both "from" and "because"):
- SUMISU-san wa totemo sekkyokuteki na hito desu kara, itsumo zenbu tanomarete iru no kamoshiremasen
- Mr. Smith, because you're so assertive, you may always be asked to do everything.
The particle kara and a related particle yori are used to indicate lowest extents: prices, business hours, etc.
- Watashitachi no mise wa shichi-ji yori eigyō shite orimasu
- Our shop is open for business from 7 onwards.
Yori is also used in the sense of "than".
- omae wa nē-chan yori urusai n da
- You are louder/more talkative than my elder sister!
Coordinating: と to, に ni, よ yo
The particle と to is used to set off quotations.
- 「殺して... 殺して」とあの子は言っていた。
- "koroshite... koroshite" to ano ko wa itteita
- The girl was saying, "Kill me... kill me."
- neko ga NYĀ NYĀ to naku
- The cat says meow, meow.
It is also used to indicate a manner of similarity, "as if", "like" or "the way".
- kare wa "aishiteru yo" to itte, pokkuri to shinda
- He said "I love you," and dropped dead.
In a related conditional use, it functions like "after/when", or "upon".
- ame ga agaru to, kodomo-tachi wa jugyō o wasurete, hi no atatte iru mizutamari no yūwaku ni muchū ni naru
- Rain stops and then: children, forgetting their lessons, give in to the temptation of sun-faced puddles.
- kokkyō no nagai TONNERU o nukeru to, yukiguni de atta
- The train came out of the long tunnel (and then) into the snow country.
Finally it is used with verbs like to meet (with) (会う au) or to speak (with) (話す hanasu).
- JON ga MEARI to hajimete atta no wa, 1942 nen no haru no yūguredoki no koto datta
- John met Mary for the first time on a dusky afternoon of spring in 1942.
This last use is also a function of the particle に ni, but to indicates reciprocation which ni does not.
- ジョンはメアリーと恋愛している。(more commonly ジョンはメアリーと付き合っている。)
- JON wa MEARI[Ī] to ren'ai shite iru (JON wa MEARI[Ī] to tsukiatte iru)
- John and Mary are in love.
- ジョンはメアリーに恋愛している。(more commonly ジョンはメアリーに恋している。)
- JON wa MEARI[Ī] ni ren'ai shite iru (JON wa MEARI[Ī] ni koi shite iru)
- John loves Mary (but Mary might not love John back).
Finally, the particle よ yo is used in a hortative or vocative sense.
- kawaii musume yo, watashi ni kao wo shikameruna
- Oh my beloved daughter, don't frown at me so!
The sentence-final particle か ka turns a declarative sentence into a question.
- sochira wa amerika-jin deshō ka?
- Are you perchance an American?
Other sentence-final particles add emotional or emphatic impact to the sentence. The particle ね ne softens a declarative sentence, similar to English "you know?", "eh?", "I tell you!", "isn't it?", "aren't you?", etc.
- kare ni denwa shinakatta no ne
- You didn't call him up, did you?
- chikajika RONDON ni hikkosareru sō desu ne
- I hear you're moving to London soon. Is that true?
A final よ yo is used in order to soften insistence, warning or command, which would sound very strong without any final particles.
- uso nanka tsuite nai yo!
- I'm not lying!
There are many such emphatic particles; some examples: ぜ ze and ぞ zo usually used by males; な na a less formal form of ne; わ wa used by females (and males in the Kansai region) like yo, etc. They are essentially limited to speech or transcribed dialogue.
Compound particles are formed with at least one particle together with other words, including other particles. The commonly seen forms are:
- particle + verb (term. or cont. or -te form)
- particle + noun + particle
- noun + particle
Other structures are rarer, though possible. A few examples:
- sono ken ni kan-shite shitte-iru kagiri no koto o oshiete moraitai
- Kindly tell me everything you know concerning that case. (particle + verb in cont.)
- gaikokugo o gakushū suru ue de taisetsu na koto wa mainichi no doryoku ga mono o iu to iu koto de aru
- In studying a foreign language, daily effort gives the most rewards. (noun + particle)
- ani wa ryōshin no shinpai o yoso ni, daigaku o yamete shimatta
- Ignoring my parents' worries, my brother dropped out of college. (particle + noun + particle)
All auxiliary verbs attach to a verbal or adjectival stem form and conjugate as verbs. In modern Japanese there are two distinct classes of auxiliary verbs:
- Pure auxiliaries (助動詞 jodōshi)
- are usually just called verb endings or conjugated forms. These auxiliaries do not function as independent verbs.
- Helper auxiliaries (補助動詞 hojodōshi)
- are normal verbs that lose their independent meaning when used as auxiliaries.
In classical Japanese, which was more heavily agglutinating than modern Japanese, the category of auxiliary verb included every verbal ending after the stem form, and most of these endings were themselves inflected. In modern Japanese, however, some of them have stopped being productive. The prime example is the classical auxiliary たり -tari, whose modern forms た -ta and て -te are no longer viewed as inflections of the same suffix, and can take no further affixes.
|auxiliary||group||attaches to||meaning modification||example|
|ます masu||irregular1||continuative||makes the sentence polite||書く kaku 'to write' → 書きます kakimasu|
|られる rareru2||2b||irrealis of grp. 2||makes V passive/honorific/potential||見る miru 'to see' → 見られる mirareru 'to be able to see'; 食べる taberu 'to eat' → 食べられる taberareru 'to be able to eat'|
|れる reru||2b||irrealis of grp. 1||makes V passive/honorific||飲む nomu 'to drink/swallow' → 飲まれる nomareru 'to be drunk' (Passive form of drink, not a synonym for intoxicated.)|
|る ru3||hyp. of grp. 1||makes V potential||飲む nomu 'to drink/swallow' → 飲める nomeru 'to be able to drink'|
|させる saseru4||2b||irrealis of grp. 2||makes V causative||考える kangaeru 'to think' → 考えさせる kangaesaseru 'to cause to think'|
|せる seru||2b||irrealis of grp. 1||makes V causative||思い知る omoishiru 'to realize' → 思い知らせる omoishiraseru 'to cause to realize/to teach a lesson'|
- 1 ます masu has stem forms: irrealis ませ and ましょ, continuative まし, terminal ます, attributive ます, hypothetical ますれ, imperative ませ.
- 2 られる rareru in potential usage is sometimes shortened to れる reru (grp. 2); thus 食べれる tabereru 'to be able to eat' instead of 食べられる taberareru. But it is considered non-standard.
- 3 Technically, such an auxiliary verb る, ru, denoting the potential form, does not exist, as for example 飲める nomeru is thought to actually come from the contraction of 飲み得る, nomieru (see below). However, textbooks tend to teach it this way. (飲める in old texts would have been the attributive past tense form of 飲む instead of the potential meaning.)
- 4 させる saseru is sometimes shortened to さす sasu (grp. 1), but this usage is somewhat literary.
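The attachment rule for ます masu in the table above (it attaches to the continuative stem) can be sketched in code. The following is a minimal, illustrative romaji-based conjugator; the function names and the small mapping table are my own, and real morphology has more cases than shown here:

```python
# Toy illustration of how ます masu attaches to the continuative stem.
# Group 1 (godan) verbs: the final -u syllable shifts to its -i row
# (ku -> ki, mu -> mi, ...); group 2 (ichidan) verbs: the final -ru is dropped.

# Final-syllable mapping for group 1 verbs in romaji.
# "tsu" must be checked before "su" so matsu -> machi, not *mashi.
GODAN_I_ROW = {
    "tsu": "chi", "ku": "ki", "gu": "gi", "su": "shi",
    "nu": "ni", "bu": "bi", "mu": "mi", "ru": "ri", "u": "i",
}

def continuative(verb: str, group: int) -> str:
    """Return the continuative (masu) stem of a romaji verb."""
    if group == 2:          # ichidan: drop the final -ru
        return verb[:-2]
    for ending, stem_end in GODAN_I_ROW.items():
        if verb.endswith(ending) and ending != "u":
            return verb[: -len(ending)] + stem_end
    return verb[:-1] + "i"  # plain -u, e.g. kau -> kai

def polite(verb: str, group: int) -> str:
    return continuative(verb, group) + "masu"

print(polite("kaku", 1))    # kakimasu ('writes', polite)
print(polite("nomu", 1))    # nomimasu ('drinks', polite)
print(polite("taberu", 2))  # tabemasu ('eats', polite)
```

The sketch relies on the dict preserving insertion order so that longer endings are tried first.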
Much of the agglutinative flavour of Japanese stems from helper auxiliaries, however. The following table contains a small selection of many such auxiliary verbs.
|auxiliary||group||attaches to||meaning modification||example|
|ある aru 'to be (inanimate)'||1||-te form (only for trans.)||indicates state modification||開く hiraku 'to open' → 開いてある hiraite-aru 'opened and is still open'|
|いる iru 'to be (animate)'||2a||-te form||progressive aspect||寝る neru 'to sleep' → 寝ている nete-iru 'is sleeping'|
|indicates state modification||閉まる shimaru 'to close (intransitive)' → 閉まっている shimatte-iru 'is closed'|
|おく oku 'to put/place'||1||-te form||"do something in advance"||食べる taberu 'to eat' → 食べておく tabete-oku 'eat in advance'|
|"keep"||開ける akeru 'to open' → 開けておく akete-oku 'keep it open'|
|行く iku 'to go'||1||-te form||"goes on V-ing"||歩く aruku 'to walk' → 歩いて行く aruite-iku 'keep walking'|
|くる kuru 'to come'||ka||-te form||inception, "start to V"||降る furu 'fall' → 降ってくる futte-kuru 'start to fall'|
|perfection, "have V-ed" (only past-tense)||生きる ikiru 'live' → 生きてきた ikite-kita 'have lived'|
|conclusion, "come to V"||異なる kotonaru 'differ' → 異なってくる kotonatte-kuru 'come to differ'|
|始める hajimeru 'to begin'||2b||continuative||"V begins", "begin to V"||書く kaku 'to write' → 書き始める kaki-hajimeru 'start to write'|
|(punctual & subj. must be plural)||着く tsuku 'to arrive' → 着き始める tsuki-hajimeru 'have all started to arrive'|
|出す dasu 'to emit'||1||continuative||"start to V"||輝く kagayaku 'to shine' → 輝き出す kagayaki-dasu 'to start shining'|
|みる miru 'to see'||1||-te form||"try to V"||する suru 'do' → してみる shite-miru 'try to do'|
|なおす naosu 'to correct/heal'||1||continuative||"do V again, correcting mistakes"||書く kaku 'to write' → 書きなおす kaki-naosu 'rewrite'|
|あがる agaru 'to rise'||1||continuative||"do V thoroughly" / "V happens upwards"||立つ tatsu 'to stand' → 立ち上がる tachi-agaru 'stand up'; 出来る dekiru 'to come out' → 出来上がる deki-agaru 'be completed'|
|得る eru/uru 'to be able'||(see note at bottom)||continuative||indicates potential||ある aru 'to be' → あり得る ariuru 'is possible'|
|かかる/かける kakaru/kakeru 'to hang/catch/obtain'||1||continuative (only for intrans., non-volit.)||"about to V", "almost V", "to start to V"||溺れる oboreru 'drown' → 溺れかける obore-kakeru 'about to drown'|
|きる kiru 'to cut'||1||continuative||"do V completely"||食べる taberu 'to eat' → 食べきる tabe-kiru 'to eat it all'|
|消す kesu 'to erase'||1||continuative||"cancel by V", "deny with V"||揉む momu 'to rub' → 揉み消す momi-kesu 'to rub out, to extinguish'|
|込む komu 'to enter deeply/plunge'||1||continuative||"V deep in", "V into"||話す hanasu 'to speak' → 話し込む hanashi-komu 'to be deep in conversation'|
|下げる sageru 'to lower'||2b||continuative||"V down"||引く hiku 'to pull' → 引き下げる hiki-sageru 'to pull down'|
|過ぎる sugiru 'to exceed'||2a||continuative||"overdo V"||言う iu 'to say' → 言いすぎる ii-sugiru 'to say too much, to overstate'|
|付ける tsukeru 'to attach'||2b||continuative||"become accustomed to V"||行く iku 'to go' → 行き付ける iki-tsukeru 'be used to (going)'|
|続ける tsuzukeru 'to continue'||2b||continuative||"keep on V"||降る furu 'to fall' (e.g. rain) → 降り続ける furi-tsuzukeru 'to keep falling'|
|通す tōsu 'to show/thread/lead'||1||continuative||"finish V-ing"||読む yomu 'to read' → 読み通す yomi-tōsu 'to finish reading'|
|抜ける nukeru 'to shed/spill/desert'||2b||continuative (only for intrans.)||"V through"||走る hashiru 'to run' → 走り抜ける hashiri-nukeru 'to run through (swh)'|
|残す nokosu 'to leave behind'||1||continuative||"by doing V, leave something behind"||思う omou 'to think' → 思い残す omoi-nokosu 'to regret' (lit: to have something left to think about)|
|残る nokoru 'to be left behind'||1||continuative (only for intrans.)||"be left behind, doing V"||生きる ikiru 'live' → 生き残る iki-nokoru 'to survive' (lit: to be left alive)|
|分ける wakeru 'to divide/split/classify'||2b||continuative||"the proper way to V"||使う tsukau 'use' → 使い分ける tsukai-wakeru 'to indicate the proper way to use'|
|忘れる wasureru 'to forget'||2b||continuative||"to forget to V"||聞く kiku 'to ask' → 聞き忘れる kiki-wasureru 'to forget to ask'|
|合う au 'to come together'||1||continuative||"to do V to each other", "to do V together"||抱く daku 'to hug' → 抱き合う daki-au 'to hug each other'|
- Note: 得る eru/uru is the only modern verb of shimo nidan type (and it is different from the shimo nidan type of classical Japanese), with conjugations: irrealis え, continuative え, terminal える or うる, attributive うる, hypothetical うれ, imperative えろ or えよ.
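The helper-auxiliary pattern in the table above (-te form of the main verb plus an auxiliary) can also be sketched in code. This is a simplified romaji sketch covering only the regular te-form rules plus the two irregular verbs; the function names are my own:

```python
# Toy sketch: form V-te + auxiliary compounds such as nete-iru 'is sleeping'.
# Group 1 te-form endings in romaji, e.g. kaku -> kaite, nomu -> nonde.
# "tsu" must be checked before "su" so matsu -> matte, not *mashite.
GODAN_TE = {
    "tsu": "tte", "ku": "ite", "gu": "ide", "su": "shite",
    "nu": "nde", "bu": "nde", "mu": "nde", "ru": "tte", "u": "tte",
}

def te_form(verb: str, group: int) -> str:
    """Return the -te form of a romaji verb (regular cases only)."""
    if group == 2:           # ichidan: neru -> nete, taberu -> tabete
        return verb[:-2] + "te"
    if verb == "suru":       # irregular
        return "shite"
    if verb == "kuru":       # irregular
        return "kite"
    for ending, te_end in GODAN_TE.items():
        if verb.endswith(ending):
            return verb[: -len(ending)] + te_end
    raise ValueError(f"not a recognised verb: {verb}")

def with_auxiliary(verb: str, group: int, aux: str) -> str:
    return te_form(verb, group) + "-" + aux

print(with_auxiliary("neru", 2, "iru"))    # nete-iru 'is sleeping'
print(with_auxiliary("suru", 1, "miru"))   # shite-miru 'try to do'
print(with_auxiliary("taberu", 2, "oku"))  # tabete-oku 'eat in advance'
```

Note that 行く iku is an exception to the -ku rule (itte, not *iite), which a real conjugator would have to special-case.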
Glue ear is a condition where the middle ear fills with glue-like fluid instead of air. This causes dulled hearing. In most cases it clears without any treatment. An operation to clear the fluid and to insert ventilation tubes (grommets) may be advised if glue ear persists.
What is glue ear?
Glue ear means that the middle ear is filled with fluid that looks like glue. It can affect one or both ears. The fluid deadens the vibrations that sound creates in the eardrum and tiny bones (ossicles). The cochlea therefore receives weakened vibrations, and so the volume of hearing is turned down. Glue ear usually occurs in young children but it can develop at any age. Glue ear is sometimes called otitis media with effusion (OME).
How does the ear work?
The ear is divided into three parts - the outer, middle and inner ear. Sound waves come into the outer (external) ear and hit the eardrum, causing the eardrum to vibrate. The vibrations pass from the eardrum to the middle-ear bones (ossicles). These bones then transmit the vibrations to the cochlea in the inner ear. The cochlea converts the vibrations to electric signals which are sent down the ear nerve to the brain. The brain interprets these signals as sound. For more information about the structure of the ear and how hearing works, see the separate leaflet called Hearing Problems.
The middle ear behind the eardrum is normally filled with air. The middle ear is connected to the back of the nose by a thin channel, the Eustachian tube. This tube is normally closed. However, from time to time (usually when we swallow, chew or yawn), it opens to let air into the middle ear and to drain any fluid out.
What causes glue ear?
The cause is probably due to the Eustachian tube not working properly. The balance of fluid and air in the middle ear may become altered if the Eustachian tube is narrow, blocked, or does not open properly. Air in the middle ear may gradually pass into the nearby cells if it is not replaced by air coming up the Eustachian tube. A vacuum may then develop in the middle ear. This may cause fluid to seep into the middle ear from the nearby cells.
Some children develop glue ear after a cough, cold, or ear infection when extra mucus is made. The mucus may build up in the middle ear and not drain well down the Eustachian tube. However, in many cases glue ear does not begin with an ear infection.
How common is glue ear?
Glue ear is common. By 10 years of age, 8 out of 10 children will have had at least one episode of OME. It is most common between the ages of 2 and 5 years. Boys are more commonly affected than girls. Most cases occur in winter. It is more common in children who:
- Are in daycare.
- Have an older brother or sister.
- Live in homes where people smoke.
- Have cleft palate, which can affect how well the Eustachian tube works.
- Have Down's syndrome.
- Have allergic rhinitis - eg, hay fever.
Can glue ear be prevented?
The cause of glue ear is not fully understood and there is no way of preventing most cases. However, the risk of developing glue ear is less in children who live in homes free of cigarette smoke and who are breast-fed.
Glue ear symptoms
Dulled hearing
This is the main symptom. Your child's hearing does not go completely and the hearing loss is often mild. However, the severity of hearing loss varies from child to child, is sometimes quite severe and can vary from day to day in the same child. Hearing varies according to the thickness of the fluid and other factors. For example, it is often worse during colds. Older children may say if their hearing is dulled. However, you may not notice dulled hearing if your child is younger, particularly if only one ear is affected. You may find that your child turns the TV or radio up loud, or often says "What?" or "Pardon?" when you talk to them. Babies may appear less responsive to normal sounds.
Earache
This is not usually a main symptom but mild earache may occur from time to time. Children and babies may pull at their ears if they have mild pain. However, the gluey fluid is a good food for germs (bacteria) and ear infections are more common in children with glue ear. This may then cause bad earache for the duration of an infection. Always have some painkiller in your home in case earache develops.
Development and behaviour may be affected in a small number of cases
If dulled hearing is not noticed then children may not learn so well at school if they cannot hear the teacher. Your child may also become frustrated if they cannot follow what is going on. They may feel left out of some activities. They can become quiet and withdrawn if they cannot hear so well.
There has been concern that dulled hearing from glue ear may cause problems with speech and language development. This in turn was thought perhaps to lead to poor school achievement and behavioural problems. However, research studies that have looked at this issue are reassuring. The studies showed that, on average, children with glue ear had no more chance (or just a little more chance) of having long-term behavioural problems or poor school performance compared with children without glue ear. However, these studies looked at the overall average picture. There is still a concern that the development of some children with glue ear may be affected - in particular, some children with untreated severe and persistent glue ear.
So, in short, developmental delay including speech and language is unlikely to occur in most children with glue ear. However, if you have any concern about your child's development, you should tell a doctor.
Research has also detected a link between glue ear and attention deficit hyperactivity disorder (ADHD) as well as anxiety and depression.
How does glue ear progress?
The outlook is usually good. Many children only have symptoms for a short time. The fluid often drains away gradually, air returns and hearing then returns to normal.
- Hearing is back to normal within three months in about 5 in 10 cases.
- Hearing is back to normal within a year in more than 9 in 10 cases.
- Glue ear persists for a year or more in a small number of cases.
Some children have several episodes of glue ear which cause short but repeated (recurring) episodes of reduced hearing. The total time of reduced hearing in childhood may then add up to many months.
Are any tests needed?
A referral to an ear, nose and throat (ENT) specialist may be advised at some point. This may be straightaway for babies who have hearing loss. (This is to rule out other serious causes of hearing loss.) It may be after a period of watchful waiting in older children who previously had good hearing. Hearing tests and ear tests can confirm the cause of hearing loss and show how bad the hearing has become.
Glue ear treatment
Can medication clear glue ear?
Various medicines have been tried to help clear glue ear. For example, antihistamines, steroids, decongestants, antibiotics and medicines to thin mucus. However, research studies have shown that none of these medicines works in the treatment of glue ear.
Wait and see (watchful waiting)
No treatment is usually advised at first as the outlook is good. Typically, a doctor may advise that you wait three months to see if the glue ear clears. Watchful waiting is sometimes called active monitoring. Watchful waiting isn't usually an option for some children - eg, those with Down's syndrome or cleft palate.
Auto-inflation
For this treatment a special balloon is blown up by the child using their nose. This is called auto-inflation. It puts back pressure into the nose and may help to open up the Eustachian tube and allow better drainage of the fluid. The child needs to do this regularly until the fluid clears. The research studies that looked into this treatment found that it seems to help in some cases but not all. It may improve middle ear function and reduce the need for an operation. It is difficult for young children to do properly. With well-motivated older children who can use the device, it may be worth a try. It is not thought to cause any side-effects or problems. You can get an auto-inflation kit called Otovent® on prescription, or you can buy it from pharmacies.
A small operation may be advised by an ear specialist if your child's glue ear persists, or is severe. This involves inserting small tubes called grommets (see below). The operation isn't done as often as it used to be because it is now realised that most cases of glue ear get better without treatment. Also there isn't a lot of evidence that surgery makes much difference to a child's speech or language development.
Hearing aids are an option instead of an operation to insert grommets in children with hearing loss who have glue ear in both ears. The hearing aids would usually only be used until the glue ear clears away. You and your child should have an opportunity to discuss this option with the specialist and your views should be taken into account. The anxiety caused to some children by having to wear aids sometimes outweighs the benefits.
What happens during the operation?
The operation is usually done as a day case, so an overnight stay in hospital is not usually needed. Your child will need to be put to sleep for a short time (have a general anaesthetic). The operation involves making a tiny cut (about 2-3 mm) in the eardrum whilst the child is under anaesthetic. The fluid is drained and a ventilation tube (grommet) is then usually inserted. A grommet is like a tiny pipe that is put across the eardrum. The grommet lets air get into the middle ear. Hearing improves immediately, but the improvement only lasts as long as the grommet stays in place.
What happens to the grommet after it is put into the ear?
Grommets normally fall out of the ear as the eardrum grows, usually after 6-12 months. The grommet is so small that you probably won't notice it. By this time the glue ear has often gone away. The hole in the eardrum made for the grommet normally heals quickly when the grommet falls out. Sometimes grommets need to be put in on more than one occasion if glue ear returns (recurs).
Are there any complications?
Most children have no problems after surgery. Discharge and infection are the most common complications. Rarely, a condition called tympanosclerosis occurs in which a chalky sort of substance develops in the eardrum. It's not certain whether this causes any long-term problems. A small hole (perforation) of the eardrum occasionally persists in the eardrum after the grommet has come out. This usually heals without treatment but occasionally a small operation is needed to fix it. Minor damage and scarring to the eardrum may occur but this is unlikely to cause any problems.
All general anaesthetics carry a risk, but only a very tiny one. Your anaesthetist will explain this to you.
Advice for children with grommets
Children with grommets can go swimming but should avoid diving. Swimming caps and earplugs are not necessary, although you should avoid ducking your child's head in soapy water. Children with grommets do not need to avoid flying in an aeroplane. If anything, they will have less pain during take-off and landing, as the pressure between the middle and outer ear will be more equal.
What can I do for my child with glue ear?
The main thing is to be aware that your child will have dulled hearing until the condition goes away or is treated. The following are some tips:
- Talk clearly and more loudly than usual (but you don't have to shout).
- Attract your child's attention before speaking to him or her. Talk directly face to face and down at their level.
- Cut out background noise when you talk to your child (for example, turn off the TV).
- Understand that your child's frustration or bad behaviour may be due to dulled hearing.
- Discuss the problem with the teacher if your child is at school or nursery. Sitting your child near to the teacher may help. Often in a class there are several children with glue ear and raising awareness of glue ear with teachers is helpful.
- Don't let anybody smoke in the same home as your child.
Even after an episode of glue ear has cleared up, remember the problem may return for a while in the future. In particular, after a cold or after an ear infection.
Are children routinely checked for hearing?
Yes. All children should have a routine hearing test either shortly after birth or aged about 8-9 months. However, most cases of glue ear develop in children aged 2-5 years. Therefore, hearing may have been fine at the routine hearing test but then become dulled at a later time. See a doctor if you suspect your child has dulled hearing at any age.
Does glue ear go away?
As children grow older, problems with glue ear usually go away. This is because the Eustachian tube widens and the drainage of the middle ear improves. In general, the older the child, the less likely that fluid will build up in the middle ear. Also, in older children, any fluid that does build up after a cold is likely to clear quickly. Glue ear rarely continues (persists) in children over the age of 8. In nearly all cases, once the fluid has gone, hearing returns to normal. Rarely, some adults are troubled with glue ear.
Rarely, long-term glue ear may lead to middle ear damage and some permanent hearing loss.
Further reading and references
Surgical management of children with otitis media with effusion (OME); NICE Clinical Guideline (February 2008)
Browning GG, Rovers MM, Williamson I, et al; Grommets (ventilation tubes) for hearing loss associated with otitis media with effusion in children. Cochrane Database Syst Rev. 2010 Oct 6(10):CD001801. doi: 10.1002/14651858.CD001801.pub3.
Otitis media with effusion; NICE CKS, October 2016 (UK access only)
Perera R, Glasziou PP, Heneghan CJ, et al; Autoinflation for hearing loss associated with otitis media with effusion. Cochrane Database Syst Rev. 2013 May 315:CD006285. doi: 10.1002/14651858.CD006285.pub2.
Otovent nasal balloon for otitis media with effusion. Medtech innovation briefing [MIB59]; National Institute for Health and Care Excellence (NICE), March 2016
Venekamp RP, Burton MJ, van Dongen TM, et al; Antibiotics for otitis media with effusion in children. Cochrane Database Syst Rev. 2016 Jun 12(6):CD009163. doi: 10.1002/14651858.CD009163.pub3.
The following topics show how to calculate the slope of a line and how to work with linear equations, with examples and solutions.
Finding the Slope of a Line from Two Points
Graphing a Line Using a Point and the Slope
Introduction to Slope-Intercept Form
Writing the Equation of a Line
Introduction to Point-Slope Form
Vertical and Horizontal Lines
Graphing a Standard Form Linear Equation
Equations of Parallel and Perpendicular Lines
Modeling Rate of Change with Linear Equations
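The topics above build on the two-point slope formula m = (y2 − y1)/(x2 − x1) and the point-slope form y − y1 = m(x − x1). A quick worked sketch (the function names are my own):

```python
# Slope of a line through two points, and its slope-intercept equation.
def slope(p1, p2):
    """Slope m = (y2 - y1) / (x2 - x1) of the line through p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        raise ValueError("vertical line: slope is undefined")
    return (y2 - y1) / (x2 - x1)

def slope_intercept(p1, p2):
    """Return (m, b) so that the line through p1 and p2 is y = m*x + b."""
    m = slope(p1, p2)
    b = p1[1] - m * p1[0]   # from point-slope form: y - y1 = m(x - x1)
    return m, b

m, b = slope_intercept((1, 2), (3, 6))
print(m, b)  # 2.0 0.0  -> the line y = 2x
```

A vertical line (equal x-coordinates) has no slope, which is why the sketch raises an error rather than dividing by zero.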