Electric Potential and Potential Energy due to Point Charges

Consider an isolated positive point charge q. Recall that such a charge produces an electric field that is directed radially outward from the charge. To find the electric potential at a point located a distance r from the charge, we begin with the general expression for the potential difference,

$$V_B - V_A = -\int_A^B \mathbf{E} \cdot d\mathbf{s},$$

where A and B are the two arbitrary points shown in Figure 25.6. At any field point, the electric field due to the point charge is \(\mathbf{E} = k_e q \hat{\mathbf{r}}/r^2\) (Eq. 23.4), where \(\hat{\mathbf{r}}\) is a unit vector directed from the charge toward the field point. The quantity \(\mathbf{E} \cdot d\mathbf{s}\) can be expressed as

$$\mathbf{E} \cdot d\mathbf{s} = k_e \frac{q}{r^2}\,\hat{\mathbf{r}} \cdot d\mathbf{s}.$$

Because the magnitude of \(\hat{\mathbf{r}}\) is 1, the dot product \(\hat{\mathbf{r}} \cdot d\mathbf{s} = ds\cos\theta\), where \(\theta\) is the angle between \(\hat{\mathbf{r}}\) and \(d\mathbf{s}\). Furthermore, \(ds\cos\theta\) is the projection of \(d\mathbf{s}\) onto \(\hat{\mathbf{r}}\); thus, \(ds\cos\theta = dr\). That is, any displacement \(d\mathbf{s}\) along the path from point A to point B produces a change dr in the magnitude of r, the radial distance to the charge creating the field. Making these substitutions, we find that \(\mathbf{E} \cdot d\mathbf{s} = (k_e q/r^2)\,dr\); hence, the expression for the potential difference becomes

$$V_B - V_A = -k_e q \int_{r_A}^{r_B} \frac{dr}{r^2} = k_e q \left[\frac{1}{r_B} - \frac{1}{r_A}\right]. \tag{25.10}$$

The integral of \(\mathbf{E} \cdot d\mathbf{s}\) is independent of the path between points A and B, as it must be because the electric field of a point charge is conservative. Furthermore, Equation 25.10 expresses the important result that the potential difference between any two points A and B in a field created by a point charge depends only on the radial coordinates rA and rB. It is customary to choose the reference of electric potential to be zero at rA = ∞. With this reference, the electric potential created by a point charge at any distance r from the charge is

$$V = k_e \frac{q}{r}. \tag{25.11}$$

Electric potential is graphed in Figure 25.7 as a function of r, the radial distance from a positive charge in the xy plane. Consider the following analogy to gravitational potential: imagine trying to roll a marble toward the top of a hill shaped like Figure 25.7a. The gravitational force experienced by the marble is analogous to the repulsive force experienced by a positively charged object as it approaches another positively charged object. Similarly, the electric potential graph of the region surrounding a negative charge is analogous to a “hole” with respect to any approaching positively charged objects. A charged object must be infinitely distant from another charge before the surface is “flat” and has an electric potential of zero.

We obtain the electric potential resulting from two or more point charges by applying the superposition principle. That is, the total electric potential at some point P due to several point charges is the sum of the potentials due to the individual charges. For a group of point charges, we can write the total electric potential at P in the form

$$V = k_e \sum_i \frac{q_i}{r_i}, \tag{25.12}$$

where the potential is again taken to be zero at infinity and ri is the distance from the point P to the charge qi. Note that the sum in Equation 25.12 is an algebraic sum of scalars rather than a vector sum (which we would use to calculate the electric field of a group of charges). Thus, it is often much easier to evaluate V than to evaluate E. The electric potential around a dipole is illustrated in Figure 25.8.

We now consider the potential energy of a system of two charged particles. If V1 is the electric potential at a point P due to charge q1, then the work an external agent must do to bring a second charge q2 from infinity to P without acceleration is q2V1. By definition, this work equals the potential energy U of the two-particle system when the particles are separated by a distance r12 (Fig. 25.9).
Therefore, we can express the potential energy as

$$U = k_e \frac{q_1 q_2}{r_{12}}. \tag{25.13}$$

Note that if the charges are of the same sign, U is positive. This is consistent with the fact that positive work must be done by an external agent on the system to bring the two charges near one another (because like charges repel). If the charges are of opposite sign, U is negative; this means that negative work is done by an external agent against the attractive force between the unlike charges as they are brought near each other.

If more than two charged particles are in the system, we can obtain the total potential energy by calculating U for every pair of charges and summing the terms algebraically. As an example, the total potential energy of the system of three charges shown in Figure 25.10 is

$$U = k_e \left( \frac{q_1 q_2}{r_{12}} + \frac{q_1 q_3}{r_{13}} + \frac{q_2 q_3}{r_{23}} \right). \tag{25.14}$$

Physically, we can interpret this as follows: imagine that q1 is fixed at the position shown in Figure 25.10 but that q2 and q3 are at infinity. The work an external agent must do to bring q2 from infinity to its position near q1 is keq1q2/r12, which is the first term in Equation 25.14. The last two terms represent the work required to bring q3 from infinity to its position near q1 and q2. (The result is independent of the order in which the charges are transported.)
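To make Equation 25.14 concrete, here is a minimal sketch (not from the text) that sums k_e q_i q_j / r_ij over every pair of point charges. The Charge struct, the function name, and the sample charge values are illustrative assumptions.

```cpp
// Sketch: total electrostatic potential energy of a set of point charges,
// generalizing Equation 25.14 by summing k_e*q_i*q_j/r_ij over all pairs.
#include <cmath>
#include <cstdio>
#include <vector>

struct Charge {
    double q;        // charge in coulombs
    double x, y, z;  // position in meters
};

const double KE = 8.9875e9;  // Coulomb constant k_e in N*m^2/C^2

// Sum k_e*q_i*q_j/r_ij over every distinct pair (i < j).
double totalPotentialEnergy(const std::vector<Charge>& charges) {
    double u = 0.0;
    for (size_t i = 0; i < charges.size(); ++i) {
        for (size_t j = i + 1; j < charges.size(); ++j) {
            double dx = charges[i].x - charges[j].x;
            double dy = charges[i].y - charges[j].y;
            double dz = charges[i].z - charges[j].z;
            double r = std::sqrt(dx*dx + dy*dy + dz*dz);
            u += KE * charges[i].q * charges[j].q / r;  // signed, scalar sum
        }
    }
    return u;
}

int main() {
    // Three 1 uC charges at the corners of a right triangle (made-up values).
    std::vector<Charge> charges = {
        {1e-6, 0.0, 0.0, 0.0},
        {1e-6, 1.0, 0.0, 0.0},
        {1e-6, 0.0, 1.0, 0.0},
    };
    std::printf("U = %g J\n", totalPotentialEnergy(charges));
    return 0;
}
```

Because like charges are used, every pairwise term is positive, matching the sign discussion above.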
Bar graphs, line graphs, and circle graphs can be used to present data in a visual way. A bar graph displays data with vertical or horizontal bars. Bar graphs are a good way to display data that can be organized into categories. Using a bar graph, you can quickly compare the categories.

Example 1: Reading and Interpreting Bar Graphs. Use the graph to answer each question. A. Which casserole was ordered the most? Lasagna. B. About how many total orders were placed? About 180. C. About how many more tuna noodle casseroles were ordered than king ranch casseroles? About 10. D. About what percent of the total orders were for baked ziti? About 10%.

Check It Out! Example 1. Use the graph to answer each question. a. Which ingredient contains the least amount of fat? Bread; the bar for bread is the shortest. b. Which ingredients contain at least 8 grams of fat? Cheese and mayonnaise; these are the two longest bars.

A double-bar graph can be used to compare two data sets. A double-bar graph has a key to distinguish between the two sets of data.

Example 2: Reading and Interpreting Double-Bar Graphs. Use the graph to answer each question. A. Which feature received the same satisfaction rating for each SUV? Cargo; find the two bars that are the same. B. Which SUV received a better rating for mileage? SUV Y; find the longer mileage bar.

Check It Out! Example 2. Use the graph to determine which years had the same average basketball attendance. What was the average attendance for those years? 2001, 2002, and 2005; find the orange bars that are approximately the same. The average is about 13,000.

A line graph displays data using line segments. Line graphs are a good way to display data that changes over a period of time.

Example 3: Reading and Interpreting Line Graphs. Use the graph to answer each question. A. At what time was the humidity the lowest? 4 A.M.; identify the lowest point. B. During which 4-hour time period did the humidity increase the most? 12 to 4 P.M.; look for the segment with the greatest positive slope.

Check It Out! Example 3. Use the graph to estimate the difference in temperature between 4:00 A.M. and noon. About 18°F; compare the temperatures at the two times.

A double-line graph can be used to compare how two related data sets change over time. A double-line graph has a key to distinguish between the two sets of data.

Example 4: Reading and Interpreting Double-Line Graphs. Use the graph to answer each question. A. In which month did station A charge more than station B? May; look for the point where the station A line is above the station B line. B. During which month(s) did the stations charge the same for gasoline? April and July; see where the data points overlap.

Check It Out! Example 4. Use the graph to describe the general trend of the data. Prices increased from January through July or August, and then prices decreased through November.

A circle graph shows parts of a whole. The entire circle represents 100% of the data and each sector represents a percent of the total. Circle graphs are good for comparing each category of data to the whole set. Reading Math: the sections of a circle graph are called sectors.

Example 5: Reading and Interpreting Circle Graphs. Use the graph to answer the question. Which ingredients are present in equal amounts? Lemon sherbet and pineapple juice; look for same-sized sectors. (The sectors shown are 12.5%, 12.5%, 25%, and 50%.)

Check It Out! Example 5. Use the graph to determine what percent of the fruit salad is cantaloupe. Find the cups of cantaloupe and divide by the total cups of fruit.
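Example 5's percent-of-the-whole idea is easy to automate. The sketch below turns category totals into circle-graph sectors by computing each share of the whole and the matching central angle; the fruit names and cup counts are invented, since the worksheet's table isn't reproduced here.

```cpp
// Sketch: turning category totals into circle-graph sectors
// (percent of the whole and central angle), as in Example 5.
#include <cstdio>

int main() {
    const char* fruits[] = {"strawberries", "bananas", "grapes", "cantaloupe"};
    double cups[] = {4.0, 2.0, 1.0, 1.0};  // made-up data

    double total = 0.0;
    for (double c : cups) total += c;      // the whole = 100% = 360 degrees

    for (int i = 0; i < 4; ++i) {
        double percent = cups[i] / total * 100.0;  // share of the whole
        double degrees = cups[i] / total * 360.0;  // sector's central angle
        std::printf("%-13s %5.1f%%  %6.1f deg\n", fruits[i], percent, degrees);
    }
    return 0;
}
```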
Example 6A: Choosing and Creating an Appropriate Display. Use the given data to make a graph. Explain why you chose that type of graph. A bar graph is good for displaying categories that do not make up a whole. Step 1: Choose an appropriate scale and interval. The scale must include all of the data values; the scale is separated into equal parts called intervals. (The data set is titled “Flowers in an Arrangement.”) Step 2: Use the data to determine the lengths of the bars. Draw bars of equal width; the bars should not touch. Step 3: Title the graph and label the horizontal and vertical scales. A sketch of these steps in code follows below.

Example 6C: Choosing and Creating an Appropriate Display. Use the given data to make a graph. Explain why you chose that type of graph. A line graph is appropriate for this data because it will show the change over time. Step 1: Determine the scale and interval for each set of data. Time should be plotted on the horizontal axis because it is independent. (The data set is titled “County Farms.”) Step 2: Plot a point for each pair of values and connect the points using line segments. Step 3: Title the graph and label the horizontal and vertical scales.

Check It Out! Example 6. Use the given data to make a graph. Explain why you chose that type of graph. The data shows how Vera spends her time during a typical 5-day week during the school year.

Lesson Quiz: Part I. 1. Which two apartments are about the same size? Lamar Place and Candlerun. 2. In which week(s) did store B charge more than store A? Week one.

Lesson Quiz: Part II. 3. The table shows how many orders were placed for each type of muffin at a bakery in one week. Use the data to make a graph. Explain why you chose that type of graph. A circle graph is used to compare each type of muffin to total muffin orders.
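As a rough illustration of Example 6A's steps, this sketch prints a text bar graph: it picks a scale from the largest value, then draws bars proportional to the data. The flower names and counts are made up, since the original table isn't reproduced in the text.

```cpp
// Sketch: a text version of Example 6A's bar-graph steps.
#include <cstdio>

int main() {
    const char* flowers[] = {"roses", "tulips", "daisies", "lilies"};
    int counts[] = {12, 8, 15, 5};
    const int kMaxWidth = 30;  // longest bar, in characters

    // Step 1: the scale must cover the largest data value.
    int maxCount = 0;
    for (int c : counts) if (c > maxCount) maxCount = c;

    // Step 2: bar lengths are proportional to the data values.
    for (int i = 0; i < 4; ++i) {
        int width = counts[i] * kMaxWidth / maxCount;
        std::printf("%-8s |", flowers[i]);
        for (int j = 0; j < width; ++j) std::printf("#");
        std::printf(" %d\n", counts[i]);
    }
    // Step 3 (titling and labeling axes) is left to the reader.
    return 0;
}
```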
NCERT Solutions, Question Answer and Mind Map for Class 12 Chemistry Chapter 10, “Haloalkanes and Haloarenes,” is a study material package designed to help students understand the chemistry of organic compounds containing halogens, including their nomenclature, physical and chemical properties, and reactions. NCERT Solutions provide detailed explanations and answers to the questions presented in the chapter. The solutions cover all the topics in the chapter, including the classification and nomenclature of haloalkanes and haloarenes, the physical and chemical properties of these compounds, and their reactions with nucleophiles and bases. They also provide tips on how to answer different types of questions, including short answer, long answer, and multiple-choice questions. The question-answer section of the chapter covers a wide range of topics, from the reactions of haloalkanes and haloarenes with nucleophiles and bases to their uses in everyday life. It also includes questions on the effects of halogens on the physical and chemical properties of organic compounds and the environmental impact of halogenated compounds. The mind map provides a visual representation of the key topics covered in the chapter, allowing students to understand the connections between different concepts and ideas. It covers the nomenclature of haloalkanes and haloarenes, their physical and chemical properties, and their reactions with nucleophiles and bases. NCERT Solution / Notes Class 12 Chemistry Chapter 10 Haloalkanes and Haloarenes with Mind Map PDF Download

HALOALKANES AND HALOARENES

Based on structure, i.e., depending upon the number of halogen atoms in the compound, alkyl/aryl halides are classified as mono-, di-, or polyhalogen compounds. Halogen atoms are more electronegative than carbon. Therefore:
- The carbon-halogen bond of an alkyl halide is polarised.
- The halogen atom carries a partial negative charge.
- The carbon atom carries a partial positive charge.

Preparation of Alkyl Halides. Alkyl halides can be prepared by:
- free radical halogenation of alkanes,
- addition of halogen acids to alkenes, and
- replacement of the –OH group of alcohols by halogen using phosphorus halides, halogen acids, or thionyl chloride.
Aryl halides are prepared by electrophilic substitution on arenes. Fluorides and iodides are prepared by the halogen exchange method.

Organohalogens have higher boiling points than the corresponding hydrocarbons due to stronger van der Waals and dipole-dipole forces. They are only slightly soluble in water but dissolve completely in organic solvents. Because of the polarity of the carbon-halogen bond, alkyl halides undergo nucleophilic substitution, elimination, and reaction with metal atoms to form organometallic compounds. Based on their kinetics, nucleophilic substitution reactions are classified as SN1 and SN2. Chirality plays a very important role in understanding the mechanisms of these reactions: SN2 reactions are characterised by inversion of configuration, whereas SN1 reactions are characterised by racemisation.

A substitution reaction is a reaction in which a functional group of one chemical compound is replaced by another group; that is, it involves the replacement of one atom or group of a compound with another atom or group. Such reactions are of primary importance in the field of organic chemistry.

Substitution Reaction Example
For example, when CH3Cl reacts with hydroxide ion (OH−), methanol is formed:

CH3Cl + OH− → CH3OH (methanol) + Cl−

Another example is the reaction of ethanol with hydrogen iodide, which forms iodoethane along with water:

CH3CH2OH + HI → CH3CH2I + H2O

Substitution Reaction Conditions. For a substitution reaction to occur, certain conditions must be met:
- Low temperatures, such as room temperature, should be maintained.
- A strong base such as NaOH must be dilute; if the base is more concentrated, dehydrohalogenation (elimination) can take place instead.
- The reaction should be carried out in aqueous solution.

Substitution Reactions – Types. Substitution reactions are of two types: nucleophilic and electrophilic. The two mainly differ in the kind of species that attacks the original molecule: in nucleophilic reactions the attacking species is electron-rich, whereas in electrophilic reactions it is electron-deficient. A brief explanation of the two types is given below.

Nucleophiles are ions or molecules that are strongly attracted to regions of positive charge. They are either negatively charged or carry electron-rich lone pairs. Common examples of nucleophiles are cyanide ions, water, hydroxide ions, and ammonia.

Nucleophilic substitution reaction: a nucleophilic substitution reaction in organic chemistry is a reaction in which a nucleophile attacks, and becomes attached to, the positively polarised atom of another substance.

Nomenclature of Haloalkanes and Haloarenes

Initially, there was no proper system for naming compounds; mostly trivial names were used, varying with country and region. These trivial names were based on the discoverer, the nature of the compound, or its place of discovery. The system of trivial names was not standard and led to much confusion, raising the need for a standard system for the naming of organic compounds. IUPAC came up with a set of rules that are used universally for the naming of organic compounds. There are two names associated with every compound:
Common name – it differs from a trivial name in that it also follows a rule for its nomenclature.
IUPAC name – the IUPAC (International Union of Pure and Applied Chemistry) naming system is the standard naming system that chemists generally use.

Rules of Nomenclature
- Find the longest carbon chain.
- Number the longest carbon chain such that the carbon atom(s) to which the halogen(s) is/are attached get the lowest number(s).
- Multiple identical halogen atoms are labelled with the Greek numerical prefixes di, tri, tetra, etc. If more than one atom of the same halogen is attached to the same carbon atom, its position numeral is repeated.
- If different types of halogens are attached, they are named alphabetically.
- The position of the halogen atom is indicated by writing the position and name of the halogen just before the name of the parent hydrocarbon.

The Methodology of Writing the Name
- First, write the root word for the parent hydrocarbon (depending upon the number of carbon atoms in the longest carbon chain).
- Secondly, count the halogen atoms present. If several different halogens are present, arrange them alphabetically in the prefix, labelling each with its position; if the same halogen is present more than once, use the prefixes di, tri, tetra, etc.

Nomenclature of Haloalkanes. Alkyl halides are named in two ways. In the common system, the alkyl group is named first, followed by the appropriate word chloride, bromide, etc.; the common name of an alkyl halide is always written as two separate words. In the IUPAC system, alkyl halides are named as haloalkanes. For example, CH3CHClCH3 is isopropyl chloride in the common system and 2-chloropropane in the IUPAC system. The other rules followed in naming compounds are:
- Select the longest chain of carbon atoms containing the halogen atom.
- Number the chain to give the minimum number to the carbon carrying the halogen atom.
- If a multiple bond (double or triple) is present, it is given preference in numbering the carbon chain.
- The IUPAC name of any halogen derivative is always written as one word.

Nomenclature of Haloarenes
- Aryl halides are named by prefixing “halo” to the name of the parent aromatic hydrocarbon.
- If there is more than one substituent on the ring, the relative positions of the substituents are indicated by numerals.
- In the common system, the relative position of two groups is shown by the prefixes ortho, meta or para. The common and IUPAC names of some representative haloarenes are given below.

Haloarenes: Nature of the C-X Bond

Haloarenes are chemical compounds containing arenes in which one or more hydrogen atoms bonded to an aromatic ring are replaced with halogens. The nature of the C-X bond depends on both the nature of the carbon in the aromatic ring and the halogen attached. Halogens are generally denoted by “X”. Halogens are group 17 elements of high electronegativity, namely fluorine (F), chlorine (Cl), bromine (Br), iodine (I) and astatine (At); of these, fluorine has the highest electronegativity. The elements in this group are just one electron short of the nearest noble gas configuration. Carbon is a group 14 element with lower electronegativity than the halogens, because electronegativity increases across a period from left to right.

Salient points on the nature of the C-X bond in haloarenes:
- The C-X bond in haloarenes is polarised, as halogens are more electronegative than carbon. The halogen attracts the electron cloud towards itself and gains a slight negative charge, while the carbon acquires a slight positive charge.
- As halogens need only one electron to achieve the nearest noble gas configuration, only one sigma bond is formed between one carbon and one halogen atom.
- Due to the increase in atomic size from fluorine to astatine, the C-X bond length in haloarenes increases from fluorine to astatine and the bond dissociation strength decreases.
- The dipole moment depends on the difference in electronegativity between carbon and the halogen, and since the electronegativity of the halogens decreases down the group, the dipole moment also decreases. The C-Cl and C-F bonds are an exception: though the electronegativity of Cl is less than that of F, the dipole moment of a C-Cl bond is larger than that of C-F, because the much longer C-Cl bond outweighs the smaller charge separation.

SN1 and SN2 Reactions of Haloalkanes

Haloalkanes are converted into alcohols by hydroxide ion in aqueous media through SN1 and SN2 reactions.
Alcohols can efficiently be prepared by substitution of haloalkanes and sulfonic esters, which have good leaving groups. The choice of reagents and reaction conditions for the hydrolysis is important because competing elimination reactions are possible, especially at high temperatures, leading to alkenes. The hydrolysis mechanism depends on the structure of the haloalkane: primary haloalkanes typically undergo SN2 reactions, whereas tertiary haloalkanes react by an SN1 mechanism.

The SN1 reaction is a substitution nucleophilic unimolecular reaction. It is a two-step reaction. In the first step, the carbon-halogen bond breaks heterolytically, with the halogen retaining the previously shared pair of electrons. In the second step, the nucleophile reacts rapidly with the carbocation formed in the first step. This reaction is carried out in polar protic solvents such as water, alcohol and acetic acid, and it follows first-order kinetics; hence it is named substitution nucleophilic unimolecular. The two steps in more detail:
- The bond between carbon and halogen breaks and a carbocation is formed. This is the slowest (rate-determining) and reversible step, because a large amount of energy is required to break the bond; solvation by the protic solvent assists the ionisation. The rate of reaction therefore depends only on the haloalkane, not on the nucleophile.
- The nucleophile attacks the carbocation formed in the first step and the new compound is formed.
Since the rate-determining step is the formation of a carbocation, the greater the stability of the intermediate carbocation, the greater the ease with which the compound undergoes an SN1 reaction. Among alkyl halides, 3° alkyl halides undergo the SN1 reaction very fast because of the high stability of 3° carbocations; allylic and benzylic halides likewise show high reactivity towards the SN1 reaction.

The SN2 reaction follows second-order kinetics, and the rate of reaction depends on both the haloalkane and the participating nucleophile; hence it is known as the substitution nucleophilic bimolecular reaction. It is a one-step reaction with no carbocation intermediate: the nucleophile attacks the partially positive carbon as the halogen leaves, so bond making and bond breaking take place simultaneously. Unlike the SN1 mechanism, inversion of configuration is observed. Since this reaction requires the approach of the nucleophile to the carbon bearing the leaving group, the presence of bulky substituents on or near that carbon atom has a dramatic inhibiting effect; opposite to the SN1 mechanism, SN2 is favoured most by primary carbon, then secondary, then tertiary. For example, (CH3)3CBr hydrolyses by SN1 through the relatively stable tert-butyl carbocation, whereas CH3Br reacts with OH− in a single SN2 step with inversion at carbon.

Nucleophilic substitution depends on a number of factors, including:
- the effect of the solvent,
- the effect of the structure of the substrate,
- the effect of the nucleophile, and
- the effect of the leaving group.

Comparing SN1 and SN2 Reactions. The solvent in which the nucleophilic substitution reaction is carried out also influences whether an SN2 or an SN1 reaction will predominate. Before understanding how a solvent favours one reaction over another, we must understand how solvents stabilise organic molecules.
“Polyhalogen compounds: carbon compounds containing more than one halogen atom per molecule.” Polyhalogen compounds are useful in various industries and in agriculture. Some important polyhalogen compounds:

Dichloromethane (Methylene chloride). Dichloromethane is used as a:
- solvent for paint removers,
- propellant in aerosols,
- process solvent in the manufacture of drugs, and
- metal cleaning and finishing solvent.
Its harmful effects:
- It endangers the human central nervous system.
- Exposure to lower levels of methylene chloride in air can lead to slightly impaired hearing and vision.
- High levels of methylene chloride in air cause dizziness, nausea, and tingling and numbness in the fingers and toes.
- In humans, direct skin contact with methylene chloride causes intense burning and mild redness of the skin.
- Direct contact with the eyes can burn the cornea.

Trichloromethane (Chloroform)
- Chemically, chloroform is used as a solvent for fats, alkaloids, iodine and other substances.
- The major use of chloroform today is in the production of the freon refrigerant R-22.
- It was once used as a general anaesthetic in surgery but has been replaced by less toxic, safer anaesthetics such as ether.
- Chloroform is slowly oxidised by air in the presence of light to the extremely poisonous gas phosgene. It is therefore stored in closed, dark-coloured bottles which are completely filled so that air is kept out.

Triiodomethane (Iodoform)
- It was used earlier as an antiseptic, but the antiseptic properties are due to the liberation of free iodine and not to iodoform itself.
- Because of its objectionable smell, it has been replaced by other formulations containing iodine.

Tetrachloromethane (Carbon tetrachloride)
- It is produced in large quantities for use in the manufacture of refrigerants and propellants for aerosol cans.
- It is also used as feedstock in the synthesis of chlorofluorocarbons and other chemicals, in pharmaceutical manufacturing, and as a general solvent.
- Until the mid-1960s, it was also widely used as a cleaning fluid, both in industry, as a degreasing agent, and in the home, as a spot remover and fire extinguisher.
- There is evidence that exposure to carbon tetrachloride causes liver cancer in humans.
- The most common effects of exposure are dizziness, light-headedness, nausea and vomiting, which can cause permanent damage to nerve cells. In severe cases, these effects can lead rapidly to stupor, coma, unconsciousness or death. Exposure to CCl4 can also make the heart beat irregularly or stop, and the chemical may irritate the eyes on contact.
- When carbon tetrachloride is released into the air, it rises to the atmosphere and depletes the ozone layer. Depletion of the ozone layer is believed to increase human exposure to ultraviolet rays, leading to increased skin cancer, eye diseases and disorders, and possible disruption of the immune system.

Freons
- The chlorofluorocarbon compounds of methane and ethane are collectively known as freons.
- They are extremely stable, unreactive, non-toxic, non-corrosive and easily liquefiable gases.
- They are manufactured from tetrachloromethane by the Swarts reaction.
- By 1974, total freon production in the world was about 2 billion pounds annually.
- They are produced mainly for aerosol propellants, refrigeration and air conditioning purposes.
- Freon-12 (CCl2F2) is one of the most common freons in industrial use.
- Most freons, even those used in refrigeration, eventually make their way into the atmosphere, where they diffuse unchanged into the stratosphere.
- In the stratosphere, freons can initiate radical chain reactions that upset the natural ozone balance.
DDT, the first chlorinated organic insecticide, was originally prepared in 1873. However, it was not until 1939 that Paul Muller of Geigy Pharmaceuticals in Switzerland discovered the effectiveness of DDT as an insecticide. Paul Muller was awarded the Nobel Prize in Medicine and Physiology in 1948 for this discovery. The use of DDT increased enormously worldwide after World War II, primarily because of its effectiveness against the mosquitoes that spread malaria and the lice that carry typhus. Problems related to the extensive use of DDT began to appear in the late 1940s:
- Many species of insects developed resistance to DDT.
- DDT has a high toxicity towards fish.
- The chemical stability of DDT and its fat solubility compounded the problem. DDT is not metabolised very rapidly by animals; instead, it is deposited and stored in the fatty tissues. If ingestion continues at a steady rate, DDT builds up within the animal over time.
The use of DDT was banned in the United States in 1973, although it is still in use in some other parts of the world.

Tertiary alkyl halides react by the SN1 mechanism via formation of a carbocation intermediate. The reactivity order for the SN1 reaction is benzyl > allyl > 3° > 2° > 1° > CH3X. The mechanism for the reaction of tert-butyl chloride with water involves two steps: slow ionisation to the tert-butyl carbocation, followed by rapid capture of the carbocation by water.

Stereochemistry of SN1: the more stable the carbocation intermediate, the faster the SN1 reaction. Polar solvents stabilise the polar transition state, which in turn accelerates the SN1 reaction. If the initial compound is chiral, the SN1 reaction ends in racemisation of the product. Weaker bases are better leaving groups and favour the SN1 reaction.

In the case of SN2 reactions, the halide ion leaves from the front side while the nucleophile attacks from the back side; for this reason SN2 reactions are always accompanied by inversion of configuration. Thus the SN2 reaction of an optically active halide leads to the formation of the other enantiomer: optical activity is retained but with the opposite configuration.

Stereochemistry of SN2: in an SN2 reaction the stereochemistry around the carbon atom of the substrate undergoes inversion, known as Walden inversion. The rate of reaction depends on the steric bulk of the alkyl group: an increase in the length of the alkyl group decreases the rate of reaction, and alkyl branching next to the leaving group decreases the rate drastically.

Broadly, SN1 is favoured when the alkyl halide is secondary or tertiary and the solvent is polar protic (which stabilises the carbocation intermediate), whereas SN2 is favoured by primary substrates and polar aprotic solvents.

E1 reaction: it is a unimolecular reaction. The rate-determining step consists of the formation of a carbocation intermediate, and the stability of that carbocation determines the reactivity. The order of reactivity for the E1 reaction is 3° > 2° > 1°. Both the elimination and the substitution reaction involve the same reactive intermediate (the carbocation), therefore both products are formed in comparable amounts. The reaction is favoured by the entropy of reaction, so an increase in temperature favours the E1 reaction.

Stereochemistry of the E1 reaction: E1 eliminations generally lead to the more stable alkene. The rate of the E1 reaction depends only on the substrate, therefore the more stable the carbocation, the faster the reaction; the slowest step is the formation of the carbocation. Alkene formation does not require a strong base, since the base is not involved in the rate-determining step.
E2 reaction: there is no requirement on the stereochemistry of the starting material. It is a bimolecular, single-step reaction whose rate depends on the concentrations of both the base and the substrate, and its reactivity depends on both the strength of the base and the nature of the alkyl halide. The order of reactivity for the E2 reaction is 3° > 2° > 1°. This reaction proceeds at room temperature.

Stereochemistry of the E2 reaction: E2 eliminations may or may not lead to the more stable alkene. The starting material has two sp3-hybridised carbons which, on rehybridisation, form two sp2-hybridised carbons. In the transition state, the C-X bond and the C-H bond line up in the same plane, facing in anti directions to each other (anti-periplanar).
We are all well aware of COVID-19, and by now most people have seen pictures of the spike protein that forms the “handshake” interaction between virus and host cells and is the basis of two new vaccines. The COVID-19 virus is made of RNA, which manufactures the spike protein and all the other proteins that allow it to survive. What if scientists could target the RNA in the virus before that manufacturing process even begins? That’s where my work centers, around COVID-19’s viral RNA. Before the proteins that infect your cells can be built, the viral RNA, which contains the blueprints to produce proteins essential for viral replication, must be read by the ribosome, the place where proteins are put together within a cell. Parts of the viral RNA form flexible structures that regulate the ability to read it and create proteins from it. If we can develop drugs that interfere with these RNA structures forming, the virus can’t function. In my work at the National Institute of Standards and Technology (NIST), I use computer simulations to predict what these RNA structures look like and how they move to gain a better understanding of how they can be targeted by drugs. RNA structures are tricky to predict compared with protein structures for a few reasons. We have 3D maps of many more proteins (about 40 times more!) from all types of organisms, so algorithms to predict RNA structures start out with much less information about what they might look like. This is because RNA can be difficult to work with experimentally — so there are fewer tools available, at a higher cost, and generating data from experiments takes a long time. Also, RNA can be very flexible (aka “dynamic”), adding complexity to the prediction. Frequently, the same piece of RNA can form multiple structures, and we take them all into account to create an ensemble, a group viewed as a whole rather than as single, individual parts. We are looking at RNA in a section of the COVID-19 viral genome that sets up translation, which is the process of reading the RNA to create the proteins the virus needs to survive. Specifically, two short regions called Stem Loop 2 and Stem Loop 3 (SL2 and SL3) contain important parts of the RNA that interact with other parts of the RNA to control the manufacture (expression) of proteins. They are called stem loops because the genetic sequence — repeats of the “letters” C, U, A, G — pair up C-G and A-U at the beginning and end of the sequence into a helix to form a ladder-like stem, while the middle part of the remaining sequence is unpaired in a loop. The RNA in SL2 has the same genetic sequence as the SL2 in the coronavirus SARS-CoV-1, the virus that caused severe acute respiratory syndrome (SARS) in the early 2000s. So we hypothesized that the SL2 in the COVID-19 virus must adopt the same 3D structure found in the earlier SARS virus. However, computational predictions, which try to match sequences to parts of already determined RNA structures, generated 3D structures very different from the reported SARS-CoV-1 SL2 RNA. We wanted to find out if that would change using more advanced computational prediction. Using all-atom molecular dynamics simulations, where we explicitly model the RNA, water and ions that would be present in the cellular environment, we find that the RNA rearranges quickly to match the RNA loop structure from SARS-CoV-1 with the same sequence. 
This shows that all-atom molecular dynamics can adjust previous rough predictions that might not show the fine details of structure, and resolve the dynamics of the RNA. And this means we can use it to predict details for something that we don’t have any other information on — a blind prediction. For instance, SL3 is another short piece of the viral RNA that we think forms a loop. In many coronavirus genomes, there is something called a transcription regulatory leader sequence here. This transcription regulatory leader sequence helps to control protein expression. Some viruses have this piece of RNA unstructured, or flexible and able to adopt many different shapes, while other viruses, such as the one that causes COVID-19, are predicted to have this part of the RNA structured, or rigid and resistant to taking on different shapes. This RNA structure would also need to be easily disrupted for it to do its job and interact with other parts of RNA — making predicting its dynamics important if we are going to try and change them! Simulations of SL3 show us that it is very flexible and adopts many different structures. Every so often, a potassium ion binds to SL3 and stabilizes a particular structure, creating a scaffold so the region we’re looking at can be recognized by RNA from further away. This allows for the RNA in this region to have enough structure to trigger reading, while making it easy enough to eliminate structure so the reading can smoothly progress. You can imagine it as a drawbridge, which needs to be down for cars to pass over it and up for boats to pass under it — like the SL3 RNA, both orientations are essential to its job. We know that the computer simulations that we use result in models that are accurate because they are close to structures that we have actually seen in lab experiments. By having confidence in the computer simulation methods, we can extend them to other parts of the RNA. The SL3 computational prediction links the RNA structure to its known function in how transcription is controlled. Using molecular dynamics to link and predict structure and function is the goal of these computational methods. Predicting RNA structure is also important for developing drugs and vaccines where the RNA is itself the “active ingredient,” as in the Pfizer and Moderna COVID-19 vaccines. In these vaccines, the RNA needs to interact with other “ingredients” to come together in a formulation that can get the RNA into cells in the right amount of time, allowing its code to be read by cellular machinery, while remaining stable in vials in the clinic at reasonable temperatures. By understanding structure and function, we can engineer stability into drug products, optimizing for downstream manufacturing concerns such as avoiding extremely cold storage temperatures, for example. We are using computer simulations and all-atom molecular dynamics to predict how these pieces interact and how we can change the ingredients to help make stable vaccines. This expands work done under the NIST Biomanufacturing Initiative, a program that to date has largely focused on measurements and standards to support development of protein-based drugs, to RNA-based drug platforms.
Given the technical challenges, cost and time required for exploratory experimental work in the development of RNA-based drug platforms, the application of fast computational algorithms to perform the biophysical characterization that is central to our work at NIST can save stakeholders time and money, and help expedite bringing these life-saving drugs to the public.

Comment: Hi Dr. Bergonzo, your article is thought-provoking. I believe your line of inquiry would be very helpful if eventually successful. I am also interested in biomolecular structure and function, but am a novice in this field. I recently retired as Professor of Aerospace Engineering Sciences at the University of Colorado, Boulder, and hence have some time for other topics of curiosity. However, my background is fluid flows, rockets and aircraft engines, and nothing in RNA structure. Can you please suggest a book or journal article(s) (yours, hopefully) to get started? Also, is the RNA sequence of the COVID-19 virus available in digital format to play with? I am fundamentally interested in the helical and loop structures of RNA and would like to better understand how a helix or a loop is formed, and why one is preferred over the other. Thanks in advance. My e-mail is firstname.lastname@example.org.

Reply: Thank you for your compliment and comment! You can find the complete primary sequence of the virus deposited in the NCBI database, linked here: https://www.ncbi.nlm.nih.gov/nuccore/NC_045512 You can find the proposed secondary structures from the Das lab deposited on their GitHub site: https://github.com/DasLab/FARFAR2-SARS-CoV-2 And for a quick and comprehensive reference on RNA structure, I'd look at this Biopolymers review by Prof. Turner (DOI: 10.1002/bip.22294) and the famous reference "Principles of Nucleic Acid Structure" by Wolfram Saenger, 1984, New York: Springer-Verlag.
M87: Viewing the Black Hole Feeding

Scientists at the Max Planck Institute for Astrophysics and their collaborators have used the Chandra X-ray satellite to unveil the gravitational capture radius of the black hole in M87. They have measured the matter density and temperature at this radius and calculated the rate at which matter is fed into the central black hole.

It is now well established that black holes exist in the centers of most galaxies. The black hole in the center of the galaxy M87, located 50 million light-years away in the constellation of Virgo, is the largest nearby black hole. It weighs more than a billion suns and is concentrated into a region no larger than our Solar System. The mass was measured with the Hubble Space Telescope, which revealed a disk of rapidly rotating, hot gas with a central black hole at its hub.

Figure 1: On the left is the Chandra X-ray light coming from the central supermassive black hole in the core of the galaxy M87 (yellow) and from the hot interstellar gas in the galaxy (red). The X-ray light peaks at the position of the black hole. The luminosity of the X-ray light is much lower than expected from standard, efficient black-hole accretion theory. The black circle in the center indicates the black hole capture radius, where gas temperature and density have been measured. On the right, a NASA Hubble Space Telescope image of the galaxy has previously revealed a disk of rapidly rotating, hot gas with a central black hole at its hub. A brilliant jet of high-speed electrons emitted from the nucleus of M87 (the diagonal feature across the images) is also thought to be produced by the black-hole "engine". The brightest spots in this jet also emit X-ray light (and radio emission, as shown by the green contours).

While supermassive black holes in the distant Universe generate huge luminosities (in phenomena known as quasars), the vast majority of supermassive black holes in our vicinity are extremely dim. M87 is perhaps the most illustrative case of a dim supermassive black hole. The luminosity from a black hole is produced by matter falling into it through what is known as an accretion disk. Typically, a supermassive black hole in a galaxy such as M87 can capture the interstellar gas out to distances 10,000 times its radius. Thanks to its unprecedented sensitivity and resolution, the Chandra X-ray satellite has shown the location at which the gas is being pulled in towards the black hole. This has allowed scientists to measure the rate at which matter falls in from this region. These new results show that the black hole in M87 is feeding on gas at a sufficiently high rate that it should display a much larger luminosity than is observed. This suggests that nearby black holes are not dim because they are starved of fuel but rather because their accretion disks are inefficient at converting mass into energy.

Tiziana Di Matteo
About This Chapter

Who's it for? This unit of our High School Precalculus Homeschool course will benefit any student who is trying to learn to solve quadratic equations. Among those who would benefit are:
- Students who require an efficient, self-paced course of study to learn to solve quadratic equations by substitution.
- Homeschool parents looking to spend less time preparing lessons and more time teaching.
- Homeschool parents who need a precalculus curriculum that appeals to multiple learning types (visual or auditory).
- Gifted students and students with learning differences.

How it works:
- Students watch a short, fun video lesson that covers a specific unit topic.
- Students and parents can refer to the video transcripts to reinforce learning.
- Short quizzes and a Quadratics unit exam confirm understanding or identify any topics that require review.

Quadratics Unit Objectives:
- Use two binomials to solve quadratic inequalities.
- Use the quadratic formula to solve quadratic equations (see the sketch after the lesson list).
- Solve quadratic equations that aren't in standard form.
- Apply the multiplication property of zero and the greatest common factor when solving quadratics.

1. What is a Quadratic Equation? - Definition & Examples. You might be surprised to learn that quadratic equations are an important part of the world we live in. We sometimes even make use of them for our entertainment! Watch this video lesson to learn more about these equations.
2. Solving Quadratics: Assigning the Greatest Common Factor and Multiplication Property of Zero. When it comes to solving quadratics, we have several methods at our disposal. Watch this video lesson to learn one such method you can use to solve quadratics that don't have a constant term.
3. How to Use the Quadratic Formula to Solve a Quadratic Equation. When solving a quadratic equation by factoring doesn't work, the quadratic formula is here to save the day. Learn what it is and how to use it in this lesson.
4. How to Solve Quadratics That Are Not in Standard Form. It isn't always the case that your equation will be set up nicely for you to solve. In this lesson, learn how to factor or use the quadratic formula to solve quadratic equations, even when they are not in standard form.
5. Solving Quadratic Inequalities Using Two Binomials. Watch this video lesson to learn what simple steps you can take to solve quadratic inequalities using two binomials. See how it only requires a few steps to solve quadratic inequalities using this method.
6. Solving Quadratic Equations by Substitution. After watching this video lesson, you will be able to solve more complicated quadratic equations by the method of substitution. Learn how substitution makes your problem solving easier.
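As a companion to lesson 3, here is a minimal sketch of the quadratic formula in code. The coefficients are arbitrary and the program is illustrative, not part of the course materials.

```cpp
// Sketch: solving ax^2 + bx + c = 0 with the quadratic formula.
#include <cmath>
#include <cstdio>

int main() {
    double a = 1.0, b = -3.0, c = 2.0;          // x^2 - 3x + 2 = 0
    double discriminant = b * b - 4.0 * a * c;  // decides how many real roots

    if (discriminant > 0) {
        double root1 = (-b + std::sqrt(discriminant)) / (2.0 * a);
        double root2 = (-b - std::sqrt(discriminant)) / (2.0 * a);
        std::printf("Two real roots: %g and %g\n", root1, root2);
    } else if (discriminant == 0) {
        std::printf("One repeated root: %g\n", -b / (2.0 * a));
    } else {
        std::printf("No real roots (discriminant < 0)\n");
    }
    return 0;
}
```

For the sample coefficients, the program prints the roots 2 and 1, which factoring as (x - 1)(x - 2) confirms.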
Other chapters within the High School Precalculus: Homeschool Curriculum course - Graphing Linear Equations: Homeschool Curriculum - Solving & Graphing Inequalities: Homeschool Curriculum - Absolute Value Expressions: Homeschool Curriculum - Graphing Complex Numbers: Homeschool Curriculum - Solving Linear Systems: Homeschool Curriculum - Linear Models: Homeschool Curriculum - Quadratic Functions & Equations: Homeschool Curriculum - Geometry Basics: Homeschool Curriculum - Functions for Precalculus: Homeschool Curriculum - Using Function Operations: Homeschool Curriculum - Recognizing Graph Symmetry: Homeschool Curriculum - How to Graph Functions: Homeschool Curriculum - Rate of Change: Homeschool Curriculum - Polynomial Functions: Homeschool Curriculum - Higher-Degree Polynomials: Homeschool Curriculum - Difference Quotients in Precalculus: Homeschool Curriculum - Working with Rational Expressions: Homeschool Curriculum - Evaluating Logarithmic Functions: Homeschool Curriculum - Functions in Trigonometry: Homeschool Curriculum - Using Trigonometric Graphs: Homeschool Curriculum - Trigonometric Equations: Homeschool Curriculum - Basic Trigonometric Identities: Homeschool Curriculum - Laws of Sine & Cosine: Homeschool Curriculum - Graphing Piecewise Functions: Homeschool Curriculum - Vectors & Matrices: Homeschool Curriculum - Mathematical Sequences: Homeschool Curriculum - Identifying Conic Sections: Homeschool Curriculum - Parametric Equations: Homeschool Curriculum - Continuity: Homeschool Curriculum - Limits in Precalculus: Homeschool Curriculum - Groups & Sets in Algebra: Homeschool Curriculum
1. Write a program that converts a number entered in Roman numerals to decimal form. Your program should consist of a class, say romanType. An object of romanType should do the following:
a. Store the number as a Roman numeral.
b. Convert and store the number into decimal form.
c. Print the number as a Roman numeral or decimal number as requested by the user. (Write two separate functions—one to print the number as a Roman numeral and the other to print the number as a decimal number.)
The decimal values of the Roman numerals are:
M 1000
D 500
C 100
L 50
X 10
V 5
I 1
Remember, a larger numeral preceding a smaller numeral means addition, so LX is 60. A smaller numeral preceding a larger numeral means subtraction, so XL is 40. Any place in a decimal number, such as the 1s place, the 10s place, and so on, requires from zero to four Roman numerals.
d. Test your program using the following Roman numerals: MCXIV, CCCLIX, and MDCLXVI.

2. Write the definition of the class dayType that implements the day of the week in a program. The class dayType should store the day, such as Sunday for Sunday. The program should be able to perform the following operations on an object of type dayType:
a. Set the day.
b. Print the day.
c. Return the day.
d. Return the next day.
e. Return the previous day.
f. Calculate and return the day by adding certain days to the current day. For example, if the current day is Monday and we add 4 days, the day to be returned is Friday. Similarly, if today is Tuesday and we add 13 days, the day to be returned is Monday.
g. Add the appropriate constructors.

3. Write the definitions of the functions to implement the operations for the class dayType as defined in Programming Exercise 2. Also, write a program to test various operations on this class.

2. In this chapter, the class dateType was designed to implement the date in a program, but the member function setDate and the constructor do not check whether the date is valid before storing the date in the data members. Rewrite the definitions of the function setDate and the constructor so that the values for the month, day, and year are checked before storing the date into the data members. Add a member function, isLeapYear, to check whether a year is a leap year. Moreover, write a test program to test your class.

6. In Programming Exercise 2, the class dateType was designed and implemented to keep track of a date, but it has very limited operations. Redefine the class dateType so that it can perform the following operations on a date in addition to the operations already defined:
a. Set the month.
b. Set the day.
c. Set the year.
d. Return the month.
e. Return the day.
f. Return the year.
g. Test whether the year is a leap year.
h. Return the number of days in the month. For example, if the date is 3-12-2011, the number of days to be returned is 31 because there are 31 days in March.
i. Return the number of days passed in the year. For example, if the date is 3-18-2011, the number of days passed in the year is 77. Note that the number of days returned also includes the current day.
j. Return the number of days remaining in the year. For example, if the date is 3-18-2011, the number of days remaining in the year is 288.
k. Calculate the new date by adding a fixed number of days to the date. For example, if the date is 3-18-2011 and the days to be added are 25, the new date is 4-12-2011.

20. a.
In Programming Exercise 1 in Chapter 1, we defined a class romanType to implement Roman numerals in a program. In that exercise, we also implemented a function, romanToDecimal, to convert a Roman numeral into its equivalent decimal number. Modify the definition of the class romanType so that the data members are declared as protected. Use the class string to manipulate the strings. Furthermore, overload the stream insertion and stream extraction operators for easy input and output. The stream insertion operator outputs the Roman numeral in the Roman format. Also, include a member function, decimalToRoman, that converts the decimal number (the decimal number must be a positive integer) to an equivalent Roman numeral format. Write the definition of the member function decimalToRoman. For simplicity, we assume that only the letter I can appear in front of another letter and that it appears only in front of the letters V and X. For example, 4 is represented as IV, 9 is represented as IX, 39 is represented as XXXIX, and 49 is represented as XXXXIX. Also, 40 will be represented as XXXX, 190 will be represented as CLXXXX, and so on.
b. Derive a class extRomanType from the class romanType to do the following: in the class extRomanType, overload the arithmetic operators +, -, *, and / so that arithmetic operations can be performed on Roman numerals. Also, overload the pre- and post-increment and decrement operators as member functions of the class extRomanType. To add (subtract, multiply, or divide) Roman numerals, add (subtract, multiply, or divide, respectively) their decimal representations and then convert the result to the Roman numeral format. For subtraction, if the first number is smaller than the second, output a message saying, "Because the first number is smaller than the second, the numbers cannot be subtracted." Similarly, for division, the numerator must be larger than the denominator. Use similar conventions for the increment and decrement operators.
c. Write the definitions of the functions to overload the operators described in part b.
d. Write a program to test your class extRomanType.
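A minimal sketch of the conversion at the heart of Exercise 1 follows. It is not a full romanType class; the free-function design and names are one possible choice. It walks the numeral left to right, subtracting whenever a smaller value precedes a larger one (so XL = 40, LX = 60).

```cpp
// Sketch: Roman numeral to decimal conversion (Exercise 1, part b).
#include <cstdio>
#include <string>

int romanToDecimal(const std::string& roman) {
    auto value = [](char ch) {
        switch (ch) {
            case 'M': return 1000; case 'D': return 500;
            case 'C': return 100;  case 'L': return 50;
            case 'X': return 10;   case 'V': return 5;
            case 'I': return 1;    default:  return 0;
        }
    };
    int total = 0;
    for (size_t i = 0; i < roman.size(); ++i) {
        int v = value(roman[i]);
        // A smaller numeral before a larger one is subtracted.
        if (i + 1 < roman.size() && v < value(roman[i + 1]))
            total -= v;
        else
            total += v;
    }
    return total;
}

int main() {
    // The test values suggested in part d of Exercise 1.
    for (const char* r : {"MCXIV", "CCCLIX", "MDCLXVI"}) {
        std::printf("%s = %d\n", r, romanToDecimal(r));
    }
    return 0;
}
```

Expected output: MCXIV = 1114, CCCLIX = 359, MDCLXVI = 1666.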
Sequences and Series Teacher Resources

Find Sequences and Series educational ideas and activities.

In this sequences and series worksheet, students complete seven activities covering arithmetic sequences, geometric sequences, and series. Fully worked examples, formulas, and explanations are included. Learners calculate the nth term of a sequence defined by an algebraic expression. They examine geometric sequences and series, relate geometric sequences to their explicit forms, find the partial sums of a sequence in a table, and determine whether a geometric series converges or diverges using a graphing calculator.

Learners investigate sequences and series numerically, graphically, and symbolically. In this sequences and series lesson, students use their TI-89 to determine whether a series is convergent. Learners find the terms of a sequence and series and graph them. Students use summation notation to determine the sum of a sequence.

In this sequence worksheet, students determine the sum of numbers in a sequence. They evaluate an arithmetic sequence. This seven-page worksheet contains detailed notes, examples, and 10 problems.

Students find patterns in a sequence. In this sequences and series lesson, students use their calculator to find the sequence of partial sums. They graph functions and explore convergent series. Students approximate alternating series.

In this algebra instructional activity, students solve problems using the geometric mean for sequences and series. They find the next number in the pattern using the formula for geometric means. There are 23 problems to be solved.

In this Algebra II worksheet, 11th graders review sequences and series. Students solve problems in order to determine the caption for a picture.

In this sequencing worksheet, students solve problems with sequences and series. They derive formulas to solve word problems and receive specific points for doing so. There are 8 questions with an answer key.

In this sequences and series activity, 10th graders solve and complete 13 different problems that include infinite geometric series. First, they find the sum of each infinite geometric series. Then, students find the first three terms of each geometric series. They also express each decimal as a rational number in a given form.

In this sequences and series instructional activity, students find the sum of a given geometric series. They use sigma notation to express a geometric series. This one-page instructional activity contains 12 problems.

Students explore the concept of arithmetic series and sequences. In this arithmetic series and sequence instructional activity, students compare arithmetic sequences and series and solve arithmetic series and sequence word problems.

Students identify patterns found in nature. In this algebra lesson, students model situations in nature using fractals. They investigate biological geometric structures and draw conclusions based on geometric concepts.

In this algebra worksheet, students perform operations using arithmetic and geometric sequences and series. They find the sum, missing terms, and repeated terms. There are 22 problems.

Students read Nina Bonita by Ana Maria Machado. In this reading comprehension/geography lesson, students recall various parts of the story and create a map of where the rabbit traveled throughout the story. They participate in group discussions about the setting and observations of racial intolerance, which is brought up in the story.
- This lesson includes ideas for extension activities. Students explain the purpose of a photo essay, sequence a series of events, and explain the format in creating a photo essay, which includes a caption for each picture. They complete a photo essay as a creative activity.
- Learners find common ratios of geometric sequences on a spreadsheet. Using this data, they create scatter plots of the sequences to determine how each curve is related to the value of the common ratio. They will consider whether series converge or diverge and view graphs of series to determine the behavior as n increases.
- In this sequences and series worksheet, students solve 18 multiple choice problems. Students find the next terms in a sequence, the sum of a series, determine if a series is convergent, expand binomials, etc.
- In this geometric sequence worksheet, students find the next two terms of a geometric sequence. They identify the nth term and complete a sequence statement. Next, they find the geometric means of a given sequence. This one-page worksheet contains 15 problems.
- Learners use an Excel database to explore sequences and series. They examine how to calculate term positions and sigma notations. Students use Excel to calculate the values, and they print the pages for grading.
- In this sequences and series worksheet, 11th graders solve and complete 10 different problems that include identifying various sequences and series. First, they find the first six terms of each sequence. Then, students determine the first three iterations of each function listed, using the given initial value.
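Several of the resources above turn on the same two formulas: the partial sum of a geometric series with first term a and common ratio r is S_n = a(1 - r^n)/(1 - r), and when |r| < 1 these partial sums converge to a/(1 - r). The following minimal sketch (with arbitrary example values, not taken from any of the worksheets) prints the partial sums so the convergence is visible:

#include <iostream>

// Partial sums of the geometric series a + ar + ar^2 + ...
// For |r| < 1 the sums approach a/(1 - r); here a = 1 and r = 0.5,
// so the limit is 2.
int main()
{
    const double a = 1.0, r = 0.5;
    double sum = 0.0, term = a;
    for (int n = 1; n <= 10; n++)
    {
        sum += term;    // S_n = S_(n-1) + a*r^(n-1)
        term *= r;
        std::cout << "S_" << n << " = " << sum << '\n';
    }
    std::cout << "a/(1 - r) = " << a / (1.0 - r) << '\n';
    return 0;
}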
ETYM Latin cometes, cometa, from Greek, comet, prop. long-haired, from koman to wear long hair, from kome hair. (Astronomy) A relatively small celestial body consisting of a frozen mass that travels around the Sun in a highly elliptical orbit. Small, icy body orbiting the Sun, usually on a highly elliptical path. A comet consists of a central nucleus a few miles across, and has been likened to a dirty snowball because it consists mostly of ice mixed with dust. As the comet approaches the Sun the nucleus heats up, releasing gas and dust which form a tenuous coma, up to 100,000 km/60,000 mi wide, around the nucleus. Gas and dust stream away from the coma to form one or more tails, which may extend for millions of miles. Comets are believed to have been formed at the birth of the Solar System. Billions of them may reside in a halo (the Oort cloud) beyond Pluto. The gravitational effect of passing stars pushes some toward the Sun, where they eventually become visible from Earth. Most comets swing around the Sun and return to distant space, never to be seen again for thousands or millions of years, although some, called periodic comets, have their orbits altered by the gravitational pull of the planets so that they reappear every 200 years or less. Of the 800 or so comets whose orbits have been calculated, about 160 are periodic. The brightest is Halley’s comet. The one with the shortest known period is Encke’s comet, which orbits the Sun every 3.3 years. A dozen or more comets are discovered every year, some by amateur astronomers. A vast amount of data concerning comets and their orbits has been accumulated, much of it consistent with the hypothesis that they have their origin in a reservoir of appropriate material on the confines of the solar system. This material and the lumps into which it accretes move round the Sun in long-period orbits, relatively few of which have perihelion distances of less than 50 astronomical units. Every now and again, however, some of these orbits are perturbed, either by mutual action or by the action of passing stars, so that lumps traveling in them will pass sufficiently close to the Sun to become visible as comets. Comets are divided into two classes according to their orbital period, those with periods less than 200 years being known as short-period, the remainder as long-period comets. The orbits of the long-period comets are inclined at all angles to the ecliptic, and about equal numbers of them are direct and retrograde. The short-period comets, on the other hand, move mainly in direct orbits that lie close to the mean plane of the solar system and have aphelion distances close to the orbit of Jupiter. Such comets are sometimes referred to as belonging to Jupiter’s family, and there are similar, but less numerous, families associated with the other major planets. There is little doubt that they are long-period comets that have been captured. B. G. Marsden’s Catalog of Cometary Orbits compiled in 1972 lists 97 short-period comets and 503 long-period ones. Most comets are named for their discoverers and denoted by letters in order of discovery each year, but are subsequently numbered in order of their perihelion passage; e.g. the Arend-Roland Comet was 1956h but became 1957 III. Some of the orbits of the long-period comets so closely resemble each other that there is little doubt that they are traced out by fragments of a former bigger comet that broke into pieces as it passed round the Sun.
One of the best known of these groups is that of the bright Sun-grazing comets, which includes the Great Comet of 1668, 1843 I, 1880 I, 1882 II, 1887 I, 1945 VII, 1963 V, 1965 VIII, 1970 VI, and possibly one or two others. These comets passed through the solar corona and near perihelion were bright enough to be observed in daylight. Few of the short-period comets show conspicuous tails, presumably because they have lost most of their volatile material. Some disintegrate and are not seen again, though some of the fragments may cause periodic meteor showers. A comet ("tailed star") moves around the Sun on a parabolic or elliptical path (of great eccentricity), such that it is difficult to distinguish between the elliptical and parabolic paths. (from Greek)
Parity of zero

In mathematics, the parity of zero is even: zero is an even number. In other words, its parity—the quality of an integer being even or odd—is even. This can be easily verified based on the definition of "even": it is an integer multiple of 2, specifically 0 × 2. As a result, zero shares all the properties that characterize even numbers: for example, 0 is neighbored on both sides by odd numbers; any decimal integer has the same parity as its last digit, so, since 10 is even, 0 will be even; if y is even then y + x has the same parity as x; and x and 0 + x always have the same parity. Zero also fits into the patterns formed by other even numbers. The parity rules of arithmetic, such as even − even = even, require 0 to be even. Zero is the additive identity element of the group of even integers, and it is the starting case from which other even natural numbers are recursively defined. Applications of this recursion, from graph theory to computational geometry, rely on zero being even. Not only is 0 divisible by 2, it is divisible by every power of 2, which is relevant to the binary numeral system used by computers. In this sense, 0 is the "most even" number of all. Among the general public, the parity of zero can be a source of confusion. In reaction time experiments, most people are slower to identify 0 as even than 2, 4, 6, or 8. Some teachers—and some children in mathematics classes—think that zero is odd, or both even and odd, or neither. Researchers in mathematics education propose that these misconceptions can become learning opportunities. Studying equalities like 0 × 2 = 0 can address students' doubts about calling 0 a number and using it in arithmetic. Class discussions can lead students to appreciate the basic principles of mathematical reasoning, such as the importance of definitions. Evaluating the parity of this exceptional number is an early example of a pervasive theme in mathematics: the abstraction of a familiar concept to an unfamiliar setting. The standard definition of "even number" can be used to directly prove that zero is even. A number is called "even" if it is an integer multiple of 2. As an example, the reason that 10 is even is that it equals 5 × 2. In the same way, zero is an integer multiple of 2, namely 0 × 2, so zero is even. It is also possible to explain why zero is even without referring to formal definitions. The following explanations make sense of the idea that zero is even in terms of fundamental number concepts. From this foundation, one can provide a rationale for the definition itself—and its applicability to zero. Given a set of objects, one uses a number to describe how many objects are in the set. Zero is the count of no objects; in more formal terms, it is the number of objects in the empty set. The concept of parity is used for making groups of two objects. If the objects in a set can be marked off into groups of two, with none left over, then the number of objects is even. If an object is left over, then the number of objects is odd. The empty set contains zero groups of two, and no object is left over from this grouping, so zero is even. These ideas can be illustrated by drawing objects in pairs. It is difficult to depict zero groups of two, or to emphasize the nonexistence of a leftover object, so it helps to draw other groupings and to compare them with zero. For example, in the group of five objects, there are two pairs. More importantly, there is a leftover object, so 5 is odd.
In the group of four objects, there is no leftover object, so 4 is even. In the group of just one object, there are no pairs, and there is a leftover object, so 1 is odd. In the group of zero objects, there is no leftover object, so 0 is even. There is another concrete definition of evenness: if the objects in a set can be placed into two groups of equal size, then the number of objects is even. This definition is equivalent to the first one. Again, zero is even because the empty set can be divided into two groups of zero items each. Numbers can also be visualized as points on a number line. When even and odd numbers are distinguished from each other, their pattern becomes obvious, especially if negative numbers are included: the two classes alternate along the entire line. With the introduction of multiplication, parity can be approached in a more formal way using arithmetic expressions. Every integer is either of the form (2 × ▢) + 0 or (2 × ▢) + 1; the former numbers are even and the latter are odd. For example, 1 is odd because 1 = (2 × 0) + 1, and 0 is even because 0 = (2 × 0) + 0. Making a table of these facts then reinforces the number line picture above. The precise definition of a mathematical term, such as "even" meaning "integer multiple of two", is ultimately a convention. Unlike "even", some mathematical terms are purposefully constructed to exclude trivial or degenerate cases. Prime numbers are a famous example. Before the 20th century, definitions of primality were inconsistent, and significant mathematicians such as Goldbach, Lambert, Legendre, Cayley, and Kronecker wrote that 1 was prime. The modern definition of "prime number" is "positive integer with exactly 2 factors", so 1 is not prime. This definition can be rationalized by observing that it more naturally suits mathematical theorems that concern the primes. For example, the fundamental theorem of arithmetic is easier to state when 1 is not considered prime. It would be possible to similarly redefine the term "even" in a way that no longer includes zero. However, in this case, the new definition would make it more difficult to state theorems concerning the even numbers. Already the effect can be seen in the algebraic rules governing even and odd numbers. The most relevant rules concern addition, subtraction, and multiplication:
- even ± even = even
- odd ± odd = even
- even × integer = even
Inserting appropriate values into the left sides of these rules, one can produce 0 on the right sides:
- 2 − 2 = 0
- −3 + 3 = 0
- 4 × 0 = 0
The above rules would therefore be incorrect if zero were not even. At best they would have to be modified. For example, one test study guide asserts that even numbers are characterized as integer multiples of two, but zero is "neither even nor odd". Accordingly, the guide's rules for even and odd numbers contain exceptions:
- even ± even = even (or zero)
- odd ± odd = even (or zero)
- even × nonzero integer = even
Making an exception for zero in the definition of evenness forces one to make such exceptions in the rules for even numbers. From another perspective, taking the rules obeyed by positive even numbers and requiring that they continue to hold for integers forces the usual definition and the evenness of zero. Countless results in number theory invoke the fundamental theorem of arithmetic and the algebraic properties of even numbers, so the above choices have far-reaching consequences.
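As a quick concrete check of the rules above, the following sketch (illustrative only, not from the original text) applies the standard definition of "even"—divisible by 2 with no remainder—to the three identities that each produce 0 on the right-hand side:

#include <iostream>

// The standard evenness test; note that 0 % 2 == 0, so zero tests even.
bool isEven(int n)
{
    return n % 2 == 0;
}

int main()
{
    std::cout << std::boolalpha;
    std::cout << isEven(2 - 2) << '\n';    // even - even: true
    std::cout << isEven(-3 + 3) << '\n';   // odd + odd: true
    std::cout << isEven(4 * 0) << '\n';    // even * integer: true
    return 0;
}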
For example, the fact that positive integers have unique factorizations means that one can determine whether a number has an even or odd number of distinct prime factors. Since 1 is not prime, nor does it have prime factors, it is a product of 0 distinct primes; since 0 is an even number, 1 has an even number of distinct prime factors. This implies that the Möbius function takes the value μ(1) = 1, which is necessary for it to be a multiplicative function and for the Möbius inversion formula to work. A number n is odd if there is an integer k such that n = 2k + 1. One way to prove that zero is not odd is by contradiction: if 0 = 2k + 1 then k = −1/2, which is not an integer. Since zero is not odd, if an unknown number is proven to be odd, then it cannot be zero. This apparently trivial observation can provide a convenient and revealing proof explaining why an odd number is nonzero. A classic result of graph theory states that a graph of odd order (having an odd number of vertices) always has at least one vertex of even degree. (The statement itself requires zero to be even: the empty graph has an even order, and an isolated vertex has an even degree.) In order to prove the statement, it is actually easier to prove a stronger result: any odd-order graph has an odd number of even degree vertices. The appearance of this odd number is explained by a still more general result, known as the handshaking lemma: any graph has an even number of vertices of odd degree. Finally, the even number of odd vertices is naturally explained by the degree sum formula. Sperner's lemma is a more advanced application of the same strategy. The lemma states that a certain kind of coloring on a triangulation of a simplex has a subsimplex that contains every color. Rather than directly construct such a subsimplex, it is more convenient to prove that there exists an odd number of such subsimplices through an induction argument. A stronger statement of the lemma then explains why this number is odd: it naturally breaks down as (n + 1) + n when one considers the two possible orientations of a simplex. The fact that zero is even, together with the fact that even and odd numbers alternate, is enough to determine the parity of every other natural number. This idea can be formalized into a recursive definition of the set of even natural numbers:
- 0 is even.
- (n + 1) is even if and only if n is not even.
This definition has the conceptual advantage of relying only on the minimal foundations of the natural numbers: the existence of 0 and of successors. As such, it is useful for computer logic systems such as LF and the Isabelle theorem prover. With this definition, the evenness of zero is not a theorem but an axiom. Indeed, "zero is an even number" may be interpreted as one of the Peano axioms, of which the even natural numbers are a model. A similar construction extends the definition of parity to transfinite ordinal numbers: every limit ordinal is even, including zero, and successors of even ordinals are odd. The classic point in polygon test from computational geometry applies the above ideas. To determine if a point lies within a polygon, one casts a ray from infinity to the point and counts the number of times the ray crosses the edge of the polygon. The crossing number is even if and only if the point is outside the polygon. This algorithm works because if the ray never crosses the polygon, then its crossing number is zero, which is even, and the point is outside.
Every time the ray does cross the polygon, the crossing number alternates between even and odd, and the point at its tip alternates between outside and inside. In graph theory, a bipartite graph is a graph whose vertices are split into two colors, such that neighboring vertices have different colors. If a connected graph has no odd cycles, then a bipartition can be constructed by choosing a base vertex v and coloring every vertex black or white, depending on whether its distance from v is even or odd. Since the distance between v and itself is 0, and 0 is even, the base vertex is colored differently from its neighbors, which lie at a distance of 1. In abstract algebra, the even integers form various algebraic structures that require the inclusion of zero. The fact that the additive identity (zero) is even, together with the evenness of sums and additive inverses of even numbers and the associativity of addition, means that the even integers form a group. Moreover, the group of even integers under addition is a subgroup of the group of all integers; this is an elementary example of the subgroup concept. The earlier observation that the rule "even − even = even" forces 0 to be even is part of a general pattern: any nonempty subset of an additive group that is closed under subtraction must be a subgroup, and in particular, must contain the identity. Since the even integers form a subgroup of the integers, they partition the integers into cosets. These cosets may be described as the equivalence classes of the following equivalence relation: x ~ y if (x − y) is even. Here, the evenness of zero is directly manifested as the reflexivity of the binary relation ~. There are only two cosets of this subgroup—the even and odd numbers—so it has index 2. Analogously, the alternating group is a subgroup of index 2 in the symmetric group on n letters. The elements of the alternating group, called even permutations, are the products of even numbers of transpositions. The identity map, an empty product of no transpositions, is an even permutation since zero is even; it is the identity element of the group. The rule "even × integer = even" means that the even numbers form an ideal in the ring of integers, and the above equivalence relation can be described as equivalence modulo this ideal. In particular, even integers are exactly those integers k where k ≡ 0 (mod 2). This formulation is useful for investigating integer zeroes of polynomials. There is a sense in which some multiples of 2 are "more even" than others. Multiples of 4 are called doubly even, since they can be divided by 2 twice. Not only is zero divisible by 4, zero has the unique property of being divisible by every power of 2, so it surpasses all other numbers in "evenness". One consequence of this fact appears in the bit-reversed ordering of integer data types used by some computer algorithms, such as the Cooley–Tukey fast Fourier transform. This ordering has the property that the farther to the left the first 1 occurs in a number's binary expansion, or the more times it is divisible by 2, the sooner it appears. Zero's bit reversal is still zero; it can be divided by 2 any number of times, and its binary expansion does not contain any 1s, so it always comes first. Although 0 is divisible by 2 more times than any other number, it is not straightforward to quantify exactly how many times that is. For any nonzero integer n, one may define the 2-adic order of n to be the number of times n is divisible by 2. 
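A small sketch makes this definition concrete (illustrative only; as the next paragraph explains, the zero case needs special treatment):

#include <iostream>

// The 2-adic order of a nonzero integer n: the number of times n can be
// divided by 2. For n = 0 this loop would never terminate -- 0 can always
// be divided by 2 again -- which is exactly the problem discussed below.
int order2(int n)
{
    int order = 0;
    while (n % 2 == 0)
    {
        n /= 2;
        order++;
    }
    return order;
}

int main()
{
    std::cout << order2(12) << '\n';  // 2, since 12 = 2 * 2 * 3
    std::cout << order2(7) << '\n';   // 0, odd numbers are divisible by 2 zero times
    return 0;
}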
This description does not work for 0; no matter how many times it is divided by 2, it can always be divided by 2 again. Rather, the usual convention is to set the 2-order of 0 to be infinity as a special case. This convention is not peculiar to the 2-order; it is one of the axioms of an additive valuation in higher algebra. Children's beliefs about the parity of zero shift as they progress from Year 1 to Year 6 of the English education system. The data come from Len Frobisher, who conducted a pair of surveys of English schoolchildren. Frobisher was interested in how knowledge of single-digit parity translates to knowledge of multiple-digit parity, and zero figures prominently in the results. In a preliminary survey of nearly 400 seven-year-olds, 45% chose even over odd when asked the parity of zero. A follow-up investigation offered more choices: neither, both, and don't know. This time the number of children in the same age range identifying zero as even dropped to 32%. Success in deciding that zero is even initially shoots up and then levels off at around 50% in Years 3 to 6. For comparison, the easiest task, identifying the parity of a single digit, levels off at about 85% success. In interviews, Frobisher elicited the students' reasoning. One fifth-year decided that 0 was even because it was found on the 2 times table. A couple of fourth-years realized that zero can be split into equal parts. Another fourth-year reasoned "1 is odd and if I go down it's even." The interviews also revealed the misconceptions behind incorrect responses. A second-year was "quite convinced" that zero was odd, on the basis that "it is the first number you count". A fourth-year referred to 0 as "none" and thought that it was neither odd nor even, since "it's not a number". In another study, Annie Keith observed a class of 15 second-graders who convinced each other that zero was an even number based on even-odd alternation and on the possibility of splitting a group of zero things into two equal groups. More in-depth investigations were conducted by Esther Levenson, Pessia Tsamir, and Dina Tirosh, who interviewed a pair of sixth-grade students in the USA who were performing highly in their mathematics class. One student preferred deductive explanations of mathematical claims, while the other preferred practical examples. Both students initially thought that 0 was neither even nor odd, for different reasons. Levenson et al. demonstrated how the students' reasoning reflected their concepts of zero and division.
Claims made by students:
- "Zero is not even or odd."
- "Zero could be even."
- "Zero is not odd."
- "Zero has to be an even."
- "Zero is not an even number."
- "Zero is always going to be an even number."
- "Zero is not always going to be an even number."
- "Zero is even."
- "Zero is special."
Deborah Loewenberg Ball analyzed US third grade students' ideas about even and odd numbers and zero, which they had just been discussing with a group of fourth-graders. The students discussed the parity of zero, the rules for even numbers, and how mathematics is done. The claims about zero took many forms, as seen in the list above. Ball and her coauthors argued that the episode demonstrated how students can "do mathematics in school", as opposed to the usual reduction of the discipline to the mechanical solution of exercises. One of the themes in the research literature is the tension between students' concept images of parity and their concept definitions.
Levenson et al.'s sixth-graders both defined even numbers as multiples of 2 or numbers divisible by 2, but they were initially unable to apply this definition to zero, because they were unsure how to multiply or divide zero by 2. The interviewer eventually led them to conclude that zero was even; the students took different routes to this conclusion, drawing on a combination of images, definitions, practical explanations, and abstract explanations. In another study, David Dickerson and Damien Pitman examined the use of definitions by five advanced undergraduate mathematics majors. They found that the undergraduates were largely able to apply the definition of "even" to zero, but they were still not convinced by this reasoning, since it conflicted with their concept images. Researchers of mathematics education at the University of Michigan have included the true-or-false prompt "0 is an even number" in a database of over 250 questions designed to measure teachers' content knowledge. For them, the question exemplifies "common knowledge ... that any well-educated adult should have", and it is "ideologically neutral" in that the answer does not vary between traditional and reform mathematics. In a 2000–2004 study of 700 primary teachers in the United States, overall performance on these questions significantly predicted improvements in students' standardized test scores after taking the teachers' classes. In a more in-depth 2008 study, the researchers found a school where all of the teachers thought that zero was neither odd nor even, including one teacher who was exemplary by all other measures. The misconception had been spread by a math coach in their building. It is uncertain how many teachers harbor misconceptions about zero. The Michigan studies did not publish data for individual questions. Betty Lichtenberg, an associate professor of mathematics education at the University of South Florida, in a 1972 study reported that when a group of prospective elementary school teachers were given a true-or-false test including the item "Zero is an even number", they found it to be a "tricky question", with about two thirds answering "False". Mathematically, proving that zero is even is a simple matter of applying a definition, but more explanation is needed in the context of education. One issue concerns the foundations of the proof; the definition of "even" as "integer multiple of 2" is not always appropriate. A student in the first years of primary education may not yet have learned what "integer" or "multiple" means, much less how to multiply with 0. Additionally, stating a definition of parity for all integers can seem like an arbitrary conceptual shortcut if the only even numbers investigated so far have been positive. It can help to acknowledge that as the number concept is extended from positive integers to include zero and negative integers, number properties such as parity are also extended in a nontrivial way. Adults who do believe that zero is even can nevertheless be unfamiliar with thinking of it as even, enough so to measurably slow them down in a reaction time experiment. Stanislas Dehaene, a pioneer in the field of numerical cognition, led a series of such experiments in the early 1990s. A numeral is flashed to the subject on a monitor, and a computer records the time it takes the subject to push one of two buttons to identify the number as odd or even. The results showed that 0 was slower to process than other even numbers. 
Some variations of the experiment found delays as long as 60 milliseconds or about 10% of the average reaction time—a small difference but a significant one. Dehaene's experiments were not designed specifically to investigate 0 but to compare competing models of how parity information is processed and extracted. The most specific model, the mental calculation hypothesis, suggests that reactions to 0 should be fast; 0 is a small number, and it is easy to calculate 0 × 2 = 0. (Subjects are known to compute and name the result of multiplication by zero faster than multiplication of nonzero numbers, although they are slower to verify proposed results like 2 × 0 = 0.) The results of the experiments suggested that something quite different was happening: parity information was apparently being recalled from memory along with a cluster of related properties, such as being prime or a power of two. Both the sequence of powers of two and the sequence of positive even numbers 2, 4, 6, 8, ... are well-distinguished mental categories whose members are prototypically even. Zero belongs to neither list, hence the slower responses. Repeated experiments have shown a delay at zero for subjects with a variety of ages and national and linguistic backgrounds, confronted with number names in numeral form, spelled out, and spelled in a mirror image. Dehaene's group did find one differentiating factor: mathematical expertise. In one of their experiments, students in the École Normale Supérieure were divided into two groups: those in literary studies and those studying mathematics, physics, or biology. The slowing at 0 was "essentially found in the [literary] group", and in fact, "before the experiment, some L subjects were unsure whether 0 was odd or even and had to be reminded of the mathematical definition". This strong dependence on familiarity again undermines the mental calculation hypothesis. The effect also suggests that it is inappropriate to include zero in experiments where even and odd numbers are compared as a group. As one study puts it, "Most researchers seem to agree that zero is not a typical even number and should not be investigated as part of the mental number line." Some of the contexts where the parity of zero makes an appearance are purely rhetorical. The issue provides material for Internet message boards and ask-the-expert websites. Linguist Joseph Grimes muses that asking "Is zero an even number?" to married couples is a good way to get them to disagree. People who think that zero is neither even nor odd may use the parity of zero as proof that every rule has a counterexample, or as an example of a trick question. Around the year 2000, media outlets noted a pair of unusual milestones: "1999/11/19" was the last calendar date composed of all odd digits that would occur for a very long time, and that "2000/02/02" was the first all-even date to occur in a very long time. Since these results make use of 0 being even, some readers disagreed with the idea. In standardized tests, if a question asks about the behavior of even numbers, it might be necessary to keep in mind that zero is even. Official publications relating to the GMAT and GRE tests both state that 0 is even. The parity of zero is relevant to odd–even rationing, in which cars may drive or purchase gasoline on alternate days, according to the parity of the last digit in their license plates. 
Half of the numbers in a given range end in 0, 2, 4, 6, 8 and the other half in 1, 3, 5, 7, 9, so it makes sense to include 0 with the other even numbers. However, in 1977, a Paris rationing system led to confusion: on an odd-only day, the police avoided fining drivers whose plates ended in 0, because they did not know whether 0 was even. To avoid such confusion, the relevant legislation sometimes stipulates that zero is even; such laws have been passed in New South Wales and Maryland. On U.S. Navy vessels, even-numbered compartments are found on the port side, but zero is reserved for compartments that intersect the centerline. That is, the numbers read 6-4-2-0-1-3-5 from port to starboard. In the game of roulette, the number 0 does not count as even or odd, giving the casino an advantage on such bets. Similarly, the parity of zero can affect payoffs in prop bets when the outcome depends on whether some randomized number is odd or even, and it turns out to be zero. The game of " odds and evens" is also affected: if both players cast zero fingers, the total number of fingers is zero, so the even player wins. One teachers' manual suggests playing this game as a way to introduce children to the concept that 0 is divisible by 2. - Arnold 1919, p. 21 "By the same test zero surpasses all numbers in 'evenness.'"; Wong 1997, p. 479 "Thus, the integer b000⋯000 = 0 is the most 'even.' - Penner 1999, p. 34: Lemma B.2.2, The integer 0 is even and is not odd. Penner uses the mathematical symbol ∃, the existential quantifier, to state the proof: "To see that 0 is even, we must prove that ∃k (0 = 2k), and this follows from the equality 0 = 2 ⋅ 0." - Ball, Lewis & Thames (2008, p. 15) discuss this challenge for the elementary-grades teacher, who wants to give mathematical reasons for mathematical facts, but whose students neither use the same definition, nor would understand it if it were introduced. - Compare Lichtenberg (1972, p. 535) Fig. 1 - Lichtenberg 1972, pp. 535–536 "...numbers answer the question How many? for the set of objects ... zero is the number property of the empty set ... If the elements of each set are marked off in groups of two ... then the number of that set is an even number." - Lichtenberg 1972, pp. 535–536 "Zero groups of two stars are circled. No stars are left. Therefore, zero is an even number." - Dickerson & Pitman 2012, p. 191. - Lichtenberg 1972, p. 537; compare her Fig. 3. "If the even numbers are identified in some special way ... there is no reason at all to omit zero from the pattern." - Lichtenberg 1972, pp. 537–538 "At a more advanced level ... numbers expressed as (2 × ▢) + 0 are even numbers ... zero fits nicely into this pattern." - Caldwell & Xiong 2012, pp. 5–6. - Gowers 2002, p. 118 "The seemingly arbitrary exclusion of 1 from the definition of a prime … does not express some deep fact about numbers: it just happens to be a useful convention, adopted so there is only one way of factorizing any given number into primes." For a more detailed discussion, see Caldwell & Xiong (2012). - Partee 1978, p. xxi - Stewart 2001, p. 54 These rules are given, but they are not quoted verbatim. - Devlin 1985, pp. 30–33 - Penner 1999, p. 34. - Berlinghoff, Grant & Skrien 2001 For isolated vertices see p. 149; for groups see p. 311. - Lovász, Pelikán & Vesztergombi 2003, pp. 127–128 - Starr 1997, pp. 58–62 - Border 1985, pp. 23–25 - Lorentz 1994, pp. 5–6; Lovas & Pfenning 2008, p. 115; Nipkow, Paulson & Wenzel 2002, p. 127 - Bunch 1982, p. 165 - Salzmann et al. 2007, p. 
168 - Wise 2002, pp. 66–67 - Anderson 2001, p. 53; Hartsfield & Ringel 2003, p. 28 - Dummit & Foote 1999, p. 48 - Andrews 1990, p. 100 - Tabachnikova & Smith 2000, p. 99; Anderson & Feil 2005, pp. 437–438 - Barbeau 2003, p. 98 - Wong 1997, p. 479 - Gouvêa 1997, p. 25 Of a general prime p: "The reasoning here is that we can certainly divide 0 by p, and the answer is 0, which we can divide by p, and the answer is 0, which we can divide by p…" (ellipsis in original) - Krantz 2001, p. 4 - Salzmann et al. 2007, p. 224 - Frobisher 1999, p. 41 - This is the timeframe in United States, Canada, Great Britain, Australia, and Israel; see Levenson, Tsamir & Tirosh (2007, p. 85). - Frobisher 1999, pp. 31 (Introduction), 40–41 (The number zero), 48 (Implications for teaching) - Frobisher 1999, pp. 37, 40, 42; results are from the survey conducted in the mid- summer term of 1992. - Frobisher 1999, p. 41 "The percentage of Year 2 children deciding that zero is an even number is much lower than in the previous study, 32 per cent as opposed to 45 per cent" - Frobisher 1999, p. 41 "The success in deciding that zero is an even number did not continue to rise with age, with approximately one in two children in each of Years 2 to 6 putting a tick in the 'evens' box ..." - Frobisher 1999, pp. 40–42, 47; these results are from the February 1999 study, including 481 children, from three schools at a variety of attainment levels. - Frobisher 1999, p. 41, attributed to "Jonathan" - Frobisher 1999, p. 41, attributed to "Joseph" - Frobisher 1999, p. 41, attributed to "Richard" - Keith 2006, pp. 35–68 "There was little disagreement on the idea of zero being an even number. The students convinced the few who were not sure with two arguments. The first argument was that numbers go in a pattern ...odd, even, odd, even, odd, even... and since two is even and one is odd then the number before one, that is not a fraction, would be zero. So zero would need to be even. The second argument was that if a person has zero things and they put them into two equal groups then there would be zero in each group. The two groups would have the same amount, zero" - Levenson, Tsamir & Tirosh 2007, pp. 83–95 - Ball, Lewis & Thames 2008, p. 27, Figure 1.5 "Mathematical claims about zero." - Ball, Lewis & Thames 2008, p. 16. - Levenson, Tsamir & Tirosh 2007; Dickerson & Pitman 2012 - Dickerson & Pitman 2012. - Ball, Hill & Bass 2005, pp. 14–16 - Hill et al. 2008, pp. 446–447. - Lichtenberg 1972, p. 535 - Ball, Lewis & Thames 2008, p. 15. See also Ball's keynote for further discussion of appropriate definitions. - As concluded by Levenson, Tsamir & Tirosh (2007, p. 93), referencing Freudenthal (1983, p. 460) - Nuerk, Iversen & Willmes (2004, p. 851): "It can also be seen that zero strongly differs from all other numbers regardless of whether it is responded to with the left or the right hand. (See the line that separates zero from the other numbers.)" - See data throughout Dehaene, Bossini & Giraux (1993), and summary by Nuerk, Iversen & Willmes (2004, p. 837). - Dehaene, Bossini & Giraux 1993, pp. 374–376 - Dehaene, Bossini & Giraux 1993, pp. 376–377 - Dehaene, Bossini & Giraux 1993, p. 376 "In some intuitive sense, the notion of parity is familiar only for numbers larger than 2. Indeed, before the experiment, some L subjects were unsure whether 0 was odd or even and had to be reminded of the mathematical definition. 
The evidence, in brief, suggests that instead of being calculated on the fly by using a criterion of divisibility by 2, parity information is retrieved from memory together with a number of other semantic properties ... If a semantic memory is accessed in parity judgments, then interindividual differences should be found depending on the familiarity of the subjects with number concepts." - Nuerk, Iversen & Willmes 2004, pp. 838, 860–861 - The Math Forum participants 2000; Straight Dope Science Advisory Board 1999; Doctor Rick 2001 - Grimes 1975, p. 156 "...one can pose the following questions to married couples of his acquaintance: (1) Is zero an even number? ... Many couples disagree..." - Wilden & Hammer 1987, p. 104 - Snow 2001; Morgan 2001 - Steinberg 1999; Siegel 1999; Stingl 2006 - Sones & Sones 2002 "It follows that zero is even, and that 2/20/2000 nicely cracks the puzzle. Yet it's always surprising how much people are bothered by calling zero even..."; Column 8 readers 2006a "'...according to mathematicians, the number zero, along with negative numbers and fractions, is neither even nor odd,' writes Etan..."; Column 8 readers 2006b "'I agree that zero is even, but is Professor Bunder wise to 'prove' it by stating that 0 = 2 x 0? By that logic (from a PhD in mathematical logic, no less), as 0 = 1 x 0, it's also odd!' The prof will dispute this and, logically, he has a sound basis for doing so, but we may be wearing this topic a little thin ..." - Kaplan Staff 2004, p. 227 - Graduate Management Admission Council 2005, pp. 108, 295–297; Educational Testing Service 2009, p. 1 - Arsham 2002; The quote is attributed to the heute broadcast of October 1, 1977. Arsham's account is repeated by Crumpacker (2007, p. 165). - Sones & Sones 2002 "Penn State mathematician George Andrews, who recalls a time of gas rationing in Australia ... Then someone in the New South Wales parliament asserted this meant plates ending in zero could never get gas, because 'zero is neither odd nor even. So the New South Wales parliament ruled that for purposes of gas rationing, zero is an even number!'" - A 1980 Maryland law specifies, "(a) On even numbered calendar dates gasoline shall only be purchased by operators of vehicles bearing personalized registration plates containing no numbers and registration plates with the last digit ending in an even number. This shall not include ham radio operator plates. Zero is an even number; (b) On odd numbered calendar dates ..." Partial quotation taken from Department of Legislative Reference (1974), Laws of the State of Maryland, Volume 2, p. 3236, retrieved 2 June 2013 - Cutler 2008, pp. 237–238 - Brisman 2004, p. 153 - Smock 2006; Hohmann 2007; Turner 1996 - Diagram Group 1983, p. 213 - Baroody & Coslick 1998, p. 1.33 - Anderson, Ian (2001), A First Course in Discrete Mathematics, London: Springer, ISBN 978-1-85233-236-5 - Anderson, Marlow; Feil, Todd (2005), A First Course in Abstract Algebra: Rings, Groups, And Fields, London: CRC Press, ISBN 978-1-58488-515-3 - Andrews, Edna (1990), Markedness Theory: the union of asymmetry and semiosis in language, Durham: Duke University Press, ISBN 978-0-8223-0959-8 - Arnold, C. L. 
(January 1919), "The Number Zero", The Ohio Educational Monthly, 68 (1): 21–22, retrieved 11 April 2010 - Arsham, Hossein (January 2002), "Zero in Four Dimensions: Historical, Psychological, Cultural, and Logical Perspectives", The Pantaneto Forum, archived from the original on 25 September 2007, retrieved 24 September 2007 - Ball, Deborah Loewenberg; Hill, Heather C.; Bass, Hyman (2005), "Knowing Mathematics for Teaching: Who Knows Mathematics Well Enough To Teach Third Grade, and How Can We Decide?", American Educator, hdl: 2027.42/65072 - Ball, Deborah Loewenberg; Lewis, Jennifer; Thames, Mark Hoover (2008), "Making mathematics work in school" (PDF), Journal for Research in Mathematics Education, M14: 13–44 and 195–200, retrieved 4 March 2010 - Barbeau, Edward Joseph (2003), Polynomials, Springer, ISBN 978-0-387-40627-5 - Baroody, Arthur; Coslick, Ronald (1998), Fostering Children's Mathematical Power: An Investigative Approach to K-8, Lawrence Erlbaum Associates, ISBN 978-0-8058-3105-4 - Berlinghoff, William P.; Grant, Kerry E.; Skrien, Dale (2001), A Mathematics Sampler: Topics for the Liberal Arts (5th rev. ed.), Rowman & Littlefield, ISBN 978-0-7425-0202-4 - Border, Kim C. (1985), Fixed Point Theorems with Applications to Economics and Game Theory, Cambridge University Press, ISBN 978-0-521-38808-5 - Brisman, Andrew (2004), Mensa Guide to Casino Gambling: Winning Ways, Sterling, ISBN 978-1-4027-1300-2 - Bunch, Bryan H. (1982), Mathematical Fallacies and Paradoxes, Van Nostrand Reinhold, ISBN 978-0-442-24905-2 - Caldwell, Chris K.; Xiong, Yeng (27 December 2012), "What is the Smallest Prime?", Journal of Integer Sequences, 15 (9), arXiv: 1209.2007, Bibcode: 2012arXiv1209.2007C - Column 8 readers (10 March 2006a), "Column 8", The Sydney Morning Herald (First ed.), p. 18, Factiva SMHH000020060309e23a00049 - Column 8 readers (16 March 2006b), "Column 8", The Sydney Morning Herald (First ed.), p. 20, Factiva SMHH000020060315e23g0004z - Crumpacker, Bunny (2007), Perfect Figures: The Lore of Numbers and How We Learned to Count, Macmillan, ISBN 978-0-312-36005-4 - Cutler, Thomas J. (2008), The Bluejacket's Manual: United States Navy (Centennial ed.), Naval Institute Press, ISBN 978-1-55750-221-6 - Dehaene, Stanislas; Bossini, Serge; Giraux, Pascal (1993), "The mental representation of parity and numerical magnitude" (PDF), Journal of Experimental Psychology: General, 122 (3): 371–396, doi: 10.1037/0096-3445.122.3.371, archived from the original (PDF) on 19 July 2011, retrieved 13 September 2007 - Devlin, Keith (April 1985), "The golden age of mathematics", New Scientist, 106 (1452) - Diagram Group (1983), The Official World Encyclopedia of Sports and Games, Paddington Press, ISBN 978-0-448-22202-8 - Dickerson, David S.; Pitman, Damien J. (July 2012), Tai-Yih Tso (ed.), "Advanced college-level students' categorization and use of mathematical definitions" (PDF), Proceedings of the 36th Conference of the International Group for the Psychology of Mathematics Education, 2: 187–195 - Dummit, David S.; Foote, Richard M. (1999), Abstract Algebra (2nd ed.), New York: Wiley, ISBN 978-0-471-36857-1 - Educational Testing Service (2009), Mathematical Conventions for the Quantitative Reasoning Measure of the GRE® revised General Test (PDF), Educational Testing Service, retrieved 6 September 2011 - Freudenthal, H.
(1983), Didactical phenomenology of mathematical structures, Dordrecht, The Netherlands: Reidel - Frobisher, Len (1999), "Primary School Children's Knowledge of Odd and Even Numbers", in Anthony Orton (ed.), Pattern in the Teaching and Learning of Mathematics, London: Cassell, pp. 31–48 - Gouvêa, Fernando Quadros (1997), p-adic numbers: an introduction (2nd ed.), Springer-Verlag, ISBN 978-3-540-62911-5 - Gowers, Timothy (2002), Mathematics: A Very Short Introduction, Oxford University Press, ISBN 978-0-19-285361-5 - Graduate Management Admission Council (September 2005), The Official Guide for GMAT Review (11th ed.), McLean, VA: Graduate Management Admission Council, ISBN 978-0-9765709-0-5 - Grimes, Joseph E. (1975), The Thread of Discourse, Walter de Gruyter, ISBN 978-90-279-3164-1 - Hartsfield, Nora; Ringel, Gerhard (2003), Pearls in Graph Theory: A Comprehensive Introduction, Mineola: Courier Dover, ISBN 978-0-486-43232-8 - Hill, Heather C.; Blunk, Merrie L.; Charalambous, Charalambos Y.; Lewis, Jennifer M.; Phelps, Geoffrey C.; Sleep, Laurie; Ball, Deborah Loewenberg (2008), "Mathematical Knowledge for Teaching and the Mathematical Quality of Instruction: An Exploratory Study", Cognition and Instruction, 26 (4): 430–511, doi: 10.1080/07370000802177235 - Hohmann, George (25 October 2007), "Companies let market determine new name", Charleston Daily Mail, p. P1C, Factiva CGAZ000020071027e3ap0001l - Kaplan Staff (2004), Kaplan SAT 2400, 2005 Edition, Simon and Schuster, ISBN 978-0-7432-6035-0 - Keith, Annie (2006), "Mathematical Argument in a Second Grade Class: Generating and Justifying Generalized Statements about Odd and Even Numbers", Teachers Engaged in Research: Inquiry in Mathematics Classrooms, Grades Pre-K-2, IAP, ISBN 978-1-59311-495-4 - Krantz, Steven George (2001), Dictionary of algebra, arithmetic, and trigonometry, CRC Press, ISBN 978-1-58488-052-3 - Levenson, Esther; Tsamir, Pessia; Tirosh, Dina (2007), "Neither even nor odd: Sixth grade students' dilemmas regarding the parity of zero", The Journal of Mathematical Behavior, 26 (2): 83–95, doi: 10.1016/j.jmathb.2007.05.004 - Lichtenberg, Betty Plunkett (November 1972), "Zero is an even number", The Arithmetic Teacher, 19 (7): 535–538, doi: 10.5951/AT.19.7.0535 - Lorentz, Richard J. (1994), Recursive Algorithms, Intellect Books, ISBN 978-1-56750-037-0 - Lovas, William; Pfenning, Frank (22 January 2008), "A Bidirectional Refinement Type System for LF", Electronic Notes in Theoretical Computer Science, 196: 113–128, doi: 10.1016/j.entcs.2007.09.021 - Lovász, László; Pelikán, József; Vesztergombi, Katalin L. (2003), Discrete Mathematics: Elementary and Beyond, Springer, ISBN 978-0-387-95585-8 - Morgan, Frank (5 April 2001), "Old Coins", Frank Morgan's Math Chat, The Mathematical Association of America, retrieved 22 August 2009 - Nipkow, Tobias; Paulson, Lawrence C.; Wenzel, Markus (2002), Isabelle/Hol: A Proof Assistant for Higher-Order Logic, Springer, ISBN 978-3-540-43376-7 - Nuerk, Hans-Christoph; Iversen, Wiebke; Willmes, Klaus (July 2004), "Notational modulation of the SNARC and the MARC (linguistic markedness of response codes) effect", The Quarterly Journal of Experimental Psychology A, 57 (5): 835–863, doi: 10.1080/02724980343000512, PMID 15204120, S2CID 10672272 - Partee, Barbara Hall (1978), Fundamentals of Mathematics for Linguistics, Dordrecht: D. Reidel, ISBN 978-90-277-0809-0 - Penner, Robert C. 
(1999), Discrete Mathematics: Proof Techniques and Mathematical Structures, River Edge: World Scientific, ISBN 978-981-02-4088-2 - Salzmann, H.; Grundhöfer, T.; Hähl, H.; Löwen, R. (2007), The Classical Fields: Structural Features of the Real and Rational Numbers, Cambridge University Press, ISBN 978-0-521-86516-6 - Siegel, Robert (19 November 1999), "Analysis: Today's date is signified in abbreviations using only odd numbers. 1-1, 1-9, 1-9-9-9. The next time that happens will be more than a thousand years from now.", All Things Considered, National Public Radio - Smock, Doug (6 February 2006), "The odd bets: Hines Ward vs. Tiger Woods", Charleston Gazette, p. P1B, Factiva CGAZ000020060207e226000bh - Snow, Tony (23 February 2001), "Bubba's fools", Jewish World Review, retrieved 22 August 2009 - Sones, Bill; Sones, Rich (8 May 2002), "To hide your age, button your lips", Deseret News, p. C07, retrieved 21 June 2014 - Starr, Ross M. (1997), General Equilibrium Theory: An Introduction, Cambridge University Press, ISBN 978-0-521-56473-1 - Steinberg, Neil (30 November 1999), "Even year, odd facts", Chicago Sun-Times (5XS ed.), p. 50, Factiva chi0000020010826dvbu0119h - Stewart, Mark Alan (2001), 30 Days to the GMAT CAT, Stamford: Thomson, ISBN 978-0-7689-0635-6 - Stingl, Jim (5 April 2006), "01:02:03 04/05/06; We can count on some things in life", Milwaukee Journal Sentinel (Final ed.), p. B1, archived from the original on 27 April 2006, retrieved 21 June 2014 - Tabachnikova, Olga M.; Smith, Geoff C. (2000), Topics in Group Theory, London: Springer, ISBN 978-1-85233-235-8 - The Math Forum participants (2000), "A question around zero", Math Forum » Discussions » History » Historia-Matematica, Drexel University, retrieved 25 September 2007 - Turner, Julian (13 July 1996), "Sports Betting – For Lytham Look to the South Pacific", The Guardian, p. 23, Factiva grdn000020011017ds7d00bzg - Wilden, Anthony; Hammer, Rhonda (1987), The rules are no game: the strategy of communication, Routledge Kegan & Paul, ISBN 978-0-7100-9868-9 - Wise, Stephen (2002), GIS Basics, CRC Press, ISBN 978-0-415-24651-4 - Wong, Samuel Shaw Ming (1997), Computational Methods in Physics and Engineering, World Scientific, ISBN 978-981-02-3043-2 - Doctor Rick (2001), "Is Zero Even?", Ask Dr. Math, The Math Forum, retrieved 6 June 2013 - Straight Dope Science Advisory Board (1999), "Is zero odd or even?", The Straight Dope Mailbag, retrieved 6 June 2013
Ohm’s Law | What is Ohm’s Law :- In 1826, the German scientist Dr. Georg Simon Ohm expressed the relationship between the potential difference across a conductor and the current flowing in it by a law, which is called Ohm’s law. According to Ohm’s law, “If the physical state of a conductor (such as temperature, length, area, etc.) is kept unchanged, then the ratio of the potential difference across its ends to the current flowing through it remains constant.” Equivalently, “If the physical conditions of a conductor such as temperature, pressure, length, area, etc., remain constant, then the applied potential difference across its ends is proportional to the current flowing through it.” That is, if a current I flows through the conductor when a potential difference V is applied across it, then by Ohm’s law V/I = constant. This constant is called the electrical resistance of the conductor and is denoted by R.
Electrical Resistance (R) :- The property of a conductor by which it opposes the flow of charge through it is called resistance.
S.I. unit of resistance :- Volt/Ampere, or Ohm (Ω)
Definition of 1 Ohm :- If a potential difference of one volt applied between the ends of a conductor results in a current of one ampere through it, then the resistance of the conductor is said to be one ohm. 1 Ω = 1 V/1 A
Formula of Ohm’s Law :- V = IR
Verification of Ohm’s Law :- To verify Ohm’s law, a battery (B), a rheostat (Rh), an ammeter (A) and a conductor wire PQ are taken as shown in the figure. The voltmeter is connected in parallel with the wire PQ, and the ammeter is connected in series with the wire. By changing the value of the current with the help of the rheostat, the corresponding potential difference for each value of current is read from the voltmeter. A straight-line graph is obtained between the readings of the voltmeter (V) and the ammeter (I). The slope (tan θ) of this V-I graph is the resistance (R) of the conducting wire PQ.
Limitations of Ohm’s Law :- Ohm’s law has been found valid over a large class of materials, but there are some materials and devices used in electric circuits where the proportionality of V and I does not hold. The main deviations observed are:
1. V ceases to be proportional to I. In the graph for such a material, the dashed line represents the linear Ohm’s law, while the solid line is the actual voltage V versus current I for a good conductor.
2. The relation between V and I depends on the sign of V, i.e., if I is the current for a certain V, then reversing the direction of V while keeping its magnitude fixed does not produce the same current I in the opposite direction. This happens, for example, in a semiconductor diode, as its characteristic curve shows.
3. The relation between V and I is not unique, i.e., there is more than one value of V for the same current I. For example, in the V-I graph for GaAs, we get more than one value of V for the same current I.
Ohmic and Non-Ohmic Conductors
(i) Ohmic Conductors :- The materials which obey Ohm’s law, i.e., for which the graph between V and I is a straight line, are called ohmic conductors.
(ii) Non-Ohmic Conductors :- Those materials which do not obey Ohm’s law, i.e., for which the graph between V and I is not a straight line but a curve, are called non-ohmic conductors.
- If the temperature of the conductor increases, the amplitude of the vibrations of the positive ions in the conductor also increases.
Due to this, the free electrons collide more frequently with the vibrating ions and, as a result, the average relaxation time decreases. This results in an increase in the resistance of the conductor.
- At different temperatures, the V-I curves are different. The slope of a V-I graph gives tan θ = V/I = R; if tan θ1 > tan θ2, then R1 > R2, i.e., T1 > T2.
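The verification procedure above reduces to simple arithmetic on the table of readings: for an ohmic conductor, every V/I ratio should come out (nearly) the same. A minimal sketch with made-up readings for an assumed 5-ohm wire (the values are illustrative, not experimental data):

#include <iostream>

// For each rheostat setting, record the voltmeter reading V and ammeter
// reading I; Ohm's law predicts a constant ratio R = V/I for an ohmic wire.
int main()
{
    const double V[4] = {1.0, 2.0, 3.0, 4.0};   // volts (example values)
    const double I[4] = {0.2, 0.4, 0.6, 0.8};   // amperes (example values)
    for (int k = 0; k < 4; k++)
        std::cout << "V = " << V[k] << " V, I = " << I[k]
                  << " A, R = V/I = " << V[k] / I[k] << " ohm\n";
    return 0;   // every ratio prints 5, the slope of the V-I line
}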
In meteorology, precipitation is any product of the condensation of atmospheric water vapour that falls under gravity. The main forms of precipitation include drizzle, rain, sleet, snow, graupel and hail. Precipitation occurs when a portion of the atmosphere becomes saturated with water vapour, so that the water condenses and "precipitates". Thus, fog and mist are not precipitation but suspensions, because the water vapour does not condense sufficiently to precipitate. Two processes, possibly acting together, can lead to air becoming saturated: cooling the air or adding water vapour to the air. Generally, precipitation will fall to the surface; an exception is virga, which evaporates before reaching the surface. Precipitation forms as smaller droplets coalesce via collision with other rain drops or ice crystals within a cloud. Rain drops range in size from oblate, pancake-like shapes for larger drops, to small spheres for smaller drops. Unlike raindrops, snowflakes grow in a variety of different shapes and patterns, determined by the temperature and humidity characteristics of the air the snowflake moves through on its way to the ground. While snow and ice pellets require temperatures close to the ground to be near or below freezing, hail can occur during much warmer temperature regimes due to the process of its formation. Moisture overriding associated with weather fronts is an overall major method of precipitation production. If enough moisture and upward motion is present, precipitation falls from convective clouds such as cumulonimbus and can organize into narrow rainbands. Where relatively warm water bodies are present, for example due to water evaporation from lakes, lake-effect snowfall becomes a concern downwind of the warm lakes within the cold cyclonic flow around the backside of extratropical cyclones. Lake-effect snowfall can be locally heavy. Thundersnow is possible within a cyclone's comma head and within lake effect precipitation bands. In mountainous areas, heavy precipitation is possible where upslope flow is maximized within windward sides of the terrain at elevation. On the leeward side of mountains, desert climates can exist due to the dry air caused by compressional heating. The movement of the monsoon trough, or intertropical convergence zone, brings rainy seasons to savannah climes. Precipitation is a major component of the water cycle, and is responsible for depositing the fresh water on the planet. Approximately 505,000 cubic kilometres (121,000 cu mi) of water falls as precipitation each year: 398,000 cubic kilometres (95,000 cu mi) of it over the oceans and 107,000 cubic kilometres (26,000 cu mi) over land. Given the Earth's surface area, that means the globally averaged annual precipitation is 990 millimetres (39 in), but over land it is only 715 millimetres (28.1 in). Climate classification systems such as the Köppen climate classification system use average annual rainfall to help differentiate between differing climate regimes. The urban heat island effect may lead to increased rainfall, both in amounts and intensity, downwind of cities. Global warming is also causing changes in the precipitation pattern globally. Precipitation may occur on other celestial bodies; for example, when it gets cold, Mars has precipitation, which most likely takes the form of ice needles rather than rain or snow.
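The 990 mm figure above is easy to sanity-check: dividing the annual precipitation volume by Earth's surface area (about 510 million square kilometres, a standard value not stated in the text) gives roughly one metre of water per year. A quick sketch of the arithmetic:

#include <iostream>

// Annual precipitation volume spread uniformly over Earth's surface.
// The area figure is an assumed standard value, not from the text above.
int main()
{
    const double volumeKm3 = 505000.0;              // km^3 of precipitation per year
    const double areaKm2 = 5.1e8;                   // Earth's surface area in km^2
    double depthMm = volumeKm3 / areaKm2 * 1.0e6;   // km -> mm is a factor of 10^6
    std::cout << depthMm << " mm/yr\n";             // prints about 990
    return 0;
}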
Mechanisms of producing precipitation include convective, stratiform, and orographic rainfall. Convective processes involve strong vertical motions that can overturn the atmosphere at that location within an hour and cause heavy precipitation, while stratiform processes involve weaker upward motions and less intense precipitation. Precipitation can be divided into three categories, based on whether it falls as liquid water, liquid water that freezes on contact with the surface, or ice. Mixtures of different types of precipitation, including types in different categories, can fall simultaneously. Liquid forms of precipitation include rain and drizzle. Rain or drizzle that freezes on contact within a subfreezing air mass is called "freezing rain" or "freezing drizzle". Frozen forms of precipitation include snow, ice needles, ice pellets, hail, and graupel.

How the air becomes saturated

Cooling air to its dew point
The dew point is the temperature to which a parcel of air must be cooled in order to become saturated; unless supersaturation occurs, further cooling makes the water vapour condense into liquid water. Water vapour normally begins to condense on condensation nuclei such as dust, ice, and salt in order to form clouds. An elevated portion of a frontal zone forces broad areas of lift, which form cloud decks such as altostratus or cirrostratus. Stratus is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass. It can also form due to the lifting of advection fog during breezy conditions.

There are four main mechanisms for cooling the air to its dew point: adiabatic cooling, conductive cooling, radiational cooling, and evaporative cooling. Adiabatic cooling occurs when air rises and expands; the air can rise due to convection, large-scale atmospheric motions, or a physical barrier such as a mountain (orographic lift). Conductive cooling occurs when the air comes into contact with a colder surface, usually by being blown from one surface to another, for example from a liquid water surface to colder land. Radiational cooling occurs due to the emission of infrared radiation, either by the air or by the surface underneath. Evaporative cooling occurs when moisture is added to the air through evaporation, which forces the air temperature to cool to its wet-bulb temperature, or until it reaches saturation.

Adding moisture to the air
The main ways water vapour is added to the air are: wind convergence into areas of upward motion, precipitation or virga falling from above, daytime heating evaporating water from the surface of oceans, water bodies or wet land, transpiration from plants, cool or dry air moving over warmer water, and the lifting of air over mountains.
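The dew point itself can be estimated from temperature and relative humidity. A common approach is the Magnus approximation sketched below; the constants are one published fit for temperatures over water, and the whole thing is an approximation rather than an exact thermodynamic result.

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (deg C) via the Magnus formula."""
    a, b = 17.62, 243.12   # one common fit of the Magnus constants over water
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# A parcel at 20 C and 60% relative humidity must cool by roughly 8 degrees,
# to about 12 C, before it saturates and condensation can begin.
print(f"{dew_point_c(20.0, 60.0):.1f} C")
```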
Coalescence occurs when water droplets fuse to create larger water droplets; water droplets can also freeze onto an ice crystal, a mechanism known as the Bergeron process. The fall rate of very small droplets is negligible, hence clouds do not fall out of the sky; precipitation occurs only when these coalesce into larger drops. When air turbulence occurs, water droplets collide, producing larger droplets. As these larger water droplets descend, coalescence continues, so that drops become heavy enough to overcome air resistance and fall as rain.

Raindrops have sizes ranging from 0.1 millimetres (0.0039 in) to 9 millimetres (0.35 in) mean diameter, above which they tend to break up. Smaller drops are called cloud droplets, and their shape is spherical. As a raindrop increases in size, its shape becomes more oblate, with its largest cross-section facing the oncoming airflow. Contrary to cartoon pictures of raindrops, their shape does not resemble a teardrop. Intensity and duration of rainfall are usually inversely related: high-intensity storms are likely to be of short duration, while low-intensity storms can have a long duration. Raindrops associated with melting hail tend to be larger than other raindrops. The METAR code for rain is RA, while the coding for rain showers is SHRA.

Ice pellets or sleet are a form of precipitation consisting of small, translucent balls of ice. Ice pellets are usually (but not always) smaller than hailstones. They often bounce when they hit the ground, and generally do not freeze into a solid mass unless mixed with freezing rain. The METAR code for ice pellets is PL. Ice pellets form when a layer of above-freezing air exists with sub-freezing air both above and below. This causes the partial or complete melting of any snowflakes falling through the warm layer. As they fall back into the sub-freezing layer closer to the surface, they re-freeze into ice pellets. However, if the sub-freezing layer beneath the warm layer is too shallow, the precipitation will not have time to re-freeze, and freezing rain will be the result at the surface. A temperature profile showing a warm layer above the ground is most likely to be found in advance of a warm front during the cold season, but can occasionally be found behind a passing cold front.

Like other precipitation, hail forms in storm clouds when supercooled water droplets freeze on contact with condensation nuclei, such as dust or dirt. The storm's updraft blows the hailstones to the upper part of the cloud. The updraft dissipates and the hailstones fall down, back into the updraft, and are lifted again. Hail has a diameter of 5 millimetres (0.20 in) or more. Within METAR code, GR is used to indicate larger hail, with a diameter of at least 6.4 millimetres (0.25 in); GR is derived from the French word grêle. Smaller-sized hail, as well as snow pellets, use the coding GS, short for the French word grésil. Stones just larger than golf-ball size are one of the most frequently reported hail sizes. Hailstones can grow to 15 centimetres (6 in) and weigh more than 500 grams (1 lb). In large hailstones, latent heat released by further freezing may melt the outer shell of the hailstone. The hailstone then may undergo 'wet growth', where the liquid outer shell collects other smaller hailstones. The hailstone gains an ice layer and grows increasingly larger with each ascent. Once a hailstone becomes too heavy to be supported by the storm's updraft, it falls from the cloud.
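The point at which a hailstone becomes "too heavy to be supported" can be estimated with a simple drag-versus-weight balance. The sketch below is an order-of-magnitude illustration only: the ice density, air density and drag coefficient are assumed round values, and real hailstones are neither smooth nor spherical.

```python
import math

def updraft_to_suspend(diameter_m: float,
                       rho_ice: float = 900.0,  # kg/m^3, assumed hailstone density
                       rho_air: float = 1.0,    # kg/m^3, rough mid-cloud air density
                       drag_coeff: float = 0.6) -> float:
    """Updraft speed (m/s) at which drag on a spherical stone balances its weight."""
    radius = diameter_m / 2.0
    # Setting 0.5 * rho_air * Cd * (pi r^2) * v^2 = (4/3) pi r^3 * rho_ice * g
    # and solving for v gives:
    return math.sqrt(8.0 * rho_ice * 9.81 * radius / (3.0 * rho_air * drag_coeff))

print(f"{updraft_to_suspend(0.005):.0f} m/s")   # ~10 m/s to hold up 5 mm hail
print(f"{updraft_to_suspend(0.025):.0f} m/s")   # ~22 m/s for 2.5 cm stones
```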
Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. Once a droplet has frozen, it grows in the supersaturated environment. Because water droplets are more numerous than the ice crystals, the crystals are able to grow to hundreds of micrometres or millimetres in size at the expense of the water droplets: the depletion of water vapour causes the droplets to evaporate, and the ice crystals grow at the droplets' expense. This is known as the Wegener–Bergeron–Findeisen process. These large crystals are an efficient source of precipitation, since they fall through the atmosphere due to their mass, and may collide and stick together in clusters, or aggregates. These aggregates are snowflakes, and are usually the type of ice particle that falls to the ground. Guinness World Records lists the world's largest snowflakes as those of January 1887 at Fort Keogh, Montana; allegedly one measured 38 cm (15 in) wide. The exact details of the sticking mechanism remain a subject of research.

Although the ice is clear, scattering of light by the crystal facets and hollows/imperfections means that the crystals often appear white, due to diffuse reflection of the whole spectrum of light by the small ice particles. The shape of the snowflake is determined broadly by the temperature and humidity at which it is formed. Rarely, at a temperature of around −2 °C (28 °F), snowflakes can form with threefold symmetry, producing triangular snowflakes. The most common snow particles are visibly irregular, although near-perfect snowflakes may be more common in pictures because they are more visually appealing. No two snowflakes are alike: they grow at different rates and in different patterns depending on the changing temperature and humidity within the atmosphere that each snowflake falls through on its way to the ground. The METAR code for snow is SN, while snow showers are coded SHSN.

Diamond dust, also known as ice needles or ice crystals, forms at temperatures approaching −40 °C (−40 °F), when air with slightly higher moisture from aloft mixes with colder, surface-based air. The particles are simple ice crystals, hexagonal in shape. The METAR identifier for diamond dust within international hourly weather reports is IC.

Stratiform or dynamic precipitation occurs as a consequence of slow ascent of air in synoptic systems (on the order of cm/s), such as over surface cold fronts, and over and ahead of warm fronts. Similar ascent is seen around tropical cyclones outside of the eyewall, and in comma-head precipitation patterns around mid-latitude cyclones. A wide variety of weather can be found along an occluded front, with thunderstorms possible, but usually its passage is associated with a drying of the air mass. Occluded fronts usually form around mature low-pressure areas.

Convective rain, or showery precipitation, occurs from convective clouds, e.g., cumulonimbus or cumulus congestus. It falls as showers with rapidly changing intensity. Convective precipitation falls over a certain area for a relatively short time, as convective clouds have limited horizontal extent. Most precipitation in the tropics appears to be convective; however, it has been suggested that stratiform precipitation also occurs. Graupel and hail indicate convection.
In mid-latitudes, convective precipitation is intermittent and often associated with baroclinic boundaries such as cold fronts, squall lines, and warm fronts.

Orographic precipitation occurs on the windward side of mountains and is caused by the rising air motion of a large-scale flow of moist air across the mountain ridge, resulting in adiabatic cooling and condensation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward or downwind side. Moisture is removed by orographic lift, leaving drier air (see katabatic wind) on the descending and generally warming leeward side, where a rain shadow is observed.

In Hawaii, Mount Waiʻaleʻale, on the island of Kauai, is notable for its extreme rainfall: it has the second-highest average annual rainfall on Earth, at 12,000 millimetres (460 in). Storm systems affect the state with heavy rains between October and March. Local climates vary considerably on each island due to their topography, divisible into windward (Koʻolau) and leeward (Kona) regions based upon location relative to the higher mountains. Windward sides face the east-to-northeast trade winds and receive much more rainfall; leeward sides are drier and sunnier, with less rain and less cloud cover. In South America, the Andes mountain range blocks Pacific moisture arriving on that continent, resulting in a desert-like climate just downwind across western Argentina. The Sierra Nevada range creates the same effect in North America, forming the Great Basin and Mojave Deserts.

Extratropical cyclones can bring cold and dangerous conditions with heavy rain and snow, with winds exceeding 119 km/h (74 mph); such systems are sometimes referred to as windstorms in Europe. The band of precipitation associated with their warm front is often extensive, forced by weak upward vertical motion of air over the frontal boundary, which condenses as it cools and produces precipitation within an elongated, wide, stratiform band falling out of nimbostratus clouds. When moist air tries to dislodge an arctic air mass, overrunning snow can result within the poleward side of the elongated precipitation band. In the Northern Hemisphere, poleward is towards the North Pole, or north; in the Southern Hemisphere, poleward is towards the South Pole, or south.

Southwest of extratropical cyclones, curved cyclonic flow bringing cold air across relatively warm water bodies can lead to narrow lake-effect snow bands. Those bands bring strong localized snowfall, which can be understood as follows: large water bodies such as lakes efficiently store heat, producing significant temperature differences (larger than 13 °C or 23 °F) between the water surface and the air above. Because of this temperature difference, warmth and moisture are transported upward, condensing into vertically oriented clouds which produce snow showers. The temperature decrease with height and the cloud depth are directly affected by both the water temperature and the large-scale environment. The stronger the temperature decrease with height, the deeper the clouds get, and the greater the precipitation rate becomes. In mountainous areas, heavy snowfall accumulates when air is forced to ascend the mountains and squeeze out precipitation along their windward slopes, which, in cold conditions, falls in the form of snow.
Because of the ruggedness of terrain, forecasting the location of heavy snowfall remains a significant challenge.

Within the tropics
The wet, or rainy, season is the time of year, covering one or more months, when most of the average annual rainfall in a region falls. The term green season is also sometimes used as a euphemism by tourist authorities. Areas with wet seasons are dispersed across portions of the tropics and subtropics. Savanna climates and areas with monsoon regimes have wet summers and dry winters. Tropical rainforests technically do not have dry or wet seasons, since their rainfall is equally distributed through the year. Some areas with pronounced rainy seasons see a break in rainfall mid-season, when the intertropical convergence zone or monsoon trough moves poleward of their location during the middle of the warm season. When the wet season occurs during the warm season, or summer, rain falls mainly during the late afternoon and early evening hours. The wet season is a time when air quality improves, freshwater quality improves, and vegetation grows significantly; however, soil nutrients diminish and erosion increases. Animals have adaptation and survival strategies for the wetter regime. The preceding dry season leads to food shortages into the wet season, as the crops have yet to mature. Developing countries have noted that their populations show seasonal weight fluctuations due to food shortages seen before the first harvest, which occurs late in the wet season.

Tropical cyclones, a source of very heavy rainfall, consist of large air masses several hundred miles across with low pressure at the centre and with winds blowing inward towards the centre in either a clockwise direction (Southern Hemisphere) or a counterclockwise direction (Northern Hemisphere). Although cyclones can take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of the places they impact, as they may bring much-needed precipitation to otherwise dry regions. Areas in their path can receive a year's worth of rainfall from a tropical cyclone passage.

Large-scale geographical distribution
On the large scale, the highest precipitation amounts outside topography fall in the tropics, closely tied to the Intertropical Convergence Zone, itself the ascending branch of the Hadley cell. Mountainous locales near the equator in Colombia are amongst the wettest places on Earth. North and south of this are regions of descending air that form subtropical ridges where precipitation is low; the land surface underneath is usually arid, and these regions form most of the Earth's deserts. An exception to this rule is in Hawaii, where upslope flow due to the trade winds leads to one of the wettest locations on Earth. Otherwise, the flow of the Westerlies into the Rocky Mountains leads to the wettest, and at elevation snowiest, locations within North America. In Asia during the wet season, the flow of moist air into the Himalayas leads to some of the greatest rainfall amounts measured on Earth, in northeast India.

Measurement
The standard way of measuring rainfall or snowfall is the standard rain gauge, which can be found in 100 mm (4 in) plastic and 200 mm (8 in) metal varieties. The inner cylinder is filled by 25 mm (1 in) of rain, with overflow flowing into the outer cylinder. Plastic gauges have markings on the inner cylinder down to 0.25 mm (0.01 in) resolution, while metal gauges require use of a stick designed with the appropriate 0.25 mm (0.01 in) markings.
After the inner cylinder is filled, the amount inside it is recorded and the cylinder is emptied, then refilled with the remaining rainfall from the outer cylinder, adding to the overall total, until the outer cylinder is empty. These gauges are used in the winter by removing the funnel and inner cylinder and allowing snow and freezing rain to collect inside the outer cylinder. Some observers add anti-freeze to the gauge so they do not have to melt the snow or ice that falls into it. Once the snowfall or ice has finished accumulating, or as 300 mm (12 in) is approached, one can either bring the gauge inside to melt the contents, or fill the inner cylinder with lukewarm water to melt the frozen precipitation in the outer cylinder, keeping track of the warm fluid added and subtracting it from the overall total once all the ice and snow has melted.

Other types of gauges include the popular wedge gauge (the cheapest and most fragile rain gauge), the tipping-bucket rain gauge, and the weighing rain gauge. The wedge and tipping-bucket gauges have problems with snow: attempts to compensate by warming the tipping bucket meet with limited success, since snow may sublimate if the gauge is kept much above freezing. Weighing gauges with antifreeze should do fine with snow, but again, the funnel needs to be removed before the event begins. For those looking to measure rainfall most inexpensively, a cylindrical can with straight sides will act as a rain gauge if left out in the open, but its accuracy will depend on the ruler used to measure the depth of the collected rain. Any of the above rain gauges can be made at home, with enough know-how.

Once a precipitation measurement is made, various networks exist across the United States and elsewhere through which rainfall measurements can be submitted via the Internet, such as CoCoRaHS or GLOBE. If a network is not available in the area where one lives, the nearest local weather office will likely be interested in the measurement.

A concept used in precipitation measurement is the hydrometeor. Bits of liquid or solid water in the atmosphere are known as hydrometeors. Formations due to condensation, such as clouds, haze, fog, and mist, are composed of hydrometeors. All precipitation types are made up of hydrometeors by definition, including virga, precipitation that evaporates before reaching the ground. Particles blown from the Earth's surface by wind, such as blowing snow and blowing sea spray, are also hydrometeors.

Although surface precipitation gauges are considered the standard for measuring precipitation, there are many areas in which their use is not feasible, including the vast expanses of ocean and remote land areas. In other cases, social, technical or administrative issues prevent the dissemination of gauge observations. As a result, the modern global record of precipitation largely depends on satellite observations. Satellite sensors work by remotely sensing precipitation: recording the parts of the electromagnetic spectrum that theory and practice show are related to the occurrence and intensity of precipitation. The sensors are almost exclusively passive, recording what they see, similar to a camera, in contrast to active sensors (radar, lidar) that send out a signal and detect its impact on the area being observed. Satellite sensors now in practical use for precipitation fall into two categories.
Thermal infrared (IR) sensors record a channel around the 11-micron wavelength and primarily give information about cloud tops. Because of the typical structure of the atmosphere, cloud-top temperatures are approximately inversely related to cloud-top heights, meaning colder clouds almost always occur at higher altitudes. Further, cloud tops with a lot of small-scale variation are likely to be more vigorous than smooth-topped clouds. Various mathematical schemes, or algorithms, use these and other properties to estimate precipitation from the IR data.

The second category of sensor channels is in the microwave part of the electromagnetic spectrum. The frequencies in use range from about 10 gigahertz to a few hundred GHz. Channels up to about 37 GHz primarily provide information on the liquid hydrometeors (rain and drizzle) in the lower parts of clouds, with larger amounts of liquid emitting larger amounts of microwave radiant energy. Channels above 37 GHz display emission signals as well, but are dominated by the tendency of solid hydrometeors (snow, graupel, etc.) to scatter microwave radiant energy. Satellites such as the Tropical Rainfall Measuring Mission (TRMM) and the Global Precipitation Measurement (GPM) mission employ microwave sensors to form precipitation estimates. Additional sensor channels and products have been demonstrated to provide further useful information, including visible channels, additional IR channels, water vapour channels and atmospheric sounding retrievals. However, most precipitation data sets in current use do not employ these data sources.

Satellite data sets
The IR estimates have rather low skill at short time and space scales, but are available very frequently (every 15 minutes or more often) from satellites in geosynchronous Earth orbit. IR works best in cases of deep, vigorous convection, such as in the tropics, and becomes progressively less useful in areas where stratiform (layered) precipitation dominates, especially in mid- and high-latitude regions. The more direct physical connection between hydrometeors and microwave channels gives the microwave estimates greater skill on short time and space scales than the IR estimates. However, microwave sensors fly only on low-Earth-orbit satellites, and there are few enough of them that the average time between observations exceeds three hours. This several-hour interval is insufficient to adequately document precipitation, because of the transient nature of most precipitation systems and the inability of a single satellite to appropriately capture the typical daily cycle of precipitation at a given location.

Since the late 1990s, several algorithms have been developed to combine precipitation data from multiple satellites' sensors, seeking to emphasize the strengths and minimize the weaknesses of the individual input data sets. The goal is to provide "best" estimates of precipitation on a uniform time/space grid, usually for as much of the globe as possible. In some cases the long-term homogeneity of the dataset is emphasized, which is the Climate Data Record standard. In other cases, the goal is producing the best instantaneous satellite estimate, which is the High Resolution Precipitation Product approach. In either case, the less-emphasized goal is also considered desirable. One key result of the multi-satellite studies is that including even a small amount of surface gauge data is very useful for controlling the biases that are endemic to satellite estimates.
The difficulties in using gauge data are that (1) their availability is limited, as noted above, and (2) the best analyses of gauge data take two months or more after the observation time to undergo the necessary transmission, assembly, processing and quality control. Thus, precipitation estimates that include gauge data tend to be produced further after the observation time than the no-gauge estimates. As a result, while estimates that include gauge data may provide a more accurate depiction of the "true" precipitation, they are generally not suited for real- or near-real-time applications. The work described has resulted in a variety of datasets possessing different formats, time/space grids, periods of record and regions of coverage, input datasets, and analysis procedures, as well as many different forms of dataset version designators. In many cases, one of the modern multi-satellite data sets is the best choice for general use.

Return period
The likelihood or probability of an event with a specified intensity and duration is called the return period or frequency. The intensity of a storm can be predicted for any return period and storm duration, from charts based on historic data for the location. The term 1 in 10 year storm describes a rainfall event which is rare and likely to occur, on average, only once every 10 years, so it has a 10 percent likelihood in any given year. The rainfall will be greater and the flooding worse than the worst storm expected in any single year. The term 1 in 100 year storm describes a rainfall event which is extremely rare and occurs, on average, only once in a century, so it has a 1 percent likelihood in any given year. The rainfall will be extreme and the flooding worse than that of a 1 in 10 year event. As with all probability events, it is possible, though unlikely, to have two "1 in 100 year storms" in a single year, as the sketch below illustrates.
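As a quick numerical illustration of the return-period arithmetic, the sketch below computes the chance of seeing at least one "1 in T year" event over a multi-year horizon, under the simplifying assumption that years are independent.

```python
def chance_of_at_least_one(return_period_years: float, horizon_years: int) -> float:
    """P(at least one event in the horizon), assuming independent years."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** horizon_years

print(f"{chance_of_at_least_one(100, 1):.0%}")    # 1% in any single year
print(f"{chance_of_at_least_one(100, 30):.0%}")   # ~26% over 30 years
print(f"{chance_of_at_least_one(100, 100):.0%}")  # ~63% over a full century
```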
Role in Köppen climate classification
The Köppen classification depends on average monthly values of temperature and precipitation. The most commonly used form of the Köppen classification has five primary types labeled A through E: A, tropical; B, dry; C, mild mid-latitude; D, cold mid-latitude; and E, polar. The five primary classifications can be further divided into secondary classifications such as rain forest, monsoon, tropical savanna, humid subtropical, humid continental, oceanic climate, Mediterranean climate, steppe, subarctic climate, tundra, polar ice cap, and desert.

Rain forests are characterized by high rainfall, with definitions setting minimum normal annual rainfall between 1,750 and 2,000 mm (69 and 79 in). A tropical savanna is a grassland biome located in semi-arid to semi-humid climate regions of subtropical and tropical latitudes, with rainfall between 750 and 1,270 mm (30 and 50 in) a year; savannas are widespread in Africa, and are also found in India, the northern parts of South America, Malaysia, and Australia. The humid subtropical climate zone is where winter rainfall (and sometimes snowfall) is associated with large storms that the westerlies steer from west to east; most summer rainfall occurs during thunderstorms and from occasional tropical cyclones. Humid subtropical climates lie on the east sides of continents, roughly between latitudes 20° and 40° away from the equator. An oceanic (or maritime) climate is typically found along the west coasts at the middle latitudes of all the world's continents, bordering cool oceans, as well as in southeastern Australia, and is accompanied by plentiful precipitation year round. The Mediterranean climate regime resembles the climate of the lands in the Mediterranean Basin, parts of western North America, parts of Western and South Australia, southwestern South Africa and parts of central Chile; it is characterized by hot, dry summers and cool, wet winters. A steppe is a dry grassland. Subarctic climates are cold, with continuous permafrost and little precipitation.

Effect on agriculture
Precipitation, especially rain, has a dramatic effect on agriculture. All plants need at least some water to survive; therefore rain (being the most effective means of watering) is important to agriculture. While a regular rain pattern is usually vital to healthy plants, too much or too little rainfall can be harmful, even devastating, to crops. Drought can kill crops and increase erosion, while overly wet weather can cause harmful fungus growth. Plants need varying amounts of rainfall to survive: certain cacti require small amounts of water, while tropical plants may need up to hundreds of inches of rain per year. In areas with wet and dry seasons, soil nutrients diminish and erosion increases during the wet season.

Changes due to global warming
Increasing temperatures tend to increase evaporation, which leads to more precipitation. Precipitation has generally increased over land north of 30°N from 1900 to 2005, but has declined over the tropics since the 1970s. Globally there has been no statistically significant overall trend in precipitation over the past century, although trends have varied widely by region and over time. Eastern portions of North and South America, northern Europe, and northern and central Asia have become wetter, while the Sahel, the Mediterranean, southern Africa and parts of southern Asia have become drier. There has been an increase in the number of heavy precipitation events over many areas during the past century, as well as an increase since the 1970s in the prevalence of droughts, especially in the tropics and subtropics. Changes in precipitation and evaporation over the oceans are suggested by the decreased salinity of mid- and high-latitude waters (implying more precipitation), along with increased salinity in lower latitudes (implying less precipitation, more evaporation, or both). Over the contiguous United States, total annual precipitation increased at an average rate of 6.1% per century since 1900, with the greatest increases within the East North Central climate region (11.6% per century) and the South (11.1%). Hawaii was the only region to show a decrease (−9.25%).

Changes due to urban heat island
The urban heat island warms cities 0.6 to 5.6 °C (1.1 to 10.1 °F) above surrounding suburbs and rural areas. This extra heat leads to greater upward motion, which can induce additional shower and thunderstorm activity. Rainfall rates downwind of cities are increased between 48% and 116%.
Partly as a result of this warming, monthly rainfall is about 28% greater between 32 and 64 kilometres (20 and 40 mi) downwind of cities, compared with upwind. Some cities induce a total precipitation increase of 51%.

Forecasting
The Quantitative Precipitation Forecast (abbreviated QPF) is the expected amount of liquid precipitation accumulated over a specified time period over a specified area. A QPF will be specified when a measurable precipitation type reaching a minimum threshold is forecast for any hour during a QPF valid period. Precipitation forecasts tend to be bound by synoptic hours such as 0000, 0600, 1200 and 1800 GMT. Terrain is considered in QPFs by use of topography or based upon climatological precipitation patterns from observations with fine detail. Starting in the mid-to-late 1990s, QPFs were used within hydrologic forecast models to simulate impact on rivers throughout the United States. Forecast models show significant sensitivity to humidity levels within the planetary boundary layer (the lowest levels of the atmosphere), where humidity decreases with height. QPFs can be generated on a quantitative basis, forecasting amounts, or on a qualitative basis, forecasting the probability of a specific amount. Radar imagery forecasting techniques show higher skill than model forecasts within six to seven hours of the time of the radar image. The forecasts can be verified through use of rain gauge measurements, weather radar estimates, or a combination of both. Various skill scores can be determined to measure the value of the rainfall forecast.

See also
- List of meteorology topics
- Basic precipitation
- Mango showers, pre-monsoon showers in the Indian states of Karnataka and Kerala that help in the ripening of mangoes
- Sunshower, an unusual meteorological phenomenon in which rain falls while the sun is shining
- Wintry showers, an informal meteorological term for various mixtures of rain, freezing rain, sleet and snow
Microsoft Excel is a spreadsheet program that is part of the iconic Microsoft Office suite. With Microsoft Excel, you can store and work with large amounts of data. Excel provides you with various tools that you can use to handle and sort data. You can use functions, graphical charts, tables and some data analysis tools to make your data presentable and understandable. Microsoft Excel also provides comparison operators that can be combined with functions, allowing you to automate certain calculations. These operators include the equals operator (=), the less than operator (<), the greater than operator (>), the less than or equal to operator (<=), the greater than or equal to operator (>=) and finally the not equal to operator (<>).

We're going to be using MS Excel 2013 for this tutorial. The differences between Excel 2007, 2010 and 2013 are cosmetic, so you can use any one of the three you have access to. We're assuming that you're at least slightly familiar with MS Excel. Don't worry if you're not: Excel is pretty intuitive, and a basic course in Excel should be enough to get you started.

The Not Equal to <> Comparison Operator
In this tutorial, you'll learn about the not equal to comparison operator. Its symbol is "<>". If you pair it with the IF logical function, you can create all kinds of complex queries. But before we get to the not equal to operator, we'll take a quick look at the syntax of the IF function. This will help you grasp the concept of the not equal to operator better.

The IF Logical Function
The IF logical function is very useful. You will see it used a lot in a typical work environment. For example, you can create an Excel spreadsheet for your school to see how the students fared this semester: a student who scores 20 or less out of 50 fails, while a student who scores more than 20 passes.

The syntax for the IF function is:

IF(logical_test, [value_if_true], [value_if_false])

Explanation: The "logical_test" is where you provide the condition, using the comparison operators. For example: did the student score 20 marks or less? The "value_if_true" part lets you decide what happens if your condition is met; you can ask Excel to display "Student has passed" when it is. The "value_if_false" part lets you decide what happens if your condition fails; you can ask Excel to display "Student has failed" when the "logical_test" turns out to be false.

Example: We name column A "Marks" and column B "Status". We enter the formula =IF(A4<=20, "Failed", "Passed") in row 4 and fill it down the entire column B. Now, when we input the marks of a student in column A, we directly get the result in column B.

Marks | Status
10    | Failed
35    | Passed
19    | Failed

Let's see what happened here. Excel first checked whether the score of the student was less than or equal to (<=) 20. If it was, it printed "Failed" in the corresponding cell; if it was more than 20, it printed "Passed".

Quick Tip: Don't waste time typing the formula into each cell in your column! If you type a formula in a single cell, you can select that cell and drag the fill handle (the little "+" symbol that appears on the cell's outline) downwards. This applies the formula to multiple cells in your column.

Pairing the Not Equal to Comparison Operator with the IF logical function
The not equal to comparison operator is slightly tricky to use.
We'll show you a few ways in which you can use the operator with the IF function. The not equal to operator uses the greater than and less than signs together: "<>". The general syntax of the not equal to operator is:

=IF(cell <> value, result_1, result_2)

Let's continue with our earlier example. Suppose that you want students who have received exactly 20 out of 50 on their exams to be put on probation. You can assign that status using the not equal to comparison operator.

Marks   Status   Probation
10      Failed   No
35      Passed   No
19      Failed   No
20      Passed   Yes

We add a third column called "Probation" and paste this formula into all of its rows: =IF(A#<>20, "No", "Yes"), where # is the row number. The formula is a little tricky: if the value in column A is not equal to 20, the student is not on probation; if the value is equal to 20, he is on probation. We've also changed the formula for the "Status" column slightly. The new formula is =IF(A#<=19, "Failed", "Passed"). This slight alteration allows us to give a "Passed" status to any student who has scored exactly 20 on their exams.

Quick Tip: How do you use multiple IF functions together? In a real spreadsheet you will often chain several IF functions: you can write an IF function inside another (or inside several others). For example, if you wanted students who scored 40 or more to receive an "A", students who scored 30 or more a "B", students who scored more than 20 a "C", students who scored exactly 20 a "D", and students who scored less than 20 an "F", you would use the formula:

=IF(A1>=40, "A", IF(A1>=30, "B", IF(A1>20, "C", IF(A1<>20, "F", "D"))))

Marks   Status   Probation   Grade
10      Failed   No          F
35      Passed   No          B
19      Failed   No          F
20      Passed   Yes         D

Note that the not equal to operator appears only in the innermost branch of the grade formula; for the other branches, it's much simpler and faster to use the other comparison operators Excel provides. There are usually multiple ways to filter your data, and you can choose whichever method best suits your needs. As you can see, it has been pretty easy to use the not equal to comparison in Excel. We hope these examples give you a good feel for how to use it, and also for how to build complex IF conditionals. If you'd like to explore Excel further, feel free to take up a basic Excel course, or even a deep dive with an advanced course.
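A spreadsheet formula can be hard to eyeball, so it sometimes helps to mirror the logic in a short script. Here's a minimal Python sketch (our illustration, not anything Excel generates) of the status, probation and grade formulas above:

```python
def status(marks):
    # Mirrors =IF(A#<=19, "Failed", "Passed")
    return "Failed" if marks <= 19 else "Passed"

def probation(marks):
    # Mirrors =IF(A#<>20, "No", "Yes")
    return "No" if marks != 20 else "Yes"

def grade(marks):
    # Mirrors the nested formula
    # =IF(A1>=40,"A",IF(A1>=30,"B",IF(A1>20,"C",IF(A1<>20,"F","D"))))
    if marks >= 40:
        return "A"
    if marks >= 30:
        return "B"
    if marks > 20:
        return "C"
    return "F" if marks != 20 else "D"

for marks in (10, 35, 19, 20):
    print(marks, status(marks), probation(marks), grade(marks))
# 10 Failed No F
# 35 Passed No B
# 19 Failed No F
# 20 Passed Yes D
```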
Lewis dot structures (or just Lewis structures) were introduced in 1916 by the pioneering chemist Gilbert Lewis as a way of picturing chemical bonding in molecules. We draw Lewis structures to (1) discover the bonding arrangement of atoms, (2) discover whether there is any degeneracy of bonding (more on that later), and (3) figure out whether a given group of atoms might even bond together to form a molecule at all. In a Lewis structure, every atom is surrounded by dots that represent its valence-shell electrons. So nitrogen (N) would look like this: Sometimes we draw the electrons of different atoms with different colors or symbols so we can keep track of them, like this: OK, let's learn how to use Lewis structures. You might have seen the bonding of methane already in the section on covalent bonding. With Lewis structures, we take a trial-and-error approach to figuring out bonding patterns. The goal is to make sure that each atom is surrounded by eight dots (two for hydrogen), representing eight valence electrons, some shared in bonds. In this structure, the carbon atom shares one of its valence electrons with each hydrogen, and each hydrogen shares its single electron with carbon to make a compound with a complete valence shell. In the figure below, the central carbon is surrounded by a complete octet of eight electrons, and each hydrogen has its full capacity of two electrons through sharing. Two electrons between atoms indicate a single bond, which can be rewritten, as on the right, as a single bar. Ammonia is NH3. Nitrogen has five valence electrons, and each hydrogen brings one to the molecule. It's easy to see that those three electrons from the hydrogens could complete an octet on the nitrogen: 5 + 3 = 8. There's a twist in this molecule (a small one) that gives ammonia some startling properties, some of which are beyond the scope of these notes, but trust me, they're cool. The figure below shows how to construct the Lewis structure. Start with each atom surrounded by its valence electrons: 5 for N, 1 for H. I like to color each atom's electrons differently so I can keep track of them, but it's not absolutely necessary. Arrange the atoms by trial and error (and intuition that you'll develop with practice) to get a structure in which the sharing of electrons completes the valence shells of all atoms. That's the most likely way the molecule will bond. One of the pairs of electrons on the nitrogen of ammonia is not bonded. We call that a lone pair. It occupies its own stable orbital, as shown in the stick diagram on the right. Lewis structures tell us about the most likely bonding arrangement and bond types of molecules, but they tell us little about the structure, the 3D shape of the molecule. For that we'll need other skills. Water is a very important molecule, so it's important to understand its bonding, which in turn creates all of its other properties. The Lewis structure of water shows that the oxygen atom has two lone pairs. Those lone pairs, together with the large difference in electronegativity between oxygen and hydrogen, give water one of its most important properties, its strong polarity. In the rendering on the right below, you can see that water, which is actually a bent molecule (again, you wouldn't necessarily know this just from the Lewis structure), has a negative end and a positive end. More correctly, it has one end that is more negative than the other (called δ-) and one that is more positive (called δ+).
The unbonded electron pairs create a region of dense negative charge. And because oxygen holds the bonding electrons of the H atoms tightly to itself, the H atoms are essentially bare protons hanging off the oxygen. The polarity of water and its ability to hydrogen-bond give water some of the properties that are deeply intertwined with the chemistry of living things on Earth. It's for that reason that it's difficult for us to conceive of life on another planet without water – but you never know ... Lewis structures can show us when double and triple bonds are most likely, or perhaps the only kind of bonding that makes a molecule possible. Here are some Lewis structures that contain double and triple bonds (and indeed the real molecules do, too). The double bonds in carbon dioxide, CO2, are what make it a linear, non-polar molecule, and that structure, in turn, gives it most of its properties. CO2 solidifies, for example, at about -57˚C, and the liquid only exists if the gas is placed under about 5 atmospheres of pressure. The triple bond of nitrogen gas, N2, is very strong. Although our atmosphere is mostly nitrogen in the form of N2, most organisms on Earth can't use it in that form because they can't break that bond; they require other sources of the crucial element. Some Lewis structures lead to bonding that is ambiguous: a double bond might be placed between an atom and any one of two or more equivalent partners. Which one to choose? This is called a degeneracy, and it turns out that nature tends to pick a combination of the two bonds rather than either one alone. As an example, let's look at the Lewis structure of nitric acid, HNO3. First the atoms with their valence electrons: Now we can arrange the bonds in two ways. In both, all atoms have a full valence shell. Here they are: Take a minute and convince yourself that every atom (except the hydrogen) has an octet of electrons in its valence shell. Here are the two structures in stick form: So which one does nature pick? Well, it turns out that whenever we have two equivalent structures like this (we have two degenerate structures, or a degeneracy), nature picks a combination of both, and we're better off writing the two bonds as something more like 1-1/2 bonds, like this: Notice that the red oxygen is different from the other two. It's bound to a hydrogen, and the electronegativity difference makes this bond more ionic than covalent. The result is that the hydrogen can detach as a bare proton quite readily, leaving a NO3- ion behind. That's why HNO3 is an acid. Now let's see how a molecular ion, the carbonate ion, CO32-, can bond stably. When we work with ions, we begin with the usual number of valence electrons of the neutral atoms, in this case four for carbon and six for each of the three oxygens. But this is a 2- ion, so we'll add two electrons to the neutral mix to give it that net -2 charge. Those last two electrons can be filled in anywhere they're needed to form a full valence shell. We start with the raw materials: Now put them together and use the two extra (red) electrons to fill in any gaps in order to form full valence shells for every atom. Finally, notice that this is another degenerate structure. There are two other equivalent places to put our double bond, like this: In this case, the reality is that each C-O bond is equal: stronger than a single bond, but weaker than a double bond. We might write the structure of CO32- like this: Here I've left off the lone pairs of the oxygen atoms.
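Counting valence electrons, including the extra electrons carried by an ion, is the routine first step in all of these examples. A tiny Python sketch (our own bookkeeping helper, with a hard-coded lookup for a few main-group elements) shows the arithmetic:

```python
# Valence electrons for a few main-group atoms (by group number).
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6, "S": 6, "P": 5, "Cl": 7}

def valence_electrons(atoms, charge=0):
    """Total valence electrons for a molecule or ion.

    atoms: dict of element symbol -> count; charge: net charge
    (a 2- ion contributes two extra electrons, a 1+ removes one).
    """
    return sum(VALENCE[el] * n for el, n in atoms.items()) - charge

print(valence_electrons({"C": 1, "H": 4}))             # CH4    -> 8
print(valence_electrons({"N": 1, "H": 3}))             # NH3    -> 8
print(valence_electrons({"C": 1, "O": 3}, charge=-2))  # CO3^2- -> 24
```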
There might be a context in which we'd want to show those, but most of the time it's quicker to omit them. Many molecules for which stable valence shells cannot be built from the number of electrons present on the neutral atoms that compose them can be made stable if electrons are lost or gained. These are molecular ions. Some important molecular ions are OH-, NH4+, SO42-, PO43-, CO32-, COO- and CN-. The sulfate ion, SO42-, is a very interesting exception to many of our assumptions about bonding. In fact, it caused a lot of argument in the chemistry community early on. At first glance, with sulfur and oxygen both holding six valence electrons, we might draw a Lewis structure like this: The problem is that the two extra charges needed to stabilize this molecule are localized on the sulfur atom. Nature tends to spread that extra charge out, sometimes at a cost that can contradict what we've already learned; in this case, it's the octet rule. In fact, SO42- tends to bond more like this: Now the extra charge is a little more spread out, and we can see that there's nothing special about our double-bond locations, which means resonance structures and even more spreading of charge. But the twelve electrons around the sulfur are troubling. Remember that sulfur has d-electrons, and some of those can be used as valence electrons. Overall this structure is more stable than the all singly-bonded one. Here are all of the resonance forms of SO42-, and they lead to an ion with four identical bonds that are somewhere between double and single in strength. It was a measurement of the S-O bond length in SO42- that led to a deeper investigation of the bonding. You shouldn't feel like you ought to have recognized this case. It took some Nobel-prize-winning chemists, including Linus Pauling, considerable time and significant argument to uncover the truth. It says a lot about the nature of the electron, if you think about it. The sulfate ion (above) is one exception to the octet rule. There are others. I'm not sure there's any point in memorizing such exceptions; better to know that they exist and be wary of them. Know some of the signs that an exception might exist. Here are a few examples. Phosphorus pentachloride (PCl5) is an exception to the octet rule. You can see the Lewis structure of PCl3 in the practice problems below. Because a chlorine atom only needs one electron to complete its valence shell, it shares one and only one electron with phosphorus, so in PCl5, phosphorus is surrounded by a total of ten electrons. It does this by using its d-shell electrons. A more three-dimensional structure is shown on the right: three chlorines are in a plane (blue triangle), and the line containing the other two cuts through the center of that triangle, perpendicular to it. The arrangement is called a trigonal bipyramid. For similar reasons, sulfur can bind to six fluorine atoms with single bonds in an octahedral (square bipyramid) arrangement. Finally, we don't normally think of noble gases interacting with anything, let alone forming bonds. But it turns out that if the noble gas is big enough, like xenon (Xe), the d-orbitals can allow bonds to form. Xenon forms a couple of covalent compounds, one of which is XeF6. XeF6 looks just like SF6, but the central Xe is now surrounded by 14 electrons. Sometimes it's difficult to tell which of two possible Lewis structures of a compound represents the actual bonding of the molecule.
In those cases we resort to calculating what's called the formal charge of each atom. Formal charge is just a way of bookkeeping that helps us decide which of multiple Lewis structures is the likely true bonding arrangement of a covalent molecule. The sum of the formal charges, with a couple of extra rules, will help us decide which of multiple possible valid Lewis structures is likely to be the correct one. Here's how it's done. For each atom, take the number of valence electrons of the neutral atom, subtract the number of electrons in lone pairs, and subtract the number of bonds. The carbon in CH4 has four electrons as a neutral atom. It has no lone pairs, and it shares four bonds, so the formal charge is zero. Each hydrogen atom has one electron as a neutral atom, no lone pairs, and shares one bond, for a formal charge of zero. All atoms in the molecule have zero formal charge, the "happiest" situation for any molecule. In a second example, a carbonyl compound such as acetone, (CH3)2C=O, the central carbon has a formal charge of 4 (valence electrons) - 0 (lone pairs) - 4 (bonds) = 0. The oxygen has a formal charge of 6 - 4 - 2 = 0 (same ordering of terms). Each of the methyl (CH3) carbons has a formal charge of 4 - 0 - 4 = 0. Here is an example of a case where we can find two valid Lewis structures for a compound, fulminic acid (HCNO). We can use formal charges to decide which is most likely to be the actual arrangement of atoms. Here are the structures: Now let's calculate the formal charges of the lower structure, using double bonds: Note that the carbon has a formal charge of -1 and the nitrogen a charge of +1. The formal charges of the structure with the triple bond look like this: Here, the oxygen, the most electronegative element in the molecule, carries the negative charge, and the nitrogen retains its +1 charge. This structure is more likely to be the correct one, because the negative charge is on the most electronegative element of C, N and O. The Lewis structure most likely to represent the actual bonding arrangement is the one in which all formal charges are closest to zero. If two structures have similar formal charges, the one in which a negative charge lies on the most electronegative atom wins. Given that sulfur is in the third row of the periodic table, and can thus accommodate more than an octet of electrons when bonded in a molecule, there are a number of possible Lewis structures for SO3 that might work. Fortunately, experimental evidence confirms that one of them is the actual structure, and formal-charge results are consistent with that structure, too. Here are the Lewis structure possibilities: Here are the formal charges on each atom for each bonding arrangement. The fully double-bonded structure (right) has the lowest formal charges on each atom. Even though sulfur has a bonded valence of 12 electrons, this is still the most stable structure. A few elements in the third row of the periodic table, plus a great many elements with d-electrons, are capable of this. This bonding arrangement of SO3 is confirmed by experiment, which shows that its structure is trigonal-planar (a flat molecule with oxygen atoms at the vertices of an equilateral triangle). VSEPR theory predicts that the 1- and 2-double-bonded structures would be a little different (and thus distinguishable). While formal-charge assignment isn't too useful in other areas of chemistry, it can be really enlightening when questions about bonding like this need to be resolved.
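Since formal charge is simple arithmetic (valence electrons minus lone-pair electrons minus bonds), it's easy to tabulate. Here is a minimal Python sketch (our illustration, not from the original notes) applied to the two fulminic acid structures above:

```python
def formal_charge(valence, lone_pair_electrons, bonds):
    # FC = (valence electrons of the neutral atom)
    #      - (electrons in lone pairs) - (number of bonds)
    return valence - lone_pair_electrons - bonds

# Triple-bonded structure of HCNO: H-C≡N(+)-O(-)
print(formal_charge(4, 0, 4))  # C: 0   (H bond + triple bond = 4 bonds)
print(formal_charge(5, 0, 4))  # N: +1
print(formal_charge(6, 6, 1))  # O: -1  (3 lone pairs, 1 bond)

# Double-bonded structure: H-C=N=O, with a lone pair on carbon
print(formal_charge(4, 2, 3))  # C: -1
print(formal_charge(5, 0, 4))  # N: +1
print(formal_charge(6, 4, 2))  # O: 0
```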
4.OA.1: Interpret a multiplication equation as a comparison, e.g., interpret 35 = 5 × 7 as a statement that 35 is 5 groups of 7 and 7 groups of 5 (Commutative property). Represent verbal statements of multiplicative comparisons as multiplication equations. 4.OA.2: Multiply or divide to solve word problems involving multiplicative comparison (e.g., by using drawings and equations with a symbol for the unknown number to represent the problem or missing numbers in an array). Distinguish multiplicative comparison from additive comparison. 4.OA.3: Solve multistep word problems posed with whole numbers and having whole-number answers using the four operations, including problems in which remainders must be interpreted. Represent these problems using equations with a letter standing for the unknown quantity. Assess the reasonableness of answers using mental computation and estimation strategies including rounding. 4.OA.5: Generate a number, shape pattern, table, t-chart, or input/output function that follows a given rule. Identify apparent features of the pattern that were not explicit in the rule itself. Be able to express the pattern in algebraic terms. 4.OA.6: Extend patterns that use addition, subtraction, multiplication, division or symbols, up to 10 terms, represented by models (function machines), tables, sequences, or in problem situations. 4.NBT.1: Recognize that in a multi-digit whole number, a digit in one place represents ten times what it represents in the place to its right. 4.NBT.2: Read and write multi-digit whole numbers using base-ten numerals, number names, and expanded form. Compare two multi-digit numbers based on the value of the digits in each place, using >, =, and < symbols to record the results of comparisons. 4.NBT.3: Use place value understanding to round multi-digit whole numbers to any place using a variety of estimation methods; be able to describe, compare, and contrast solutions. 4.NBT.4: Fluently add and subtract multi-digit whole numbers using any algorithm. Verify the reasonableness of the results. 4.NBT.5: Multiply a whole number of up to four digits by a one-digit whole number, and multiply two two-digit numbers, using strategies based on place value and the properties of operations. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models. 4.NBT.6: Find whole-number quotients and remainders with up to four-digit dividends and one-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models. 4.NF.1: Explain why a fraction a/b is equivalent to a fraction (n x a)/(n x b) by using visual fraction models, with attention to how the number and size of the parts differ even though the two fractions themselves are the same size. Use this principle to recognize and generate equivalent fractions. 4.NF.2: Compare two fractions with different numerators and different denominators (e.g., by creating common denominators or numerators, or by comparing to a benchmark fraction such as 1/2). Recognize that comparisons are valid only when the two fractions refer to the same whole. Record the results of comparisons with symbols >, =, or <, and justify the conclusions (e.g., by using a visual fraction model). 4.NF.3: Understand a fraction a/b with a > 1 as a sum of fractions 1/b. 
4.NF.3.a: Understand addition and subtraction of fractions as joining and separating parts referring to the same whole. 4.NF.3.b: Decompose a fraction into a sum of fractions with the same denominator in more than one way, recording each decomposition by an equation. Justify decompositions (e.g., by using a visual fraction model). 4.NF.3.c: Add and subtract mixed numbers with like denominators (e.g., by replacing each mixed number with an equivalent fraction, and/or by using properties of operations and the relationship between addition and subtraction). 4.NF.3.d: Solve word problems involving addition and subtraction of fractions referring to the same whole and having like denominators (e.g., by using visual fraction models and equations to represent the problem). 4.NF.4: Apply and extend previous understandings of multiplication to multiply a fraction by a whole number. 4.NF.4.a: Understand a fraction a/b as a multiple of 1/b. 4.NF.4.b: Understand a multiple of a/b as a multiple of 1/b, and use this understanding to multiply a fraction by a whole number. 4.NF.6: Use decimal notation for fractions with denominators 10 or 100. 4.NF.7: Compare two decimals to hundredths by reasoning about their size. Recognize that comparisons are valid only when the two decimals refer to the same whole. Record the results of comparisons with the symbols >, =, or <, and justify the conclusions (e.g., by using a visual model). 4.MD.1: Know relative sizes of measurement units within one system of units including km, m, cm; kg, g; lb, oz.; l, ml; hr, min, sec. Within a single system of measurement, express measurements in a larger unit in terms of a smaller unit. Record measurement equivalents in a two-column table. 4.MD.2: Use the four operations to solve word problems involving distances, intervals of time, liquid volumes, masses of objects, and money, including problems involving simple fractions or decimals, and problems that require expressing measurements given in a larger unit in terms of a smaller unit. Represent measurement quantities using diagrams such as number line diagrams that feature a measurement scale. 4.MD.3: Apply the area and perimeter formulas for rectangles in real-world and mathematical problems. 4.MD.6: Explain the classification of data from real-world problems shown in graphical representations including the use of terms range and mode with a given set of data. 4.MD.9: Recognize angle measure as additive. When an angle is divided into non-overlapping parts, the angle measure of the whole is the sum of the angle measures of the parts. Solve addition and subtraction problems to find unknown angles on a diagram in real-world and mathematical problems (e.g., by using an equation with a symbol for the unknown angle measure). 4.G.1: Draw points, lines, line segments, rays, angles (right, acute, obtuse), and perpendicular, parallel, and intersecting line segments. Identify these in two-dimensional (plane) figures. 4.G.2: Classify two-dimensional (plane) figures based on the presence or absence of parallel or perpendicular lines, or the presence or absence of angles of a specified size. Recognize right triangles as a category, and identify right triangles. 4.G.3: Recognize a line of symmetry for a two-dimensional (plane) figure as a line across the figure such that the figure can be folded along the line into matching parts. Identify line-symmetric figures and draw lines of symmetry. Correlation last revised: 9/22/2020
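Several of the fraction standards above (4.NF.1 and 4.NF.2 in particular) can be checked mechanically. Here is a short, purely illustrative Python sketch using the standard library's fractions module; it is not part of the correlation document:

```python
from fractions import Fraction

# 4.NF.1: a/b is equivalent to (n x a)/(n x b)
a, b, n = 3, 4, 5
print(Fraction(a, b) == Fraction(n * a, n * b))  # True: 3/4 == 15/20

# 4.NF.2: compare fractions with different numerators and denominators,
# e.g., against the benchmark fraction 1/2
print(Fraction(3, 8) < Fraction(1, 2) < Fraction(5, 9))  # True
```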
The words "artificial intelligence" (AI) have been used to describe the workings of computers for decades, but the precise meaning has shifted over time. Today, AI describes efforts to teach computers to imitate a human's ability to solve problems and make connections based on insight, understanding and intuition.

What is artificial intelligence?

Artificial intelligence usually encompasses the growing body of cutting-edge work in technology that aims to train machines to accurately imitate, or in some cases exceed, the capabilities of humans. Older algorithms, when they grow commonplace, tend to be pushed out of the tent. For instance, transcribing human voices into words was once an active area of research for scientists exploring artificial intelligence. Now it is a common feature embedded in phones, cars and appliances, and it isn't described with the term as often. Today, AI is often applied to several areas of research:

- Machine vision: helping computers understand the position of objects in the world through lights and cameras.
- Machine learning (ML): the general problem of teaching computers about the world with a training set of examples.
- Natural language processing (NLP): making sense of knowledge encoded in human languages.
- Robotics: designing machines that can work with some degree of independence to assist with tasks, especially work that humans can't do because it may be repetitive, strenuous or dangerous.

There is a wide range of practical applicability to artificial intelligence work. Some chores are well understood, and the algorithms for solving them are already well developed and rendered in software. They may be far from perfect, but the application is well defined. Finding the best route for a trip, for instance, is now widely available via navigation applications in cars and on smartphones. Other areas are more philosophical. Science fiction authors have been writing about computers developing human-like attitudes and emotions for decades, and some AI researchers have been exploring this possibility. While machines are increasingly able to work autonomously, general questions of sentience, awareness or self-awareness remain open and without a definite answer. AI researchers often speak of a hierarchy of capability and awareness. The directed tasks at the bottom are often called "narrow AI" or "reactive AI". These algorithms can solve well-defined problems, sometimes without much direction from humans. Many of the applied AI packages fall into this category. The notion of "general AI" or "self-directed AI" applies to software that could think like a human and initiate plans outside of a well-defined framework. There are no good examples of this level of AI at this time, although some developers like to suggest that their tools are beginning to exhibit some of this independence. Beyond this is the idea of "super AI", a package that can outperform humans in reasoning and initiative. These are discussed largely hypothetically by advanced researchers and science fiction authors. In the last decade, many ideas from the AI laboratory have found homes in commercial products.
As the AI industry has emerged, many of the leading technology companies have assembled AI products through a mixture of acquisitions and internal development. These products offer a wide range of solutions, and many businesses are experimenting with using them to solve problems for themselves and their customers.

How are the biggest companies approaching AI?

Leading companies have invested heavily in AI and developed a wide range of products aimed at both developers and end users. Their product lines are increasingly diverse as the companies experiment with different tiers of solutions to a wide range of applied problems. Some are more polished and aimed at the casual computer user; others are aimed at programmers who will integrate the AI into their own software to enhance it. The largest companies all offer dozens of products now, and it's hard to summarize their increasingly varied options. IBM has long been one of the leaders in AI research. Watson, its AI-based competitor on the TV game show Jeopardy!, helped ignite the recent interest in AI when it beat humans in 2011, demonstrating how adept the software could be at handling general questions posed in human language. Since then, IBM has built a broad collection of applied AI algorithms under the Watson brand name that can automate decisions in a wide range of business applications like risk management, compliance, business workflow and devops. These solutions rely upon a mixture of natural language processing and machine learning to create models that can either make production decisions or watch for anomalies. In one case study of its applications, for instance, the IBM Safer Payments product prevented $115 million worth of credit card fraud. Microsoft's AI platform, to take another example, offers a wide range of algorithms, both as products and as services available through Azure. The company also targets machine learning and computer vision applications, and likes to highlight how its tools hunt for patterns inside extremely large data sets. Its Megatron-Turing Natural Language Generation model (MT-NLG), for instance, has 530 billion parameters to model the nuances of human communication. Microsoft is also working on helping business processes shift from being automated to becoming autonomous by adding more intelligence to handle decision-making. Its autonomous packages are, for instance, being applied both to the narrow problem of keeping assembly lines running smoothly and to the wider challenge of navigating drones. Google has developed a strong collection of machine learning and computer vision algorithms that it uses internally for projects like indexing the web, while also reselling the services through its cloud platform. It has pioneered some of the most popular open-source machine learning platforms, like TensorFlow, and has also built custom hardware for speeding up the training of models on large data sets. Google's Vertex AI product, for instance, automates much of the work of turning a data set into a working model that can then be deployed. The company also offers a number of pretrained models for common tasks, like optical character recognition or conversational AI, that might be used for an automated customer service agent. Amazon likewise uses a collection of AI routines internally in its retail website, while marketing the same backend tools to AWS users. Products like Personalize are optimized for offering customers personalized product recommendations.
Rekognition offers predeveloped machine vision algorithms for content moderation, facial recognition, and text detection and conversion. It also ships with a prebuilt collection of models of well-known celebrities, a useful tool for media companies. Developers who want to create and train their own models can turn to products like SageMaker, which automates much of the workload for business analysts and data scientists. Facebook also uses artificial intelligence to help manage its endless stream of images and text posts. Algorithms for computer vision classify uploaded images, and text algorithms analyze the words in status updates. While it maintains a strong research team, the company does not actively offer standalone products for others to use. It does share a number of open-source projects, like NeuralProphet, a framework for time-series forecasting. Additionally, Oracle is integrating some of the most popular open-source tools, like PyTorch and TensorFlow, into its data storage hierarchy to make it easier and faster to turn information stored in Oracle databases into working models. It also offers a collection of prebuilt AI tools with models for tackling common challenges like anomaly detection or natural language processing.

How are startups approaching AI?

New AI companies tend to be focused on one particular task, where applied algorithms and a determined focus will produce something transformative. A wide-reaching current challenge, for instance, is producing self-driving cars. Waymo, Pony AI, Cruise Automation and Argo are four major startups with significant funding that are building the software and sensor systems to allow cars to navigate themselves through the streets. The algorithms involve a mixture of machine learning, computer vision and planning. Many startups are applying similar algorithms to more limited or predictable domains, like warehouses or industrial plants. Companies like Nuro, Bright Machines and Fetch are just some of the many that want to automate warehouses and industrial spaces. Fetch, in particular, wants to apply machine vision and planning algorithms to take on repetitive tasks. A substantial number of startups are also targeting jobs that are either dangerous for humans or impossible for them to do. Hydromea, for example, is building autonomous underwater drones that can track submerged assets like oil rigs or mining tools. Another company, Solinus, makes robots for inspecting narrow pipes. Many startups are also working in digital domains, in part because the area is a natural habitat for algorithms: the data is already in digital form. There are dozens of companies, for instance, working to simplify and automate routine tasks that are part of the digital workflow for companies. This area, sometimes called robotic process automation (RPA), rarely involves physical robots because it works with digital paperwork. It is, however, a popular way for companies to integrate basic AI routines into their software stack. Good RPA platforms, for example, often use optical character recognition and natural language processing to make sense of uploaded forms and simplify the office workload. Many companies also depend upon open-source software projects with broad participation. Projects like TensorFlow or PyTorch are used throughout research and development organizations in universities and industrial laboratories.
Some projects, like DeepDetect, a tool for deep learning and decision-making, are also spawning companies that offer mixtures of support and services. There are also hundreds of effective and well-known open-source projects used by AI researchers. OpenCV, for instance, offers a large collection of computer vision algorithms that can be adapted and integrated with other stacks. It is used frequently in robotics, medical projects, security applications and many other tasks that rely upon understanding the world through a camera image or video.

Is there anything AI can't do?

There are some areas where AI finds more success than others. Statistical classification using machine learning is often quite accurate, but it is limited by the breadth of the training data. These algorithms often fail when they are asked to make decisions in new situations or after the environment has shifted substantially from the training corpus. Much of the success or failure depends upon how much precision is demanded. AI tends to be more successful when occasional mistakes are tolerable. If the users can filter out misclassifications or incorrect responses, AI algorithms are welcomed. For instance, many photo storage sites offer to apply facial recognition algorithms to sort photos by who appears in them. The results are good but not perfect, and users can tolerate the mistakes. The field is largely a statistical game, and it succeeds when judged on a percentage basis. A number of the most successful applications don't require especially clever or elaborate algorithms, but depend upon a large and well-curated dataset organized by tools that are now manageable. The problem once seemed impossible because of its scope, until large enough teams tackled it. Navigation and mapping applications like Waze use simple search algorithms to find the best path, but these apps could not succeed without a large, digitized model of the street layouts (a minimal sketch of such a search appears below). Natural language processing is also successful at making generalizations about the sentiment or basic meaning of a sentence, but it is frequently tripped up by neologisms, slang or nuance. As language changes, the algorithms can adapt, but only with targeted retraining, and they start to fail when the challenges fall outside a large training set. Robotics and autonomous cars can be quite successful in limited areas or controlled spaces, but they face trouble when new challenges or unexpected obstacles appear. For them, the political costs of failure can be significant, so developers are necessarily cautious about pushing the envelope. Indeed, determining whether an algorithm is a success or a failure often depends upon criteria that are politically determined. If the customers are happy enough with the responses, if the results are predictable enough to be useful, then the algorithm succeeds. As algorithms become taken for granted, they lose the appellation of AI. If the term is generally applied to the topics and goals that are just out of reach, if AI is always redefined to exclude the simple, well-understood solutions, then AI will always be moving toward the technological horizon. It may not be 100% successful at present, but when applied in specific cases, it can be tantalizingly close.
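To make the navigation example concrete, here is a minimal Python sketch of a Dijkstra-style shortest-path search over a toy road graph. This is our own illustration of the "simple search algorithm plus a big map" point, not code from any of the companies mentioned; real mapping apps add live traffic data, heuristics and vastly larger graphs:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a dict-of-dicts adjacency map."""
    queue = [(0, start, [start])]  # (cost so far, node, path)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None

# Toy road network: edge weights are travel times in minutes.
roads = {
    "A": {"B": 5, "C": 2},
    "B": {"A": 5, "D": 3},
    "C": {"A": 2, "D": 7},
    "D": {"B": 3, "C": 7},
}
print(shortest_path(roads, "A", "D"))  # (8, ['A', 'B', 'D'])
```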
Common questions and answers about synthetic aperture radar. Unlike the aperture in a camera, which opens to let in light, radar aperture is another term for the antenna on the spacecraft or aircraft. The radar antenna first transmits electromagnetic energy toward Earth and then receives the returning energy after it reflects off objects on the planet. In the NASA image below, the radar antenna is the rectangle at the Earth end of the 1978 Seasat satellite. The data collected by the radar antenna are then transmitted to another kind of antenna on Earth (such as the antennas of the ASF Satellite Tracking Ground Station) so they can be stored and processed. In general, the larger the antenna, the more unique information scientists can obtain about an object, and the more information, the better the image resolution. However, antennas in space are not large, so scientists use the spacecraft's motion, along with advanced signal-processing techniques, to simulate a larger antenna. Synthetic aperture radar (SAR) interferometry (InSAR) detects motion or elevation by comparing radar signals from two or more images of the same scene, taken at different times from the same vantage point in space. SAR interferometry is often used to detect surface changes (for use in seismology, for example) or to generate digital elevation maps. The InSAR image below shows deformation on Okmok, a volcano in the Aleutian Islands. Because the radar wavelength is much longer than the particles in a cloud, such as droplets, a signal traveling through a cloud is mostly unaffected by refraction at the boundaries of the different media. In microwave remote sensing, scientists measure the time and magnitude of the signal backscattered from the ground to the radar antenna. The magnitude of the signal defines the brightness of a given pixel in the image; the resulting image is grayscale. Scientists sometimes colorize SAR images to highlight certain data or features. The interpretation of synthetic aperture radar (SAR) images is not straightforward, in part because of the non-intuitive, side-looking geometry. Here are some general rules of thumb:

- Regions of calm water and other smooth surfaces appear black (the incident radar reflects away from the spacecraft).
- Rough surfaces appear brighter, as they reflect the radar in all directions, and more of the energy is scattered back to the antenna. Rough surfaces backscatter even more brightly when they are wet.
- Slopes lead to geometric distortions. Steeper angles lead to more extreme layover, in which the signals from the tops of mountains or other tall objects "lay over" on top of other signals, effectively creating foreshortening. Mountaintops always appear to tip towards the sensor.
- Layover is highlighted by bright pixel values. The various combinations of polarization for the transmitted and received signals have a large impact on the backscattering of the signal. The right choice of polarization can help emphasize particular topographic features.
- In urban areas, it is at times challenging to determine the orbit direction. All buildings that are aligned perfectly perpendicular to the flight direction show very bright returns.
- Surface variations near the size of the radar's wavelength cause strong backscattering. If the wavelength is a few centimeters long, dirt clods and leaves might backscatter brightly.
- A longer wavelength would be more likely to scatter off boulders than dirt clods, or tree trunks rather than leaves.
- Wind-roughened water can backscatter brightly when the resulting waves are close in size to the incident radar's wavelength.
- Hills and other large-scale surface variations tend to appear bright on one side and dim on the other. (The side that appears bright was facing the SAR.)
- Due to the reflectivity and angular structure of buildings, bridges and other human-made objects, these targets tend to behave as corner reflectors and show up as bright spots in a SAR image. A particularly strong response (for example, from a corner reflector or ASF's receiving antenna) can look like a bright cross in a processed SAR image.

In ASF's full-resolution synthetic aperture radar (SAR) images, objects as small as about 30 meters wide can be distinguished. Some of the smaller items scientists have spotted are ships and their wakes. When the SAR happens to be aligned at a certain angle, long thin objects such as roads, or even the Alaskan oil pipeline, can also be seen. Objects much smaller than the resolution can still be observable as bright point targets; they only need to be well aligned with the look direction of the SAR sensor. As the spacecraft moves along in its orbit, the radar antenna transmits pulses very rapidly in order to obtain many backscattered radar responses from a particular object. The SAR processor could use all of these responses to obtain the object's radar cross-section (how brightly the object backscattered the incoming radar), but the result often contains quite a bit of speckle. Generally considered to be noise, speckle can be caused by an object that is a very strong reflector at a particular alignment between itself and the spacecraft, or by the combined effect of various responses all within one grid cell. To reduce speckle, the data are sometimes processed in sections, called looks, that are later combined. The more looks used to process an image, the less speckle; however, resolution is reduced and information is lost in the process. Several research groups are developing and improving algorithms to reduce speckle while preserving as much accurate information as possible. Noise is defined as random or regular interfering effects that degrade the data's information-bearing quality. Speckle is a scattering phenomenon that arises because the resolution of the sensor is not sufficient to resolve individual scatterers. Physically speaking, speckle is not noise, as the same imaging configuration leads to the identical speckle pattern. Speckle is removed by multi-looking (see "What is a 'look'?" above, and the sketch after this section). After the radar sends its microwave signal toward a target, the target reflects part of the signal back to the radar antenna. That reflection is called backscatter. Various properties of the target affect how much it backscatters the signal.

- PALSAR (Faraday rotation can be a factor.)
- RADARSAT-1 (The most suitable RADARSAT-1 data for InSAR were acquired during and after the Modified Antarctic Mapping Mission in the fall of 2000.)

IfSAR is another term for InSAR. InSAR is the more common term, particularly for satellite-borne sensors; IfSAR has been used more by the military and/or for airborne sensors. Layover is a type of distortion in a synthetic aperture radar (SAR) image. The radar is pointed to the side (side-looking) for imaging.
Radar signals that return to the spacecraft from a mountaintop arrive earlier than, or at the same time as, the signal from the foot of the mountain, seeming to indicate that the mountaintop and the foot of the mountain are in nearly the same place; the mountaintop may even appear "before" the foot. In a SAR image with layover, the mountains look as if they have "fallen over" towards the sensor. Where features are shifted from their actual locations, the resulting geolocations are incorrect. This effect can be removed by the technique of terrain correction (see "What is terrain correction?" below). As with shadows from sunlight, shadows in SAR images appear behind vertical objects. Mountains may appear to have black shadows behind them, depending on the steepness of the slope. The shadows appear black because no radar signals return from there. Radiometric correction involves removing the misleading influence of topography on backscatter values. For example, the correction eliminates bright backscatter from a steep slope, leaving only the backscatter that reveals surface characteristics such as vegetation and soil moisture. Terrain correction is the process of correcting geometric distortions that lead to geolocation errors. The distortions are induced by side-looking (rather than straight-down, or nadir) imaging, and compounded by rugged terrain. Terrain correction moves image pixels into the proper spatial relationship with each other: mountains that look as if they have "fallen over" towards the sensor are corrected in their shape and geolocation. Most digital elevation models (DEMs) are geoid-based and require a correction before they can be used for terrain correction. The DEM included in an ASF radiometrically terrain corrected (RTC) product file was converted from the source DEM's orthometric height to ellipsoid height using the ASF MapReady geoid_adjust tool. This tool applies a geoid correction so that the resulting DEM relates to the ellipsoid. An online tool is available that computes the height of the geoid above the WGS84 ellipsoid and shows the amount of correction that was applied to the source DEM used in creating an RTC product. Orthorectification corrects geometric distortions in imagery, just as terrain correction does (see "What is terrain correction?" above). The term 'orthorectification' is used more often in association with aerial and optical imagery; 'terrain correction' generally refers to SAR imagery. A georeferenced image has the locations of the four corners of the image and the information needed to put the data into a projection. Geocoded data is already projected: each point in the image is associated with a geographic coordinate.

- C-band (~5.3 GHz): ERS-1, ERS-2, RADARSAT-1, Sentinel-1. A variety of applications, particularly sea ice, ocean winds and glaciers.
- L-band (~1.2 GHz): PALSAR, UAVSAR, AIRSAR, JERS-1, Seasat. Provides vegetation penetration; applications include sea ice, tropical forest mapping and soil moisture; subject to ionospheric effects.
- P-band (~0.4 GHz): some products of UAVSAR. Greatest penetration depth through vegetation and into soil; ideal for soil moisture and biomass; difficult to operate from space due to ionospheric effects.
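To make the idea of looks concrete, here is a minimal NumPy sketch (our illustration, not ASF processing code). Speckle is modeled as multiplicative exponential noise on a uniform scene; averaging more looks suppresses the fluctuation roughly as 1/sqrt(number of looks), at the cost of resolution:

```python
import numpy as np

rng = np.random.default_rng(0)

def multilook(intensity, n_looks):
    """Average n_looks consecutive samples (simplified 1-D view)."""
    trimmed = intensity[: len(intensity) // n_looks * n_looks]
    return trimmed.reshape(-1, n_looks).mean(axis=1)

true_backscatter = 1.0  # uniform scene
# Single-look intensity: fully developed speckle has an exponential
# distribution, so the standard deviation equals the mean.
single_look = true_backscatter * rng.exponential(1.0, size=4096)

for n in (1, 4, 16):
    ml = multilook(single_look, n)
    # The coefficient of variation drops roughly as 1/sqrt(n_looks):
    # about 1.0, 0.5 and 0.25 for 1, 4 and 16 looks.
    print(n, "looks: std/mean =", ml.std() / ml.mean())
```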
Gross Domestic Product (GDP) is the market value of all goods and services produced within a country in a given period. GDP can be thought of as the total value added by every business in an economy, and it is also an indicator of a country's standard of living. GDP is most often used to compare a country's economy from year to year: if a country's year-on-year GDP is up 5%, its economy has grown by 5% over the previous year. GDP was first developed by Simon Smith Kuznets for a US Congress report in 1934. Kuznets was a Russian-American economist who worked at the Wharton School of the University of Pennsylvania. GDP is a very comprehensive economic statistic, and it is watched closely. In the United States it is used by the White House and Congress to prepare the federal budget, and by Wall Street as an indicator of economic activity. The business community also uses GDP when preparing forecasts of economic performance that provide the basis for production, investment and employment planning.

Real GDP vs. Nominal GDP

GDP is calculated using the market value of final goods and services produced in a country within a financial year. However, there is a difficulty in comparing GDP from year to year: market value is measured in money, and money may have inflated or deflated in the current year compared to the previous one. GDP that is not adjusted for inflation or deflation is called nominal GDP. To correct for price changes, GDP is deflated when prices have risen and inflated when prices have fallen; GDP that has been adjusted this way to reflect changes in the price level is called real GDP or adjusted GDP.

Measuring GDP

Economists have three ways to measure GDP:
1) The expenditure approach
2) The income approach
3) The product approach

The expenditure approach

Among these three methods, the expenditure method is considered the most basic. It calculates GDP by adding up four types of expenditure. The formula is:

Y = C + I + G + NX, where NX = X − M

Y = Gross Domestic Product
C = Household consumption
I = Investment
G = Government spending
NX = Net exports
X = Exports of goods and services
M = Imports of goods and services

Consumption consists of private households' final consumption expenditure, and it is the largest component of the economy. Consumption is calculated by adding up durable goods, non-durable goods and services: for example, food, rent, clothes, medicines, petrol, etc. Buying a new house is not included in consumption. Investment includes new purchases for a business but does not include exchanges of existing assets. For example, buying new machinery, software or equipment for a business is investment. A private household buying a new house also goes under this category. Investment in GDP does not include financial products such as bonds and stocks; these are classified as 'saving' to avoid double-counting. Government spending is the government's expenditure. It includes all the final goods and...
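As a quick worked example (with entirely made-up, illustrative numbers), the following Python sketch computes GDP by the expenditure approach and then converts nominal GDP to real GDP with a price deflator:

```python
def gdp_expenditure(c, i, g, x, m):
    """Expenditure approach: Y = C + I + G + (X - M)."""
    return c + i + g + (x - m)

# Hypothetical figures in billions of dollars.
nominal_gdp = gdp_expenditure(c=14_000, i=3_600, g=3_800, x=2_500, m=3_100)
print(nominal_gdp)       # 20800

# Real GDP: deflate nominal GDP by a price index (base year = 100).
deflator = 104.0         # prices rose 4% since the base year
real_gdp = nominal_gdp / deflator * 100
print(round(real_gdp))   # 20000
```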
Illustration of a novel room-temperature process to remove carbon dioxide (CO2) by converting the molecule into carbon monoxide (CO). Instead of using heat, the nanoscale method relies on the energy from surface plasmons (violet hue) that are excited when a beam of electrons (vertical beam) strikes aluminum nanoparticles resting on graphite, a crystalline form of carbon. In the presence of the graphite, aided by the energy derived from the plasmons, carbon dioxide molecules (black dot bonded to two red dots) are converted to carbon monoxide (black dot bonded to one red dot). The hole under the violet sphere represents the graphite etched away during the chemical reaction CO2 + C = 2CO.

The new method could potentially reduce carbon dioxide emission into the atmosphere and slash the costs of chemical manufacturing. Researchers at the National Institute of Standards and Technology (NIST) and their colleagues have demonstrated a room-temperature method that could significantly reduce carbon dioxide levels in fossil-fuel power plant exhaust, one of the main sources of carbon emissions in the atmosphere. Although the researchers demonstrated this method in a small-scale, highly controlled environment with dimensions of just nanometers (billionths of a meter), they have already come up with concepts for scaling up the method and making it practical for real-world applications. In addition to offering a potential new way of mitigating the effects of climate change, the chemical process employed by the scientists could also reduce costs and energy requirements for producing liquid hydrocarbons and other chemicals used by industry. That's because the method's byproducts include the building blocks for synthesizing methane, ethanol and other carbon-based compounds used in industrial processing. The team tapped a novel energy source from the nanoworld to trigger a run-of-the-mill chemical reaction that eliminates carbon dioxide. In this reaction, solid carbon latches onto one of the oxygen atoms in carbon dioxide gas, reducing it to carbon monoxide. The conversion normally requires significant amounts of energy in the form of high heat: a temperature of at least 700 degrees Celsius, hot enough to melt aluminum at normal atmospheric pressure. Instead of heat, the team relied on the energy harvested from traveling waves of electrons, known as localized surface plasmons (LSPs), which surf on individual aluminum nanoparticles. The team triggered the LSP oscillations by exciting the nanoparticles with an electron beam of adjustable diameter. A narrow beam, about a nanometer in diameter, bombarded individual aluminum nanoparticles, while a beam about a thousand times wider generated LSPs among a large set of the nanoparticles. In the team's experiment, the aluminum nanoparticles were deposited on a layer of graphite, a form of carbon. This allowed the nanoparticles to transfer the LSP energy to the graphite. In the presence of carbon dioxide gas, which the team injected into the system, the graphite served the role of plucking individual oxygen atoms from carbon dioxide, reducing it to carbon monoxide. The aluminum nanoparticles were kept at room temperature. In this way, the team accomplished a major feat: getting rid of the carbon dioxide without the need for a source of high heat. Previous methods of removing carbon dioxide have had limited success because the techniques have required high temperature or pressure, employed costly precious metals, or had poor efficiency.
In contrast, the LSP method not only saves energy but uses aluminum, a cheap and abundant metal. Although the LSP reaction generates a poisonous gas, carbon monoxide, that gas readily combines with hydrogen to produce essential hydrocarbon compounds, such as methane and ethanol, that are often used in industry, said NIST researcher Renu Sharma. She and her colleagues, including scientists from the University of Maryland in College Park and DENSsolutions in Delft, the Netherlands, reported their findings in Nature Materials. "We showed for the first time that this carbon dioxide reaction, which otherwise will only happen at 700 degrees C or higher, can be triggered using LSPs at room temperature," said researcher Canhui Wang of NIST and the University of Maryland. The researchers chose an electron beam to excite the LSPs because the beam can also be used to image structures in the system as small as a few billionths of a meter. This enabled the team to estimate how much carbon dioxide had been removed. They studied the system using a transmission electron microscope (TEM). Because both the concentration of carbon dioxide and the reaction volume of the experiment were so small, the team had to take special steps to directly measure the amount of carbon monoxide generated. They did so by coupling a specially modified gas cell holder from the TEM to a gas chromatograph mass spectrometer, allowing the team to measure parts-per-million concentrations of carbon monoxide. Sharma and her colleagues also used the images produced by the electron beam to measure the amount of graphite that was etched away during the experiment, a proxy for how much carbon dioxide had been removed. They found that the ratio of carbon monoxide to carbon dioxide measured at the outlet of the gas cell holder increased linearly with the amount of carbon removed by etching. Imaging with the electron beam also confirmed that most of the carbon etching (a proxy for carbon dioxide reduction) occurred near the aluminum nanoparticles. Additional studies revealed that when the aluminum nanoparticles were absent from the experiment, only about one-seventh as much carbon was etched. Limited by the size of the electron beam, the team's experimental system was small, only about 15 to 20 nanometers across (the size of a small virus). To scale up the system so that it could remove carbon dioxide from the exhaust of a commercial power plant, a light beam may be a better choice than an electron beam to excite the LSPs, Wang said. Sharma proposes that a transparent enclosure containing loosely packed carbon and aluminum nanoparticles could be placed over the smokestack of a power plant. An array of light beams impinging upon the grid would activate the LSPs. When the exhaust passes through the device, the light-activated LSPs in the nanoparticles would provide the energy to remove carbon dioxide. The aluminum nanoparticles, which are commercially available, should be evenly distributed to maximize contact with the carbon source and the incoming carbon dioxide, the team noted.
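For reference, the reaction at the center of the experiment is the reverse Boudouard reaction, which is strongly endothermic; the commonly tabulated standard enthalpy (a textbook value we add here for context, not a figure from the NIST paper) is roughly

$$\mathrm{CO_2(g)} + \mathrm{C(s)} \longrightarrow 2\,\mathrm{CO(g)}, \qquad \Delta H^{\circ}_{298} \approx +172\ \mathrm{kJ\,mol^{-1}},$$

which is why the conversion normally demands high heat, and why supplying that energy from room-temperature plasmon excitation instead is notable.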
Light pollution, satellite trains, and radio-frequency interference. Encroaching civilization. All of these pose threats to ground-based astronomy. But did anyone ever think that global climate change could wreak havoc on observatories? It turns out that the answer is "yes".

We are all familiar with predictions about global climate change. It will make hot places hotter and bring wilder weather to places all over the planet. Economically, its effects are already making a dent in world trade and are changing the living conditions of millions of people. And it will inevitably change astronomy and the places where astronomers do their work.

Observatories need clean, dry air

Observations from ground-based telescopes are incredibly sensitive to local atmospheric conditions. Most observatories are located well above sea level. Less atmosphere to look through means better astronomical data. And there are other factors too. For example, the observatories on the Big Island of Hawaii sit on top of a 4,000-meter volcano. Infrared-sensitive instruments, like those at the Subaru and Gemini observatories, work very well there. This is because there is very little water vapor in the air at high altitudes, so near-infrared light can get through to the telescopes. Many telescopes are also built in deserts, which have fewer cloudy nights and lower water vapor content. Those are observation sites that are much more sensitive to climate change. Wilder conditions will adversely affect observatories long before the end of the useful life of their instruments.

Back when these places were planned and built, selection committees only looked at short-term atmospheric analyses (like five years or so of weather data). They also used older climate models to project future conditions at the sites. As astronomy faces the challenges of global climate change, it looks like it will have to improve its site selection criteria and look to longer-term climate predictions. That is the conclusion reached by a team of researchers, led by Caroline Haslebacher of the University of Bern and the National Center of Competence in Research (NCCR) PlanetS. They analyzed environmental conditions at various sites around the world. The group also discussed the site selection process for each facility. Team members recommend that planners use longer time frames and newer climate models to predict weather conditions at new locations.

Astronomy in the age of climate change

So how will things change for astronomy in the age of anthropogenic climate change? For one thing, global warming has the effect of putting more water into the atmosphere. To be specific, major astronomical observatories from Hawaii to the Canary Islands, Chile, Mexico, South Africa, and Australia will experience an increase in temperature and atmospheric water content by 2050. Those changes will affect observing time and the quality of the data astronomers get.

"Today's astronomical observatories are designed to function in today's site conditions and only have some room for adaptation," said Haslebacher, lead author of the study. "Potential consequences of weather conditions for telescopes therefore include an increased risk of condensation due to a higher dew point, or malfunctioning cooling systems, which can lead to more air turbulence in the dome of the telescope. It is likely that technology and observing practices at current facilities can be adapted to these conditions in the short term."
However, for future observatories, planners should use better atmospheric models in their site selection criteria. Haslebacher points out that improving the data is the key to avoiding the degradation of ground-based astronomy. "Although telescopes typically have a lifespan of several decades, site selection processes only take atmospheric conditions into account over a short period of time (typically the last five years), too short to capture long-term trends, let alone future changes caused by global warming," she said.

What's next for astronomy?

The failure of previous planners to take into account the effects of climate change was not simply an oversight. They had to plan with the information they had. Study co-author Marie-Estelle Demory (Wyss Academy, University of Bern, Switzerland) notes that state-of-the-art climate models are incredibly important for future observing sites. "Thanks to the higher resolution of global climate models developed through the Horizon 2020 SPRING project, we were able to examine conditions at various locations around the world with high fidelity," she said. "This is something we couldn't do with conventional models. These models are valuable tools for the work we do at the Wyss Academy."

Global climate change is not going to go away. It's something we'll all have to deal with for decades to come. For astronomers, it is another challenge to face. The silver lining is that the data is there to help, according to Haslebacher. "This now allows us to say with certainty that anthropogenic climate change must be taken into account in site selection for next-generation telescopes and in the construction and maintenance of astronomical facilities," she said.

This article was originally published on Universe Today by Carolyn Collins Petersen.
Basic Algebra/Polynomials/Adding and Subtracting Polynomials

Polynomial: A mathematical sentence with "many terms" (the literal English translation of polynomial). Terms are separated by either a plus (+) or a minus (-) sign. There will always be one more term than there are plus (+) or minus (-) signs. Also, the number of terms will (generally speaking) be one higher than the lead exponent. (In what follows we write ^ for exponentiation: x^2 means "x squared.")

EX: A quadratic function has a lead exponent of 2, but generally has three terms (ax^2 + bx + c; lead exponent = 2, # of + (or -) signs = 2, # of terms = 3)

Like Terms: Terms in a polynomial that have the same power of the variable.

EX: 3x^2 and 2x^2 are like terms, but 3x^2 and 4x^3 are not!

PROPERTIES TO REMEMBER: If we see a variable standing alone (it has no coefficient, no number next to it), then we assume that there is an invisible one (1) standing there: x^2 = 1x^2. We are extremely lazy in math and do not like to write numbers that we feel are unnecessary. This is one of the cases in which we do not write what is actually there, but we always remember it is there. Another case is with fractions and whole numbers. The DENOMINATOR of every whole number is a one (1), but we do not write this one (1) because we do not feel the need. We always remember it is there, though. 4 = 4/1 [READ: 4 over 1]

There are many, many types of polynomials in the world of mathematics, and they are classified by the power (or exponent) of their leading term.

Some Common Functions

1) f(x) = ax + b (more commonly seen as y = mx + b). The leading term has an exponent of one (1), and this is called a LINEAR FUNCTION because it creates a line when graphed. Because there are two terms, this function is called a binomial, or two-termed, function.

2) f(x) = ax^2 + bx + c. The leading term has an exponent of two (2), and this is called a QUADRATIC FUNCTION because the first x is squared and squares are QUADrilaterals. This function generally has three terms and is therefore called a trinomial. A QUADRATIC FUNCTION has amazing properties that span years of mathematical studies. Since this is the first polynomial to have more than two terms, it is the first polynomial able to be factored. However, there are special cases in which ax^2 + bx + c cannot be factored.

3) f(x) = ax^3 + bx^2 + cx + d. The leading term has an exponent of three (3), and this is called a CUBIC FUNCTION because the first x is cubed (raised to the third power). This function generally has four terms and will always be able to factor out at least one term of the form (x - h) [where h is any number].

There are an infinite number of polynomials, and each one has amazing features unique to that function. However, there are a few universal traits of all functions. Every function with an even lead exponent (ax^2, ax^4, etc.) has a chance of not being factorable. Every function with an odd lead exponent (ax, ax^3, etc.) will be able to factor out AT LEAST ONE term of the form (x - h) [where h is any number].

As with regular numbers, we can add and subtract polynomials. However, instead of only worrying about which numbers have an x and which numbers do not, we also have to keep in mind that the exponents have to be the same in order for us to add and subtract terms.

EX: Add (4x^3 + 3x + 1) + (-3x^3 + 2x^2 + 4)

Step 1: We have to match up our terms: (4x^3 + -3x^3) + (2x^2) + (3x) + (1 + 4)

Step 2: We combine the coefficients of the like terms: x^3 + 2x^2 + 3x + 5 <- We've solved the problem

(4x^3 + 3x + 1) + (-3x^3 + 2x^2 + 4) = x^3 + 2x^2 + 3x + 5

Subtracting polynomials is the same thing, except we add an extra step.
When we subtract polynomials, we use the distributive property first and multiply the second polynomial by negative one (-1). This changes all the signs of the second polynomial to the OPPOSITE of what they are. [NOTE: When we add a negative number, we actually subtract!!!]

EX: Subtract (3x^4 + 2x^2 + 2) - (x^4 + 6x^2 + 12x - 1)

Step 1: We distribute the negative one (-1) across the second polynomial, and our new polynomial reads: (-x^4 - 6x^2 - 12x + 1) <- Notice how the signs are all opposite of what we were given.

Step 2: We match up our terms: (3x^4 + -x^4) + (2x^2 + -6x^2) + (-12x) + (2 + 1)

Step 3: We combine the coefficients of the like terms: 2x^4 - 4x^2 - 12x + 3

(3x^4 + 2x^2 + 2) - (x^4 + 6x^2 + 12x - 1) = 2x^4 - 4x^2 - 12x + 3

Now we've successfully subtracted two polynomials.
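If it helps to see the bookkeeping spelled out, here is a small sketch in Python (our own illustration, not part of the original lesson) that stores a polynomial as a list of coefficients, with index i holding the coefficient of x^i, and combines like terms position by position:

def add_poly(p, q):
    """Add two polynomials stored as coefficient lists (index i = coefficient of x^i)."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))   # pad the shorter list with invisible zero terms
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]   # combine like terms

def sub_poly(p, q):
    """Subtract q from p: distribute -1 across q, then add."""
    return add_poly(p, [-c for c in q])

# (4x^3 + 3x + 1) + (-3x^3 + 2x^2 + 4)  ->  x^3 + 2x^2 + 3x + 5
print(add_poly([1, 3, 0, 4], [4, 0, 2, -3]))         # [5, 3, 2, 1]

# (3x^4 + 2x^2 + 2) - (x^4 + 6x^2 + 12x - 1)  ->  2x^4 - 4x^2 - 12x + 3
print(sub_poly([2, 0, 2, 0, 3], [-1, 12, 6, 0, 1]))  # [3, -12, -4, 0, 2]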
In mathematics, the term identity has several different important meanings:

- An identity is an equality relation A = B, such that A and B contain some variables and give the same result when the variables are substituted by any values (usually numbers). In other words, A = B is an identity if A and B define the same functions. This means that an identity is an equality between functions that are differently defined. For example, (x + y)^2 = x^2 + 2xy + y^2 and cos(x)^2 + sin(x)^2 = 1 are identities. Identities were sometimes indicated by the triple bar symbol ≡ instead of the equals sign =, but this is no longer the common usage.
- In algebra, an identity or identity element of a set S with a binary operation · is an element e that, when combined with any element x of S, produces that same x. That is, e·x = x·e = x for all x in S. An example of this is the identity matrix.
- The identity function from a set S to itself, often denoted id or id_S, is the function which maps every element to itself. In other words, id(x) = x for all x in S. This function serves as the identity element in the set of all functions from S to itself with respect to function composition.

Identity relation

A common example of the first meaning is the trigonometric identity cos(x)^2 + sin(x)^2 = 1, which is true for all complex values of x (since the complex numbers are the domain of sin and cos), as opposed to an equation such as cos(x) = 1, which is true only for some values of x, not all. For example, the latter equation is true when x = 0 and false when x = 2. See also list of mathematical identities.

Identity element

The concepts of "additive identity" and "multiplicative identity" are central to the Peano axioms. The number 0 is the "additive identity" for integers, real numbers, and complex numbers. For the real numbers, a + 0 = 0 + a = a for all a. Similarly, the number 1 is the "multiplicative identity" for integers, real numbers, and complex numbers. For the real numbers, a · 1 = 1 · a = a for all a.

Identity function

A common example of an identity function is the identity permutation, which sends each element of the set to itself. Also, some care is sometimes needed to avoid ambiguities: 0 is the identity element for the addition of numbers, and x + 0 = x is an identity. On the other hand, the identity function f(x) = x is not the identity element for the addition or the multiplication of functions (these are the constant functions 0 and 1), but it is the identity element for function composition.
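A quick numerical check makes the distinction in the first meaning concrete. The following sketch (our own Python illustration, not part of the original article) evaluates an identity and an ordinary equation at several points:

import math

# An identity holds for every value we try...
for x in [0.0, 1.0, 2.0, -3.7]:
    assert math.isclose(math.cos(x)**2 + math.sin(x)**2, 1.0)

# ...whereas an equation like cos(x) = 1 holds only for particular values.
print(math.isclose(math.cos(0.0), 1.0))   # True:  x = 0 satisfies it
print(math.isclose(math.cos(2.0), 1.0))   # False: x = 2 does not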
• To develop an understanding of the basic concepts and techniques of composition and resolution of vectors, and of computing the resultant of vectors.
• To enable students to use the knowledge of the gradient of a straight line in finding speed, acceleration, etc.
• To enable students to use the knowledge of conics in finding the girder of a railway bridge, the cable of a suspension bridge and the maximum height of an arch.
• To provide the ability to apply the knowledge of differential calculus in solving problems like slope, gradient of a curve, velocity, acceleration, rate of flow of liquid, etc.
• To enable students to apply the process of integration in solving practical problems like calculation of the area of a regular figure in two dimensions and the volume of regular solids of different shapes.

Vector: Addition and subtraction, dot and cross product.
Co-ordinate Geometry: Co-ordinates of a point, locus and its equation, straight lines, circles and conics.
Differential Calculus: Function and limit of a function, differentiation with the help of limits, differentiation of functions, geometrical interpretation of dy/dx, successive differentiation and Leibnitz's theorem, partial differentiation.
Integral Calculus: Fundamental integrals, integration by substitution, integration by parts, integration by partial fractions, definite integrals.

1 Apply the theorems of vector algebra.
1.1 Define scalar and vector.
1.2 Explain null vector, free vector, like vector, equal vector, collinear vector, unit vector, position vector, addition and subtraction of vectors, linear combination, direction cosines and direction ratios, dependent and independent vectors, scalar fields and vector fields.
1.3 Prove the laws of vector algebra.
1.4 Resolve a vector in space along three mutually perpendicular directions.
1.5 Solve problems involving addition and subtraction of vectors.

2 Apply the concept of dot product and cross product of vectors. (A small illustrative sketch follows this outline.)
2.1 Define dot product and cross product of vectors.
2.2 Interpret dot product and cross product of vectors geometrically.
2.3 Deduce the conditions of parallelism and perpendicularity of two vectors.
2.4 Prove the distributive laws of dot product and cross product of vectors.
2.5 Explain the scalar triple product and vector triple product.
2.6 Solve problems involving dot product and cross product.

3 Apply the concept of co-ordinates to find lengths and areas.
3.1 Explain the co-ordinates of a point.
3.2 State different types of co-ordinates of a point.
3.3 Find the distance between two points (x1, y1) and (x2, y2).
3.4 Find the co-ordinates of a point which divides the straight line joining two points in a certain ratio.
3.5 Find the area of a triangle whose vertices are given.
3.6 Solve problems related to co-ordinates of points and the distance formula.

4 Apply the concept of locus.
4.1 Define the locus of a point.
4.2 Find the locus of a point.
4.3 Solve problems for finding the locus of a point under certain conditions.

5 Apply the equation of straight lines in calculating various parameters.
5.1 Describe the equations x = a and y = b and the slope of a straight line.
5.2 Find the slope of a straight line passing through two points (x1, y1) and (x2, y2).
5.3 Find the equation of straight lines in:
i) point-slope form;
ii) slope-intercept form;
iii) two-points form;
iv) intercept form;
v) perpendicular form.
5.4 Find the point of intersection of two given straight lines.
5.5 Find the angle between two given straight lines.
5.6 Find the conditions of parallelism and perpendicularity of two given straight lines.
5.7 Find the distance of a point from a line.
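As promised under outcome 2, here is a minimal sketch (ours, not part of the syllabus; Python is used purely for illustration) of dot and cross products and the parallelism/perpendicularity tests of 2.3:

def dot(a, b):
    """Dot product of two 3-D vectors."""
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Cross product of two 3-D vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

u, v = [1, 2, 3], [2, 4, 6]
print(dot(u, v))     # 28; a zero dot product would mean u and v are perpendicular
print(cross(u, v))   # [0, 0, 0]; a zero cross product means u and v are parallel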
6 Apply the equations of circle, tangent and normal in solving problems.
6.1 Define circle, center and radius.
6.2 Find the equation of a circle in the forms:
i) x^2 + y^2 = a^2;
ii) (x - h)^2 + (y - k)^2 = a^2;
iii) x^2 + y^2 + 2gx + 2fy + c = 0.
6.3 Find the equation of a circle described on the line joining (x1, y1) and (x2, y2).
6.4 Define tangent and normal.
6.5 Find the condition that a straight line may touch a circle.
6.6 Find the equations of the tangent and normal to a circle at any point.
6.7 Solve problems related to the equations of circle, tangent and normal.

7. Understand conics or conic sections.
7.1 Define conic, focus, directrix and eccentricity.
7.2 Find the equations of the parabola, ellipse and hyperbola.
7.3 Solve problems related to the parabola, ellipse and hyperbola.

FUNCTION AND LIMIT
8. Understand the concept of functions and limits.
8.1 Define constant, variable, function, domain, range and continuity of a function.
8.2 Define the limit of a function.
8.3 Distinguish between f(x) and f(a).
8.4 Establish: i) lim_(x->0) (sin x)/x = 1; ii) lim_(x->0) (tan x)/x = 1.

9. Understand differential co-efficient and differentiation.
9.1 Define the differential co-efficient in the form dy/dx = lim_(h->0) [f(x+h) - f(x)]/h.
9.2 Find the differential co-efficient of algebraic and trigonometric functions from first principles.

10. Apply the concept of differentiation.
10.1 State the formulae for differentiation of:
i) a sum or difference;
ii) a product;
iii) a quotient;
iv) a function of a function;
v) a logarithmic function.
Find the differential co-efficient using the sum or difference formula, the product formula and the quotient formula.
10.2 Find the differential co-efficient of a function of a function and of a logarithmic function.

11. Apply the concept of the geometrical meaning of dy/dx.
11.1 Interpret dy/dx geometrically.
11.2 Explain dy/dx under different conditions.
11.3 Solve problems of the type: A circular plate of metal expands by heat so that its radius increases at the rate of 0.01 cm per second. At what rate is the area increasing when the radius is 700 cm? (A worked sketch of this problem follows this outline.)

12 Use Leibnitz's theorem to solve problems of successive differentiation.
12.1 Find the 2nd, 3rd and 4th derivatives of a function and hence find n-th derivatives.
12.2 Express Leibnitz's theorem.
12.3 Solve problems of successive differentiation and Leibnitz's theorem.

13 Understand partial differentiation.
13.1 Define partial derivatives.
13.2 State the formula for the total differential.
13.3 State formulae for partial differentiation of implicit functions and homogeneous functions.
13.4 State Euler's theorem on homogeneous functions.
13.5 Solve problems on partial derivatives.

14 Apply fundamental indefinite integrals in solving problems.
14.1 Explain the concept of integration and the constant of integration.
14.2 State fundamental and standard integrals.
14.3 Write down formulae for:
i) integration of an algebraic sum;
ii) integration of the product of a constant and a function.
14.4 Integrate by the method of substitution, by parts and by partial fractions.
14.5 Solve problems of indefinite integration.

15 Apply the concept of definite integrals.
15.1 Explain definite integration.
15.2 Interpret geometrically the meaning of the definite integral of f(x) from a to b.
15.3 Solve problems of the following types:

P* = Practical continuous assessment
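As promised under 11.3, here is a worked sketch of the standard solution (assuming the plate stays circular): the area is A = πr^2, so differentiating with respect to time gives dA/dt = 2πr (dr/dt). At r = 700 cm with dr/dt = 0.01 cm/s, this is dA/dt = 2π(700)(0.01) = 14π ≈ 44 cm^2 per second.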
Excel VBA Counter

There are various functions in MS Excel to count values, whether strings or numbers, and counting can be done based on some criteria. These functions include COUNT, COUNTA, COUNTBLANK, COUNTIF, and COUNTIFS. However, these functions cannot do some tasks, like counting cells based on their color or counting only bold values. That is why we will create a counter in VBA, so that we can count for these types of tasks in Excel. Let us create some counters in Excel VBA.

Examples of Excel VBA Counter

Below are examples of the counter in VBA. Suppose we have data like the above for 32 rows. We will create a VBA counter which counts the values that are greater than 50, and one more counter to count the values that are less than 50. We will write the VBA code in such a way that the user can have data for an unlimited number of rows in Excel. To do the same, the steps would be:

Make sure the Developer tab in Excel is visible. To make the tab visible (if it is not), the steps are: Click on the 'File' tab in the ribbon and choose 'Options' from the list. Choose 'Customize Ribbon' from the list, check the box for 'Developer,' and click on OK. Now the 'Developer' tab is visible.

Insert the command button using the 'Insert' command available in the 'Controls' group on the 'Developer' tab. While pressing the ALT key, create the command button with the mouse. If we keep pressing the ALT key, the edges of the command button align automatically with the borders of the cells. Right-click on the command button to open the contextual menu (make sure 'Design Mode' is activated; otherwise, we will not be able to open the contextual menu). Choose 'Properties' from the menu. Change the properties of the command button, i.e., Name, Caption, Font, etc. Right-click again and choose 'View Code' from the contextual menu. The Visual Basic Editor is opened now, and by default, a subroutine has already been created for the command button. We will write the code now.

We will declare 3 variables: one for loop purposes, one to count, and one to store the value of the last row. We will use the code to select cell A1, then the current region of cell A1, and then get down to the last filled row to get the last filled row number. We will run a 'For' loop in VBA to check the values written from cell A2 down to the last filled cell in column A.
We will increase the value of the 'counter' variable by 1 if the value is greater than 50 and change the font color of the cell to blue; if the value is less than 50, the font color of the cell will be red. After checking and counting, we need to display the values. To do the same, we will use VBA's MsgBox function.

Private Sub CountingCellsbyValue_Click()
    Dim i As Long, counter As Integer
    Dim lastrow As Long
    lastrow = Range("A1").CurrentRegion.End(xlDown).Row
    For i = 2 To lastrow
        If Cells(i, 1).Value > 50 Then
            counter = counter + 1
            Cells(i, 1).Font.ColorIndex = 5   ' blue
        Else
            Cells(i, 1).Font.ColorIndex = 3   ' red
        End If
    Next i
    MsgBox "There are " & counter & " values which are greater than 50" & _
        vbCrLf & "There are " & lastrow - counter & " values which are less than 50"
End Sub

Deactivate 'Design Mode' and click on the command button. The result would be as follows.

Suppose we want to create a time counter using Excel VBA as follows: if we click on the 'Start' button, the timer starts, and if we click on the 'Stop' button, the timer stops. To do the same, the steps would be:

Create a format like this in an Excel sheet. Change the format of cell A2 to 'hh:mm:ss.' Merge the cells C3 to G7 using the Merge and Center command in the 'Alignment' group on the 'Home' tab. Give the reference of cell A2 to the just-merged cell, and then apply formatting: set the font style to 'Baskerville,' the font size to 60, etc.

Create two command buttons, 'Start' and 'Stop,' using the 'Insert' command available in the 'Controls' group on the 'Developer' tab. Using the 'Properties' command available in the 'Controls' group on the 'Developer' tab, change the properties. Select the command buttons one by one and choose the 'View Code' command from the 'Controls' group on the 'Developer' tab to write the code as follows, choosing the appropriate command button from the drop-down.

Insert a module into 'ThisWorkbook' by right-clicking on 'ThisWorkbook,' then choosing 'Insert' and then 'Module.' Write the following code in the module:

Sub start_time()
    Application.OnTime Now + TimeValue("00:00:01"), "next_moment"
End Sub

Sub end_time()
    Application.OnTime Now + TimeValue("00:00:01"), "next_moment", , False
End Sub

Sub next_moment()
    If Worksheets("Time Counter").Range("A2").Value = 0 Then Exit Sub
    Worksheets("Time Counter").Range("A2").Value = _
        Worksheets("Time Counter").Range("A2").Value - TimeValue("00:00:01")
    start_time
End Sub

We have used the 'OnTime' method of the Application object, which is used to run a procedure at a scheduled time. The procedure we have scheduled to run is 'next_moment.' Save the code. Write the time in cell A2 and click on the 'Start' button to start the time counter.
Suppose we have a list of students along with the marks scored by them, and we want to count the number of students who passed and who failed. To do the same, we will write VBA code. The steps would be:

Open the Visual Basic Editor by pressing the Excel shortcut Alt+F11 and double-click on 'Sheet3 (Counting Number of students)' to insert a subroutine based on an event in Sheet3. Choose 'Worksheet' from the dropdown. As we pick 'Worksheet' from the list, we can see that there are various events in the adjacent dropdown. We need to choose 'SelectionChange' from the list.

We will declare the VBA variables: 'lastrow' for storing the last row number (as the list of students can grow), 'pass' to store the number of students who passed, and 'fail' to store the number of students who failed. We will store the value of the last row number in 'lastrow.' We will create a 'For' loop for counting based on a condition. We have set the condition: if the total marks are greater than 99, then add 1 to the 'pass' variable; otherwise add 1 to the 'fail' variable. We also make the heading 'Summary' bold. To print the values in the sheet, the code would be:

Private Sub Worksheet_SelectionChange(ByVal Target As Range)
    Dim i As Long
    Dim lastrow As Long
    Dim pass As Integer
    Dim fail As Integer
    lastrow = Range("A1").CurrentRegion.End(xlDown).Row
    For i = 2 To lastrow
        If Cells(i, 5) > 99 Then
            pass = pass + 1
        Else
            fail = fail + 1
        End If
    Next i
    Cells(1, 7).Font.Bold = True   ' make the "Summary" heading bold
    Range("G1").Value = "Summary"
    Range("G2").Value = "The number of students who passed is " & pass
    Range("G3").Value = "The number of students who failed is " & fail
End Sub

Now, whenever there is a change in selection, the values will be calculated again, as below:

Things to Remember
- Save the file with the .xlsm Excel extension after writing code in VBA; otherwise, the macro will not work.
- Use the 'For' loop when it is already decided how many times the code in the VBA loop will run.

This has been a guide to VBA Counter. Here we discussed how to create a counter in Excel VBA so that we can count cells based on their color, count only bold values, etc.
Where Lightning Strikes

New maps from orbiting sensors that can detect flashes of lightning even during the daytime reveal where on Earth the powerful bolts will most likely strike. These are just a few of the things NASA scientists have learned using satellites to monitor worldwide lightning. "For the first time, we've been able to map the global distribution of lightning, noting its variation as a function of latitude, longitude and time of year," says Hugh Christian, project leader for the National Space Science and Technology Center's (NSSTC's) lightning team at NASA's Marshall Space Flight Center.

Left: A lightning bolt strikes the Atlantic Ocean near Florida. According to a new NASA map of global lightning rates, such strikes over open ocean waters are rare. Image courtesy NOAA.

"Basically, these optical sensors use high-speed cameras to look for changes in the tops of clouds, changes your eyes can't see," he explains. By analyzing a narrow wavelength band around 777 nanometers -- which is in the near-infrared region of the spectrum -- they can spot brief lightning flashes even under daytime conditions.

Before the Optical Transient Detector (OTD) and the Lightning Imaging Sensor (LIS), global lightning patterns were known only approximately. Ground-based lightning detectors employing radio-frequency sensors provide high-quality local measurements. But because such sensors have a limited range, oceans and low-population areas had been poorly sampled. The development of space-based optical detectors was a major advance, giving researchers their first complete picture of planet-wide lightning activity.

Above: Data from space-based optical sensors reveal the uneven distribution of worldwide lightning strikes. Units: flashes/km²/yr. Image credit: NSSTC Lightning Team.

The new maps show that Florida, for example, is one place where the rate of strikes is unusually high. Dennis Boccippio, an atmospheric scientist with the NSSTC lightning team, explains why: "Florida experiences two sea breezes: one from the east coast and one from the west coast." The "push" between these two breezes forces ground air upward and triggers thunderstorms. Within thunderclouds, turbulence spawned by updrafts causes tiny ice crystals and water droplets (called "hydrometeors") to bump around and collide. For reasons not fully understood, positive electric charge accumulates on smaller particles -- that is, on hydrometeors smaller than about 100 micrometers -- while negative charges grow on the larger ones. Winds and gravity separate the charged hydrometeors and produce an enormous electrical potential within the storm. "Lightning is one of the mechanisms to relax this build-up," says Boccippio.

Right: Lightning is a sudden discharge of electricity between charged regions of thunderclouds and the ground. Only about 25 percent of lightning strikes are cloud-to-ground. The rest are either cloud-to-cloud or intracloud.

Another lightning hot spot is in the Himalayas, where the extreme local topography forces the convergence of air masses from the Indian Ocean. And where does lightning strike most frequently? Central Africa. "There you get thunderstorms all year 'round," Christian says. "[It's a result of] weather patterns, air flow from the Atlantic Ocean, and enhancement by mountainous areas." The satellite data also track patterns of lightning intensity over time. In the northern hemisphere, for example, most lightning happens during the summer months. But in equatorial regions, lightning appears more often during the fall and spring.
This seasonal variation contributes to a curious north-south asymmetry: Lightning ignites many of North America's late-summer wildfires, while some studies find that wildfires in South America are sparked more often by humans. Why the difference? It's simply because lightning in South America happens during a season when the ground is damp. Summertime lightning bolts in North America strike when the ground is dry and littered with fuel for fires. Meanwhile, areas such as the Arctic and Antarctic have very few thunderstorms and, therefore, almost no lightning at all. "Oceanic areas also experience [a dearth of lightning]," Christian says. "People living on some of the islands in the Pacific don't describe much lightning in their language." The ocean surface doesn't warm up as much as land does during the day because of water's higher heat capacity. Heating of low-lying air is crucial for storm formation, so the oceans don't experience as many thunderstorms. Left: Relax. Lightning rarely strikes Pacific Islands. According to Boccippio these global patterns probably aren't much influenced by human activity. Some people have suggested that buildings and metal communications towers increase the overall frequency of lightning strikes. But, "lightning that does make it to the ground is pretty much creating its own channels," Boccippio says. "The likelihood that we are changing the amount of cloud-to-ground strikes with construction of towers is very slim." He cautions, however, that this has not been verified experimentally. To answer such questions, a new lightning detector -- the Lightning Mapper Sensor or "LMS" -- is on the drawing board at the NSSTC. The proposed instrument would circle our planet in a geostationary orbit over the United States, detecting all forms of lightning with a high spatial resolution and detection efficiency. Right: The Lightning Imaging Sensor (LIS) on board TRMM monitors lightning flashes on the Earth below by collecting 500 images per second. The same optical technology will likely be found in future space-based lightning sensors. Image courtesy NSSTC. The LMS or something like it could provide valuable -- even life-saving -- data to weather forecasters. "The same updrafts that drive severe weather often cause a spike in the lightning rate [at the onset of] a storm," explains Boccippio. So, measuring the rate of lightning flashes in real time might offer a way to identify potentially deadly storms before they become deadly. Clearly, lightning research is a field truly crackling with potential. You can learn more about it from the Global Hydrology and Climate Center's web site: Lightning and Atmospheric Electricity. National Space Science and Technology Center -- a research and education initiative consisting of researchers and resources from government, academia, and industry. The NSSTC focuses on Earth Science, Space Science, Materials Science, Biotechnology, Advanced Optics Technology, Space Propulsion Physics, and Information Technology. Global Hydrology and Climate Center -- an NSSTC partner. Observing Lightning from Space -- (NSSTC) Lightning strikes the ground, so why observe it from space? Find out by visiting this web site. Lightning and atmospheric electricity -- (NSSTC) includes a lightning primer and much more... Learning from Lightning -- (Science@NASA) Little by little, lightning sensors in space are revealing the inner workings of severe storms. Scientists hope to use the technique to improve forecasts of deadly weather. 
The Tropical Rainfall Measuring Mission -- a joint mission between NASA and the National Space Development Agency (NASDA) of Japan.
NGSS Connections

In this section, we first explain the synergy between this MDP and the eight science and engineering practices, then provide examples, options, and variations of activities and instructional strategies that are aligned with this MDP for each science and engineering practice. However, this does not mean that teachers must use all of these strategies to enact this MDP when promoting the science and engineering practice, nor that these strategies are the only way to do so. We encourage teachers to use their professional discretion to select what will work best for them and their classrooms, and to modify and innovate on these strategies.

Asking questions and defining problems drives science and engineering. If students do not feel confident that they can ask questions and define problems successfully, they are less likely to put effort into these tasks that are key to engaging meaningfully in authentic scientific inquiry and engineering design. In order to be able to successfully implement activities aimed at answering questions and finding design solutions, it is important to build students' confidence by providing clear directions, explaining the phenomenon or design scenario, and providing supports to help students ask questions and define problems that are calibrated to their current level of understanding and skill. Providing informational feedback supports students in further developing their skill at asking questions and defining problems.

- One way to introduce these questions is to have students practice by asking questions about a phenomenon or design problem that is familiar to them.

Models are a reflection of a scientist's or engineer's current understanding of a system. Students will have varying levels of understanding throughout a learning sequence in which they develop and generate a representation of a target phenomenon or design problem, use and describe its relationships and interactions, and evaluate and determine its limitations and explanations. Throughout this process it is imperative to support students' confidence in developing, using, and evaluating a model as they move from potentially naive understanding to more sophisticated understanding of a system. Students may also lack prior experience engaging in scientific or engineering modeling practices, as they may not have encountered this in previous science classes. Science and engineering teachers, therefore, have an important role to play in supporting the confidence of students in learning this particular practice.

Students may have little experience with planning, carrying out, and evaluating investigations and with the specialized equipment needed to investigate particular phenomena or test design solutions. Investigations also contain many steps and therefore many places where students may encounter challenges. Supporting students' confidence as they engage in these activities will be crucial for them to feel comfortable proceeding from planning to completing an investigation. At the same time, the complexity and safety risks of some investigations could make teachers prone to over-scaffold, which could reduce students' confidence by diluting the level of challenge and communicating feelings of distrust. It is therefore important for teachers to balance adequate supports with sufficient challenge in order for students to build confidence in this practice.
Scaffold the planning of an investigation by breaking it into smaller parts (mini-goals), such as:
- identifying multiple variables, such as independent and dependent variables and controls;
- selecting tools needed for data collection;
- determining how measurements will be taken and logged; and,
- deciding how many data points are sufficient for supporting a claim.
Provide time for students to reflect on the process and their progress on achieving each mini-goal, so they can focus on one part at a time, and provide informational feedback on how their plans are aligning with the objective for the investigation. As students gain competence in planning parts of an investigation, give them larger chunks at a time to plan.

There is likely a wide variety of math ability levels in a single science class. Differences in skill may require different levels of scaffolding in order to develop confidence for all students. Some students may have little experience with data or may have limited confidence in successfully being able to tabulate, graph, or perform statistical analysis on data. Students may also be uncomfortable presenting the results of the analysis and interpretation to their peers. Supporting students' confidence as they engage in these activities will be crucial for them to feel comfortable working with data.

- Guidelines for graphing: a checklist for the parts of a graph, thinking guides for students to determine a good scale for the data they are graphing or the type of graph that will be most useful for their purpose
- Guidelines for data tabulation: checklists for how to set up a frequency table, how to set up a table for the different variables in an experiment, etc.
- Guidelines for summarizing data: different summary statistics (e.g., mean, median, mode) and what information they provide scientists and engineers

Students may have little experience using mathematics and computational thinking to represent, model, and analyze variables and their relationships to make sense of phenomena or solve design problems. Students' mathematical and computational thinking proficiency and, more importantly, their confidence in those abilities may vary widely. Ample examples, models, and opportunities for success are crucial to support students who may enter science class with lower confidence in these areas and for those students whose skills can be developed further. Informational feedback will help all students understand when they are progressing and, if they are not, what they can work on to improve.

- Guidelines for graphing: the parts of a graph; how to determine a good scale for the data they are graphing; what type of graph will be most useful for their purpose
- Spreadsheets for algorithms:
  - Functions in Excel or Google Sheets (e.g., calculating the mean or sum of several numbers; looking up numbers or text in a data set)
  - Examples can help demonstrate to students what an algorithm is
- Process charts for algorithms:
  - Common logical structures (e.g., if, then, else; for loop; while loop)
  - Examples of simple algorithms that can be used as building blocks or jumping-off points

Constructing an explanation requires students to use several skills at once to articulate a claim, select and present supporting evidence and science knowledge, and support the claim using logical reasoning. Some students may be less familiar or comfortable with the norms and practices of constructing explanations.
Each student will have a different skill level and comfort level in each of the skills and practices needed to construct an explanation and in being able to use those skills in concert to create an explanation. Providing a clear description and expectations of an explanation task, and supporting, encouraging, and giving informational feedback to students as they develop these skills, helps students to improve in constructing explanations without becoming overly frustrated or thinking they cannot do it. When properly structured and scaffolded, constructing explanations can build students' confidence by reaffirming and building on what they already know. Similarly, designing solutions requires the iterative application of several skills to arrive at a design solution and benefits from the support, encouragement, and feedback promoted by this design principle.

- Some scaffolding examples:
  - Here is my claim [... we believe that X is caused by ... or we believe that Y has a role in how Z happens ...]
  - If this claim or explanation is true, then when I look at this data, I would expect to see [this particular result or this outcome]
  - The reason I'd expect to see this is because I collected data from a situation that is really close to the real thing we are studying, and if we had these outcomes, it would mean that [state a brief causal chain of events—this chain has to be consistent with known science ideas/facts]
  - We did see the data pattern we expected. We believe this supports our claim
  - If our claim was not true, then I'd expect to see [a different set of patterns in the data or a particular outcome]. But we didn't see that outcome, so this reasoning also supports our claim
  - There may be other explanations for the data, such as ______ or ______, but this does not seem likely because __________

Explicitly teach the qualities of a successful explanation and how to improve preliminary explanations (e.g., determine whether and describe why the claim, evidence, and/or reasoning are/are not appropriate or valid), then use those same qualities to provide encouraging and informational feedback when students are sharing their own explanations.

1. McNeill, K. L., & Krajcik, J. (2011). Supporting Grade 5-8 Students in Constructing Explanations in Science: The Claim, Evidence, and Reasoning Framework for Talk and Writing (1st ed.). Boston: Pearson.

Some students may lack confidence to engage in argumentation, especially if they feel unsure of their own science and engineering understanding. Teachers can combat this lack of confidence by helping students understand the goals and expectations of argumentation and by supporting students throughout the process of developing and making an argument with guidance and informational feedback. Showing students that strategies can help them compose effective arguments can also help build their confidence in this practice. Finally, careful attention to the specificity of the informational feedback students receive when competing arguments are considered and evaluated is critical to supporting students' confidence in argumentation.

- Create tools to support common tasks (e.g., graphic organizers for Claim-Evidence-Reasoning) and make them consistently available to students.
This can help make challenging tasks more accessible.
- Use board space or anchor charts to ensure that the central question is clear and to record and sort points of agreement and disagreement as the students move toward reconciliation in their argument.
- Provide options for level of challenge so students can select the level that suits them. For example, allow students to decide whether or not they need a Claim-Evidence-Reasoning graphic organizer later in the school year.

Reading comprehension and synthesis can be difficult for many students, and they may lack confidence as readers. In particular, scientific readings, especially NGSS-based readings, can be difficult, quite long, and formatted differently than traditional textbooks (e.g., main ideas may not be in bold face or in pull-out boxes). Understanding and evaluating the information in these readings may require a different process from what students are used to. Providing students with multiple strategies for identifying big ideas, main points, and potential flaws in reasoning, as well as for annotating text effectively for future communication, is important for students' confidence as they try to understand these challenging texts. Students may also feel uncertain that they can effectively communicate new information about phenomena or design problems when they lack confidence in their own scientific understanding.

- Set norms where students have guidelines to follow for engaging in discussion – e.g., each student has a chance to speak and must offer at least one piece of evidence for their conclusions about scientific and technical texts.
- Provide opportunities for students to communicate their understanding in a variety of ways.
Program 2: Hello World

Starting from the existing program, you can add a couple more lines before it to actually output the "Hello World!" You have already used the _start: label, but what does that actually do? A label simply names a place in the program that you can refer to later. In the data section, it's common to label the beginning of a piece of data. When this happens, that label, and all references to it, will become the same address that points to a place in the data loaded into memory.

The data section

In the data section, the format is:

label: .datatype data

In this example you are going to use the .ascii datatype, which takes the data between the two double quotes and converts it to its ASCII equivalent. You will see other types of data in the near future.

LDR operation

Like mov, ldr places a value into a register. However, unlike mov, ldr loads that value from memory. This is a small but very important difference. But in order to load a value from memory, you need to know where in memory it is located. Luckily for us, the assembler will do the calculation for us if we reference a label in the data section.

ldr r1, =hello   @ Load the address that hello points to
                 @ into register 1

.data
@ The next line labels the location our string starts at as `hello`
hello: .ascii "Hello World!\n"
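Putting the pieces together, here is one way the complete program could look. This is a sketch assuming the GNU assembler and a 32-bit ARM Linux target, where write is syscall 4 and exit is syscall 1, both invoked with svc; the 13 loaded into r2 is the length of "Hello World!\n" in bytes:

.global _start

_start:
    mov r7, #4          @ syscall 4 = write
    mov r0, #1          @ file descriptor 1 = stdout
    ldr r1, =hello      @ r1 = address of the string
    mov r2, #13         @ r2 = number of bytes to write
    svc #0              @ ask the kernel to perform the write

    mov r7, #1          @ syscall 1 = exit
    mov r0, #0          @ exit status 0
    svc #0

.data
hello: .ascii "Hello World!\n"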
Welcome to the captivating world of probability! Our Probability Worksheets for Grade 5 are designed to introduce students to this important mathematical concept, equipping them with the skills to understand and apply probability in various situations. With our printable worksheets, students will embark on an exciting journey of exploring chance and likelihood. Download the Printable Probability Worksheets for Grade 5 Click here to access the printable Probability Worksheets for Grade 5. These worksheets provide a wide range of engaging activities that aim to strengthen students’ understanding of probability. By utilizing these downloadable resources, students will develop their probability skills while mastering the art of analyzing and predicting outcomes. Below are our grade 5 Probability worksheets to calculate simple probabilities as fractions. - Probability Based on Dice - Probability Based on Darts - Probability Based on Raffle - Probability Based on Spinning Wheel - Probability Based on Playing Cards Probability Based on Dice Below are our grade 5 Probability worksheets on playing dice. Probability Based on Darts Below are our grade 5 Probability worksheets on the darts. Probability Based on Raffle Below are our grade 5 Probability worksheets on the raffle. Probability Based on Spinning Wheel Below are our grade 5 Probability worksheets on the spinning wheel. Probability Based on Playing Cards Below are our grade 5 Probability worksheets on the playing cards. Probability Worksheets Benefits Our Probability Worksheets offer several benefits for Grade 5 students. Here are the key advantages: - Conceptual Understanding: The worksheets focus on teaching students the fundamental concepts of probability, including the likelihood of events occurring and the notion of chance. By working through these worksheets, students will develop a solid foundation in probability theory. - Problem-Solving Skills: The worksheets provide numerous opportunities for students to apply probability concepts to solve problems. Through engaging exercises, students will enhance their ability to analyze scenarios, make predictions, and calculate probabilities, fostering critical thinking and problem-solving skills. - Real-Life Relevance: Probability is a vital concept in various real-life situations, such as weather forecasts, sports predictions, and risk assessment. By mastering probability, students acquire a skillset that can be applied to real-world scenarios, helping them make informed decisions based on the likelihood of outcomes. - Mathematical Thinking: Probability encourages students to think mathematically, enabling them to reason, analyze data, and draw conclusions based on probability concepts. By working with these worksheets, students will develop their mathematical thinking skills, promoting logical reasoning and quantitative literacy. Why Grade 5 Students Should Learn the Important Concept of Probability Learning probability is essential for Grade 5 students for several reasons: - Decision-Making Skills: Probability provides a framework for making informed decisions based on available information. By understanding probability, students can assess risks, evaluate alternatives, and make more accurate predictions, enhancing their decision-making skills. - Data Analysis: Probability is closely linked to data analysis. By learning probability, students gain the ability to analyze and interpret data, identify patterns, and draw meaningful conclusions from statistical information. 
- Critical Thinking: Probability fosters critical thinking by challenging students to analyze situations, consider possibilities, and evaluate evidence. It promotes logical reasoning and helps students become more discerning in their thinking and decision-making processes. - Everyday Application: Probability concepts are applicable in everyday life. Whether it’s estimating the likelihood of an event, understanding game strategies, or interpreting survey results, probability plays a significant role. By learning probability, students develop essential skills that can be used in various contexts throughout their lives. Mastering the concept of probability is a valuable skill for Grade 5 students. Our Probability Worksheets provide comprehensive resources to develop students’ understanding and application of probability. By downloading and utilizing these worksheets, you will empower your students to excel in critical thinking, problem-solving, and decision-making.
In a system of linear equations, each equation corresponds to a straight line, and one seeks out the point where the two lines intersect.

Manipulating expressions with unknown variables

Video transcript
In the last video, we saw what a system of equations is. And in this video, I'm going to show you one algebraic technique for solving systems of equations, where you don't have to graph the two lines and try to figure out exactly where they intersect. This will give you an exact algebraic answer. And in future videos, we'll see more methods of doing this. So let's say you had two equations. One is x plus 2y is equal to 9, and the other equation is 3x plus 5y is equal to … Now, if we did what we did in the last video, we could graph each of these. You could put them in either slope-intercept form or point-slope form. They're in standard form right now. And then you could graph each of these lines, figure out where they intersect, and that would be a solution to that. But it's sometimes hard, just by looking, to figure out exactly where they intersect. So let's figure out a way to do this algebraically. And what I'm going to do is the substitution method. I'm going to use one of the equations to solve for one of the variables, and then I'm going to substitute back in for that variable over here. So let me show you what I'm talking about. So let me solve for x using this top equation. So the top equation says x plus 2y is equal to 9. I want to solve for x, so let's subtract 2y from both sides of this equation. So I'm left with x is equal to 9 minus 2y. This is what this first equation is telling me. I just rearranged it a little bit. The first equation is saying that.

With this direction, you are being asked to write a system of equations. You want to write two equations that pertain to this problem. Notice that you are given two different pieces of information: information about the price of the shoes and information about the number of shoes bought. Therefore, we will write one equation for each piece of information.

Let's use the second equation and the variable "y" (it looks like the simplest equation). Write one of the equations so it is in the style "variable = ": we can subtract x from both sides of x + y = 8 to get y = 8 − x.

Learn how beautifully simple linear relationships are and how easy they are to identify. Discover how you can see them in use in the world around you on an everyday basis and why they are useful.

Alternatively, write each equation in standard form Ax + By = C, enter an augmented matrix which represents the system of equations into a calculator, and use the calculator to find the row-reduced form.
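To see the substitution method end to end, here is a small sketch in Python using sympy (our own example; the right-hand side 20 in the second equation is made up, since the transcript's value is cut off):

from sympy import symbols, Eq, solve

x, y = symbols("x y")

eq1 = Eq(x + 2*y, 9)        # x + 2y = 9
eq2 = Eq(3*x + 5*y, 20)     # 3x + 5y = 20  (illustrative right-hand side)

# Substitution method: solve eq1 for x, substitute into eq2, solve for y.
x_in_terms_of_y = solve(eq1, x)[0]                   # 9 - 2y
y_value = solve(eq2.subs(x, x_in_terms_of_y), y)[0]  # 7
x_value = x_in_terms_of_y.subs(y, y_value)           # -5

print(x_value, y_value)   # the intersection point of the two lines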
A function is a relation in which each element of the domain corresponds to exactly one element of the range. More than one element of the domain may, however, correspond to a single element of the range, and the relation is still a function. This is most easily seen by studying the graphs of relations. The first two parameters connected with any function are the domain and the range. The domain of a function is the set of real values of the variable for which the function is defined. The range is the set of values the function takes, from its minimum to its maximum, over the domain. Usually both are expressed in interval notation: a square bracket is used if the value at that endpoint is included, and a round bracket is used when it is not. The interval of all real numbers is represented as (-∞, ∞). A function may not be defined before or after a certain value of the variable. For example, the function f(x) = √(1 + x) is not defined for any value of x before -1, and the function f(x) = √(1 - x) is not defined for any value of x after 1. In other cases a function may be undefined only at a particular value, or at a few values, of the variable. At such places the function is said to have discontinuities. At the beginning we said that a relation is a function if each domain element corresponds to only one element in the range, and we extended this: more than one domain element can correspond to a single range element. In the former case the function is called 'one-to-one'; the latter case is referred to as 'many-to-one'. For example, f(x) = 2x + 1 is a one-to-one function, but f(x) = x² is a many-to-one function, because for both x = -2 and x = 2, f(x) has the same value, 4. If every element of the target set is the image of some element of the domain (even if the mapping is many-to-one), the function is called onto. Again, f(x) = 2x + 1 is onto because its range is all of (-∞, ∞). But in the case of f(x) = x², the range does not cover the negative values, so this function is not onto.
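To see the one-to-one versus many-to-one distinction concretely, here is a short illustrative Python snippet (not from the original text) that evaluates the two example functions at x = -2 and x = 2:

```python
# f is one-to-one: distinct inputs always give distinct outputs.
def f(x):
    return 2 * x + 1

# g is many-to-one: -2 and 2 both map to 4.
def g(x):
    return x ** 2

print(f(-2), f(2))  # -3 5  -> different outputs
print(g(-2), g(2))  # 4 4   -> the same output, so g is not one-to-one
```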
In computer architecture, 31-bit integers, memory addresses, or other data units are those that are 31 bits wide. Perhaps the only computing architecture based on 31-bit addressing is also one of computing's most famous and most profitable. In 1983, IBM introduced 31-bit addressing in the System/370-XA mainframe architecture as an upgrade to the 24-bit physical and virtual, and transitional 24-bit-virtual/26-bit-physical, addressing of earlier models. This enhancement allowed address spaces to be 128 times larger, permitting programs to address memory above 16 MiB (referred to as "above the line"). In the System/360 and early System/370 architectures, the general purpose registers were 32 bits wide, the machine did 32-bit arithmetic operations, and addresses were always stored in 32-bit words, so the architecture was considered 32-bit; but the machines ignored the top 8 bits of the address, resulting in 24-bit addressing. With the XA extension, no bits in the word were ignored. The transition was tricky: assembly language programmers, including IBM's own operating systems architects and developers, had been using the spare byte at the top of addresses for flags for almost twenty years. IBM chose to provide two forms of addressing to minimize the pain: if the most significant bit (bit 0) of a 32-bit address was on, the next 31 bits were interpreted as the virtual address; if the most significant bit was off, then only the lower 24 bits were treated as the virtual address (just as with pre-XA systems). Thus programs could continue using the seven low-order bits of the top byte for other purposes as long as they left the top bit off. The only programs requiring modification were those that set the top (leftmost) bit of a word containing an address. This also affected address comparisons: the leftmost bit of a word is also interpreted as the sign bit in two's complement arithmetic, indicating a negative number if bit 0 is on. Programs that used signed arithmetic comparison instructions could get reversed results, and two equivalent addresses could compare as non-equal if one of them had the sign bit turned on, even if the remaining bits were identical. Fortunately, most of this was invisible to programmers using high-level languages like COBOL or FORTRAN, and IBM aided the transition with dual-mode hardware for a period of time. Certain machine instructions in this 31-bit addressing mode alter the addressing mode bit as a possibly intentional side effect. For example, the original subroutine call instruction BAL, Branch and Link, and its register-register equivalent, BALR, Branch and Link Register, store certain status information (the instruction length code, the condition code, and the program mask) in the top byte of the return address. A BAS, Branch and Save, instruction was added to allow 31-bit return addresses. BAS, and its register-register equivalent, BASR, Branch and Save Register, were part of the instruction set of the System/360 Model 67, which was the only System/360 model to allow addresses longer than 24 bits. These instructions were maintained, but were modified and extended for 31-bit addressing.
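The dual interpretation of an address word, and the signed-comparison pitfall just described, can be sketched in a few lines of Python. This is an illustration of the rule as stated above, not IBM code; the function names are invented for the example.

```python
MODE_BIT = 0x80000000  # bit 0 in IBM numbering: the leftmost bit of the 32-bit word

def effective_address(word):
    """Interpret a 32-bit address word under the XA dual-addressing rule."""
    if word & MODE_BIT:           # bit 0 on: 31-bit mode
        return word & 0x7FFFFFFF  # the next 31 bits are the virtual address
    return word & 0x00FFFFFF      # bit 0 off: only the low 24 bits are used

def as_signed32(word):
    """Reinterpret the word as a two's-complement signed 32-bit integer."""
    return word - 0x100000000 if word & MODE_BIT else word

a = 0x00123456  # 24-bit-mode word
b = 0x80123456  # same low bits, but with the top (mode/sign) bit on

print(effective_address(a) == effective_address(b))  # True: same virtual address
print(as_signed32(a), as_signed32(b))  # 1193046 -2146290602: a signed compare sees them differ
```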
Additional instructions in support of 24/31-bit addressing include two new register-register call/return instructions which also effect an addressing mode change: Branch and Save and Set Mode, BASSM, the 24/31-bit version of a call, where the linkage address including the mode is saved and a branch is taken to an address in a possibly different mode; and BSM, Branch and Set Mode, the 24/31-bit version of a return, where the return is made directly to the previously saved linkage address and in its previous mode. Taken together, BASSM and BSM allow 24-bit calls to 31-bit (and return to 24-bit), 31-bit calls to 24-bit (and return to 31-bit), 24-bit calls to 24-bit (and return to 24-bit), and 31-bit calls to 31-bit (and return to 31-bit); a toy sketch of this mode-carrying linkage convention appears after the notes below. Like BALR 14,15 (the 24-bit-only form of a call), BASSM is used as BASSM 14,15, where the linkage address and mode are saved in register 14, and a branch is taken to the subroutine address and mode specified in register 15. Somewhat similarly to BCR 15,14 (the 24-bit-only form of an unconditional return), BSM is used as BSM 0,14, where 0 indicates that the current mode is not saved (the program is leaving the subroutine anyway), and a return is taken to the caller at the address and mode specified in register 14. Refer to IBM publication MVS/Extended Architecture System Programming Library: 31-Bit Addressing, GC28-1158-1, for extensive examples of the use of BAS, BASR, BASSM and BSM, in particular pp. 29–30. In the 1990s IBM introduced the 370/ESA architecture (later named 390/ESA and finally ESA/390 or System/390, in short S/390), completing the evolution to full 31-bit virtual addressing and keeping this addressing mode flag. These later architectures allow more than 2 GiB of physical memory and allow multiple concurrent address spaces, each up to 2 GiB in size. As of mid-2006 there were not many programs unduly constrained by these multiple 31-bit address spaces. Nonetheless, IBM broke the 2 GiB linear addressing barrier ("the bar") in 2000 with the introduction of the first 64-bit z/Architecture system, the IBM zSeries Model 900. Unlike the XA transition, z/Architecture does not reserve a top bit to identify earlier code. Yet z/Architecture maintains compatibility with 24-bit and 31-bit code, even older code running concurrently with newer 64-bit code. Since Linux/390 was first released for the existing 32-bit data/31-bit addressing hardware in 1999, initial mainframe Linux applications compiled in pre-z/Architecture mode were also limited to 31-bit addressing. This limitation disappeared with 64-bit hardware, 64-bit Linux on zSeries, and 64-bit Linux applications. The 64-bit Linux distributions still run 32-bit data/31-bit addressing programs. IBM's 31-bit addressing allows 31-bit code to make use of additional memory; however, at any one instant, a maximum of 2 GiB is addressable in each working address space. For non-64-bit Linux on processors with 31-bit addressing, it is possible to assign memory above the 2 GiB bar as a RAM disk. 31-bit Linux kernel (not user-space) support was removed in version 4.1.
- Indeed, in a variable-length parameter list of addresses, the last address entry traditionally had its most significant bit set to 1, whereas the other address entries were required to have their most significant bit set to 0.
- Because the instruction length code is 00b for a BALR and 01b for a BAL, the high-order bit is always guaranteed to be set to 0, thereby indicating 24-bit mode, for BALR and BAL on XA and later systems.
- "4.1 Merge window, part 1". LWN. April 15, 2015.
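Here is the toy sketch referred to above: a Python illustration (not assembler, and not IBM code) of a linkage word that carries the addressing mode in its top bit, so that a return can restore both the address and the mode. The function names are invented for illustration.

```python
MODE_BIT = 0x80000000  # bit 0 of the linkage word records the addressing mode

def bassm_save(return_addr, caller_in_amode31):
    """Roughly what BASSM saves: the return address, with the caller's mode in bit 0."""
    word = return_addr & 0x7FFFFFFF
    return (word | MODE_BIT) if caller_in_amode31 else word

def bsm_return(linkage_word):
    """Roughly what BSM does: recover the return address and the mode to resume in."""
    amode31 = bool(linkage_word & MODE_BIT)
    addr = linkage_word & (0x7FFFFFFF if amode31 else 0x00FFFFFF)
    return addr, amode31

# A 31-bit caller invokes a subroutine and later gets both address and mode back.
link = bassm_save(0x00045678, caller_in_amode31=True)
addr, amode31 = bsm_return(link)
print(hex(addr), amode31)  # 0x45678 True
```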
Disenfranchisement after the Reconstruction Era
Disenfranchisement after the Reconstruction Era deals with the efforts made by Southern states of the former Confederacy, at the turn of the 20th century, to prevent their black citizens from registering to vote and voting. Their actions defied the Fifteenth Amendment to the United States Constitution, ratified in 1870, which was intended to protect the suffrage of freedmen after the American Civil War. Considerable violence and fraud accompanied elections as white Democrats regained power; they used paramilitary groups to suppress black Republican voting and turn Republicans out of office. In the Wilmington Insurrection of 1898 (long called a race riot by whites), white Democrats conducted a coup d'état of city government, the only one in United States history; they overturned a duly elected biracial government and then widely attacked the black community, destroying lives and property. Finally, Democrats achieved disenfranchisement by law: from 1890 to 1908, Southern states passed new constitutions, constitutional amendments, and laws that made voter registration and voting more difficult, achieving the desired result of disenfranchising most black voters, as well as many poor whites. The Republican Party was nearly destroyed in the region, and Southern Democrats established a one-party system based on white supremacy. As Congressional apportionment was based on total population, the Southern white Democrats, known as the Southern Bloc, came to have outsize power in Congress for decades. "Section 2 of the Fourteenth Amendment reduces congressional representation for states that deny suffrage on racial grounds," but it was not enforced, as opponents of the South could not get around the bloc's power in Congress. In 1912 Woodrow Wilson gained an Electoral College bonus as a result of this disenfranchisement of black Republicans; he won that election and the 1916 presidential election, becoming the first Southerner elected president since 1856. He changed race relations in the federal government in 1913, overtly instituting racial segregation throughout federal workplaces and establishing racial discrimination in hiring. During World War I the military was also segregated, with black soldiers poorly trained and equipped, and often sent on suicide missions. The results of disenfranchisement had additional far-reaching effects in Congress, where the Democratic South gained "about 25 extra seats in Congress for each decade between 1903 and 1953." The end of a two-party system in the South also meant that Southerners were entrenched in Congress, giving them seniority privileges and control of chairmanships of important committees, as well as leadership of the national Democrats. During the Great Depression, numerous national social programs were passed without representation from African Americans, leading to gaps in the programs' coverage. In addition, because black Americans in the South were not on the voting registers, they were excluded from jury service. Segregation in the military ended in 1948.
The American Civil War ended in 1865, marking the start of the Reconstruction era in the eleven former Confederate states. Congress refused to readmit these states to the Union until they were reconstructed and the freedmen's right to vote was safeguarded. In 1866, ten of these states did not provide suffrage and equal civil rights to freedmen; the exception was Tennessee, which had adopted a new constitution in 1865. Congress passed the Reconstruction Acts, starting in 1867, establishing military districts to oversee the affairs of these states pending reconstruction. During the Reconstruction era, blacks constituted absolute majorities of the populations in Mississippi and South Carolina, were equal in number to the white population in Louisiana, and represented more than 40% of the population in four other former Confederate states. Southern whites, fearing black domination, resisted the freedmen's exercise of political power. In 1867, black men voted for the first time. By the 1868 presidential election, Texas, Mississippi, and Virginia had still not been re-admitted to the Union. Radical Republican Civil War General Ulysses S. Grant was elected president with the support of 700,000 black voters. In February 1870, the Fifteenth Amendment was ratified; it was designed to protect blacks' right to vote from infringement by the states. White supremacist paramilitary organizations, allied with Southern Democrats, used intimidation, violence, and assassinations to repress blacks and prevent them from exercising their civil rights in elections from 1868 until the mid-1870s. The insurgent Ku Klux Klan (KKK) was formed in 1865 in Tennessee (as a backlash to defeat in the war) and quickly became a powerful secret vigilante group, with chapters across the South. The Klan initiated a campaign of intimidation directed against blacks and sympathetic whites. Its violence included vandalism and destruction of property, physical attacks and assassinations, and lynchings. Teachers who came from the North to teach freedmen were sometimes attacked or intimidated as well. Klan murders led Congress to pass laws to end the violence: in 1870, the strongly Republican Congress passed the Enforcement Acts (also known as the Force Acts), imposing penalties for conspiracy to deny black suffrage. The Acts empowered the President to deploy the armed forces to suppress organizations that deprived people of rights guaranteed by the Fourteenth Amendment. Organizations whose members appeared in arms were considered in rebellion against the United States, and the President could suspend habeas corpus under those circumstances. President Grant used these provisions in parts of the Carolinas in late 1871. United States marshals supervised state voter registrations and elections and could summon the help of military or naval forces if needed. Under these acts the KKK was suppressed by federal prosecution, and the measures led to the demise of the first Klan by the early 1870s. New paramilitary groups unleashed a second wave of violence, resulting in over 1,000 deaths, the victims usually black or Republican. The Supreme Court ruled in 1876 in United States v.
Cruikshank, arising from trials related to the Colfax Massacre, that the protections of the Fourteenth Amendment, which the Enforcement Acts were intended to support, did not apply to the actions of individuals, but only to the actions of state governments. More significant were the paramilitary organizations that arose in the mid-to-late 1870s as part of the continuing insurgency in the South after the Civil War, as armed veterans began varied forms of resistance to social changes, including preventing black Americans from voting and running for office. Such groups included the White League, formed in Louisiana in 1874 from white militias, with chapters forming in other Southern states; the Red Shirts, formed in 1875 in Mississippi but also active in North Carolina and South Carolina; and other "White Liners," such as rifle clubs and the Knights of the White Camellia. Compared to the Klan, they were open societies, better organized and devoted to the political goal of regaining control of the state legislatures and suppressing Republicans, including most blacks. They often solicited newspaper coverage for publicity to increase their threat. The scale of operations was such that in 1876, North Carolina had 20,000 members of rifle clubs alone. Made up of well-armed Confederate veterans, a class that covered most adult men who could have fought in the war, the paramilitary groups worked for political aims: to turn Republicans out of office, disrupt their organizing, and use force to intimidate and terrorize freedmen to keep them away from the polls. Such groups have been described as "the military arm of the Democratic Party." They were instrumental in many Southern states in driving blacks away from the polls and ensuring a white Democratic takeover of legislatures and governorships in most Southern states in the 1870s, most notoriously during the controversial 1876 elections. As a result of the national Compromise of 1877 arising from the 1876 presidential election, the federal government withdrew its forces from the South, formally ending the Reconstruction era. With the withdrawal, Southern Democrats regained control in the last Republican-held states – Louisiana, South Carolina, and Florida. The victors identified as the Redeemers, and in the South the process has been called "the Redemption". African-American historians sometimes call the Compromise of 1877 "The Great Betrayal." Following continuing violence around elections as insurgents worked to suppress black voting, the Democratic-dominated Southern states passed legislation to create barriers to voter registration by blacks and poor whites, starting with the Georgia poll tax in 1877. Other measures followed, particularly near the end of the century. Results could be seen across the South, in states such as Tennessee. After Reconstruction, Tennessee initially had the most "consistently competitive political system in the South". A bitter election battle in 1888, marked by unmatched corruption and violence, resulted in white Democrats taking over the state legislature. To consolidate their power, they worked to suppress the black vote and sharply reduced it through changes in voter registration, requiring poll taxes, and changes in election procedures that made voting more complex. In 1890 Mississippi adopted a new constitution, which contained voter registration provisions that required voters to pass a literacy test and pay poll taxes.
The literacy test was subjectively applied by white administrators, and the two provisions effectively disenfranchised most blacks and many poor whites. The constitutional provisions survived a Supreme Court challenge in Williams v. Mississippi (1898). Other southern states quickly adopted new constitutions following what they called the "Mississippi plan." By 1908, all Southern states of the former Confederacy had passed new constitutions, sometimes bypassing general elections to do so. Legislators created a variety of barriers, including longer residency requirements, rule variations, and literacy and understanding tests, which were subjectively applied against minorities or were particularly hard for the poor to fulfill. Such constitutional provisions were unsuccessfully challenged at the Supreme Court in Giles v. Harris (1903). In practice, these provisions, including white primaries, created a maze that blocked most blacks and many poor whites from voting in Southern states until passage of federal civil rights legislation in the mid-1960s. Voter registration and turnout dropped sharply across the South. The disenfranchisement of a large proportion of voters attracted the attention of Congress, and in 1900 some members proposed stripping the South of seats in proportion to the number of people who were barred from voting. Apportionment of seats was still based on total population (with the assumption of the usual proportion of voting males), and white Southerners commanded a number of seats out of proportion to the voters they represented. In the end, Congress did not act on the issue, as the Southern bloc of Democrats had sufficient power to reject or stall such action. For decades, white Southern Democrats exercised Congressional representation derived from a full count of the population, while they disenfranchised several million black and white citizens. Southern white Democrats comprised a powerful voting bloc in Congress until the mid-20th century. Their representatives, re-elected repeatedly by one-party states, exercised the power of seniority, controlling numerous chairmanships of important committees in both houses. Their power gave them control over rules, budgets, and important patronage projects, among other issues, and allowed them to defeat bills to make lynching a federal crime.
State disenfranchising constitutions, 1890-1908
Despite white Southerners' complaints about Reconstruction, several Southern states kept most provisions of their Reconstruction constitutions for more than two decades, until late in the 19th century. In some states, the number of blacks elected to local offices reached a peak in the 1880s, although Reconstruction had ended. They had influence at the local level, though they won few state seats. Subsequently, state legislatures passed restrictive laws that made voter registration and election rules more complicated. In addition, most legislatures drafted new constitutions or amendments that adopted indirect methods of limiting the vote by most blacks and, often, many poor whites. Florida approved a new constitution in 1885 that included poll taxes as a prerequisite for voter registration and voting. From 1890 to 1908, ten of the eleven Southern states rewrote their constitutions. All included provisions that effectively restricted voter registration and suffrage, including requirements for poll taxes, increased residency, and subjective literacy tests.
With educational improvements, blacks had markedly increased their rate of literacy: by 1891, their illiteracy had declined to 58%, while the rate of white illiteracy in the South was 31%. Some states used grandfather clauses to exempt white voters from literacy tests altogether. Other states required otherwise eligible black voters to meet literacy and knowledge requirements to the satisfaction of white registrars, who applied subjective judgment and, in the process, rejected most black voters. By 1900, the majority of blacks were literate, but even many of the best-educated of these men continued to "fail" the literacy tests administered by white registrars. The historian J. Morgan Kousser noted, "Within the Democratic party, the chief impetus for restriction came from the black belt members," whom he identified as "always socioeconomically privileged." In addition to wanting to affirm white supremacy, the planter and business elite were concerned about voting by lower-class and uneducated whites. Kousser found, "They disfranchised these whites as willingly as they deprived blacks of the vote." Perman noted that the drive for disenfranchisement resulted from several factors: competition between white elites and white lower classes, for example, and a desire to prevent alliances between lower-class white and black Americans, as had been seen in the Populist-Republican alliances, led white Democratic legislators to restrict voter rolls. With the passage of new constitutions, Southern states adopted provisions that disenfranchised large portions of their populations while skirting the US constitutional protections of the Fourteenth and Fifteenth Amendments. While their voter registration requirements applied to all citizens, in practice they disenfranchised most blacks and also "would remove [from voter registration rolls] the less educated, less organized, more impoverished whites as well - and that would ensure one-party Democratic rules through most of the 20th century in the South." As white Democrats regained political power in the South in the 1870s, they worked to suppress black voting: the Red Shirts and the White League intimidated and attacked black voters, and often turned Republicans out of office. The new provisions eliminated black voting by law. In addition, the Democratic legislatures passed Jim Crow laws to assert white supremacy, establish racial segregation in public facilities, and treat blacks as second-class citizens. The landmark court decision in Plessy v. Ferguson (1896) held that "separate but equal" facilities, as on railroad cars, were constitutional. The new constitutions survived numerous Supreme Court challenges. In cases where a particular restriction was overruled by the Supreme Court in the early 20th century, states quickly devised new methods of excluding most blacks from voting, such as the white primary, since the only competitive contests had been reduced to the Democratic Party primaries. For the national Democratic Party, the alignment after Reconstruction resulted in a Southern anchor that was useful for congressional clout, but it inhibited the national party from pursuing center-left initiatives prior to President Franklin D. Roosevelt.
Black and white disenfranchisement
In Florida, Alabama, Tennessee, Arkansas, Louisiana, Mississippi, Georgia (1877), North and South Carolina, Virginia (until 1882 and again from 1902 with its new constitution), Texas (1901), and in some northern and western states, proof of payment of taxes (or poll taxes) was a prerequisite to voter registration. Georgia created a cumulative poll tax requirement in 1877: men of any race 21 to 60 years of age had to pay a sum of money for every year from the time they had turned 21, or from the time that the law took effect. The poll tax requirements applied to whites as well as blacks and adversely affected poor citizens. Many states required payment of the tax at a time separate from the election, and then required voters to bring receipts with them to the polls; if they could not locate such receipts, they could not vote. In addition, many states surrounded registration and voting with complex record-keeping requirements. These were particularly difficult for sharecroppers and tenant farmers to comply with, as they moved frequently. The poll tax was sometimes used alone or together with a literacy qualification. In a kind of grandfather clause, North Carolina in 1900 exempted from the poll tax those men entitled to vote as of January 1, 1867. This excluded all blacks, who did not then have suffrage.
Educational and character requirements
Alabama, Arkansas, Mississippi, Tennessee, and South Carolina created an educational requirement, with review by a local registrar of a voter's qualifications. In 1898 Georgia rejected such a device. Alabama delegates at first hesitated, out of concern that illiterate whites would lose their votes. After the legislature stated that the new constitution would not disenfranchise any white voters and that it would be submitted to the people for ratification, Alabama passed an educational requirement. It was ratified at the polls in November 1901. Its distinctive feature was the "good character clause" (also known as the "grandfather clause"). An appointment board in each county could register "all voters under the present [previous] law" who were veterans or the lawful descendants of such, and "all who are of good character and understand the duties and obligations of citizenship." This gave the board discretion to approve voters on a case-by-case basis. In practice, the boards enfranchised whites and rejected blacks, most of whom had been slaves and did not have military service. South Carolina, Louisiana (1898), and later Virginia incorporated an educational requirement in their new constitutions. In 1902 Virginia adopted a constitution with the "understanding" clause as a literacy test to be used until 1904. In addition, an application for registration had to be in the applicant's handwriting and written in the presence of the registrar. Thus, someone who could not write could not vote.
Eight Box Law
By 1882, the Democrats in South Carolina were firmly in power. Republican voters were concentrated in the heavily black counties of Beaufort and Georgetown. Because the state had a large black majority, white Democrats still feared a possible resurgence of black voters at the polls. To remove this threat, the General Assembly created an indirect literacy test, called the "Eight Box Law." The law required a separate box for each office; a voter had to insert the ballot into the corresponding box or it would not count. The ballots could not have party symbols on them, and they had to be of a correct size and type of paper.
Many ballots were arbitrarily rejected because they deviated slightly from the requirements. Ballots could also be randomly rejected if there were more ballots in a box than registered voters. The multiple-ballot box law was challenged in court. On May 8, 1895, Judge Goff of the United States Circuit Court declared the provision unconstitutional and enjoined the state from taking further action under it. But in June 1895, the US Circuit Court of Appeals reversed Judge Goff and dissolved the injunction, leaving the way open for a constitutional convention. The convention met on September 10 and adjourned on December 4, 1895. With the new constitution, South Carolina adopted the Mississippi Plan: until January 1, 1898, any male citizen could be registered who was able to read a section of the constitution or to satisfy the election officer that he understood it when read to him. Those thus registered were to remain voters for life. Under the new constitution, there was a massive drop in the number of black voters registered: by 1896, in a state where blacks comprised a majority of the population, only 5,500 black voters had succeeded in registering. Grandfather clauses were used that allowed a man to vote if his grandfather or father had voted prior to January 1, 1867 (neither free people of color, even if property owners, nor freedmen could vote before this date). The grandfather clause effectively denied all freedmen the ability to vote. At one time, free men of color could vote in North Carolina if they met property qualifications, but they were excluded there and elsewhere after the Nat Turner slave rebellion of 1831. Justice Benjamin Curtis's dissent in Dred Scott v. Sandford, 60 U.S. 393 (1857), had noted that free people of color in numerous states had the right to vote at the time of the Articles of Confederation (as part of the argument about whether people of African descent could be citizens of the new United States): Of this there can be no doubt. At the time of the ratification of the Articles of Confederation, all free native-born inhabitants of the States of New Hampshire, Massachusetts, New York, New Jersey, and North Carolina, though descended from African slaves, were not only citizens of those States, but such of them as had the other necessary qualifications possessed the franchise of electors, on equal terms with other citizens. North Carolina's constitutional amendment of 1900 exempted from the poll tax those men entitled to vote as of January 1, 1867, another use of a grandfather clause. Virginia also used a type of grandfather clause. In Guinn v. United States (1915), the Supreme Court invalidated the Oklahoma Constitution's "old soldier" and "grandfather clause" exemptions from literacy tests. In practice, these had disenfranchised blacks, as had occurred in numerous Southern states. This decision affected similar provisions in the election rules of Alabama, Georgia, Louisiana, North Carolina, and Virginia. Oklahoma and other states quickly reacted by passing laws that created other voter registration rules that worked against blacks and minorities. This was the first of many cases in which the NAACP filed a brief. In Lane v. Wilson (1939), the Supreme Court invalidated an Oklahoma provision designed to disenfranchise blacks; it had replaced the clause struck down in Guinn.
This clause permanently disenfranchised everyone qualified to vote who had not registered in a 12-day window between April 30 and May 11, 1916, except for those who had voted in 1914. While designed to be more resistant to challenges based on discrimination, as the law did not specifically mention race, the Court struck it down partially because it relied on the 1914 election, when voters had been discriminated against under the rule invalidated in Guinn. In Louisiana, with a population evenly divided between the races, in 1896 there were 130,334 black voters on the registration rolls and about the same number of whites. The constitution created by Louisiana state legislators in 1898 included a "grandfather" clause and a literacy test or property requirement: the would-be voter had to be able to read and write English or his native tongue, or own property assessed at $300 or more. The literacy test was administered by the voting registrars, who in practice were white Democrats. The grandfather clause provided that "Any citizen who was a voter on January 1, 1867, or his son or grandson, or any person naturalized prior to January 1, 1898, if applying for registration before September 1, 1898, might vote, notwithstanding illiteracy or poverty." Separate registration lists were kept for whites and blacks, making it easy to discriminate against the latter in literacy tests. The constitution of 1898 also required a longer residency in the state, county, parish, and precinct before voting than did the constitution of 1879. The effect of these changes on black voter registration in Louisiana was devastating: by 1900 the number of black voters on the rolls had been reduced from 130,334 to 5,320. By 1910, only 730 blacks were registered, less than 0.5% of eligible black men. "In 27 of the state's 60 parishes, not a single black voter was registered any longer; in 9 more parishes, only one black voter was." In 1894, a coalition of Republicans and the Populist Party won control of the North Carolina state legislature (and with it, the ability to elect two US Senators) and succeeded in having several US Representatives elected through electoral fusion. The fusion coalition made impressive gains in the 1896 election, when its legislative majority expanded and Republican Daniel Lindsay Russell won the gubernatorial race, taking office in 1897 as the first Republican governor of the state since the end of Reconstruction in 1877. The election also resulted in more than 1,000 elected or appointed black officials, including the election of George Henry White to Congress, where he took his seat in the House of Representatives in 1897. At the 1898 election, the Democrats ran on White Supremacy and disenfranchisement in a bitter race-baiting campaign led by Furnifold McLendel Simmons, who became the state's senator in 1901 and held the office until 1931, and Josephus Daniels, editor and publisher of The Raleigh News & Observer. The Republican/Populist coalition disintegrated, and the Democrats won the North Carolina 1898 election and the following 1900 election. They used their power in the state legislature to disenfranchise blacks and ensure that Democratic Party and white power would not be threatened again. They passed laws restricting voter registration, and in 1900 the Democrats adopted a constitutional suffrage amendment which lengthened the residence period before registration and enacted both an educational qualification (to be assessed by a registrar, which meant that it could be subjectively applied) and prepayment of the poll tax.
A grandfather clause exempted from the poll tax those entitled to vote on January 1, 1867. They also passed Jim Crow laws establishing racial segregation in public facilities and transportation. The effect in North Carolina was the complete elimination of black voters from the rolls by 1904. Contemporary accounts estimated that 75,000 black male citizens lost the vote. In 1900 blacks numbered 630,207 citizens, about 33% of the state's total population. The growth of the thriving black middle class was slowed. In North Carolina and other Southern states, there were also the insidious effects of invisibility: "[W]ithin a decade of disenfranchisement (sic), the white supremacy campaign had erased the image of the black middle class from the minds of white North Carolinians." In Virginia, Democrats sought disenfranchisement in the late 19th century after a coalition of white and black Republicans with populist Democrats had come to power; the coalition had been formalized as the Readjuster Party. The Readjuster Party held control from 1881 to 1883, electing a governor and controlling the legislature, which also elected a US Senator from the state. As in North Carolina, state Democrats were able to divide Readjuster supporters through appeals to White Supremacy. After regaining power, Democrats changed state laws and the constitution in 1902 to disenfranchise blacks. They ratified the new constitution in the legislature and did not submit it to popular vote. Voting in Virginia fell by nearly half as a result of the disenfranchisement of blacks. The 80-year stretch of white Democratic control ended only in the late 1960s, after passage and enforcement of the federal Voting Rights Act of 1965 and the collapse of the Byrd Organization machine.
White primary
About the turn of the 20th century, white members of the Democratic Party in some southern states (minorities were commonly excluded from membership) began to treat the party as a "private club" and to insist on white primaries, barring black and other minority voters who managed to get through the other barriers. These primaries became common for all elections. As the Democratic Party was dominant and the only competitive voting took place in the primaries, barring voters from the primaries was another means of exclusion. Court challenges overturned the state-mandated white primary system, but many states then passed laws that authorized the parties to set up their own systems, such as the white primary. Texas, for instance, passed such a state law in 1923. It was used to bar Mexican Americans as well as black Americans from voting, and it survived Supreme Court challenges until the 1940s.
Use of "white primaries" in the South and 1900 population of African Americans in those states:
State | No. of African Americans | % of Population | Year of law or constitution
Texas | 622,041 | 20.40 | 1901 / 1923 laws
The North had heard the South's version of Reconstruction abuses, such as financial corruption, high taxes, and incompetent freedmen. Industry wanted to invest in the South and not worry about political problems. In addition, reconciliation between white veterans of the North and South reached a peak in the early 20th century. As historian David Blight demonstrated in Race and Reunion: The Civil War in American Memory, reconciliation meant that whites pushed aside the major issues of race and suffrage.
Southern whites were effective for many years at having their version of history accepted, especially as it was confirmed in ensuing decades by influential historians of the Dunning School at Columbia University and other institutions. Disenfranchisement of black Americans in the South was covered by national newspapers and magazines as the new constitutions were created, and many Northerners were outraged and alarmed. In 1900 the House Committee on the Census considered proposals for adding more seats to the House of Representatives because of increased population; proposals ranged from 357 to 386 total seats. Edgar D. Crumpacker (R-IN) filed an independent report urging that the Southern states be stripped of seats due to the large numbers of voters they had disenfranchised. He noted this was provided for in Section 2 of the Fourteenth Amendment, which provides for stripping representation from states that reduce suffrage due to race. The Committee and the House failed to agree on this proposal. Supporters of black suffrage worked to secure Congressional investigation of disenfranchisement, but the concerted opposition of the Southern Democratic bloc was aroused, and the efforts failed. From 1896 to 1900, the House of Representatives, with a Republican majority, had acted in more than 30 cases to set aside election results from Southern states where the House Elections Committee had concluded that "black voters had been excluded due to fraud, violence, or intimidation." But in the early 1900s, the House began to back off from its enforcement of the Fifteenth Amendment and suggested that state and federal courts should exercise oversight of this issue. The Southern bloc of Democrats exercised increasing power in the House, and they had no interest in protecting the suffrage of blacks. In 1904 Congress administered a coup de grâce to efforts to investigate disenfranchisement with its decision in the South Carolina election challenge of Dantzler v. Lever. The House Committee on Elections upheld Lever's victory. It suggested that citizens of South Carolina who felt their rights were denied should take their cases to the state courts and, ultimately, the Supreme Court. Blacks had no recourse through the Southern state courts: because they were disenfranchised, they could not serve on juries, and whites were clearly aligned against them on this and other racial issues. Despite the Lever decision and the Southern Democrats' power in Congress, some Northern Congressmen continued to raise the issue of black disenfranchisement and the resulting malapportionment. For instance, on December 6, 1920, Representative George H. Tinkham of Massachusetts offered a resolution for the Committee on the Census to investigate alleged disenfranchisement of blacks. His intention was to enforce the provisions of the Fourteenth and Fifteenth Amendments. In addition, he believed there should be reapportionment in the House related to the voting population of southern states, rather than the general population as enumerated in the census. Such reapportionment was authorized by the Constitution and would reflect reality, so that the South would not get credit for people it had disenfranchised. Tinkham detailed how outsized the South's representation was relative to the total number of voters in each state, compared to other states with the same number of representatives:
- States with four representatives:
- Florida, with a total vote of 31,613.
- Colorado, with a total vote of 208,855.
- Maine, with a total vote of 121,836.
- States with six representatives:
- Nebraska, with a total vote of 216,014.
- West Virginia, with a total vote of 211,643.
- South Carolina, given 7 representatives because of its total population (which was majority black), but whose voters numbered only 25,433.
- States with 8 representatives:
- Louisiana, with a total vote of 44,794.
- Kansas, with a total vote of 425,641.
- States with 10 representatives:
- Alabama, with a total vote of 62,345.
- Minnesota, with a total vote of 299,127.
- Iowa, with a total vote of 316,377.
- California, with 11 representatives, had a total vote of 644,790.
- States with 12 representatives:
- Georgia, with a total vote of 59,196.
- New Jersey, with a total vote of 338,461.
- Indiana, with 13 representatives, had a total vote of 565,216.
He was defeated by the Democratic Southern Bloc.
Woodrow Wilson's elections
In 1912 Woodrow Wilson became the first Southerner to win a presidential election since 1856, after gaining an electoral advantage from a split in the Republican Party and an Electoral College bonus from Democratic control of southern votes by means of black disenfranchisement and the hobbling of the Republicans in the South. In 1912, the extra Southern electoral votes were not a decisive factor: Wilson won the election in a landslide, not only winning every Southern electoral vote but also winning a large majority of electoral votes outside the South. This was entirely due to the split in the Republican Party; indeed, had all of the voters for President Taft and former President Roosevelt backed a single candidate, Wilson would have lost the election even with the extra Southern votes. However, Southern electoral votes did prove decisive in securing Wilson's reelection in the much closer 1916 presidential election. Wilson changed national race relations shortly after taking office in 1913. He overtly instituted racial segregation in workplaces throughout the federal government and established racial discrimination in hiring. During World War I the military was segregated, with black soldiers poorly trained and equipped, and often sent on suicide missions. Troops were still segregated during World War II, at southern Congressional insistence. In 1948 Democratic President Harry Truman used an executive order to end racial segregation in the military, overseeing a multi-year process of change.
Legislative and cultural effects
20th-century Supreme Court decisions
Black Americans and their allies worked hard to regain their ability to exercise the constitutional rights of citizens. Booker T. Washington, widely known for his accommodationist approach as the leader of the Tuskegee Institute, called on northern backers to help finance legal challenges to disenfranchisement and segregation. He raised substantial funds and also arranged for representation in some cases, such as the two for Giles in Alabama. He challenged the state's grandfather clause and a citizenship test required of new voters, which was administered in a discriminatory way against blacks. In its ruling in Giles v. Harris (1903), the United States Supreme Court, in an opinion by Justice Oliver Wendell Holmes, Jr., effectively upheld such southern voter registration provisions in dealing with a challenge to the Alabama constitution. Its decision said the provisions were not targeted at blacks and thus did not deprive them of rights. This has been characterized as the "most momentous ignored decision" in constitutional history.
Trying to address the grounds of the Court's ruling, Giles mounted another challenge. In Giles v. Teasley (1904), the U.S. Supreme Court upheld Alabama's disenfranchising constitution. That same year Congress refused to overturn a disputed election and essentially sent plaintiffs back to the state courts. Even when black plaintiffs gained rulings in their favor from the Supreme Court, states quickly devised alternative ways to exclude them from the political process. It was not until later in the 20th century that legal challenges to disenfranchisement began to meet more success in the courts. With the founding of the National Association for the Advancement of Colored People (NAACP) in 1909, the interracial group based in New York began to provide financial and strategic support to lawsuits on voting issues. What became the NAACP Legal Defense Fund organized and mounted numerous legal challenges to the many barriers of segregation, including the disenfranchisement provisions of the states. The NAACP often represented plaintiffs directly, or helped raise funds to support legal challenges. The NAACP also worked through public education, lobbying of Congress, demonstrations, and the encouragement of theater and academic writing as other means of reaching the public. NAACP chapters were organized in cities across the country, and membership increased rapidly in the South. The American Civil Liberties Union also represented plaintiffs in some disenfranchisement cases. In Smith v. Allwright (1944), the Supreme Court reviewed a Texas case and ruled against the white primary; the state legislature had authorized the Democratic Party to devise its own rules of operation, and the 1944 ruling held that this was unconstitutional, as the state had failed to protect the constitutional rights of its citizens. Following the 1944 ruling, civil rights organizations in major cities moved quickly to register black voters. For instance, in Georgia, in 1940 only 20,000 blacks had managed to register to vote. After the Supreme Court decision, the All-Citizens Registration Committee (ACRC) of Atlanta started organizing, and by 1947 it and others had succeeded in getting 125,000 black Americans registered, 18.8% of those of eligible age. Each legal victory was followed by white-dominated legislatures' renewed efforts to control black voting through different exclusionary schemes. In 1958 Georgia passed a new voter registration act that required those who were illiterate to satisfy "understanding tests" by correctly answering 20 of 30 questions related to citizenship posed by the voting registrar. Blacks had made substantial advances in education, but the individual white registrars were the sole judges of whether prospective voters answered correctly. In practice, registrars disqualified most black voters, whether they were educated or not. In Terrell County, for instance, which was 64% black in population, only 48 black Americans were able to register to vote in 1958 after passage of the act.
Civil rights movement
The NAACP's steady progress with individual cases was thwarted by southern Democrats' continuing resistance and passage of new statutory barriers to blacks' exercising the franchise. Through the 1950s and 1960s, private citizens enlarged the effort by becoming activists throughout the South, led by many black churches and their leaders, and joined by both young and older activists from northern states.
Nonviolent confrontations and demonstrations were mounted in numerous Southern cities, often provoking violent reactions from white bystanders and authorities. The moral crusade of the Civil Rights Movement gained national media coverage, attention across the country, and a growing demand for change. Widespread violence against the Freedom Riders in 1961, covered by television and newspapers, and the murders of activists in Alabama in 1963 gained support for the activists' cause at the national level. President John F. Kennedy introduced civil rights legislation to Congress in 1963 before he was assassinated; President Lyndon B. Johnson took up the cause. In January 1964, Johnson met with civil rights leaders. On January 8, during his first State of the Union address, Johnson asked Congress to "let this session of Congress be known as the session which did more for civil rights than the last hundred sessions combined." On January 23, 1964, the 24th Amendment to the U.S. Constitution, prohibiting the use of poll taxes in national elections, was ratified with the approval of South Dakota, the 38th state to do so. On June 21, 1964, civil rights workers Michael Schwerner, Andrew Goodman, and James Chaney disappeared in Neshoba County, Mississippi. The three were volunteers aiding in the registration of black voters as part of the Mississippi Freedom Summer Project. Forty-four days later the Federal Bureau of Investigation recovered their bodies from an earthen dam where they had been buried. The Neshoba County deputy sheriff, Cecil Price, and 16 others, all Ku Klux Klan members, were indicted for the murders; seven were convicted. The investigation also turned up the bodies of several black men whose deaths had never been revealed or prosecuted by white law enforcement officials. When the Civil Rights Bill came before the full Senate for debate on March 30, 1964, the "Southern Bloc" of 18 southern Democratic Senators and one Republican Senator, led by Richard Russell (D-GA), launched a filibuster to prevent its passage. Russell said:
- "We will resist to the bitter end any measure or any movement which would have a tendency to bring about social equality and intermingling and amalgamation of the races in our (Southern) states."
After 57 working days of filibuster, and several compromises, the Senate had enough votes (71 to 29) to end the debate and the filibuster. It was the first time that Southern senators had failed to defeat a civil rights bill with such tactics. On July 2, President Johnson signed into law the Civil Rights Act of 1964. The Act prohibited segregation in public places and barred unequal application of voter registration requirements. It did not explicitly ban literacy tests, which had been used to disqualify black and poor white voters. As the United States Department of Justice has stated: "By 1965 concerted efforts to break the grip of state disenfranchisement (sic) had been under way for some time, but had achieved only modest success overall and in some areas had proved almost entirely ineffectual. The murder of voting-rights activists in Philadelphia, Mississippi, gained national attention, along with numerous other acts of violence and terrorism. Finally, the unprovoked attack on March 7, 1965, by state troopers on peaceful marchers crossing the Edmund Pettus Bridge in Selma, Alabama, en route to the state capitol in Montgomery, persuaded the President and Congress to overcome Southern legislators' resistance to effective voting rights legislation.
President Johnson issued a call for a strong voting rights law and hearings began soon thereafter on the bill that would become the Voting Rights Act." Passed in 1965, this law prohibited the use of literacy tests as a requirement to register to vote. It gave local voters recourse to federal oversight and intervention, and provided for federal monitoring of areas that historically had low voter turnout, to ensure that new measures were not taken against minority voters. It provided for federal enforcement of voting rights. African Americans began to enter the formal political process, most for the first time, in the South. They have since won numerous seats and offices at the local, state, and federal levels.
- Felony disenfranchisement
- Electoral fraud
- Jim Crow laws
- Nadir of American race relations
- Race legislation in the United States
- Voting rights in the United States
- "Disenfranchise vs. disfranchise". Grammarist. Retrieved 2014-09-19.
- Richard M. Valelly, The Two Reconstructions: The Struggle for Black Enfranchisement, University of Chicago Press, 2009, pp. 146-147.
- "Another Open Letter to Woodrow Wilson, W.E.B. DuBois, September 1913". Teachingamericanhistory.org. Retrieved 2013-02-28.
- "Chronology of Emancipation during the Civil War". University of Maryland: Department of History.
- Gabriel J. Chin & Randy Wagner, "The Tyranny of the Minority: Jim Crow and the Counter-Majoritarian Difficulty", 43 Harvard Civil Rights-Civil Liberties Law Review 65 (2008).
- Andrews, E. Benjamin (1912). History of the United States. New York: Charles Scribner's Sons.
- George C. Rable, But There Was No Peace: The Role of Violence in the Politics of Reconstruction, Athens: University of Georgia Press, 1984, p. 132.
- "Key Events in the Presidency of Rutherford B. Hayes". American President: A Reference Resource. Miller Center. Retrieved 8 January 2013.
- J. Morgan Kousser, The Shaping of Southern Politics: Suffrage Restriction and the Establishment of the One-Party South, 1880-1910, p. 104.
- Richard H. Pildes, "Democracy, Anti-Democracy, and the Canon", Constitutional Commentary, Vol. 17, 2000, accessed 10 Mar 2008.
- Richard H. Pildes, "Democracy, Anti-Democracy, and the Canon", Constitutional Commentary, Vol. 17, 2000, p. 10, accessed 10 Mar 2008.
- "COMMITTEE AT ODDS ON REAPPORTIONMENT", The New York Times, 20 Dec 1900, accessed 10 Mar 2008.
- W.E.B. DuBois, Black Reconstruction in America, 1868-1880, New York: Oxford University Press, 1935; reprint, New York: The Free Press, 1998.
- Michael Perman, Struggle for Mastery: Disenfranchisement (sic) in the South, 1888-1908, Chapel Hill: University of North Carolina Press, 2001, Introduction.
- "1878-1895: Disenfranchisement (sic)", Southern Education Foundation, accessed 16 Mar 2008.
- J. Morgan Kousser, The Shaping of Southern Politics: Suffrage Restriction and the Establishment of the One-Party South, New Haven: Yale University Press, 1974.
- Glenn Feldman, The Disenfranchisement Myth: Poor Whites and Suffrage Restriction in Alabama, Athens: University of Georgia Press, 2004, pp. 135-136.
- Woodrow Wilson, one of the two Democrats elected to the presidency between Abraham Lincoln and Franklin D. Roosevelt, was elected due to a "bonus" resulting from the disenfranchisement of blacks and crippling of the Republican Party in the South. Soon after taking office, Wilson directed the segregation of federal facilities in the District of Columbia, which had been integrated during Reconstruction.
- "Virginia's Constitutional Convention of 1901–1902". Virginia Historical Society.
- "Civil Rights during the administration of Lyndon B. Johnson". LBJ Library and Museum. Retrieved 2007-02-25. - "Introduction To Federal Voting Rights Laws". United States Department of Justice. Retrieved 2007-02-25.
SAT Math 1 & 2 Subject Tests The rules of trigonometry tested on the Math Level 1 Subject Test are much more limited than those tested on the Math Level 2 Subject Test. Trigonometry on the Math Level 1 Subject Test is confined to right triangles and the most basic relationships between the sine, cosine, and tangent functions. If you”re taking the Math Level 1, that”s the only material from this chapter you need to know. If you plan to take the Math Level 2, then this entire chapter is your domain; rule it wisely. Here are some trigonometric terms that appear on the Math Subject Tests. Make sure you”re familiar with them. If the meaning of any of these vocabulary words keeps slipping your mind, add that word to your flash cards. An angle whose measure in degrees is between 0 and 90, exclusive. An angle whose measure in degrees is between 90 and 180, exclusive. The symbol θ (pronounced thay-tuh) is a variable, just like x and y, used to represent the measure of an angle in trigonometry. Prefix added to trigonometric functions, meaning inverse. THE BASIC FUNCTIONS The basis of trigonometry is the relationship between the parts of a right triangle. When you know the measure of one of the acute angles in a right triangle, you know all the angles in that triangle. For example, if you know that a right triangle contains a 20° angle, then you know all three angles—the right triangle must have a 90° angle, and because there are 180° in a triangle, the third angle must measure 70°. You don”t know the lengths in the triangle, but you know its shape and its proportions. Similar Right Triangles Remember that similar triangles have the same angles. So, any right triangle that contains a 20° angle will be similar to all other right triangles with a A right triangle that contains a 20° angle can have only one shape, though it can be any size. The same is true for a right triangle containing any other acute angle. That”s the fundamental idea of trigonometry. Once you know the measure of an acute angle in a right triangle, you know that triangle”s proportions. The three basic functions in trigonometry—the sine, cosine, and tangent—are ways of expressing proportions in a right triangle (that”s the ratio of one side to another). They may sound familiar to you. Or maybe you”ve heard of a little phrase called SOHCAHTOA? Let”s break it down. The sine of an angle is the ratio of the opposite side to the hypotenuse. The sine function of an angle θ is abbreviated sin θ. It”s All About A trigonometric function of any angle comes from the proportions of a right triangle containing that angle. For any given angle, there is only one possible set of proportions. The cosine of an angle is the ratio of the adjacent side to the hypotenuse. The cosine function of an angle θ is abbreviated cos θ. The tangent of an angle is the ratio of the opposite side to the adjacent side. The tangent function of an angle θ is abbreviated tan θ. These three functions form the basis of everything else in trigonometry. All of the more complicated functions and rules in trigonometry can be derived from the information contained in SOHCAHTOA. What Your Calculator Can Do for You Tables of sine, cosine, and tangent values are programmed into your calculator—that”s what the “sin,” “cos,” and “tan” keys do. • If you press one of the three trigonometric function keys and then enter an angle measure, your calculator will give you the function (sine, cosine, or tangent) of that angle. Just make sure that your calculator is in degree mode. 
This operation is written: sin 30° = 0.5, cos 30° = 0.866, tan 30° = 0.577. • Your calculator can also take a trig function value and tell you what angle would produce that value. Press the "2nd" key, then press "sin," "cos," or "tan," then enter the decimal or fraction you're given, and your calculator will give you the measure of that angle. This is called taking an inverse function, and it's written sin−1 (0.5) or arcsin (0.5). The expressions "sin−1 (0.5)" and "arcsin (0.5)" have the same meaning. Both mean "the angle whose sine is 0.5." While ordinary trig functions take angle measures and output ratios, inverse trig functions take ratios and produce the corresponding angle measures; they work in reverse. Check Your Calculator: For some scientific calculators, you need to enter things in reverse order. To find sin 30°, for example, you would type "30" first and then hit "sin." To find sin−1 (0.5), you would type "0.5" first and then hit "2nd" and "sin." Finding Trig Functions in Right Triangles On the Math Level 1 Subject Test, the three basic trigonometric functions always occur in right triangles—particularly the Pythagorean triplets from Chapter 5. Special Right Triangles: Be on the lookout for special right triangles on the Math Subject Tests. Use the definitions of the sine, cosine, and tangent to fill in the requested quantities in the following triangles. The answers to these drills can be found in Chapter 12. 1. sin θ = _____________ cos θ = _____________ tan θ = _____________ 2. sin θ = _____________ cos θ = _____________ tan θ = _____________ 3. sin θ = _____________ cos θ = _____________ tan θ = _____________ 4. sin θ = _____________ cos θ = _____________ tan θ = _____________ The preceding examples have all involved figuring out the values of trigonometric functions from lengths in a right triangle. Slightly more difficult trigonometry questions may require you to go the other way and figure out lengths or measures of angles using trigonometry. For example: x = _____________ Check Your Mode: For Math Level 1, your calculator should always be in degree mode. For Math Level 2, it may sometimes need to be in radian mode (more on that later in this chapter). Because we're dealing with the hypotenuse and the side that is opposite the angle, the best definition to use is sine. sin 35° = x/5, so 5(sin 35°) = x; 5(0.5736) = x; 2.8679 = x. BC of ∆ABC therefore has a length of 2.87. In triangle ABC, you know only two quantities—the length of AB and the measure of ∠A. This question, unlike previous examples, doesn't give you enough information to use the Pythagorean theorem. What you need is an equation that relates the information you have (AB and ∠A) to the information you don't have (x). Use the SOHCAHTOA definitions to set up an equation. Solve that equation, and you find the value of the unknown. You can use a similar technique to find the measure of an unknown angle in a right triangle. For example: x = ________________________ In triangle DEF, you know EF and DF. EF is the side that is opposite the angle we're looking for, and DF is the side that is adjacent to that same angle. So the best definition to use is tangent. tan x = EF/DF, and the lengths in the figure give tan x = 0.5. To solve for x, take the inverse tangent of both sides of the equation. On the left side, that just gives you x. The result is the angle whose tangent is 0.5. tan−1 (tan x) = tan−1 (0.5), so x = 26.57°. Let Your Calculator Help: To take the inverse tangent of the right side of this equation, press the "2nd" key, press the "tan" key, and then type in 0.5. The measure of ∠D is therefore 26.57°.
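These two moves (multiplying by the sine of a known angle to get a side, and taking an inverse function to get an angle) are easy to sanity-check with a short script. Here is a minimal sketch in Python; the values mirror the two worked examples above, and note that Python's math module works in radians, so degrees are converted explicitly:

```python
import math

# Forward: the side opposite the 35 degree angle when the hypotenuse is 5.
# sin 35 = opposite / hypotenuse, so opposite = 5 * sin 35.
opposite = 5 * math.sin(math.radians(35))
print(round(opposite, 2))  # 2.87 -- the length of BC

# Inverse: the angle whose tangent is 0.5 (opposite / adjacent = 0.5).
angle = math.degrees(math.atan(0.5))
print(round(angle, 2))  # 26.57 -- the measure of angle D
```

This is exactly the degree-mode/radian-mode distinction the calculator sidebars warn about: math.radians and math.degrees play the role of switching modes.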
Use the techniques you've just reviewed to complete the following triangles. The answers to these drills can be found in Chapter 12. 1. AB = _____________ CA = _____________ ∠B = _____________ 2. EF = _____________ FD = _____________ ∠D = _____________ 3. HJ = _____________ JK = _____________ ∠J = _____________ 4. LM = _____________ MN = _____________ ∠N = _____________ 5. TR = _____________ ∠S = _____________ ∠T = _____________ 6. YW = _____________ ∠W = _____________ ∠Y = _____________ Some Math Subject Test questions will ask you to do algebra with trigonometric functions. These questions usually involve using the SOHCAHTOA definitions of sine, cosine, and tangent. Often, the way to simplify equations that are mostly made up of trigonometric functions is to express the functions in terms of the sides of a right triangle: sin θ = O/H, cos θ = A/H, and tan θ = O/A, where O and A are the legs opposite and adjacent to θ and H is the hypotenuse. Writing trig functions this way can simplify trig equations, as the following example shows: sin x ÷ cos x = (O/H) ÷ (A/H) = O/A = tan x. Working with trig functions this way lets you simplify expressions. The equation above is actually a commonly used trigonometric identity. You should memorize this, as it can often be used to simplify equations: sin x/cos x = tan x. Here's the breakdown of another frequently used trigonometric identity: sin² θ + cos² θ = (O/H)² + (A/H)² = (O² + A²)/H² = 1. That last step may seem a little baffling, but it's really simple. This equation is based on a right triangle, in which O and A are legs of the triangle, and H is the hypotenuse. Consequently you know that O² + A² = H². That's just the Pythagorean theorem. That's what lets you do the last step, in which (O² + A²)/H² = 1. This completes the second commonly used identity that you should memorize: sin² θ + cos² θ = 1. In addition to memorizing these two identities, you should practice working algebraically with trig functions in general. Some questions may require you to use the SOHCAHTOA definitions of the trig functions; others may require you to use the two identities you've just reviewed. Take a look at these examples: 35. If sin x = 0.707, then what is the value of (sin x) • (cos x) • (tan x) ? Here's How to Crack It: This is a tricky question. To solve it, simplify that complicated trigonometric expression. Writing in the SOHCAHTOA definitions works just fine, but in this case it's even faster to use one of those identities: (sin x)(cos x)(tan x) = (sin x)(cos x)(sin x/cos x) = sin² x. Now it's a simpler matter to answer the question. If sin x = 0.707, then sin² x = 0.5. The answer is (C). Take a look at this one: 36. If sin a = 0.4, and 1 − cos² a = x, then what is the value of x ? Here's How to Crack It: Here again, the trick to the question is simplifying the complicated trig expression. Since sin² θ + cos² θ = 1, you can rearrange any of those terms to rephrase it. Using the second trig identity, you can quickly take these steps: 1 − cos² a = x; sin² a = x; (0.4)² = x; x = 0.16. And that's the answer. (E) is correct. Using the SOHCAHTOA definitions and the two trigonometric identities reviewed in this section, simplify trigonometric expressions to answer the following practice questions. The answers to these drills can be found in Chapter 12. 25. (1 − sin x)(1 + sin x) = (A) cos x (B) sin x (C) tan x (D) cos² x (E) sin² x 39. −(sin x)(tan x) = (A) cos x (B) sin x (C) tan x (D) cos² x (E) sin² x The Other Trig Functions On the Math Level 2 Subject Test, you may run into the other three trigonometric functions—the cosecant, secant, and cotangent.
These functions are abbreviated csc θ, sec θ, and cot θ, respectively, and they are simply the reciprocals of the three basic trigonometric functions you've already reviewed. Here's how they relate: csc θ = 1/sin θ, sec θ = 1/cos θ, and cot θ = 1/tan θ. You can also express these functions in terms of the sides of a right triangle—just by flipping over the SOHCAHTOA definitions of the three basic functions: csc θ = H/O, sec θ = H/A, and cot θ = A/O. These three functions generally show up in algebra-style questions, which require you to simplify complex expressions containing trig functions. The goal is usually to get an expression into the simplest form possible, one that contains no fractions. Such questions are like algebra-style questions involving the three basic trig functions; the only difference is that the addition of three more functions increases the number of possible forms an expression can take. For example: (cos x)(cot x) + (sin² x)(csc x) = cos² x/sin x + sin x = (cos² x + sin² x)/sin x = 1/sin x = csc x. The entire expression (cos x)(cot x) + (sin² x)(csc x) is therefore equivalent to a single trig function, the cosecant of x. That's generally the way algebraic trigonometry questions work on the Math Level 2 Subject Test. Simplify each of these expressions to a single trigonometric function. Keep an eye out for the trigonometric identities reviewed on this page; they'll still come in handy. The answers to these drills can be found in Chapter 12. 19. sec² x − 1 = (A) sin x cos x (B) sec² x (C) cos² x (D) sin² x (E) tan² x 24. sin x + (cos x)(cot x) = (A) csc x (B) sec x (C) cot x (D) tan x (E) sin x GRAPHING TRIGONOMETRIC FUNCTIONS There are two common ways to represent trigonometric functions graphically—on the unit circle, or on the coordinate plane (you'll get a good look at both methods in the coming pages). Both of these graphing approaches are ways of showing the repetitive nature of trigonometric functions. All of the trig functions (sine, cosine, and the rest) are called periodic functions. That simply means that they cycle repeatedly through the same values. The Unit Circle What Goes Around Comes Around: If you picked a certain angle and its sine, cosine, and tangent, and then slowly changed the measure of that angle, you'd see the sine, cosine, and tangent change as well. But after a while, you would have increased the angle by 360°—in other words, you would come full circle, back to the angle you started with, going counterclockwise. The new angle, equivalent to the old one, would have the same sine, cosine, and tangent as the original. As you continued to increase the angle's measure, the sine, cosine, and tangent would cycle through the same values all over again. All trigonometric functions repeat themselves every 360°. (The tangent and cotangent functions actually repeat every 180°.) Thus, angles of 0° and 360° are mathematically equivalent. So are angles of 40° and 400°, or 360° and 720°. Any two angle measures separated by 360° are equivalent. For example, to find angles equivalent to 40°, you just keep adding 360°. Likewise, you can go around the unit circle clockwise by subtracting multiples of 360°. Some angles equivalent to 40° would thus be 40° − 360° = −320°, as well as −680°, −1040°, and so on. In the next few sections, you'll see how that's reflected in the graphs of trigonometric functions. This is the unit circle. It looks a little like the coordinate plane; in fact, it is the coordinate plane, or at least a piece of it. The circle is called the unit circle because it has a radius of 1 (a single unit). This is convenient because it makes trigonometric values easy to figure out.
The radius touching any point on the unit circle is the hypotenuse of a right triangle. The length of the horizontal leg of the triangle is the cosine (which is therefore the x-coordinate) and the length of the vertical leg is the sine (which is the y-coordinate). It works out this way because sine = opposite ÷ hypotenuse, and cosine = adjacent ÷ hypotenuse; and here the hypotenuse is 1, so the sine is simply the length of the opposite side, and the cosine simply the length of the adjacent side. Suppose you wanted to show the sine and cosine of a 30° angle. That angle would appear on the unit circle as a radius drawn at a 30° angle to the positive x-axis (above). The x-coordinate of the point where the radius intercepts the circle is 0.866, which is the value of cos 30°. The y-coordinate of that point is 0.5, which is the value of sin 30°. Now take a look at the sine and cosine of a 150° angle. As you can see, it looks just like the 30° angle, flipped over the y-axis. Its y-value is the same—sin 150° = 0.5—but its x-value is now negative. The cosine of 150° is −0.866. Here, you see the sine and cosine of a 210° angle. Once again, this looks just like the 30° angle, but this time flipped over the x- and y-axes. The sine of 210° is −0.5; the cosine of 210° is −0.866. This is the sine and cosine of a 330° angle. Like the previous angles, the 330° angle has a sine and cosine equivalent in magnitude to those of the 30° angle. In the case of the 330° angle, the sine is negative and the cosine positive. So, sin 330° = −0.5 and cos 330° = 0.866. Notice that a 330° angle is equivalent to an angle of −30°. Following these angles around the unit circle gives us some useful information about the sine and cosine functions. • Sine is positive between 0° and 180° and negative between 180° and 360°. At 0°, 180°, and 360°, sine is zero. At 90°, sine is 1. At 270°, sine is −1. • Cosine is positive between 0° and 90° and between 270° and 360°. (You could also say that cosine is positive between −90° and 90°.) Cosine is negative between 90° and 270°. At 90° and 270°, cosine is zero. At 0° and 360°, cosine is 1. At 180°, cosine is −1. When these angles are sketched on the unit circle, sine is positive in quadrants I and II, and cosine is positive in quadrants I and IV. There's another important piece of information you can get from the unit circle. The biggest value that can be produced by a sine or cosine function is 1. The smallest value that can be produced by a sine or cosine function is −1. Following the tangent function around the unit circle also yields useful information. The sine of 45° is √2/2, or 0.707, and the cosine of 45° is also √2/2, or 0.707. Since the tangent is the ratio of the sine to the cosine, that means that the tangent of 45° is 1. The tangent of 135° is −1. Here the sine is positive, but the cosine is negative. The tangent of 225° is 1. Here the sine and cosine are both negative. The tangent of 315° is −1. Here the sine is negative, and the cosine is positive. This is the pattern that the tangent function always follows. It's positive in quadrants I and III and negative in quadrants II and IV. • Tangent is positive between 0° and 90° and between 180° and 270°. • Tangent is negative between 90° and 180° and between 270° and 360°.
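All of these unit-circle values, and the sin² θ + cos² θ = 1 identity behind them (the point (cos θ, sin θ) always lies on a circle of radius 1), can be spot-checked numerically. A minimal sketch, using the same four angles discussed above:

```python
import math

# For each angle, print sin, cos, and sin^2 + cos^2.
# On the unit circle the point (cos t, sin t) satisfies x^2 + y^2 = 1,
# which is exactly the identity sin^2 t + cos^2 t = 1.
for deg in (30, 150, 210, 330):
    t = math.radians(deg)
    s, c = math.sin(t), math.cos(t)
    print(deg, round(s, 3), round(c, 3), round(s**2 + c**2, 10))

# Output:
# 30   0.5   0.866  1.0
# 150  0.5  -0.866  1.0
# 210 -0.5  -0.866  1.0
# 330 -0.5   0.866  1.0
```

The sign pattern in the sine and cosine columns is the quadrant pattern described above: sine positive in quadrants I and II, cosine positive in quadrants I and IV.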
The unit circle is extremely useful for identifying equivalent angles (like 270° and −90°), and also for seeing other correspondences between angles, like the similarity between the 45° angle and the 135° angle, which are mirror images of one another on the unit circle. A good way to remember where sine, cosine, and tangent are positive is to write the words of the phrase All Students Take Calculus in quadrants I, II, III, and IV, respectively, on the coordinate plane. The first letter of each word (A S T C) tells you which functions are positive in that quadrant. So All three functions are positive in quadrant I, the Sine function is positive in quadrant II, the Tangent function is positive in quadrant III, and the Cosine function is positive in quadrant IV. Make simple sketches of the unit circle to answer the following questions about angle equivalencies. The answers to these drills can be found in Chapter 12. 18. If sin 135° = sin x, then x could equal 21. If cos 60° = cos n, then n could be 26. If sin 30° = cos t, then t could be 30. If tan 45° = tan x, then which of the following could be x ? 36. If 0° ≤ θ ≤ 360° and (sin θ)(cos θ) < 0, which of the following gives the possible values of θ ? (A) 0° ≤ θ ≤ 180° (B) 0° ≤ θ ≤ 180° or 270° ≤ θ ≤ 360° (C) 0° < θ < 90° or 180° < θ < 270° (D) 90° < θ < 180° or 270° < θ < 360° (E) 0° < θ < 180° or 270° < θ < 360° Degrees and Radians On the Math Level 2 Subject Test, you may run into an alternate means of measuring angles. This alternate system measures angles in radians rather than degrees. One degree is defined as 1/360 of a full circle. One radian, on the other hand, is the measure of an angle that intercepts an arc exactly as long as the circle's radius. Since the circumference of a circle is 2π times the radius, the circumference is about 6.28 times as long as the radius, and there are about 6.28 radians in a full circle. Because a number like 6.28 isn't easy to work with, angle measurements in radians are usually given in multiples or fractions of π. For example, there are exactly 2π radians in a full circle. There are π radians in a semicircle. There are π/2 radians in a right angle. Because 2π radians and 360° both describe a full circle, you can relate degrees and radians with the following proportion: degrees/360 = radians/2π. To convert degrees to radians, just plug the number of degrees into the proportion and solve for radians (a quick sketch of this conversion in code follows below). The same technique works in reverse for converting radians to degrees. The figures on the next page show what the unit circle looks like in radians, compared to the unit circle in degrees. By referring to these unit circles and using the proportion given on this page, fill in the following chart of radian−degree equivalencies. The answers to these drills can be found in Chapter 12. A scientific or graphing calculator can calculate trigonometric functions of angles entered in radians, as well. However, it is necessary to shift the calculator from degree mode into radian mode. Consult your calculator's operating manual and make sure you know how to do this. Trigonometric Graphs on the Coordinate Plane In a unit-circle diagram, the x-axis and y-axis represent the horizontal and vertical components of an angle, just as they do on the coordinate plane. The angle itself is represented by the angle between a certain radius and the positive x-axis. Any trigonometric function can be represented on a unit-circle diagram. Trigonometric functions are called periodic functions. The period of a function is the distance a function travels before it repeats.
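Here is the promised sketch of the degree–radian proportion in Python. The helper names deg_to_rad and rad_to_deg are hypothetical; the standard library's math.radians and math.degrees do the same job:

```python
import math

def deg_to_rad(deg):
    # degrees / 360 = radians / (2 * pi)  =>  radians = deg * pi / 180
    return deg * math.pi / 180

def rad_to_deg(rad):
    return rad * 180 / math.pi

print(deg_to_rad(90))                       # 1.5707963... , i.e. pi/2
print(round(rad_to_deg(math.pi / 6), 10))   # 30.0
print(math.radians(90))                     # same result as deg_to_rad(90)
```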
A periodic function will repeat the same pattern of values forever. As you can see from the graph, the period of the sine function is 2π radians. When a single trigonometric function is graphed, however, the axes take on different meanings. The x-axis represents the value of the angle; this axis is usually marked in radians. The y-axis represents a specific trigonometric function of that angle. For example, here is the coordinate plane graph of the sine function. Compare this graph to the unit circle on this page. A quick comparison will show you that both graphs present the same information. At an angle of zero, the sine is zero; at a quarter circle (π/2 radians, or 90°), the sine is 1; and so on. Make Things Easier: Because the sine and cosine curves have the same shape and size, you can focus on memorizing the facts for just one. Here is the graph of the cosine function. Notice that the cosine curve is identical to the sine curve, only shifted to the left by π/2 radians, or 90°. The cosine function also has a period of 2π radians. Finally, here is the graph of the tangent function. This function, obviously, is very different from the others. First, the tangent function has no upper or lower limit, unlike the sine and cosine functions, which produce values no higher than 1 or lower than −1. Second, the tangent function has asymptotes. These are values on the x-axis at which the tangent function does not exist; they are represented by vertical dotted lines. Finally, the tangent function has a period of π radians. The Undefined Tangent: It's easy to see why the tangent function's graph has asymptotes, if you recall the definition of the tangent: tan θ = sin θ/cos θ. A fraction is undefined whenever its denominator equals zero. At any value where the cosine function equals zero, therefore, the tangent function is undefined—it doesn't exist. As you can see by comparing the cosine and tangent graphs, the tangent has an asymptote wherever the cosine function equals zero. It's important to be able to recognize the graphs of the three basic trigonometric functions. You'll find more information about these functions and their graphs in the following chapter on functions. TRIGONOMETRY IN NON-RIGHT TRIANGLES The rules of trigonometry are based on the right triangle, as you've seen in the preceding sections. Right triangles are not, however, the only places you can use trigonometric functions. There are a couple of powerful rules relating angles and lengths that you can use in any triangle. These are rules that only come up on the Math Level 2 Subject Test, and there are only two basic laws you need to know—the Law of Sines and the Law of Cosines. The Law of Sines The Law of Sines can be used to complete the dimensions of a triangle about which you have partial information. This is what the law says: sin A/a = sin B/b = sin C/c, where a, b, and c are the sides opposite angles A, B, and C. In English, this law means that the sine of each angle in a triangle is related to the length of the opposite side by a constant proportion. Once you figure out the proportion relating the sine of one angle to the opposite side, you know the proportion for all three. Let's take a look at an example. ∠B = _____________ AB = _____________ AC = _____________ In this triangle, you know only two angles and one side. Immediately, you can fill in the third angle, knowing that there are 180° in a triangle. Then, you can fill in the missing sides using the Law of Sines. Write out the proportions of the Law of Sines, filling in the values you know.
We Know, We Know: Yes, 0.643 ÷ 8 rounds to 0.0804, but if you keep the value of sin 40° in your calculator and divide by 8, you'll get 0.0803. At this point, you can set up two individual proportions and solve them individually for b and c, respectively. The length of AB is therefore 6.23, and the length of AC is 11.70. Now you know every dimension of triangle ABC. The Law of Sines can be used in any triangle if you know • two sides and one of their opposite angles (this can give you two different possible triangles) • two angles and any side The Law of Cosines When you don't have the information necessary to use the Law of Sines, you may be able to use the Law of Cosines instead. The Law of Cosines is another way of using trigonometric functions to complete partial information about a triangle's dimensions: c² = a² + b² − 2ab cos C. The Law of Cosines is a way of completing the dimensions of any triangle. You'll notice that it looks a bit like the Pythagorean theorem. That's basically what it is, with a term added to the end to compensate for non-right angles. If you use the Law of Cosines on a right triangle, the "2ab cos C" term becomes zero, and the law becomes the Pythagorean theorem. The Law of Cosines can be used to fill in unknown dimensions of a triangle when you know any three of the quantities in the formula. c = ______________ ∠A = ______________ ∠B = ______________ In this triangle, you know only two sides and an angle—the angle between the known sides. That is, you know a, b, and C. In order to find the length of the third side, c, just fill the values you know into the Law of Cosines, and solve. c² = a² + b² − 2ab cos C; c² = (10)² + (12)² − 2(10)(12) cos 45°; c² = 100 + 144 − 240(0.707); c² = 74.3; c = 8.62. The length of AB is therefore 8.62. Now that you know the lengths of all three sides, just use the Law of Sines to find the values of the unknown angles, or rearrange the Law of Cosines to put the other unknown angles in the C position, and solve to find the measures of the unknown angles (a short numeric check of both laws follows below). The Law of Cosines can be used in any triangle if you know • all three sides • two sides and the angle between them In the following practice exercises, use the Law of Sines and the Law of Cosines to complete the dimensions of these non-right triangles. The answers to these drills can be found in Chapter 12. 1. a = ______________ ∠B = ______________ ∠C = ______________ 2. ∠A = ______________ ∠B = ______________ ∠C = ______________ 3. c = ______________ ∠B = ______________ ∠C = ______________ Polar coordinates are another way of describing the position of a point in the coordinate plane. In the previous figure, the position of point P can be described in two ways. In standard rectangular coordinates, you would count across from the origin to get an x-coordinate and up from the origin to get a y-coordinate. (Remember: These x and y distances can be regarded as legs of a right triangle. The hypotenuse of the triangle is the distance between the point and the origin.) Rectangular coordinates consist of a horizontal distance and a vertical distance, and take the form (x, y). In rectangular coordinates, point P would be described as (5√3, 5). Polar coordinates consist of the distance, r, between a point and the origin, and the angle, θ, between that segment and the positive x-axis. Polar coordinates thus take the form (r, θ). The angle θ can be expressed in degrees, but is more often expressed in radians. In polar coordinates, therefore, P could be described as (10, 30°) or (10, π/6).
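Before going further with polar coordinates, here is the promised numeric check of the two triangle laws. A minimal sketch, assuming the same givens as the Law of Cosines example above (a = 10, b = 12, C = 45°); the Law of Sines step then recovers one of the missing angles:

```python
import math

# Law of Cosines: c^2 = a^2 + b^2 - 2ab cos C
a, b = 10, 12
C = math.radians(45)
c = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(C))
print(round(c, 2))  # 8.62 -- the length of AB found above

# Law of Sines: sin A / a = sin C / c, so sin A = a * sin C / c
A = math.degrees(math.asin(a * math.sin(C) / c))
print(round(A, 2))  # about 55.12 degrees for angle A
```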
As you saw in the unit circle, there's more than one way to express any angle. For any angle, there is an infinite number of equivalent angles that can be produced by adding or subtracting 360° (or 2π, if you're working in radians) any number of times. Therefore, there is an infinite number of equivalent polar coordinates for any point. Point P, at (10, 30°), can also be expressed as (10, 390°), or (10, 13π/6). You can continually produce equivalent expressions by adding or subtracting 360° (or 2π). There's still another way to produce equivalent polar coordinates. The distance from the origin—the r in (r, θ)—can be negative. This means that once you've found the angle at which the hypotenuse must extend, a negative distance extends in the opposite direction, 180° away from the angle. Therefore, you can also create equivalent coordinates by increasing or decreasing the angle by 180° and flipping the sign on the distance. The point P(10, 30°) or (10, π/6) could also be expressed as (−10, 210°) or (−10, 7π/6). Other equivalent coordinates can be generated by pairing equivalent angles with these negative distances. Converting rectangular coordinates to polar coordinates and vice versa is simple. You just use the trigonometry techniques reviewed in this chapter. Given a point (r, θ) in polar form, you can find its rectangular coordinates by drawing a right triangle such as the following: From this picture, using SOHCAHTOA and the Pythagorean theorem, you can see the following relationships: cos θ = x/r; sin θ = y/r; tan θ = y/x; x² + y² = r²; θ = tan−1 (y/x). Try the following practice questions about polar coordinates. The answers to these drills can be found in Chapter 12. 39. Which of the following rectangular coordinate pairs is equivalent to the polar coordinates ? (A) (0.5, 1.7) (B) (2.6, 5.2) (C) (3.0, 5.2) (D) (4.2, 4.8) (E) (5.2, 15.6) 42. The point in polar coordinates is how far from the x-axis? 45. The points A, B, and C in polar coordinates define which of the following? (A) A point (B) A line (C) A plane (D) A three-dimensional space (E) None of these · For the purposes of the Level 1 Subject Test, trigonometry questions will deal only with basic trig. · Memorize SOHCAHTOA. It's your best friend. sin = opposite/hypotenuse, cos = adjacent/hypotenuse, and tan = opposite/adjacent. Tan is also equal to sin/cos. ETS will test these with trigonometric identity questions. · You can use the inverse of a function on your calculator to find the angle when you know the value of the corresponding trigonometric function. · The unit circle is a circle on the coordinate plane with a radius of 1. You can use the Pythagorean theorem, SOHCAHTOA, and the fact that if you draw a line from the origin to any point on the circle and create a triangle, the hypotenuse will always be 1. · For the Level 2 Subject Test only, it is important to know the following: · The reciprocals of the trig functions are cosecant, secant, and cotangent. Their relations to the trig functions are: csc θ = 1/sin θ, sec θ = 1/cos θ, and cot θ = 1/tan θ. · Radians are just another way to measure angles. The relationship between degrees and radians is: degrees/360 = radians/2π. · Use All Students Take Calculus to remember which trig functions are positive in each quadrant. · The graphs of trigonometric functions are periodic functions. Know what each graph looks like. · For non-right triangles, there are two important laws. In a triangle with sides a, b, and c and corresponding angles A, B, and C, the Law of Sines says that sin A/a = sin B/b = sin C/c; and the Law of Cosines says that c² = a² + b² − 2ab cos C.
· Polar coordinates use the distance, r, between a point and the origin, and the angle θ, which can be written in degrees or radians. A point in polar coordinates would be (r, θ). · When converting coordinates between rectangular and polar, use either SOHCAHTOA and the Pythagorean theorem, or the following relationships: cos θ = x/r; sin θ = y/r; tan θ = y/x; x² + y² = r²; and θ = tan−1 (y/x).
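As a closing sketch, those conversion relationships can be checked numerically with the chapter's example point P = (10, 30°); math.atan2 is used instead of a bare inverse tangent so the quadrant comes out right:

```python
import math

# Polar (r, theta) -> rectangular (x, y)
r, theta = 10, math.radians(30)
x, y = r * math.cos(theta), r * math.sin(theta)
print(round(x, 2), round(y, 2))   # 8.66 5.0 -- i.e. (5*sqrt(3), 5)

# Rectangular (x, y) -> polar (r, theta)
r_back = math.hypot(x, y)                    # sqrt(x^2 + y^2)
theta_back = math.degrees(math.atan2(y, x))  # quadrant-aware inverse tangent
print(round(r_back, 2), round(theta_back, 1))  # 10.0 30.0
```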
Nationalism is a political and economic ideology and movement characterized by the promotion of the interests of a particular nation, with the aim of gaining and maintaining the nation's sovereignty over its homeland. Nationalism holds that each nation should govern itself, free from outside interference, that a nation is a natural and ideal basis for a polity, and that the nation is the only rightful source of political power. It further aims to build and maintain a single national identity—based on shared social characteristics such as culture, religion and belief in a shared singular history—and to promote national unity or solidarity. Nationalism also seeks to preserve and foster a nation's traditional culture, and cultural revivals have been associated with nationalist movements. It encourages pride in national achievements and is closely linked to patriotism. Nationalism is often combined with other ideologies, such as conservatism or socialism. Nationalism as an ideology is modern. Throughout history, people have had an attachment to their kin group and traditions, to territorial authorities and to their homeland, but nationalism did not become a widely recognized concept until the 18th century. There are three paradigms for understanding the origins and basis of nationalism. Primordialism proposes that there have always been nations and that nationalism is a natural phenomenon. Ethnosymbolism explains nationalism as a dynamic, evolutionary phenomenon and stresses the importance of symbols and traditions in the development of nations and nationalism. Modernism proposes that nationalism is a recent social phenomenon that needs the socio-economic structures of modern society to exist. There are various definitions of a "nation", which leads to different strands of nationalism. Ethnic nationalism defines the nation in terms of shared ethnicity and culture, while civic nationalism defines the nation in terms of shared citizenship and institutions and is linked to constitutional patriotism. The adoption of national identity in terms of historical development has been a response by influential groups unsatisfied with traditional identities due to a mismatch between their defined social order and the experience of that social order by its members, resulting in an anomie that nationalists seek to resolve. This anomie results in a society reinterpreting identity, retaining elements deemed acceptable and removing elements deemed unacceptable, in order to create a unified community. This development may be the result of internal structural issues or the result of resentment by an existing group or groups towards other communities, or towards foreign powers that are controlling them. National symbols and flags, national anthems, national languages, national myths and other symbols of national identity are important in nationalism. In practice, nationalism can be seen as positive or negative depending on context and individual outlook. Nationalism has been an important driver in independence movements, such as the Greek Revolution, the Irish Revolution, the Zionist movement that created modern Israel, and the dissolution of the Soviet Union. Conversely, radical nationalism combined with racial hatred was a key factor in the Holocaust perpetrated by Nazi Germany. More recently, nationalism was an important driver of the controversial annexation of Crimea by Russia. The terminological use of 'nations', 'sovereignty' and associated concepts was refined with the writing by Hugo Grotius of De Jure Belli ac Pacis in the early 17th century.
Living in the times of the Eighty Years' War between Spain and the Netherlands and the Thirty Years' War between Catholic and Protestant European nations, it is not surprising that Grotius was concerned with matters of conflict between nations in the context of oppositions stemming from religious differences. The word "nation" was usefully applied before 1800 in Europe to refer to the inhabitants of a country, as well as to collective identities that could include shared history, language, political rights and traditions, in a sense more akin to the modern conception. "Nationalism", derived from the noun designating "nations", is a newer word; it became important in the 19th century. The term became negative in its connotations after 1914. Glenda Sluga notes that "The twentieth century, a time of profound disillusionment with nationalism, was the great age of globalism." Nationalism has been a recurring facet of civilizations since ancient times, though the modern sense of national political autonomy and self-determination was formalized in the late 18th century. Examples of nationalist movements can be found throughout history, from the Jewish revolts of the 1st and 2nd centuries, to the re-emergence of Persian culture during the Sasanid period of Persia, to the re-emergence of Latin culture in the Western Roman Empire during the 4th and 5th centuries, as well as many others. In modern times, examples can be seen in the emergence of German nationalism as a reaction against Napoleonic control of Germany as the Confederation of the Rhine around 1805–14. Linda Colley, in Britons, Forging the Nation 1707–1837, explores how the role of nationalism emerged about 1700 and developed in Britain, reaching full form in the 1830s. Historians of nationalism in Europe begin with the French Revolution, not only for its impact on French nationalism but even more for its impact on Germans and Italians and on European intellectuals.

Afro-Latin American or Black Latin American refers to Latin Americans of significant African ancestry. The term may refer to historical or cultural elements in Latin America thought to have emanated from this community. The term refers to people of African ancestry and not to those of European ancestry, such as sub-Alpine European whites. The term is not used in Latin America outside academic circles; Afro-Latin Americans are commonly called black. More often, when referring to cultural aspects of African origin within specific countries of Latin America, terms carry an Afro- prefix followed by the relevant nationality. Notable examples include Afro-Cuban, Afro-Brazilian, Afro-Haitian, Afro-Latino and Afro-Latinx. The accuracy of statistics reporting on Afro-Latin Americans has been questioned where they are derived from census reports in which the subjects choose their own designation, because in various countries the concept of African ancestry is viewed with differing attitudes. In the 15th and 16th centuries, many people of African origin were brought into the Americas with the Spanish and Portuguese. Pedro Alonso Niño, traditionally considered the first of many New World explorers of African descent, was a navigator in the 1492 Columbus expedition. Those who came directly from West Africa arrived in Latin America as part of the Atlantic slave trade, as agricultural and menial laborers and as mineworkers. They were also employed in mapping and exploration and were involved in conquest. The Caribbean and Latin America received 95 percent of the Africans arriving in the Americas, with only 5 percent going to Northern America.
Countries with significant African, mulatto, or zambo populations today include Brazil, the Dominican Republic, Colombia and Ecuador. Traditional terms for Afro-Latin Americans with their own developed culture include garífuna, and zambo in the Andes and Central America. Marabou is a term of Haitian origin denoting a Haitian of multiracial ethnicity. The mix of these African cultures with the Spanish, Portuguese and indigenous cultures of Latin America has produced many unique forms of language, music, martial arts and dance. As of 2015, Mexico and Chile are the only two Latin American countries yet to formally recognize their Afro-Latin American population in their constitutions. This is in contrast to countries like Brazil and Colombia that lay out the constitutional rights of their African-descendant population. Terms used within Latin America in reference to African heritage include mulato, zambo/chino and pardo, as well as mestizo, which refers to an indigenous–European mixture in all cases except in Venezuela, where it is used in place of "pardo". The term mestizaje refers to the intermixing or fusing of ethnicities, whether by mere custom or deliberate policy. In Latin America this happened extensively between all ethnic groups and cultures, but typically involved European men and indigenous and African women. Afro-Latin Americans have a limited media presence. According to the Argentine national census of 2010, the total Argentine population is 40,117,096, of whom 149,493 are of African ancestry. Traditionally it has been argued that the black population in Argentina declined since the early 19th century to insignificance. Many believe that the black population declined due to systematic efforts to reduce it, in order to mirror the racially homogeneous countries of Europe. However, a pilot census conducted in two neighborhoods of Argentina in 2006 on knowledge of ancestors from Sub-Saharan Africa found that 5% of the population knew of Black African ancestry, and another 20% thought that it was possible but were not sure. Given that European immigration accounted for more than half the growth of the Argentine population by 1960, some researchers argue that, rather than a decrease, what occurred was a process of overlaying, creating the "invisibility" of the population of Afro-Argentines and their cultural roots. Black African descendants in Bolivia account for about 1% of the population. They were brought in during the Spanish colonial times, and the majority live in the Yungas. There are about 500,000 people of Black African ancestry living in Bolivia. Around 7% of Brazil's 190 million people reported to the census as Black, and many more Brazilians have some degree of African descent. Brazil experienced a long internal struggle over the abolition of slavery and was the last Latin American country to abolish it. In 1850 it banned the importation of new slaves from overseas, two decades after the first official attempts to outlaw the human traffic. On 28 September 1871, the Brazilian Congress approved the Rio Branco Law of Free Birth, which declared free the children born to enslaved mothers; slavery itself was not fully abolished in Brazil until 1888.

The Walt Disney Company

The Walt Disney Company, known as Walt Disney or simply Disney, is an American diversified multinational mass media and entertainment conglomerate headquartered at the Walt Disney Studios in Burbank, California. It is the world's largest media conglomerate in terms of revenue, ahead of NBCUniversal and WarnerMedia.
Disney was founded on October 16, 1923, by brothers Walt and Roy O. Disney as the Disney Brothers Cartoon Studio. The company established itself as a leader in the American animation industry before diversifying into live-action film production and theme parks. Since the 1980s, Disney has created and acquired corporate divisions in order to market more mature content than is associated with its flagship family-oriented brands. The company is known for its film studio division, Walt Disney Studios, which includes Walt Disney Pictures, Walt Disney Animation Studios, Marvel Studios, Lucasfilm, 20th Century Fox, Fox Searchlight Pictures and Blue Sky Studios. Disney's other main divisions are Disney Parks and Products, Disney Media Networks, and Walt Disney Direct-to-Consumer and International. Disney owns and operates the ABC broadcast network. The company has been a component of the Dow Jones Industrial Average since 1991. The cartoon character Mickey Mouse, created in 1928 by Walt Disney and Ub Iwerks, is one of the world's most recognizable characters and serves as the company's official mascot. In early 1923, Kansas City animator Walt Disney created a short film entitled Alice's Wonderland, which featured child actress Virginia Davis interacting with animated characters. After the bankruptcy in 1923 of his previous firm, Laugh-O-Gram Studio, Disney moved to Hollywood to join his brother, Roy O. Disney. Film distributor Margaret J. Winkler of M. J. Winkler Productions contacted Disney with plans to distribute a whole series of Alice Comedies, purchased for $1,500 per reel, with Disney as a production partner. Walt and Roy Disney formed the Disney Brothers Cartoon Studio that same year. More animated films followed after Alice. In January 1926, with the completion of the Disney studio on Hyperion Street, the Disney Brothers Studio's name was changed to the Walt Disney Studio. After the demise of the Alice comedies, Disney developed an all-cartoon series starring his first original character, Oswald the Lucky Rabbit, distributed by Winkler Pictures through Universal Pictures. The distributor owned Oswald, so Disney only made a few hundred dollars. Disney completed 26 Oswald shorts before losing the contract in February 1928, due to a legal loophole, when Winkler's husband Charles Mintz took over their distribution company. After failing to take over the Disney Studio, Mintz hired away four of Disney's primary animators to start his own animation studio, Snappy Comedies. In 1928, to recover from the loss of Oswald the Lucky Rabbit, Disney came up with the idea of a mouse character named Mortimer while on a train headed to California, drawing up a few simple sketches. The mouse was renamed Mickey Mouse and starred in several Disney-produced films. Ub Iwerks refined Disney's initial design of Mickey Mouse. Disney's first sound film, Steamboat Willie, a cartoon starring Mickey, was released on November 18, 1928, through Pat Powers' distribution company. It was the first Mickey Mouse sound cartoon released, but the third to be created, behind Plane Crazy and The Gallopin' Gaucho. Steamboat Willie was an immediate smash hit, and its initial success was attributed not just to Mickey's appeal as a character, but to the fact that it was the first cartoon to feature synchronized sound. Disney used Pat Powers' Cinephone system, created by Powers using Lee de Forest's Phonofilm system. Steamboat Willie premiered at B. S. Moss's Colony Theater in New York City, now The Broadway Theatre.
Disney's Plane Crazy and The Gallopin' Gaucho were retrofitted with synchronized soundtracks and re-released in 1929. Disney continued to produce cartoons with Mickey Mouse and other characters, and began the Silly Symphony series, with Columbia Pictures signing on as the Symphonies' distributor in August 1929. In September 1929, theater manager Harry Woodin requested permission to start a Mickey Mouse Club, which Walt approved. In November, test comic strips were sent to King Features, which requested additional samples to show to the publisher, William Randolph Hearst. On December 16, the Walt Disney Studios partnership was reorganized as a corporation under the name Walt Disney Productions, Limited, with a merchandising division, Walt Disney Enterprises, and two subsidiaries, the Disney Film Recording Company and Liled Realty and Investment Company for real estate holdings. Walt and his wife held 60% of WD Productions and Roy owned 40%. On December 30, King Features signed its first newspaper, the New York Mirror, to publish the Mickey Mouse comic strip with Walt's permission. In 1932, Disney signed an exclusive contract with Technicolor to produce cartoons in color, beginning with Flowers and Trees. Disney released cartoons through Powers' Celebrity Pictures, Columbia Pictures and United Artists. The popularity of the Mickey Mouse series allowed Disney to plan for his first feature-length animation, Snow White and the Seven Dwarfs.

National Bolivarian Armed Forces of Venezuela

The National Bolivarian Armed Forces are controlled by the Commander-in-Chief and a civilian Minister of Defense. In addition to the army and air force, there is a national guard and a national militia focused on internal security. The armed forces' primary purpose is to defend Venezuelan territory from attack, combat drug trafficking, provide search and rescue capabilities, and aid the civilian population in case of natural disasters, as well as carry out numerous internal security assignments. As of 2018, the armed forces had 351,000 personnel. The origin of an organized and professional armed force in Venezuela dates to the Spanish troops quartered in the former Province of Venezuela in the 18th century. Politically and militarily, until the creation of the Captaincy General of Venezuela in 1777, the Province of Venezuela depended on the Real Audiencia of Santo Domingo or the Viceroyalty of New Granada for the defense of the area. In 1732 the Spanish crown created a Military Directorate, established a number of battalions, and had a few units from infantry regiments based in Spain arrive in the area. Reform of the military in the colonies began a few decades later. The first squadrons of cavalry arrived from Spain in 1751; the first batteries of artillery were raised just two years later. Both Creole whites and blacks were allowed to enter the ranks of the artillery companies. That same year, a Fixed Caracas Battalion was established. Until the creation of this battalion, defense had been based on small colonial militia companies, which accepted only whites; this racist policy eventually yielded, and the entry of mixed-race people into the militias was allowed. It was from these various units that the bulk of the officers who fought in the battles of the Venezuelan War of Independence emerged. Among them were Generalissimo Francisco de Miranda, Simón Bolívar, General-in-Chief Santiago Mariño and Rafael Urdaneta, among many other heroes.
With the establishment of an independent captaincy general in the latter half of the 18th century, the Spanish troops quartered in the province passed to the direct command of Caracas. The troops in the other provinces of the country, under the command of local governors, were overseen by the Captain General of Caracas, who served as commander-in-chief of the armed services. In this way a series of autonomous units was created for the peoples of the area and for defense duties, open to all fit males regardless of color. Aside from these, the Spanish Navy operated naval bases along the Captaincy General's territorial coastline, open to both whites and blacks as well. In the early 19th century, many of the Venezuelans who would form the bulk of the officer corps at the start of the formation of the national armed forces began to arrive in the country after participating in military campaigns abroad in the American Revolutionary War or the French Revolution, or after completing their studies in Europe. With them came a number of mercenaries and volunteers of many different nationalities: English, Irish, German, Polish and others. It was only in 1810, in the aftermath of the coup d'état of April 19 of that year, that the process of raising the national armed services formally began. Several of the military officers of the colonial military forces supported the coup and the subsequent creation of a junta. That Supreme Junta appointed Commander Lino de Clemente to be in charge of defense affairs for the Captaincy General, and thus the armed forces began to be formed through its efforts, including the opening of a full military academy in Caracas for the training of officers, joined the following year by a naval academy in La Guaira for naval officer education. It could be said that in the first two decades of the 19th century, the nascent Liberation Army and Navy were occupied with the intellectual training of their military cadres and with various attempts to unleash the revolutionary war, trying to build a modern army and navy. In the midst of that task came Generalissimo Francisco de Miranda and the Liberator Simón Bolívar, who called for immediate action to ensure, once and for all, the independence of the nation, achieved through the aforementioned coup of 19 April 1810 and through the formal enactment of the 1811 Venezuelan Declaration of Independence. Bolívar surprised his military colleagues when he rejected part of the Napoleonic military assumptions and behaviors, took on more British soldiers and those from other nations, and, through third parties, requested the assistance of the British Crown for the formation of a regular army and navy for the growing republic. And he made no mistake: the 19th century was dominated by British and Prussian military influences. Once in battle, Bolívar began to develop his own tactics, military strategies and practices, whose legacy remains to this day in the National Armed Forces; these led to victory after victory and the full liberation not just of Venezuela but of northern South America, through battles on both land and sea, until the wars ended in 1824. During the second half of the 19th century, a school for officers continued to operate, along with a standing army, and new services were created, including the Corps of Sappers. This phase of the Venezuelan Army was marked by infighting and the dominance of untrained local militias.
The little outside help in military matters at this stage was limited.

Los Llanos (South America)

Los Llanos is a vast tropical grassland plain situated to the east of the Andes in Colombia and Venezuela, in northwestern South America. It is an ecoregion of the flooded savannas biome. The Llanos' main river is the Orinoco, which forms part of the border between Colombia and Venezuela and is the major river system of Venezuela. During the rainy season, from May to October, parts of the Llanos can flood up to a meter; this turns the woodlands and grassland into a temporary wetland, comparable to the Pantanal of central South America. This flooding makes the area unique for its wildlife: the area supports around 70 species, including the scarlet ibis, and a large portion of the distribution of the white-bearded flycatcher is in the Llanos. The flooding made the area unfit for most agriculture before the advent of modern, industrial farming technology. Therefore, during the Spanish colonial era, the primary economic activity of the area was the herding of millions of head of cattle. An 1856 watercolor by Manuel María Paz depicts sparsely populated open grazing lands with cattle and palm trees. The term llanero became synonymous with the cowhands who took care of the herds, who had some cultural similarities with the gauchos of the Pampas or the vaqueros of Spanish and Mexican Texas. In the wet season most of the Llanos is flooded, and travel is by boat down the numerous temporary and permanent waterways. In Los Llanos the governments of Venezuela and Colombia have developed a strong oil and gas industry in the zones of Arauca, Casanare, Guárico, Anzoátegui and Monagas. The Orinoco Belt, in Venezuelan territory, consists of large deposits of extra-heavy crude. The Orinoco Belt oil sands are known to be among the largest such deposits, behind the Athabasca oil sands in Alberta, Canada. Venezuela's non-conventional oil deposits of about 1,200 billion barrels, found in the Orinoco oil sands, are estimated to equal the world's reserves of conventional oil. Towns of the Llanos include, in Colombia: Acacias (Meta), Arauca (Arauca), Gaviotas, Mani (Casanare), Orocue, Paz de Ariporo, Puerto Carreño, Puerto Inirida, Puerto López (Meta), San José del Guaviare, Saravena, Tame, Villavicencio and Yopal; and in Venezuela: Acarigua, Araure, Barinas, Calabozo, Caripito, El Tigre, Guanare, Maturín, Puerto Ayacucho, Sabaneta, San Carlos, San Fernando de Apure, Tucupita and Valle de la Pascua.

Singing is the act of producing musical sounds with the voice; it augments regular speech by the use of sustained tonality and a variety of vocal techniques. A person who sings is called a vocalist. Singers perform music that can be sung without accompaniment by musical instruments. Singing is often done in an ensemble of musicians, such as a choir of singers or a band of instrumentalists. Singers may perform as soloists or accompanied by anything from a single instrument up to a symphony orchestra or big band. Different singing styles include art music such as opera and Chinese opera, Indian music and religious music styles such as gospel, traditional music styles, world music, blues, and popular music styles such as pop, electronic dance music and filmi.
Singing may be arranged or improvised, and it may be done as a form of religious devotion, as a hobby, as a source of pleasure, comfort or ritual, as part of music education or as a profession. Excellence in singing requires time, dedication and regular practice; if practice is done on a regular basis, the sounds produced become clearer and stronger. Professional singers usually build their careers around one specific musical genre, such as classical or rock, although there are singers with crossover success. They typically take voice training provided by voice teachers or vocal coaches throughout their careers.

In its physical aspect, singing has a well-defined technique that depends on the use of the lungs, which act as an air supply or bellows; on the larynx, which acts as a reed or vibrator; on the chest and head cavities, which have the function of an amplifier; and on the tongue, which together with the palate, teeth and lips articulates consonants and vowels onto the amplified sound. Though these four mechanisms function independently, they are coordinated in the establishment of a vocal technique and are made to interact with one another. During passive breathing, air is inhaled with the diaphragm while exhalation occurs without any effort. Exhalation may be aided by the lower pelvis/pelvic muscles, and inhalation is aided by use of the external intercostal and sternocleidomastoid muscles. Pitch is altered with the vocal cords; with the lips closed, this is called humming.

The sound of each individual's singing voice is unique, not only because of the actual shape and size of an individual's vocal cords but also because of the size and shape of the rest of that person's body. Humans have vocal folds which can loosen, tighten, or change their thickness, over which breath can be transferred at varying pressures. The shape of the chest and neck, the position of the tongue, and the tightness of otherwise unrelated muscles can all be altered; any one of these actions results in a change in the pitch, timbre, or tone of the sound produced. Sound resonates within different parts of the body, and an individual's size and bone structure can affect the sound produced. Singers can learn to project sound in certain ways so that it resonates better within their vocal tract; this is known as vocal resonation. Another major influence on vocal sound and production is the function of the larynx, which people can manipulate in different ways to produce different sounds; these different kinds of laryngeal function are described as different kinds of vocal registers. The primary method for singers to accomplish this is through the use of the singer's formant. It has been shown that a more powerful voice may be achieved with a fatter, more fluid-like vocal fold mucosa: the more pliable the mucosa, the more efficient the transfer of energy from the airflow to the vocal folds.

Vocal registration refers to the system of vocal registers within the voice. A register in the voice is a particular series of tones, produced in the same vibratory pattern of the vocal folds and possessing the same quality. Registers originate in laryngeal function; they occur because the vocal folds are capable of producing several different vibratory patterns. Each of these vibratory patterns appears within a particular range of pitches and produces certain characteristic sounds; the occurrence of registers has also been attributed to effects of the acoustic interaction between the vocal fold oscillation and the vocal tract. The term "register" can be somewhat confusing, as it can refer to any of the following:

- A particular part of the vocal range, such as the upper, middle, or lower registers.
- A resonance area, such as chest voice or head voice.
- A phonatory process.
- A certain vocal timbre or vocal "color".
- A region of the voice defined or delimited by vocal breaks.
In linguistics, a register language is a language which combines tone and vowel phonation into a single phonological system. Within speech pathology, the term vocal register has three constituent elements: a certain vibratory pattern of the vocal folds, a certain series of pitches, and a certain type of sound. Speech pathologists identify four vocal registers based on the physiology of laryngeal function: the vocal fry register, the modal register, the falsetto register, and the whistle register; this view is adopted by many vocal pedagogues. Vocal resonation is the process by which the basic product of phonation is enhanced in timbre and/or intensity by the air-filled cavities through which it passes on its way to the outside air.

Caracas

Santiago de León de Caracas is the capital and largest city of Venezuela and the centre of the Greater Caracas Area. Caracas is located along the Guaire River in the northern part of the country, following the contours of the narrow Caracas Valley on the Venezuelan coastal mountain range. Terrain suitable for building lies between 760 and 1,140 m above sea level, although there is some settlement above this range. The valley is close to the Caribbean Sea, separated from the coast by a steep 2,200-metre-high mountain range, Cerro El Ávila. The Metropolitan Region of Caracas has an estimated population of 4,923,201. Historically speaking, the centre of the city is still "Catedral", located near Bolívar Square, though many consider the centre to be Plaza Venezuela, located in the Los Caobos neighbourhood. The Chacaíto area, Luis Brión Square and the El Rosal neighborhood are considered the geographic center of the Metropolitan Region of Caracas, called "Greater Caracas". Businesses in the city include service companies and malls: Caracas has a service-based economy, apart from some industrial activity in its metropolitan area. The Caracas Stock Exchange and Petróleos de Venezuela (PDVSA) are headquartered in Caracas; PDVSA is the largest company in Venezuela. Caracas is Venezuela's cultural capital, with many restaurants, theaters and shopping centers, and some of the tallest skyscrapers in Latin America are located there. Caracas has been considered one of the most important cultural, tourist and economic centers of Latin America. The Museum of Contemporary Art of Caracas is one of the most important in South America, and the Museum of Fine Arts and the National Art Gallery of Caracas are also noteworthy; the National Art Gallery is projected to be the largest museum in Latin America, according to its architect Carlos Gómez de Llarena. Caracas is home to two of the tallest skyscrapers in South America, the Parque Central Towers. It has a nominal GDP of 91,988 million dollars, a nominal GDP per capita of 18,992 dollars and a PPP GDP per capita of 32,710 dollars, making it the seventh city by GDP and the seventh metropolitan area by population in Latin America. Caracas has the highest per capita murder rate in the world, with 111.19 homicides per 100,000 inhabitants.

At the time of the founding of the city in 1567, the valley of Caracas was populated by indigenous peoples. Francisco Fajardo, the son of a Spanish captain and a Guaiqueri cacica, attempted to establish a plantation in the valley in 1562 after founding a series of coastal towns. Fajardo's settlement did not last long: it was destroyed by natives of the region led by Guaicaipuro. This was the last rebellion on the part of the natives.
On 25 July 1567, Captain Diego de Losada laid the foundations of the city of Santiago de León de Caracas. "I take possession of this land in the name of God and the King" were the words of Don Diego de Losada in founding the city. In 1577, Caracas became the capital of the Spanish Empire's Venezuela Province under Governor Juan de Pimentel. During the 17th century, the coast of Venezuela was raided by pirates; with the coastal mountains as a barrier, Caracas was relatively immune to such attacks. However, in 1595, around 200 English privateers, including George Sommers and Amyas Preston, crossed the mountains through a little-used pass while the town's defenders were guarding the more often-used one. Encountering little resistance, the invaders sacked and set fire to the town after a failed ransom negotiation. As cocoa cultivation and exports under the Compañía Guipuzcoana de Caracas grew in importance, the city expanded. In 1777, Caracas became the capital of the Captaincy General of Venezuela. José María España and Manuel Gual led an attempted revolution aimed at independence, but the rebellion was put down on 13 July 1797. Caracas was the site of the signing of the Venezuelan Declaration of Independence on 5 July 1811. In 1812, an earthquake destroyed Caracas. The war of independence continued until 24 June 1821, when Bolívar defeated the royalists in the Battle of Carabobo.

Caracas grew in economic importance during Venezuela's oil boom in the early 20th century. During the 1950s, Caracas began an intensive modernization program which continued throughout the 1960s and early 1970s. The Universidad Central de Venezuela, designed by modernist architect Carlos Raúl Villanueva and declared a World Heritage Site by UNESCO, was built. New working- and middle-class residential districts sprouted in the valley, extending the urban area toward the east and southeast. Joining El Silencio, also designed by Villanueva, were several workers' housing districts, including 23 de Enero and Simón Rodríguez; middle-class developments include Bello Monte, Los Palos Grandes and El Cafetal. The dramatic change in the economic structure of the country, which went from being agricultural to dependent on oil production, stimulated the fast development of Caracas and made it a magnet for people from rural communities, who migrated to the capital in an unplanned fashion in search of greater economic opportunity. This migration created the rancho belt of the valley of Caracas.

The flag of Caracas consists of a burgundy red field with a version of the coat of arms of the city. The red field symbolises the blood spilt by the Caraquenian people in favour of independence and the highest ideals of the Venezuelan nation. In 1994, as a result of the change of municipal authorities, it was decided to increase the size of the Caracas coat of arms and move it to the centre of the field.
For a long time, the Moon was considered bone dry. That is no surprise, since our satellite has no atmosphere that could prevent liquid water from immediately evaporating into space. But what remained hidden even from the eyes of the Moon travelers was eventually discovered by probes, orbiters, and observers: there are enormous amounts of water on the Moon, frozen as ice. Over time, more and more water has been discovered on the Moon. Now, two new studies show that there is probably even more water on the Moon than expected.

New clues for water on the Moon

In the first study, Casey Honniball from the University of Hawaii in Honolulu and her colleagues analyzed data from NASA's Stratospheric Observatory for Infrared Astronomy (SOFIA) flying observatory. When investigating the Clavius crater, one of the largest craters visible from Earth, in the Moon's southern region, the researchers found evidence of water molecules. They suspect that the water could be preserved predominantly in glass beads or in crevices between rubble on the surface. Such water-containing glass beads were also discovered in the samples from the Apollo missions.

Related: Can we create a lake on the Moon?

In the second study, a team led by Paul Hayne from the University of Colorado looked specifically for craters, crevices, and other small areas where water ice could occur. Using data from NASA's Lunar Reconnaissance Orbiter probe and theoretical models, they searched for so-called cold traps – areas that are permanently in shadow, where water ice could be preserved due to the constant and extreme cold. In addition to impact craters, this also includes smaller areas that are always shielded from the Sun's rays. According to the study, a total area of 40,000 square kilometers (15,500 square miles) could be in permanent shadow on the Moon – about twice as much as assumed by earlier studies. Theoretically, water ice could be stored there. As expected, most of these regions are located near the poles; the researchers located about 60 percent of these permanently shadowed areas in the southern hemisphere.

NASA's SOFIA Discovers Water on Sunlit Surface of Moon

Here is the press release by NASA:

NASA's Stratospheric Observatory for Infrared Astronomy (SOFIA) has confirmed, for the first time, water on the sunlit surface of the Moon. This discovery indicates that water may be distributed across the lunar surface, and not limited to cold, shadowed places. SOFIA has detected water molecules (H2O) in Clavius Crater, one of the largest craters visible from Earth, located in the Moon's southern hemisphere. Previous observations of the Moon's surface detected some form of hydrogen but were unable to distinguish between water and its close chemical relative, hydroxyl (OH). Data from this location reveal water in concentrations of 100 to 412 parts per million – roughly equivalent to a 12-ounce bottle of water – trapped in a cubic meter of soil spread across the lunar surface. The results are published in the latest issue of Nature Astronomy.

"We had indications that H2O – the familiar water we know – might be present on the sunlit side of the Moon," said Paul Hertz, director of the Astrophysics Division in the Science Mission Directorate at NASA Headquarters in Washington. "Now we know it is there.
This discovery challenges our understanding of the lunar surface and raises intriguing questions about resources relevant for deep space exploration."

As a comparison, the Sahara desert has 100 times the amount of water that SOFIA detected in the lunar soil. Despite the small amounts, the discovery raises new questions about how water is created and how it persists on the harsh, airless lunar surface.

Water is a precious resource in deep space and a key ingredient of life as we know it. Whether the water SOFIA found is easily accessible for use as a resource remains to be determined. Under NASA's Artemis program, the agency is eager to learn all it can about the presence of water on the Moon in advance of sending the first woman and next man to the lunar surface in 2024 and establishing a sustainable human presence there by the end of the decade.

SOFIA's results build on years of previous research examining the presence of water on the Moon. When the Apollo astronauts first returned from the Moon in 1969, it was thought to be completely dry. Orbital and impactor missions over the past 20 years, such as NASA's Lunar Crater Observation and Sensing Satellite, confirmed ice in permanently shadowed craters around the Moon's poles. Meanwhile, several spacecraft – including the Cassini mission and Deep Impact comet mission, as well as the Indian Space Research Organization's Chandrayaan-1 mission – and NASA's ground-based Infrared Telescope Facility looked broadly across the lunar surface and found evidence of hydration in sunnier regions. Yet those missions were unable to definitively distinguish the form in which it was present – either H2O or OH.

"Prior to the SOFIA observations, we knew there was some kind of hydration," said Casey Honniball, the lead author, who published the results from her graduate thesis work at the University of Hawaii at Mānoa in Honolulu. "But we didn't know how much, if any, was actually water molecules – like we drink every day – or something more like drain cleaner."

SOFIA offered a new means of looking at the Moon. Flying at altitudes of up to 45,000 feet (about 13,700 meters), this modified Boeing 747SP jetliner with a 106-inch (2.7-meter) diameter telescope reaches above 99% of the water vapor in Earth's atmosphere to get a clearer view of the infrared universe. Using its Faint Object infraRed CAmera for the SOFIA Telescope (FORCAST), SOFIA was able to pick up the specific wavelength unique to water molecules, at 6.1 microns, and discovered a relatively surprising concentration in sunny Clavius Crater.

"Without a thick atmosphere, water on the sunlit lunar surface should just be lost to space," said Honniball, who is now a postdoctoral fellow at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "Yet somehow we're seeing it. Something is generating the water, and something must be trapping it there."

Several forces could be at play in the delivery or creation of this water. Micrometeorites raining down on the lunar surface, carrying small amounts of water, could deposit the water on the lunar surface upon impact. Another possibility is a two-step process whereby the Sun's solar wind delivers hydrogen to the lunar surface and causes a chemical reaction with oxygen-bearing minerals in the soil to create hydroxyl, while radiation from the bombardment of micrometeorites transforms that hydroxyl into water. How the water then gets stored – making it possible to accumulate – also raises some intriguing questions.
The water could be trapped in tiny bead-like structures in the soil that form out of the high heat created by micrometeorite impacts. Another possibility is that the water could be hidden between grains of lunar soil and sheltered from the sunlight – potentially making it a bit more accessible than water trapped in bead-like structures.

For a mission designed to look at distant, dim objects such as black holes, star clusters, and galaxies, SOFIA's spotlight on Earth's nearest and brightest neighbor was a departure from business as usual. The telescope operators typically use a guide camera to track stars, keeping the telescope locked steadily on its observing target. But the Moon is so close and bright that it fills the guide camera's entire field of view. With no stars visible, it was unclear if the telescope could reliably track the Moon. To determine this, in August 2018, the operators decided to try a test observation.

"It was, in fact, the first time SOFIA has looked at the Moon, and we weren't even completely sure if we would get reliable data, but questions about the Moon's water compelled us to try," said Naseem Rangwala, SOFIA's project scientist at NASA's Ames Research Center in California's Silicon Valley. "It's incredible that this discovery came out of what was essentially a test, and now that we know we can do this, we're planning more flights to do more observations."

SOFIA's follow-up flights will look for water in additional sunlit locations and during different lunar phases to learn more about how the water is produced, stored, and moved across the Moon. The data will add to the work of future Moon missions, such as NASA's Volatiles Investigating Polar Exploration Rover (VIPER), to create the first water resource maps of the Moon for future human space exploration.

In the same issue of Nature Astronomy, scientists have published a paper using theoretical models and NASA's Lunar Reconnaissance Orbiter data, pointing out that water could be trapped in small shadows, where temperatures stay below freezing, across more of the Moon than currently expected.

"Water is a valuable resource, for both scientific purposes and for use by our explorers," said Jacob Bleacher, chief exploration scientist for NASA's Human Exploration and Operations Mission Directorate. "If we can use the resources at the Moon, then we can carry less water and more equipment to help enable new scientific discoveries."

SOFIA is a joint project of NASA and the German Aerospace Center. Ames manages the SOFIA program, science, and mission operations in cooperation with the Universities Space Research Association, headquartered in Columbia, Maryland, and the German SOFIA Institute at the University of Stuttgart. The aircraft is maintained and operated by NASA's Armstrong Flight Research Center Building 703, in Palmdale, California.

Related: Amazing Moon facts

Where does the water on the Moon come from?

Earth and the Moon are not the same: on Earth, comets or water-rich asteroids are considered the source of the oceans' water, but the water on the Moon comes from elsewhere. How do we know that? Because, if the water had come to the Moon with asteroids, there would have to be so-called heavy water there, just as on Earth. Instead of the usual hydrogen atoms, heavy water has one or two deuterium atoms (D), which have an additional neutron in their atomic nucleus and thus differ from hydrogen.
So far, no heavy water has been detected: it seems there is only ordinary water on the Moon, H2O, or hydroxyl radicals, OH. Therefore, researchers at the University of Tennessee in Knoxville assume that the water on the Moon is generated by the solar wind.

Related: Where did Earth's water come from?

The Sun blows around a million tons of material into space every second, mainly hydrogen nuclei. We are protected from them on Earth because the magnetic field of our planet largely deflects the flow of these particles. On the Moon, however, this material hits the ground at blistering speed and combines with the oxygen present in the Moon rock to form H2O and OH. The Sun hardly has any deuterium either, because when a star forms, its deuterium is almost completely fused into helium. For the researchers, one thing is certain: if there were as much deuterium on the Moon as on Earth, asteroids and comets would be the likely source. But without deuterium stocks, the hydrogen must come from the Sun.

Water is a precious resource in space

Water is the raw material for rocket propellant. Launching water from Earth into space consumes a lot of propellant, which makes the whole concept self-defeating. But when there is water in space, it can be split by electricity (a process called electrolysis) into hydrogen and oxygen – the key ingredients of rocket fuel. Future Moon bases will also need a lot of water, so these new discoveries of water on the Moon are really exciting.

- "NASA's SOFIA Discovers Water on Sunlit Surface of Moon" on NASA.gov
- "Water on the moon is more common than we thought, studies reveal" on Space.com
- "Tiny moon shadows may harbor hidden stores of ice" on the University of Colorado Boulder website
- Study: "Molecular water detected on the sunlit Moon by SOFIA" in Nature Astronomy
- Study: "Micro cold traps on the Moon" in Nature Astronomy
As a student or former student, it is very likely that you have had to deal with the Pythagorean theorem, one of the fundamental results of mathematics and geometry. The theorem works for all sorts of applications in mathematics. Are you clear about what the Pythagorean theorem is about? Well, let's start!

Concept and definition of the Pythagorean theorem

The Pythagorean theorem is a geometric statement, widely recognized in the mathematical and physical world, which establishes a close relationship between the sides of a right triangle. A right triangle is characterized by having a right angle, that is, an angle of 90 degrees. The theorem tells us that the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the legs. In short, if the sides of a right triangle are represented as a, b, and c (with c being the hypotenuse), then the Pythagorean theorem states that c² = a² + b². This mathematical relationship has a large number of applications in various areas of mathematics and physics. For example, it is used to solve trigonometry problems, to calculate distances on a Cartesian plane, and in number theory to find integers that satisfy the equation a² + b² = c². The Pythagorean theorem is named after the Greek mathematician Pythagoras, who is considered to be the first researcher to prove it rigorously. However, the relationship between the sides of right triangles was known to other cultures, such as the Babylonians and Egyptians, long before Pythagoras formally established it.

What is the Pythagorean theorem used for?

The Pythagorean theorem is named in honor of the Greek mathematician Pythagoras, although it is known that other cultures, such as the Babylonians and Egyptians, already knew the relationship between the sides of right triangles before Pythagoras formalized it. This theorem is of great utility in mathematics and physics, since it allows us to calculate the length of a side of a right triangle as a function of the other two. For example, when the lengths of two sides of a right triangle are known, the Pythagorean theorem can be applied to determine the length of the hypotenuse, allowing us to calculate the height of a building or the distance between two points on a map. Furthermore, the Pythagorean theorem is also important in mathematics education, since it is one of the first tools taught in school and is used to introduce students to concepts such as irrational numbers and trigonometric functions. In summary, the Pythagorean theorem is a valuable and essential tool in mathematics and physics, with practical applications in many areas of life.

What is the history and origin of the Pythagorean theorem?

The history of the Pythagorean theorem dates back to ancient Mesopotamia and Egypt, where mathematicians already knew some special cases of what we know today as the Pythagorean theorem. For example, in ancient Babylon a triangle with sides measuring 3, 4, and 5 units was used to measure right angles. In Egypt, right triangles were also used in architectural constructions. However, it was in classical Greece that the theorem was formally proved. The rigorous mathematical proof of the theorem is attributed to Pythagoras of Samos and his school of mathematics. The proof is said to have first been carried out by Pythagoras's disciple Hippasus of Metapontum, although Pythagoras is said to have been the first to recognize its importance and practical applications.
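One of the applications mentioned above – finding integers that satisfy a² + b² = c² – is easy to sketch in code. Here is a brute-force Python illustration (the function name is my own, not from the original article):

```python
def pythagorean_triples(limit):
    """All integer triples (a, b, c) with a <= b < c <= limit and a² + b² = c²."""
    return [(a, b, c)
            for c in range(1, limit + 1)
            for b in range(1, c)
            for a in range(1, b + 1)
            if a * a + b * b == c * c]

print(pythagorean_triples(15))
# [(3, 4, 5), (6, 8, 10), (5, 12, 13), (9, 12, 15)]
```

The search is deliberately naive (it checks every combination up to the limit), which keeps the connection to the equation itself as direct as possible.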
The Pythagorean theorem became an essential tool of Greek geometry and was used in the construction of temples, monuments and other architectural structures. The Pythagoreans also discovered many other interesting mathematical properties related to triangles, such as the relationships between the sides and angles of a triangle. Throughout history, this theorem has been used in various practical applications, from building construction to navigation and astronomy. It has also been a source of inspiration for mathematicians and philosophers, who have explored its symbolic meaning and its relationship with nature.

What are the characteristics of the Pythagorean theorem?

The Pythagorean theorem can be summarized in certain quite specific characteristics, among the most outstanding of which are:

- It applies only to right triangles, that is, those that have a right angle (90 degrees).
- The hypotenuse is always the longest side of the right triangle and is opposite the right angle.
- The theorem states that the sum of the squares of the two legs (the other two sides of the right triangle) is equal to the square of the hypotenuse.
- The theorem can be used to solve problems involving the length of an unknown side of a right triangle, as well as to find the height of an object or the distance between two points.
- The theorem has applications in physics, engineering and architecture, among other areas.
- It is commonly attributed to the Greek mathematician Pythagoras of Samos, who enunciated and demonstrated it in the 5th century BC.

What is the formula of the Pythagorean theorem?

The formula of the Pythagorean theorem establishes the mathematical relationship between the three sides of a right triangle. This formula is:

a² + b² = c²

where 'a' and 'b' are the legs of the right triangle (the two sides that form the right angle) and 'c' is the hypotenuse (the segment opposite the right angle, which is the longest side of the right triangle). This formula can be used to find the length of any unknown side of a right triangle, as long as the lengths of the other two sides are known. It can also be used to check whether a triangle is right-angled: if the formula of the Pythagorean theorem holds for the three sides of the triangle, then it can be affirmed that the triangle is right-angled. It is important to remember that the Pythagorean theorem formula is only applicable to right triangles, and cannot be used on other types of triangles.

Examples of the use of the Pythagorean theorem in mathematics

This theorem has many very useful applications within the world of physics, geometry, mathematics and even economics. It should be noted that these uses can only be applied to right triangles. Some of the most important uses of this theorem are:

- Find the length of the hypotenuse: Suppose we have a right triangle with legs of length 3 and 4 units. To find the length of the hypotenuse, we can use the formula of the Pythagorean theorem:

c² = a² + b²
c² = 3² + 4²
c² = 9 + 16
c² = 25
c = √25
c = 5

Therefore, the length of the hypotenuse is 5 units.

- Find the length of a leg: Suppose we have a right triangle with a leg of length 5 units and a hypotenuse of length 13 units. To find the length of the other leg, we can use the Pythagorean theorem formula:

a² = c² − b²
a² = 13² − 5²
a² = 169 − 25
a² = 144
a = √144
a = 12

Therefore, the length of the other leg is 12 units.

- Check if a triangle is right: Suppose we have a triangle with sides of length 3, 4, and 5 units.
To verify if it is a right triangle, we can use the formula of the Pythagorean theorem to check if the relationship is true:

a² + b² = c²
3² + 4² = 5²
9 + 16 = 25
25 = 25

Since the relationship is fulfilled, we can affirm that the triangle is right-angled.
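The three worked examples above translate directly into code. A minimal Python sketch (the helper names are my own):

```python
import math

def hypotenuse(a, b):
    """Length of the hypotenuse c, given the two legs: c² = a² + b²."""
    return math.sqrt(a**2 + b**2)

def leg(c, b):
    """Length of the missing leg a, given hypotenuse c and leg b: a² = c² - b²."""
    return math.sqrt(c**2 - b**2)

def is_right_triangle(a, b, c):
    """Check whether three side lengths satisfy the Pythagorean relation."""
    a, b, c = sorted((a, b, c))          # ensure c is the longest side
    return math.isclose(a**2 + b**2, c**2)

print(hypotenuse(3, 4))            # 5.0
print(leg(13, 5))                  # 12.0
print(is_right_triangle(3, 4, 5))  # True
```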
A data structure is a way of organizing and storing data in a computer so that it can be accessed and manipulated efficiently. In programming, data structures are an essential concept, as they allow us to store and retrieve data in a structured and organized manner.

Why are Data Structures important?

Data structures play a vital role in programming because they help optimize the efficiency of algorithms. By choosing the appropriate data structure for a particular task, developers can improve performance, reduce memory usage, and make their programs more scalable. Let's take a look at some commonly used data structures:

1. Arrays

An array is a collection of elements stored in contiguous memory locations. Elements in an array can be accessed using their index. Arrays are simple and efficient but have a fixed size.

2. Linked Lists

A linked list is a collection of nodes where each node contains both data and a reference (or link) to the next node in the sequence. Linked lists have dynamic size and allow efficient insertion and deletion operations.

3. Stacks

A stack is an abstract data type that follows the Last-In-First-Out (LIFO) principle. It allows two main operations: pushing (adding) elements onto the stack and popping (removing) elements from the top of the stack.

4. Queues

A queue is another abstract data type that follows the First-In-First-Out (FIFO) principle. It supports two primary operations: enqueue (adding elements to the rear) and dequeue (removing elements from the front).

5. Trees

Trees are hierarchical structures consisting of nodes connected by edges or links. They have a root node and can have child nodes. Trees are widely used in applications like file systems, organization charts, and search algorithms.

6. Graphs

Graphs consist of a set of vertices (nodes) connected by edges (links). They are used to represent relationships between objects. Graphs have various applications in social networks, routing algorithms, and recommendation systems.

Choosing the Right Data Structure

When deciding which data structure to use, consider the requirements of your program, such as the type and volume of data, the operations you need to perform, and the expected time complexity. Here are some guidelines to help you choose:

- If you need constant-time access to elements by index, use an array or an ArrayList (a dynamic array implementation).
- If you frequently insert or delete elements at arbitrary positions, consider using a linked list.
- If you require LIFO behavior (e.g., reversing order), use a stack.
- If you need FIFO behavior (e.g., processing tasks in the order received), use a queue.
- If you have hierarchical or parent-child relationships between data items, consider using trees.
- If you have complex relationships between objects with multiple connections, graphs may be suitable.

Data structures are essential tools for organizing and manipulating data efficiently in programming. By understanding different data structures and their characteristics, developers can make informed decisions about choosing the right one for their specific needs. Remember to consider factors such as performance requirements and expected operations when selecting a data structure for your program.
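A minimal Python sketch of three of the structures described above – a stack, a queue, and a singly linked list – intended as illustration rather than production code:

```python
from collections import deque

# Stack: LIFO. A Python list works because append/pop operate at the end.
stack = []
stack.append(1)          # push
stack.append(2)
print(stack.pop())       # 2 -- last in, first out

# Queue: FIFO. deque gives O(1) popleft, unlike list.pop(0).
queue = deque()
queue.append("a")        # enqueue at the rear
queue.append("b")
print(queue.popleft())   # "a" -- first in, first out

# Singly linked list: each node holds data and a reference to the next node.
class Node:
    def __init__(self, data, nxt=None):
        self.data = data
        self.next = nxt

head = Node(1, Node(2, Node(3)))   # 1 -> 2 -> 3
node = head
while node:                        # traversal follows the links
    print(node.data)
    node = node.next
```

Note the trade-offs the guidelines describe: the list-based stack and the deque-based queue both give constant-time pushes and pops at their respective ends, while the linked list makes insertion cheap at the cost of sequential access.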
Internet Search Results

Triangles - Equilateral, Isosceles and Scalene
Equilateral, Isosceles and Scalene. There are three special names given to triangles that tell how many sides (or angles) are equal. There can be 3, 2 or no equal sides/angles.

Triangle Classification - Cut-the-Knot
A triangle is scalene if all of its three sides are different (in which case, the three angles are also different). If two of its sides are equal, a triangle is called isosceles. A triangle with all three equal sides is called equilateral. S. Schwartzman's The Words of Mathematics explains the etymology (the origins) of the words. The first two are of Greek (and related) origins; the word ...

How to Find the Area of a Scalene Triangle | Sciencing
TL;DR (Too Long; Didn't Read): The area of a scalene triangle with base b and height h is given by 1/2 bh. If you know the lengths of all three sides, you can calculate the area using Heron's Formula without having to find the height.

Scalene Triangle - K-6 Geometric Shapes
The scalene triangle, like the equilateral triangle and the isosceles triangle, is identified by its line lengths. A scalene triangle will ALWAYS have NO sides equal in length. As with this stage of geometry, your child should easily be able to distinguish whether two lines are equal in length or not.

What is a Scalene Triangle? - Definition, Properties ...
Have you ever wondered how we classify triangles? In this lesson, we'll learn the definition of a scalene triangle, understand its properties, and look at some examples.

2015-10-02 - The perimeter of a scalene triangle is 14.5 cm. The ...
The perimeter of a scalene triangle is 14.5 cm. The longest side is twice that of the shortest side. Which equation can be used to find the side lengths if the longest side measures 6.2 cm?

C Program to Calculate Area of Scalene Triangle - Area ...
C Program for Beginners: Area of a Scalene Triangle. Properties of a scalene triangle: a scalene triangle does not have sides of equal length, and no angles of a scalene triangle are equal. To calculate the area we need at least two sides and the angle […]

Math Forum: Ask Dr. Math FAQ: Triangle Formulas
Scalene Triangle: A triangle with no two sides equal. (Note that the following formulas work with all triangles, not just scalene triangles.) P = a + b + c

Scalene Triangle Area Calculator & Scalene Properties
The triangle is one of the basic geometric shapes in geometry. It has 3 sides with 3 vertices (corners). Scalene triangles are one category of triangles, having 3 unequal sides and 3 unequal angles.
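The Heron's-formula approach mentioned in the snippets above is short enough to sketch directly. Here is a Python version (an adaptation for illustration, not the C program the listings reference):

```python
import math

def heron_area(a, b, c):
    """Area of a triangle from its three side lengths (Heron's formula)."""
    s = (a + b + c) / 2                                # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(3, 4, 5))   # 6.0 -- matches (1/2) * base * height = (1/2) * 3 * 4
```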
Problem 6-4

Sam collected data by measuring the pencils of her classmates. She recorded the length of the painted part of each pencil and its weight. Her data is shown on the graph at right.

Describe the association between the weight and the length of the painted part of the pencil. Remember to describe the form, direction, strength, and outliers. What shape do the points on the graph form? This is the form; be sure to also describe the direction, strength, and outliers. There is a strong positive linear association with an apparent outlier.

Make a conjecture (a guess) about why Sam's data had an outlier. How could you change the outlying point to make it correspond with the other points on the graph?

Sam created a line of best fit relating the weight of the pencil in grams to the length of the painted part of the pencil in centimeters. What does the slope represent in this context? Which number in the given equation represents the slope? What would the units of the slope be in this situation?

Sam's teacher has a pencil whose painted part is long; predict the weight of the teacher's pencil using the equation given in part (c).

Interpret the meaning of the y-intercept in context, and state its units. When the pencil is so short that there is no paint left, the y-intercept is the predicted weight.
Here, you will find the summary notes for coordinate geometry, circles, and proofs in plane geometry, written based on the O Level Additional Mathematics syllabus. We talked about what's in the syllabus for these topics in the article here. Most of what's covered in these topics is also covered in the Mathematics syllabus for O Level, so do revise, or at least memorize, the formulae in Mathematics when you are revising O Level Additional Mathematics. In my course on O Level Additional Mathematics, I'll go through all the essential formulae that you'll need to know for O Level Add Maths (including those that are already covered in elementary mathematics). You can get my course here.

For coordinate geometry, know these formulae:

- Equation of a straight line: y = mx + c, where m is the gradient and c is the y-intercept.
- Parallel lines have the same gradient.
- For 2 lines that are perpendicular to each other, multiplication of the gradients gives −1.
- For 2 points (x1, y1) and (x2, y2), the gradient of the line is given by: gradient = (y1 − y2)/(x1 − x2)

Distance between 2 points

For points A (x1, y1) and B (x2, y2), the distance AB is given by:

AB = √[(x2 − x1)² + (y2 − y1)²]

For points A (x1, y1) and B (x2, y2), the midpoint of AB is given by:

midpoint = ((x1 + x2)/2, (y1 + y2)/2)

To find the area of a figure from its vertices, use the shoelace method. For example, for a quadrilateral ABCD with vertices A (x1, y1), B (x2, y2), C (x3, y3) and D (x4, y4) taken in order, the shoelace method gives:

Area of ABCD = ½ |x1y2 − x2y1 + x2y3 − x3y2 + x3y4 − x4y3 + x4y1 − x1y4|

Coordinate Geometry for Circles

There are 2 ways to express the general equation of a circle:

- (x − a)² + (y − b)² = r², where the radius of the circle is r units and the centre of the circle is (a, b).
- x² + y² + 2gx + 2fy + c = 0, where the centre of the circle is (−g, −f) and the radius is √(g² + f² − c) units.

Proofs in Plane Geometry

Here, you'll need to prove similar triangles and/or congruent triangles. From past questions, the use of AAA, or proving that 2 pairs of angles in the 2 similar triangles are the same (and hence the third angle will be the same), is the most commonly used proof for O Level Additional Mathematics. Many questions also involve circles, so you will need to make use of the concepts learnt in properties of circles in this chapter for Additional Mathematics. While geometry questions involving circles in elementary mathematics mainly ask you to find an angle, in O Level Additional Mathematics you are asked to prove statements (e.g. angle ABC = angle EFG, and so on). There aren't many new concepts for this chapter (apart from the different ways in which questions are asked). New concepts here include:

- the alternate segment theorem
- the midpoint theorem

What's next after knowing the concepts? Apply them!

Want to learn O Level Add Maths on-demand?

Knowing the concepts and formulae above is not enough for you to pass the O Level Additional Mathematics exams; you need to know how to apply these concepts. So, you'll definitely need to try questions. In our course on O Level Additional Mathematics, we'll go through the types of questions that are asked in each topic, what concepts are tested, and how to apply them. With these skills, students finish the course as confident learners, able to apply what they have learned to tests and exams. If you want to ace these chapters in Additional Mathematics, definitely check out the O Level Additional Mathematics Course here.
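As a quick self-check of the formulae above, here is a small Python sketch (the helper names are my own, and the shoelace function is written for any simple polygon, not just quadrilaterals):

```python
import math

def gradient(p, q):
    """Gradient of the line through points p and q."""
    return (p[1] - q[1]) / (p[0] - q[0])

def distance(p, q):
    """Distance between two points."""
    return math.sqrt((p[0] - q[0])**2 + (p[1] - q[1])**2)

def midpoint(p, q):
    """Midpoint of the segment pq."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def shoelace_area(vertices):
    """Area of a simple polygon with vertices listed in order."""
    total = 0.0
    for i, (x1, y1) in enumerate(vertices):
        x2, y2 = vertices[(i + 1) % len(vertices)]   # wrap back to the first vertex
        total += x1 * y2 - x2 * y1
    return abs(total) / 2

print(distance((0, 0), (3, 4)))                           # 5.0
print(midpoint((0, 0), (3, 4)))                           # (1.5, 2.0)
print(shoelace_area([(0, 0), (1, 0), (1, 1), (0, 1)]))    # 1.0 (unit square ABCD)
```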
Scarcity is one of the fundamental issues in economics. Scarcity means we have to decide how and what to produce from limited resources, so there is a constant opportunity cost involved in making economic decisions. Economics addresses the problem of scarcity by placing a higher price on scarce goods: the high price discourages demand and encourages firms to develop alternatives.

How does economics solve the problem of scarcity?

Take a good like oil. The reserves of oil are limited: there is scarcity of the raw material. As we use up oil reserves, the supply of oil will start to fall.

Diagram of fall in supply

If there is scarcity of a good, its supply falls, and this causes the price to rise. In a free market this rising price acts as a signal, and therefore demand for the good falls (a movement along the demand curve). The higher price of the good also provides incentives for firms to:

- Look for alternative sources of the good, e.g. new supplies of oil from the Antarctic.
- Look for alternatives, e.g. solar-powered cars.
- If we were unable to find alternatives to oil, then we would have to respond by using less transport. People would cut back on transatlantic flights and make fewer trips.

Therefore, in a free market there are incentives for the market mechanism to deal with the issue of scarcity. However, the market can also exhibit market failure. For example, firms may not think about the future until it is too late; by the time the good becomes really scarce, there might not be any practical alternative that has been developed. Another problem with the free market is that since goods are rationed by price, there is a danger that some people cannot afford to buy certain goods because they have limited income. Therefore, economics is also concerned with the redistribution of income to help everyone afford basic necessities.

One solution to dealing with scarcity is to implement quotas on how much people can buy. An example of this is the rationing system that operated during the Second World War. Because there was a scarcity of food, the government set strict limits on how much people could get, to ensure that even people with low incomes had access to food – a basic necessity. A problem with quotas is that they can lead to a black market: for some goods, people are willing to pay high prices to get extra food. Therefore, it can be difficult to police a rationing system, but it was a necessary policy during the Second World War.
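As a stylized numerical illustration of the price mechanism described above (the linear demand and supply curves here are invented for illustration, not taken from any real market):

```python
# Demand: Qd = 100 - 2P.  Supply before scarcity: Qs = 20 + 2P.
# Equilibrium solves Qd = Qs, i.e. 100 - 2P = 20 + 2P.
def equilibrium(demand_intercept, supply_intercept, slope=2):
    price = (demand_intercept - supply_intercept) / (2 * slope)
    quantity = demand_intercept - slope * price
    return price, quantity

print(equilibrium(100, 20))   # (20.0, 60.0) -- original price and quantity
print(equilibrium(100, 4))    # (24.0, 52.0) -- supply falls, price rises, demand contracts
```

When the supply curve shifts inward (the intercept drops from 20 to 4), the equilibrium price rises and the quantity demanded falls, which is exactly the signalling and rationing role of the price described above.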
Self-esteem is an individual's subjective evaluation of their own worth. Self-esteem encompasses beliefs about oneself (for example, "I am unloved", "I am worthy") as well as emotional states, such as triumph, despair, pride, and shame. Smith and Mackie (2007) defined it by saying "The self-concept is what we think about the self; self-esteem, is the positive or negative evaluations of the self, as in how we feel about it." Self-esteem is an attractive psychological construct because it predicts certain outcomes, such as academic achievement, happiness, satisfaction in marriage and relationships, and criminal behavior. Self-esteem can apply to a specific attribute or globally. Psychologists usually regard self-esteem as an enduring personality characteristic (trait self-esteem), though normal, short-term variations (state self-esteem) also exist. Synonyms or near-synonyms of self-esteem include self-worth, self-regard, self-respect, and self-integrity. It is a controversial term among academics, as some believe that the concept does not exist and is better measured by extraversion and introversion trait levels; Jordan Peterson and Albert Ellis are two of the most prominent psychologists to criticize the term.

The concept of self-esteem has its origins in the 18th century, first expressed in the writings of the Scottish Enlightenment thinker David Hume. Hume posits that it is important to value and think well of oneself because doing so serves a motivational function that enables people to explore their full potential. The identification of self-esteem as a distinct psychological construct has its origins in the work of philosopher, psychologist, geologist, and anthropologist William James (1892). James identified multiple dimensions of the self, with two levels of hierarchy: processes of knowing (called the "I-self") and the resulting knowledge about the self (the "Me-self"). The observation of the self, and the storage of those observations by the I-self, creates three types of knowledge, which collectively account for the Me-self, according to James. These are the material self, the social self, and the spiritual self. The social self comes closest to self-esteem, comprising all characteristics recognized by others. The material self consists of representations of the body and possessions, and the spiritual self of descriptive representations and evaluative dispositions regarding the self. This view of self-esteem as the collection of an individual's attitudes toward oneself remains today.

In the mid-1960s, social psychologist Morris Rosenberg defined self-esteem as a feeling of self-worth and developed the Rosenberg self-esteem scale (RSES), which became the most widely used scale to measure self-esteem in the social sciences. In the early 20th century, the behaviorist movement minimized introspective study of mental processes, emotions, and feelings, replacing introspection with objective study through experiments on behaviors observed in relation to the environment. Behaviorism viewed the human being as an animal subject to reinforcements, and suggested placing psychology as an experimental science, similar to chemistry or biology. As a consequence, clinical trials on self-esteem were overlooked, since behaviorists considered the idea less liable to rigorous measurement. In the mid-20th century, the rise of phenomenology and humanistic psychology led to renewed interest in self-esteem.
Self-esteem then took a central role in personal self-actualization and in the treatment of psychic disorders. Psychologists started to consider the relationship between psychotherapy and the personal satisfaction of persons with high self-esteem as useful to the field. This led to new elements being introduced to the concept of self-esteem, including the reasons why people tend to feel less worthy and why people become discouraged or unable to meet challenges by themselves. As of 1997, the core self-evaluations approach included self-esteem as one of four dimensions that comprise one's fundamental appraisal of oneself – along with locus of control, neuroticism, and self-efficacy. The concept of core self-evaluations, as first examined by Judge, Locke, and Durham (1997), has since proven to have the ability to predict job satisfaction and job performance. Self-esteem may be essential to self-evaluation.

In public policy

The importance of self-esteem gained endorsement from some government and non-government groups starting around the 1970s, such that one can speak of a self-esteem movement. This movement has been cited as an example of promising evidence that psychological research can have an effect on forming public policy. The underlying idea of the movement was that low self-esteem was the root of problems for individuals, making it the root of societal problems and dysfunctions. A leading figure of the movement, psychologist Nathaniel Branden, stated: "[I] cannot think of a single psychological problem – from anxiety and depression, to fear of intimacy or of success, to spouse battery or child molestation – that is not traced back to the problem of low self-esteem". Self-esteem was believed by some to be a cultural phenomenon of Western individualistic societies, since low self-esteem was not found in collectivist countries such as Japan.

Concern about low self-esteem and its many presumed negative consequences led California assemblyman John Vasconcellos to work to set up and fund the Task Force on Self-Esteem and Personal and Social Responsibility in California in 1986. Vasconcellos argued that this task force could combat many of the state's problems – from crime and teen pregnancy to school underachievement and pollution. He compared increasing self-esteem to giving out a vaccine for a disease: it could help protect people from being overwhelmed by life's challenges. The task force set up committees in many California counties and formed a committee of scholars to review the available literature on self-esteem. This committee found very small associations between low self-esteem and its assumed consequences, ultimately showing that low self-esteem was not the root of all societal problems and not as important as the committee had originally thought. However, the authors of the paper that summarized the review of the literature still believed that self-esteem is an independent variable that affects major social problems. The task force disbanded in 1995, and the National Council for Self-Esteem – later the National Association for Self-Esteem (NASE) – was established to take on the task force's mission. Vasconcellos and Jack Canfield were members of its advisory board in 2003, and members of its masters' coalition included Anthony Robbins, Bernie Siegel, and Gloria Steinem.

Many early theories suggested that self-esteem is a basic human need or motivation. American psychologist Abraham Maslow included self-esteem in his hierarchy of human needs.
He described two different forms of "esteem": the need for respect from others, in the form of recognition, success, and admiration, and the need for self-respect, in the form of self-love, self-confidence, skill, or aptitude. Respect from others was believed to be more fragile and easily lost than inner self-esteem. According to Maslow, without the fulfillment of the self-esteem need, individuals will be driven to seek it and will be unable to grow and obtain self-actualization. Maslow also states that the healthiest expression of self-esteem "is the one which manifests in the respect we deserve for others, more than renown, fame, and flattery". Modern theories of self-esteem explore the reasons humans are motivated to maintain a high regard for themselves. Sociometer theory maintains that self-esteem evolved to check one's level of status and acceptance in one's social group. According to terror management theory, self-esteem serves a protective function and reduces anxiety about life and death.

Carl Rogers (1902–1987), an advocate of humanistic psychology, theorized that the origin of many people's problems is that they despise themselves and consider themselves worthless and incapable of being loved. This is why Rogers believed in the importance of giving unconditional acceptance to a client: when this was done, it could improve the client's self-esteem. In his therapy sessions with clients, he offered positive regard no matter what. Indeed, the concept of self-esteem has since been approached in humanistic psychology as an inalienable right for every person, summarized in the following sentence: every human being, without exception, by the mere fact of being human, is worthy of the unconditional respect of everybody else; each person deserves to esteem themselves and to be esteemed.

Self-esteem is typically assessed using self-report inventories. One of the most widely used instruments, the Rosenberg self-esteem scale (RSES), is a 10-item scale that requires participants to indicate their level of agreement with a series of statements about themselves. An alternative measure, the Coopersmith Inventory, uses a 50-question battery over a variety of topics and asks subjects whether they rate someone as similar or dissimilar to themselves. If a subject's answers demonstrate solid self-regard, the scale regards them as well adjusted. If those answers reveal some inner shame, it considers them to be prone to social deviance.

Implicit measures of self-esteem began to be used in the 1980s. These rely on indirect measures of cognitive processing thought to be linked to implicit self-esteem, including the name letter task (or initial preference task) and the Implicit Association Task. Such indirect measures are designed to reduce awareness of the process of assessment. When using them to assess implicit self-esteem, psychologists present self-relevant stimuli to the participant and then measure how quickly the person identifies positive or negative stimuli. For example, if a woman was given the self-relevant stimuli of female and mother, psychologists would measure how quickly she identified the negative word, evil, or the positive word, kind.

Development across the lifespan

Experiences in a person's life are a major source of how self-esteem develops. In the early years of a child's life, parents have a significant influence on self-esteem and can be considered the main source of positive and negative experiences a child will have.
Unconditional love from parents helps a child develop a stable sense of being cared for and respected. These feelings translate into later effects on self-esteem as the child grows older. Students in elementary school who have high self-esteem tend to have authoritative parents: caring, supportive adults who set clear standards for their child and allow them to voice their opinion in decision making. Although studies thus far have reported only a correlation between warm, supportive parenting styles (mainly authoritative and permissive) and children having high self-esteem, these parenting styles could easily be thought of as having some causal effect on self-esteem development. Childhood experiences that contribute to healthy self-esteem include being listened to, being spoken to respectfully, receiving appropriate attention and affection, and having accomplishments recognized and mistakes or failures acknowledged and accepted. Experiences that contribute to low self-esteem include being harshly criticized; being physically, sexually, or emotionally abused; being ignored, ridiculed, or teased; or being expected to be "perfect" all the time.

During the school-aged years, academic achievement is a significant contributor to self-esteem development. Consistently achieving success or consistently failing will have a strong effect on students' individual self-esteem. However, students can also experience low self-esteem while in school: for example, they may not have academic achievements, or they may live in a troubled environment outside of school. Issues like these can cause adolescents to doubt themselves. Social experiences are another important contributor to self-esteem. As children go through school, they begin to understand and recognize differences between themselves and their classmates. Using social comparisons, children assess whether they did better or worse than classmates in different activities. These comparisons play an important role in shaping the child's self-esteem and influence the positive or negative feelings they have about themselves. As children go through adolescence, peer influence becomes much more important. Adolescents make appraisals of themselves based on their relationships with close friends. Successful relationships among friends are very important to the development of high self-esteem for children. Social acceptance brings about confidence and produces high self-esteem, whereas rejection from peers and loneliness bring about self-doubt and produce low self-esteem.

Adolescence shows an increase in self-esteem that continues into young adulthood and middle age. A decrease is seen from middle age to old age, with varying findings on whether it is a small or large decrease. Reasons for the variability could be differences in health, cognitive ability, and socioeconomic status in old age. No differences have been found between males and females in their development of self-esteem. Multiple cohort studies show that there is no difference in the life-span trajectory of self-esteem between generations due to societal changes such as grade inflation in education or the presence of social media. High levels of mastery, low risk taking, and better health are ways to predict higher self-esteem. In terms of personality, emotionally stable, extroverted, and conscientious individuals experience higher self-esteem.
These predictors show that self-esteem has trait-like qualities, remaining stable over time like personality and intelligence; however, this does not mean it cannot be changed. Hispanic adolescents have slightly lower self-esteem than their black and white peers, but slightly higher levels by age 30. African Americans have a sharper increase in self-esteem in adolescence and young adulthood compared with whites; during old age, however, they experience a more rapid decline in self-esteem. Shame can be a contributor to low self-esteem. Feelings of shame usually occur because of a situation where the social self is devalued, such as a socially evaluated poor performance. A poor performance leads to heightened psychological responses that indicate a threat to the social self, namely a decrease in social self-esteem and an increase in shame. This increase in shame can be helped with self-compassion.

Real self, ideal self, and dreaded self

There are three levels of self-evaluation development in relation to the real self, the ideal self, and the dreaded self. The real, ideal, and dreaded selves develop in children in a sequential pattern on cognitive levels.

- Moral judgment stages: Individuals describe their real, ideal, and dreaded selves with stereotypical labels, such as "nice" or "bad". Individuals describe their ideal and real selves in terms of dispositions for action or as behavioral habits. The dreaded self is often described as being unsuccessful or as having bad habits.
- Ego development stages: Individuals describe their ideal and real selves in terms of traits that are based on attitudes as well as actions. The dreaded self is often described as having failed to meet social expectations or as self-centered.
- Self-understanding stages: Individuals describe their ideal and real selves as having unified identities or characters. Descriptions of the dreaded self focus on a failure to live up to one's ideals or role expectations, often because of real-world problems.

This development brings with it increasingly complicated and encompassing moral demands. This is the level at which individuals' self-esteem can suffer, because they do not feel as though they are living up to certain expectations. This feeling will moderately affect one's self-esteem, with an even larger effect seen when individuals believe they are becoming their dreaded selves.

People with a healthy level of self-esteem:

- Firmly believe in certain values and principles, and are ready to defend them even when finding opposition, feeling secure enough to modify them in light of experience.
- Are able to act according to what they think to be the best choice, trusting their own judgment, and not feeling guilty when others do not like their choice.
- Do not lose time worrying excessively about what happened in the past, nor about what could happen in the future. They learn from the past and plan for the future, but live in the present intensely.
- Fully trust in their capacity to solve problems, not hesitating after failures and difficulties. They ask others for help when they need it.
- Consider themselves equal in dignity to others, rather than inferior or superior, while accepting differences in certain talents, personal prestige or financial standing.
- Understand how they are an interesting and valuable person for others, at least for those with whom they have a friendship.
- Resist manipulation, and collaborate with others only if it seems appropriate and convenient.
- Admit and accept different internal feelings and drives, either positive or negative, revealing those drives to others only when they choose.
- Are able to enjoy a great variety of activities.
- Are sensitive to the feelings and needs of others; respect generally accepted social rules, and claim no right or desire to prosper at others' expense.
- Can work toward finding solutions and voice discontent without belittling themselves or others when challenges arise.

Secure vs. defensive

A person can have high self-esteem and hold it confidently, not needing reassurance from others to maintain their positive self-view. Others with defensive high self-esteem may also report positive self-evaluations on the Rosenberg Scale, as all high self-esteem individuals do; however, their positive self-views are fragile and vulnerable to criticism. Defensive high self-esteem individuals internalize subconscious self-doubts and insecurities, causing them to react very negatively to any criticism they may receive. These individuals need constant positive feedback from others to maintain their feelings of self-worth. The necessity of repeated praise can be associated with boastful, arrogant behavior, or sometimes even with aggressive and hostile feelings toward anyone who questions the individual's self-worth, an example of threatened egotism.

A study published in the International Journal of Educational Psychology used a sample of 383 Malaysian undergraduates participating in work-integrated learning (WIL) programs across five public universities to test the relationship between self-esteem and other psychological attributes such as self-efficacy and self-confidence. The results demonstrated a positive and significant relationship between self-esteem and both self-confidence and self-efficacy: students with higher self-esteem performed better at university than those with lower self-esteem. The authors concluded that higher education institutions and employers should emphasize the importance of undergraduates' self-esteem development.

Implicit, explicit, narcissism and threatened egotism

Implicit self-esteem refers to a person's disposition to evaluate themselves positively or negatively in a spontaneous, automatic, or unconscious manner. It contrasts with explicit self-esteem, which entails more conscious and reflective self-evaluation. Both are subtypes of self-esteem proper.

Narcissism is a disposition people may have that represents an excessive love for one's self. It is characterized by an inflated view of self-worth. Individuals who score high on narcissism measures, such as Robert Raskin's 40-item true-or-false test, would likely answer true to statements such as "If I ruled the world, it would be a much better place." There is only a moderate correlation between narcissism and self-esteem; that is, an individual can have high self-esteem but low narcissism, or can be a conceited, obnoxious person with high scores on both self-esteem and narcissism.

Low self-esteem can result from various factors, including genetic factors, physical appearance or weight, mental health issues, socioeconomic status, significant emotional experiences, social stigma, peer pressure, or bullying. A person with low self-esteem may exhibit some of the following characteristics:

- Heavy self-criticism and dissatisfaction.
- Hypersensitivity to criticism, with resentment against critics and feelings of being attacked.
- Chronic indecision and an exaggerated fear of mistakes.
- Excessive will to please and unwillingness to displease any petitioner.
- Perfectionism, which can lead to frustration when perfection is not achieved.
- Neurotic guilt, dwelling on or exaggerating the magnitude of past mistakes.
- Floating hostility and general defensiveness and irritability without any proximate cause.
- Pessimism and a general negative outlook.
- Envy, invidiousness, or general resentment.
- Seeing temporary setbacks as permanent, intolerable conditions.

Individuals with low self-esteem tend to be critical of themselves. Some depend on the approval and praise of others when evaluating self-worth. Others may measure their likability in terms of successes: they will accept themselves if they succeed but not if they fail. People who suffer from chronic low self-esteem are at a higher risk of experiencing psychotic disorders, and low self-esteem is closely linked to the formation of psychotic symptoms as well. Metacognitive therapy, the EMDR technique, mindfulness-based cognitive therapy, rational emotive behavior therapy, cognitive behavioral therapy, and trait and construct therapies have been shown to improve a patient's self-esteem.

The three states

This classification, proposed by Martin Ross, distinguishes three states of self-esteem in relation to the "feats" (triumphs, honors, virtues) and the "anti-feats" (defeats, embarrassment, shame, etc.) of the individual.

The individual does not regard themselves as valuable or lovable. They may be overwhelmed by defeat or shame, or see themselves as such, and they name their "anti-feat". For example, if they consider that being over a certain age is an anti-feat, they define themselves with the name of their anti-feat and say, "I am old". They express actions and feelings such as pity and insulting themselves, and they may become paralyzed by their sadness.

The individual has a generally positive self-image. However, their self-esteem is also vulnerable to the perceived risk of an imminent anti-feat (such as defeat, embarrassment, shame, or discredit); consequently, they are often nervous and regularly use defense mechanisms. A typical protection mechanism of those with vulnerable self-esteem is avoiding decision-making. Although such individuals may outwardly exhibit great self-confidence, the underlying reality may be just the opposite: the apparent self-confidence is indicative of their heightened fear of anti-feats and the fragility of their self-esteem. They may also try to blame others to protect their self-image from situations that would threaten it. They may employ defense mechanisms such as attempting to lose at games and other competitions, in order to protect their self-image by publicly dissociating themselves from a need to win, and asserting an independence from social acceptance that they may deeply desire. In this deep fear of being unaccepted by their peers, they may make poor life choices and risky decisions.

People with strong self-esteem have a positive self-image and enough strength that anti-feats do not subdue their self-esteem. They have less fear of failure. These individuals appear humble and cheerful, and this shows a certain strength not to boast about feats and not to be afraid of anti-feats. They are capable of fighting with all their might to achieve their goals because, if things go wrong, their self-esteem will not be affected.
They can acknowledge their own mistakes precisely because their self-image is strong, and this acknowledgment will not impair or affect their self-image. They live with less fear of losing social prestige, and with more happiness and general well-being. However, no type of self-esteem is indestructible; due to certain situations or circumstances in life, one can fall from this level into any other state of self-esteem.

Contingent vs. non-contingent

Contingent self-esteem is marked by instability, unreliability, and vulnerability. Persons lacking non-contingent self-esteem are "predisposed to an incessant pursuit of self-value". Because the pursuit of contingent self-esteem is based on receiving approval, it is doomed to fail: no one receives constant approval, and disapproval often evokes depression. Furthermore, fear of disapproval inhibits activities in which failure is possible.

Non-contingent self-esteem is described as true, stable, and solid. It springs from a belief that one is "acceptable period, acceptable before life itself, ontologically acceptable". To believe that one is "ontologically acceptable" is to believe that one's acceptability is "the way things are without contingency". In this belief, as expounded by the theologian Paul Tillich, acceptability is not based on a person's virtue. It is an acceptance given "in spite of our guilt, not because we have no guilt".

The psychiatrist Thomas A. Harris drew on Tillich for his classic I'm OK – You're OK, which addresses non-contingent self-esteem. Harris translated Tillich's "acceptable" with the vernacular "OK", a term that means "acceptable". The Christian message, said Harris, is not "YOU CAN BE OK, IF"; it is "YOU ARE ACCEPTED, unconditionally". A secure non-contingent self-esteem springs from the belief that one is ontologically acceptable and accepted.

Whereas global self-esteem addresses how individuals appraise themselves in their entirety, domain-specific facets of self-esteem relate to how they appraise themselves in various pertinent domains of life. Such functionally distinct facets may comprise self-evaluations in the social, emotional, body-related, school performance-related, and creative-artistic domains. They have been found to be predictive of outcomes related to psychological functioning, health, education, and work. Low self-esteem in the social domain (i.e., self-perceived social competence), for example, has been repeatedly identified as a risk factor for bullying victimization.

Abraham Maslow states that psychological health is not possible unless the essential core of the person is fundamentally accepted, loved, and respected by others and by oneself. Self-esteem allows people to face life with more confidence, benevolence, and optimism, and thus to reach their goals and self-actualize more easily. Self-esteem may make people convinced they deserve happiness. Understanding this is fundamental and universally beneficial, since the development of positive self-esteem increases the capacity to treat other people with respect, benevolence, and goodwill, thus favoring rich interpersonal relationships and avoiding destructive ones. For Erich Fromm, the love of others and the love of ourselves are not alternatives; on the contrary, an attitude of love toward themselves will be found in all those who are capable of loving others. Self-esteem allows creativity in the workplace and is an especially critical condition for the teaching professions.
José-Vicente Bonet claims that the importance of self-esteem is obvious, because a lack of self-esteem is, he says, not a loss of esteem from others but self-rejection. Bonet claims that this corresponds to major depressive disorder. Freud likewise claimed that the depressive has suffered "an extraordinary diminution in his self-regard, an impoverishment of his ego on a grand scale... He has lost his self-respect". The Yogyakarta Principles, a document on international human rights law, addresses discriminatory attitudes toward LGBT people that lower their self-esteem and leave them subject to human rights violations, including human trafficking. The World Health Organization recommends in "Preventing Suicide", published in 2000, that strengthening students' self-esteem is important for protecting children and adolescents against mental distress and despondency, enabling them to cope adequately with difficult and stressful life situations. Beyond increased happiness, higher self-esteem is also known to correlate with a better ability to cope with stress and a greater likelihood of taking on difficult tasks relative to those with low self-esteem.

From the late 1970s to the early 1990s, many Americans assumed as a matter of course that students' self-esteem was a critical factor in the grades they earned in school, in their relationships with their peers, and in their later success in life. Under this assumption, some American groups created programs aimed at increasing students' self-esteem. Until the 1990s, little peer-reviewed and controlled research took place on this topic. Peer-reviewed research undertaken since then has not validated the earlier assumptions. Recent research indicates that inflating students' self-esteem in and of itself has no positive effect on grades. Roy Baumeister has shown that inflating self-esteem by itself can actually decrease grades. The correlation between self-esteem and academic results does not mean that high self-esteem causes high academic achievement; it may simply mean that high self-esteem results from high academic performance, mediated by other variables such as social interactions and life events.

"Attempts by pro-esteem advocates to encourage self-pride in students solely by reason of their uniqueness as human beings will fail if feelings of well-being are not accompanied by well-doing. It is only when students engage in personally meaningful endeavors for which they can be justifiably proud that self-confidence grows, and it is this growing self-assurance that in turn triggers further achievement."

High self-esteem correlates highly with self-reported happiness; whether this is a causal relationship has not been established. The relationship between self-esteem and life satisfaction is stronger in individualistic cultures. Additionally, self-esteem has been found to be related to forgiveness in close relationships: people with high self-esteem are more forgiving than people with low self-esteem.

In research conducted in 2014 by Robert S. Chavez and Todd F. Heatherton, it was found that self-esteem is related to the connectivity of the frontostriatal circuit. The frontostriatal pathway connects the medial prefrontal cortex, which deals with self-knowledge, to the ventral striatum, which deals with feelings of motivation and reward.
Stronger anatomical pathways are correlated with higher long-term self-esteem, while stronger functional connectivity is correlated with higher short-term self-esteem.

Criticism and controversy

The American psychologist Albert Ellis criticized on numerous occasions the concept of self-esteem as essentially self-defeating and ultimately destructive. Although acknowledging the human propensity and tendency toward ego rating as innate, he critiqued the philosophy of self-esteem as unrealistic, illogical, and self- and socially destructive, often doing more harm than good. Questioning the foundations and usefulness of generalized ego strength, he claimed that self-esteem is based on arbitrary definitional premises and on over-generalized, perfectionistic, and grandiose thinking. Acknowledging that rating and valuing behaviors and characteristics is functional and even necessary, he saw the rating and valuing of human beings' totality and total selves as irrational and unethical. The healthier alternative to self-esteem, according to him, is unconditional self-acceptance and unconditional other-acceptance. Rational Emotive Behavior Therapy is a psychotherapy based on this approach.

- "There seem to be only two clearly demonstrated benefits of high self-esteem....First, it increases initiative, probably because it lends confidence. People with high self-esteem are more willing to act on their beliefs, to stand up for what they believe in, to approach others, to risk new undertakings. (This unfortunately includes being extra willing to do stupid or destructive things, even when everyone else advises against them.)...It can also lead people to ignore sensible advice as they stubbornly keep wasting time and money on hopeless causes"

For persons with low self-esteem, any positive stimulus will temporarily raise self-esteem. Possessions, sex, success, or physical appearance can therefore produce a rise in self-esteem, but the rise is ephemeral at best. Such attempts to raise one's self-esteem through positive stimuli produce a "boom or bust" pattern: "compliments and positive feedback" produce a boost, but a bust follows a lack of such feedback. For a person whose "self-esteem is contingent", success is "not extra sweet", but "failure is extra bitter".

Life satisfaction, happiness, healthy behavioral practices, perceived efficacy, and academic success and adjustment have been associated with having high levels of self-esteem (Harter, 1987; Huebner, 1991; Lipschitz-Elhawi & Itzhaky, 2005; Rumberger, 1995; Swenson & Prelow, 2005; Yarcheski & Mahon, 1989). However, a common mistake is to think that loving oneself is necessarily equivalent to narcissism, as opposed, for example, to what Erik Erikson calls "a post-narcissistic love of the ego". People with a healthy self-esteem accept and love themselves unconditionally, acknowledging both virtues and faults in the self, and yet, in spite of everything, are able to continue loving themselves. In narcissists, by contrast, "uncertainty about their own worth gives rise to...a self-protective, but often totally spurious, aura of grandiosity", producing the class "of narcissists, or people with very high, but insecure, self-esteem...
fluctuating with each new episode of social praise or rejection." Narcissism can thus be seen as a symptom of fundamentally low self-esteem, that is, a lack of love toward oneself, though often accompanied by "an immense increase in self-esteem" based on "the defense mechanism of denial by overcompensation." "Idealized love of self...rejected the part of him" that he denigrates – "this destructive little child" within. Instead, the narcissist emphasizes their virtues in the presence of others, just to try to convince themselves that they are a valuable person and to stop feeling ashamed of their faults; such "people with unrealistically inflated self-views, which may be especially unstable and highly vulnerable to negative information,...tend to have poor social skills."

See also

- Body image
- Clinical depression
- Dunning–Kruger effect
- Eating disorder
- Emotional competence
- Fear of negative evaluation
- Gumption trap
- Health-related embarrassment
- Inner critic
- Invisible support
- Law of Jante
- List of confidence tricks
- Optimism bias
- Outline of self
- Overconfidence effect
- Self-esteem functions
- Self-esteem instability
- Self-evaluation maintenance theory
- Self image
- Social anxiety
- Social phobia

References

- Hewitt, John P. (2009). Oxford Handbook of Positive Psychology. Oxford University Press. pp. 217–24. ISBN 978-0195187243.
- Smith, E. R.; Mackie, D. M. (2007). Social Psychology (Third ed.). Hove: Psychology Press. ISBN 978-1841694085.
- Marsh, H.W. (1990). "Causal ordering of academic self-concept and academic achievement: A multiwave, longitudinal path analysis". Journal of Educational Psychology. 82 (4): 646–56. doi:10.1037/0022-0622.214.171.1246.
- Urbina Robalino, Gisella del Rocio; Eugenio Piloso, Mery Aracely (2015). Efectos de la violencia intrafamiliar en el autoestima de los estudiantes de octavo y noveno año de la Escuela de educación básica 11 de Diciembre [Effects of domestic violence on the self-esteem of eighth- and ninth-year students at the 11 de Diciembre basic education school] (bachelor thesis) (in Spanish). Advised by S. Yagual. Ecuador: Universidad Estatal Península de Santa Elena.
- Baumeister, R. F.; Campbell, J. D.; Krueger, J. I.; Vohs, K. D. (2003). "Does High Self-Esteem Cause Better Performance, Interpersonal Success, Happiness, or Healthier Lifestyles?". Psychological Science in the Public Interest. 4 (1): 1–44. doi:10.1111/1529-1006.01431. ISSN 1529-1006. PMID 26151640.
- Orth U.; Robbins R.W. (2014). "The development of self-esteem". Current Directions in Psychological Science. 23 (5): 381–87. doi:10.1177/0963721414547414. S2CID 38796272.
- "Great Books Online – Quotes, Poems, Novels, Classics and hundreds more". Bartleby.com. Archived from the original on 25 January 2009. Retrieved 11 December 2017.
- "Bartleby.com: Great Books Online – Quotes, Poems, Novels, Classics and hundreds more". Bartleby.com. Archived from the original on 25 January 2009. Retrieved 11 December 2017.
- "Great Books Online – Quotes, Poems, Novels, Classics and hundreds more". Bartleby.com. Archived from the original on 24 January 2009. Retrieved 11 December 2017.
- The Macquarie Dictionary. Compare The Dictionary of Psychology by Raymond Joseph Corsini. Psychology Press, 1999. ISBN 158391028X. Online via Google Book Search.
- "Jordan B. Peterson on How to Build Confidence, Cultivate Inner Peace, & the Psychology of MONEY (Part 2)". 7 April 2021.
- Ellis, A. (2001). Feeling better, getting better, staying better. Impact Publishers.
- "Hume Texts Online". davidhume.org. Retrieved 2019-12-15.
- Morris, William Edward; Brown, Charlotte R.
(2019), "David Hume", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Summer 2019 ed.), Metaphysics Research Lab, Stanford University, retrieved 2019-12-17 - James, W. (1892). Psychology: The briefer course. New York: Henry Holt. - Baumeister, Roy F.; Smart, L.; Boden, J. (1996). "Relation of threatened egotism to violence and aggression: The dark side of self-esteem". Psychological Review. 103 (1): 5–33. CiteSeerX 10.1.1.1009.3747. doi:10.1037/0033-295X.103.1.5. PMID 8650299. - José-Vicente Bonet. Sé amigo de ti mismo: manual de autoestima. 1997. Ed. Sal Terrae. Maliaño (Cantabria, España). ISBN 978-8429311334. Fukuyama, Francis (1992). The End of History and the Last Man. New York: Simon and Schuster (published 2006). pp. xvi–xvii. ISBN 978-0743284554. Retrieved 2018-07-29. [...] Plato in the Republic [...] noted that there were three parts to the soul, a desiring part, a reasoning part, and a part that he called thymos, or 'spiritedness.' [...] The propensity to feel self-esteem arises out of the part of the soul called thymos. - Judge, T. A.; Locke, E. A.; Durham, C. C. (1997). "The dispositional causes of job satisfaction: A core evaluations approach". Research in Organizational Behavior. 19: 151–188. - Bono, J. E.; Judge, T. A. (2003). "Core self-evaluations: A review of the trait and its role in job satisfaction and job performance". European Journal of Personality. 17 (Suppl1): S5–S18. doi:10.1002/per.481. S2CID 32495455. - Dormann, C.; Fay, D.; Zapf, D.; Frese, M. (2006). "A state-trait analysis of job satisfaction: On the effect of core self-evaluations". Applied Psychology: An International Review. 55 (1): 27–51. doi:10.1111/j.1464-0597.2006.00227.x. - Judge, T. A.; Locke, E. A.; Durham, C. C.; Kluger, A. N. (1998). "Dispositional effects on job and life satisfaction: The role of core evaluations". Journal of Applied Psychology. 83 (1): 17–34. doi:10.1037/0021-9010.83.1.17. PMID 9494439. - Judge, T. A.; Bono, J. E. (2001). "Relationship of core self-evaluations traits – self-esteem, generalized self-efficacy, locus of control, and emotional stability – with job satisfaction and job performance: A meta-analysis". Journal of Applied Psychology. 86 (1): 80–92. doi:10.1037/0021-9010.86.1.80. PMID 11302235. - Nolan, James L. (1998). The Therapeutic State: Justifying Government at Century's End. NYU Press. pp. 152–61. ISBN 978-0814757918. Retrieved 2013-05-06. - Heine S. J.; Lehman D. R.; Markus H. R.; Kitayama S. (1999). "Is there a universal need for positive self-regard?". Psychological Review. 106 (4): 766–94. CiteSeerX 10.1.1.321.2156. doi:10.1037/0033-295X.106.4.766. PMID 10560328. - Maslow, A. H. (1987). Motivation and Personality (Third ed.). New York: Harper & Row. ISBN 978-0060419875. - Greenberg, J. (2008). "Understanding the vital human quest for self-esteem". Perspectives on Psychological Science. 3 (1): 48–55. doi:10.1111/j.1745-6916.2008.00061.x. PMID 26158669. S2CID 34963030. - Wickman S.A.; Campbell C. (2003). "An analysis of how Carl Rogers enacted client-centered conversation with Gloria". Journal of Counseling & Development. 81 (2): 178–84. doi:10.1002/j.1556-6678.2003.tb00239.x. - Rosenberg, M. (1965). Society and the adolescent self-image. Princeton, NJ: Princeton University Press. doi:10.1515/9781400876136. ISBN 9781400876136. - "MacArthur SES & Health Network – Research". Macses.ucsf.edu. Retrieved 11 December 2017. - Slater, Lauren (3 Feb 2002). "The Trouble With Self-Esteem". The New York Times. Retrieved 27 Nov 2012. 
- Bosson J.K.; Swann W.B.; Pennebaker J.W. (2000). "Stalking the perfect measure of implicit self esteem: The blind men and the elephant revisited?". Journal of Personality & Social Psychology. 79 (4): 631–43. CiteSeerX 10.1.1.371.9919. doi:10.1037/0022-35126.96.36.1991. PMID 11045743.
- Koole, S. L., & Pelham, B. W. (2003). On the nature of implicit self-esteem: The case of the name letter effect. In S. Spencer, S. Fein, & M. P. Zanna (Eds.), Motivated social perception: The Ontario Symposium (pp. 93–116). Hillsdale, NJ: Lawrence Erlbaum.
- Stieger, S.; Burger, C. (2013). "More complex than previously thought: New insights into the optimal administration of the Initial Preference Task". Self and Identity. 12 (2): 201–216. doi:10.1080/15298868.2012.655897. S2CID 142080983.
- Greenwald, A. G.; McGhee, D. E.; Schwartz, J. L. K. (1998). "Measuring individual differences in implicit cognition: The Implicit Association Test" (PDF). Journal of Personality and Social Psychology. 74 (6): 1464–1480. doi:10.1037//0022-35188.8.131.524. PMID 9654756.
- Hetts J.J.; Sakuma M.; Pelham B.W. (1999). "Two roads to positive regard: Implicit and explicit self-evaluation and culture". Journal of Experimental Social Psychology. 35 (6): 512–59. doi:10.1006/jesp.1999.1391.
- Raboteg-Saric Z.; Sakic M. (2014). "Relations of parenting styles and friendship quality to self-esteem, life satisfaction, & happiness in adolescents". Applied Research in the Quality of Life. 9 (3): 749–65. doi:10.1007/s11482-013-9268-0. S2CID 143419028.
- Olsen, J. M.; Breckler, S. J.; Wiggins, E. C. (2008). Social Psychology Alive (First Canadian ed.). Toronto: Thomson Nelson. ISBN 978-0176224523.
- Coopersmith, S. (1967). The Antecedents of Self-Esteem. New York: W. H. Freeman. ISBN 9780716709121.
- Isberg, R. S.; Hauser, S. T.; Jacobson, A. M.; Powers, S. I.; Noam, G.; Weiss-Perry, B.; Fullansbee, D. (1989). "Parental contexts of adolescent self-esteem: A developmental perspective". Journal of Youth and Adolescence. 18 (1): 1–23. doi:10.1007/BF02139243. PMID 24271601. S2CID 35823262.
- Lamborn, S. D.; Mounts, N. S.; Steinberg, L.; Dornbusch, S. M. (1991). "Patterns of Competence and Adjustment among Adolescents from Authoritative, Authoritarian, Indulgent, and Neglectful Families". Child Development. 62 (5): 1049–65. doi:10.1111/j.1467-8624.1991.tb01588.x. PMID 1756655.
- "Self-Esteem." Self-Esteem. N.p., n.d. Web. 27 Nov. 2012.
- Crocker, J.; Sommers, S. R.; Luhtanen, R. K. (2002). "Hopes Dashed and Dreams Fulfilled: Contingencies of Self-Worth and Graduate School Admissions". Personality and Social Psychology Bulletin. 28 (9): 1275–86. doi:10.1177/01461672022812012. S2CID 143985402.
- Butler, R. (1998). "Age Trends in the Use of Social and Temporal Comparison for Self-Evaluation: Examination of a Novel Developmental Hypothesis". Child Development. 69 (4): 1054–73. doi:10.1111/j.1467-8624.1998.tb06160.x. PMID 9768486.
- Pomerantz, E. M.; Ruble, D. N.; Frey, K. S.; Grenlich, F. (1995). "Meeting Goals and Confronting Conflict: Children's Changing Perceptions of Social Comparison". Child Development. 66 (3): 723–38. doi:10.1111/j.1467-8624.1995.tb00901.x. PMID 7789198.
- Thorne, A.; Michaelieu, Q. (1996). "Situating Adolescent Gender and Self-Esteem with Personal Memories". Child Development. 67 (4): 1374–90. doi:10.1111/j.1467-8624.1996.tb01802.x. PMID 8890489.
- Leary, M. R.; Baumeister, R. F. (2000). "The Nature and Function of Self-Esteem: Sociometer Theory". In Zanna, M. P. (ed.). Advances in Experimental Social Psychology. 32.
San Diego, CA: Academic Press. pp. 1–62. ISBN 978-0120152322.
- Erol, R. Y.; Orth, U. (2011). "Self-Esteem Development From Age 14 to 30 Years: A Longitudinal Study". Journal of Personality and Social Psychology. 101 (3): 607–19. doi:10.1037/a0024299. PMID 21728448.
- Maldonado L.; Huang Y.; Chen R.; Kasen S.; Cohen P.; Chen H. (2013). "Impact of early adolescent anxiety disorders on self-esteem development from adolescence to young adulthood". Journal of Adolescent Health. 53 (2): 287–292. doi:10.1016/j.jadohealth.2013.02.025. PMC 3725205. PMID 23648133.
- Ehrenreich, Barbara (January 2007). Patterns for College Writing (12th ed.). Boston: Bedford/St. Martin's. p. 680.
- Gruenewald T.L.; Kemeny M.E.; Aziz N.; Fahey J.L. (2004). "Acute threat to the social self: Shame, social self-esteem, and cortisol activity". Psychosomatic Medicine. 66 (6): 915–24. CiteSeerX 10.1.1.505.5316. doi:10.1097/01.psy.0000143639.61693.ef. PMID 15564358. S2CID 29504978.
- Johnson E.A.; O'Brien K.A. (2013). "Self-compassion soothes the savage ego-threat system: Effects on negative affect, shame, rumination, & depressive symptoms". Journal of Social and Clinical Psychology. 32 (9): 939–963. doi:10.1521/jscp.2013.32.9.939.
- Power, F. Clark; Khmelkov, Vladimir T. (1998). "Character development and self-esteem: Psychological foundations and educational implications". International Journal of Educational Research. 27 (7): 539–51. doi:10.1016/S0883-0355(97)00053-0.
- Adapted from Hamachek, D. E. (1971). Encounters with the Self. New York: Rinehart. ISBN 9780030777851.
- New, Michelle (March 2012). "Developing Your Child's Self-Esteem". KidsHealth. Archived from the original on 2012-11-23. Retrieved 27 November 2012.
- Jordan, C. H.; Spencer, S. J.; Zanna, M. P. (2003). "'I love me...I love me not': Implicit self-esteem, explicit self-esteem and defensiveness". In Spencer, S. J.; Fein, S.; Zanna, M. P.; Olsen, J. M. (eds.). Motivated social perception: The Ontario symposium. 9. Mahwah, NJ: Erlbaum. pp. 117–45. ISBN 978-0805840360.
- Jordan, C. H.; Spencer, S. J.; Zanna, M. P.; Hoshino-Browne, E.; Correll, J. (2003). "Secure and defensive high self-esteem" (PDF). Journal of Personality and Social Psychology. 85 (5): 969–78. doi:10.1037/0022-35184.108.40.2069. PMID 14599258.
- Jaaffar, Amar Hisham; Ibrahim, Hazril Izwar; Rajadurai, Jegatheesan; Sohail, M. Sadiq (2019-06-24). "Psychological Impact of Work-Integrated Learning Programmes in Malaysia: The Moderating Role of Self-Esteem on Relation between Self-Efficacy and Self-Confidence". International Journal of Educational Psychology. 8 (2): 188–213. doi:10.17583/ijep.2019.3389. ISSN 2014-3591.
- Barbara Krahe, The Social Psychology of Aggression (Psychology Press, 2013), 75.
- Sedikides, C.; Rudich, E. A.; Gregg, A. P.; Kumashiro, M.; Rusbult, C. (2004). "Are normal narcissists psychologically healthy? Self-esteem matters". Journal of Personality and Social Psychology. 87 (3): 400–16. doi:10.1037/0022-35220.127.116.110. PMID 15382988.
- "Narcissism vs. Authentic Self-Esteem". afterpsychotherapy.com. 17 January 2011. Retrieved 22 October 2017.
- Morf, C. C.; Rhodewalt, F. (1993). "Narcissism and self-evaluation maintenance: Explorations in object relations". Personality and Social Psychology Bulletin. 19 (6): 668–76. doi:10.1177/0146167293196001. S2CID 145525829.
- Twenge, J. M.; Campbell, W. K. (2003).
"'Isn't it fun to get the respect we're going to deserve?' Narcissism, social rejection, and aggression". Personality and Social Psychology Bulletin. 29 (2): 261–72. doi:10.1177/0146167202239051. PMID 15272953. S2CID 29837581. - Jones FC (2003). "Low self esteem". Chicago Defender. p. 33. ISSN 0745-7014. - Adapted, Gill J. "Indispensable Self-Esteem". Human Development. 1: 1980. - Baldwin, M. W.; Sinclair, L. (1996). "Self-esteem and 'if...then' contingencies of interpersonal acceptance". Journal of Personality and Social Psychology. 71 (6): 1130–41. doi:10.1037/0022-3518.104.22.1680. PMID 8979382. S2CID 7294467. - Warman DM, Lysaker PH, Luedtke B, Martin JM (2010) Self-esteem and delusionproneness.JNervMentDis.198:455–457. - Smith B, Fowler DG, Freeman D, Bebbington P, Bashforth H, Garety P, Dunn G,Kuipers E (2006) Emotion and psychosis: Links between depression, self-esteem,negative schematic beliefs and delusions and hallucinations.Schizophr Res.86:181–188 - Garety PA, Kuipers E, Fowler D, Freeman D, Bebbington PE (2001) A cognitivemodel of the positive symptoms of psychosis.Psychol Med.31:189–195. - Bentall RP, Kinderman P, Kaney S (1994) The self, attributional processes andabnormal beliefs: Towards a model of persecutory delusions.Behav Res Ther.32:331–341 - Karatzias T, Gumley A, Power K, O'Grady M (2007) Illness appraisals and self-esteemas correlates of anxiety and affective comorbid disorders in schizophrenia.ComprPsychiatry.48:371–375. - Bradshaw W, Brekke JS (1999) Subjective experience in schizophrenia: Factorsinfluencing self-esteem, satisfaction with life, and subjective distress.Am J Ortho-psychiatry.69:254–260. - Blairy S, Linotte S, Souery D, Papadimitriou GN, Dikeos D, Lerer B, Kaneva R,Milanova V, Serretti A, Macciardi F, Mendlewicz J (2004) Social adjust-ment and self-esteem of bipolar patients: A multicentric study.J Affect Disord.79:97–103 - Bowins B, Shugar G (1998) Delusions and self-esteem.Can J Psychiatry.43:154–158. - "ORCID". ORCID. 2021-08-31. Retrieved 2021-09-07. - Ross, Martín. El Mapa de la Autoestima. 2013. Dunken. ISBN 978-9870267737 - Leiva, Darcy (11 May 2015). "Como influye el genero en la Autoestima de los Adolescentes". Monografias.com. Retrieved 11 December 2017. - Bonet Gallardo, L. (2015). La retroalimentació entre l'autoestima i l'activitat digital al col·lectiu adolescent [Feedback between self-esteem and digital activity in the adolescent group] (bachelor thesis) (in Spanish). Advised by Huertas Bailén, Amparo. Universidad Autónoma de Barcelona. - "Contingent Synonyms, Contingent Antonyms". thesaurus.com. Retrieved 22 October 2017. - "Unconditional". The Free Dictionary. Retrieved 11 December 2017. - Koivula, Nathalie; Hassmén, Peter; Fallby, Johan (2002). "Self-esteem and perfectionism in elite athletes: effects on competitive anxiety and self-confidence". Personality and Individual Differences. 32 (5): 865–75. doi:10.1016/S0191-8869(01)00092-7. - Victoria Blom. ""Striving for Self-esteem" (Department of Psychology, Stockholm University, 2011)" (PDF). p. 17. - "The Boom and Bust Ego". Psychology Today. Retrieved 11 December 2017. - Paul Tillich, Terry Lectures: Courage to Be (Yale University, 2000) 164. - Christopher J. Mruk, Self-esteem Research, Theory, and Practice (Springer, 1995), 88. - Terry D. Cooper, Paul Tillich and Psychology: Historic and Contemporary Explorations in Theology, Psychotherapy, and Ethics (Mercer University,2006). 7. - "Self-esteem/OKness: a personal story" (PDF). Ahpcc.org.uk. Retrieved 11 December 2017. - Terry D. 
Cooper, Paul Tillich and Psychology: Historic and Contemporary Explorations in Theology, Psychotherapy, and Ethics (Mercer University, 2006), 5.
- "OK". The Free Dictionary. Retrieved 11 December 2017.
- Thomas A. Harris, I'm OK – You're OK (Harper and Row, 1969), 235.
- Michael H. Kernis. "Toward a Conceptualization of Optimal Self-Esteem" (PDF). Academic.udayton.edu. Retrieved 11 December 2017.
- Burger, C.; Bachmann, L. (2021). "Perpetration and victimization in offline and cyber contexts: A variable- and person-oriented examination of associations and differences regarding domain-specific self-esteem and school adjustment". Int J Environ Res Public Health. 18 (19): 10429. doi:10.3390/ijerph181910429. PMC 8508291. PMID 34639731. Text was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
- Barbot B.; Safont-Mottay C.; Oubrayrie-Roussel N. (2019). "Multidimensional scale of self-esteem (EMES-16): Psychometric evaluation of a domain-specific measure of self-esteem for French-speaking adolescents". International Journal of Behavioral Development. 43 (5): 436–446. doi:10.1177/0165025418824996. S2CID 151135576.
- Orth U.; Dapp L.C.; Erol R.Y.; Krauss S.; Luciano E.C. (2021). "Development of domain-specific self-evaluations: A meta-analysis of longitudinal studies". Journal of Personality and Social Psychology. 120 (1): 145–172. doi:10.1037/pspp0000378. PMID 33252972. S2CID 227244920.
- Andreou E. (2001). "Bully/victim problems and their association with coping behaviour in conflictual peer interactions among school-age children". Educational Psychology. 21 (1): 59–66. doi:10.1080/01443410125042. S2CID 143734781.
- Nathaniel Branden. Cómo mejorar su autoestima [How to improve your self-esteem]. 1987. Translated version: 1990. First electronic edition: January 2010. Ediciones Paidós Ibérica. ISBN 978-8449323478.
- Christian Miranda. La autoestima profesional: una competencia mediadora para la innovación en las prácticas pedagógicas [Professional self-esteem: a mediating competence for innovation in pedagogical practices] Archived 2011-07-22 at the Wayback Machine. Revista Iberoamericana sobre Calidad, Eficacia y Cambio en Educación. 2005. Volume 3, number 1. PDF format.
- Sigmund Freud, On Metapsychology (PFL 11), pp. 254–56.
- The Yogyakarta Principles, Preamble and Principles 11.
- World Health Organization (2014). "Preventing suicide: A global imperative". World Health Organization – Mental Health: 92. Archived from the original on September 5, 2014.
- "Preventing Suicide, A resource for teachers and other school staff, WHO, Geneva, 2000" (PDF). who.int. Retrieved 22 October 2017.
- Schacter, Daniel L.; Gilbert, Daniel T.; Wegner, Daniel M. (2009). "Self Esteem". Psychology (Second ed.). New York: Worth. ISBN 978-0716752158.
- Baumeister, Roy F.; Campbell, Jennifer D.; Krueger, Joachim I.; Vohs, Kathleen D. (January 2005). "Exploding the Self-Esteem Myth" (PDF). Scientific American. 292 (1): 84–91. Bibcode:2005SciAm.292a..84B. doi:10.1038/scientificamerican0105-84. PMID 15724341. Archived from the original (PDF) on 2 April 2015. Retrieved 20 February 2011.
- Baumeister, Roy (23 December 2009). "Self-Esteem". Education.com. Retrieved 8 January 2015.
- Reasoner, Robert W. (n.d.). "Extending self-esteem theory and research." Retrieved February 20, 2011.
- Ulrich Schimmack and Ed Diener (2003). "Predictive validity of explicit and implicit self-esteem for subjective well-being" (PDF). Journal of Research in Personality. 37 (2): 100–06. doi:10.1016/S0092-6566(02)00532-9.
- Eaton, J.; Struthers, C. W.; Santelli, A. (2006). "Dispositional and state forgiveness: The role of self-esteem, need for structure, and narcissism". Personality and Individual Differences. 41 (2): 371–80. doi:10.1016/j.paid.2006.02.005. ISSN 0191-8869.
- Chavez, Robert S.; Heatherton, Todd F. (1 May 2014). "Multimodal frontostriatal connectivity underlies individual differences in self-esteem". Social Cognitive and Affective Neuroscience. Oxford University Press. 10 (3): 364–370. doi:10.1093/scan/nsu063. PMC 4350482. PMID 24795440.
- Ellis, A. (2005). The Myth of Self-esteem. Amherst, NY: Prometheus Books. ISBN 978-1591023548.
- Ellis, Albert; Dryden, Windy (2007). The Practice of Rational Emotive Behavior Therapy: Second Edition. Springer Publishing Company. ISBN 978-0826122179. Retrieved 11 December 2017 – via Google Books.
- Baumeister, Roy F.; Tierney, John (2011). Willpower: Rediscovering the Greatest Human Strength. p. 192.
- Nathaniel Branden, The Six Pillars of Self-esteem (Bantam, 1995), 52. Also see Nathaniel Branden, How to Raise Your Self-Esteem: The Proven Action-Oriented Approach to Greater Self-Respect and Self-Confidence (Random House, 1988), 9. Spanish edition: Cómo mejorar su autoestima (Paidos, 2009).
- Michaels, M.; Barr, A.; Roosa, M.; Knight, G. (2007). "Self-Esteem: Assessing Measurement Equivalence in a Multiethnic Sample of Youth". Journal of Early Adolescence. 27 (3): 269–95. doi:10.1177/0272431607302009. S2CID 146806309.
- Erikson, Erik H. (1973). Childhood and Society. Harmondsworth: Penguin. p. 260. ISBN 978-0140207545.
- Crompton, Simon (2007). All about Me. London: Collins. p. 16. ISBN 978-0007247950.
- Fenichel, Otto (1946). The Psychoanalytic Theory of Neurosis. London. pp. 407–10.
- Symington, Neville (2003). Narcissism: A New Theory. London: Karnac. p. 114. ISBN 978-1855750470.
- Baumeister, Roy F. (2001). "Violent Pride: Do people turn violent because of self-hate or self-love?". Scientific American. 284 (4): 96–101, April 2001.
- Branden, N. (1969). The Psychology of Self-Esteem. New York: Bantam.
- Branden, N. (2001). The Psychology of Self-Esteem: A Revolutionary Approach to Self-Understanding that Launched a New Era in Modern Psychology. San Francisco: Jossey-Bass. ISBN 0787945269.
- Burke, C. (2008). "Self-esteem: Why?; Why not?". New York: 2008.
- Crocker, J.; Park, L. E. (2004). "The costly pursuit of self-esteem". Psychological Bulletin. 130 (3): 392–414. doi:10.1037/0033-2909.130.3.392. PMID 15122925.
- Franklin, Richard L. (1994). Overcoming the Myth of Self-Worth: Reason and Fallacy in What You Say to Yourself. ISBN 0963938703.
- Hill, S. E. & Buss, D. M. (2006). "The Evolution of Self-Esteem". In Michael Kernis (Ed.), Self-Esteem: Issues and Answers: A Sourcebook of Current Perspectives. New York: Psychology Press. pp. 328–33.
- Lerner, Barbara (1985). "Self-Esteem and Excellence: The Choice and the Paradox". American Educator, Winter 1985.
- Mecca, Andrew M.; Smelser, Neil J.; Vasconcellos, John, eds. (1989). The Social Importance of Self-Esteem. University of California Press.
- Mruk, C. (2006). Self-Esteem Research, Theory, and Practice: Toward a Positive Psychology of Self-Esteem (3rd ed.). New York: Springer.
- Rhodewalt, F.; Tragakis, M. W. (2003). "Self-esteem and self-regulation: Toward optimal studies of self-esteem". Psychological Inquiry. 14 (1): 66–70. doi:10.1207/s15327965pli1401_02.
- Ruggiero, Vincent R. (2000). "Bad Attitude: Confronting the Views That Hinder Students' Learning". American Educator.
- Sedikides, C., & Gregg, A. P. (2003). "Portraits of the self." In M. A. Hogg & J. Cooper (Eds.), Sage Handbook of Social Psychology (pp. 110–38). London: Sage Publications.
- Twenge, Jean M. (2007). Generation Me: Why Today's Young Americans Are More Confident, Assertive, Entitled – and More Miserable Than Ever Before. Free Press. ISBN 978-0743276986.
So far we know that AC voltage alternates in polarity and AC current alternates in direction. We also know that AC can alternate in a variety of different ways, and by tracing the alternation over time we can plot it as a “waveform.” We can measure the rate of alternation by measuring the time it takes for a wave to complete one cycle before it repeats itself (the “period”), and express the rate as the number of cycles per unit time, or “frequency” (the reciprocal of the period). In music, frequency is the same as pitch, which is the essential property distinguishing one note from another.

However, we encounter a measurement problem if we try to express how large or small an AC quantity is. With DC, where quantities of voltage and current are generally stable, we have little trouble expressing how much voltage or current we have in any part of a circuit. But how do you grant a single measurement of magnitude to something that is constantly changing?

One way to express the intensity, or magnitude (also called the amplitude), of an AC quantity is to measure its peak height on a waveform graph. This is known as the peak or crest value of an AC waveform: Figure below Peak voltage of a waveform.

Another way is to measure the total height between opposite peaks. This is known as the peak-to-peak (P-P) value of an AC waveform: Figure below Peak-to-peak voltage of a waveform.

Unfortunately, either one of these expressions of waveform amplitude can be misleading when comparing two different types of waves. For example, a square wave peaking at 10 volts is obviously a greater amount of voltage for a greater amount of time than a triangle wave peaking at 10 volts. The effects of these two AC voltages powering a load would be quite different: Figure below A square wave produces a greater heating effect than the same peak voltage triangle wave.

One way of expressing the amplitude of different waveshapes in a more equivalent fashion is to mathematically average the values of all the points on a waveform's graph to a single, aggregate number. This amplitude measure is known simply as the average value of the waveform. If we average all the points on the waveform algebraically (that is, considering their sign, either positive or negative), the average value for most waveforms is technically zero, because all the positive points cancel out all the negative points over a full cycle: Figure below The average value of a sinewave is zero.

This, of course, will be true for any waveform having equal-area portions above and below the “zero” line of a plot. However, as a practical measure of a waveform's aggregate value, “average” is usually defined as the mathematical mean of all the points' absolute values over a cycle. In other words, we calculate the practical average value of the waveform by considering all points on the wave as positive quantities, as if the waveform looked like this: Figure below Waveform seen by AC “average responding” meter.

Polarity-insensitive mechanical meter movements (meters designed to respond equally to the positive and negative half-cycles of an alternating voltage or current) register in proportion to the waveform's (practical) average value, because the inertia of the pointer against the tension of the spring naturally averages the force produced by the varying voltage/current values over time. Conversely, polarity-sensitive meter movements vibrate uselessly if exposed to AC voltage or current, their needles oscillating rapidly about the zero mark and indicating the true (algebraic) average value of zero for a symmetrical waveform.
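The two notions of “average” are easy to check numerically. Below is a minimal sketch in Python (the sample count and the 10-volt peak are illustrative assumptions, not values taken from the text) that computes both averages over one cycle of a sampled sine wave:

```python
import math

def averages(samples):
    """Return (algebraic average, practical average) of a list of samples."""
    algebraic = sum(samples) / len(samples)                  # signed: + and - halves cancel
    practical = sum(abs(s) for s in samples) / len(samples)  # rectified (absolute-value) mean
    return algebraic, practical

N = 10000      # samples per cycle (assumed)
peak = 10.0    # peak voltage in volts (assumed)
sine = [peak * math.sin(2 * math.pi * n / N) for n in range(N)]

algebraic, practical = averages(sine)
print(f"algebraic average: {algebraic:+.4f} V")  # ~0 V for any symmetrical waveform
print(f"practical average: {practical:.4f} V")   # ~6.366 V, i.e. (2/pi) * peak, about 0.637 * peak
```

The algebraic result comes out essentially zero, while the practical (rectified) result lands at about 0.637 times the peak, the sine-wave figure quoted later in this chapter.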
When the “average” value of a waveform is referenced in this text, it will be assumed that the “practical” definition of average is intended unless otherwise specified.

Another method of deriving an aggregate value for waveform amplitude is based on the waveform's ability to do useful work when applied to a load resistance. Unfortunately, an AC measurement based on work performed by a waveform is not the same as that waveform's “average” value, because the power dissipated by a given load (work performed per unit time) is not directly proportional to the magnitude of either the voltage or current impressed upon it. Rather, power is proportional to the square of the voltage or current applied to a resistance (P = E²/R, and P = I²R). Although the mathematics of such an amplitude measurement might not be straightforward, the utility of it is.

Consider a bandsaw and a jigsaw, two pieces of modern woodworking equipment. Both use a thin, toothed, motor-powered metal blade to cut wood. But while the bandsaw uses a continuous motion of the blade, the jigsaw uses a back-and-forth motion. The comparison of alternating current (AC) to direct current (DC) may be likened to the comparison of these two saw types: Figure below Bandsaw-jigsaw analogy of DC vs AC.

The problem of trying to describe the changing quantities of AC voltage or current in a single, aggregate measurement is also present in this saw analogy: how might we express the speed of a jigsaw blade? A bandsaw blade moves with a constant speed, similar to the way DC voltage pushes or DC current moves with a constant magnitude. A jigsaw blade, on the other hand, moves back and forth, its blade speed constantly changing. What is more, the back-and-forth motion of any two jigsaws may not be of the same type, depending on the mechanical design of the saws: one jigsaw might move its blade with a sine-wave motion, another with a triangle-wave motion. To rate a jigsaw based on its peak blade speed would be quite misleading when comparing one jigsaw to another (or a jigsaw to a bandsaw!).

Despite the fact that these different saws move their blades in different manners, they are equal in one respect: they all cut wood, and a quantitative comparison of this common function can serve as a common basis for rating blade speed. Picture a jigsaw and a bandsaw side-by-side, equipped with identical blades (same tooth pitch, angle, etc.), equally capable of cutting the same thickness of the same type of wood at the same rate. We might say that the two saws are equivalent or equal in their cutting capacity. Might this comparison be used to assign a “bandsaw equivalent” blade speed to the jigsaw's back-and-forth blade motion, to relate the wood-cutting effectiveness of one to the other? This is the general idea used to assign a “DC equivalent” measurement to any AC voltage or current: whatever magnitude of DC voltage or current would produce the same amount of heat energy dissipation through an equal resistance: Figure below An RMS voltage produces the same heating effect as the same DC voltage.

In the two circuits above, we have the same amount of load resistance (2 Ω) dissipating the same amount of power in the form of heat (50 watts), one powered by AC and the other by DC. Because the AC voltage source pictured above is equivalent (in terms of power delivered to a load) to a 10 volt DC battery, we would call this a “10 volt” AC source. More specifically, we would denote its voltage value as being 10 volts RMS.
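As a quick arithmetic check of the two circuits just described (the 10 volt, 2 Ω, and 50 watt figures all come from the text): the DC side dissipates P = E²/R = (10 V)²/2 Ω = 100/2 = 50 W. For the AC side to dissipate the same 50 W in the same 2 Ω resistance, its RMS voltage must satisfy E_RMS²/R = 50 W, giving E_RMS = √(50 W × 2 Ω) = √100 = 10 V, which is why the source earns the label “10 volts RMS.”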
The qualifier “RMS” stands for Root Mean Square, the algorithm used to obtain the DC equivalent value from points on a graph (essentially, the procedure consists of squaring all the positive and negative points on a waveform graph, averaging those squared values, then taking the square root of that average to obtain the final answer). Sometimes the alternative terms equivalent or DC equivalent are used instead of “RMS,” but the quantity and principle are both the same.

RMS amplitude measurement is the best way to relate AC quantities to DC quantities, or to other AC quantities of differing waveform shapes, when dealing with measurements of electric power. For other considerations, peak or peak-to-peak measurements may be the best to employ. For instance, when determining the proper size of wire (ampacity) to conduct electric power from a source to a load, RMS current measurement is the best to use, because the principal concern with current is overheating of the wire, which is a function of power dissipation caused by current through the resistance of the wire. However, when rating insulators for service in high-voltage AC applications, peak voltage measurements are the most appropriate, because the principal concern here is insulator “flashover” caused by brief spikes of voltage, irrespective of time.

Peak and peak-to-peak measurements are best performed with an oscilloscope, which can capture the crests of the waveform with a high degree of accuracy due to the fast action of the cathode-ray tube in response to changes in voltage. For RMS measurements, analog meter movements (D'Arsonval, Weston, iron vane, electrodynamometer) will work so long as they have been calibrated in RMS figures. Because the mechanical inertia and damping effects of an electromechanical meter movement make the deflection of the needle naturally proportional to the average value of the AC, not the true RMS value, analog meters must be specifically calibrated (or mis-calibrated, depending on how you look at it) to indicate voltage or current in RMS units. The accuracy of this calibration depends on an assumed waveshape, usually a sine wave.

Electronic meters specifically designed for RMS measurement are best for the task. Some instrument manufacturers have designed ingenious methods for determining the RMS value of any waveform. One such manufacturer produces “True-RMS” meters with a tiny resistive heating element powered by a voltage proportional to that being measured. The heating effect of that resistance element is measured thermally to give a true RMS value with no mathematical calculations whatsoever, just the laws of physics in action in fulfillment of the definition of RMS. The accuracy of this type of RMS measurement is independent of waveshape.

For “pure” waveforms, simple conversion coefficients exist for equating Peak, Peak-to-Peak, Average (practical, not algebraic), and RMS measurements to one another: Figure below Conversion factors for common waveforms.

In addition to RMS, average, peak (crest), and peak-to-peak measures of an AC waveform, there are ratios expressing the proportionality between some of these fundamental measurements. The crest factor of an AC waveform, for instance, is the ratio of its peak (crest) value divided by its RMS value. The form factor of an AC waveform is the ratio of its RMS value divided by its average value. Square-shaped waveforms always have crest and form factors equal to 1, since the peak is the same as the RMS and average values.
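The square, average, square-root procedure just described translates directly into code. Here is a short Python sketch (the sample count and unit peak are assumptions made for illustration) that applies it to one cycle each of a pure sine, square, and triangle wave, recovering the conversion coefficients discussed next:

```python
import math

def rms(samples):
    """Root Mean Square: square every point, average the squares, take the square root."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

N = 100_000                    # samples per cycle (assumed)
t = [n / N for n in range(N)]  # one cycle, normalized time 0..1, unit peak

waves = {
    "sine":     [math.sin(2 * math.pi * x) for x in t],
    "square":   [1.0 if x < 0.5 else -1.0 for x in t],
    "triangle": [1.0 - 4.0 * abs(x - 0.5) for x in t],  # -1 at x=0, +1 at x=0.5, -1 at x=1
}

for name, wave in waves.items():
    print(f"{name:8s} RMS = {rms(wave):.4f} x peak")
# sine     RMS = 0.7071 x peak   (1 / sqrt 2)
# square   RMS = 1.0000 x peak
# triangle RMS = 0.5774 x peak   (1 / sqrt 3)
```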
Sinusoidal waveforms have an RMS value of 0.707 times their peak value (the reciprocal of the square root of 2) and a form factor of 1.11 (0.707/0.636). Triangle- and sawtooth-shaped waveforms have RMS values of 0.577 times their peak value (the reciprocal of the square root of 3) and form factors of 1.15 (0.577/0.5).

Bear in mind that the conversion constants shown here for the peak, RMS, and average amplitudes of sine waves, square waves, and triangle waves hold true only for pure forms of these waveshapes. The RMS and average values of distorted waveshapes are not related by the same ratios: Figure below Arbitrary waveforms have no simple conversions.

This is a very important concept to understand when using an analog meter movement to measure AC voltage or current. An analog movement calibrated to indicate sine-wave RMS amplitude will only be accurate when measuring pure sine waves. If the waveform of the voltage or current being measured is anything but a pure sine wave, the indication given by the meter will not be the true RMS value of the waveform, because the degree of needle deflection in an analog meter movement is proportional to the average value of the waveform, not the RMS. RMS meter calibration is obtained by “skewing” the span of the meter so that it displays a small multiple of the average value, which will be equal to the RMS value for one particular waveshape, and that waveshape only. Since the sine-wave shape is most common in electrical measurements, it is the waveshape assumed for analog meter calibration, and the small multiple used in the calibration of the meter is 1.1107 (the form factor 0.707/0.636, the ratio of RMS to average for a sinusoidal waveform). Any waveshape other than a pure sine wave will have a different ratio of RMS and average values, and thus a meter calibrated for sine-wave voltage or current will not indicate true RMS when reading a non-sinusoidal wave; a numeric sketch of this error appears after the summary below. Bear in mind that this limitation applies only to simple analog AC meters not employing “True-RMS” technology.

- The amplitude of an AC waveform is its height as depicted on a graph over time. An amplitude measurement can take the form of peak, peak-to-peak, average, or RMS quantity.
- Peak amplitude is the height of an AC waveform as measured from the zero mark to the highest positive or lowest negative point on a graph. Also known as the crest amplitude of a wave.
- Peak-to-peak amplitude is the total height of an AC waveform as measured from maximum positive to maximum negative peaks on a graph. Often abbreviated as “P-P”.
- Average amplitude is the mathematical “mean” of all a waveform's points over the period of one cycle. Technically, the average amplitude of any waveform with equal-area portions above and below the “zero” line on a graph is zero. However, as a practical measure of amplitude, a waveform's average value is often calculated as the mathematical mean of all the points' absolute values (taking all the negative values and considering them as positive). For a sine wave, the average value so calculated is approximately 0.637 of its peak value.
- “RMS” stands for Root Mean Square, and is a way of expressing an AC quantity of voltage or current in terms functionally equivalent to DC. For example, 10 volts AC RMS is the amount of voltage that would produce the same amount of heat dissipation across a resistor of given value as a 10 volt DC power supply. Also known as the “equivalent” or “DC equivalent” value of an AC voltage or current. For a sine wave, the RMS value is approximately 0.707 of its peak value.
- The crest factor of an AC waveform is the ratio of its peak (crest) value to its RMS value.
- The form factor of an AC waveform is the ratio of its RMS value to its average value.
- Analog, electromechanical meter movements respond proportionally to the average value of an AC voltage or current. When RMS indication is desired, the meter's calibration must be “skewed” accordingly. This means that the accuracy of an electromechanical meter's RMS indication depends on the purity of the waveform: whether it is exactly the same waveshape as the waveform used in calibration.
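As promised above, here is a numeric sketch of that calibration error (Python again; the unit-peak test waveforms and sample count are assumptions). It models an average-responding movement that has been “skewed” by the sine form factor of 1.1107, and compares its reading against true RMS:

```python
import math

def true_rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def sine_calibrated_reading(samples):
    """Model of an average-responding meter calibrated for sine waves:
    it senses the rectified average, and its scale multiplies that by 1.1107
    (the sine form factor, RMS/average = 0.707/0.636)."""
    rectified_average = sum(abs(s) for s in samples) / len(samples)
    return 1.1107 * rectified_average

N = 100_000
t = [n / N for n in range(N)]
waves = {
    "sine":     [math.sin(2 * math.pi * x) for x in t],
    "square":   [1.0 if x < 0.5 else -1.0 for x in t],
    "triangle": [1.0 - 4.0 * abs(x - 0.5) for x in t],
}

for name, wave in waves.items():
    print(f"{name:8s} meter reads {sine_calibrated_reading(wave):.4f}, "
          f"true RMS {true_rms(wave):.4f}")
# sine:     the two agree (the meter was calibrated for this shape)
# square:   meter reads ~1.1107 against a true RMS of 1.0000 (about 11% high)
# triangle: meter reads ~0.5554 against a true RMS of 0.5774 (about 4% low)
```

This is exactly the failure mode described above: the meter is accurate on the one waveshape assumed during calibration, and off by the ratio of form factors on everything else.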
6th Grade – Expressions and Equations includes ten activities that help students develop mastery of the mathematical practices within the content area of expressions and equations. For educators for whom the Common Core Standards are important, each activity is referenced to the sixth grade expressions and equations standards. The description of each standard follows:

6.EE.A.1. Write and evaluate numerical expressions involving whole-number exponents.

6.EE.A.2. Write, read, and evaluate expressions in which letters stand for numbers.

6.EE.A.2.a. Write expressions that record operations with numbers and with letters standing for numbers. For example, express the calculation "Subtract y from 5" as 5 - y.

6.EE.A.2.b. Identify parts of an expression using mathematical terms (sum, term, product, factor, quotient, coefficient); view one or more parts of an expression as a single entity. For example, describe the expression 2 (8 + 7) as a product of two factors; view (8 + 7) as both a single entity and a sum of two terms.

6.EE.A.2.c. Evaluate expressions at specific values of their variables. Include expressions that arise from formulas used in real-world problems. Perform arithmetic operations, including those involving whole-number exponents, in the conventional order when there are no parentheses to specify a particular order (Order of Operations). For example, use the formulas V = s³ and A = 6s² to find the volume and surface area of a cube with sides of length s = 1/2.

6.EE.A.3. Apply the properties of operations to generate equivalent expressions. For example, apply the distributive property to the expression 3 (2 + x) to produce the equivalent expression 6 + 3x; apply the distributive property to the expression 24x + 18y to produce the equivalent expression 6 (4x + 3y); apply properties of operations to y + y + y to produce the equivalent expression 3y.

6.EE.A.4. Identify when two expressions are equivalent (i.e., when the two expressions name the same number regardless of which value is substituted into them). For example, the expressions y + y + y and 3y are equivalent because they name the same number regardless of which number y stands for.

6.EE.B.5. Understand solving an equation or inequality as a process of answering a question: which values from a specified set, if any, make the equation or inequality true? Use substitution to determine whether a given number in a specified set makes an equation or inequality true.

6.EE.B.6. Use variables to represent numbers and write expressions when solving a real-world or mathematical problem; understand that a variable can represent an unknown number, or, depending on the purpose at hand, any number in a specified set.

6.EE.B.7. Solve real-world and mathematical problems by writing and solving equations of the form x + p = q and px = q for cases in which p, q and x are all nonnegative rational numbers.

6.EE.B.8. Write an inequality of the form x > c or x < c to represent a constraint or condition in a real-world or mathematical problem. Recognize that inequalities of the form x > c or x < c have infinitely many solutions; represent solutions of such inequalities on number line diagrams.

6.EE.C.9. Use variables to represent two quantities in a real-world problem that change in relationship to one another; write an equation to express one quantity, thought of as the dependent variable, in terms of the other quantity, thought of as the independent variable.
Analyze the relationship between the dependent and independent variables using graphs and tables, and relate these to the equation. For example, in a problem involving motion at constant speed, list and graph ordered pairs of distances and times, and write the equation d = 65t to represent the relationship between distance and time.

6.EE.A.1 is addressed in Activity 1
6.EE.A.2 is addressed in Activities 2, 6, 7
6.EE.A.2.a is addressed in Activity 2
6.EE.A.2.b is addressed in Activity 2
6.EE.A.2.c is addressed in Activity 3
6.EE.A.3 is addressed in Activity 4
6.EE.A.4 is addressed in Activity 4
6.EE.B.5 is addressed in Activity 5
6.EE.B.6 is addressed in Activities 5, 6, 7, 8
6.EE.B.7 is addressed in Activities 6, 7
6.EE.B.8 is addressed in Activity 8
6.EE.C.9 is addressed in Activity 9

The Make It Real Learning Middle Grades Math Series is a collection of 20 workbooks that address the mathematical content standards of Grades 5, 6, 7, and 8. Each workbook contains ten standards-aligned activities that develop mathematical thinking. Interesting real-world contexts are integrated into many of the activities.

Fifth Grade Mathematics Workbooks
• 5th Grade Operations and Algebraic Thinking
• 5th Grade Numbers and Operations in Base 10
• 5th Grade Numbers and Operations – Fractions
• 5th Grade Measurement and Data
• 5th Grade Geometry

Sixth Grade Mathematics Workbooks
• 6th Grade Ratios and Proportional Relationships
• 6th Grade The Number System
• 6th Grade Expressions and Equations
• 6th Grade Geometry
• 6th Grade Statistics and Probability

Seventh Grade Mathematics Workbooks
• 7th Grade Ratios and Proportional Relationships
• 7th Grade The Number System
• 7th Grade Expressions and Equations
• 7th Grade Geometry
• 7th Grade Statistics and Probability

Eighth Grade Mathematics Workbooks
• 8th Grade The Number System
• 8th Grade Expressions and Equations
• 8th Grade Functions
• 8th Grade Geometry
• 8th Grade Statistics and Probability
Seven samples from the solar system's birth

At this year's Lunar and Planetary Science Conference (LPSC), scientists reported that, after eight painstaking years of work, they have retrieved seven particles of interstellar dust from NASA's Stardust spacecraft.

Stardust was launched in 1999 to chase the comet Wild 2, and it captured particles from the comet's tail. The samples showed that cometary materials have undergone a great deal of alteration since the birth of the Solar System, such as heating and melting when comets pass near the Sun. In addition, the spacecraft also pointed its collectors toward dust blowing in from interstellar space. Stardust spent 200 days catching particles of dust during its mission.

In 2006, Stardust returned to Earth and ejected a reentry capsule that carried its samples into the hands of eager scientists on our planet. When they opened up the capsule, it quickly became clear that the samples were going to be very difficult to retrieve. The particles were embedded in a substance called 'aerogel,' which was used to catch them, but they were so tiny that it was nearly impossible to spot them. To solve the problem, scientists developed the Stardust@home project, in which over 30,000 members of the public helped hunt the particles down. The study authors report, "More than 30,000 volunteers carried out track identification in aerogel by searching stacks of digital optical images of the aerogel collectors, using an online virtual microscope." Ultimately, after one hundred million searches, only seven likely samples of interstellar dust were found. There were many more tracks identified, but according to the study, "Most of the tracks had trajectories that were consistent with an origin as ejecta from impacts on the solar panels."

Now, the trick is getting the seven potential samples of interstellar dust out of the aerogel and into instruments that can analyze them. Stardust was the first NASA mission to return samples from a comet. The total weight of the seven specks of dust is only one trillionth of a gram, but the materials could reveal important details about how our solar system was born. Studying the origin and evolution of the Solar System is important for astrobiologists who are trying to determine the conditions that lead to habitability on Earth. This information will help researchers hunt for similar, inhabited worlds around distant stars.

More information: Westphal et al. (2014) Final reports of the Stardust ISPE: Seven probable interstellar dust particles. 45th Lunar and Planetary Science Conference. Provided by Astrobio.net
Washington: There may be far fewer galaxies in the distant universe than previously thought, according to a new study. Over the years, the Hubble Space Telescope has allowed astronomers to look deep into the universe, and the long view stirred theories of untold thousands of distant, faint galaxies. The new research, led by Michigan State University, offers a theory that reduces the estimated number of the most distant galaxies by a factor of 10 to 100.

“Our work suggests that there are far fewer faint galaxies than we once previously thought,” said Brian O’Shea, MSU associate professor of physics and astronomy. “Earlier estimates placed the number of faint galaxies in the early universe to be hundreds or thousands of times larger than the few bright galaxies that we can actually see with the Hubble Space Telescope. We now think that number could be closer to ten times larger,” said O’Shea.

O’Shea and his team used the National Science Foundation’s Blue Waters supercomputer to run simulations examining the formation of galaxies in the early universe. The team simulated thousands of galaxies at a time, including the galaxies’ interactions through gravity and radiation. The simulated galaxies were consistent with observed distant galaxies at the bright end of the distribution – in other words, those that have been discovered and confirmed. The simulations did not, however, reveal an exponentially growing number of faint galaxies, as had been previously predicted, researchers said. The number of galaxies at the faint end of the brightness distribution was flat rather than rising sharply, O’Shea added. The study appears in the Astrophysical Journal Letters.
The Asteroid Belt is the region of interplanetary space between Mars and Jupiter where most asteroids are found. It contains irregularly shaped chunks of debris called asteroids, which are made of rock and metal, mostly nickel and iron. Contrary to popular imagery, the asteroid belt is mostly empty.

Facts about the Asteroid Belt
* Area: The main asteroid belt extends from about 322 to 494 million km (2.15 to 3.3 astronomical units) from the Sun and may contain over a million objects bigger than 1 km across.
* Diameter: The largest objects are Ceres (1,003 km), Pallas (608 km) and Vesta (538 km).
* Total Mass: The total mass of all the asteroids is less than that of the Moon. There are 26 known asteroids larger than 200 km across.
* Location: The Asteroid Belt is a region between the inner planets and the outer planets where thousands of asteroids are found orbiting the Sun.

How big are the asteroids? Ceres is the largest asteroid in the asteroid belt and the only dwarf planet in the belt. Pallas is the second largest and was the second asteroid to be discovered. More than half the mass of the main belt is contained in the four largest objects: Ceres, Vesta, Pallas and Hygiea. The current main belt consists primarily of three categories of asteroids: C-type (carbonaceous asteroids), S-type (silicate asteroids) and M-type (metallic asteroids).

Scientists have detected water ice and carbon-based organic compounds on the surface of asteroid 24 Themis. The discovery of water ice on 24 Themis was announced in April 2010, after two teams of researchers independently verified that the asteroid is coated in a layer of frost. This discovery has changed scientists' perspectives on asteroids. It may also be a boon to NASA's new space exploration program, which aims to send astronauts to visit a near-Earth asteroid in the future.

Scientists believe the asteroids are the pieces of a planet that never formed. One possible theory is that the ongoing gravitational tug-of-war between Jupiter and Mars has prevented the pieces from bonding together; hence, this planet was never created.

The first flybys of asteroids were performed in 1991 and 1993 by NASA's Galileo spacecraft and in 1996 by the Near Earth Asteroid Rendezvous (NEAR) spacecraft. In the 1980s, the Soviet Union was planning to send a probe to Vesta, but no information is available on its status. NASA's Dawn mission is on a 3-billion-km (1.7-billion-mile) journey to the asteroid belt to orbit the asteroid Vesta and the dwarf planet Ceres. Scientists hope to study the conditions of the solar system's earliest days.

More Facts on the Asteroid Belt – Did you know? Other regions of small solar system bodies include the Centaurs, the Kuiper belt and scattered disc, and the Oort cloud. Beyond the orbit of Neptune lies an even larger and more populous region of minor bodies known as the Kuiper belt.

When was the Asteroid Belt Discovered? The process of discovering the asteroid belt began in the late 1700s. The first asteroid, discovered by Italian astronomer Giuseppe Piazzi in 1801, was named Ceres. Ceres is now regarded as a dwarf planet; it was given dwarf planet status in 2006, along with Pluto, Eris, Haumea and Makemake.
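To put the belt's quoted distances in perspective, here is a small Python sketch. The AU-to-km constant is the standard IAU value; the period formula is Kepler's third law for orbits around the Sun, with the semi-major axis a in AU giving the period in years as a^(3/2).

    # Convert the belt's AU boundaries to kilometers and estimate orbital periods.
    AU_KM = 149_597_870.7          # kilometers per astronomical unit (IAU definition)

    for a in (2.15, 3.3):          # inner and outer edges of the main belt, in AU
        period_years = a ** 1.5    # Kepler's third law: T[yr] = a[AU]^(3/2)
        print(f"a = {a} AU = {a * AU_KM / 1e6:,.0f} million km, period = {period_years:.1f} years")

    # Roughly 322 to 494 million km; orbital periods of about 3.2 to 6.0 years.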
Claudius Ptolemaeus, better known as Ptolemy (circa 100–178 AD), made many important contributions to geography and spatial thought. A Greek by descent, he was a native of Alexandria in Egypt, and became known as the most wise and learned man of his time. Although little is known about Ptolemy's life, he wrote on many topics, including geography, astrology, musical theory, optics, physics, and astronomy. Ptolemy's work in astronomy and geography has made him famous for the ages, despite the fact that many of his theories were proven wrong or superseded in the centuries that followed. Ptolemy collected, analyzed, and presented geographical knowledge so that it could be preserved and perfected by future generations. These ideas include expressing locations by longitude and latitude, representing a spherical earth on a flat surface, and developing the first equal-area map projection. Ptolemy's accomplishments reflect his understanding of spatial relationships among places on earth and of the Earth's spatial relationships to other celestial bodies.

Ptolemy's most famous written works are the Almagest, a textbook of astronomy in which, among other things, he laid the foundations of modern trigonometry; the Tetrabiblos, a compendium of astrology and geography; and Geographica (his guide to "Geography"), which compiled and summarized much of the geographic information accumulated by the Greeks and Romans up to that time. Geographica, a work of seven volumes and the standard geography textbook until the 15th century, transmitted a vast amount of topographical detail to Renaissance scholars, profoundly influencing their conception of the world. Containing instructions for drawing maps of the entire "oikoumene" (inhabited world), Geographica was what we would now call an atlas. It included a world map, 26 regional maps, and 67 maps of smaller areas. The work illustrated three different methods for projecting the Earth's surface on a map (an equal-area projection, a stereographic projection, and a conic projection), presented calculated coordinate locations for some eight thousand places on the Earth, and developed the concepts of geographical latitude and longitude (Figure 1). Through his publications, Ptolemy dominated European cartography for nearly a century and inspired explorers like Christopher Columbus to test the spatial boundaries of the world.

Ptolemy suggested that people remap his data, and in Book I of Geographica he offers advice on how to draw maps. Later in Geographica, Ptolemy explains how to calculate the location of a place by using longitude and latitude, and how to represent the entire world on a flat map. Copies and reprints of Ptolemy's world maps made up the majority of navigational and factual maps for centuries to come, providing the base information for early European explorers. Ptolemy also standardized the orientation of maps, with North at the top and East on the right, a convention that remains to this day. His ability to take in and understand the incredible amount of information developed before his time, add to it, and synthesize it into a map or a book of maps changed how people understood, perceived, and represented the world.

Today, we still use some of Ptolemy's original theories and debate the same problems that he faced. Longitude and latitude are still used to determine precise location on Earth.
The equal-area projection, though updated substantially since Ptolemy's time, remains a fundamental tool for representing geographical distributions, and scholars continue to debate the best means of portraying a spherical Earth on a flat surface.

It is useful to speculate on how Ptolemy's work has influenced social understandings and the thinking and methodologies of the social sciences. His cartographic ideas provided a spatial framework for organizing and portraying information about the known world, allowing social thinkers to better understand the space in which different societies function. The first world atlas and the ideas of longitude and latitude facilitated a more accurate understanding of how societies work in space and compare to each other spatially. Furthermore, they encouraged speculation on possible relationships between social development and physical environments. The concept of the equal-area map projection may be the most important of Ptolemy's contributions to the social sciences, providing for the mapping and display of distributional information (e.g., population, resources, and cultural, geological, archaeological, and historical phenomena) that is not biased by the area distortions typical of some other projection methods. The display of data in an equal-area format allows for the visualization and analysis of spatial patterns for anomalies and trends, tasks that are central to many issues in spatial social science and that are critical to the fair representation of information to the general public. Claudius Ptolemy not only helped bring geography to the forefront of scientific thought; his contributions also alerted a broad range of disciplines to the importance of accuracy in locational measures and to the need for an equal-area perspective in evaluating spatial relationships among diverse phenomena and in making geographical comparisons.

Figure 1: An early map of the world constructed using map-making techniques developed by Ptolemy. Note the organization with crisscrossing lines of latitude and longitude.

Ptolemy, Claudius. Almagest. Translated by R. Catesby Taliaferro. In Great Books of the Western World, vol. 16 (1952).
Ptolemy, Claudius. Tetrabiblos. Translated by F. E. Robbins. Loeb Classical Library #435 (1980).
Ptolemy, Claudius. Geographica. Translated by Edward Luther Stevenson. New York Public Library (1932).
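To make the equal-area idea discussed above concrete, here is a minimal Python sketch of the sinusoidal projection, one of the simplest modern equal-area projections. It is not Ptolemy's own construction, just an illustration of the principle: each parallel of latitude is drawn at its true length, so regions of equal area on the sphere map to equal areas on the plane.

    # Sinusoidal (equal-area) projection: latitude/longitude to planar x, y.
    from math import cos, radians

    R = 6371.0  # Earth's mean radius in km (spherical approximation)

    def sinusoidal(lat_deg, lon_deg, lon0_deg=0.0):
        """Project a latitude/longitude pair (degrees) to map coordinates in km."""
        lat = radians(lat_deg)
        x = R * radians(lon_deg - lon0_deg) * cos(lat)  # parallels keep true length
        y = R * lat
        return x, y

    # Alexandria, Ptolemy's home city, at roughly 31.2 N, 29.9 E:
    print(sinusoidal(31.2, 29.9))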
Differentiating with Area and Perimeter - use your tile floor in the classroom or hallway to help students practice identifying the area and perimeter of irregular polygons! They loved it in my classroom and were completely engrossed the whole time. This 4 page lesson includes a page that details the formulas for finding the area of a square, rectangle, triangle, parallelogram and trapezoid. The second and third pages are practice problems. Answer Keys included! LOVE this site!!! Not only does it have the FREE printable math worksheets indexed by grade level (pre-k through calculus and statistics), they are also indexed by content, i.e. area of triangle, counting money, order of operations, addition, word problems, factors. Perimeter and Area of Triangles and Trapezoids Trashketball - Get your students moving in math class. Students practice solving perimeter and area for triangles and trapezoids and shoot baskets at the end of each round. Students will beg to play, and even principals have enjoyed a round or two. Click to check out all of my trashketball games! $ gr 5-7
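As a companion to the formula sheet described above, here is a minimal Python sketch of the five area formulas the lesson covers (square, rectangle, triangle, parallelogram, and trapezoid):

    # The five area formulas from the lesson, as plain functions.
    def area_square(side):
        return side ** 2

    def area_rectangle(length, width):
        return length * width

    def area_triangle(base, height):
        return 0.5 * base * height

    def area_parallelogram(base, height):
        return base * height

    def area_trapezoid(base1, base2, height):
        # Average of the two parallel sides, times the height.
        return 0.5 * (base1 + base2) * height

    print(area_trapezoid(3, 5, 2))  # 8.0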
RSA (named after its inventors, Ron Rivest, Adi Shamir, and Leonard Adleman) is a widely used public-key cryptography algorithm. It is used to secure sensitive information, such as financial transactions and communications, by encrypting the data in such a way that it can only be decrypted by someone with the correct private key.

In RSA, a user generates a pair of keys, a public key and a private key, which are mathematically related. The public key can be shared with anyone, while the private key must be kept secret. When someone wants to send a message to the user, they can use the user’s public key to encrypt the message. The user can then use their private key to decrypt the message. This allows for secure communication, as only the user with the correct private key can decrypt the message.

Here are a few examples of programs that use RSA:
- Web browsers: Many web browsers use RSA to secure communication over the internet. The Secure Sockets Layer (SSL) protocol, which is used to secure web traffic, uses RSA to establish an encrypted connection between the client (web browser) and the server.
- Email programs: Some email programs, such as Microsoft Outlook and Mozilla Thunderbird, use RSA to secure email messages. This allows users to send and receive messages that are encrypted and can only be read by the intended recipients.
- Virtual private network (VPN) software: RSA is commonly used to secure communication over VPNs, which allow users to connect to a private network over the internet. VPNs use RSA to establish an encrypted connection between the client (the user’s device) and the server, which helps to protect against snooping and other types of attacks.
- Secure file transfer protocols: RSA is often used in secure file transfer protocols, such as Secure File Transfer Protocol (SFTP) and Secure Shell (SSH), to establish an encrypted connection between the client and server. This allows users to transfer files securely over the internet.
- Digital signature systems: RSA is used in digital signature systems, such as the Pretty Good Privacy (PGP) system, to create and verify digital signatures. Digital signatures are used to authenticate the identity of the sender and to ensure that the message has not been tampered with during transit.

While RSA is considered to be a very secure algorithm, it is not completely foolproof and there are a few vulnerabilities that have been identified:
- Key generation: RSA relies on the generation of strong, random keys in order to provide secure encryption. If the keys are not properly generated or are weak, it can make the encryption vulnerable to attacks.
- Key length: The security of RSA encryption is directly related to the length of the keys. The longer the keys, the more secure the encryption. However, longer keys also result in slower encryption and decryption. As a result, there is a trade-off between security and performance when it comes to key length.
- Key management: RSA relies on the private key being kept secret in order to provide secure encryption. If the private key is compromised or stolen, it can allow an attacker to decrypt the encrypted data. This means that it is important to properly manage and protect the private key.
- Side-channel attacks: RSA is vulnerable to side-channel attacks, which are attacks that exploit information that is leaked through the physical implementation of the algorithm, rather than the algorithm itself. Examples of side-channel attacks include power analysis attacks and timing attacks.
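To make the public/private key relationship described above concrete, here is a toy "textbook RSA" sketch in Python. It is illustrative only: real systems use padded RSA (e.g. OAEP) through a vetted library and primes hundreds of digits long, and the tiny primes below would be broken instantly.

    # Toy textbook RSA: key generation, encryption, and decryption.
    from math import gcd

    def make_keypair(p, q, e=17):
        """Build an RSA keypair from two primes p and q (assumed prime)."""
        n = p * q
        phi = (p - 1) * (q - 1)      # Euler's totient of n
        assert gcd(e, phi) == 1, "public exponent must be coprime to phi(n)"
        d = pow(e, -1, phi)          # modular inverse of e (Python 3.8+)
        return (e, n), (d, n)        # (public key, private key)

    def encrypt(m, key):
        e, n = key
        return pow(m, e, n)          # ciphertext c = m^e mod n

    def decrypt(c, key):
        d, n = key
        return pow(c, d, n)          # plaintext m = c^d mod n

    public, private = make_keypair(p=61, q=53)   # hopelessly small demo primes
    ciphertext = encrypt(42, public)             # anyone with the public key can encrypt
    assert decrypt(ciphertext, private) == 42    # only the private key decrypts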
RSA vs AES

RSA is a public-key cryptography algorithm that is used to secure communication by encrypting the data in such a way that it can only be decrypted by someone with the correct private key. RSA is considered to be very secure, but it can be relatively slow compared to some other algorithms and is more suitable for small amounts of data, such as the encryption of short messages or the establishment of secure connections.

AES, on the other hand, is a symmetric-key cryptography algorithm that is used to encrypt and decrypt data. It uses the same key for both encryption and decryption, which makes it faster than RSA. AES is suitable for encrypting large amounts of data, such as files or entire disk drives. It is widely used in a variety of applications, including secure communication, file encryption, and disk encryption.

In general, RSA is more suitable for securing communication and establishing secure connections, while AES is more suitable for encrypting and decrypting large amounts of data. Both algorithms are widely used and are considered to be very secure.
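In practice this division of labor is usually combined as hybrid encryption: RSA protects a small, randomly generated AES key, and AES protects the bulk data. Here is a sketch using the third-party Python `cryptography` package (assuming a recent version is installed); the message bytes are placeholder values.

    # Hybrid encryption sketch: RSA-OAEP wraps an AES-256-GCM session key.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Recipient's RSA keypair (the public half is shared in advance).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender: encrypt the bulk payload with a fresh AES key...
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # 96-bit nonce; must never repeat for the same key
    ciphertext = AESGCM(aes_key).encrypt(nonce, b"a large payload", None)

    # ...then wrap the small AES key with the recipient's RSA public key.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(aes_key, oaep)

    # Recipient: unwrap the AES key with the private key, then decrypt the payload.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"a large payload"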
The Triangles Class 10 notes for Chapter 6, provided here, are one of the most crucial study resources for students studying in Class 10. These CBSE Chapter 6 notes are concise and cover all the concepts from this chapter from which questions might be included in the board exam. You will also come across theorems based on similar concepts. In your previous classes, you must have learned the basics of triangles, such as the area of a triangle and its perimeter. The main concepts from this chapter that are covered here are:
- What is a triangle?
- Similarity criteria of two polygons having the same number of sides
- Similarity criteria of triangles
- Proof of the Pythagoras Theorem
- Example questions
- Problems based on triangles
- Articles related to triangles

What is a Triangle? A triangle can be defined as a polygon which has three sides and three angles. The interior angles of a triangle sum to 180 degrees and the exterior angles sum to 360 degrees. Depending upon its angles and side lengths, a triangle can be categorized into the following types:
- Scalene Triangle – all three sides are of different lengths
- Isosceles Triangle – has two equal sides
- Equilateral Triangle – has three equal sides, with each interior angle equal to 60 degrees
- Acute-angled Triangle – has all angles less than 90 degrees
- Right-angled Triangle – has one 90-degree angle
- Obtuse-angled Triangle – has one angle greater than 90 degrees

Similarity Criteria of Two Polygons Having the Same Number of Sides: Any two polygons which have the same number of sides are similar if the following two criteria are met:
- Their corresponding angles are equal, and
- Their corresponding sides are in the same ratio (or proportion)

Similarity Criteria of Triangles: There are four main criteria which determine whether two triangles are similar or not. These 4 criteria are:
- AAA Similarity Criterion – if the corresponding angles of any two triangles are equal, then their corresponding sides will be in the same ratio and the triangles will be similar.
- AA Similarity Criterion – if two angles of one triangle are respectively equal to two angles of the other triangle, then the two triangles are similar.
- SSS Similarity Criterion – if the corresponding sides of any two triangles are in the same ratio, then their corresponding angles will be equal and they will be similar.
- SAS Similarity Criterion – if one angle of a triangle is equal to one angle of another triangle and the sides including these angles are in the same ratio (proportional), then the triangles are similar.

Proof of the Pythagoras Theorem. Statement – "In a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides." Know more about the Pythagoras theorem and Pythagorean triplets here, along with examples.

Consider a right triangle ABC, right-angled at B. Draw BD ⊥ AC.
Now, △ADB ~ △ABC
So, AD/AB = AB/AC, or AD·AC = AB² ……………(i)
Also, △BDC ~ △ABC
So, CD/BC = BC/AC, or CD·AC = BC² ……………(ii)
Adding (i) and (ii):
AD·AC + CD·AC = AB² + BC²
AC(AD + DC) = AB² + BC²
AC(AC) = AB² + BC²
⇒ AC² = AB² + BC²

Problems Related to Triangles
- A girl 90 cm tall is walking away from the base of a lamp post at a speed of 1.2 m/s. Calculate the length of the girl's shadow after 4 seconds if the lamp is 3.6 m above the ground. (A worked sketch of this problem follows the list.)
- S and T are points on sides PR and QR of triangle PQR such that angle P = angle RTS. Prove that triangles RPQ and RTS are similar.
- E is a point on the side AD produced of a parallelogram ABCD, and BE intersects CD at F. Show that triangles ABE and CFB are similar.
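The first problem in the list above yields to similar triangles: the lamp post and the girl, together with the tip of the shadow, form similar right triangles, so lamp height / (distance + shadow length) = girl's height / shadow length. A short Python sketch of the computation:

    # Lamp-post shadow problem solved with similar triangles.
    lamp_height = 3.6    # m
    girl_height = 0.9    # m (90 cm)
    speed = 1.2          # m/s
    t = 4.0              # s

    distance = speed * t              # distance walked from the lamp post: 4.8 m

    # lamp_height / (distance + shadow) = girl_height / shadow
    # => shadow * (lamp_height - girl_height) = girl_height * distance
    shadow = girl_height * distance / (lamp_height - girl_height)
    print(shadow)                     # 1.6 m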
Space-based solar power (SBSP) is considered one of the most promising technologies for addressing climate change. The concept calls for satellites in Low Earth Orbit (LEO) to collect power without interruption and beam it to receiving stations on Earth. This technology circumvents the main limiting factor of solar energy: that it is subject to the planet's diurnal cycle and weather. While the prospect of SBSP has been considered promising for decades, it is only in recent years that it has become practical, thanks to the declining costs of sending payloads to space. However, the technology has applications beyond providing Earth with abundant clean energy. The European Space Agency (ESA) is also investigating it as a means of providing power on the Moon through the “Clean Energy – New Ideas for Solar Power from Space” study, which recently yielded a technology demonstrator known as the Greater Earth Lunar Power Station (GEO-LPS). This technology could provide a steady supply of power for future operations on the Moon, which include creating a permanent lunar base like the ESA's proposed Moon Village.

Within the next decade, several space agencies and commercial space partners will send crewed missions to the Moon. Unlike the “footprints and flags” missions of the Apollo era, these missions are aimed at creating a “sustained program of lunar exploration.” In other words, we're going back to the Moon with the intent to stay, which means that infrastructure needs to be created. This includes spacecraft, landers, habitats, landing and launch pads, transportation, food, water, and power systems. As always, space agencies are looking for ways to leverage local resources to meet these needs. This process is known as in-situ resource utilization (ISRU), which reduces costs by limiting the number of payloads that need to be launched from Earth. Thanks to new research by a team from the Tallinn University of Technology (TalTech) in Estonia, it may be possible for astronauts to produce solar cells using locally sourced regolith (moon dust) to create a promising material known as pyrite. These findings could be a game-changer for missions in the near future, including the ESA's Moon Village, NASA's Artemis Program, and the Sino-Russian International Lunar Research Station (ILRS).

When the International Space Station (ISS) runs low on basic supplies – like food, water, and other necessities – it can be resupplied from Earth in a matter of hours. But when astronauts go to the Moon for extended periods of time in the coming years, resupply missions will take much longer to get there. The same holds true for Mars, which can take months to reach and is far more expensive to resupply. It is little wonder, then, why NASA and other space agencies are looking to develop methods and technologies that will ensure that their astronauts have a degree of self-sufficiency. According to NASA-supported research conducted by Daniel Tompkins of Grow Mars and Anthony Muscatello (formerly of the NASA Kennedy Space Center), ISRU methods will benefit immensely from some input from nature.

In the coming decades, many space agencies hope to conduct crewed missions to the Moon and even establish outposts there. In fact, between NASA, the European Space Agency (ESA), Roscosmos, and the Indian and Chinese space agencies, there is no shortage of plans to construct lunar bases and settlements. These will not only establish a human presence on the Moon, but facilitate missions to Mars and deeper into space.
To put it simply, the entire surface of the Moon is covered in dust (aka regolith) composed of fine particles of rough silicate. This dust was formed over the course of billions of years by constant meteorite impacts that pounded the silicate mantle into fine particles. It has remained in a rough and fine state because the lunar surface experiences no weathering or erosion (due to the lack of an atmosphere and liquid water).

Because it is so plentiful – reaching depths of 4-5 meters (13-16.5 feet) in some places, and up to 15 meters (49 feet) in the older highland areas – regolith is considered by many space agencies to be the building material of choice for lunar settlements. As Aidan Cowley, the ESA's science advisor and an expert when it comes to lunar soil, explained in a recent ESA press release: “Moon bricks will be made of dust. You can create solid blocks out of it to build roads and launch pads, or habitats that protect your astronauts from the harsh lunar environment.”

In addition to taking advantage of a seemingly inexhaustible local resource, the ESA's plan to use lunar regolith to create this base and related infrastructure demonstrates its commitment to in-situ resource utilization. Basically, bases on the Moon, Mars, and other locations in the Solar System will need to be as self-sufficient as possible to reduce reliance on Earth for regular shipments of supplies – which would be both expensive and resource-intensive.

To test how lunar regolith would fare as a building material, ESA scientists have been using Moon dust simulants harvested right here on Earth. As Cowley explained, regolith on both Earth and the Moon is the product of volcanism and is basically basaltic material made up of silicates. “The Moon and Earth share a common geological history,” he said, “and it is not difficult to find material similar to that found on the Moon in the remnants of lava flows.” The simulant was harvested from a region around Cologne, Germany, that was volcanically active about 45 million years ago. Using volcanic powder from these ancient lava flows, which was determined to be a good match for lunar dust, researchers from the European Astronaut Center (EAC) began using the powder (which they have named EAC-1) to fashion prototypes of the bricks that would be used to create the lunar village.

Spaceship EAC, an ESA initiative designed to tackle the challenges of crewed spaceflight, is also working with EAC-1 to develop the technologies and concepts that will be needed to create a lunar outpost and support future missions to the Moon. One of its projects centers on how to use the oxygen in lunar dust (which accounts for 40% of it) to help astronauts have extended stays on the Moon.

But before the ESA can sign off on lunar dust as a building material, a number of tests still need to be conducted. These include recreating the behavior of lunar dust in a radiation environment to simulate its electrostatic behavior. For decades, scientists have known that lunar dust is electrically charged because it is constantly bombarded by solar and cosmic radiation. This is what causes it to lift off the surface and cling to anything it touches (which the Apollo 11 astronauts noticed upon returning to the Lunar Module). As Erin Transfield – a member of ESA's lunar dust topical team – indicated, scientists still do not fully understand lunar dust's electrostatic nature, which could pose a problem when it comes to using it as a building material.
What’s more, the radiation-environment experiments have not yet produced conclusive results. As a biologist who dreams of being the first woman on the Moon, Transfield indicated that more research is necessary using actual lunar dust. “This gives us one more reason to go back to the Moon,” she said. “We need pristine samples from the surface exposed to the radiation environment.”

Beyond establishing a human presence on the Moon and allowing for deep-space missions, the construction of the ESA's proposed lunar village would also offer opportunities to leverage new technologies and forge partnerships between the public and private sectors. For instance, the ESA has collaborated with the architectural design firm Foster + Partners to come up with the design for its lunar village, and other private companies have been recruited to help investigate other aspects of building it. One upcoming mission, a joint effort between the ESA and Roscosmos, will involve a Russian-built lander setting down in the Moon's South Pole-Aitken Basin, where the PROSPECT package will deploy and drill into the surface to retrieve samples of ice. Going forward, the ESA's long-term plans also call for a series of missions to the Moon beginning in the 2020s that would involve robot workers paving the way for human explorers to land later.

In the coming decades, the intentions of the world's leading space agencies are clear: not only are we going back to the Moon, but we intend to stay there! To that end, considerable resources are being dedicated to researching and developing the necessary technologies and concepts needed to make this happen. By the 2030s, we might just see astronauts (and even private citizens) coming and going from the Moon with regular frequency. And be sure to check out this video about the EAC's efforts to study lunar regolith, courtesy of the ESA.

In recent years, multiple space agencies have shared their plans to return astronauts to the Moon, not to mention establishing an outpost there. Beyond NASA's plan to revitalize lunar exploration, the European Space Agency (ESA), Roscosmos, and the Chinese and Indian federal space agencies have also announced plans for crewed missions to the Moon that could result in permanent settlements. As with all things in this new age of space exploration, collaboration appears to be the key to making things happen.

This certainly seems to be the case when it comes to the China National Space Administration (CNSA) and the ESA's respective plans for lunar exploration. As spokespeople from both agencies announced this week, the CNSA and the ESA hope to work together to create a “Moon Village” by the 2020s. The announcement first came from the Secretary General of the Chinese space agency (Tian Yulong). Earlier today (Wednesday, April 26th), it was confirmed by the head of media relations for the ESA (Pal A. Hvistendahl). As Hvistendahl was quoted as saying by the Associated Press: “The Chinese have a very ambitious moon program already in place. Space has changed since the space race of the ’60s. We recognize that to explore space for peaceful purposes, we do international cooperation.”

Yulong and Hvistendahl indicated that this base would aid in the development of lunar mining and space tourism, and would facilitate missions deeper into space – particularly to Mars. It would also build upon recent accomplishments by both agencies, which have successfully deployed robotic orbiters and landers to the Moon in the past few decades.
These include the CNSA’s Chang’e missions, as well as the ESA’s SMART-1 mission. As part of the Chang’e program, the Chinese landers explored the lunar surface, in part to investigate the prospect of mining helium-3, which could be used to power fusion reactors here on Earth. Similarly, the SMART-1 mission created detailed maps of the northern polar region of the Moon. By charting the geography and illumination of the lunar north pole, the probe helped to identify possible base sites where water ice could be harvested.

In addition, it is likely that the construction of this base will rely on additive manufacturing (aka 3D printing) techniques specially developed for the lunar environment. In 2013, the ESA announced that it had teamed up with renowned architects Foster + Partners to test the feasibility of using lunar soil to print walls that would protect lunar domes from harmful radiation and micrometeorites.

This agreement could signal a new era for the CNSA, which has enjoyed little in the way of cooperation with other federal space agencies in the past. Due to the agency's strong military connections, the U.S. government passed legislation in 2011 that barred the CNSA from participating in the International Space Station. But an agreement between the ESA and China could open the way for a three-party collaboration involving NASA. The ESA, NASA, and Roscosmos also entered into talks back in 2012 about the possibility of creating a lunar base together. Assuming that all four parties can agree on a framework, any future Moon Village could involve astronauts from all of the world's largest space agencies. Such an outpost – where research could be conducted on the long-term effects of exposure to low gravity and extraterrestrial environments – would be invaluable to space exploration.

In the meantime, the CNSA hopes to launch a sample-return mission to the Moon by the end of 2017 – Chang’e 5 – and to send the Chang’e 4 mission (whose launch was delayed in 2015) to the far side of the Moon by 2018. For its part, the ESA hopes to conduct a mission analysis on samples brought back by Chang’e 5, and also wants to send a European astronaut to Tiangong-2 (which just conducted its first automated cargo delivery) at some future date. As has been said countless times since the end of the Apollo era: “We're going back to the Moon. And this time, we intend to stay!”
In the realm of education, there exists a teaching approach that goes beyond traditional classroom lectures and textbooks. It is a method that immerses students in the learning process, allowing them to actively engage with the subject matter at hand. This approach, known as hands-on learning, is characterized by its emphasis on experiential activities and direct manipulation of materials or objects. By doing so, students are able to develop a deeper understanding of concepts and principles, while also honing their problem-solving skills and critical thinking abilities. Hands-on learning offers numerous benefits for students of all ages. Research has shown that it enhances retention and recall of information, as well as fosters creativity and innovation. Moreover, this approach promotes active participation and collaboration among learners, leading to a more dynamic and interactive classroom environment. Throughout this article, we will explore the definition and principles of hands-on learning, delve into its various benefits for students across different educational settings, provide examples of hands-on learning activities, and discuss strategies for implementing this approach effectively in classrooms and other learning environments. Whether you are an educator seeking new instructional techniques or a curious reader eager to learn more about innovative teaching methods, this article aims to offer valuable insights into the world of hands-on learning.

- Hands-on learning emphasizes experiential activities and direct manipulation of materials or objects.
- Research shows that hands-on learning enhances retention, recall, creativity, and innovation.
- Hands-on learning fosters active participation and collaboration among learners.
- Benefits of hands-on learning include increased student engagement, development of critical thinking skills, enhanced retention and application of knowledge, and improved collaboration and communication skills.

Definition and Principles of Hands-On Learning

The concept of hands-on learning refers to a pedagogical approach that emphasizes active and experiential engagement, promoting deeper understanding and retention of knowledge through direct manipulation and practical application. Hands-on learning is not limited to traditional classroom settings; it also finds value in the workplace, where employees engage in practical tasks to enhance their skills and knowledge. In this context, hands-on learning allows individuals to apply theoretical concepts in real-world scenarios, thereby bridging the gap between theory and practice. Additionally, hands-on learning is relevant in the arts and humanities, as it encourages students to actively participate in creative processes such as painting or performing arts. By engaging directly with materials and techniques, learners can develop a better appreciation for artistic expression while honing their own skills.

Benefits of Hands-On Learning for Students

One potential advantage of engaging in hands-on learning is the opportunity it provides for students to actively participate and gain practical experience, fostering a deeper understanding of concepts. This approach has several benefits for students:

Increased student engagement: Hands-on learning captures students’ attention and encourages active involvement, leading to higher levels of engagement in the learning process.
Development of critical thinking skills: By actively participating in hands-on activities, students are challenged to think critically, solve problems, and make informed decisions.

Enhanced retention and application of knowledge: Hands-on learning allows students to apply what they have learned in real-world contexts, promoting better retention of information and facilitating its transfer to new situations.

Improved collaboration and communication skills: Through hands-on activities, students often work together in groups, which helps develop their ability to collaborate effectively and communicate their ideas clearly.

Overall, hands-on learning offers numerous advantages that contribute to a more engaging and effective educational experience for students.

Examples of Hands-On Learning Activities

Examples of hands-on learning activities include conducting science experiments in a laboratory setting, engaging in mock trials to learn about the legal system, and building models or prototypes to understand engineering principles. In the context of science experiments, hands-on learning allows students to actively engage with the materials and concepts being taught. By physically manipulating equipment, measuring variables, and observing outcomes firsthand, students gain a deeper understanding of scientific principles. Similarly, hands-on learning in outdoor activities provides an opportunity for students to explore and apply theoretical knowledge in real-world settings. This could involve conducting ecological surveys, collecting samples for analysis, or participating in field studies. Through these experiences, students not only acquire practical skills but also develop critical thinking abilities as they navigate challenges encountered during the hands-on learning process.

Implementing Hands-On Learning in Different Educational Settings

Implementing hands-on learning in different educational settings presents an opportunity to foster active student engagement and enhance the application of theoretical knowledge through practical experiences. However, there are challenges when it comes to incorporating hands-on learning in online education. Online platforms may lack the physical resources and equipment required for hands-on activities, making it difficult to provide students with a truly interactive experience. To overcome these challenges, educators can utilize virtual simulations, online laboratories, and collaborative projects that encourage students to actively participate and engage with the material. Incorporating hands-on learning in STEM subjects requires specific strategies that encourage critical thinking and problem-solving skills. For example, teachers can design experiments or projects that allow students to apply scientific concepts in real-world scenarios. They can also incorporate technology tools such as 3D modeling software or coding programs to enable students to explore STEM subjects in a hands-on manner. By employing these strategies, educators can ensure that hands-on learning remains an integral part of the educational experience regardless of the setting or subject area.

Tips for Effective Hands-On Learning Experiences

To enhance the effectiveness of hands-on learning experiences, educators can employ various strategies that promote active student engagement and foster practical application of knowledge. Incorporating best practices for facilitating hands-on learning is crucial in ensuring optimal outcomes.
Firstly, educators should provide clear objectives and expectations to guide students throughout the process. This helps students understand the purpose and relevance of the activity, enhancing their motivation to actively participate. Secondly, educators should create a supportive environment that encourages collaboration and critical thinking. This can be achieved by assigning group projects or incorporating discussions and problem-solving activities into the learning experience. Lastly, overcoming challenges in hands-on learning implementation requires flexibility and adaptability on the part of educators. They should be prepared to modify activities based on student needs and interests, providing individualized support as necessary. By following these strategies, educators can maximize student engagement and promote effective hands-on learning experiences.

Frequently Asked Questions

What are the different types of learners that benefit the most from hands-on learning?
Hands-on learning benefits kinesthetic learners, who learn best through physical activities, and visual learners, who benefit from seeing and observing. It provides a practical approach to education that enhances understanding and retention of information.

How does hands-on learning contribute to the development of critical thinking skills?
Hands-on learning contributes to the development of critical thinking skills by providing students with opportunities to actively engage in problem-solving, decision-making, and analysis. It enhances their ability to think critically, make connections, and apply knowledge in practical situations. Incorporating hands-on activities in the curriculum benefits students by promoting deeper understanding and retention of concepts.

Are there any specific subjects or academic areas where hands-on learning is particularly effective?
Hands-on learning has proven to be particularly effective in subjects such as science, technology, engineering, and mathematics (STEM), as well as vocational training. Research suggests that this approach enhances academic effectiveness by promoting active engagement, problem-solving skills, and deeper understanding of concepts.

Can hands-on learning be used in virtual or online educational settings?
Virtual hands-on learning has shown great benefits in remote education. A recent study found that students who engage in virtual hands-on activities have higher levels of engagement and retention compared to traditional online instruction.

How can parents or guardians support hands-on learning at home?
Parents or guardians can support hands-on learning at home by engaging children in creative learning activities and practical tasks. This can be done through providing resources, setting up experiments, encouraging problem-solving, and promoting critical thinking skills.

Hands-on learning, also known as experiential learning, is an educational approach that emphasizes active participation and practical application of knowledge. It involves students directly engaging in activities, experiments, and real-life experiences to gain a deeper understanding of the subject matter. This approach has numerous benefits for students, including enhanced critical thinking skills, improved problem-solving abilities, increased motivation and engagement, and better retention of information. Hands-on learning can be implemented in various educational settings such as classrooms, laboratories, outdoor environments, or even through virtual simulations.
By incorporating hands-on activities into the learning process, educators can create dynamic and interactive experiences that foster deep learning and long-lasting knowledge acquisition. In conclusion, hands-on learning is like a spark igniting the fire of knowledge within students’ minds. By actively engaging in practical activities and real-world experiences, students are able to grasp concepts more effectively while developing essential skills for their future endeavors. Whether it’s conducting experiments in a science lab or participating in a group project outdoors, hands-on learning provides an invaluable opportunity for students to apply theoretical knowledge in meaningful ways. By embracing this experiential approach to education, educators can create a vibrant and stimulating learning environment that nurtures curiosity and encourages lifelong learning.
Prehistoric and recent extinctions of large-bodied terrestrial herbivores had significant and lasting impacts on Earth’s ecosystems due to the loss of their distinct trait combinations. The world’s surviving large-bodied avian and mammalian herbivores remain among the most threatened taxa. As such, a greater understanding of the ecological impacts of large herbivore losses is increasingly important. However, comprehensive and ecologically relevant trait datasets for extinct and extant herbivores are lacking. Here, we present HerbiTraits, a comprehensive functional trait dataset for all late Quaternary terrestrial avian and mammalian herbivores ≥10 kg (545 species). HerbiTraits includes key traits that influence how herbivores interact with ecosystems, namely body mass, diet, fermentation type, habitat use, and limb morphology. Trait data were compiled from 557 sources and comprise the best available knowledge on late Quaternary large-bodied herbivores. HerbiTraits provides a tool for the analysis of herbivore functional diversity, both past and present, and its effects on Earth’s ecosystems.

[Dataset metadata] Measurement(s): body weight • diet • digestion trait • habitat • limb morphology trait, recorded for species of avian and mammalian herbivores. Sample Characteristic – Organism: avian herbivores • mammalian herbivores. Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.13353416

Background & Summary

Large-bodied terrestrial avian and mammalian herbivores strongly influenced terrestrial ecosystems through much of the Cenozoic – the last 66 million years of Earth history. However, many of the world’s large-bodied herbivore species became extinct or experienced significant range contractions beginning ~100,000 years ago in the late Quaternary. Human impacts were the primary driver of these extinctions and declines, though possibly in conjunction with climate change1,2,3. The world’s remaining large-bodied herbivores are among the most threatened species on the planet4,5, leading to urgent calls to protect these species and to better understand their distinct ecological roles6. Large-bodied herbivores are unique in their capacity to consume large quantities of plant biomass and, as the largest terrestrial animals, they are uniquely capable of causing disturbance to vegetation and soils. These taxa thus exert strong top-down control on ecological communities and ecosystem processes. Prehistoric and historic losses of large herbivores led to profound changes to Earth’s terrestrial ecosystems, including reductions in ecosystem productivity from reduced nutrient cycling, reduced forest carbon stocks from the loss of disturbance, increases in wildfire frequency and severity, and changes in plant communities7,8,9,10,11,12. The causes and ecological legacies of late Quaternary extinctions are key topics of rapidly growing research interest13,14,15,16,17,18. Likewise, the potential for introduced herbivores (whether introduced inadvertently or intentionally) to restore lost ecological processes is an important focus of research and debate today19,20,21,22,23,24,25,26,27. The capacity for organisms to affect the environment is driven by their functional trait combinations28 (Fig. 1). As such, the availability and accuracy of herbivore functional trait data is critical for understanding the patterns and ecological consequences of the late Quaternary extinctions and the implications of modern ecological changes, and for guiding conservation action.
However, datasets of herbivore traits are rare and suffer from poor documentation, incomplete species lists, and outdated taxonomies. Trait datasets have been particularly scarce and/or inconsistently available for extinct species. Furthermore, there is often a trade-off between species coverage and trait resolution. Mammalian trait datasets such as PHYLACINE29 or MOM (Mass of Mammals)30 include data on many late Quaternary mammal species, including carnivorous, aquatic, and flying species. These datasets thus include traits that are universal across these disparate ecological niches, but in doing so they lack trait data specifically relevant to herbivores and their unique ecological roles. Furthermore, few datasets have considered or included avian herbivores, which can be particularly important components of large vertebrate faunas, especially on islands. The lack of a consistent, high-resolution trait dataset for late Quaternary avian and mammalian herbivores stymies efforts to understand the consequences of the ecological changes that followed late Quaternary extinctions and hinders modern responses to changes in this important functional group. Here, we present HerbiTraits, a comprehensive global trait dataset containing functional traits for all terrestrial avian (n = 34 species) and mammalian (n = 511 species) herbivores ≥10 kg spanning the last ~130,000 years of the late Quaternary. HerbiTraits contains traits fundamental to understanding the multiple dimensions of herbivore ecology, including body mass, diet, fermentation type, habitat use, and limb morphology (Fig. 1, Table 1). These data are broadly useful for both paleo and modern ecological research, including potential conservation and rewilding efforts involving herbivores. Recent research using these data has yielded insight into the functionality of novel assemblages composed of introduced and native herbivores25.

Compilation of Species List

HerbiTraits includes all known herbivores from the last ~130,000 years, beginning with the last interglacial period, ~30,000 years prior to the onset of the earliest late Quaternary extinctions. The mammal species list was derived from PHYLACINE v1.2.129. Herbivorous birds ≥10 kg were gathered through a comprehensive review of the peer-reviewed literature, including handbooks31. Herbivores were defined as any species ≥10 kg with >50% plant matter in their diet, thus including several omnivorous taxa (e.g. bears). The 10 kg cut-off was chosen following Owen-Smith's32 designation of a mesoherbivore, a category paradigmatic to many herbivore ecological analyses33 but missed by the ≥44 kg cutoff commonly used for 'megafauna'34. Domestic species with wild introduced populations (e.g. horses Equus ferus caballus, water buffalo Bubalus arnee bubalis)26 were included separately in HerbiTraits, as their trait values (particularly body mass) can differ substantially from those of their extant or extinct pre-domestic conspecifics. We recorded the status of all species as one of 'Extant', 'Extinct before 1500 CE', 'Extinct after 1500 CE', 'Extinct before 1500 CE, but wild in introduced range', and 'Extinct after 1500 CE, but wild in introduced range'. The latter two cases apply to species that are extinct in their native ranges (e.g. Camelus dromedarius, Bos primigenius, Oryx dammah) but which have wild, introduced populations. Species listed as Extinct in the Wild by the IUCN Red List are considered 'Extinct after 1500 CE' in the dataset.
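As a concrete illustration of the inclusion rule described above, the R sketch below filters a toy species table on the two stated criteria (≥10 kg body mass and >50% plant matter in the diet). The table and its column names are hypothetical and for illustration only; they are not the column names used in HerbiTraits itself.

```r
library(dplyr)

# Toy species table; names and values are illustrative, not from HerbiTraits.
species <- tibble::tibble(
  binomial           = c("Loxodonta africana", "Ursus arctos", "Madoqua kirkii"),
  mass_kg            = c(3825, 180, 5),
  percent_plant_diet = c(100, 85, 100)
)

# The two inclusion criteria stated in the text:
herbivores <- species %>%
  filter(mass_kg >= 10,             # mesoherbivore cut-off (Owen-Smith 1988)
         percent_plant_diet > 50)   # >50% plant matter admits omnivores such as bears
```

Under this rule, the omnivorous brown bear is retained while the 5 kg dik-dik is excluded, matching the rationale for the 10 kg cut-off.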
Functional trait data were collected from a variety of peer-reviewed literature (n = 502 references, 91% of total references), books (n = 28, e.g. Handbook of the Mammals of the World35), online databases (n = 7), theses (n = 9), and other sources (n = 11). For all taxa, multiple sources were consulted, and the most reliable source was used in trait designation. Reliability was based on the methods of the source data (see Table 2 for the ranking system we employed). In cases where studies disagreed, we gave extra weight to studies with more reliable methods, larger sample sizes, and/or broader geographic and temporal coverage. We provide justification for our decision-making process in note fields. Body mass is strongly associated with a number of life history attributes and ecological effects, including metabolic and reproductive rates, the capacity to cause disturbance, the ability to digest coarse fibrous vegetation, and the vulnerability of herbivores to predation32,36 (Fig. 1). Mammal body mass (in grams) was sourced from PHYLACINE v1.2.129 and Mass of Mammals30 (Table 1). Avian body masses were collected by the authors directly from the peer-reviewed literature. We collected body mass data for domesticated species separately, from the AnAge database37, because their body masses can vary drastically from those of their pre-domesticated relatives. Given variability in mass estimation methods and their reliability, we tracked down the primary sources cited by the aforementioned datasets and coded the mass estimation method used. In general, the most reliable body mass estimates for extinct mammals were calculated with volumetric estimates (e.g. by measuring displacement of a fluid) or by allometric scaling equations. Isometric equations (which assume a simple linear relationship between a morphological measurement (e.g. tarsus length) and body mass) were ranked lower, as were cases where body masses were estimated from similar, often closely related species (Table 2). However, we restricted this metadata gathering to extinct taxa, as accounts of extant species rarely report how their mass estimates were generated (though in all likelihood they are derived from measured voucher specimens). Furthermore, the mass estimates of extinct species are the most uncertain and the most difficult to verify for users who are not familiar with extinct species or paleobiological methods of mass reconstruction. Diet determines the types of plants herbivores consume and thus their downstream effects on vegetation, nutrient cycling, wildfire, seed dispersal, and albedo (Fig. 1)19,33. Diet was collected as three ordinal variables describing graminoid consumption (i.e. grazing), browse and fruit consumption (i.e. browsing), and meat consumption (including vertebrate and invertebrate prey) (Table 1). Grazing and browsing have distinct effects on vegetation and ecosystems and are key dimensions of herbivore dietary differentiation33, reflecting a suite of strategies that have evolved across all major herbivore lineages. This is because graminoids (grasses and their relatives) and dicots (woody plants and herbaceous forbs) present different obstacles to herbivory: graminoids are highly abrasive and composed primarily of cellulose, whereas dicots are lignified and/or protected by secondary chemical compounds38.
Frugivory is often impossible to differentiate from browsing based on paleobiological sources of data for extinct taxa and thus was included with browsing, though known records of fruit consumption are marked in the dataset's diet notes column. The consumption of bamboo was considered browsing despite bamboo being a grass, as its lignification makes it structurally similar to wood39. Graminoid, browse, and meat consumption were each scored from 0 to 3, with 0 indicating insignificant consumption and 3 indicating regular or heavy consumption. In general, 0 indicates 0–9% of diet, 1 indicates 10–19%, 2 indicates 20–49%, and 3 indicates 50–100%. For example, an obligate grazer that consumes 90% graminoids would have a 3 for graze and a 0 for browse, whereas a grazer that consumes 70% graze and 30% browse would have a 3 for graze and a 2 for browse. Likewise, a species that consumed graze and browse equally (e.g. a mixed feeder) would receive a score of 3 for each. While dietary estimates for extinct taxa by necessity came from broad temporal and spatial scales40, the coarseness of our ordinal (0–3) diet designation allowed us to capture intraspecific and spatiotemporal variation, making extant and extinct species comparable. Diets for extant species (n = 321) were based on records from the Handbook of the Mammals of the World35, which represents a compiled, expert-reviewed synopsis of natural history data across mammals. However, to ensure that these diet designations were up to date, we conducted literature reviews for each species, searching for any papers published since the relevant volume of the Handbook of the Mammals of the World (2009–2011, depending on taxonomic group). We also consulted region-specific handbooks, in particular Kingdon et al.'s 2013 Mammals of Africa41. In cases where percent diet composition was unavailable, we determined dietary values by converting textual descriptions into ordinal values (Table 3) following the methods outlined by MammalDIET42. Diets for extinct species were gathered from a variety of literature, as no systematic compilation of extinct herbivore diets is presently available. Discrepancies between sources were noted and described in the dietary notes field. The methods of the original source papers for extant and extinct species were coded and ranked by reliability (Table 2), and these rankings informed the final dietary values. We gave priority to direct observations, including fecal or stomach content analysis, coprolites, fossilized boluses (e.g. phytoliths or other vegetation remnants in teeth), and foraging observations. This category was followed by proxy data, such as stable carbon isotopes and dental microwear and mesowear. Inferences from functional morphology, direct observations with sample sizes ≤5, expert opinions, and inferences from extant relatives were considered to have the lowest reliability (Table 2). Herbivore diets can be highly variable, particularly across seasons and regions. In most cases where primary sources differed because of geographic variation in diets (e.g. a diet heavy in grass in one location and in browse in another), we increased the value of both dietary categories to reflect the mixed feeding capacity of the species across its range. However, we tempered this in cases of unusual diets adopted in response to starvation, such as during severe droughts, as consumption does not necessarily mean the species has the capacity to survive on these alternative diets. In these cases, we have noted the evidence and justified our decision-making process.
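The percentage bins above translate directly into code. The following R sketch is mine rather than the authors' (the function name and the use of findInterval are illustrative assumptions), but the bin edges are exactly those stated in the text.

```r
# Map a percentage of diet to the ordinal 0-3 score defined above:
# 0-9% -> 0, 10-19% -> 1, 20-49% -> 2, 50-100% -> 3.
diet_score <- function(percent) {
  findInterval(percent, c(10, 20, 50))
}

diet_score(c(90, 70, 30, 5))  # returns 3 3 2 0
```

Applied to the worked example above, diet_score(70) gives 3 for graze and diet_score(30) gives 2 for browse, matching the scores in the text.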
In cases where no dietary data were available (n = 26 species), we imputed diet values based on a posterior distribution of 1,000 equally likely phylogenies for mammals ≥10 kg from PHYLACINE v1.2.129,43. We used the R package "Rphylopars" v0.3.0 with a Brownian motion evolutionary model and took the median value across the 1,000 phylogenetic trees44,45. This model accounted both for the evolutionary correlation of the individual dietary values across the full phylogeny and for the probability of diet values given other traits, as some trait combinations (e.g. arboreality and grazing) are very rare. Because this imputation was conducted across full mammal phylogenies (≥10 kg), we used life history traits from PHYLACINE v1.2.129,43 so that imputation for species only distantly related to other herbivores (e.g. bears) would be robust. Ordinal diet scores were further used to assign species to two dietary guild classifications: a herbivore-specific classification containing browsers (graze = 0–1, browse = 3), mixed-feeders (graze = 2–3, browse = 2–3), and grazers (graze = 3, browse = 0–1), and a second classification identifying omnivores (any species with meat consumption ≥2). Users can easily derive finer-scale dietary guilds (e.g. mixed-feeder preferring browse) from the ordinal scores if desired. Digestive physiology controls the quantity and quality of vegetation (e.g. fiber and nutrient content) that herbivores consume. Fermentation type therefore shapes a herbivore's effects on vegetation, its gut passage rate, seed and nutrient dispersal distances, water requirements, and the stoichiometry of its excreta19,46,47,48,49 (Fig. 1). Following Hume46, fermentation type was collected as a categorical variable consisting of simple gut, hindgut colon, hindgut caecum, foregut non-ruminant, and ruminant (Table 1). These categories capture the range of fermentation adaptations across avian and mammalian herbivores. Based on these classifications and Hume46, we also assigned ordinal fermentation efficiency ranks (0–3) to these digestive strategies to facilitate quantitative functional diversity analyses (Table 1). Fermentation types show strong phylogenetic conservatism at the family level. Therefore, for the most part, if direct anatomical evidence was not available, we inferred fermentation types from extant relatives. However, some extinct herbivores possess no close modern relatives and may have been functionally non-analog (e.g. 23 extinct ground sloths, 3 notoungulates, 4 diprotodons, 16 glyptodonts, and 12 giant lemurs). In these cases, closest living relatives, expert opinions, and craniodental morphology were used to determine the most likely fermentation system. For example, notoungulates, an extinct group from South America, possess no close relatives, yet their craniodental and appendicular morphology resembles that of extant hindgut-fermenting taxa (e.g. rhinos), and hindgut fermentation is widely considered to be ancestral in ungulates50. In all cases, we describe our justification and the state of the debate in the current literature. Habitat use determines the components of ecosystems that herbivores interact with and is central to understanding their effects on vegetation, soils, and processes like nutrient dispersal (e.g. moving nutrients from terrestrial to aquatic environments51). We classified habitat use with three non-exclusive binary variables (0 or 1) for the use of arboreal, terrestrial, and aquatic environments.
We further classified this variable categorically as semi-aquatic, terrestrial, semi-arboreal, or arboreal. Defining habitat use is challenging, as many terrestrial species use aquatic or arboreal environments opportunistically, and percentage habitat use data are unavailable for most species. To ensure habitat designations were consistent for extant and extinct species, we classified taxa on the basis of obligate habitat use across their geographic range and/or the possession of specialized adaptations (e.g. climbing ability) that would be evident in the morphology of fossil specimens. Further evidence of habitat use by extinct species was inferred from close relatives or isotopic proxy data, where relevant. In cases where no specific information was available, we inferred habitat use from the absence of evidence (e.g. there are no specific data regarding aquatic or arboreal habitat use by the gemsbok Oryx gazella). Limb morphology is broadly associated with herbivore habitat preferences, locomotion (e.g. cursoriality, fossoriality, climbing), anti-predator responses, and rates of body size evolution52,53,54. Limb morphology also controls disturbance-related trampling effects on soils, with hoofed unguligrade taxa having stronger influences on soils than those with other morphologies55. Trampling has important effects on soils, hydrology, albedo, and vegetation7,56 and is often considered an essentially novel aspect of introduced herbivores in Australia and North America (e.g.10,57,58). Limb morphology was collected as a three-level categorical variable consisting of plantigrade (walking on the soles of the feet), digitigrade (walking on the toes), and unguligrade (walking on hooves). For example, plantigrade species are more likely to be fossorial or scansorial in habit, digitigrade species are likely to be saltatory or ambulatory (e.g. extant kangaroos), while unguligrade species are often adapted for rocky, vertiginous terrain or for cursoriality53,54. Limb morphology shows high phylogenetic conservatism across herbivore lineages and thus was primarily collected at the genus or family level from primary and secondary literature. HerbiTraits consists of an Excel workbook containing metadata (column names and descriptions), the trait dataset, and references as three separate sheets. The dataset is open-access and is hosted on Figshare59 as well as on GitHub (https://github.com/MegaPast2Future/HerbiTraits). The majority of functional trait data were collected from primary peer-reviewed literature (1,733 trait values from 456 articles), secondary peer-reviewed literature (1,294 values from 46 articles), or academic handbooks (1,099 trait values from 27 resources). The twenty-eight remaining resources consisted of theses (n = 39 trait values), databases (44), websites (39), conference proceedings (9), and grey literature (5). For transparency, justifications for trait designations (particularly relevant for extinct species) are described in the Notes columns, and the highest-quality evidence is ranked in trait-specific Reliability columns. Contradictions between sources have been noted, and values have been based on the most empirically robust methods or derived by averaging values across studies (see above). All data designations have been cross-checked (by EJL, SDS, JR, MD, and OM). We aim to maintain HerbiTraits with the best available data.
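A minimal usage sketch in R, assuming the workbook has been downloaded locally: the file name and sheet order below are assumptions based on the description above (metadata, trait dataset, references), and the column names in the guild step are illustrative stand-ins. The guild rules themselves are those stated earlier (browsers: graze 0–1 and browse 3; mixed-feeders: graze 2–3 and browse 2–3; grazers: graze 3 and browse 0–1; omnivores: meat ≥2).

```r
library(readxl)
library(dplyr)

# Hypothetical local copy of the workbook; sheet names/order are assumptions.
path   <- "HerbiTraits.xlsx"
sheets <- excel_sheets(path)        # expect three sheets per the description
traits <- read_excel(path, sheet = sheets[2])

# Derive the dietary guilds described in the text (column names illustrative).
traits <- traits %>%
  mutate(
    diet_guild = case_when(
      Graze == 3 & Browse <= 1 ~ "grazer",
      Graze <= 1 & Browse == 3 ~ "browser",
      Graze >= 2 & Browse >= 2 ~ "mixed-feeder"
    ),
    omnivore = Meat >= 2
  )
```

Because case_when() returns NA for score combinations outside the three stated guilds, any such rows are easy to spot and handle explicitly.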
We urge users to report errors or newly published data for integration into HerbiTraits by filing an Issue on our GitHub repository (https://github.com/MegaPast2Future/HerbiTraits) or by emailing the corresponding authors. Furthermore, the GitHub repository includes an incomplete trait file containing other ecologically relevant traits, such as adaptations for digging and free water dependence60. These traits remain unavailable for many taxa but provide a starting point for further data collection and analysis. The authors declare that no custom code is necessary for the interpretation or use of the dataset.

References

Barnosky, A. D., Koch, P. L., Feranec, R. S., Wing, S. L. & Shabel, A. B. Assessing the causes of late Pleistocene extinctions on the continents. Science 306, 70–75, https://doi.org/10.1126/science.1101476 (2004). Sandom, C., Faurby, S., Sandel, B. & Svenning, J. C. Global late Quaternary megafauna extinctions linked to humans, not climate change. Proc. R. Soc. B 281, 20133254, https://doi.org/10.1098/rspb.2013.3254 (2014). Metcalf, J. L. et al. Synergistic roles of climate warming and human occupation in Patagonian megafaunal extinctions during the Last Deglaciation. Science Advances 2, e1501682 (2016). Ripple, W. J. et al. Collapse of the world's largest herbivores. Science Advances 1, e1400103 (2015). Atwood, T. B. et al. Herbivores at the highest risk of extinction among mammals, birds, and reptiles. Science Advances 6, eabb8458 (2020). Ripple, W. J. et al. Saving the world's terrestrial megafauna. Bioscience 66, 807–812 (2016). Zimov, S. A. et al. Steppe-tundra transition: a herbivore-driven biome shift at the end of the Pleistocene. The American Naturalist 146, 765–794 (1995). Zhu, D. et al. The large mean body size of mammalian herbivores explains the productivity paradox during the Last Glacial Maximum. Nature Ecology & Evolution 2, 640–649, https://doi.org/10.1038/s41559-018-0481-y (2018). Berzaghi, F. et al. Carbon stocks in central African forests enhanced by elephant disturbance. Nature Geoscience 12, 725–729 (2019). Johnson, C. N. et al. Can trophic rewilding reduce the impact of fire in a more flammable world? Philos. Trans. R. Soc. Lond. B Biol. Sci. 373, https://doi.org/10.1098/rstb.2017.0443 (2018). Rule, S. et al. The aftermath of megafaunal extinction: ecosystem transformation in Pleistocene Australia. Science 335, 1483–1486, https://doi.org/10.1126/science.1214261 (2012). Gill, J. L., Williams, J. W., Jackson, S. T., Lininger, K. B. & Robinson, G. S. Pleistocene megafaunal collapse, novel plant communities, and enhanced fire regimes in North America. Science 326, 1100–1103 (2009). Smith, F. A., Elliott Smith, R. E., Lyons, S. K. & Payne, J. L. Body size downgrading of mammals over the late Quaternary. Science 360, 310–313, https://doi.org/10.1126/science.aao5987 (2018). Smith, F. A. et al. Unraveling the consequences of the terminal Pleistocene megafauna extinction on mammal community assembly. Ecography 39, 223–239, https://doi.org/10.1111/ecog.01779 (2015). Davis, M. What North America's skeleton crew of megafauna tells us about community disassembly. Proc. R. Soc. B 284, 20162116 (2017). Bakker, E. S. et al. Combining paleo-data and modern exclosure experiments to assess the impact of megafauna extinctions on woody vegetation. Proc. Natl. Acad. Sci. USA 113, 847–855 (2016). Bakker, E. S., Arthur, R. & Alcoverro, T.
Assessing the role of large herbivores in the structuring and functioning of freshwater and marine angiosperm ecosystems. Ecography 39, 162–179 (2016). Rowan, J. & Faith, J. in The Ecology of Browsing and Grazing II 61–79 (Springer, 2019). Wallach, A. D. et al. Invisible megafauna. Conservation Biology 32, 962–965 (2018). Sandom, C. J. et al. Trophic rewilding presents regionally specific opportunities for mitigating climate change. Philosophical Transactions of the Royal Society B 375, 20190125 (2020). Svenning, J. C. et al. Science for a wilder Anthropocene: Synthesis and future directions for trophic rewilding research. Proc. Natl. Acad. Sci. USA 113, 898–906, https://doi.org/10.1073/pnas.1502556112 (2016). Guyton, J. A. et al. Trophic rewilding revives biotic resistance to shrub invasion. Nature Ecology & Evolution, https://doi.org/10.1038/s41559-019-1068-y (2020). Derham, T. T., Duncan, R. P., Johnson, C. N. & Jones, M. E. Hope and caution: rewilding to mitigate the impacts of biological invasions. Philos. Trans. R. Soc. Lond. B Biol. Sci. 373, 20180127 (2018). Derham, T. & Mathews, F. Elephants as refugees. People and Nature 2, 103–110 (2020). Lundgren, E. J. et al. Introduced herbivores restore Late Pleistocene ecological functions. Proc. Natl. Acad. Sci. USA, https://doi.org/10.1073/pnas.1915769117 (2020). Lundgren, E. J., Ramp, D., Ripple, W. J. & Wallach, A. D. Introduced megafauna are rewilding the Anthropocene. Ecography 41, 857–866, https://doi.org/10.1111/ecog.03430 (2018). Donlan, C. J. et al. Pleistocene rewilding: an optimistic agenda for twenty-first century conservation. The American Naturalist 168, 660–681 (2006). Luck, G. W., Lavorel, S., McIntyre, S. & Lumb, K. Improving the application of vertebrate trait-based frameworks to the study of ecosystem services. J. Anim. Ecol. 81, 1065–1076, https://doi.org/10.1111/j.1365-2656.2012.01974.x (2012). Faurby, S. et al. PHYLACINE 1.2: The Phylogenetic Atlas of Mammal Macroecology. Ecology 99, 2626–2626 (2018). Smith, F. A. et al. Body mass of late Quaternary mammals. Ecology 84, 3403–3403 (2003). Hume, J. P. & Walters, M. Extinct birds. Vol. 217 (A&C Black, 2012). Owen-Smith, R. N. Megaherbivores: the influence of very large body size on ecology. (Cambridge University Press, 1988). Gordon, I. J. & Prins, H. H. The Ecology of Browsing and Grazing II. (Springer Nature, 2019). Martin, P. S. & Wright, H. E. Pleistocene extinctions: the search for a cause. (National Research Council (U.S.): International Association for Quaternary Research, 1967). Wilson, D. E. & Mittermeier, R. A. Handbook of the Mammals of the World Vol. 1–9 (Lynx Publishing, 2009–2019). Hopcraft, J. G. C., Olff, H. & Sinclair, A. R. E. Herbivores, resources and risks: alternating regulation along primary environmental gradients in savannas. Trends Ecol. Evol. 25, 119–128 (2010). AnAge: The Animal Ageing and Longevity Database. (2020). Clauss, M., Kaiser, T. & Hummel, J. in The ecology of browsing and grazing 47–88 (Springer, 2008). Van Soest, P. J. Allometry and ecology of feeding behavior and digestive capacity in herbivores: a review. Zoo Biology 15, 455–479 (1996). Davis, M. & Pineda-Munoz, S. The temporal scale of diet and dietary proxies. Ecol. Evol. 6, 1883–1897, https://doi.org/10.1002/ece3.2054 (2016). Kingdon, J. et al. Mammals of Africa. Vol. I–VI (Bloomsbury Natural History, 2013). Kissling, W. D. et al.
Establishing macroecological trait datasets: digitalization, extrapolation, and validation of diet preferences in terrestrial mammals worldwide. Ecol. Evol. 4, 2913–2930, https://doi.org/10.1002/ece3.1136 (2014). Faurby, S. & Svenning, J. C. A species-level phylogeny of all extant and late Quaternary extinct mammals using a novel heuristic-hierarchical Bayesian approach. Mol. Phylogenet. Evol. 84, 14–26, https://doi.org/10.1016/j.ympev.2014.11.001 (2015). Goolsby, E. W., Bruggeman, J. & Ané, C. Rphylopars: fast multivariate phylogenetic comparative methods for missing data and within-species variation. Methods Ecol. Evol. 8, 22–27 (2017). Bruggeman, J., Heringa, J. & Brandt, B. W. PhyloPars: estimation of missing parameter values using phylogeny. Nucleic Acids Res. 37, W179–W184 (2009). Hume, I. D. Digestive strategies of mammals. Acta Zoologica Sinica 48, 1–19 (2002). Demment, M. W. & Van Soest, P. J. A nutritional explanation for body-size patterns of ruminant and nonruminant herbivores. The American Naturalist 125, 641–672 (1985). Doughty, C. E. et al. Global nutrient transport in a world of giants. Proc. Natl. Acad. Sci. USA 113, 868–873, https://doi.org/10.1073/pnas.1502549112 (2016). Hofmann, R. R. Evolutionary steps of ecophysiological adaptation and diversification of ruminants: a comparative view of their digestive system. Oecologia 78, 443–457 (1989). Prothero, D. R. & Foss, S. E. The evolution of artiodactyls. (JHU Press, 2007). Subalusky, A. L., Dutton, C. L., Rosi-Marshall, E. J. & Post, D. M. The hippopotamus conveyor belt: vectors of carbon and nutrients from terrestrial grasslands to aquatic systems in sub-Saharan Africa. Freshw. Biol. 60, 512–525, https://doi.org/10.1111/fwb.12474 (2015). Kubo, T., Sakamoto, M., Meade, A. & Venditti, C. Transitions between foot postures are associated with elevated rates of body size evolution in mammals. Proc. Natl. Acad. Sci. USA 116, 2618–2623 (2019). Brown, J. C. & Yalden, D. W. The description of mammals - 2. Limbs and locomotion of terrestrial mammals. Mammal Review 3, 107–134 (1973). Polly, P. D. in Fins into Limbs: Evolution, Development, and Transformation (ed B. K. Hall) 245–268 (2007). Cumming, D. H. M. & Cumming, G. S. Ungulate community structure and ecological processes: body size, hoof area and trampling in African savannas. Oecologia 134, 560–568 (2003). te Beest, M., Sitters, J., Ménard, C. B. & Olofsson, J. Reindeer grazing increases summer albedo by reducing shrub abundance in Arctic tundra. Environmental Research Letters 11, 125013, https://doi.org/10.1088/1748-9326/aa5128 (2016). Bennett, M. Foot areas, ground reaction forces and pressures beneath the feet of kangaroos, wallabies and rat-kangaroos (Marsupialia: Macropodoidea). J. Zool. 247, 365–369 (1999). Beever, E. A., Huso, M. & Pyke, D. A. Multiscale responses of soil stability and invasive plants to removal of non-native grazers from an arid conservation reserve. Diversity and Distributions 12, 258–268 (2006). Lundgren, E. J. et al. Functional traits of the world's late Quaternary large-bodied avian and mammalian herbivores. figshare https://doi.org/10.6084/m9.figshare.c.5001971 (2020). Kihwele, E. S. et al. Quantifying water requirements of African ungulates through a combination of functional traits. Ecological Monographs 90, e01404, https://doi.org/10.1002/ecm.1404 (2020). Abbazzi, L. Remarks on the validity of the generic name Praemegaceros Portis, 1920, and an overview of Praemegaceros species in Italy. Rendiconti Lincei 15, 115 (2004). Acevedo, P.
& Cassinello, J. Biology, ecology and status of Iberian ibex Capra pyrenaica: a critical review and research prospectus. Mammal Review 39, 17–32 (2009). Adhikari, P. et al. Seasonal and altitudinal variation in roe deer (Capreolus pygargus tianschanicus) diet on Jeju Island, South Korea. Journal of Asia-Pacific Biodiversity 9, 422–428 (2016). Agenbroad, L. D. Mammuthus exilis from the California Channel Islands: height, mass, and geologic age. CIT 173, 536 (2010). Agetsuma, N., Agetsuma-Yanagihara, Y. & Takafumi, H. Food habits of Japanese deer in an evergreen forest: Litter-feeding deer. Mammalian Biology 76, 201–207 (2011). Ahmad, S. et al. Using an ensemble modelling approach to predict the potential distribution of Himalayan gray goral (Naemorhedus goral bedfordi) in Pakistan. Global Ecology and Conservation 21, e00845 (2020). Ahrestani, F. S., Heitkönig, I. M. & Prins, H. H. Diet and habitat-niche relationships within an assemblage of large herbivores in a seasonal tropical forest. J. Trop. Ecol., 385–394 (2012). Ahrestani, F. S., Heitkönig, I. M., Matsubayashi, H. & Prins, H. H. in The Ecology of Large Herbivores in South and Southeast Asia 99–120 (Springer, 2016). Aiba, K., Miura, S. & Kubo, M. O. Dental Microwear Texture Analysis in Two Ruminants, Japanese Serow (Capricornis crispus) and Sika Deer (Cervus nippon), from Central Japan. Mammal Study 44, 183–192 (2019). Akbari, H., Habibipoor, A. & Mousavi, J. Investigation on Habitat Preferences and Group Sizes of Chinkara (Gazella bennettii) in Dareh-Anjeer Wildlife Refuge, Yazd province. Iranian Journal of Applied Ecology 2, 81–90 (2013). Akbari, H., Moradi, H. V., Rezaie, H.-R. & Baghestani, N. Winter foraging of chinkara (Gazella bennettii shikarii) in Central Iran. Mammalia 80, 163–169 (2016). Akersten, W. A., Foppe, T. M. & Jefferson, G. T. New source of dietary data for extinct herbivores. Quaternary Research 30, 92–97 (1988). Akram, F., Ilyas, O. & Haleem, A. Food and Feeding Habits of Indian Crested Porcupine in Pench Tiger Reserve, Madhya Pradesh, India. Ambient Sci 4, 0–5 (2017). Al Harthi, L. S., Robinson, M. D. & Mahgoub, O. Diets and resource sharing among livestock on the Saiq Plateau, Jebel Akhdar Mountains, Oman. International Journal of Ecology and Environmental Sciences 34, 113–120 (2008). Alberdi, M. T., Prado, J. L. & Ortiz-Jaureguizar, E. Patterns of body size changes in fossil and living Equini (Perissodactyla). Biological Journal of the Linnean Society 54, 349–370 (1995). Alcover, J. A. Vertebrate evolution and extinction on western and central Mediterranean Islands. Tropics 10, 103–123 (2000). Alcover, J. A., Perez-Obiol, R., Yll, E.-I. & Bover, P. The diet of Myotragus balearicus Bate 1909 (Artiodactyla: Caprinae), an extinct bovid from the Balearic Islands: evidence from coprolites. Biological Journal of the Linnean Society 66, 57–74 (1999). Ali, A. et al. An assessment of food habits and altitudinal distribution of the Asiatic black bear (Ursus thibetanus) in the Western Himalayas, Pakistan. Journal of Natural History 51, 689–701 (2017). Cornell Lab of Ornithology. All About Birds. Allaboutbirds.org (Cornell Lab of Ornithology, 2020). Myers, P. et al. Animal Diversity Web (University of Michigan, 2019). Dantas, M. A. T. et al. Isotopic paleoecology of the Pleistocene megamammals from the Brazilian Intertropical Region: Feeding ecology (δ13C), niche breadth and overlap. Quaternary Science Reviews 170, 152–163 (2017). Arbouche, Y., Arbouche, H., Arbouche, F. & Arbouche, R.
Valeur fourragère des espèces prélevées par Gazella cuvieri Ogilby, 1841 au niveau du Djebel Metlili (Algérie). Archivos de Zootecnia 61, 145–148 (2012). Arman, S. D. & Prideaux, G. J. Dietary classification of extant kangaroos and their relatives (Marsupialia: Macropodoidea). Austral Ecol. 40, 909–922, https://doi.org/10.1111/aec.12273 (2015). Aryal, A. Habitat ecology of Himalayan serow (Capricornis sumatraensis ssp. thar) in Annapurna Conservation Area of Nepal. Tiger paper 34, 12–20 (2009). Aryal, A., Coogan, S. C., Ji, W., Rothman, J. M. & Raubenheimer, D. Foods, macronutrients and fibre in the diet of blue sheep (Pseudois nayaur) in the Annapurna Conservation Area of Nepal. Ecol. Evol. 5, 4006–4017 (2015). Asevedo, L., Winck, G. R., Mothé, D. & Avilla, L. S. Ancient diet of the Pleistocene gomphothere Notiomastodon platensis (Mammalia, Proboscidea, Gomphotheriidae) from lowland mid-latitudes of South America: Stereomicrowear and tooth calculus analyses combined. Quaternary International 255, 42–52, https://doi.org/10.1016/j.quaint.2011.08.037 (2012). Asensio, B. A., Méndez, J. R. & Prado, J. L. Patterns of body-size change in large mammals during the Late Cenozoic in the Northwestern Mediterranean. 464–479 (Museo Arqueológico Regional) (2004). Ashraf, N., Anwar, M., Hussain, I. & Nawaz, M. A. Competition for food between the markhor and domestic goat in Chitral, Pakistan. Turkish Journal of Zoology 38, 191–198 (2014). Ashraf, N. et al. Seasonal variation in the diet of the grey goral (Naemorhedus goral) in Machiara National Park (MNP), Azad Jammu and Kashmir, Pakistan. Mammalia 81, 235–244 (2017). The Australian Museum. Animal Fact Sheets. www.australian.museum/learn (New South Wales Government, New South Wales, 2019). Avaliani, N., Chunashvili, T., Sulamanidze, G. & Gurchiani, I. Supporting conservation of West Caucasian Tur (Capra caucasica) in Georgia. Conservation Leadership Programme. Project No: 400206 (2007). Baamrane, M. A. A. et al. Assessment of the food habits of the Moroccan dorcas gazelle in M'Sabih Talaa, west central Morocco, using the trnL approach. PLoS One 7, e35643 (2012). Bailey, M., Petrie, S. A. & Badzinski, S. S. Diet of mute swans in lower Great Lakes coastal marshes. The Journal of Wildlife Management 72, 726–732 (2008). Ballari, S. A. & Barrios-García, M. N. A review of wild boar Sus scrofa diet and factors affecting food selection in native and introduced ranges. Mammal Review 44, 124–134 (2014). Barboza, P. & Hume, I. Digestive tract morphology and digestion in the wombats (Marsupialia: Vombatidae). Journal of Comparative Physiology B 162, 552–560 (1992). Bargo, M. S. The ground sloth Megatherium americanum: skull shape, bite forces, and diet. Acta Palaeontologica Polonica 46, 173–192 (2001). Bargo, M. S. & Vizcaíno, S. F. Paleobiology of Pleistocene ground sloths (Xenarthra, Tardigrada): biomechanics, morphogeometry and ecomorphology applied to the masticatory apparatus. Ameghiniana 45, 175–196 (2008). Bargo, M. S., Toledo, N. & Vizcaíno, S. F. Muzzle of South American Pleistocene ground sloths (Xenarthra, Tardigrada). J. Morphol. 267, 248–263 (2006). Barreto, G. R. & Quintana, R. D. in Capybara (Springer, 2013). Baskaran, N., Kannan, V., Thiyagesan, K. & Desai, A. A. Behavioural ecology of four-horned antelope (Tetracerus quadricornis de Blainville, 1816) in the tropical forests of southern India. Mammalian Biology 76, 741–747 (2011). Baskaran, N., Ramkumaran, K. & Karthikeyan, G.
Spatial and dietary overlap between blackbuck (Antilope cervicapra) and feral horse (Equus caballus) at Point Calimere Wildlife Sanctuary, Southern India: Competition between native versus introduced species. Mammalian Biology 81, 295–302 (2016). Basumatary, S. K., Singh, H., McDonald, H. G., Tripathi, S. & Pokharia, A. K. Modern botanical analogue of endangered Yak (Bos mutus) dung from India: Plausible linkage with extant and extinct megaherbivores. PLoS One 14, e0202723 (2019). Bedaso, Z. K., Wynn, J. G., Alemseged, Z. & Geraads, D. Dietary and paleoenvironmental reconstruction using stable isotopes of herbivore tooth enamel from middle Pliocene Dikika, Ethiopia: Implication for Australopithecus afarensis habitat and food resources. J. Hum. Evol. 64, 21–38 (2013). Benamor, N., Bounaceur, F., Baha, M. & Aulagnier, S. First data on the seasonal diet of the vulnerable Gazella cuvieri (Mammalia: Bovidae) in the Djebel Messaâd forest, northern Algeria. Folia Zoologica 68, 1–8 (2019). Bennett, C. V. & Goswami, A. Statistical support for the hypothesis of developmental constraint in marsupial skull evolution. BMC Biol. 11 (2013). Bergmann, G. T., Craine, J. M., Robeson, M. S. II & Fierer, N. Seasonal shifts in diet and gut microbiota of the American bison (Bison bison). PLoS One 10, e0142409 (2015). Bhat, S. A., Telang, S., Wani, M. A. & Sheikh, K. A. Food habits of Nilgai (Boselaphus tragocamelus) in Van Vihar National Park, Bhopal, Madhya Pradesh, India. Biomedical and Pharmacology Journal 5, 141–147 (2015). Bhattacharya, T., Kittur, S., Sathyakumar, S. & Rawat, G. Diet overlap between wild ungulates and domestic livestock in the greater Himalaya: implications for management of grazing practices. in Proceedings of the Zoological Society 11–21 (Springer). Bibi, F. & Kiessling, W. Continuous evolutionary change in Plio-Pleistocene mammals of eastern Africa. Proc. Natl. Acad. Sci. USA 112, 10623–10628 (2015). Biknevicius, A. R., McFarlane, D. A. & MacPhee, R. D. E. Body size in Amblyrhiza inundata (Rodentia, Caviomorpha), an extinct megafaunal rodent from the Anguilla Bank, West Indies: estimates and implications. American Museum Novitates no. 3079 (1993). Cornell Lab of Ornithology. Birds of the World. https://birdsoftheworld.org/bow (Cornell Lab of Ornithology, 2020). Biswas, J. et al. The enigmatic Arunachal macaque: its biogeography, biology and taxonomy in Northeastern India. Am. J. Primatol. 73, 458–473, https://doi.org/10.1002/ajp.20924 (2011). Bocherens, H. et al. Isotopic insight on paleodiet of extinct Pleistocene megafaunal Xenarthrans from Argentina. Gondwana Research 48, 7–14, https://doi.org/10.1016/j.gr.2017.04.003 (2017). Boeskorov, G. G. et al. Woolly rhino discovery in the lower Kolyma River. Quaternary Science Reviews 30, 2262–2272 (2011). Bojarska, K. & Selva, N. Spatial patterns in brown bear Ursus arctos diet: the role of geographical and environmental factors. Mammal Review 42, 120–143 (2012). Bon, R., Rideau, C., Villaret, J.-C. & Joachim, J. Segregation is not only a matter of sex in Alpine ibex, Capra ibex ibex. Anim. Behav. 62, 495–504 (2001). Bond, W. J., Silander, J. A. Jr, Ranaivonasy, J. & Ratsirarson, J. The antiquity of Madagascar's grasslands and the rise of C4 grassy biomes. Journal of Biogeography 35, 1743–1758, https://doi.org/10.1111/j.1365-2699.2008.01923.x (2008). Borgnia, M., Vilá, B. L. & Cassini, M. H. Foraging ecology of Vicuña, Vicugna vicugna, in dry Puna of Argentina. Small Rumin. Res. 88, 44–53 (2010). Bowman, D. M., Murphy, B. P. & McMahon, C.
R. Using carbon isotope analysis of the diet of two introduced Australian megaherbivores to understand Pleistocene megafaunal extinctions. Journal of Biogeography 37, 499–505 (2010). Bradford, M. G., Dennis, A. J. & Westcott, D. A. Diet and dietary preferences of the southern cassowary (Casuarius casuarius) in North Queensland, Australia. Biotropica 40, 338–343 (2008). Bradham, J. L., DeSantis, L. R., Jorge, M. L. S. & Keuroghlian, A. Dietary variability of extinct tayassuids and modern white-lipped peccaries (Tayassu pecari) as inferred from dental microwear and stable isotope analysis. Palaeogeography, Palaeoclimatology, Palaeoecology 499, 93–101 (2018). Bravo-Cuevas, V. M., Rivals, F. & Priego-Vargas, J. Paleoecology (δ13C and δ18O stable isotopes analysis) of a mammalian assemblage from the late Pleistocene of Hidalgo, central Mexico and implications for a better understanding of environmental conditions in temperate North America (18°–36° N Lat.). Palaeogeography, Palaeoclimatology, Palaeoecology 485, 632–643 (2017). Bravo-Cuevas, V. M., Jiménez-Hidalgo, E., Perdoma, M. A. C. & Priego-Vargas, J. Taxonomy and notes on the paleobiology of the late Pleistocene (Rancholabrean) antilocaprids (Mammalia, Artiodactyla, Antilocapridae) from the state of Hidalgo, central Mexico. Revista mexicana de Ciencias Geológicas 30, 601–613 (2013). Buchsbaum, R., Wilson, J. & Valiela, I. Digestibility of plant constituents by Canada Geese and Atlantic Brant. Ecology 67, 386–393 (1986). Buckland, R. & Guy, G. Goose Production Systems, http://www.fao.org/3/y4359e/y4359e00.htm#Contents (2002). Burness, G. P., Diamond, J. & Flannery, T. Dinosaurs, dragons, and dwarfs: the evolution of maximal body size. Proc. Natl. Acad. Sci. USA 98, 14518–14523 (2001). Burton, J., Hedges, S. & Mustari, A. The taxonomic status, distribution and conservation of the lowland anoa Bubalus depressicornis and mountain anoa Bubalus quarlesi. Mammal Review 35, 25–50 (2005). Butler, K., Louys, J. & Travouillon, K. Extending dental mesowear analyses to Australian marsupials, with applications to six Plio-Pleistocene kangaroos from southeast Queensland. Palaeogeography, Palaeoclimatology, Palaeoecology 408, 11–25, https://doi.org/10.1016/j.palaeo.2014.04.024 (2014). Cain, J. W., Avery, M. M., Caldwell, C. A., Abbott, L. B. & Holechek, J. L. Diet composition, quality and overlap of sympatric American pronghorn and gemsbok. Wildlife Biology 17, wlb.00296, https://doi.org/10.2981/wlb.00296 (2017). Campbell, J. L., Eisemann, J. H., Williams, C. V. & Glenn, K. M. Description of the Gastrointestinal Tract of Five Lemur Species: Propithecus tattersalli, Propithecus verreauxi coquereli, Varecia variegata, Hapalemur griseus, and Lemur catta. Am. J. Primatol. 52, 133–142 (2000). Carey, S. P. et al. A diverse Pleistocene marsupial trackway assemblage from the Victorian Volcanic Plains, Australia. Quaternary Science Reviews 30, 591–610 (2011). Cartelle, C. & Hartwig, W. C. A new extinct primate among the Pleistocene megafauna of Bahia, Brazil. Proc. Natl. Acad. Sci. USA 93, 6405–6409, https://doi.org/10.1073/pnas.93.13.6405 (1996). Cassini, G. H., Cerdeño, E., Villafañe, A. L. & Muñoz, N. A. Paleobiology of Santacrucian native ungulates (Meridiungulata: Astrapotheria, Litopterna and Notoungulata). in Early Miocene Paleobiology in Patagonia (Cambridge University Press, 2012). Cerdeño, E. Diversity and evolutionary trends of the Family Rhinocerotidae (Perissodactyla).
Palaeogeography, Palaeoclimatology, Palaeoecology 141, 13–34, https://doi.org/10.1016/S0031-0182(98)00003-0 (1998). Cerling, T. E. & Viehl, K. Seasonal diet changes of the forest hog (Hylochoerus meinertzhageni Thomas) based on the carbon isotopic composition of hair. African Journal of Ecology 42, 88–92 (2004). Chaiyarat, R., Saengpong, S., Tunwattana, W. & Dunriddach, P. Habitat and food utilization by banteng (Bos javanicus d'Alton, 1823) accidentally introduced into the Khao Khieo-Khao Chomphu Wildlife Sanctuary, Thailand. Mammalia 82, 23–34 (2017). Chen, Y. et al. Activity Rhythms of Coexisting Red Serow and Chinese Serow at Mt. Gaoligong as Identified by Camera Traps. Animals 9, 1071 (2019). Choudhury, A. The decline of the wild water buffalo in north-east India. Oryx 28, 70–73 (1994). Christiansen, P. What size were Arctodus simus and Ursus spelaeus (Carnivora: Ursidae)? Annales Zoologici Fennici 36, 93–102 (1999). Christiansen, P. Body size in proboscideans, with notes on elephant metabolism. Zoological Journal of the Linnean Society 140, 523–549 (2004). Chritz, K. L. et al. Palaeobiology of an extinct Ice Age mammal: Stable isotope and cementum analysis of giant deer teeth. Palaeogeography, Palaeoclimatology, Palaeoecology 282, 133–144 (2009). Clarke, S. J., Miller, G. H., Fogel, M. L., Chivas, A. R. & Murray-Wallace, C. V. The amino acid and stable isotope biogeochemistry of elephant bird (Aepyornis) eggshells from southern Madagascar. Quaternary Science Reviews 25, 2343–2356 (2006). Clauss, M. The potential interplay of posture, digestive anatomy, density of ingesta and gravity in mammalian herbivores: Why sloths do not rest upside down. Mammal Review 34, 241–245 (2004). Clauss, M. et al. The maximum attainable body size of herbivorous mammals: morphophysiological constraints on foregut, and adaptations of hindgut fermenters. Oecologia 136, 14–27 (2003). Clauss, M., Hummel, J., Vercammen, F. & Streich, W. J. Observations on the Macroscopic Digestive Anatomy of the Himalayan Tahr (Hemitragus jemlahicus). Anatomia Histologia Embryologia 34, 276–278 (2005). Clench, M. H. & Mathias, J. R. The avian cecum: a review. The Wilson Bulletin, 93–121 (1995). Cobb, M. A., Helling, H. & Pyle, B. Summer diet and feeding location selection patterns of an irrupting mountain goat population on Kodiak Island, Alaska. Biennial Symposium of the Northern Wild Sheep and Goat Council 18, 122–135 (2012). Codron, D., Brink, J. S., Rossouw, L. & Clauss, M. The evolution of ecological specialization in southern African ungulates: competition- or physical environmental turnover? Oikos 117, 344–353, https://doi.org/10.1111/j.2007.0030-1299.16387.x (2008). Codron, D., Clauss, M., Codron, J. & Tütken, T. Within trophic level shifts in collagen–carbonate stable carbon isotope spacing are propagated by diet and digestive physiology in large mammal herbivores. Ecol. Evol. 8, 3983–3995 (2018). Comparatore, V. & Yagueddú, C. Diet of the Greater Rhea (Rhea americana) in an agroecosystem of the Flooding Pampa, Argentina. Ornitologia Neotropical 18, 187–194 (2007). Cooke, S. B. Paleodiet of extinct platyrrhines with emphasis on the Caribbean forms: three-dimensional geometric morphometrics of mandibular second molars. The Anatomical Record 294, 2073–2091, https://doi.org/10.1002/ar.21502 (2011). Coombs, M. C. Large mammalian clawed herbivores: a comparative study. Transactions of the American Philosophical Society 73, 1–96 (1983). Cope, E. D. The extinct rodentia of North America.
The American Naturalist 17, 43–57 (1883). Corona, A., Ubilla Gutierrez, M. & Perea Negreira, D. New records and diet reconstruction using dental microwear analysis for Neolicaphrium recens Frenguelli, 1921 (Litopterna, Proterotheriidae). Andean Geology 46, 153–167 (2019). Craine, J. M., Towne, E. G., Miller, M. & Fierer, N. Climatic warming and the future of bison as grazers. Sci. Rep. 5, 16738 (2015). Cransac, N., Valet, G., Cugnasse, J.-M. & Rech, J. Seasonal diet of mouflon (Ovis gmelini): comparison of population sub-units and sex-age classes. Revue d'écologie (1997). Creese, S., Davies, S. J. & Bowen, B. J. Comparative dietary analysis of the black-flanked rock-wallaby (Petrogale lateralis lateralis), the euro (Macropus robustus erubescens) and the feral goat (Capra hircus) from Cape Range National Park, Western Australia. Aust. Mammal. 41, 220–230 (2019). Croitor, R. Systematical position and paleoecology of the endemic deer Megaceroides algericus Lydekker, 1890 (Cervidae, Mammalia) from the late Pleistocene-early Holocene of North Africa. Geobios 49, 265–283, https://doi.org/10.1016/j.geobios.2016.05.002 (2016). Croitor, R., Bonifay, M.-F. & Brugal, J.-P. Systematic revision of the endemic deer Haploidoceros n. gen. mediterraneus (Bonifay, 1967) (Mammalia, Cervidae) from the Middle Pleistocene of Southern France. Paläontologische Zeitschrift 82, 325–346 (2008). Cromsigt, J. P. G. M., Kemp, Y. J. M., Rodrigues, E. & Kivit, H. Rewilding Europe's large grazer community: how functionally diverse are the diets of European bison, cattle, and horses? Restoration Ecology 26, 891–899 (2017). Crowley, B. E. & Godfrey, L. R. in Leaping Ahead 173–182 (Springer, 2012). Crowley, B. E. & Samonds, K. E. Stable carbon isotope values confirm a recent increase in grasslands in northwestern Madagascar. The Holocene 23, 1066–1073, https://doi.org/10.1177/0959683613484675 (2013). Crowley, B. E., Godfrey, L. R. & Irwin, M. T. A glance to the past: subfossils, stable isotopes, seed dispersal, and lemur species loss in southern Madagascar. Am. J. Primatol. 73, 25–37 (2011). Cunningham, P. L. & Wacher, T. Changes in the distribution, abundance and status of Arabian Sand Gazelle (Gazella subgutturosa marica) in Saudi Arabia: a review. Mammalia 73, 203–210 (2009). Czerwonogora, A., Fariña, R. A. & Tonni, E. P. Diet and isotopes of Late Pleistocene ground sloths: first results for Lestodon and Glossotherium (Xenarthra, Tardigrada). Neues Jahrbuch für Geologie und Paläontologie - Abhandlungen 262, 257–266, https://doi.org/10.1127/0077-7749/2011/0197 (2011). Domanov, T. A. Musk deer Moschus moschiferus nutrition in the Tukuringra Mountain Range, Russian Far East, during the snow season. Russian Journal of Theriology 12, 91–97 (2013). Dantas, M. A. T. & Cozzuol, M. A. in Marine Isotope Stage 3 in Southern South America, 60 KA B.P.-30 KA B.P. (eds Germán Mariano Gasparini, Jorge Rabassa, Cecilia Deschamps, & Eduardo Pedro Tonni) 207–226 (Springer International Publishing, 2016). Dantas, M. A. T. et al. Paleoecology and radiocarbon dating of the Pleistocene megafauna of the Brazilian Intertropical Region. Quaternary Research 79, 61–65, https://doi.org/10.1016/j.yqres.2012.09.006 (2013). Dantas, M. A. T. et al. Isotopic paleoecology (δ13C) of mesoherbivores from Late Pleistocene of Gruta da Marota, Andaraí, Bahia, Brazil. Hist. Biol., 1–9 (2019). Dantas, M. A. T. et al.
Isotopic paleoecology (δ13C) from mammals from IUIU/BA and paleoenvironmental reconstruction (δ13C, δ18O) for the Brazilian intertropical region through the late Pleistocene. Quaternary Science Reviews 242, 106469 (2020). Davids, A. H. Estimation of genetic distances and heterosis in three ostrich (Struthio camelus) breeds for the improvement of productivity. Thesis, University of Stellenbosch (2011). Davies, P. & Lister, A. M. in The World of Elephants: International Congress 479–480 (Rome, 2001). Dawson, L. An ecophysiological approach to the extinction of large marsupial herbivores in middle and late Pleistocene Australia. Alcheringa: An Australasian Journal of Palaeontology 30, 89–114, https://doi.org/10.1080/03115510609506857 (2006). Dawson, T. J. et al. in Fauna of Australia (eds D. W. Walton & B. J. Richardson) (AGPS, Canberra, 1989). De Iuliis, G., Bargo, M. S. & Vizcaíno, S. F. Variation in skull morphology and mastication in the fossil giant armadillos Pampatherium spp. and allied genera (Mammalia: Xenarthra: Pampatheriidae), with comments on their systematics and distribution. Journal of Vertebrate Paleontology 20, 743–754, https://doi.org/10.1671/0272-4634(2000)020[0743:vismam]2.0.co;2 (2000). de Oliveira, A. M. & Santos, C. M. D. Functional morphology and paleoecology of Pilosa (Xenarthra, Mammalia) based on a two-dimensional geometric morphometrics study of the humerus. J. Morphol. 279, 1455–1467 (2018). de Oliveira, K. et al. Fantastic beasts and what they ate: Revealing feeding habits and ecological niche of late Quaternary Macraucheniidae from South America. Quaternary Science Reviews 231, 106178 (2020). DeSantis, L. R. G., Field, J. H., Wroe, S. & Dodson, J. R. Dietary responses of Sahul (Pleistocene Australia–New Guinea) megafauna to climate and environmental change. Paleobiology 43, 181–195, https://doi.org/10.1017/pab.2016.50 (2017). Desbiez, A. L. J., Santos, S. A., Alvarez, J. M. & Tomas, W. M. Forage use in domestic cattle (Bos indicus), capybara (Hydrochoerus hydrochaeris) and pampas deer (Ozotoceros bezoarticus) in a seasonal Neotropical wetland. Mammalian Biology 76, 351–357 (2011). Dierenfeld, E., Hintz, H., Robertson, J., Van Soest, P. & Oftedal, O. Utilization of bamboo by the giant panda. The Journal of Nutrition 112, 636–641 (1982). Djagoun, C., Codron, D., Sealy, J., Mensah, G. & Sinsin, B. Stable carbon isotope analysis of the diets of West African bovids in Pendjari Biosphere Reserve, Northern Benin. African Journal of Wildlife Research 43, 33–43 (2013). Domingo, L., Prado, J. L. & Alberdi, M. T. The effect of paleoecology and paleobiogeography on stable isotopes of Quaternary mammals from South America. Quaternary Science Reviews 55, 103–113 (2012). Dong, W. et al. Late Pleistocene mammalian fauna from Wulanmulan Paleolithic Site, Nei Mongol, China. Quaternary International 347, 139–147 (2014). Doody, J. S., Sims, R. A. & Letnic, M. Environmental Manipulation to Avoid a Unique Predator: Drinking Hole Excavation in the Agile Wallaby, Macropus agilis. Ethology 113, 128–136, https://doi.org/10.1111/j.1439-0310.2006.01298.x (2007). Dookia, S. & Jakher, G. R. Food and Feeding Habit of Indian Gazelle (Gazella bennettii), in the Thar Desert of Rajasthan. The Indian Forester 133 (2007). Downer, C. C. Observations on the diet and habitat of the mountain tapir (Tapirus pinchaque). J. Zool. 254, 279–291 (2001). Dunning, J. B. Jr CRC handbook of avian body masses. (CRC Press, 2007). Dunstan, H., Florentine, S.
K., Calviño-Cancela, M., Westbrooke, M. E. & Palmer, G. C. Dietary characteristics of Emus (Dromaius novaehollandiae) in semi-arid New South Wales, Australia, and dispersal and germination of ingested seeds. Emu-Austral Ornithology 113, 168–176 (2013). Endo, Y., Takada, H. & Takatsuki, S. Comparison of the Food Habits of the Sika Deer (Cervus nippon), the Japanese Serow (Capricornis crispus), and the Wild Boar (Sus scrofa), Sympatric Herbivorous Mammals from Mt. Asama, Central Japan. Mammal Study 42, 131–140 (2017). Espunyes, J. et al. Seasonal diet composition of Pyrenean chamois is mainly shaped by primary production waves. PLoS One 14, e0210819 (2019). Evans, M. C., Macgregor, C. & Jarman, P. J. Diet and feeding selectivity of common wombats. Wildlife Research 33, 321–330 (2006). Faith, J. T. Late Quaternary dietary shifts of the Cape grysbok (Raphicerus melanotis) in southern Africa. Quaternary Research 75, 159–165 (2011). Faith, J. T. Late Pleistocene and Holocene mammal extinctions on continental Africa. Earth-Science Reviews 128, 105–121 (2014). Faith, J. T. & Behrensmeyer, A. K. Climate change and faunal turnover: testing the mechanics of the turnover-pulse hypothesis with South African fossil data. Paleobiology 39, 609–627 (2013). Faith, J. T. & Thompson, J. C. Fossil evidence for seasonal calving and migration of extinct blue antelope (Hippotragus leucophaeus) in southern Africa. Journal of Biogeography 40, 2108–2118 (2013). Faith, J. T. et al. New perspectives on middle Pleistocene change in the large mammal faunas of East Africa: Damaliscus hypsodon sp. nov. (Mammalia, Artiodactyla) from Lainyamok, Kenya. Palaeogeography, Palaeoclimatology, Palaeoecology 361-362, 84–93, https://doi.org/10.1016/j.palaeo.2012.08.005 (2012). Fanelli, F., Palombo, M. R., Pillola, G. L. & Ibba, A. Tracks and trackways of “Praemegaceros” cazioti (Depéret, 1897) (Artiodactyla, Cervidae) in Pleistocene coastal deposits from Sardinia (Western Mediterranean, Italy). Bollettino della Società Paleontologica Italiana 46, 47–54 (2007). Farhadinia, M. S. et al. Goitered Gazelle, Gazella subgutturosa: its habitat preference and conservation needs in Miandasht Wildlife Refuge, north-eastern Iran (Mammalia: Artiodactyla). Zoology in the Middle East 46, 9–18 (2009). Fariña, R. A., Vizcaíno, S. F. & Bargo, M. S. Body mass estimations in Lujanian (late Pleistocene-early Holocene of South America) mammal megafauna. Mastozoología Neotropical 5, 87–108 (1998). Feranec, R. S. Stable isotopes, hypsodonty, and the paleodiet of Hemiauchenia (Mammalia: Camelidae): a morphological specialization creating ecological generalization. Paleobiology 29, 230–242 (2003). Feranec, R., García, N., Díez, J. & Arsuaga, J. Understanding the ecology of mammalian carnivorans and herbivores from Valdegoba cave (Burgos, northern Spain) through stable isotope analysis. Palaeogeography, Palaeoclimatology, Palaeoecology 297, 263–272 (2010). Fernández-Olalla, M., Martínez-Jauregui, M., Perea, R., Velamazán, M. & San Miguel, A. Threat or opportunity? Browsing preferences and potential impact of Ammotragus lervia on woody plants of a Mediterranean protected area. J. Arid Environ. 129, 9–15, https://doi.org/10.1016/j.jaridenv.2016.02.003 (2016). Ferretti, M. P. The dwarf elephant Palaeoloxodon mnaidriensis from Puntali Cave, Carini (Sicily; late Middle Pleistocene): Anatomy, systematics and phylogenetic relationships. Quaternary International 182, 90–108, https://doi.org/10.1016/j.quaint.2007.11.003 (2008). Figueirido, B. & Soibelzon, L. H.
Inferring palaeoecology in extinct tremarctine bears (Carnivora, Ursidae) using geometric morphometrics. Lethaia 43, 209–222 (2010). Flannery, T. F. Pleistocene faunal loss: implications of the aftershock for Australia's past and future. Archaeology in Oceania 25, 45–55 (1990). Flannery, T. F. Taxonomy of Dendrolagus goodfellowi (Macropodidae: Marsupialia) with description of a new subspecies. Records of the Australian Museum 45, 33–42, https://doi.org/10.3853/j.0067-1975.45.1993.128 (1993). Flannery, T. F. The Pleistocene mammal fauna of Kelangurr Cave, central montane Irian Jaya, Indonesia. Records of the Western Australian Museum 57, 341–350 (1999). Flannery, T. F., Martin, R. & Szalay, A. Tree kangaroos: a curious natural history. (Reed Books, 1996). Fleagle, J. G. & Gilbert, C. C. Elwyn Simons: a search for origins. (Springer Science & Business Media, 2007). Foerster, C. R. & Vaughan, C. Diet and foraging behavior of a female Baird's tapir (Tapirus bairdi) in a Costa Rican lowland rainforest. Cuadernos de Investigación UNED 7, 259–267 (2015). Fooden, J. Systematic review of the Barbary Macaque, Macaca sylvanus (Linnaeus, 1758). Fieldiana Zoology 113, 1–58 (2007). Forasiepi, A. M. et al. Exceptional skull of Huayqueriana (Mammalia, Litopterna, Macraucheniidae) from the late Miocene of Argentina: anatomy, systematics, and paleobiological implications. Bulletin of the American Museum of Natural History 2016, 1–76 (2016). França, L. d. M. et al. Chronology and ancient feeding ecology of two upper Pleistocene megamammals from the Brazilian Intertropical Region. Quaternary Science Reviews 99, 78–83, https://doi.org/10.1016/j.quascirev.2014.04.028 (2014). França, L. d. M. et al. Review of feeding ecology data of Late Pleistocene mammalian herbivores from South America and discussions on niche differentiation. Earth-Science Reviews 140, 158–165, https://doi.org/10.1016/j.earscirev.2014.10.006 (2015). France, C. A., Zelanko, P. M., Kaufman, A. J. & Holtz, T. R. Carbon and nitrogen isotopic analysis of Pleistocene mammals from the Saltville Quarry (Virginia, USA): Implications for trophic relationships. Palaeogeography, Palaeoclimatology, Palaeoecology 249, 271–282 (2007). Fuller, B. T. et al. Pleistocene paleoecology and feeding behavior of terrestrial vertebrates recorded in a pre-LGM asphaltic deposit at Rancho La Brea, California. Palaeogeography, Palaeoclimatology, Palaeoecology 537, 109383, https://doi.org/10.1016/j.palaeo.2019.109383 (2020). Furley, C. W. Potential use of gazelles for game ranching in the Arabian Peninsula. Lecture delivered at the Agro-Gulf Exhibition and Conference, Abu Dhabi (1983). Gad, S. D. & Shyama, S. K. Diet composition and quality in Indian bison (Bos gaurus) based on fecal analysis. Zoolog. Sci. 28, 264–267 (2011). Gagnon, M. & Chew, A. E. Dietary preferences in extant African Bovidae. J. Mammal. 81, 490–511 (2000). García, A., Carretero, E. M. & Dacar, M. A. Presence of Hippidion at two sites of western Argentina: Diet composition and contribution to the study of the extinction of Pleistocene megafauna. Quaternary International 180, 22–29 (2008). García-Rangel, S. Andean bear Tremarctos ornatus natural history and conservation. Mammal Review 42, 85–119 (2012). Gardner, P. C., Ridge, S., Wern, J. G. E. & Goossens, B. The influence of logging upon the foraging behaviour and diet of the endangered Bornean banteng. Mammalia 83, 519–529 (2019). Garitano-Zavala, A., Nadal, J. & Ávila, P.
The feeding ecology and digestive tract morphometry of two sympatric tinamous of the high plateau of the Bolivian Andes: the Ornate Tinamou (Nothoprocta ornata) and the Darwin’s Nothura (Nothura darwinii). Ornitología Neotropical 14, 173–194 (2003). Garrett, N. D. et al. Stable isotope paleoecology of Late Pleistocene Middle Stone Age humans from the Lake Victoria basin, Kenya. J. Hum. Evol. 82, 1–14 (2015). Gasparini, G. M., Kerber, L. & Oliveira, E. V. Catagonus stenocephalus (Lund in Reinhardt, 1880)(Mammalia, Tayassuidae) in the Touro Passo Formation (Late Pleistocene), Rio Grande do Sul, Brazil. Taxonomic and palaeoenvironmental comments. Neues Jahrbuch für Geologie und Paläontologie-Abhandlungen 254, 261–273 (2009). Gasparini, G. M., Soibelzon, E., Zurita, A. E. & Miño-Boilini, A. R. A review of the Quaternary Tayassuidae (Mammalia, Artiodactyla) from the Tarija Valley, Bolivia. Alcheringa: An Australasian Journal of Palaeontology 34, 7–20, https://doi.org/10.1080/03115510903277717 (2010). Gautier-Hion, A. & Gautier, J.-P. Cephalophus ogilbyi crusalbum Grubb 1978, described from coastal Gabon, is quite common in the Forêt des Abeilles, Central Gabon. Revue d’Écologie 2 (1994). Gautier-Hion, A., Emmons, L. H. & Dubost, G. A comparison of the diets of three major groups of primary consumers of Gabon (primates, squirrels and ruminants). Oecologia 45, 182–189 (1980). Gavashelishvili, A. Habitat selection by East Caucasian tur (Capra cylindricornis). Biol. Conserv. 120, 391–398 (2004). Gebremedhin, B. et al. DNA Metabarcoding Reveals Diet Overlap between the Endangered Walia Ibex and Domestic Goats - Implications for Conservation. PLoS One 11, e0159133, https://doi.org/10.1371/journal.pone.0159133 (2016). Geist, V. Deer of the world: their evolution, behaviour, and ecology. (Stackpole books, 1998). Ghosh, A., Thakur, M., Singh, S. K., Sharma, L. K. & Chandra, K. Gut microbiota suggests dependency of Arunachal Macaque (Macaca munzala) on anthropogenic food in Western Arunachal Pradesh, Northeastern India: Preliminary findings. Global Ecology and Conservation, e01030 (2020). Giles, F. H. The riddle of Cervus schomburgki. Journal of the Siam Society Natural History Supplement 10, 1–34 (1937). Gill, F. B. Ornithology. (W.H. Freeman and Company, 2001). Gillette, D. D. & Ray, C. E. Glyptodonts of North America. Vol. 40 (1981). Gingerich, P. D. Land-to-sea transition in early whales: evolution of Eocene Archaeoceti (Cetacea) in relation to skeletal proportions and locomotion of living semiaquatic mammals. Paleobiology 29, 429–454, 10.1666/0094-8373(2003)029<0429:LTIEWE>2.0.CO;2 (2003). Giri, S., Aryal, A., Koirala, R., Adhikari, B. & Raubenheimer, D. Feeding ecology and distribution of Himalayan serow (Capricornis thar) in Annapurna Conservation Area, Nepal. World Journal of Zoology 6, 80–85 (2011). Godfrey, L. R. et al. Dental use wear in extinct lemurs: evidence of diet and niche differentiation. J. Hum. Evol. 47, 145–169, https://doi.org/10.1016/j.jhevol.2004.06.003 (2004). González-Guarda, E. et al. Late Pleistocene ecological, environmental and climatic reconstruction based on megafauna stable isotopes from northwestern Chilean Patagonia. Quaternary Science Reviews 170, 188–202 (2017). Gazzolo, C. & Barrio, J. Feeding ecology of taruca (Hippocamelus antisensis) populations during the rainy and dry seasons in Central Peru. International Journal of Zoology 2016 (2016). Grass, A. D. 
Inferring lifestyle and locomotor habits of extinct sloths through scapula morphology and implications for convergent evolution in extant sloths PhD thesis, Graduate College of the University of Iowa, (2014). Gray, G. G. & Simpson, C. D. Ammotragus lervia. Mammalian Species 144, 1–7 (1980). Green, J. L. Dental microwear in the orthodentine of the Xenarthra (Mammalia) and its use in reconstructing the palaeodiet of extinct taxa: the case study of Nothrotheriops shastensis (Xenarthra, Tardigrada, Nothrotheriidae). Zoological Journal of the Linnean Society 156, 201–222 (2009). Green, J. L. & Kalthoff, D. C. Xenarthran dental microstructure and dental microwear analyses, with new data for Megatherium americanum (Megatheriidae). J. Mammal. 96, 645–657 (2015). Green, K., Davis, N. & Robinson, W. The diet of the common wombat (Vombatus ursinus) above the winter snowline in the decade following a wildfire. Aust. Mammal. 37, 146–156 (2015). Green, J. L., DeSantis, L. R. G. & Smith, G. J. Regional variation in the browsing diet of Pleistocene Mammut americanum (Mammalia, Proboscidea) as recorded by dental microwear textures. Palaeogeography, Palaeoclimatology, Palaeoecology 487, 59–70, https://doi.org/10.1016/j.palaeo.2017.08.019 (2017). Grignolio, S., Parrini, F., Bassano, B., Luccarini, S. & Apollonio, M. Habitat selection in adult males of Alpine ibex. Capra ibex ibex. Folia Zoologica-Praha 52, 113–120 (2003). Gröcke, D. R. Distribution of C3 and C4 plants in the late Pleistocene of South Australia recorded by isotope biogeochemistry of collagen in megafauna. Australian Journal of Botany 45, 607–617 (1997). Gröcke, D. & Bocherens, H. Isotopic investigation of an Australian island environment. Comptes Rendus de l’Academie des Sciences. Serie 2. Sciences de la Terre et des Planetes 322, 713–719 (1996). Groves, C. P. & Leslie, D. M. Jr Rhinoceros sondaicus (Perissodactyla: Rhinocerotidae). Mammalian Species 43, 190–208 (2011). Guerrero-Cardenas, I., Gallina, S., del Rio, P. C. M., Cardenas, S. A. & Orduña, R. R. Composición y selección de la dieta del borrego cimarrón (Ovis canadensis) en la Sierra El Mechudo, Baja California Sur, México. Therya (2016). Hadjisterkotis, E. & Reese, D. S. Considerations on the potential use of cliffs and caves by the extinct endemic late pleistocene hippopotami and elephants of Cyprus. European Journal of Wildlife Research 54, 122–133 (2008). Haleem, A. & Ilyas, O. Food and Feeding Habits of Gaur (Bos gaurus) in Highlands of Central India: A Case Study at Pench Tiger Reserve, Madhya Pradesh (India). Zoolog. Sci. 35, 57–68 (2018). Halenar, L. B. Reconstructing the Locomotor Repertoire of Protopithecus brasiliensis. II. Forelimb Morphology. The Anatomical Record 294, 2048–2063, https://doi.org/10.1002/ar.21499 (2011). Halenar, L. B. Paleobiology of Protopithecus brasiliensis, a plus-size Pleistocene platyrrhine from Brazil, City University of New York, (2012). Hamilton, W. J. III, Buskirk, R. & Buskirk, W. H. Intersexual dominance and differential mortality of Gemsbok Oryx gazella at Namib Desert waterholes. Madoqua 10, 5–19 (1977). Hansen, R. M. Shasta ground sloth food habits, Rampart Cave, Arizona. Paleobiology 4, 302–319 (1978). Hansford, J. P. & Turvey, S. T. Unexpected diversity within the extinct elephant birds (Aves: Aepyornithidae) and a new identity for the world’s largest bird. Royal Society open science 5, 181295 (2018). Harris, J. M. & Cerling, T. E. Dietary adaptations of extant and Neogene African suids. J. Zool. 256, 45–54 (2002). Hartwig, W. C. 
& Cartelle, C. A complete skeleton of the giant South American primate Protopithecus. Nature 381, 307–311 (1996). Heinen, J. H., van Loon, E. E., Hansen, D. M. & Kissling, W. D. Extinction‐driven changes in frugivore communities on oceanic islands. Ecography 41, 1245–1255 (2018). Hempson, G. P., Archibald, S. & Bond, W. J. A continent-wide assessment of the form and intensity of large mammal herbivory in Africa. Science 350, 1056–1061 (2015). Henry, O., Feer, F. & Sabatier, D. Diet of the lowland tapir (Tapirus terrestris L.) in French Guiana. Biotropica 32, 364–368 (2000). Herd, R. M. & Dawson, T. J. Fiber digestion in the emu, Dromaius novaehollandiae, a large bird with a simple gut and high rates of passage. Physiol. Zool. 57, 70–84 (1984). Herridge, V. L. & Lister, A. M. Extreme insular dwarfism evolved in a mammoth. Proc. R. Soc. B. 279, 3193–3200 (2012). Heywood, J. Functional anatomy of bovid upper molar occlusal surfaces with respect to diet. J. Zool. 281, 1–11 (2010). Hofreiter, M. et al. A molecular analysis of ground sloth diet through the last glaciation. Mol. Ecol. 9, 1975–1984 (2000). Hollis, C., Robertshaw, J. & Harden, R. Ecology of the swamp wallaby (Wallabia-Bicolor) in northeastern New-South-Wales. 1. Diet. Wildlife Research 13, 355–365 (1986). Hope, G. & Flannery, T. A preliminary report of changing Quaternary mammal faunas in subalpine New Guinea. Quaternary Research 40, 117–126 (1993). Hou, R. et al. Seasonal variation in diet and nutrition of the northern‐most population of Rhinopithecus roxellana. Am. J. Primatol. 80, e22755 (2018). Huffman, B. Rucervus schomburgki. Ultimate Ungulate. http://www.ultimateungulate.com/Artiodactyla/Rucervus_schomburgki.html (2020). Hullot, M., Antoine, P.-O., Ballatore, M. & Merceron, G. Dental microwear textures and dietary preferences of extant rhinoceroses (Perissodactyla, Mammalia). Mammal Research 64, 397–409 (2019). Hume, J. P. The history of the Dodo Raphus cucullatus and the penguin of Mauritius. Hist. Biol. 18, 69–93 (2006). Hummel, J. et al. Fluid and particle retention in the digestive tract of the addax antelope (Addax nasomaculatus)—Adaptations of a grazing desert ruminant. Comparative Biochemistry and Physiology Part A: Molecular & Integrative Physiology 149, 142–149 (2008). Iribarren, C. & Kotler, B. P. Foraging patterns of habitat use reveal landscape of fear of Nubian ibex Capra nubiana. Wildlife Biology 18, 194–201 (2012). Ismail, K., Kamal, K., Plath, M. & Wronski, T. Effects of an exceptional drought on daily activity patterns, reproductive behaviour, and reproductive success of reintroduced Arabian oryx (Oryx leucoryx). J. Arid Environ. 75, 125–131 (2011). IUCN Redlist. The International Union for the Conservation of Nature 2018. Iwaniuk, A. N., Pellis, S. M. & Whishaw, I. Q. The relative importance of body size, phylogeny, locomotion, and diet in the evolution of forelimb dexterity in fissiped carnivores (Carnivora). Can. J. Zool. 78, 1110–1125 (2000). Iwase, A., Hashizume, J., Izuho, M., Takahashi, K. & Sato, H. Timing of megafaunal extinction in the late Late Pleistocene on the Japanese Archipelago. Quaternary International 255, 114–124, https://doi.org/10.1016/j.quaint.2011.03.029 (2012). Jackson, J. The annual diet of the fallow deer (Dama dama) in the New Forest, Hampshire, as determined by rumen content analysis. J. Zool. 181, 465–473 (1977). Janis, C. M., Napoli, J. G., Billingham, C. & Martín-Serra, A. Proximal humerus morphology indicates divergent patterns of locomotion in extinct giant kangaroos. J. 
Mamm. Evol., 1–21 (2020). Jankowski, N. R., Gully, G. A., Jacobs, Z., Roberts, R. G. & Prideaux, G. J. A late Quaternary vertebrate deposit in Kudjal Yolgah Cave, south‐western Australia: refining regional late Pleistocene extinctions. Journal of Quaternary Science 31, 538–550 (2016). Janssen, R. et al. Tooth enamel stable isotopes of Holocene and Pleistocene fossil fauna reveal glacial and interglacial paleoenvironments of hominins in Indonesia. Quaternary Science Reviews 144, 145–154 (2016). Al-Jassim, R. & Hogan, J. in Proc. 3rd ISOCARD Conference. Keynote presentations. 29th January–1st February. 75–86. Jhala, Y. V. & Isvaran, K. in The Ecology of Large Herbivores in South and Southeast Asia 151–176 (Springer, 2016). Jiménez-Hidalgo, E. et al. Species diversity and paleoecology of Late Pleistocene horses from southern Mexico. Frontiers in Ecology and Evolution 7, 394 (2019). Johnson, C. Australia’s mammal extinctions: a 50,000-year history. (Cambridge University Press, 2006). Johnson, C. N. & Prideaux, G. J. Extinctions of herbivorous mammals in the late Pleistocene of Australia in relation to their feeding ecology: no evidence for environmental change as cause of extinction. Austral Ecol. 29, 553–557 (2004). Jones, T. et al. The Highland Mangabey Lophocebus kipunji: A New Species of African Monkey. Science 308, 1161–1164, https://doi.org/10.1126/science.1109191 (2005). Jones, K. E. et al. PanTHERIA: a species‐level database of life history, ecology, and geography of extant and recently extinct mammals: Ecological Archives E090‐184. Ecology 90, 2648–2648 (2009). Jones, D. B. & DeSantis, L. R. Dietary ecology of the extinct cave bear: evidence of omnivory as inferred from dental microwear textures. Acta Palaeontologica Polonica 61, 735–742 (2016). Jungers, W. L., Godfrey, L. R., Simons, E. L. & Chatrath, P. S. Phalangeal curvature and positional behavior in extinct sloth lemurs (Primates, Palaeopropithecidae). Proc. Natl. Acad. Sci. USA 94, 11998–12001 (1997). Jungers, W. L. et al. The hands and feet of Archaeolemur: metrical affinities and their functional significance. J. Hum. Evol. 49, 36–55, https://doi.org/10.1016/j.jhevol.2005.03.001 (2005). Kaczensky, P. et al. Stable isotopes reveal diet shift from pre-extinction to reintroduced Przewalski’s horses. Sci. Rep. 7, 5950, https://doi.org/10.1038/s41598-017-05329-6 (2017). Kartzinel, T. R. et al. DNA metabarcoding illuminates dietary niche partitioning by African large herbivores. Proc. Natl. Acad. Sci. U. S. A. 112, 8019–8024, https://doi.org/10.1073/pnas.1503283112 (2015). Kelly, E. M. & Sears, K. E. Limb specialization in living marsupial and eutherian mammals: constraints on mammalian limb evolution. J. Mammal. 92, 1038–1049 (2011). Kelt, D. A. & Meyer, M. D. Body size frequency distributions in African mammals are bimodal at all spatial scales. Glob. Ecol. Biogeogr. 18, 19–29, https://doi.org/10.1111/j.1466-8238.2008.00422.x (2008). Khadka, K. K., Singh, N., Magar, K. T. & James, D. A. Dietary composition, breadth, and overlap between seasonally sympatric Himalayan musk deer and livestock: Conservation implications. Journal for Nature Conservation 38, 30–36 (2017). Kim, B. J., Lee, N. S. & Lee, S. D. Feeding diets of the Korean water deer (Hydropotes inermis argyropus) based on a 202 bp rbcL sequence analysis. Conservation Genetics 12, 851–856 (2011). Kim, D. B., Koo, K. A., Kim, H. H., Hwang, G. Y. & Kong, W. S. 
Reconstruction of the habitat range suitable for long-tailed goral (Naemorhedus caudatus) using fossils from the Paleolithic sites. Quaternary International 519, 101–112 (2019). Koch, P. L. & Barnosky, A. D. Late Quaternary extinctions: state of the debate. Annu. Rev. Ecol. Evol. Syst. 37 (2006). Köhler, M. & Moyà-Solà, S. Reduction of brain and sense organs in the fossil insular bovid Myotragus. Brain. Behav. Evol. 63, 125–140 (2004). Kohn, M. J. & McKay, M. P. Paleoecology of late Pleistocene–Holocene faunas of eastern and central Wyoming, USA, with implications for LGM climate models. Palaeogeography, Palaeoclimatology, Palaeoecology 326–328, 42–53 (2012). Kohn, M. J., McKay, M. P. & Knight, J. L. Dining in the Pleistocene—who’s on the menu? Geology 33, 649–652 (2005). Koike, S., Nakashita, R., Naganawa, K., Koyama, M. & Tamura, A. Changes in diet of a small, isolated bear population over time. J. Mammal. 94, 361–368, https://doi.org/10.1644/11-mamm-a-403.1 (2013). Kosintsev, P. et al. Evolution and extinction of the giant rhinoceros Elasmotherium sibiricum sheds light on late Quaternary megafaunal extinctions. Nature Ecology & Evolution 3, 31–38 (2019). Kowalczyk, R. et al. Influence of management practices on large herbivore diet—Case of European bison in Białowieża Primeval Forest (Poland). For. Ecol. Manage. 261, 821–828 (2011). Kram, R. & Dawson, T. J. Energetics and biomechanics of locomotion by red kangaroos (Macropus rufus). Comparative Biochemistry and Physiology Part B: Biochemistry and Molecular Biology 120, 41–49 (1998). Krishna, Y. C., Clyne, P. J., Krishnaswamy, J. & Kumar, N. S. Distributional and ecological review of the four horned antelope, Tetracerus quadricornis. Mammalia 73, 1–6 (2009). Kropf, M., Mead, J. I. & Scott Anderson, R. Dung, diet, and the paleoenvironment of the extinct shrub-ox (Euceratherium Collinum) on the Colorado Plateau, USA. Quaternary Research 67, 143–151, https://doi.org/10.1016/j.yqres.2006.10.002 (2007). Kubo, M. O., Yamada, E., Fujita, M. & Oshiro, I. Paleoecological reconstruction of Late Pleistocene deer from the Ryukyu Islands, Japan: Combined evidence of mesowear and stable isotope analyses. Palaeogeography, Palaeoclimatology, Palaeoecology 435, 159–166 (2015). Kumar, R. S., Mishra, C. & Sinha, A. Foraging ecology and time-activity budget of the Arunachal macaque Macaca munzala – A preliminary study. Curr. Sci. 93, 532–539 (2007). Kuzmin, Y. V. Extinction of the woolly mammoth (Mammuthus primigenius) and woolly rhinoceros (Coelodonta antiquitatis) in Eurasia: review of chronological and environmental issues. Boreas 39, 247–261 (2010). Lambert, J. E. Primate digestion: interactions among anatomy, physiology, and feeding ecology. Evolutionary Anthropology 7, 8–20 (1998). Lamoot, I., Callebaut, J., Demeulenaere, E., Vandenberghe, C. & Hoffmann, M. Foraging behaviour of donkeys grazing in a coastal dune area in temperate climate conditions. Appl. Anim. Behav. Sci. 92, 93–112 (2005). Loponte, D. M. & Corriale, M. J. Isotopic values of diet of Blastocerus dichotomus (marsh deer) in Paraná Basin, South America. Journal of Archaeological Science 40, 1382–1388 (2013). Larramendi, A. Shoulder height, body mass, and shape of proboscideans. Acta Palaeontologica Polonica 61, 537–574 (2015). Latham, A. D. M. et al. A refined model of body mass and population density in flightless birds reconciles extreme bimodal population estimates for extinct moa. Ecography 43, 353–364 (2020). Latrubesse, E. M. et al. 
The Late Miocene paleogeography of the Amazon Basin and the evolution of the Amazon River system. Earth-Science Reviews 99, 99–124, https://doi.org/10.1016/j.earscirev.2010.02.005 (2010). Law, A., Jones, K. C. & Willby, N. J. Medium vs. short-term effects of herbivory by Eurasian beaver on aquatic vegetation. Aquat. Bot. 116, 27–34 (2014). Lazagabaster, I. A., Rowan, J., Kamilar, J. M. & Reed, K. E. Evolution of craniodental correlates of diet in African Bovidae. J. Mamm. Evol. 23, 385–396 (2016). Lazagabaster, I. A. et al. Fossil Suidae (Mammalia, Artiodactyla) from Lee Adoyta, Ledi-Geraru, lower Awash Valley, Ethiopia: Implications for late Pliocene turnover and paleoecology. Palaeogeography, Palaeoclimatology, Palaeoecology 504, 186–200 (2018). Lehmann, D. Dietary and spatial strategies of gemsbok (Oryx g. gazella) and springbok (Antidorcas marsupialis) in response to drought in the desert environment of the Kunene region, Namibia PhD thesis, Freie Universität Berlin (2015). Leslie, D. M. Boselaphus tragocamelus (Artiodactyla: Bovidae). Mammalian Species, 1–16 (2008). Leslie, D. M. Jr Procapra picticaudata (Artiodactyla: Bovidae). Mammalian Species 42, 138–148 (2010). Leslie, D. M. & Schaller, G. B. Pantholops hodgsonii (Artiodactyla: Bovidae). Mammalian Species, 1–13 (2008). Leslie, D. M. & Schaller, G. B. Bos grunniens and Bos mutus (Artiodactyla: Bovidae). Mammalian species, 1–17 (2009). Leslie, D. M. Jr, Groves, C. P. & Abramov, A. V. Procapra przewalskii (Artiodactyla: Bovidae). Mammalian Species 42, 124–137 (2010). Leslie, D. M. Jr, Lee, D. N. & Dolman, R. W. Elaphodus cephalophus (Artiodactyla: Cervidae). Mammalian Species 45, 80–91 (2013). Leus, K., Goodall, G. P. & Macdonald, A. A. Anatomy and histology of the babirusa (Babyrousa babyrussa) stomach. Comptes Rendus de l’Académie des Sciences - Series III - Sciences de la Vie 322, 1081–1092, https://doi.org/10.1016/S0764-4469(99)00107-9 (1999). Li, Y., Yu, Y.-Q. & Shi, L. Foraging and bedding site selection by Asiatic ibex (Capra sibirica) during summer in Central Tianshan Mountains. Pakistan Journal of Zoology 47, 1–6 (2015). Li, B., Xu, W., Blank, D. A., Wang, M. & Yang, W. Diet characteristics of wild sheep (Ovis ammon darwini) in the Mengluoke Mountains, Xinjiang. China Journal of Arid Land (2018). Liang, X., Kang, A. & Pettorelli, N. Understanding habitat selection of the Vulnerable wild yak Bos mutus on the Tibetan Plateau. Oryx 51, 361–369 (2017). Lister, A. M. & Stuart, A. J. The extinction of the giant deer Megaloceros giganteus (Blumenbach): New radiocarbon evidence. Quaternary International 500, 185–203 (2019). Liu, X., Stanford, C. B., Yang, J., Yao, H. & Li, Y. Foods Eaten by the Sichuan snub‐nosed monkey (Rhinopithecus roxellana) in Shennongjia National Nature Reserve, China, in relation to nutritional chemistry. Am. J. Primatol. 75, 860–871 (2013). Livezey, B. C. An ecomorphological review of the dodo (Raphus cucullatus) and solitaire (Pezophaps solitaria), flightless Columbiformes of the Mascarene Islands. J. Zool. 230, 247–292 (1993). Livezey, B. C. & Zusi, R. L. Higher-order phylogeny of modern birds (Theropoda, Aves: Neornithes) based on comparative anatomy. II. Analysis and discussion. Zoological journal of the Linnean Society 149, 1–95 (2007). Lobo, L. S. Estudo da morfologia dentária de Xenorhinotherium bahiense Cartelle & Lessa, 1988 (Litopterna, Macraucheniidae) Universidade Federal De Viçosa, (2015). Long, J. A., Archer, M., Flannery, T. & Hand, S. 
Prehistoric mammals of Australia and New Guinea: one hundred million years of evolution. (Johns Hopkins University Press, 2002). Louys, J., Meloro, C., Elton, S., Ditchfield, P. & Bishop, L. C. Mesowear as a means of determining diets in African antelopes. Journal of Archaeological Science 38, 1485–1495, https://doi.org/10.1016/j.jas.2011.02.011 (2011). Ma, J., Wang, Y., Jin, C., Hu, Y. & Bocherens, H. Ecological flexibility and differential survival of Pleistocene Stegodon orientalis and Elephas maximus in mainland southeast Asia revealed by stable isotope (C, O) analysis. Quaternary Science Reviews 212, 33–44 (2019). MacFadden, B. J. Fossil horses from “Eohippus”(Hyracotherium) to Equus: scaling, Cope’s Law, and the evolution of body size. Paleobiology 12, 355–369 (1986). MacFadden, B. J. Diet and habitat of toxodont megaherbivores (Mammalia, Notoungulata) from the late Quaternary of South and Central America. Quaternary Research 64, 113–124 (2005). MacFadden, B. J. & Shockey, B. J. Ancient feeding ecology and niche differentiation of Pleistocene mammalian herbivores from Tarija, Bolivia: morphological and isotopic evidence. Paleobiology 23, 77–100 (1997). MacPhee, R. D. E. & Sues, H.-D. Extinctions in Near Time: Causes, Contexts, and Consequences. (Springer, 1999). Madden, R. H. Hypsodonty in Mammals: Evolution, Geomorphology, and the Role of Earth System Processes. (Cambridge University Press, 2014). Al Majaini, H. Nutritional ecology of the Arabian tahr Hemitragus jayakari Thomas 1984 in Wadi Sareen Reserve area, M. Sc. thesis, Sultan Qaboos University, Oman. 97pages, (1999). Marcolino, C. P., dos Santos Isaias, R. M., Cozzuol, M. A., Cartelle, C. & Dantas, M. A. T. Diet of Palaeolama major (Camelidae) of Bahia, Brazil, inferred from coprolites. Quaternary international 278, 81–86 (2012). Marin, V. C. et al. Diet of the marsh deer in the Paraná River Delta, Argentina—a vulnerable species in an intensive forestry landscape. European Journal of Wildlife Research 66, 16 (2020). Marinero, N. V., Navarro, J. L. & Martella, M. B. Does food abundance determine the diet of the Puna Rhea (Rhea tarapacensis) in the Austral Puna desert in Argentina? Emu-Austral Ornithology 117, 199–206 (2017). Mayte, G.-B. et al. Diet and habitat of Mammuthus columbi (Falconer, 1857) from two Late Pleistocene localities in central western Mexico. Quaternary International 406, 137–146 (2016). McAfee, R. K. Feeding mechanics and dietary implications in the fossil sloth Neocnus (Mammalia: Xenarthra: Megalonychidae) from Haiti. J. Morphol. 272, 1204–1216 (2011). McDonald, H. G. Palecology of extinct Xenarthrans and the Great American Biotic Interchange. Bulletin of the Florida Museum of Natural History 45, 319–340 (2005). McDonald, H. G. & Pelikan, S. Mammoths and mylodonts: Exotic species from two different continents in North American Pleistocene faunas. Quaternary International 142–143, 229–241, https://doi.org/10.1016/j.quaint.2005.03.020 (2006). McDonald, H. G., Feranec, R. S. & Miller, N. First record of the extinct ground sloth, Megalonyx jeffersonii,(Xenarthra, Megalonychidae) from New York and contributions to its paleoecology. Quaternary International 530, 42–46 (2019). McFarlane, D. A., MacPhee, R. D. E. & Ford, D. C. Body Size Variability and a Sangamonian Extinction Model forAmblyrhiza, a West Indian Megafaunal Rodent. Quaternary Research 50, 80–89 (1998). McNamara, K. & Murray, P. Prehistoric Mammals of Western Australia. (Western Australian Museum, 2010). Mead, J. I., O’Rourke, M. K. & Foppe, T. M. 
Dung and diet of the extinct Harrington’s mountain goat (Oreamnos harringtoni). J. Mammal. 67, 284–293 (1986). Mead, J. I., Agenbroad, L. D., Phillips, A. M. III & Middleton, L. T. Extinct mountain goat (Oreamnos harringtoni) in southeastern Utah. Quaternary Research 27, 323–331 (1987). Meijaard, E. & Groves, C. Upgrading three subspecies of babirusa (Babyrousa sp.) to full species level. Asian Wild Pig News 2, 33–39 (2002). Meijaard, E. & Groves, C. P. Morphometrical relationships between South‐east Asian deer (Cervidae, tribe Cervini): Evolutionary and biogeographic implications. J. Zool. 263, 179–196 (2004). Meloro, C. & de Oliveira, A. M. Elbow joint geometry in bears (Ursidae, Carnivora): a tool to infer paleobiology and functional adaptations of Quaternary fossils. J. Mamm. Evol. 26, 133–146 (2019). Mengli, Z., Willms, W. D., Guodong, H. & Ye, J. Bactrian camel foraging behaviour in a Haloxylon ammodendron (C.A. Mey) desert of Inner Mongolia. Appl. Anim. Behav. Sci. 99, 330–343, https://doi.org/10.1016/j.applanim.2005.11.001 (2006). Miller, G. H. et al. Pleistocene extinction of Genyornis newtoni: human impact on Australian megafauna. Science 283, 205–208 (1999). Miller, G. H. Ecosystem collapse in Pleistocene Australia and a human role in megafaunal extinction. Science 309, 287–290, https://doi.org/10.1029/2004gl021592 (2005).
Dark matter is a hypothetical form of matter thought to outweigh the ordinary matter that makes up stars, planets, and us by about five to one. It is invisible to our instruments, and its effects can only be observed through gravity. A standard approach measures its distribution by looking at how its presence distorts the light of distant galaxies. This method, though effective, has a limit on how far back in time it can look: about 8 billion years in most cases. Now that limit has been pushed all the way back to 12 billion years ago.

Massive objects warp space-time around them, bending light so that they act like a lens on distant objects behind them. The biggest ones can create spectacular lensed images of distant galaxies. Smaller ones create smaller distortions, but by measuring those distortions, we can reconstruct the distribution of mass in the lensing galaxy. In this way, astronomers can see the invisible dark matter. This approach works as long as there are plenty of bright background galaxies whose light can be lensed by galaxies closer to us. That requirement creates a limit when looking deeper into the universe, and so further into the past: the first galaxies formed a few hundred million years after the Big Bang, and they were not all that bright.

Reporting in Physical Review Letters, a collaboration led by scientists at Nagoya University applied the same method in a new way, revealing the distribution of dark matter around galaxies a whopping 12 billion years ago. Instead of looking for distortions in the light of distant galaxies, they looked at the very first light in the universe: the cosmic microwave background (CMB). The CMB is an emission that permeates the entire cosmos. About 380,000 years after the Big Bang, the universe finally became cool enough for light to travel without being absorbed by matter; from that point on, this light was free. As the universe expanded, its wavelength was stretched all the way into the microwave band, but it is still affected by the gravity of massive objects along its path. So by measuring lensing of the CMB, the researchers were able to trace the distribution of dark matter further back in time and deeper into space.

“Look at dark matter around distant galaxies? It was a crazy idea. No one realized we could do this,” Professor Masami Ouchi of the University of Tokyo, who made many of the observations, said in a statement. “But after I gave a talk about a large distant galaxy sample, Hironao [Miyatake, research lead] came to me and said it may be possible to look at dark matter around these galaxies with the CMB.”

“Most researchers use source galaxies to measure dark matter distribution from the present to 8 billion years ago,” added Assistant Professor Yuichi Harikane of the Institute for Cosmic Ray Research at the University of Tokyo. “However, we could look further back into the past because we used the more distant CMB to measure dark matter. For the first time, we were measuring dark matter from almost the earliest moments of the universe.”

The most intriguing finding is the measurement of the clumpiness of dark matter. According to the standard model of cosmology, Lambda-CDM, which underpins our understanding of the universe, dark matter forms regions of overdensity in which, over time, galaxies form. But the clumpiness measured in this study is lower than the theory predicts. “Our finding is still uncertain,” Miyatake, from Nagoya University, said. “But if it is true, it would suggest that the entire model is flawed as you go further back in time.
This is exciting because, if the result holds up as the uncertainties are reduced, it could suggest an improvement of the model that may provide insight into the nature of dark matter itself.”

“At this point, we will try to get better data to see if the Lambda-CDM model is actually able to explain the observations that we have in the universe,” said Andrés Plazas Malagón, associate research scholar at Princeton University. “And the consequence may be that we need to revisit the assumptions that went into this model.”

The team used data from the European Space Agency’s Planck observatory for the CMB and observations from the Subaru Hyper Suprime-Cam Survey (HSC). Only one-third of the HSC data has been analyzed so far, so the team is now working to complete the analysis.
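A footnote on the physics that makes all of this possible, added here for context rather than taken from the paper itself: in general relativity, a light ray passing a point mass M at a closest distance b is deflected by the small angle

α = 4GM / (c²b)

a standard textbook result. For the Sun, this works out to about 1.75 arcseconds for a ray grazing the solar limb, the value famously confirmed during the 1919 eclipse; the far larger masses of galaxies and galaxy clusters produce the measurable distortions that lensing surveys, whether of background galaxies or of the CMB, rely on.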
Students preparing for the SAT math section will have to solve ratio, rate, and proportion problems. If you take a look at any SAT math practice test, you will see that ratios, rates, and proportions are among the most common challenges. So, let’s get started!

What are ratios and how do we solve them?

Ratios are in fact fractions that compare two numbers. They are called “ratios” because they were originally written as 1:2 rather than 1/2. The ratio 1:2 is read “one to two” and is equivalent to the fraction one half; likewise, 2/5 would be written as 2:5 and read as “two fifths,” or a 2 to 5 ratio.

We have the following exercise: What is the ratio a:b, where 3a = 7b? The first operation is to solve for a: divide both sides by 3, so 3a/3 = 7b/3, resulting in a = 7b/3. The next step is to bring a over b, so we divide both sides by b: a/b = 7b/(3b). We cancel the b’s and end up with a/b = 7/3, which leads us to the beautiful result that the ratio a:b is 7:3.

What are rates and how do we solve them?

Rates are represented by the formula d = rt, where d is distance, r is rate, and t is time: the speed formula. On SAT math tests we will stumble upon exercises like this one: Mike can drive two laps every 1.5 minutes on the national circuit. At this rate, how many minutes does Mike need to drive 12 laps?

Laps play the role of the distance and 1.5 minutes is the time, so the formula for this particular exercise is 2 = r × 1.5. How do we find r? We divide both sides by 1.5: 2/1.5 = r(1.5)/1.5. The 1.5 cancels on the r side, leaving r = 2/1.5. Still, we want to simplify the fraction, so we multiply the top and bottom by 2, resulting in r = 4/3 laps per minute.

We still need to find the time it takes Mike to drive those 12 laps at a rate of 4/3, so: 12 (laps) = (4/3)t. We solve by multiplying both sides by 3/4: (3/4) × 12 = (4/3)t × (3/4). The 4/3 and 3/4 cancel on one side, and we are left with t = (3/4) × 12, which works out to 9. So the answer to the question is that it will take Mike 9 minutes to drive 12 laps.

What are proportions and how do we solve them?

The interesting thing about the above SAT math exercise is that it could also be solved through a proportion, like this: 2/1.5 = 12/x, where x is the time. We multiply in a cross and obtain 1.5 × 12 = 18 on one side and 2x on the other; 18 = 2x, so x = 9.

Proportions are expressed through the formula a/b = c/d: one fraction equals another fraction. When we solve proportions, we always cross multiply the values in the fractions, multiplying a by d and b by c, which gives ad = bc. Let’s solve a proportion! We have 1/6 = x/12. What’s next? Cross multiplication: 6x = 12, thus x = 2. The solution is 1/6 = 2/12.

Once you start to get familiar with solving these SAT math problems, you will find that rates, ratios, and proportions are quite fun. You can also easily apply them in real life, to scale a recipe or plan a construction project. (If you like to double-check this kind of arithmetic with a computer, see the short sketch at the end of this article.)

At the Online Math Center…

You will find experienced and devoted math tutors who will tailor the learning experience to your needs. We offer individual and group tutoring to middle school students, as well as to students who want to participate in math competitions. Contact us to find out more about our SAT Math tutoring services!
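And here is that promised sketch: a minimal Python illustration (our own addition, not part of any SAT material) that solves a proportion a/b = c/x by cross-multiplying, using exact fractions so nothing is lost to rounding. The function name and examples are simply ours.

from fractions import Fraction

def solve_proportion(a, b, c):
    # Solve a/b = c/x for x. Cross-multiplying gives a*x = b*c, so x = b*c/a.
    return Fraction(b) * Fraction(c) / Fraction(a)

# Mike's rate problem: 2 laps / 1.5 minutes = 12 laps / x minutes
print(solve_proportion(2, Fraction(3, 2), 12))   # prints 9

# The last worked example, 1/6 = x/12, rearranged as x = 12 * (1/6)
print(Fraction(1, 6) * 12)                       # prints 2

Both printed values match the answers worked out by hand above, which is exactly the point: cross multiplication is mechanical enough that a computer can check it for you.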
Despite the importance of our sense of hearing, both in our personal and professional lives, it is one of the senses that most of us take for granted. We punish our ears with loud noises at work and at home, and that ongoing noise pollution can lead to hearing loss that is hard to correct. This guide can help. We have collected information on how hearing works, how hearing loss happens within specific types of jobs, and what you can do to protect your hearing at home and at work. Let’s get started.

How the Ear Works

The human ear is a remarkable structure, made up of three separate parts that all contribute to our ability to hear. The exterior portion of the ear is shaped a bit like a funnel, with hard structures that point forward in the same direction as the eyes. This part of the ear is designed to capture sound waves as they move toward the front of the body. When those waves arrive, they move into the middle part of the ear. Here, three small bones help to amplify the sound. They move when exposed to sound, and they help the waves travel even deeper into the body. The inner ear is where sounds become nerve impulses, according to the American Academy of Otolaryngology – Head and Neck Surgery. A structure deep in the inner ear, known as the cochlea, translates those sound waves into electrical signals that can be picked up by the auditory nerve. That nerve transmits the electrical signals to the brain.

This process is complex, but it takes less than a second to complete. You will not notice any delay between an event happening and your ability to hear it. The process is the same whether the noise is loud or quiet, but the damage left behind can vary dramatically depending on the level of sound you are exposed to. Volume is measured in decibels (dB), and according to the National Institutes of Health (NIH), events that generate sounds of 75 dB or less are considered safe, even if you are exposed to them for a long period of time. A humming refrigerator or a normal conversation falls within this range, NIH says.

Louder noises cause damage to delicate structures inside the cochlea. The cochlea is lined with very small hair-like structures, and those hairs bend when they are exposed to waves of sound. At the peak of the bend, channels within the hairs open up and chemicals rush in. That process creates the electrical signal the brain needs to register sound. Very loud sounds can damage the hairs inside the cochlea, and according to NIH, damaged cells do not grow back. When they are broken, damaged, or dead, they do not repair themselves. They are gone for good. Fewer hair cells means fewer electrical impulses moving into the brain, and that means sounds become harder to hear. When enough cells die off, people can develop measurable hearing loss.

Hearing can also deteriorate with age. The hairs within the ear can thin with age, and the bones within the middle ear can stop moving as easily as they once did. Nerves can also slow down, which can mean some sound signals are dropped before they reach the hearing centers within the brain. This type of age-related hearing loss is also not curable. People with this type of hearing loss will not get better with therapy.

Doctors have developed thresholds so they can determine who can hear normally and who has a condition that impedes good hearing. According to the World Health Organization, people with normal hearing have sound thresholds of 25 dB or better in both ears. That means they can hear sounds as quiet as 25 dB.
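One number worth pausing on before going further: the decibel scale is logarithmic, so every 10 dB step represents a tenfold increase in sound intensity. As a rough back-of-the-envelope illustration (our own sketch, not a calculation from NIH or any other source cited in this guide), a few lines of Python show how quickly the gap grows:

def intensity_ratio(db_high, db_low):
    # Each 10 dB step corresponds to a 10x difference in sound intensity.
    return 10 ** ((db_high - db_low) / 10)

# A 112 dB rock concert (a level discussed later in this guide)
# versus the 75 dB "safe" ceiling mentioned above:
print(round(intensity_ratio(112, 75)))  # ~5012, roughly 5,000 times the intensity

That is why a seemingly modest jump in decibels can be so much more damaging than the raw numbers suggest.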
Someone who cannot hear at the 25 dB level has hearing loss. Doctors test hearing acuity by playing a variety of different sounds through headphones and asking patients to report what they can hear. These tests take just minutes to complete, and they provide doctors with a complete picture of what you can and cannot hear.

The Scope of Hearing Loss

Hearing loss is surprisingly common. According to NIH, about 15 percent of Americans 18 and older say they have some trouble hearing. This difficulty could be caused by various issues, including:

- Birth defects
- Head injuries
- Allergies and colds

Often, however, the hearing loss we experience is due to overexposure to noise. According to the Center for Hearing and Communication, noise is one of the leading causes of hearing loss. In addition, people with hearing loss wait an average of seven years before they ask for help with their hearing, and only 16 percent of doctors perform routine hearing loss screening. These statistics suggest that many of us are living with hearing loss, and that the majority of us with hearing difficulties are unaware of the issue or unwilling to ask for help. That means our hearing loss is likely to get worse, because we are not taking steps to protect the hearing we have left.

Living With Hearing Loss

The way people experience hearing loss can differ dramatically, depending on the type of loss they are living with. According to a researcher quoted in a piece produced for National Public Radio, people with age-related hearing loss may be unable to hear the consonants in the words people use when they are speaking. That can make the spoken word seem garbled. Others with noise-related hearing loss may be unable to hear some sounds at all, while other sounds are so loud that they are painful. For people with hearing loss, it can be difficult to:

- Distinguish one voice out of many.
- Hear people speaking in a crowded room.
- Understand dialogue in television shows and movies.
- Decipher what people say while whispering.

Every moment that involves sound requires a little more interpretive effort, and according to the blog Living With Hearing Loss, written by someone with hearing loss, that interpretation is exhausting. This writer describes hearing loss as playing a constant game of fill-in-the-blank. Each sound requires thought, so the listener can figure out what is there and what is missing. Another person with hearing loss, interviewed by Prevention, reports that people can become impatient when they are dealing with a person who has hearing loss. They may snap at recurrent requests to repeat what has been said, and that can force the person with hearing loss to simply guess at the conversation. Some people with hearing loss may avoid social situations altogether for fear of causing irritation or anger.

The Risk of Hearing Loss Is Not Equal

A decline in hearing acuity is part of the aging process, which means all of us run the risk of losing at least some of our hearing with each passing year. There is very little we can do to prevent this type of hearing loss. Everyone who ages is at risk. But there are some types of hearing loss that are very preventable, and they are more common among specific types of people. Some of our daily habits could also contribute to hearing loss. For example, in a study published in the journal Noise and Health, researchers examined headphone use among 280 teenagers.
They found that about 10 percent of these teens listened to music at 90 dB to 100 dB for long periods of time. Some even listened to music at this volume while they were sleeping. As we mentioned earlier, volumes of 75 dB and below are not associated with hearing damage; louder volumes are. Playing music at 100 dB could certainly put these teens at risk for hearing loss.

The work we do can also play a role. For example, the Hearing Health Foundation reports that 30 million workers in the United States are exposed to hazardous noise levels. This means some people work in jobs in which the noise is so loud and so constant that it damages their ears each and every day they go to work. If these workers do not take precautions to protect their hearing while they are at work, they may take home hearing loss in addition to a paycheck.

Musicians rely on an exceptional sense of hearing to excel in the work they do every day. Ironically, that work can endanger the very sense their careers depend on. Consider this: The American Speech-Language-Hearing Association reports that a rock concert can produce noise at 112 dB, a volume the association considers extremely loud and dangerous to hearing. The association says that people attending a concert should wear earplugs to protect their hearing.

A person in the audience might experience this volume level for just a few hours on a rare occasion. But someone playing with that rock group could experience that level of noise multiple times during the same week while on tour. That same musician might also participate in ear-splitting sound checks before each concert just to make sure the room is properly set up for the festivities. People who perform rock concerts like this over a long period of time can develop profound hearing loss. For example, Rolling Stone reports that AC/DC singer Brian Johnson was forced to leave a tour partway through due to profound hearing loss. It is not clear exactly how this loss developed, but it is easy enough to assume that music played a part. This is a band known for loud concerts and raucous crowds, and both could have exposed the singer to very damaging sound waves over his long career. When the hearing loss was found, his doctors advised the singer to stop touring or risk complete deafness.

It is not just rock music that can contribute to hearing loss in musicians. Even symphonic music, known for its soothing tones, can reach deafening volumes that harm musicians. That risk became clear in 2017, according to The Economist, when a violinist in London filed a lawsuit claiming that he lives with hearing loss sustained during two rehearsals of a Wagner piece. In the lawsuit, the violinist claims that the noise around him reached 137 dB.

In an article published in Audiology Online, researchers measured the dangers experienced by people playing various types of instruments, and they found that almost no one was safe:

- Woodwinds: Brass players sitting behind them in an orchestra can harm their hearing.
- Flutes: Neighboring flutes can reach more than 105 dB.
- String sections: Small strings can play at volumes louder than 110 dB, and they may also have brass instruments blaring behind them.
- Singers: Soprano singers can reach volumes of more than 115 dB.
- Amplified instruments, including guitars: The speakers and monitors used to amplify sound can reach dangerous levels.

In ensemble settings, musicians need to hear one another in order to keep the music playing.
In traditional setups, musicians sit close to one another in order to hear complementary notes. In electronic settings, musicians use special monitor speakers that point back at their feet. As the crowds get louder, musicians may turn up those monitors in order to hear their own notes. When one monitor grows louder, another musician might turn up a different monitor in order to hear that person’s notes. It can become a game of warring speakers, with hearing as the loser.

In-ear monitors were developed to help musicians hear the mix they need without adding to the noise filling the room. They seemed like the perfect solution, but as Audiology Online points out, some in-ear monitors are capable of playing volumes of 130 dB. People who listen to sound at this level can suffer permanent hearing loss.

Musicians can work with audiologists and be fitted with specialized equipment that allows them to hear what they need to hear without harming their ears. But musicians may be uncomfortable using this protection. In a piece produced for the Health and Safety Executive in England, musicians stated that they did not wear hearing protection because:

- They felt the equipment hindered their musical performance.
- Protection made it hard for them to hear others.
- The equipment was uncomfortable, hard to use, or unsightly.
- Wearing protection made them look weak.

Musicians who cannot hear the sounds around them can face difficult career choices. They may be forced to stop touring, so they will not be exposed to very loud stadiums. They may be asked to move into producer roles, so they can control the volume of the music as it plays. One thing they will not be able to do is fix the damage that has been done. As we mentioned, hearing loss caused by loud sounds cannot be corrected. People with this type of hearing loss can use hearing aids to amplify sound, but they cannot take a pill and see the problem magically disappear.

Farmers, Gardeners, and Hearing Loss

On the surface, people who garden or farm do not have very much in common with people who perform in rock concerts. In reality, both of these groups are exposed to hazardous sounds at work, and both could develop profound hearing loss. Musicians have added protection, as they may be represented by unions or employers who are required to protect the hearing of their employees. Farmers are different. According to the U.S. Department of Agriculture, 97 percent of the farms in operation in the United States are family owned. That means farmers must take individual precautions to protect their hearing.

They may have many dangers to protect against. In a brochure produced by the Department of Health and Human Services, the Centers for Disease Control and Prevention (CDC), and the National Institute for Occupational Safety and Health, these elements of a farm could reach levels above 85 dB:

- Grain dryers
- Squealing pigs
- Tractors (in some conditions)
- Chain saws
- Circular saws

Farmers may be exposed to these triggers for hours each day, and in time, they may develop subtle signs of hearing loss, such as ringing in the ears. These early warning signs should prompt you to invest in hearing protection that you wear all day, every day. You can also invest in newer equipment that does not produce such loud sounds.

Construction Workers and Hearing Loss

People who work in construction build the roads, bridges, and highways that we rely on to get from place to place. They may also build the houses we live in and the offices we work in.
Without this team of talented people, modern life wouldn’t exist. The work they do is vital. It is also incredibly noisy. Tearing down buildings, whether by hand or with explosives, can produce sharp bursts of sound that are devastating to the delicate structures within the ear. The tools used in construction, such as jackhammers, drills, and nail guns, can also produce very loud sounds that damage hearing.

The Occupational Safety and Health Administration recommends measuring construction noise with a sound level meter and making sure that you wear protective equipment to keep your ears safe. When you can’t use a meter, the administration says, you can use the 2-to-3-foot rule: if you have to raise your voice for someone 2 to 3 feet away to hear you, the area is probably at or above 85 dB. Moving away from the source of the noise, or containing the noise with a temporary sound barrier, may be wise.

Classrooms, Daycare Centers, and Hearing Loss

What is the best way to ensure that someone hears what you have to say? Small children often answer this question with volume. The louder they speak, the more likely it might be that others will pay attention to the message they are trying to convey. Put several children in a room, such as a daycare center or classroom, and noise levels can rise rapidly. In a study published in the journal HNO, researchers found that noise levels in nursery schools can average 80 dB and can reach peaks of 112.55 dB. This is a level of noise that puts the hearing of both students and teachers at risk.

Children enter these environments to learn, and teachers can teach lessons about volume control and hearing loss. Those lessons could help children protect their own hearing, and they could help teachers create a safer environment in which to spend the day. The American Speech-Language-Hearing Association also recommends:

- Using carpets, rugs, curtains, and wall hangings to muffle sound.
- Keeping windows and doors closed when possible.
- Placing soft tips on the bottoms of chairs and tables.
- Placing tables at an angle rather than in rows.

First Responders and Hearing Loss

When an emergency happens, first responders help us recover. Often, they need to use sirens to clear traffic so they can reach an emergency as it unfolds. According to NIH, a siren’s volume can reach 120 dB, and someone who drives a vehicle with a siren blaring could be exposed to that noise for long periods every day. Once first responders arrive on the scene, they may be exposed to even more loud sounds. For example, people in the midst of a medical emergency, or people frightened about what is happening to them, may scream. The Guinness World Record for the loudest scream belongs to a woman in England whose scream measured 129 dB. That is a sound that can cause hearing damage.

Firearms might also be part of the experience for first responders. Police may encounter people who are shooting at them, or they may be required to fire their own guns. The American Speech-Language-Hearing Association reports that almost all firearms produce noises louder than 140 dB, a level associated with permanent hearing damage. First responders may not be able to wear headphones or earplugs for the entirety of their shifts; they need to communicate with the people around them, and that demands acute hearing.
But if this is your line of work, you can work with your union representatives to ensure that you have access to protection during the loudest portions of your workday, and you can demand appropriate rest breaks, so you can give your ears a break from relentless noise.

Factory Work and Hearing Loss

Factories are where the goods and products we use each day are made. Parts are stamped from hard metal, delicate pieces are welded together, segments roll down assembly lines, and workers stand guard to make sure everything runs according to plan. Factories tend to be big, cavernous spaces, and the equipment that fills them can be incredibly loud. That means the average factory worker is exposed to a great deal of noise that hits many reflective surfaces, and each bounce can amplify the noise.

Damage to hearing can happen very slowly, which means many people who work in factories may have hearing loss they are simply not aware of. For example, in a study of factory workers published in the journal Noise and Health, researchers found that 76 percent of workers said they had excellent hearing, but in reality, 42 percent had hearing loss.

The modern factory environment is different from the factory floors of years past. As The Seattle Times points out, most factory jobs are now considered skilled positions in which workers are required to understand computer programming and technical specifications. Working in a job like this might mean sitting in a booth next to a machine rather than standing next to a clanging machine for eight hours per day. Even so, a factory job might not be safe for your ears, especially if you take off your hearing protection on a regular basis to talk with friends or coworkers. The best way to stay safe in a job like this is to wear protection all day long and to use your breaks to retreat to a quiet space and let your ears rest.

Pilots, Flight Crews, and Hearing Loss

To help a plane push past the pull of gravity and lift into the air, the work of many engines is required. Each engine can produce an astonishing amount of noise, and the people who work in the aviation industry may be exposed to that noise on a regular basis. For example, Purdue University reports that a jet takeoff, heard from 25 meters away, can reach 150 dB, which is enough to rupture an eardrum. A Boeing 737 at one nautical mile before landing, on the other hand, can reach a sound level of 97 dB.

People who work on the ground at an airport or military base absolutely must wear hearing protection; without it, they could not keep doing this work. But they are not the only ones who need to worry about the impact of aircraft sound. The Federal Aviation Administration reports that people inside aircraft can experience noise from the equipment that helps the plane fly, such as propellers and rotors, and from the equipment that keeps the plane comfortable and safe, such as alert systems and pressurization systems. Pilots may experience so much noise that they must yell to one another to be heard, and flight crew walking through the cabin may be exposed to the same level of sound. Wearing earplugs, earmuffs, communication headsets, or a combination of these devices can help keep people on the ground and in the air safe from the damage noise can cause.

Waiters, Servers, and Hearing Loss

A restaurant is a place where people come together to share a meal and catch up on conversation. It is not surprising, then, that restaurants are exceedingly noisy spaces.
In the Los Angeles Times, a reporter moved through several restaurants with a noise meter and found that rooms registered between 87 dB and 90.3 dB. These are levels associated with hearing loss, and people who work in these environments may be exposed to them throughout an entire shift.

Restaurants weren’t always so noisy. As an author writing for Vox points out, restaurants became much louder in the 1990s, when a New York chef began pumping very loud music into the dining room to create a sense of excitement and bustle for customers. The trend caught on, and now it seems unusual to walk into a dining establishment that is quiet. But customers may not appreciate the din, and that gives the staff an opening to start a conversation about noise. Perhaps a trial night of lower volumes could inspire owners to turn down the sound dial a bit. If that doesn’t work, staffers can consult their union representatives and ask about noise protections.

Military Workers and Hearing Loss

Those who work in the military endure some of the most punishing noise levels there are. They may fly in transport planes that do not muffle engine noise, they may fire guns or rockets, and they may endure ongoing yelling from staffers or civilians. It is no wonder that the Hearing Health Foundation estimates that three in five returning service members come home with hearing loss. The military has equipment available that can protect the hearing of those who serve. Some helmets come with noise protection, and some soldiers are given special mufflers to wear when they are shooting handguns and rifles. Wearing this gear is not a sign of weakness. It is mandatory for those who want to return to civilian life with their hearing intact.

Simple Steps to Take to Protect Your Hearing

The job you hold can play a big role in your ability to hear, and some of the risks you face at work are not easily corrected. But your ears face other challenges during a typical day, and meeting those challenges could be a key step toward protecting and preserving your hearing. According to the Better Hearing Institute, a full one-third of permanent hearing loss is preventable, but prevention requires diligence and planning. These are a few steps you can take.

1. Pay attention to headphone volume. Headphones put the source of the sound very close to your ears, and that makes them quite dangerous. The National Health Service recommends listening to music at no more than 60 percent of the device’s maximum volume. Some devices allow you to set a limit so you cannot make them any louder.

2. Get accurate sound data. The CDC offers a sound level meter app you can use to determine just how loud a noise really is. Downloading that app and using it often can help you understand whether you should be using protection in any new environment you walk into.

3. Don’t be afraid to walk away. Sounds grow quieter the farther you move from them. If you feel as though a sound is too loud, try moving away from it or walking into another room.

4. Keep protection close at hand. The CDC recommends keeping earplugs and other sound-blocking devices in your purse, pockets, and car. When sounds are too loud and you find yourself yelling to be heard, it is time to use those protective devices.

Protecting your hearing is vital, but it might also seem a little unusual. Your friends and family members may notice that you are wearing earplugs, steering clear of noisy environments, and otherwise working to protect your ears.
Be open with them about your concerns, and share what you're doing to preserve your hearing. You may be able to convince them to do the same.

How to Communicate With Someone Who Has Hearing Loss

Hearing aids help to amplify sounds, and they can be good tools for those who have hearing loss. For some people, the right hearing aid can open up worlds of experience they thought they had lost for good. People with hearing loss also appreciate sensitivity and adjustments from those around them. If you are living or working with someone who has hearing loss, these tips from the Hearing Loss Association of America may be helpful.

- Ask how you can make yourself easier to hear. Would turning off a fan or a radio help?
- Keep your face in the light in case the person needs to read your lips.
- Don't cover your mouth with your hands, and don't chew anything while you're talking.
- Speak clearly, but resist the urge to shout.
- If you are struggling to make a specific word understood, look for a way to rephrase the sentence using different words.

Above all, be patient. It can be frustrating for both sides when communication is difficult. Approaching the issue with compassion can help to keep the spirit of cooperation alive.

Why We Care About Your Ears

At Cloud Cover Music, we don't make devices that protect your ears, and we don't make hearing aids. Instead, we make it possible for businesses to play the music you want to hear in their offices, salons, restaurants, and other communal spaces. We want you to be able to hear the music we play, and we want to ensure that artists can hear themselves, so they will keep making the music you want to hear. We hope this guide has been useful as you look for ways to protect your hearing. And if you're looking for a partner to smooth the process of playing music in public, we'd love to talk with you. Please contact us.
A group of experts from American and European space agencies took part in a week-long exercise led by NASA in which they faced a hypothetical scenario: an asteroid roughly 35 million miles from Earth, on course to strike the planet within six months, according to the science site ScienceAlert. With each passing day of the exercise, the participants learned more about the asteroid's size, trajectory, and impact strength, and then had to collaborate and use their technological knowledge to see whether anything could be done to stop the object coming from space.

The group concluded that no technology on Earth could stop the hypothetical asteroid in the narrow time frame available; they failed to prevent the asteroid's collision with Earth, which, in the simulation, destroyed a large part of Europe. The fictional asteroid used in the simulation was designated 2021 PDC.

The participants said: "If we encountered the hypothetical asteroid scenario in real life, we would not be able to launch any spacecraft in such a short time to repel it." Participants in the exercise suggested launching lasers that could heat and vaporize the asteroid's surface enough to change its trajectory. Another possibility is sending a spacecraft to collide with an incoming asteroid and knock it off its path; this is NASA's preferred strategy.

The site pointed out that no known asteroid currently poses a threat to Earth, but an estimated two-thirds of the asteroids 140 meters or larger, which are big enough to cause great destruction, have not yet been discovered. For this reason, NASA is trying to prepare for such a situation. "Ultimately, these exercises help the global defense community communicate with each other to ensure coordination between us all in the event of a potential future threat," said Lindley Johnson, NASA's Planetary Defense Officer.
What if a line were drawn outside a circle that appeared to touch the circle at only one point? How could you determine whether that line was actually a tangent? After completing this Concept, you'll be able to apply theorems to solve tangent problems like this one.

The tangent line and the radius drawn to the point of tangency have a unique relationship. Let's investigate it here.

Investigation: Tangent Line and Radius Property

Tools needed: compass, ruler, pencil, paper, protractor
- Using your compass, draw a circle. Locate the center and draw a radius, labeling the center and the radius's endpoint on the circle.
- Draw a tangent line through the endpoint of the radius; that endpoint is the point of tangency. To draw a tangent line, take your ruler and line it up with the point of tangency. Make sure the point of tangency is the only point on the circle that the line passes through.
- Using your protractor, measure the angle between the radius and the tangent line. You should find that it is 90°, which motivates the following theorem.

Tangent to a Circle Theorem: A line is tangent to a circle if and only if the line is perpendicular to the radius drawn to the point of tangency.

The easiest way to prove this theorem is indirectly (by proof by contradiction). Also, notice that this theorem uses the words "if and only if," making it a biconditional statement. Therefore, the converse of this theorem is also true.

Now let's look at two tangent segments drawn from the same external point. If we were to measure these two segments, we would find that they are equal.

Two Tangents Theorem: If two tangent segments are drawn from the same external point, then the segments are equal.

Example: In the figure, a segment is tangent to the circle at the marked point. Find the unknown length, reducing any radicals. Solution: Because the segment is tangent, it is perpendicular to the radius at the point of tangency, making the triangle a right triangle. We can use the Pythagorean Theorem to find the unknown side.

Example: Find the unknown length in the figure, rounding your answer to the nearest hundredth, and then find the perimeter of the triangle. Solution: Use the Two Tangents Theorem to match each pair of tangent segments drawn from the same external point, then add all the side lengths to get the perimeter. We say the circle is inscribed in the triangle: a circle is inscribed in a polygon if every side of the polygon is tangent to the circle.

Example: Find the value of x. Solution: Because the two segments are drawn from the same external point and are both tangent to the circle, they are congruent. Set the two expressions equal to each other and solve for x.

Guided Practice

1. Determine if the triangle below is a right triangle. Explain why or why not.
2. Find the distance between the centers of the two circles. Reduce all radicals.
3. Given the two centers and a segment tangent to both circles, find the length of the tangent segment.

Answers:

1. To determine whether the triangle is a right triangle, use the Pythagorean Theorem, setting the longest length equal to c in the formula. The two sides of the equation do not come out equal, so the triangle is not a right triangle, and from the converse of the Tangent to a Circle Theorem, the segment in question is not tangent to the circle.

2. The circles are not tangent, so the distance between the centers is not simply the sum of the radii. Add a segment so that a rectangle is formed; then use the Pythagorean Theorem to find the distance between the centers.

3. Because the segment is tangent to both circles, it is perpendicular to both radii, and the two triangles formed are similar. Use the Pythagorean Theorem to find the first unknown length, then use the similar triangles to find the second.

Practice

Determine whether the given segment is tangent to the circle. Algebra Connection: find the value of the indicated length(s); the marked points are points of tangency. Simplify all radicals.
- Using the figure's two points of tangency, decide whether the two marked segments are congruent and explain why, find the indicated lengths, and use trigonometric ratios to find the indicated angle, rounding to the nearest tenth of a degree.
- Fill in the blanks in the proof of the Two Tangents Theorem. Given: two tangent segments drawn from the same external point, with radii drawn to the two points of tangency. Prove: the tangent segments are congruent. (Reasons given in the partial proof include: 4. definition of perpendicular lines; 5. connecting two existing points; 6. the two triangles formed are right triangles.)
- From the above proof, we can also conclude (fill in the blanks):
- The quadrilateral formed by the two radii and the two tangent segments is a _____________ (type of quadrilateral).
- The line that connects the ___________ and the external point _________ the angle between the two tangent segments.
- The marked points are all points of tangency for the three tangent circles. Explain why the indicated tangent segments are congruent.
- Two circles are tangent at a single point, and a segment through that point is tangent to both circles. Using the marked lengths, find the radius of the smaller circle.
- Four circles are arranged inside an equilateral triangle as shown. If the triangle has sides equal to 16 cm, what is the radius of the bigger circle?
- Two circles are tangent at a point. Explain why the two centers and the point of tangency are collinear.
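The right-triangle relationship used throughout these tangent problems is easy to check numerically. Here is a minimal Python sketch; the numbers are illustrative and not taken from the exercises above:

```python
import math

def tangent_length(d, r):
    """Length of a tangent segment drawn from an external point.

    d: distance from the external point to the circle's center
    r: radius of the circle

    By the Tangent to a Circle Theorem, the radius to the point of
    tangency is perpendicular to the tangent, so the tangent segment
    is a leg of a right triangle with hypotenuse d and other leg r.
    """
    if d <= r:
        raise ValueError("the point must lie outside the circle")
    return math.sqrt(d * d - r * r)

# Example: radius 5, external point 13 units from the center.
print(tangent_length(13, 5))  # 12.0 -- the classic 5-12-13 triangle

# Two Tangents Theorem: both tangent segments from the same external
# point have this same length, which is why the function needs only
# the distance d and the radius r, not which tangent we picked.
```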
Government of Japan

The government of Japan is a constitutional monarchy in which the power of the Emperor is limited and relegated primarily to ceremonial duties. As in many other states, the government is divided into three branches: the executive, the legislative, and the judicial, as defined by the current post-war Constitution of Japan. The Constitution defines the government as a unitary parliamentary system, with local governments established as an act of devolution under the Local Autonomy Law. The throne of the Emperor is retained even though popular sovereignty has been adopted.

Enacted as a revision of the pre-war Constitution of the Empire of Japan, the current Constitution enables a democratic form of governance in which the legislative and executive branches are held accountable to each other, a feature known as the fusion of powers. The executive branch, in which executive power is explicitly vested in the Cabinet, must enjoy the support and confidence of the legislative organ, the National Diet, to remain in office. Likewise, the Prime Minister, as the head of the Cabinet, has the power to dissolve the House of Representatives, one of the two houses of the Diet. However, unlike in parliamentary republics, the executive branch does not conceptually derive its legitimacy from the parliament in the form of parliamentary sovereignty; instead, it derives its authority from the people through a parallel voting system. Thus, the National Diet is known under the Constitution as "the highest organ of state power", strictly reflecting the sovereignty of the people as represented by the Diet. While the executive and legislative branches are intermingled, the judicial branch is strictly separated from the other two. Its separation is guaranteed by the Constitution, which states that "no extraordinary tribunal shall be established, nor shall any organ or agency of the Executive be given final judicial power", a feature known as the separation of powers.

The Emperor acts as the ceremonial head of state and is defined by the Constitution as "the symbol of the State and of the unity of the people". The Prime Minister is the head of government and is formally appointed to office by the Emperor after being designated by the National Diet.

The Emperor of Japan (天皇) is the head of the Imperial Family and the ceremonial head of state. He is not even the nominal chief executive, however, and possesses only certain ceremonially important powers; he has no real powers related to the government, as stated clearly in Article 4 of the Constitution. His appointment powers are:
- Appointment of the Prime Minister as designated by the Diet.
- Appointment of the Chief Justice of the Supreme Court as designated by the Cabinet.

While the Cabinet is the source of executive power and most of its power is exercised directly by the Prime Minister, several of its powers are exercised through the Emperor. The powers exercised via the Emperor, as stipulated by Article 7 of the Constitution, are:
- Promulgation of amendments of the constitution, laws, cabinet orders, and treaties.
- Convocation of the Diet.
- Dissolution of the House of Representatives.
- Proclamation of general elections of members of the Diet.
- Attestation of the appointment and dismissal of Ministers of State and other officials as provided for by law, and of full powers and credentials of Ambassadors and Ministers.
- Attestation of general and special amnesty, commutation of punishment, reprieve, and restoration of rights.
- Awarding of honors.
- Attestation of instruments of ratification and other diplomatic documents as provided for by law.
- Receiving foreign ambassadors and ministers.
- Performance of ceremonial functions.

The Emperor thus holds nominal ceremonial authority. For example, the Emperor is the only person with the authority to appoint the Prime Minister, even though the Diet has the actual power to designate the person fit for the position. One such example can be seen prominently in the 2009 dissolution of the House of Representatives. The House was expected to be dissolved on the advice of the Prime Minister, but the dissolution had to wait temporarily ahead of the next general election because both the Emperor and Empress were visiting Canada. In this manner, the Emperor's modern role is often compared to those of the Shogunate period and much of Japan's history, in which the Emperor held great symbolic authority but little actual political power, power often held by others nominally appointed by the Emperor himself. Something of this legacy continues today in the custom that a retired Prime Minister who still wields considerable power is called a Shadow Shogun (闇将軍).

Unlike its European counterparts, the Emperor is not the source of sovereign power, and the government does not act under his name. Instead, the Emperor represents the State and appoints other high officials in the name of the State, in which the Japanese people hold sovereignty. Article 5 of the Constitution, in accordance with the Imperial Household Law, allows a regency to be established in the Emperor's name should the Emperor be unable to perform his duties.

Historically, the Imperial House of Japan is said to be the oldest continuing hereditary monarchy in the world. According to the Kojiki and Nihon Shoki, Japan was founded by the Imperial House in 660 BC by Emperor Jimmu (神武天皇). Emperor Jimmu was the first Emperor of Japan and the ancestor of all the Emperors who followed. According to these texts, he is a direct descendant of Amaterasu (天照大御神), the sun goddess of the native Shinto religion, through his great-grandfather Ninigi.

The current Emperor of Japan (今上天皇) is Akihito, who was formally enthroned on November 12, 1990. He is styled His Imperial Majesty (天皇陛下), and his reign bears the era name Heisei (平成). Naruhito, the Crown Prince of Japan, is the heir apparent to the Chrysanthemum Throne.

The executive branch of Japan is headed by the Prime Minister. The Prime Minister is the head of the Cabinet and is designated by the legislative organ, the National Diet. The Cabinet consists of the Ministers of State, who may be appointed or dismissed by the Prime Minister at any time. While the Cabinet is explicitly defined as the source of executive power, in practice that power is mainly exercised by the Prime Minister. The exercise of its powers is responsible to the Diet, and the Diet may dismiss the Cabinet en masse with a motion of no confidence.

The Prime Minister of Japan (内閣総理大臣) is designated by the National Diet and serves a term of four years or less, with no limit on the number of terms the Prime Minister may hold.
The Prime Minister heads the Cabinet and exercises "control and supervision" of the executive branch; he is the head of government and commander-in-chief of the Japan Self-Defense Forces. The Prime Minister is vested with the power to present bills to the Diet, to sign laws, and to declare a state of emergency, and may also dissolve the Diet's House of Representatives at will. He or she presides over the Cabinet and appoints, or dismisses, the other Cabinet ministers.

Both houses of the National Diet designate the Prime Minister by ballot under the run-off system. Under the Constitution, if the two houses do not agree on a common candidate, a joint committee may be established to agree on the matter within a period of ten days, exclusive of periods of recess. If the two houses still do not agree, the decision made by the House of Representatives is deemed to be that of the National Diet. Upon designation, the Prime Minister is presented with his or her commission and is then formally appointed to office by the Emperor.

| No. | Name (English) | Name (Japanese) | Gender | Took office | Left office | Term | Alma mater |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Junichiro Koizumi | 小泉 純一郎 | Male | April 26, 2001 | September 26, 2006 | 5 years | Keio University; University College London |
| 2 | Shinzō Abe | 安倍 晋三 | Male | September 26, 2006 | September 26, 2007 | 1 year | Seikei University |
| 3 | Yasuo Fukuda | 福田 康夫 | Male | September 26, 2007 | September 24, 2008 | 1 year | Waseda University |
| 4 | Taro Aso | 麻生 太郎 | Male | September 24, 2008 | September 16, 2009 | 1 year | Gakushuin University; London School of Economics |
| 5 | Yukio Hatoyama | 鳩山 由紀夫 | Male | September 16, 2009 | June 2, 2010 | 1 year | University of Tokyo |
| 6 | Naoto Kan | 菅 直人 | Male | June 8, 2010 | September 2, 2011 | 1 year | Tokyo Institute of Technology |
| 7 | Yoshihiko Noda | 野田 佳彦 | Male | September 2, 2011 | December 26, 2012 | 1 year | Waseda University |
| 8 | Shinzō Abe | 安倍 晋三 | Male | December 26, 2012 | Present | Unknown | Seikei University |

※ As of December 21, 2014

The Cabinet of Japan (内閣) consists of the Ministers of State and the Prime Minister. The members of the Cabinet are appointed by the Prime Minister, and under the Cabinet Law, the number of members of the Cabinet appointed, excluding the Prime Minister himself, must be fourteen or fewer, though it may be increased to seventeen should a special need arise. Article 68 of the Constitution states that all members of the Cabinet must be civilians and that the majority of them must be chosen from among the members of either house of the National Diet. The precise wording leaves the Prime Minister an opportunity to appoint some non-elected Diet officials.

The Cabinet is required to resign en masse, while still continuing its functions until the appointment of a new Prime Minister, when either of the following situations arises:
- The Diet's House of Representatives passes a non-confidence resolution, or rejects a confidence resolution, unless the House of Representatives is dissolved within the next ten (10) days.
- There is a vacancy in the post of the Prime Minister, or the Diet convenes for the first time after a general election of the members of the House of Representatives.

The Cabinet's administrative functions under the Constitution include the following:
- Administer the law faithfully; conduct affairs of state.
- Manage foreign affairs.
- Conclude treaties. However, it shall obtain prior or, depending on circumstances, subsequent approval of the Diet.
- Administer the civil service, in accordance with standards established by law.
- Prepare the budget, and present it to the Diet.
- Enact cabinet orders in order to execute the provisions of the Constitution and of the law. However, it cannot include penal provisions in such cabinet orders unless authorized by law.
- Decide on general amnesty, special amnesty, commutation of punishment, reprieve, and restoration of rights.

Under the Constitution, all laws and cabinet orders must be signed by the competent Minister and countersigned by the Prime Minister before being formally promulgated by the Emperor. Also, members of the Cabinet cannot be subjected to legal action without the consent of the Prime Minister, though without impairing the right to take legal action.

The members of the Cabinet and their portfolios:
- Prime Minister: Shinzō Abe
- Deputy Prime Minister; Minister for Finance; Minister of State for Financial Services; Minister of State for Overcoming Deflation
- Minister for Internal Affairs and Communications: Sanae Takaichi
- Minister for Justice: Yōko Kamikawa
- Minister for Foreign Affairs: Fumio Kishida
- Minister for Education, Culture, Sports, Science and Technology; Minister of State for Education Rebuilding; Minister of State for the Tokyo Olympic and Paralympic Games
- Minister for Health, Labour and Welfare: Yasuhisa Shiozaki
- Minister for Agriculture, Forestry and Fisheries: Koya Nishikawa
- Minister for Economy, Trade and Industry; Minister of State for Industrial Competitiveness; Minister of State for the Response to the Economic Impact caused by the Nuclear Accident; Minister of State for the Nuclear Damage Compensation and Decommissioning Facilitation Corporation
- Minister for Land, Infrastructure, Transport and Tourism; Minister of State for Water Cycle Policy
- Minister for the Environment; Minister of State for Nuclear Emergency Preparedness
- Minister for Defence; Minister of State for Security Legislation
- Chief Cabinet Secretary; Minister of State for Alleviating the Burden of the Bases in Okinawa
- Minister of State for Reconstruction; Minister of State for Comprehensive Policy Coordination for Revival from the Nuclear Accident at Fukushima
- Chairperson of the National Public Safety Commission; Minister of State for the Abduction Issue; Minister of State for Ocean Policy and Territorial Issues; Minister of State for Building National Resilience; Minister of State for Disaster Management
- Minister of State for Okinawa and Northern Territories Affairs; Minister of State for Consumer Affairs and Food Safety; Minister of State for Science and Technology Policy; Minister of State for Space Policy; Minister of State for Information Technology Policy; Minister of State for the Challenge Again Initiative; Minister of State for the Cool Japan Strategy
- Minister of State for Promoting Women's Active Participation; Minister of State for Administrative Reform; Minister of State for Civil Service Reform; Minister of State for Regulatory Reform; Minister of State for Measures for Declining Birthrate; Minister of State for Gender Equality
- Minister of State for Economic Revitalisation; Minister of State for Total Reform of Social Security and Tax; Minister of State for Economic and Fiscal Policy
- Minister of State for Overcoming Population Decline and Vitalizing Local Economy; Minister of State for National Strategic Special Zones

※ As of December 21, 2014

The Ministries of Japan (行政機関) consist of eleven ministries and the Cabinet Office. Each ministry is headed by a Minister of State, mainly senior legislators appointed from among the members of the Cabinet by the Prime Minister.
The Cabinet Office, formally headed by the Prime Minister, is an agency that handles the day-to-day affairs of the Cabinet. The ministries are the most influential part of the daily-exercised executive power, and since few ministers serve for more than the year or so necessary to get a grip on the organisation, most of that power lies with the senior bureaucrats.

- Cabinet Office
  - ※ Manages the Imperial Household.
- Ministry of Internal Affairs and Communications
- Ministry of Justice
- Ministry of Foreign Affairs
- Reconstruction Agency
- Ministry of Finance
- Ministry of Education, Culture, Sports, Science and Technology (MEXT)
- Ministry of Health, Labour and Welfare
- Ministry of Agriculture, Forestry and Fisheries
- Ministry of Economy, Trade and Industry (METI)
  - ※ Administers the laws relating to patents, utility models, designs, and trademarks.
- Ministry of Land, Infrastructure, Transport and Tourism (MLIT)
- Ministry of the Environment
- Ministry of Defense

※ As of December 21, 2014

The Board of Audit (会計検査院) is unique among government bodies in that it is totally independent of both the Diet and the Cabinet. It reviews government expenditures and submits an annual report to the Diet. Article 90 of the Constitution of Japan and the Board of Audit Act of 1947 give this body substantial independence from both controls.

The legislative organ of Japan is the National Diet (国会). It is a bicameral legislature, composed of a lower house, the House of Representatives, and an upper house, the House of Councillors. Empowered by the Constitution to be "the highest organ of State power" and the sole "law-making organ of the State", its houses are both directly elected under a parallel voting system, and the Constitution ensures that there is no discrimination in the qualifications of members, "whether based on race, creed, sex, social status, family origin, education, property or income". The National Diet therefore reflects the sovereignty of the people, a principle of popular sovereignty whereby supreme power lies with, in this case, the Japanese people.

The Diet's responsibilities include the making of laws, the approval of the annual national budget, the approval of the conclusion of treaties, and the selection of the Prime Minister. In addition, it has the power to initiate draft constitutional amendments, which, if approved, are presented to the people for ratification in a referendum before being promulgated by the Emperor in the name of the people. The Constitution also enables both houses to conduct investigations in relation to government, to demand the presence and testimony of witnesses and the production of records, and to demand the presence of the Prime Minister or the other Ministers of State in order to give answers or explanations whenever so required. The Diet is also able to impeach judges convicted of criminal or irregular conduct. The Constitution does not, however, specify voting methods, the number of members of each house, or other matters pertaining to the method of each member's election; these are thus left to be determined by law.

Under the provisions of the Constitution and by law, all adults aged over 20 are eligible to vote, by secret ballot under universal suffrage, and those elected have certain protections from apprehension while the Diet is in session. Speeches, debates, and votes cast in the Diet also enjoy parliamentary privilege.
Each house is responsible for disciplining its own members, and all deliberations are public unless two-thirds or more of the members present pass a resolution deciding otherwise. The Diet also requires the presence of at least one-third of the membership of either house in order to constitute a quorum. All matters are decided by a majority of those present, unless otherwise stated by the Constitution, and in the case of a tie, the presiding officer has the right to decide the issue. A member cannot be expelled, however, unless two-thirds or more of the members present pass a resolution to that effect.

Under the Constitution, at least one session of the Diet must be convened each year. The Cabinet can also, at will, convoke extraordinary sessions of the Diet, and is required to do so when a quarter or more of the total members of either house demand it. During an election, only the House of Representatives is dissolved. The House of Councillors is not dissolved but only closed, and may, in times of national emergency, be convoked for an emergency session. The Emperor both convokes the Diet and dissolves the House of Representatives, but only does so on the advice of the Cabinet. To become law, a bill must first be passed by both houses of the National Diet, signed by the Ministers of State, countersigned by the Prime Minister, and finally promulgated by the Emperor; the Emperor, however, is given no power to oppose legislation.

House of Representatives

The House of Representatives of Japan (衆議院) is the lower house, whose members are elected every four years, or upon dissolution, for a four-year term. As of December 24, 2014, it has 475 members. Of these, 180 members are elected from 11 multi-member constituencies by a party-list system of proportional representation, and 295 are elected from single-member constituencies. 238 seats are required for a majority. The House of Representatives is the more powerful of the two houses; it is able to override vetoes on bills imposed by the House of Councillors with a two-thirds majority. It can, however, be dissolved by the Prime Minister at will. Members of the house must be of Japanese nationality; those aged 20 years and older may vote, while those aged 25 years and older may run for office in the lower house.

The legislative powers of the House of Representatives are considered greater than those of the House of Councillors. While the House of Councillors has the ability to veto most decisions made by the House of Representatives, some vetoes can only delay matters; these include the legislation of treaties, the budget, and the selection of the Prime Minister. The Prime Minister, and collectively his Cabinet, can in turn dissolve the House of Representatives whenever intended. While the House of Representatives is considered officially dissolved upon the preparation of the dissolution document, the House is only formally dissolved by the dissolution ceremony. The dissolution ceremony of the House is as follows:

- The document is rubber-stamped by the Emperor and wrapped in a purple silk cloth, an indication of a document of a state act done on behalf of the people.
- The document is passed on to the Chief Cabinet Secretary in the Speaker's reception room at the House of Representatives.
- The document is taken to the Chamber for preparation by the General-Secretary.
- The General-Secretary prepares the document for reading by the Speaker.
- The Speaker of the House of Representatives promptly declares the dissolution of the House.
- The House of Representatives is formally dissolved.

House of Councillors

The House of Councillors of Japan (参議院) is the upper house, with half of its members elected every three years for six-year terms. As of December 24, 2014, it has 242 members. Of these, 73 are elected from the 47 prefectural districts by single non-transferable vote, and 48 are elected from a nationwide list by proportional representation with open lists. The House of Councillors cannot be dissolved by the Prime Minister. Members of the house must be of Japanese nationality; those aged 20 years and older may vote, while only those aged 30 years and older may run for office in the upper house.

Because the House of Councillors can veto a decision made by the House of Representatives, it can force the House of Representatives to reconsider its decision. The House of Representatives, however, can still insist on its decision by overriding the veto with a two-thirds majority of its members present. Each year, and when required, the National Diet is convoked in the House of Councillors chamber, on the advice of the Cabinet, for an extraordinary or an ordinary session by the Emperor. A short speech is usually first made by the Speaker of the House of Representatives before the Emperor proceeds to convoke the Diet with his speech from the throne.

The judicial branch of Japan consists of the Supreme Court and four tiers of lower courts: the High Courts, District Courts, Family Courts, and Summary Courts. The judiciary's independence from the executive and legislative branches is guaranteed by the Constitution. Article 76 of the Constitution states that all judges are independent in the exercise of their own conscience and are bound only by the Constitution and the laws. Judges are removable only by public impeachment, and can be removed without impeachment only when judicially declared mentally or physically incompetent to perform their duties. The Constitution also explicitly denies executive organs or agencies any power to administer disciplinary actions against judges. However, a Supreme Court judge may be dismissed by a majority vote in a referendum, which must occur at the first general election of the National Diet's House of Representatives following the judge's appointment, and again at the first general election after each subsequent lapse of ten years.

Trials must be conducted, and judgment declared, publicly, unless the court "unanimously determines publicity to be dangerous to public order or morals"; trials of political offenses, offenses involving the press, and cases concerning the rights of the people as guaranteed by the Constitution may never be conducted privately. Judges are appointed by the Cabinet, with the attestation of the Emperor, while the Chief Justice is appointed by the Emperor after being nominated by the Cabinet, in practice on the recommendation of the outgoing Chief Justice.

The legal system in Japan has historically been influenced by Chinese law, and it developed independently during the Edo period through texts such as the Kujikata Osadamegaki. It changed after the Meiji Restoration, however, and is now largely based on European civil law; notably, the civil code, based on the German model, remains in effect.
A quasi-jury system has recently come into use, and the legal system also includes a bill of rights, in effect since May 3, 1947. The collection of Six Codes makes up the main body of Japanese statutory law. All statutory laws in Japan must be rubber-stamped by the Emperor with the Privy Seal of Japan (天皇御璽), and no law can take effect without the Cabinet's signature, the Prime Minister's countersignature, and the Emperor's promulgation.

The Supreme Court of Japan (最高裁判所) is the court of last resort and has the power of judicial review, being defined by the Constitution as "the court of last resort with power to determine the constitutionality of any law, order, regulation or official act". The Supreme Court is also responsible for nominating judges to lower courts and determining judicial procedures. It also oversees the judicial system, supervises the activities of public prosecutors, and disciplines judges and other judicial personnel.

The High Courts of Japan (高等裁判所) have jurisdiction to hear appeals against judgments rendered by District Courts and Family Courts, excluding cases under the jurisdiction of the Supreme Court. Criminal appeals are handled directly by the High Courts, but civil cases are first handled by District Courts. There are eight High Courts in Japan: the Tokyo, Osaka, Nagoya, Hiroshima, Fukuoka, Sendai, Sapporo, and Takamatsu High Courts.

The penal system of Japan (矯正施設) is operated by the Ministry of Justice. It is part of the criminal justice system and is intended to resocialize, reform, and rehabilitate offenders. The ministry's Correctional Bureau administers the adult prison system, the juvenile correctional system, and three of the women's guidance homes, while the Rehabilitation Bureau operates the probation and parole systems.

Local government in Japan (地方公共団体) is unitary, with local jurisdictions largely dependent on the national government financially. Under the Constitution, matters pertaining to local self-government are determined by law, specifically the Local Autonomy Law. The Ministry of Internal Affairs and Communications intervenes significantly in local government, as do other ministries. This intervention is chiefly financial, because many local government jobs need funding initiated by national ministries. This state of affairs is dubbed "thirty-percent autonomy". The result of this power is a high level of organizational and policy standardization among the different local jurisdictions, even as they try to preserve the uniqueness of their prefecture, city, or town. Some of the more collectivist jurisdictions, such as Tokyo and Kyoto, have experimented with policies in such areas as social welfare that were later adopted by the national government.

Japan is divided into forty-seven administrative divisions, the prefectures: one metropolitan district (Tokyo), two urban prefectures (Kyoto and Osaka), forty-three rural prefectures, and one "district", Hokkaidō. Large cities are subdivided into wards, and further split into towns, or precincts, or subprefectures and counties. Cities are self-governing units administered independently of the larger jurisdictions within which they are located. In order to attain city status, a jurisdiction must have at least 30,000 inhabitants, 60 percent of whom are engaged in urban occupations. There are self-governing towns outside the cities, as well as precincts of urban wards. Like the cities, each has its own elected mayor and assembly.
Villages are the smallest self-governing entities in rural areas. They often consist of a number of rural hamlets containing several thousand people connected to one another through the formally imposed framework of village administration. Villages have mayors and councils elected to four-year terms.

Structure of Local Government

All prefectural and municipal governments in Japan are organized following the Local Autonomy Law, a statute applied nationwide since 1947. Each jurisdiction has a chief executive, called a governor (知事, chiji) in prefectures and a mayor (市町村長, shichōsonchō) in municipalities. Most jurisdictions also have a unicameral assembly (議会, gikai), although towns and villages may opt for direct governance by citizens in a general assembly (総会, sōkai). Both the executive and the assembly are elected by popular vote every four years.

Local governments follow a modified version of the separation of powers used in the national government. An assembly may pass a vote of no confidence in the executive, in which case the executive must either dissolve the assembly within ten days or automatically lose office. Following the next election, however, the executive remains in office unless the new assembly again passes a no-confidence resolution.

The primary methods of local lawmaking are local ordinances (条例, jōrei) and local regulations (規則, kisoku). Ordinances, similar to statutes in the national system, are passed by the assembly and may impose limited criminal penalties for violations (up to 2 years in prison and/or 1 million yen in fines). Regulations, similar to cabinet orders in the national system, are passed by the executive unilaterally, are superseded by any conflicting ordinances, and may only impose a fine of up to 50,000 yen.

Local governments also generally have multiple committees, such as school boards, public safety committees (responsible for overseeing the police), personnel committees, election committees, and auditing committees. These may be directly elected or chosen by the assembly, the executive, or both.

All prefectures are required to maintain departments of general affairs, finance, welfare, health, and labor. Departments of agriculture, fisheries, forestry, commerce, and industry are optional, depending on local needs. The governor is responsible for all activities supported through local taxation or the national government.
In this differentiation activity, students solve and complete 20 various types of problems. First, they find each expression using explicit and implicit differentiation. Then, they find the equation of the tangent line to a given curve at given points.

Making Piecewise Functions Continuous and Differentiable
Young scholars explore the concept of piecewise functions. In this piecewise functions lesson, students discuss how to make a piecewise function continuous and differentiable. Young scholars use their TI-89 to find the limit of the...
11th - 12th Math

Introduction to Calculus
This heady calculus text covers the subjects of differential and integral calculus with rigorous detail, culminating in a chapter of physics and engineering applications. A particular emphasis on classic proof meshes with modern graphs,...
11th - Higher Ed Math CCSS: Adaptable

Rolle's Theorem and the Mean Value Theorem
Using the derivative to apply the Mean Value Theorem and its more specific cousin, Rolle's Theorem, is valuable practice in determining differentiability and continuity on an interval. This presentation and accompanying worksheet walk...
10th - 12th Math CCSS: Adaptable

Study Guide for the Advanced Placement Calculus AB Examination
Is this going to be on the test? A calculus study guide provides an organized list of important topics and a few examples with answers. The topics include elementary functions, limits, differential calculus, and integral calculus,...
11th - 12th Math CCSS: Adaptable
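To illustrate the kind of implicit-differentiation and tangent-line work the activity above asks for, here is a minimal Python sketch using SymPy; the curve and the point are made up for the example, not taken from the worksheet:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Implicit curve: the circle x^2 + y^2 = 25 (illustrative choice).
F = x**2 + y**2 - 25

# Implicit differentiation: dy/dx = -F_x / F_y.
dydx = -sp.diff(F, x) / sp.diff(F, y)
print(sp.simplify(dydx))          # -x/y

# Tangent line at the point (3, 4), which lies on the curve.
x0, y0 = 3, 4
m = dydx.subs({x: x0, y: y0})     # slope = -3/4
tangent = y0 + m * (x - x0)
print(sp.expand(tangent))         # 25/4 - 3*x/4
```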
The concept of inverse matrix is somewhat analogous to that of the reciprocal of a number. If $a$ is a nonzero number, then $1/a$ is its reciprocal. The fraction $1/a$ is often written as $a^{-1}$. Aside from the fact that only nonzero numbers have reciprocals, the key property of a nonzero number and its reciprocal is that their product is 1, that is, $a \cdot a^{-1} = 1$. This makes $a^{-1}$ the multiplicative inverse of the nonzero number $a$.

Only nonsingular square matrices $A$ have inverses. (A square matrix is nonsingular if and only if its determinant is nonzero.) When $A$ is nonsingular, its inverse, denoted $A^{-1}$, is unique and has the key property that

$$A A^{-1} = I = A^{-1} A,$$

where $I$ denotes the $n \times n$ identity matrix.

The determinant of a square matrix $A$ (of any order) is a single scalar (number), say $a = \det(A)$. If this number is nonzero, the matrix is nonsingular and accordingly has an inverse. Moreover, when $\det(A) \neq 0$, the determinant of the inverse is the reciprocal of $\det(A)$; that is, $\det(A^{-1}) = (\det(A))^{-1}$.

A tiny example will illustrate these concepts, albeit somewhat too simplistically. Let

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}.$$

Then the determinant of $A$ is the number $\det(A) = a_{11}a_{22} - a_{12}a_{21}$. If $\det(A) \neq 0$, then

$$A^{-1} = \frac{1}{\det(A)} \begin{pmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{pmatrix}.$$

As a check, one can verify that $A A^{-1} = I$. This formula for the inverse of a $2 \times 2$ matrix is useful for hand calculations, but its generalization to matrices of larger order is far more difficult conceptually and computationally. Indeed, the formula is

$$A^{-1} = \frac{1}{\det(A)} \, \operatorname{adj}(A),$$

where $\operatorname{adj}(A)$ is the so-called adjoint (or adjugate) of $A$. The adjoint of $A$ is the "transposed matrix of cofactors" of $A$, that is, the matrix $B$ with elements $b_{ij} = (-1)^{i+j} \det(A(j \mid i))$, where $A(j \mid i)$ denotes the submatrix of $A$ obtained by deleting row $j$ and column $i$.

One way to carry out the inversion of a nonsingular matrix $A$ is to consider the matrix equation $AX = I$, where $X$ stands for $A^{-1}$. If $A$ is $n \times n$, this equation can be viewed as a set of $n$ separate equations of the form $Ax = b$, where $x$ is successively taken as the $j$th column of the unknown matrix $X$ and $b$ as the $j$th column of $I$ ($j = 1, \dots, n$). These equations can then be solved by Cramer's rule.

The concept of the inverse of a matrix is of great theoretical value, but as may be appreciated from the above discussion, its computation can be problematic, just from the standpoint of sheer labor, not to mention issues of numerical reliability. Fortunately, there are circumstances in which it is not necessary to know the inverse of an $n \times n$ matrix $A$ in order to solve an equation like $Ax = b$. One such circumstance is where the nonsingular matrix $A$ is lower (or upper) triangular and all its diagonal elements are nonzero. In the case of lower triangular matrices, this means (i) $a_{ii} \neq 0$ for all $i = 1, \dots, n$, and (ii) $a_{ij} = 0$ whenever $j > i$. Consider, for instance, a $3 \times 3$ lower triangular matrix whose diagonal elements are $4$, $-1$, and $5$; the fact that these are all nonzero makes the matrix nonsingular.

When $A$ is nonsingular and lower triangular, solving the equation $Ax = b$ is done by starting with the top equation $a_{11}x_1 = b_1$ and solving it for $x_1$; in particular, $x_1 = b_1 / a_{11}$. This value is substituted into all the remaining equations, and the process is repeated for the next equation, giving

$$x_2 = \frac{b_2 - a_{21}(b_1/a_{11})}{a_{22}}.$$

This sort of process is repeated until the last component of $x$ is computed. The technique is called forward substitution. There is an analogous procedure called back substitution for nonsingular upper triangular matrices. Transforming a system of linear equations to triangular form makes its solution fairly uncomplicated.
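A minimal sketch of forward substitution in Python with NumPy. The diagonal entries 4, -1, and 5 match the example above; the off-diagonal entries and the right-hand side are made up for illustration:

```python
import numpy as np

def forward_substitution(A, b):
    """Solve A x = b for a nonsingular lower triangular matrix A.

    Works down the rows: each x[i] is b[i] minus the contribution of
    the already-computed components, divided by the (nonzero)
    diagonal element a_ii.
    """
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - A[i, :i] @ x[:i]) / A[i, i]
    return x

A = np.array([[4.0,  0.0, 0.0],
              [2.0, -1.0, 0.0],
              [1.0,  3.0, 5.0]])   # diagonal 4, -1, 5: nonsingular
b = np.array([8.0, 3.0, 10.0])

x = forward_substitution(A, b)
print(x)                      # [2. 1. 1.]
print(np.allclose(A @ x, b))  # True
```

Back substitution for upper triangular matrices is the mirror image: start from the bottom row and work upward.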
Matrix inversion is thought by some to be a methodological cornerstone of regression analysis. The desire to invert a matrix typically arises in solving the normal equations generated by applying the method of ordinary least squares (OLS) to the estimation of parameters in a linear regression model. It might be postulated that a linear relationship

$$Y = \beta_1 + \beta_2 X_2 + \dots + \beta_k X_k \qquad (1)$$

holds for some set of parameters $\beta_1, \dots, \beta_k$. To determine these unknown parameters, one runs a set of, say, $n$ experiments by first choosing values $X_{i2}, \dots, X_{ik}$ and then recording the outcome $Y_i$, for $i = 1, \dots, n$. In doing so, one includes an error term $U_i$ for the $i$th experiment. This is needed because, for a specific set of parameter values (estimates), there may be no solution to the set of simultaneous equations induced by (1). Thus, one writes

$$Y_i = \beta_1 + \beta_2 X_{i2} + \dots + \beta_k X_{ik} + U_i, \qquad i = 1, \dots, n. \qquad (2)$$

The OLS method seeks values of $\beta_1, \dots, \beta_k$ that minimize the sum of the squared errors, that is, $\sum_{i=1}^{n} U_i^2$. With $Y$, $X$, $\beta$, and $U$ assembled in matrix notation, this leads to the OLS problem of minimizing $Y'Y - 2Y'X\beta + \beta'X'X\beta$. The first-order necessary and sufficient conditions for the minimizing vector $\beta$ are the so-called normal equations $X'X\beta = X'Y$. If the matrix $X'X$ is nonsingular, then

$$\hat{\beta} = (X'X)^{-1}X'Y. \qquad (3)$$

Care needs to be taken in solving the normal equations. It can happen that $X'X$ is singular; in that case, its inverse does not exist. Yet even when $X'X$ is invertible, it is not always advisable to solve for $\beta$ as in (3). For numerical reasons, this is particularly so when the order of the matrix is very large.

SEE ALSO Determinants; Hessian Matrix; Jacobian Matrix; Matrix Algebra; Regression Analysis

Richard W. Cottle
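Returning to the numerical advice above: a small sketch with made-up data comparing the textbook formula (3) against routines that avoid forming the explicit inverse. The data and coefficients are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: an intercept plus two regressors, known coefficients.
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0, -0.5])
Y = X @ beta_true + 0.1 * rng.normal(size=n)

# Formula (3): explicit inverse of X'X -- fine for tiny,
# well-conditioned problems, but numerically risky in general.
beta_inv = np.linalg.inv(X.T @ X) @ (X.T @ Y)

# Better: solve the normal equations directly, or better still use a
# least-squares routine that never forms X'X at all.
beta_solve = np.linalg.solve(X.T @ X, X.T @ Y)
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(np.allclose(beta_inv, beta_lstsq))   # True on this easy problem
print(beta_lstsq)                          # close to [1.0, 2.0, -0.5]
```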
A micrometeorite (MM) is a microscopic object from space that falls to Earth and survives the trip through the atmosphere. About 60-100 tons of micrometeorites hit the Earth per day, much more than all other meteorites and space debris put together (see the graphic below). Despite that, very few have been found outside of Antarctica, a few deserts, and the deep ocean... until 2015, when Jon Larsen from Norway discovered the first "urban" micrometeorite. Since they are tiny (only a few times the width of a hair!), they are difficult to find (see the image below).

Micrometeorites enter the atmosphere at 25,000 to 150,000 mph or more, usually heat up due to friction and ram pressure (compression), and become spheroidal (not usually perfect spheres, but close). The textures seen on their surfaces are formed by the heating they receive while passing through the atmosphere. Depending on how micrometeorites 'hit' the atmosphere, this heating can range from about 400 to 2,800 degrees C (roughly 750 to 5,000 degrees F).

In order to identify your micrometeorite, you must take an image, since microscopes at high power only show 'slices' of objects in focus. By taking hundreds of images and focus stacking (using a program to take the in-focus parts of the images and combine them into one fully in-focus image), you can get an image to use for identification (see the image series below).

Distribution of objects from space by size in mm and mass. As you can see, most space debris is tiny - micrometeorites

Searching for micrometeorites on rooftops and gutters can put you at risk. If you decide to try this, you bear full responsibility for your actions!
· Find a flat roof or gutters
· Use a strong magnet to search for magnetic particles
· Double bag the magnet to prevent particles from sticking to it
· Collect material - wash thoroughly with laundry soap and 'latex' gloves
· Let water settle and pour off/repeat until water is clear
· Dry material
· Sieve or screen material (window screen would work in a pinch)
· Collect smaller material and examine it under a 20-60x microscope/lens
· Examine any spheroids found under a 100-500x microscope
· Take focus-stacked images of any possible micrometeorites
· Remember that MANY objects look like micrometeorites but are human-made!

Strong magnet in double plastic bags.

Micrometeorites fall into three very broad classification categories. The four S-type classes we can identify are distinguished by texture and mineralogy. The S-type micrometeorites we find have compositions similar to some meteorites (chondrites), and this is reflected in their analysis; almost all micrometeorites have a 'chondritic' composition. The SEM (scanning electron microscopy) images show the micrometeorites in fine detail, and the EDS (energy-dispersive X-ray spectroscopy) charts show their chemistry. As with the chondrites, the charts show high oxygen (O), magnesium (Mg), and silicon (Si), as well as much lower amounts of iron (Fe) and nickel (Ni). Thanks to Al Falster at the Maine Mineral and Gem Museum and Dr. Tasha Dunn at Colby for help with the SEM and EDS analyses below!

SEM of MM 040, a barred olivine micrometeorite showing parallel olivine crystals

1. To gain a better understanding of our solar system - micrometeorites were formed from the material that formed our solar system and can tell us more about that time in our solar system's and Earth's history.
2. They are beautiful and interesting - micrometeorites have amazing flight patterns 'grown' onto their surfaces, and yet many retain their structure and compositions within. Each is like a snowflake, with no two alike.
3. There are a lot of things we don't know about micrometeorites - they have been around for billions of years, and since they are by far the most abundant extraterrestrial material on Earth, we should endeavor to learn more about them, and perhaps ourselves.

• Delivery to Earth:
– Meteorites are blasted off asteroids and float around in space in a random process that may eventually lead to crossing Earth's orbit.
– Micrometeorites are either blasted off or stream off the surface, and the light from the Sun affects their motion - an effect called Poynting-Robertson light drag. This slows the objects down and makes them move inward, in slow spiral orbits, toward the Sun. This provides a reliable method of delivering material to the Earth within 10,000 years. Thus we get a better sampling of asteroids and other objects in our solar system than we do with meteorites.
• Micrometeorites deliver organic molecules and elements to Earth and may be an important source of nutrients for organisms in the deep ocean.
• When micrometeorites pass through Earth's atmosphere, they react with oxygen and give us a record of the oxygen in the upper atmosphere.

Material from: www.quantamagazine.org/matt-genge-uses-dust-from-space-to-tell-the-story-of-the-solar-system-20210204/
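Since identification leans on the focus-stacking step described above, here is a minimal sketch of how that step can be done in Python with OpenCV and NumPy. It assumes the images are already aligned and the same size, and the file names are placeholders:

```python
import cv2
import numpy as np

def focus_stack(paths):
    """Combine a series of images into one fully in-focus image.

    For each image, a Laplacian filter measures local sharpness;
    each output pixel is then taken from whichever source image is
    sharpest at that location.
    """
    images = [cv2.imread(p) for p in paths]   # assumed pre-aligned
    sharpness = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress noise
        sharpness.append(np.abs(cv2.Laplacian(blur, cv2.CV_64F)))
    sharpness = np.stack(sharpness)           # shape (n, h, w)
    best = np.argmax(sharpness, axis=0)       # sharpest source per pixel
    stack = np.stack(images)                  # shape (n, h, w, 3)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]            # gather per-pixel winners

# Illustrative usage with placeholder file names:
# result = focus_stack(["mm_01.jpg", "mm_02.jpg", "mm_03.jpg"])
# cv2.imwrite("stacked.jpg", result)
```

Dedicated focus-stacking programs add image alignment and smoother blending between source images, but the per-pixel sharpest-wins idea above is the core of the technique.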
Naming of comets

The simplest system names comets after the year in which they were observed (e.g. the Great Comet of 1680). Later a convention arose of using the names of people associated with the discovery (e.g. Comet Hale–Bopp) or the first detailed study (e.g. Halley's Comet) of each comet. During the twentieth century, improvements in technology and dedicated searches led to a massive increase in the number of comet discoveries, which led to the creation of a numeric designation scheme. The original scheme assigned codes in the order that comets passed perihelion (e.g. Comet 1970 II). This scheme operated until 1994, when continued increases in the numbers of comets found each year resulted in the creation of a new scheme. This system, which is still in operation, assigns a code based on the type of orbit and the date of discovery (e.g. C/2012 S1).

Named by year

Before any systematic naming convention was adopted, comets were named in a variety of ways. Prior to the early 20th century, most comets were simply referred to by the year when they appeared, e.g. the "Comet of 1702". Particularly bright comets which came to public attention (i.e. beyond the astronomy community) would be described as the great comet of that year, such as the "Great Comet of 1680" and "Great Comet of 1882". If more than one great comet appeared in a single year, the month would be used for disambiguation, e.g. the "Great January comet of 1910". Occasionally other additional adjectives might be used.

Named after people

Possibly the earliest comet to be named after a person was Caesar's Comet in 44 BC, which was so named because it was observed shortly after the assassination of Julius Caesar and was interpreted as a sign of his deification. Later eponymous comets were named after the astronomer(s) who conducted detailed investigations on them, or later those who discovered the comet. After Edmond Halley demonstrated that the comets of 1531, 1607, and 1682 were the same body and successfully predicted its return in 1759, that comet became known as Halley's Comet. Similarly, the second and third known periodic comets, Encke's Comet and Biela's Comet, were named after the astronomers who calculated their orbits rather than their original discoverers. Later, periodic comets were usually named after their discoverers, but comets that had appeared only once continued to be referred to by the year of their apparition.

The first comet to be named after the person who discovered it, rather than the one who calculated its orbit, was Comet Faye, discovered by Hervé Faye in 1843. However, this convention did not become widespread until the early 20th century. It remains common today. A comet can be named after up to three discoverers, either working together as a team or making independent discoveries (without knowledge of the other investigator's work). For example, Comet Swift–Tuttle was first found by Lewis Swift and then by Horace Parnell Tuttle a few days later; the discoveries were made independently, and so both are honoured in the name. In recent years many comets have been discovered by large teams of astronomers, in which case comets may be named for the collaboration or instrument they used. For example, 160P/LINEAR was discovered by the Lincoln Near-Earth Asteroid Research (LINEAR) team. Comet IRAS–Araki–Alcock was discovered independently by a team using the Infrared Astronomy Satellite (IRAS) and the amateur astronomers Genichi Araki and George Alcock.
In the past, when multiple comets were discovered by the same individual, group of individuals, or team, the comets' names were distinguished by adding a numeral to the discoverers' names (but only for periodic comets); thus Comets Shoemaker–Levy 1 to 9 (discovered by Carolyn Shoemaker, Eugene Shoemaker, and David Levy). Today, the large numbers of comets discovered by some instruments make this system impractical, and no attempt is made to ensure that each comet is given a unique name. Instead, the comets' systematic designations are used to avoid confusion.

Until 1994, comets were first given a provisional designation consisting of the year of their discovery followed by a lowercase letter indicating the order of discovery in that year (for example, Comet 1969i (Bennett) was the 9th comet discovered in 1969). Once the comet had been observed through perihelion and its orbit had been established, the comet was given a permanent designation of the year of its perihelion, followed by a Roman numeral indicating its order of perihelion passage in that year, so that Comet 1969i became Comet 1970 II (it was the second comet to pass perihelion in 1970).

Increasing numbers of comet discoveries made this procedure awkward, as did the delay between discovery and perihelion passage before the permanent name could be assigned. As a result, in 1994 the International Astronomical Union approved a new naming system. Comets are now provisionally designated by the year of their discovery followed by a letter indicating the half-month of the discovery and a number indicating the order of discovery (a system similar to that already used for asteroids). For example, the fourth comet discovered in the second half of February 2006 was designated 2006 D4. Prefixes are then added to indicate the nature of the comet:
- P/ indicates a periodic comet (defined for these purposes as any comet with an orbital period of less than 200 years or confirmed observations at more than one perihelion passage).
- C/ indicates a non-periodic comet (defined as any comet that is not periodic according to the preceding definition).
- X/ indicates a comet for which no reliable orbit could be calculated (generally, historical comets).
- D/ indicates a periodic comet that has disappeared, broken up, or been lost.
- A/ indicates an object that was mistakenly identified as a comet, but is actually a minor planet. Only three objects have been classified as such: 'Oumuamua (A/2017 U1), A/2017 U7, and A/2018 C2.
- I/ indicates an interstellar object (added to the system in early November 2017).

For example, Comet Hale–Bopp's designation is C/1995 O1. After their second observed perihelion passage, designations of periodic comets are given an additional prefix number, indicating the order of their discovery. Halley's Comet, the first comet identified as periodic, has the systematic designation 1P/1682 Q1. Separately from the systematic numbered designation, comets are routinely assigned a standard name by the IAU, which is almost always the name or names of their discoverers. When a comet has only received a provisional designation, the "name" of the comet is typically only included parenthetically after this designation, if at all. However, when a periodic comet receives a number and a permanent designation, the comet is usually notated by using its given name after its number and prefix.
For instance, the unnumbered periodic comet P/2011 NO1 (Elenin) and the non-periodic comet C/2007 E2 (Lovejoy) are notated with their provisional systematic designation followed by their name in parentheses; however, the numbered periodic comet 67P/Churyumov–Gerasimenko is given a permanent designation of its numbered prefix ("67P/") followed by its name ("Churyumov–Gerasimenko"). Interstellar objects are also numbered in order of discovery and can receive names, as well as a systematic designation. The first example was 1I/ʻOumuamua, which has the formal designation 1I/2017 U1 (ʻOumuamua).
Relationship with asteroid designations
Sometimes it is unclear whether a newly discovered object is a comet or an asteroid (which would receive a minor planet designation). Any object that was initially misclassified as an asteroid but quickly corrected to a comet incorporates the minor planet designation into the cometary one. This can lead to some odd names, such as 227P/Catalina–LINEAR, whose alternative name is 227P/2004 EW38 (Catalina–LINEAR), derived from the original provisional minor planet designation 2004 EW38. In other cases, a known asteroid can begin to exhibit cometary characteristics (such as developing a coma) and thus be classified as both an asteroid and a comet. These receive designations under both systems. Only eight such bodies are cross-listed as both comets and asteroids: 2060 Chiron (95P/Chiron), 4015 Wilson–Harrington (107P/Wilson–Harrington), 7968 Elst–Pizarro (133P/Elst–Pizarro), 60558 Echeclus (174P/Echeclus), 118401 LINEAR (176P/LINEAR), (300163) 2006 VW139 (288P/2006 VW139), (323137) 2003 BM80 (282P/2003 BM80), and (457175) 2008 GO98 (362P/2008 GO98).
NASA wants to build a floating city above the clouds of Venus
Artistic concept of the permanent city. NASA Langley Research Center
A number of agencies, including, of course, NASA, are focusing solar system exploration efforts on Mars. At first glance, though, Mars doesn't really seem like the best candidate. Venus is much closer -- at a distance that ranges between 38 million kilometres and 261 million kilometres, compared to Mars' 56 million to 401 million kilometres, it's Earth's closest neighbour. It's also comparable in size to Earth -- a radius of 6,052km to Earth's 6,371km -- and has similar density and chemical composition. But everything else about it makes it almost utterly unvisitable. While probes have been sent to the planet's surface, they lasted, at most, just two hours before surface conditions on Venus destroyed them. These conditions include an atmospheric pressure up to 92 times greater than Earth's; a mean temperature of 462 degrees Celsius (863 degrees Fahrenheit); extreme volcanic activity; an extremely dense atmosphere consisting mostly of carbon dioxide, with a small amount of nitrogen; and a cloud layer made up of sulphuric acid. In short, Venus? Not a top holiday destination, really. NASA thinks it might have a solution that will allow sending humans up to check it out, though: Cloud City. The High Altitude Venus Operational Concept -- HAVOC -- is a conceptual spacecraft designed by a team at the Systems Analysis and Concepts Directorate at NASA Langley Research Center for the purposes of Venusian exploration. This lighter-than-air craft would be designed to sit above the acidic clouds for a period of around 30 days, allowing a team of astronauts to collect data about the planet's atmosphere. While the surface of Venus would destroy a human, hovering above its clouds at an altitude of around 50 kilometres (30 miles) offers a set of conditions similar to Earth's. The atmospheric pressure there is comparable, and gravity is only slightly lower -- which would allow longer-term stays, effectively eliminating the ailments that occur during long-term stays in zero G. The temperature is about 75 degrees Celsius, which is hotter than is strictly comfortable, but would still be manageable. Finally, the atmosphere at that altitude offers protection from solar radiation comparable to living in Canada.
Artist's concept of the cockpit of the crewed zeppelin. NASA Langley Research Center
The mission would, NASA outlined to IEEE Spectrum, begin with a robotic probe deployed to Venus to perform initial checks and investigations. With the return of this data, a crewed mission would spend 30 days floating above the planet, followed by missions that would see teams of two astronauts spending a year each. The end goal would be a permanent human presence in a floating cloud city. While this city would be fixed, exploration would be made possible with a mobile unit -- a crewed, 130-metre-long Zeppelin filled with helium, accompanied by a smaller, 31-metre robotic Zeppelin. This Zeppelin would take advantage of Venus' closer proximity to the sun: its top would be adorned with over 1,000 square metres of solar panels for power. And it's all designed to be built using existing or near-to-existing technology -- although of course it's at least a decade or two from actual implementation. But, should it come to fruition, it may provide another way to see humanity inhabit the universe beyond Earth.
The next step would be performing simulations of Venusian conditions on Earth -- and NASA is already across it, with a paper that outlines the current capabilities and facilities for performing just such tests. "Venus has value as a destination in and of itself for exploration and colonization, but it's also complementary to current Mars plans," said Chris Jones of the Langley Research Center. "If you did Venus first, you could get a leg up on advancing those technologies and those capabilities ahead of doing a human-scale Mars mission. It's a chance to do a practice run, if you will, of going to Mars."
A new high-coverage DNA sequencing method reconstructs the full genome of Denisovans, relatives of both Neandertals and humans, from genetic fragments in a single finger bone
FRAGMENT OF A FINGER: This replica of the Denisovan finger bone shows just how small a sample the researchers had to extract DNA from. Image courtesy of Max Planck Institute for Evolutionary Anthropology
Tens of thousands of years ago modern humans crossed paths with the group of hominins known as the Neandertals. Researchers now think they also met another, less-known group called the Denisovans. The only traces of them found so far are a single finger bone and two teeth, but those fragments have been enough to cradle wisps of Denisovan DNA for tens of thousands of years inside a Siberian cave. Now a team of scientists has been able to reconstruct their entire genome from these meager fragments. The analysis adds new twists to prevailing notions about archaic human history. "Denisova is a big surprise," says John Hawks, a biological anthropologist at the University of Wisconsin–Madison who was not involved in the new research. On its own, a simple finger bone in a cave would have been assumed to belong to a human, Neandertal or other hominin. But when researchers first sequenced a small section of its DNA in 2010—a section that covered about 1.9 percent of the genome—they were able to tell that the specimen was neither. "It was the first time a new group of distinct humans was discovered" via genetic analysis rather than by anatomical description, said Svante Pääbo, a researcher at the Max Planck Institute (M.P.I.) for Evolutionary Anthropology in Germany, in a conference call with reporters. Now Pääbo and his colleagues have devised a new method of genetic analysis that allowed them to reconstruct the entire Denisovan genome, with nearly all of the genome sequenced approximately 30 times over, akin to what can be done for modern humans. Within this genome, researchers have found clues to not only this group of mysterious hominins but also our own evolutionary past. Denisovans appear to have been more closely related to Neandertals than to humans, but the evidence also suggests that Denisovans and humans interbred. The new analysis also suggests new ways that early humans may have spread across the globe. The findings were published online August 30 in Science. Who were the Denisovans? Unfortunately, the Denisovan genome doesn't provide many more clues about what this hominin looked like than a pinky bone does. The researchers conclude only that Denisovans likely had dark skin. They also note that there are alleles "consistent" with those known to code for brown hair and brown eyes. Other than that, they cannot say. Yet the new genetic analysis does support the hypothesis that Neandertals and Denisovans were more closely related to one another than either was to modern humans. The analysis suggests that the modern human line diverged from what would become the Denisovan line as long as 700,000 years ago—but possibly as recently as 170,000 years ago. Denisovans also interbred with ancient modern humans, according to Pääbo and his team. Even though the sole fossil specimen was found in the mountains of Siberia, contemporary humans from Melanesia (a region in the South Pacific) seem to be the most likely to harbor Denisovan DNA. The researchers estimate that some 6 percent of contemporary Papuans' genomes come from Denisovans.
Australian aborigines and people from Southeast Asian islands also have traces of Denisovan DNA. This suggests that the two groups might have crossed paths in central Asia and that the modern humans then continued on to colonize the islands of Oceania. Yet contemporary residents of mainland Asia do not seem to possess Denisovan traces in their DNA, a "very curious" fact, Hawks says. "We're looking at a very interesting population scenario"—one that does not jibe entirely with what we thought we knew about how waves of modern human populations migrated into and through Asia and out to Oceania's islands. This new genetic evidence might indicate that an early wave of humans moved through Asia, mixed with Denisovans and then relocated to the islands—to be replaced in Asia by later waves of human migrants from Africa. "It's not totally obvious that that works really well with what we know about the diversity of Asians and Australians," Hawks says. But further genetic analysis and study should help to clarify these early migrations. Just as with modern Homo sapiens, the genome of a single individual cannot tell us exactly what genes and traits are specific to all Denisovans. Yet just one genome can reveal the genetic diversity of an entire population. Each of our genomes contains information about generations far beyond those of our parents and grandparents, said David Reich, a researcher at the Massachusetts Institute of Technology–Harvard University Broad Institute and a co-author on the paper. Scientists can compare and contrast the set of genes on each chromosome—passed down from each parent—and extrapolate this process back through the generations. "You contain a multitude of ancestors within you," Reich said, borrowing from Walt Whitman. The new research reveals that the Denisovans had low genetic diversity—just 26 to 33 percent of the genetic diversity of contemporary European or Asian populations. The Denisovan population seems to have been very small for hundreds of thousands of years, with relatively little genetic diversity throughout its history. Curiously, the researchers noted in their paper, the Denisovan population shows "a drastic decline in size at the time when the modern human population began to expand." Why were modern humans so successful whereas Denisovans (and Neandertals) went extinct? Pääbo and his co-authors could not resist looking into the genetic factors that might be at work. Some of the key differences, they note, center around brain development and synaptic connectivity. "It makes sense that what pops up is connectivity in the brain," Pääbo noted. Neandertals had a brain size-to-body ratio similar to ours, so rather than cranial capacity, it might have been underlying neurological differences that could explain why we flourished while they died out, he said. Hawks counters that it might be a little early to begin drawing conclusions about human brain evolution from genetic comparisons with archaic relatives. Decoding the genetic map of the brain and cognition from a genome is still a long way off, he notes—unraveling skin color is difficult enough given our current technologies and knowledge.
New sequencing for old DNA
The Denisovan results rely on a new method of genetic analysis developed by paper co-author Matthias Meyer, also of M.P.I. The procedure allows the researchers to sequence the full genome using single strands of genetic material rather than the typical double strands required.
The technique, which they are calling single-stranded library preparation, involves stripping the genetic material down to individual strands for copying, and it avoids a purification step that can lose precious genetic material. The finger bone—just one disklike phalanx—is so small that it does not contain enough usable carbon for dating, the researchers note. But by counting the number of genetic mutations in a genome and comparing it with those of living relatives, such as modern humans and chimpanzees, under assumed mutation rates since the split from a last common ancestor, "for the first time you can try to estimate this number into a date and provide molecular dating of the fossil," Meyer said. With the new resolution, the researchers estimate the age of the bone at 74,000 to 82,000 years. But that is a wide window, and previous archaeological estimates for the bone are a bit younger, ranging from 30,000 to 50,000 years old. These genetic estimations are also still in limbo because of ongoing debate about the average rate of genetic mutations over time, which could skew the age. "Nevertheless," the researchers noted in their paper, "the results suggest that in the future it will be possible to determine dates of fossils based on genome sequences." This new sequencing approach can be used for any DNA that is too fragmented to be read well through more traditional methods. Meyer noted that it could come in handy for analysis of both ancient DNA and contemporary forensic evidence, which also often contains only fragments of genetic material. Hawks is excited about the new sequencing technology. It is also helpful to have a technology developed specifically for the evolutionary field, he notes. "We're always using the new techniques from other fields, and this is a case where the new technique is developed just for this." Hawks himself has heard from the researchers who have worked with the Denisovan samples that "the Denisovan pinky is just extraordinary" in terms of the amount of DNA preserved in it. Most bone fragments would be expected to contain less than 5 percent of the individual's endogenous DNA, but this fortuitous finger had a surprising 70 percent, the researchers noted in the study. And many Neandertal fragments have been preserved in vastly different states—many are far worse off than this Denisovan finger bone. The new sequencing approach could also improve our understanding of known specimens and the evolutionary landscape as a whole. "It's going to increase the yield from other fossils," Hawks notes. Many of the Neandertal specimens, for example, have only a small fraction of their genome sequenced. "If we can go from 2 percent to the whole genome, that opens up a lot more," Hawks says. "Going back further in time will be exciting," he notes, and this new technique should allow us to do that. "There's a huge race on—it's exciting." The Denisovans might be the first non-Neandertal archaic human to be sequenced, but they are likely not going to be the last. The researchers behind this new study are already at work using the new single-strand sequencing technique to reexamine older specimens. (Meyer said they were working on reassessing old samples but would not specify which specimens they were studying—the mysterious "hobbit" H. floresiensis would be a worthy candidate.) Pääbo suggests Asia as a particularly promising location to look for other Denisovan-like groups. "I would be surprised if there were not other groups to be found there in the future," he said.
Taking this technique to specimens from Africa is also likely to yield some exciting results, Hawks says. Africa, with its rich human evolutionary history, holds the greatest genetic diversity. The genomes of contemporary pygmy and hunter–gatherer tribes in Africa, for example, have roughly as many differences as do those of European modern humans and Neandertals. So “any ancient specimen that we find in Africa might be as different from us as Neandertals,” Hawks says. “Anything we find from the right place might be another Denisovan.”
Fast Fourier transform
A fast Fourier transform (FFT) is an algorithm that samples a signal over a period of time (or space) and divides it into its frequency components. These components are single sinusoidal oscillations at distinct frequencies, each with its own amplitude and phase. This transformation is illustrated in Diagram 1. Over the time period measured in the diagram, the signal contains 3 distinct dominant frequencies. An FFT algorithm computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IFFT). Fourier analysis converts a signal from its original domain to a representation in the frequency domain and vice versa. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors. As a result, it reduces the complexity of computing the DFT from O(N²), which arises if one simply applies the definition of the DFT, to O(N log N), where N is the data size. Fast Fourier transforms are widely used for many applications in engineering, science, and mathematics. The basic ideas were popularized in 1965, but some algorithms had been derived as early as 1805. In 1994, Gilbert Strang described the FFT as "the most important numerical algorithm of our lifetime", and it was included in the Top 10 Algorithms of the 20th Century by the IEEE journal Computing in Science & Engineering.
Overview
There are many different FFT algorithms based on a wide range of published theories, from simple complex-number arithmetic to group theory and number theory; this article gives an overview of the available techniques and some of their general properties, while the specific algorithms are described in subsidiary articles linked below. The DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields (see discrete Fourier transform for properties and applications of the transform) but computing it directly from the definition is often too slow to be practical. An FFT is a way to compute the same result more quickly: computing the DFT of N points in the naive way, using the definition, takes O(N²) arithmetical operations, while an FFT can compute the same DFT in only O(N log N) operations. The difference in speed can be enormous, especially for long data sets where N may be in the thousands or millions. In practice, the computation time can be reduced by several orders of magnitude in such cases, and the improvement is roughly proportional to N/log N. This huge improvement made the calculation of the DFT practical; FFTs are of great importance to a wide variety of applications, from digital signal processing and solving partial differential equations to algorithms for quick multiplication of large integers. The best-known FFT algorithms depend upon the factorization of N, but there are FFTs with O(N log N) complexity for all N, even for prime N. Many FFT algorithms depend only on the fact that e^(−2πi/N) is an N-th primitive root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms.
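To make the O(N²) cost of the definition concrete, here is a minimal definition-based DFT in Python. This is an illustrative sketch, not part of the article's sources; numpy is assumed purely for array arithmetic and for the reference comparison.

```python
import numpy as np

def dft_naive(x):
    """Compute the DFT directly from its definition: O(N^2) operations."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    # W[k, n] = exp(-2*pi*i*k*n/N); each of the N outputs sums N terms.
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

x = np.random.default_rng(0).standard_normal(8)
assert np.allclose(dft_naive(x), np.fft.fft(x))  # agrees with a library FFT
```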
Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can easily be adapted for it. The development of fast algorithms for the DFT can be traced to Gauss's unpublished work in 1805, when he needed it to interpolate the orbits of the asteroids Pallas and Juno from sample observations. His method was very similar to the one published in 1965 by Cooley and Tukey, who are generally credited with the invention of the modern generic FFT algorithm. While Gauss's work predated even Fourier's results of 1822, he did not analyze the computation time and eventually used other methods to achieve his goal. Between 1805 and 1965, some versions of the FFT were published by other authors. Frank Yates in 1932 published his version, called the interaction algorithm, which provided efficient computation of Hadamard and Walsh transforms. Yates' algorithm is still used in the field of statistical design and analysis of experiments. In 1942, G. C. Danielson and Cornelius Lanczos published their version to compute the DFT for x-ray crystallography, a field where calculation of Fourier transforms presented a formidable bottleneck. While many methods in the past had focused on reducing the constant factor for computation by taking advantage of "symmetries", Danielson and Lanczos realized that one could use the "periodicity" and apply a "doubling trick" to get O(N log N) runtime. James Cooley and John Tukey published a more general version of the FFT in 1965 that is applicable when N is composite and not necessarily a power of 2. Tukey came up with the idea during a meeting of President Kennedy's Science Advisory Committee where a discussion topic involved detecting nuclear tests by the Soviet Union by setting up sensors to surround the country from outside. To analyze the output of these sensors, a fast Fourier transform algorithm would be needed. In discussion with Tukey, Richard Garwin recognized the general applicability of the algorithm, not just to national security problems but also to a wide range of problems, including one of immediate interest to him: determining the periodicities of the spin orientations in a 3-D crystal of helium-3. Garwin gave Tukey's idea to Cooley (both worked at IBM's Watson labs) for implementation. Cooley and Tukey published the paper in a relatively short time of six months. As Tukey did not work at IBM, the patentability of the idea was doubted and the algorithm went into the public domain, which, through the computing revolution of the next decade, made the FFT one of the indispensable algorithms in digital signal processing.
Definition and speed
An FFT computes the DFT and produces exactly the same result as evaluating the DFT definition directly; the most important difference is that an FFT is much faster. (In the presence of round-off error, many FFT algorithms are also much more accurate than evaluating the DFT definition directly, as discussed below.) Let x0, …, xN−1 be complex numbers. The DFT is defined by the formula

X_k = \sum_{n=0}^{N-1} x_n e^{-2\pi i k n / N}, \qquad k = 0, \ldots, N-1.

Evaluating this definition directly requires O(N²) operations: there are N outputs X_k, and each output requires a sum of N terms. An FFT is any method to compute the same results in O(N log N) operations. All known FFT algorithms require Θ(N log N) operations, although there is no known proof that lower complexity is impossible. To illustrate the savings of an FFT, consider the count of complex multiplications and additions for N = 4096 data points.
Evaluating the DFT's sums directly involves N² complex multiplications and N(N−1) complex additions, of which O(N) operations can be saved by eliminating trivial operations such as multiplications by 1, leaving about 30 million operations. On the other hand, the radix-2 Cooley–Tukey algorithm, for N a power of 2, can compute the same result with only (N/2)log₂(N) complex multiplications (again, ignoring simplifications of multiplications by 1 and similar) and N log₂(N) complex additions, in total about 74,000 operations, roughly 400 times fewer than with direct evaluation. In practice, actual performance on modern computers is usually dominated by factors other than the speed of arithmetic operations and the analysis is a complicated subject (see, e.g., Frigo & Johnson, 2005), but the overall improvement from O(N²) to O(N log N) remains. By far the most commonly used FFT is the Cooley–Tukey algorithm. This is a divide-and-conquer algorithm that recursively breaks down a DFT of any composite size N = N1N2 into many smaller DFTs of sizes N1 and N2, along with O(N) multiplications by complex roots of unity traditionally called twiddle factors (after Gentleman and Sande, 1966). This method (and the general idea of an FFT) was popularized by a publication of Cooley and Tukey in 1965, but it was later discovered that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms). The best-known use of the Cooley–Tukey algorithm is to divide the transform into two pieces of size N/2 at each step, and is therefore limited to power-of-two sizes, but any factorization can be used in general (as was known to both Gauss and Cooley/Tukey). These are called the radix-2 and mixed-radix cases, respectively (and other variants such as the split-radix FFT have their own names as well). Although the basic idea is recursive, most traditional implementations rearrange the algorithm to avoid explicit recursion. Also, because the Cooley–Tukey algorithm breaks the DFT into smaller DFTs, it can be combined arbitrarily with any other algorithm for the DFT, such as those described below.
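The recursive structure just described is compact enough to write out. Below is a minimal radix-2 Cooley–Tukey sketch in Python; it is an illustration only: it assumes a power-of-two length, uses numpy for the arithmetic, and makes no attempt at the optimizations real libraries apply.

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 Cooley–Tukey FFT; len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    even = fft_radix2(x[0::2])             # DFT of the even-indexed samples
    odd = fft_radix2(x[1::2])              # DFT of the odd-indexed samples
    k = np.arange(N // 2)
    twiddle = np.exp(-2j * np.pi * k / N)  # the "twiddle factors"
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.random.default_rng(1).standard_normal(16)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```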
Other FFT algorithms
There are other FFT algorithms distinct from Cooley–Tukey. For N = N1N2 with coprime N1 and N2, one can use the prime-factor (Good–Thomas) algorithm (PFA), based on the Chinese remainder theorem, to factorize the DFT similarly to Cooley–Tukey but without the twiddle factors. The Rader–Brenner algorithm (1976) is a Cooley–Tukey-like factorization but with purely imaginary twiddle factors, reducing multiplications at the cost of increased additions and reduced numerical stability; it was later superseded by the split-radix variant of Cooley–Tukey (which achieves the same multiplication count but with fewer additions and without sacrificing accuracy). Algorithms that recursively factorize the DFT into smaller operations other than DFTs include the Bruun and QFT algorithms. (The Rader–Brenner and QFT algorithms were proposed for power-of-two sizes, but it is possible that they could be adapted to general composite N. Bruun's algorithm applies to arbitrary even composite sizes.) Bruun's algorithm, in particular, is based on interpreting the FFT as a recursive factorization of the polynomial z^N − 1, here into real-coefficient polynomials of the form z^M − 1 and z^{2M} + az^M + 1. Another polynomial viewpoint is exploited by the Winograd FFT algorithm, which factorizes z^N − 1 into cyclotomic polynomials—these often have coefficients of 1, 0, or −1, and therefore require few (if any) multiplications, so Winograd can be used to obtain minimal-multiplication FFTs and is often used to find efficient algorithms for small factors. Indeed, Winograd showed that the DFT can be computed with only O(N) irrational multiplications, leading to a proven achievable lower bound on the number of multiplications for power-of-two sizes; unfortunately, this comes at the cost of many more additions, a tradeoff no longer favorable on modern processors with hardware multipliers. In particular, Winograd also makes use of the PFA as well as an algorithm by Rader for FFTs of prime sizes. Rader's algorithm, exploiting the existence of a generator for the multiplicative group modulo prime N, expresses a DFT of prime size N as a cyclic convolution of (composite) size N−1, which can then be computed by a pair of ordinary FFTs via the convolution theorem (although Winograd uses other convolution methods). Another prime-size FFT is due to L. I. Bluestein, and is sometimes called the chirp-z algorithm; it also re-expresses a DFT as a convolution, but this time of the same size (which can be zero-padded to a power of two and evaluated by radix-2 Cooley–Tukey FFTs, for example), via the identity

nk = -\frac{(k-n)^2}{2} + \frac{n^2}{2} + \frac{k^2}{2}.

The hexagonal fast Fourier transform aims at computing an efficient FFT for hexagonally sampled data by using a new addressing scheme for hexagonal grids, called Array Set Addressing (ASA).
FFT algorithms specialized for real and/or symmetric data
In many applications, the input data for the DFT are purely real, in which case the outputs satisfy the symmetry X_{N−k} = X_k* (where * denotes complex conjugation), and efficient FFT algorithms have been designed for this situation (see e.g. Sorensen, 1987). One approach consists of taking an ordinary algorithm (e.g. Cooley–Tukey) and removing the redundant parts of the computation, saving roughly a factor of two in time and memory. Alternatively, it is possible to express an even-length real-input DFT as a complex DFT of half the length (whose real and imaginary parts are the even/odd elements of the original real data), followed by O(N) post-processing operations. It was once believed that real-input DFTs could be more efficiently computed by means of the discrete Hartley transform (DHT), but it was subsequently argued that a specialized real-input DFT algorithm (FFT) can typically be found that requires fewer operations than the corresponding DHT algorithm (FHT) for the same number of inputs. Bruun's algorithm (above) is another method that was initially proposed to take advantage of real inputs, but it has not proved popular. There are further FFT specializations for the cases of real data that have even/odd symmetry, in which case one can gain another factor of (roughly) two in time and memory and the DFT becomes the discrete cosine/sine transform(s) (DCT/DST). Instead of directly modifying an FFT algorithm for these cases, DCTs/DSTs can also be computed via FFTs of real data combined with O(N) pre/post-processing.
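The conjugate symmetry of real-input DFTs is easy to check numerically. The sketch below (illustrative only; numpy is assumed) verifies X_{N−k} = X_k* and shows the half-length output of a specialized real-input routine:

```python
import numpy as np

x = np.random.default_rng(2).standard_normal(8)   # purely real input
X = np.fft.fft(x)
N = len(x)

# Conjugate symmetry: X[N-k] == conj(X[k]) for real x, so roughly half
# of the complex outputs are redundant.
for k in range(1, N):
    assert np.isclose(X[N - k], np.conj(X[k]))

# Specialized real-input FFTs exploit this: rfft returns only the
# N//2 + 1 non-redundant coefficients.
assert np.allclose(np.fft.rfft(x), X[: N // 2 + 1])
```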
Bounds on complexity and operation counts
Unsolved problem in computer science: What is the lower bound on the complexity of fast Fourier transform algorithms? Can they be faster than O(N log N)?
A fundamental question of longstanding theoretical interest is to prove lower bounds on the complexity and exact operation counts of fast Fourier transforms, and many open problems remain. It is not even rigorously proved whether DFTs truly require Ω(N log N) (i.e., order N log N or greater) operations, even for the simple case of power-of-two sizes, although no algorithms with lower complexity are known. In particular, the count of arithmetic operations is usually the focus of such questions, although actual performance on modern-day computers is determined by many other factors such as cache or CPU pipeline optimization. Following pioneering work by Winograd (1978), a tight Θ(N) lower bound is known for the number of real multiplications required by an FFT. It can be shown that only 4N − 2log₂²(N) − 2log₂(N) − 4 irrational real multiplications are required to compute a DFT of power-of-two length N = 2^m. Moreover, explicit algorithms that achieve this count are known (Heideman & Burrus, 1986; Duhamel, 1990). Unfortunately, these algorithms require too many additions to be practical, at least on modern computers with hardware multipliers (Duhamel, 1990; Frigo & Johnson, 2005). A tight lower bound is not known on the number of required additions, although lower bounds have been proved under some restrictive assumptions on the algorithms. In 1973, Morgenstern proved an Ω(N log N) lower bound on the addition count for algorithms where the multiplicative constants have bounded magnitudes (which is true for most but not all FFT algorithms). This result, however, applies only to the unnormalized Fourier transform (which is a scaling of a unitary matrix by a factor of √N), and does not explain why the Fourier matrix is harder to compute than any other unitary matrix (including the identity matrix) under the same scaling. Pan (1986) proved an Ω(N log N) lower bound assuming a bound on a measure of the FFT algorithm's "asynchronicity", but the generality of this assumption is unclear. For the case of power-of-two N, Papadimitriou (1979) argued that the number N log₂N of complex-number additions achieved by Cooley–Tukey algorithms is optimal under certain assumptions on the graph of the algorithm (his assumptions imply, among other things, that no additive identities in the roots of unity are exploited). (This argument would imply that at least 2N log₂N real additions are required, although this is not a tight bound because extra additions are required as part of complex-number multiplications.) Thus far, no published FFT algorithm has achieved fewer than N log₂N complex-number additions (or their equivalent) for power-of-two N. A third problem is to minimize the total number of real multiplications and additions, sometimes called the "arithmetic complexity" (although in this context it is the exact count and not the asymptotic complexity that is being considered). Again, no tight lower bound has been proven. Since 1968, however, the lowest published count for power-of-two N was long achieved by the split-radix FFT algorithm, which requires 4N log₂N − 6N + 8 real multiplications and additions for N > 1. This was recently reduced to approximately (34/9)N log₂N (Johnson and Frigo, 2007; Lundy and Van Buskirk, 2007).
A slightly larger count (but still better than split-radix for N ≥ 256) was shown to be provably optimal for N ≤ 512 under additional restrictions on the possible algorithms (split-radix-like flowgraphs with unit-modulus multiplicative factors), by reduction to a satisfiability modulo theories problem solvable by brute force (Haynal & Haynal, 2011). Most of the attempts to lower or prove the complexity of FFT algorithms have focused on the ordinary complex-data case, because it is the simplest. However, complex-data FFTs are so closely related to algorithms for related problems such as real-data FFTs, discrete cosine transforms, discrete Hartley transforms, and so on, that any improvement in one of these would immediately lead to improvements in the others (Duhamel & Vetterli, 1990). All of the FFT algorithms discussed above compute the DFT exactly (i.e. neglecting floating-point errors). A few "FFT" algorithms have been proposed, however, that compute the DFT approximately, with an error that can be made arbitrarily small at the expense of increased computations. Such algorithms trade the approximation error for increased speed or other properties. For example, an approximate FFT algorithm by Edelman et al. (1999) achieves lower communication requirements for parallel computing with the help of a fast multipole method. A wavelet-based approximate FFT by Guo and Burrus (1996) takes sparse inputs/outputs (time/frequency localization) into account more efficiently than is possible with an exact FFT. Another algorithm for approximate computation of a subset of the DFT outputs is due to Shentov et al. (1995). The Edelman algorithm works equally well for sparse and non-sparse data, since it is based on the compressibility (rank deficiency) of the Fourier matrix itself rather than the compressibility (sparsity) of the data. Conversely, if the data are sparse—that is, if only K out of N Fourier coefficients are nonzero—then the complexity can be reduced to O(K log(N) log(N/K)), and this has been demonstrated to lead to practical speedups compared to an ordinary FFT for N/K > 32 in a large-N example (N = 2²²) using a probabilistic approximate algorithm (which estimates the largest K coefficients to several decimal places). Even the "exact" FFT algorithms have errors when finite-precision floating-point arithmetic is used, but these errors are typically quite small; most FFT algorithms, e.g. Cooley–Tukey, have excellent numerical properties as a consequence of the pairwise summation structure of the algorithms. The upper bound on the relative error for the Cooley–Tukey algorithm is O(ε log N), compared to O(ε N^{3/2}) for the naive DFT formula, where ε is the machine floating-point relative precision. In fact, the root mean square (rms) errors are much better than these upper bounds, being only O(ε √(log N)) for Cooley–Tukey and O(ε √N) for the naive DFT (Schatzman, 1996). These results, however, are very sensitive to the accuracy of the twiddle factors used in the FFT (i.e. the trigonometric function values), and it is not unusual for incautious FFT implementations to have much worse accuracy, e.g. if they use inaccurate trigonometric recurrence formulas. Some FFTs other than Cooley–Tukey, such as the Rader–Brenner algorithm, are intrinsically less stable. In fixed-point arithmetic, the finite-precision errors accumulated by FFT algorithms are worse, with rms errors growing as O(√N) for the Cooley–Tukey algorithm (Welch, 1969). Moreover, even achieving this accuracy requires careful attention to scaling to minimize loss of precision, and fixed-point FFT algorithms involve rescaling at each intermediate stage of decompositions like Cooley–Tukey. To verify the correctness of an FFT implementation, rigorous guarantees can be obtained in O(N log N) time by a simple procedure checking the linearity, impulse-response, and time-shift properties of the transform on random inputs (Ergün, 1995).
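As a rough illustration of that kind of property-based check, the sketch below tests linearity, the impulse response, and the time-shift property. This is an assumption-laden toy, not Ergün's full procedure: it uses numpy's FFT as the implementation under test and a handful of random trials.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 256
k = np.arange(N)

for _ in range(10):
    x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    y = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    a, b = rng.standard_normal(2)

    # Linearity: FFT(a*x + b*y) == a*FFT(x) + b*FFT(y)
    assert np.allclose(np.fft.fft(a * x + b * y),
                       a * np.fft.fft(x) + b * np.fft.fft(y))

    # Time shift: a circular shift by s multiplies bin k by exp(-2*pi*i*s*k/N)
    s = int(rng.integers(N))
    assert np.allclose(np.fft.fft(np.roll(x, s)),
                       np.fft.fft(x) * np.exp(-2j * np.pi * s * k / N))

# Impulse response: the DFT of a unit impulse is the all-ones vector
e0 = np.zeros(N)
e0[0] = 1.0
assert np.allclose(np.fft.fft(e0), np.ones(N))
```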
Multidimensional FFTs
As defined in the multidimensional DFT article, the multidimensional DFT transforms an array x_n with a d-dimensional vector of indices n = (n₁, …, n_d) by a set of d nested summations (over n_j = 0, …, N_j − 1 for each j), where the division n/N, defined as n/N = (n₁/N₁, …, n_d/N_d), is performed element-wise. Equivalently, it is the composition of a sequence of d sets of one-dimensional DFTs, performed along one dimension at a time (in any order). This compositional viewpoint immediately provides the simplest and most common multidimensional DFT algorithm, known as the row-column algorithm (after the two-dimensional case, below). That is, one simply performs a sequence of d one-dimensional FFTs (by any of the above algorithms): first along the n₁ dimension, then along the n₂ dimension, and so on (or actually, any ordering works). This method is easily shown to have the usual O(N log N) complexity, where N = N₁N₂⋯N_d is the total number of data points transformed. In particular, there are N/N₁ transforms of size N₁, etcetera, so the complexity of the sequence of FFTs is:

\frac{N}{N_1} O(N_1 \log N_1) + \cdots + \frac{N}{N_d} O(N_d \log N_d) = O(N [\log N_1 + \cdots + \log N_d]) = O(N \log N).

In two dimensions, the x_k can be viewed as an N₁ × N₂ matrix, and this algorithm corresponds to first performing the FFT of all the rows (resp. columns), grouping the resulting transformed rows (resp. columns) together as another N₁ × N₂ matrix, and then performing the FFT on each of the columns (resp. rows) of this second matrix, and similarly grouping the results into the final result matrix. In more than two dimensions, it is often advantageous for cache locality to group the dimensions recursively. For example, a three-dimensional FFT might first perform two-dimensional FFTs of each planar "slice" for each fixed n₁, and then perform the one-dimensional FFTs along the n₁ direction. More generally, an asymptotically optimal cache-oblivious algorithm consists of recursively dividing the dimensions into two groups (n₁, …, n_{d/2}) and (n_{d/2+1}, …, n_d) that are transformed recursively (rounding if d is not even) (see Frigo and Johnson, 2005). Still, this remains a straightforward variation of the row-column algorithm that ultimately requires only a one-dimensional FFT algorithm as the base case, and still has O(N log N) complexity. Yet another variation is to perform matrix transpositions in between transforming subsequent dimensions, so that the transforms operate on contiguous data; this is especially important for out-of-core and distributed memory situations where accessing non-contiguous data is extremely time-consuming.
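A sketch of the row-column idea in two dimensions (illustrative only; numpy is assumed, and the library's fft2 is used merely as a reference):

```python
import numpy as np

def fft2_row_column(a):
    """2-D DFT via the row-column algorithm: 1-D FFTs along each axis."""
    rows = np.fft.fft(a, axis=1)     # transform every row
    return np.fft.fft(rows, axis=0)  # then transform every column

a = np.random.default_rng(4).standard_normal((4, 8))
assert np.allclose(fft2_row_column(a), np.fft.fft2(a))
```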
There are other multidimensional FFT algorithms that are distinct from the row-column algorithm, although all of them have O(N log N) complexity. Perhaps the simplest non-row-column FFT is the vector-radix FFT algorithm, which is a generalization of the ordinary Cooley–Tukey algorithm where one divides the transform dimensions by a vector r = (r₁, r₂, …, r_d) of radices at each step. (This may also have cache benefits.) The simplest case of vector-radix is where all of the radices are equal (e.g. vector-radix-2 divides all of the dimensions by two), but this is not necessary. Vector radix with only a single non-unit radix at a time, i.e. r = (1, …, 1, r, 1, …, 1), is essentially a row-column algorithm. Other, more complicated, methods include polynomial transform algorithms due to Nussbaumer (1977), which view the transform in terms of convolutions and polynomial products. See Duhamel and Vetterli (1990) for more information and references.
Other generalizations
An O(N^{5/2} log N) generalization to spherical harmonics on the sphere S² with N² nodes was described by Mohlenkamp, along with an algorithm conjectured (but not proven) to have O(N² log²(N)) complexity; Mohlenkamp also provides an implementation in the libftsh library. A spherical-harmonic algorithm with O(N² log N) complexity is described by Rokhlin and Tygert. The fast folding algorithm is analogous to the FFT, except that it operates on a series of binned waveforms rather than a series of real or complex scalar values. Rotation (which in the FFT is multiplication by a complex phasor) is a circular shift of the component waveform. Various groups have also published "FFT" algorithms for non-equispaced data, as reviewed in Potts et al. (2001). Such algorithms do not strictly compute the DFT (which is only defined for equispaced data), but rather some approximation thereof (a non-uniform discrete Fourier transform, or NDFT, which itself is often computed only approximately). More generally there are various other methods of spectral estimation.
Applications
The FFT's importance derives from the fact that, in signal processing and image processing, it has made working in the frequency domain as computationally feasible as working in the temporal or spatial domain. Some of the important applications of the FFT include:
- Fast large integer and polynomial multiplication (see the sketch after this list)
- Efficient matrix-vector multiplication for Toeplitz, circulant and other structured matrices
- Filtering algorithms (see overlap-add and overlap-save methods)
- Fast algorithms for discrete cosine or sine transforms (e.g., the fast DCT used for JPEG and MP3/MPEG encoding)
- Fast Chebyshev approximation
- Fast discrete Hartley transform
- Solving difference equations
- Computation of isotopic distributions.
- Big FFTs: With the explosion of big data in fields such as astronomy, the need for 512k FFTs has arisen for certain interferometry calculations. The data collected by projects such as MAP and LIGO require FFTs of tens of billions of points. As this size does not fit into main memory, so-called out-of-core FFTs are an active area of research.
- Approximate FFTs: For applications such as MRI, it is necessary to compute DFTs for nonuniformly spaced grid points and/or frequencies. Multipole-based approaches can compute approximate quantities at the cost of a modest increase in runtime.
- Group FFTs: The FFT may also be explained and interpreted using group representation theory, which allows for further generalization. A function on any compact group, including non-cyclic ones, has an expansion in terms of a basis of irreducible matrix elements. It remains an active area of research to find an efficient algorithm for performing this change of basis. Applications include efficient spherical harmonic expansion, analyzing certain Markov processes, robotics, etc.
- Quantum FFTs: Shor's fast algorithm for integer factorization on a quantum computer has a subroutine to compute the DFT of a binary vector. This is implemented as a sequence of 1- or 2-bit quantum gates, now known as the quantum FFT, which is effectively the Cooley–Tukey FFT realized as a particular factorization of the Fourier matrix. Extensions of these ideas are currently being explored.
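As one concrete instance of the first application above, here is a minimal sketch of polynomial multiplication via the convolution theorem. It is illustrative only: numpy is assumed, integer coefficients and the final rounding step are assumptions of this example, and a production big-integer multiply would need careful precision control.

```python
import numpy as np

def poly_mul_fft(p, q):
    """Multiply two coefficient vectors via FFT-based (linear) convolution."""
    n = len(p) + len(q) - 1           # length of the product's coefficient vector
    size = 1 << (n - 1).bit_length()  # next power of two, for a radix-2 FFT
    P = np.fft.fft(p, size)           # zero-padded transforms
    Q = np.fft.fft(q, size)
    prod = np.fft.ifft(P * Q)[:n]     # pointwise multiply, invert, truncate
    return np.rint(prod.real).astype(int)  # round back to integer coefficients

# (1 + 2x + 3x^2) * (5 + x) = 5 + 11x + 17x^2 + 3x^3
print(poly_mul_fft([1, 2, 3], [5, 1]))  # ascending coefficients -> [5, 11, 17, 3]
```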
- Cooley–Tukey FFT algorithm - Prime-factor FFT algorithm - Bruun's FFT algorithm - Rader's FFT algorithm - Bluestein's FFT algorithm - Goertzel algorithm – Computes individual terms of discrete Fourier transform - ALGLIB – C++ and C# library with real/complex FFT implementation. - FFTW "Fastest Fourier Transform in the West" – C library for the discrete Fourier transform (DFT) in one or more dimensions. - FFTS – The Fastest Fourier Transform in the South. - FFTPACK – another Fortran FFT library (public domain) - Math Kernel Library - Overlap add/Overlap save – efficient convolution methods using FFT for long signals - Odlyzko–Schönhage algorithm applies the FFT to finite Dirichlet series. - Schönhage–Strassen algorithm - asymptotically fast multiplication algorithm for large integers - Butterfly diagram – a diagram used to describe FFTs. - Spectral music (involves application of FFT analysis to musical composition) - Spectrum analyzer – any of several devices that perform an FFT - Time series - Fast Walsh–Hadamard transform - Generalized distributive law - Multidimensional transform - Multidimensional discrete convolution - DFT matrix - Audio, NTi. "How an FFT works". www.nti-audio.com. - Van Loan, Charles (1992). Computational Frameworks for the Fast Fourier Transform. SIAM. - Heideman, Michael T.; Johnson, Don H.; Burrus, Charles Sidney (1984). "Gauss and the history of the fast Fourier transform" (PDF). IEEE ASSP Magazine. 1 (4): 14–21. doi:10.1109/MASSP.1984.1162257. - Strang, Gilbert (May–June 1994). "Wavelets". American Scientist. 82 (3): 250–255. JSTOR 29775194. - Kent, Ray D.; Read, Charles (2002). Acoustic Analysis of Speech. ISBN 0-7693-0112-6. - Dongarra, Jack; Sullivan, Francis (January 2000). "Guest Editors Introduction to the top 10 algorithms". Computing in Science Engineering. 2 (1): 22–23. doi:10.1109/MCISE.2000.814652. ISSN 1521-9615. - Gauss, Carl Friedrich (1866). "Theoria interpolationis methodo nova tractata" [Theory regarding a new method of interpolation]. Nachlass (Unpublished manuscript). Werke (in Latin and German). 3. Göttingen, Germany: Königlichen Gesellschaft der Wissenschaften zu Göttingen. pp. 265–303. - Heideman, Michael T.; Johnson, Don H.; Burrus, Charles Sidney (1985-09-01). "Gauss and the history of the fast Fourier transform". Archive for History of Exact Sciences. 34 (3): 265–277. doi:10.1007/BF00348431. ISSN 0003-9519. - Yates, Frank (1937). "The design and analysis of factorial experiments". Technical Communication no. 35 of the Commonwealth Bureau of Soils. - Danielson, Gordon C.; Lanczos, Cornelius (1942). "Some improvements in practical Fourier analysis and their application to x-ray scattering from liquids". Journal of the Franklin Institute. 233 (4): 365–380. doi:10.1016/S0016-0032(42)90767-1. - Lanczos, Cornelius (1956). Applied Analysis. Prentice–Hall. - Cooley, James W.; Lewis, Peter A. W.; Welch, Peter D. (June 1967). "Historical notes on the fast Fourier transform". IEEE Transactions on Audio and Electroacoustics. 15 (2): 76–79. doi:10.1109/TAU.1967.1161903. ISSN 0018-9278. - Cooley, James W.; Tukey, John W. (1965). "An algorithm for the machine calculation of complex Fourier series". Mathematics of Computation. 19 (90): 297–301. doi:10.1090/S0025-5718-1965-0178586-1. ISSN 0025-5718. - Cooley, James W. (1987). The Re-Discovery of the Fast Fourier Transform Algorithm (PDF). Mikrochimica Acta. III. Vienna, Austria. pp. 33–45. - Garwin, Richard (June 1969). 
"The Fast Fourier Transform As an Example of the Difficulty in Gaining Wide Use for a New Technique" (PDF). IEEE Transactions on Audio and Electroacoustics. AU-17 (2): 68–72. - Rockmore, Daniel N. (January 2000). "The FFT: an algorithm the whole family can use". Computing in Science Engineering. 2 (1): 60–64. doi:10.1109/5992.814659. ISSN 1521-9615. - Frigo, Matteo; Johnson, Steven G. (January 2007) [2006-12-19]. "A Modified Split-Radix FFT With Fewer Arithmetic Operations". IEEE Transactions on Signal Processing. 55 (1): 111–119. doi:10.1109/tsp.2006.882087. - Frigo, Matteo; Johnson, Steven G. (2005). "The Design and Implementation of FFTW3" (PDF). Proceedings of the IEEE. 93: 216–231. doi:10.1109/jproc.2004.840301. - Gentleman, W. Morven; Sande, G. (1966). "Fast Fourier transforms—for fun and profit". Proceedings of the AFIPS. 29: 563–578. doi:10.1145/1464291.1464352. - Gauss, Carl Friedrich (1866) . Theoria interpolationis methodo nova tractata. Werke (in Latin and German). 3. Göttingen, Germany: Königliche Gesellschaft der Wissenschaften. pp. 265–327. - Brenner, Norman M.; Rader, Charles M. (1976). "A New Principle for Fast Fourier Transformation". IEEE Transactions on Acoustics, Speech, and Signal Processing. 24 (3): 264–266. doi:10.1109/TASSP.1976.1162805. - Winograd, Shmuel (1978). "On computing the discrete Fourier transform". Mathematics of Computation. 32 (141): 175–199. doi:10.1090/S0025-5718-1978-0468306-4. JSTOR 2006266. PMC . PMID 16592303. - Winograd, Shmuel (1979). "On the multiplicative complexity of the discrete Fourier transform". Advances in Mathematics. 32: 83–117. doi:10.1016/0001-8708(79)90037-9. - Sorensen, Henrik V.; Jones, Douglas L.; Heideman, Michael T.; Burrus, Charles Sidney (1987). "Real-valued fast Fourier transform algorithms". IEEE Transactions on Acoustics, Speech, and Signal Processing. 35 (6): 849–863. doi:10.1109/TASSP.1987.1165220. - Sorensen, Henrik V.; Jones, Douglas L.; Heideman, Michael T.; Burrus, Charles Sidney (1987). "Corrections to "Real-valued fast Fourier transform algorithms"". IEEE Transactions on Acoustics, Speech, and Signal Processing. 35 (9): 1353–1353. doi:10.1109/TASSP.1987.1165284. - Heideman, Michael T.; Burrus, Charles Sidney (1986). "On the number of multiplications necessary to compute a length-2n DFT". IEEE Transactions on Acoustics, Speech, and Signal Processing. 34 (1): 91–95. doi:10.1109/TASSP.1986.1164785. - Duhamel, Pierre (1990). "Algorithms meeting the lower bounds on the multiplicative complexity of length-2n DFTs and their connection with practical algorithms". IEEE Transactions on Acoustics, Speech, and Signal Processing. 38 (9): 1504–1511. doi:10.1109/29.60070. - Morgenstern, Jacques (1973). "Note on a lower bound of the linear complexity of the fast Fourier transform". Journal of the ACM. 20 (2): 305–306. doi:10.1145/321752.321761. - Pan, Victor Ya. (1986-01-02). "The trade-off between the additive complexity and the asynchronicity of linear and bilinear algorithms". Information Processing Letters. 22 (1): 11–14. doi:10.1016/0020-0190(86)90035-9. Retrieved 2017-10-31. - Papadimitriou, Christos H. (1979). "Optimality of the fast Fourier transform". Journal of the ACM. 26: 95–102. doi:10.1145/322108.322118. - Lundy, Thomas J.; Van Buskirk, James (2007). "A new matrix approach to real FFTs and convolutions of length 2k". Computing. 80 (1): 23–45. doi:10.1007/s00607-007-0222-6. - Haynal, Steve; Haynal, Heidi (2011). "Generating and Searching Families of FFT Algorithms" (PDF). 
Journal on Satisfiability, Boolean Modeling and Computation. 7: 145–187. Archived from the original (PDF) on 2012-04-26. - Duhamel, Pierre; Vetterli, Martin (1990). "Fast Fourier transforms: a tutorial review and a state of the art". Signal Processing. 19: 259–299. doi:10.1016/0165-1684(90)90158-U. - Edelman, Alan; McCorquodale, Peter; Toledo, Sivan (1999). "The Future Fast Fourier Transform?" (PDF). SIAM Journal on Scientific Computing. 20: 1094–1114. doi:10.1137/S1064827597316266. - Guo, Haitao; Burrus, Charles Sidney (1996). "Fast approximate Fourier transform via wavelets transform". Proceedings of SPIE - The International Society for Optical Engineering. 2825: 250–259. doi:10.1117/12.255236. - Shentov, Ognjan V.; Mitra, Sanjit K.; Heute, Ulrich; Hossen, Abdul N. (1995). "Subband DFT. I. Definition, interpretations and extensions". Signal Processing. 41 (3): 261–277. doi:10.1016/0165-1684(94)00103-7. - Hassanieh, Haitham; Indyk, Piotr; Katabi, Dina; Price, Eric (January 2012). "Simple and Practical Algorithm for Sparse Fourier Transform" (PDF). ACM-SIAM Symposium On Discrete Algorithms (SODA). Kyoto, Japan. (NB. See also the sFFT Web Page.) - Schatzman, James C. (1996). "Accuracy of the discrete Fourier transform and the fast Fourier transform". SIAM Journal on Scientific Computing. 17: 1150–1166. doi:10.1137/s1064827593247023. - Welch, Peter D. (1969). "A fixed-point fast Fourier transform error analysis". IEEE Transactions on Audio and Electroacoustics. 17 (2): 151–157. doi:10.1109/TAU.1969.1162035. - Ergün, Funda (1995). "Testing multivariate linear functions: Overcoming the generator bottleneck". Proceedings of the 27th ACM Symposium on the Theory of Computing: 407–416. doi:10.1145/225058.225167. - Nussbaumer, Henri J. (1977). "Digital filtering using polynomial transforms". Electronics Letters. 13 (13): 386–387. doi:10.1049/el:19770280. - Mohlenkamp, Martin J. (1999). "A Fast Transform for Spherical Harmonics" (PDF). Journal of Fourier Analysis and Applications. 5 (2–3): 159–184. doi:10.1007/BF01261607. Retrieved 2018-01-11. - libftsh library - Rokhlin, Vladimir; Tygert, Mark (2006). "Fast Algorithms for Spherical Harmonic Expansions" (PDF). SIAM Journal on Scientific Computing. 27 (6): 1903–1928. doi:10.1137/050623073. Retrieved 2014-09-18. - Potts, Daniel; Steidl, Gabriele; Tasche, Manfred (2001). "Fast Fourier transforms for nonequispaced data: A tutorial". In Benedetto, J. J.; Ferreira, P. Modern Sampling Theory: Mathematics and Applications (PDF). Birkhäuser. - Chu, Eleanor; George, Alan. "Chapter 16". Inside the FFT Black Box: Serial and Parallel Fast Fourier Transform Algorithms. CRC Press. pp. 153–168. ISBN 978-1-42004996-1. - Fernandez-de-Cossio Diaz, Jorge; Fernandez-de-Cossio, Jorge (2012-08-08). "Computation of Isotopic Peak Center-Mass Distribution by Fourier Transform". Analytical Chemistry. 84 (16): 7052–7056. doi:10.1021/ac301296a. ISSN 0003-2700. - Cormen, Thomas H.; Nicol, David M. (1998). "Performing out-of-core FFTs on parallel disk systems" (PDF). Parallel Computing. 24 (1): 5–20. doi:10.1016/S0167-8191(97)00114-2. - Dutt, Alok; Rokhlin, Vladimir (1993-11-01). "Fast Fourier Transforms for Nonequispaced Data". SIAM Journal on Scientific Computing. 14 (6): 1368–1393. doi:10.1137/0914081. ISSN 1064-8275. - Rockmore, Daniel N. (2004). Byrnes, Jim, ed. Recent Progress and Applications in Group FFTs. Computational Noncommutative Algebra and Applications. NATO Science Series II: Mathematics, Physics and Chemistry. 136. Springer Netherlands. pp. 227–254.
doi:10.1007/1-4020-2307-3_9. ISBN 978-1-4020-1982-1. - Brigham, E. Oran (2002). "The Fast Fourier Transform". New York, USA: Prentice-Hall. - Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Chapter 30: Polynomials and the FFT". Introduction to Algorithms (2 ed.). MIT Press / McGraw-Hill. ISBN 0-262-03293-7. - Elliott, Douglas F.; Rao, K. Ramamohan (1982). Fast transforms: Algorithms, analyses, applications. New York, USA: Academic Press. - Guo, Haitao; Sitton, Gary A.; Burrus, Charles Sidney (1994). "The Quick Discrete Fourier Transform". Proceedings on the IEEE Conference on Acoustics, Speech, and Signal Processing (ICASSP). 3: 445–448. doi:10.1109/ICASSP.1994.389994. - Johnson, Steven G.; Frigo, Matteo (2007). "A modified split-radix FFT with fewer arithmetic operations" (PDF). IEEE Transactions on Signal Processing. 55 (1): 111–119. doi:10.1109/tsp.2006.882087. - Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007). "Chapter 12. Fast Fourier Transform". Numerical Recipes: The Art of Scientific Computing (3 ed.). New York, USA: Cambridge University Press. ISBN 978-0-521-88068-8. - Singleton, Richard Collom (June 1969). "A Short Bibliography on the Fast Fourier Transform". Special Issue on Fast Fourier Transform. IEEE Transactions on Audio and Electroacoustics. AU-17. IEEE Audio and Electroacoustics Group. pp. 166–169. Retrieved 2017-10-31. (NB. Contains extensive bibliography.) - Fast Fourier Transform Introduction - Radix-2 Decimation in Time FFT Algorithm - Radix-2 Decimation in Frequency FFT Algorithm - Fast Fourier Algorithm - Fast Fourier Transforms, Connexions online book edited by Charles Sidney Burrus, with chapters by Charles Sidney Burrus, Ivan Selesnick, Markus Pueschel, Matteo Frigo, and Steven G. Johnson (2008). - Links to FFT code and information online. - National Taiwan University – FFT - FFT programming in C++ — Cooley–Tukey algorithm. - Online documentation, links, book, and code. - Using FFT to construct aggregate probability distributions - Sri Welaratna, "Thirty years of FFT analyzers", Sound and Vibration (January 1997, 30th anniversary issue). A historical review of hardware FFT devices. - FFT Basics and Case Study Using Multi-Instrument - FFT Textbook notes, PPTs, Videos at Holistic Numerical Methods Institute. - ALGLIB FFT Code GPL Licensed multilanguage (VBA, C++, Pascal, etc.) numerical analysis and data processing library. - MIT's sFFT MIT Sparse FFT algorithm and implementation. - VB6 FFT VB6 optimized library implementation with source code. - Fast Fourier transform illustrated Demo examples and FFT calculator.
Constructivism is both a theory of the construction of knowledge and a philosophy of learning. Its proponents include Piaget, Vygotsky, and Glasersfeld. The constructivist approach is a new trend in the teaching of mathematics, taken up by many enthusiastic pedagogues and teachers in many countries. Constructivist pedagogy does not consist of a single teaching strategy. Instead, it has several features that should be attended to simultaneously in a classroom. It has been asserted that for a constructivist strategy to succeed, the teaching must not only be student-centered, with the teacher acting as a facilitator, but the teacher also has the added responsibility of creating a conducive classroom environment. Research has established that constructivist methods of mathematics teaching are much more successful than traditional methods.

Constructivism is a theory of knowledge, i.e., an epistemology, as well as a theory of learning. It is not a particular pedagogy. Constructivists believe that human beings are active receivers of information who use their existing experience to construct understanding that makes sense to them. Humans assimilate and accommodate new knowledge and build their own understanding; knowledge is viewed as personal and subjective. Reality resides in the mind of each person. Learning is based on previous experience and knowledge. Thus, multiple interpretations of an event are possible, and multiple answers to a question are a source of creativity in learners. Constructivists hold that learners need time to reflect on their experiences in relation to what they already know. After some time, they reach a consensus about what a specific experience means to them. Constructivism views learning as a process of constructing meaningful representations of external reality through experience. The construction of an internal representation of knowledge depends on the degree to which learners integrate new ideas with previous ones. It is significant to note that, in the constructivist view, knowledge construction takes place in working memory. How the teacher has structured the content, and the activities for guiding the learners' construction of ideas, is a key component in this context.

CONSTRUCTIVISM IS AN EPISTEMOLOGICAL VIEW OF LEARNING RATHER THAN TEACHING

Students' previous knowledge and their active participation in problem solving and critical thinking all play a vital role in the formation of knowledge. One of the most important goals of constructivism is to develop students' "critical thinking skills", which is possible only in a conducive learning environment in the class. The teacher may have to improvise the day's lesson or change the sequence of activities, depending on the needs of the students or on any other unexpected development. Such flexibility is a valuable quality of a positive learning environment. The following are some of the important features of a constructivist learning environment:
- Learners are encouraged to be active presenters rather than passive listeners.
- The learning environment should encourage interpersonal discussion and dialogue.
- Learners should be challenged by ideas and problems that generate inner cognitive conflicts.
- Constructivist learning environments emphasize authentic tasks in a meaningful context rather than abstract instruction out of context.
The classroom climate of the constructivist approach gives importance to the construction of knowledge rather than the reproduction of knowledge. The complexity of the real world is established through multiple representations. Students should be given sufficient time for reflection, for constructing relationships, and for discussion.

MATHEMATICS CLASSROOM AND CONSTRUCTIVISM

There is no single constructivist strategy for instruction in the class. Different pedagogies and research studies have highlighted various elements in varying degrees for the benefit of classroom instructors. Even so, there are several common themes which can be described here. Education is a student-centered process, and the teacher is only a facilitator. Learning depends on experience shared with, and imbibed from, peers and teachers. Collaboration and cooperation are major teaching methods. Students actively explore and use hands-on experience. The constructivist views knowledge as being formulated in a social context; it is an active social process. Learners cannot construct understanding alone; they do it collaboratively through interactions. Learning is an active process, hence the learner should be encouraged toward imagination and intuitive learning. In constructivist learning, effective thinking is emphasized to a great extent, because thinking must be focused to solve the problem at hand. Understanding is another objective that follows knowledge construction: proper understanding of knowledge leads to correct thinking, hence understanding should be clear. In metacognition, learners think about their own thinking style; that is purposeful thoughtfulness. A motivated and thinking learner tries to check his errors and to find out why he failed in his earlier attempts. Such a learner's knowledge will be deep and durable. As Yager says, "one only knows something if one can explain it" (Yager, 1999). On the other hand, a novice learner does not check for quality in his work and thus fails to make amends for his earlier errors.

THE CONSTRUCTIVIST MATHEMATICS CLASSROOM AND THE ROLE OF THE TEACHER

Towards the higher goals: Teaching mathematics content is a narrower goal than creating mathematical learning environments. The content areas of mathematics addressed in our schools do offer a solid foundation; while there can be disputes over what gets taught at which grade and over the level of detail included in a specific theme, there is broad agreement that the content areas (arithmetic, algebra, geometry, mensuration, trigonometry, data analysis) cover essential ground. What can be leveled as a major criticism against our extant curriculum and pedagogy is its failure with regard to mathematical processes. We mean a whole range of processes here: formal problem solving, use of heuristics, estimation and approximation, optimization, use of patterns, visualization, representation, reasoning and proof, making connections, and mathematical communication. Giving importance to these processes constitutes the difference between doing mathematics and swallowing mathematics, between mathematisation of thinking and memorizing formulas, between trivial mathematics and important mathematics, between traditional teaching and constructivist teaching. In school mathematics, emphasis certainly does need to be attached to factual knowledge, procedural fluency, and conceptual understanding. New ideas are to be constructed from experience and prior knowledge using conceptual elements.
However, emphasis on procedure invariably gains ascendancy at the cost of conceptual understanding, as well as of the construction of ideas based on experience. This can be seen as a central cause of the fear of mathematics in children. On the other side, the emphasis on exploratory problem solving, activities, and the processes referred to above constitutes learning environments that invite participation, engage children, and offer a sense of success. Transforming our classrooms to a constructivism-based paradigm, and designing mathematics curricula that enable such a transformation, is to be accorded the highest priority, i.e.,
- mathematics that people use, and
- use of technology, i.e., technological innovation and learning.

The teacher is not the ultimate authority. He does not lecture. He is a facilitator or mentor. He helps the learner. The facilitator has to create a proper environment in the class so that the students are motivated, challenged, and think deeply to arrive at their own conclusions. As a facilitator, the teacher has to support the learners in becoming effective thinkers. The facilitator and the learners both learn from each other. Students should be encouraged to arrive at their own version of the truth and then compare it with that of the instructor as well as with those of their peers. Teachers have only to observe at the beginning of a session and assess the progress. They should pose questions to create the right environment. They should intervene if any 'conflict' arises or if the process of learning is going astray. An important task for a constructivist mathematics teacher is to create a "learning-friendly environment" which facilitates students' thinking and motivates them to explore. An authentic learning environment is obtained if real-life complexities and real-world situations are simulated. A mathematics teacher creates a congenial learning environment when learning goals are negotiated through consensus and discussion with students. Direct instruction is not appropriate. Learning should take place through "active involvement of the students by doing", by generating their own ideas. In a well-planned classroom environment, students learn how to learn. Learning is like a spiral: students reflect on their previous experience and integrate new experience. The constructivist environment in a mathematics classroom can be created by adopting the following:

Provide experience with the knowledge construction process
The teacher presents a topic to the learners and guides them to explore the topic through experimentation. The learners are encouraged to construct a research question, and the teacher helps them answer the research question they have constructed through scaffolding.

Provide experience in and appreciation for multiple perspectives
All learners differ from one another in their ways of thinking, so the need arises to look at a problem from multiple perspectives and to provide opportunities for learners to experiment and discuss their alternative ways of thinking. Here, the students are encouraged to work in groups. Finally, all the groups can share their opinions on the topic with each other.

Provide social and emotional learning
The social and emotional aspects of learning should be taught to the students in an integrated manner. The five aspects of social and emotional learning which could be covered in the teaching are as follows: self-awareness, managing feelings, motivation, empathy, and social skills.
Use multiple modes of representation
Multiple modes of representation also assist the goal of experiencing multiple perspectives. Using multiple media to enrich the learning environment enables the learners to view the topic being discussed in the class from multiple dimensions. The teacher should prepare a list of media that are available and that support the topic. The teacher should also decide how the media will be used to support the authentic nature of the task. A combination of the following learning strategies can be used by mathematics teachers to create a constructivist learning environment:
- Use of multimedia/teaching aids as a supporting system
- Case studies
- Role playing
- Group discussions/group activities (reciprocal learning)
- Deep interrogation
- Project-based learning
- Use of learning strategies for the social and emotional learning of students

Teachers can use various strategies to promote and strengthen students' ability to "think about their thinking". Eggen and Kauchak (2007) have suggested the following strategies for this purpose:
- Teachers should pose some provocative questions to students and also encourage them to frame their own questions about the problem at hand.
- PQ4R strategy: PQ4R is an acronym for Preview, Question, Read, Reflect, Recite, and Review.
- IDEAL strategy: IDEAL is an acronym for Identify, Define, Explore, Act, and Look.
- KWL strategy: Teachers should teach the students to be aware of (1) what they already Know, (2) what they Want to learn, and (3) what they have eventually Learnt.

The discussion above amounts to a constructivism-based paradigm shift in the teaching-learning process of the mathematics classroom, from objectivist learning theory to constructivist learning theory:
- Teacher-centered → Student-centered
- Teacher as expert and information giver → Teacher as facilitator, guide, and coach
- Teacher as knowledge transmitter → Learner as knowledge constructor
- Teacher in control → Learner in control
- Focus on whole-class teaching → Focus on individual and group learning

The NCF 2005 and 2009 documents clearly state that the constructivist approach is a better strategy than the behaviouristic approach. Child-centered education is the new paradigm shift in education, and it is well served by a constructivism-based approach. Teacher training in this regard is a must; otherwise this pedagogic approach will fail. Of course it consumes time, so patience will play the key role in the successful adoption of constructivism for teaching mathematics in the classroom.

- Bhatia, R. P. (2009), "Features and Effectiveness of E-learning Tools", Perspectives in Education, 25(3).
- Caprio, M. W. (1994), "Easing into constructivism, connecting meaningful learning with student experience", Journal of College Science Teaching, 23(4), 210–212.
- Chambers, P. (2010), Teaching Mathematics: Developing as a Reflective Secondary Teacher, Sage, New Delhi.
- Dewey, J. (1933), How We Think: A Restatement of the Relation of Reflective Thinking to the Educative Process, Chicago: Henry Regnery.
- Etuk, E. N. et al. (2011), "Constructivist Instructional Strategy", Bulgarian Journal of Science and Education Policy, 5(1).
- Mathews, M. R. (2000), Editorial of the monographic issue on Constructivism, Epistemology and the Learning of Science, Science & Education, 9(3).
- Vygotsky, L. S. (1986), Thought and Language, Cambridge, Massachusetts: MIT Press.
African-American Civil Rights Movement (1954–68)
- Location: United States, especially the South
- Goals: End of racial segregation
- Methods: Nonviolence, direct action, voter registration, boycotts, civil resistance, civil disobedience, community education
- Result: Civil Rights Act of 1964; Voting Rights Act of 1965; Fair Housing Act of 1968

The African-American Civil Rights Movement, or 1960s Civil Rights Movement, encompasses social movements in the United States whose goals were to end racial segregation and discrimination against black Americans and to secure legal recognition and federal protection of the citizenship rights enumerated in the Constitution and federal law. This article covers the phase of the movement between 1954 and 1968, particularly in the South. The leadership was African-American; much of the political and financial support came from labor unions (led by Walter Reuther), major religious denominations, and prominent white politicians such as Hubert Humphrey and Lyndon B. Johnson.

The movement was characterized by major campaigns of civil resistance. Between 1955 and 1968, acts of nonviolent protest and civil disobedience produced crisis situations and productive dialogues between activists and government authorities. Federal, state, and local governments, businesses, and communities often had to respond immediately to these situations, which highlighted the inequities faced by African Americans. Forms of protest and civil disobedience included boycotts, such as the successful Montgomery Bus Boycott (1955–56) in Alabama; "sit-ins" such as the influential Greensboro sit-ins (1960) in North Carolina; marches, such as the Selma to Montgomery marches (1965) in Alabama; and a wide range of other nonviolent activities.

Noted legislative achievements during this phase of the civil rights movement were passage of the Civil Rights Act of 1964, which banned discrimination based on "race, color, religion, or national origin" in employment practices and public accommodations; the Voting Rights Act of 1965, which restored and protected voting rights; the Immigration and Nationality Act of 1965, which dramatically opened entry to the U.S. to immigrants other than traditional Northern European and Germanic groups; and the Fair Housing Act of 1968, which banned discrimination in the sale or rental of housing. African Americans re-entered politics in the South, and across the country young people were inspired to take action.

A wave of inner-city riots in black communities from 1964 through 1970 undercut support from the white community. The emergence of the Black Power movement, which lasted from about 1966 to 1975, challenged the established black leadership for its cooperative attitude and its nonviolence, and instead demanded political and economic self-sufficiency. While most popular representations of the movement are centered on the leadership and philosophy of Martin Luther King Jr., many scholars note that the movement was far too diverse to be credited to one person, organization, or strategy. Sociologist Doug McAdam has stated that, "in King's case, it would be inaccurate to say that he was the leader of the modern civil rights movement...but more importantly, there was no singular civil rights movement.
The movement was, in fact, a coalition of thousands of local efforts nationwide, spanning several decades, hundreds of discrete groups, and all manner of strategies and tactics—legal, illegal, institutional, non-institutional, violent, non-violent. Without discounting King's importance, it would be sheer fiction to call him the leader of what was fundamentally an amorphous, fluid, dispersed movement."

Background

Following the American Civil War, three constitutional amendments were passed: the 13th Amendment (1865), which ended slavery; the 14th Amendment (1868), which gave African Americans citizenship, adding their total population of four million to the official population of southern states for Congressional apportionment; and the 15th Amendment (1870), which gave African-American males the right to vote (only males could vote in the U.S. at the time). From 1865 to 1877, the United States underwent a turbulent Reconstruction Era, trying to establish free labor and the civil rights of freedmen in the South after the end of slavery. Many whites resisted the social changes, leading to insurgent movements such as the Ku Klux Klan, whose members attacked black and white Republicans to maintain white supremacy. In 1871, President Ulysses S. Grant, the U.S. Army, and U.S. Attorney General Amos T. Akerman initiated a campaign to repress the KKK under the Enforcement Acts. Some states were reluctant to enforce the federal measures of the act; by the early 1870s, other white supremacist groups had arisen that violently opposed African-American legal equality and suffrage.
After the disputed election of 1876 resulted in the end of Reconstruction and federal troops were withdrawn, whites in the South regained political control of the region's state legislatures by the end of the century, after having intimidated and violently attacked blacks during elections, and after briefly losing power to a biracial fusionist coalition of Populists and Republicans late in the century. From 1890 to 1908, southern states passed new constitutions and laws to disfranchise African Americans by creating barriers to voter registration; voting rolls were dramatically reduced as blacks were forced out of electoral politics. While progress was made in some areas, this status lasted in most southern states until national civil rights legislation was passed in the mid-1960s to provide federal enforcement of constitutional voting rights.

For more than 60 years, blacks in the South were not able to elect anyone to represent their interests in Congress or local government. Since they could not vote, they could not serve on local juries. During this period, the white-dominated Democratic Party maintained political control of the South. Because whites controlled all the seats representing the total population of the South, they had a powerful voting bloc in Congress. The Republican Party—the "party of Lincoln"—which had been the party that most blacks belonged to, shrank to insignificance as black voter registration was suppressed. Until 1965, the "solid South" was a one-party system under the Democrats. Outside a few areas (usually in remote Appalachia), the Democratic Party nomination was tantamount to election for state and local office.

In 1901, President Theodore Roosevelt invited Booker T. Washington to dine at the White House, making him the first African American to attend an official dinner there. "The invitation was roundly criticized by southern politicians and newspapers." Washington persuaded the president to appoint more blacks to federal posts in the South and to try to boost African-American leadership in state Republican organizations. However, this was resisted by both white Democrats and white Republicans as an unwanted federal intrusion into state politics.

At the same time as African Americans were being disenfranchised, white Democrats imposed racial segregation by law. Violence against blacks increased, with numerous lynchings through the turn of the century. The system of de jure state-sanctioned racial discrimination and oppression that emerged from the post-Reconstruction South became known as the "Jim Crow" system. The United States Supreme Court upheld the constitutionality of state laws requiring racial segregation in public facilities in its 1896 decision Plessy v. Ferguson, legitimizing them through the "separate but equal" doctrine. Segregation remained intact into the mid-1950s, when many states began to gradually integrate their schools following the Supreme Court decision in Brown v. Board of Education, which rejected the doctrine of Plessy v. Ferguson as applied to public schools.

The early 20th century is a period often referred to as the "nadir of American race relations". While problems and civil rights violations were most intense in the South, social discrimination and tensions affected African Americans in other regions as well. At the national level, the Southern bloc controlled important committees in Congress, defeated passage of laws against lynching, and exercised considerable power beyond what the number of whites in the South would have warranted.
Characteristics of the post-Reconstruction period:
- Racial segregation. By law, public facilities and government services such as education were divided into separate "white" and "colored" domains. Characteristically, those for colored people were underfunded and of inferior quality.
- Disenfranchisement. When white Democrats regained power, they passed laws that made voter registration more restrictive, essentially forcing black voters off the voting rolls. The number of African-American voters dropped dramatically, and they were no longer able to elect representatives. From 1890 to 1908, Southern states of the former Confederacy created constitutions with provisions that disfranchised tens of thousands of African Americans; states such as Alabama disfranchised poor whites as well.
- Exploitation. Increased economic oppression of blacks, Latinos, and Asians; denial of economic opportunities; and widespread employment discrimination.
- Violence. Individual, police, paramilitary, organizational, and mob racial violence against blacks (and Latinos in the Southwest and Asians in California).

African Americans and other ethnic minorities rejected this regime. They resisted it in numerous ways and sought better opportunities through lawsuits, new organizations, political redress, and labor organizing (see the African-American Civil Rights Movement (1896–1954)). The National Association for the Advancement of Colored People (NAACP) was founded in 1909. It fought to end race discrimination through litigation, education, and lobbying efforts. Its crowning achievement was its legal victory in the Supreme Court decision Brown v. Board of Education (1954), when the Court rejected separate white and colored school systems and, by implication, overturned the "separate but equal" doctrine established in Plessy v. Ferguson of 1896.

The integration of Southern public libraries involved many of the same features seen in the larger Civil Rights Movement, including sit-ins, beatings, and white resistance. For example, in 1963 in the city of Anniston, Alabama, two black ministers were brutally beaten for attempting to integrate the public library. Though there was resistance and violence, the integration of libraries was generally quicker than that of other public institutions.

Black veterans of the military after both World Wars pressed for full civil rights and often led activist movements. In 1948 they gained integration of the military under President Harry Truman, who issued Executive Order 9981 to accomplish it.

The situation for blacks outside the South was somewhat better (in most states they could vote and have their children educated, though they still faced discrimination in housing and jobs). From 1910 to 1970, African Americans sought better lives by migrating north and west out of the South. Nearly seven million blacks left the South in what was known as the Great Migration. So many people migrated that the demographics of some previously black-majority states changed to white majority (in combination with other developments).

Housing segregation was a nationwide problem, persisting well outside the South. Although the federal government had become increasingly involved in mortgage lending and development in the 1930s and 40s, it did not reject the use of race-restrictive covenants until 1950. Suburbanization was already connected with white flight by this time, a situation perpetuated by real estate agents' continuing discrimination.
In particular, from the 1930s to the 1960s the National Association of Real Estate Boards (NAREB) issued guidelines specifying that a realtor "should never be instrumental in introducing to a neighborhood a character of property or occupancy, members of any race or nationality, or any individual whose presence will be clearly detrimental to property values in a neighborhood."

Invigorated by the victory of Brown and frustrated by the lack of immediate practical effect, private citizens increasingly rejected gradualist, legalistic approaches as the primary tool to bring about desegregation. They were faced with "massive resistance" in the South by proponents of racial segregation and voter suppression. In defiance, African-American activists adopted a combined strategy of direct action, nonviolence, nonviolent resistance, and many events described as civil disobedience, giving rise to the African-American Civil Rights Movement of 1954–1968.

Mass action replacing litigation

The strategy of public education, legislative lobbying, and litigation that had typified the civil rights movement during the first half of the 20th century broadened after Brown to a strategy that emphasized "direct action": primarily boycotts, sit-ins, Freedom Rides, marches, and similar tactics that relied on mass mobilization, nonviolent resistance, and civil disobedience. This mass action approach typified the movement from 1960 to 1968. Churches, local grassroots organizations, fraternal societies, and black-owned businesses mobilized volunteers to participate in broad-based actions. This was a more direct and potentially more rapid means of creating change than the traditional approach of mounting court challenges used by the NAACP and others.

In 1952, the Regional Council of Negro Leadership (RCNL), led by T. R. M. Howard, a black surgeon, entrepreneur, and planter, organized a successful boycott of gas stations in Mississippi that refused to provide restrooms for blacks. Through the RCNL, Howard led campaigns to expose brutality by the Mississippi state highway patrol and to encourage blacks to make deposits in the black-owned Tri-State Bank of Memphis, which, in turn, gave loans to civil rights activists who were victims of a "credit squeeze" by the White Citizens' Councils.

After Rosa Parks' arrest in December 1955, Jo Ann Gibson Robinson of the Montgomery Women's Political Council seized the opportunity to put a long-considered bus boycott protest in motion. Late that night, she, two students, and John Cannon, chairman of the Business Department at Alabama State, mimeographed and distributed approximately 52,500 leaflets calling for a boycott of the buses. The first day of the boycott having been successful, King and other civic and religious leaders created the Montgomery Improvement Association so as to continue the Montgomery Bus Boycott. The MIA managed to keep the boycott going for over a year, until a federal court order required Montgomery to desegregate its buses. The success in Montgomery made its leader, Dr. Martin Luther King Jr., a nationally known figure. It also inspired other bus boycotts, such as the highly successful Tallahassee, Florida, boycott of 1956–57.

In 1957, Dr. King and Rev. Ralph Abernathy, the leaders of the Montgomery Improvement Association, joined with other church leaders who had led similar boycott efforts, such as Rev. C. K. Steele of Tallahassee and Rev. T. J. Jemison of Baton Rouge, and other activists such as Rev. Fred Shuttlesworth, Ella Baker, A.
Philip Randolph, Bayard Rustin, and Stanley Levison, to form the Southern Christian Leadership Conference. The SCLC, with its headquarters in Atlanta, Georgia, did not attempt to create a network of chapters as the NAACP did. It offered training and leadership assistance for local efforts to fight segregation. The headquarters organization raised funds, mostly from Northern sources, to support such campaigns. It made nonviolence both its central tenet and its primary method of confronting racism.

In 1959, Septima Clark, Bernice Robinson, and Esau Jenkins, with the help of Myles Horton's Highlander Folk School in Tennessee, began the first Citizenship Schools in South Carolina's Sea Islands. They taught literacy to enable blacks to pass voting tests. The program was an enormous success and tripled the number of black voters on Johns Island. SCLC took over the program and duplicated its results elsewhere.

Brown v. Board of Education, 1954

In the spring of 1951, black students in Virginia protested their unequal status in the state's segregated educational system. Students at Moton High School protested the overcrowded conditions and failing facility. Some local leaders of the NAACP had tried to persuade the students to back down from their protest against the Jim Crow laws of school segregation. When the students did not budge, the NAACP joined their battle against school segregation. The NAACP proceeded with five cases challenging the school systems; these were later combined under what is known today as Brown v. Board of Education.

On May 17, 1954, the U.S. Supreme Court handed down its decision in the case called Brown v. Board of Education of Topeka, Kansas, in which the plaintiffs charged that the education of black children in public schools separate from their white counterparts was unconstitutional. The Court stated that segregation of white and colored children in public schools has a detrimental effect upon the colored children, and that the impact is greater when it has the sanction of the law, for the policy of separating the races is usually interpreted as denoting the inferiority of the Negro group.

The lawyers from the NAACP had to gather plausible evidence in order to win the case of Brown v. Board of Education. Their way of addressing the issue of school segregation was to enumerate several arguments. One pertained to exposure to interracial contact in a school environment: it was argued that such contact would help children cope with the pressures that society exerts in regard to race, and thereby give them a better chance of living in a democracy. Another referred to the emphasis on how "'education' comprehends the entire process of developing and training the mental, physical and moral powers and capabilities of human beings". Risa Goluboff wrote that the NAACP's intention was to show the Court that African-American children were the victims of school segregation and that their futures were at risk. The Court held that both Plessy v. Ferguson (1896), which had established the "separate but equal" standard in general, and Cumming v. Richmond County Board of Education (1899), which had applied that standard to schools, were unconstitutional as applied to public education.

The federal government filed a friend-of-the-court brief in the case urging the justices to consider the effect that segregation had on America's image in the Cold War.
Secretary of State Dean Acheson was quoted in the brief stating that "The United States is under constant attack in the foreign press, over the foreign radio, and in such international bodies as the United Nations because of various practices of discrimination in this country." The following year, in the case known as Brown II, the Court ordered segregation to be phased out over time, "with all deliberate speed". Strictly speaking, Brown v. Board of Education of Topeka, Kansas (1954) did not overturn Plessy v. Ferguson (1896): Plessy concerned segregation in transportation, while Brown dealt with segregation in education. Brown did, however, set in motion the eventual overturning of "separate but equal".

On May 18, 1954, Greensboro, North Carolina, became the first city in the South to publicly announce that it would abide by the Supreme Court's Brown v. Board of Education ruling. "It is unthinkable," remarked School Board Superintendent Benjamin Smith, "that we will try to [override] the laws of the United States." This positive reception for Brown, together with the appointment of African American Dr. David Jones to the school board in 1953, convinced numerous white and black citizens that Greensboro was heading in a progressive direction. Integration in Greensboro occurred rather peacefully compared to the process in Southern states such as Alabama, Arkansas, and Virginia, where "massive resistance" was practiced by top officials and throughout the states. In Virginia, some counties closed their public schools rather than integrate, and many white Christian private schools were founded to accommodate students who used to go to public schools. Even in Greensboro, much local resistance to desegregation continued, and in 1969 the federal government found the city was not in compliance with the 1964 Civil Rights Act. The transition to a fully integrated school system did not begin until 1971.

Many Northern cities also had de facto segregation policies, which resulted in a vast gulf in educational resources between black and white communities. In Harlem, New York, for example, not a single new school had been built since the turn of the century, nor did a single nursery school exist, even as the Second Great Migration was causing overcrowding. Existing schools tended to be dilapidated and staffed with inexperienced teachers. Brown helped stimulate activism among New York City parents like Mae Mallory who, with the support of the NAACP, initiated a successful lawsuit against the city and state on Brown's principles. Mallory and thousands of other parents bolstered the pressure of the lawsuit with a school boycott in 1959. During the boycott, some of the first freedom schools of the period were established. The city responded to the campaign by permitting more open transfers to high-quality, historically white schools. (New York's African-American community, and Northern desegregation activists generally, now found themselves contending with the problem of white flight, however.)

Rosa Parks and the Montgomery Bus Boycott, 1955–1956

Civil rights leaders focused on Montgomery, Alabama, to highlight extreme forms of segregation there. On December 1, 1955, local black leader Rosa Parks refused to give up her seat on a public bus to make room for a white passenger; she was arrested and received national publicity, later hailed as the "mother of the civil rights movement."
Parks was secretary of the Montgomery NAACP chapter and had recently returned from a meeting at the Highlander Center in Tennessee, where nonviolent civil disobedience had been taught as a strategy. African Americans gathered and organized the Montgomery Bus Boycott to demand a bus system in which passengers would be treated equally. After the city rejected many of their suggested reforms, the NAACP, led by E. D. Nixon, pushed for full desegregation of public buses. With the support of most of Montgomery's 50,000 African Americans, the boycott lasted for 381 days, until the local ordinance segregating African Americans and whites on public buses was repealed. Ninety percent of African Americans in Montgomery took part in the boycott, which reduced bus revenue significantly, as they comprised the majority of the riders. In November 1956, a federal court ordered Montgomery's buses desegregated and the boycott ended.

Local leaders had established the Montgomery Improvement Association to focus their efforts, and Martin Luther King Jr. was elected president of the organization. The lengthy protest attracted national attention for him and the city. His eloquent appeals to Christian brotherhood and American idealism created a positive impression on people both inside and outside the South.

Desegregating Little Rock Central High School, 1957

A crisis erupted in Little Rock, Arkansas, when Arkansas Governor Orval Faubus called out the National Guard on September 4 to prevent entry to the nine African-American students who had sued for the right to attend an integrated school, Little Rock Central High School. The nine students had been chosen to attend Central High because of their excellent grades. On the first day of school, only one of the nine students showed up, because she had not received the phone call about the danger of going to school. She was harassed by white protesters outside the school, and the police had to take her away in a patrol car to protect her. Afterward, the nine students had to carpool to school and be escorted by military personnel in jeeps.

Faubus was not a proclaimed segregationist. The Arkansas Democratic Party, which then controlled politics in the state, put significant pressure on Faubus after he had indicated he would investigate bringing Arkansas into compliance with the Brown decision. Faubus then took his stand against integration and against the federal court ruling. Faubus' resistance received the attention of President Dwight D. Eisenhower, who was determined to enforce the orders of the federal courts. Critics had charged he was lukewarm, at best, on the goal of desegregation of public schools. But Eisenhower federalized the National Guard in Arkansas and ordered its members to return to their barracks. Eisenhower then deployed elements of the 101st Airborne Division to Little Rock to protect the students.

The students attended high school under harsh conditions. They had to pass through a gauntlet of spitting, jeering whites to arrive at school on their first day, and to put up with harassment from other students for the rest of the year. Although federal troops escorted the students between classes, the students were teased and even attacked by white students when the soldiers were not around. One of the Little Rock Nine, Minnijean Brown, was suspended for spilling a bowl of chili on the head of a white student who was harassing her in the school lunch line. Later, she was expelled for verbally abusing a white female student.
Only Ernest Green of the Little Rock Nine graduated from Central High School. After the 1957–58 school year was over, Little Rock closed its public school system completely rather than continue to integrate. Other school systems across the South followed suit.

Robert F. Williams and the debate on nonviolence, 1959–1964

The Jim Crow system employed "terror as a means of social control," with the most organized manifestations being the Ku Klux Klan and their collaborators in local police departments. This violence played a key role in blocking the progress of the civil rights movement in the late 1950s. Some black organizations in the South began practicing armed self-defense. The first to do so openly was the Monroe, North Carolina, chapter of the NAACP, led by Robert F. Williams. Williams had rebuilt the chapter after its membership was terrorized out of public life by the Klan. He did so by encouraging a new, more working-class membership to arm itself thoroughly and defend against attack. When Klan nightriders attacked the home of NAACP member Dr. Albert Perry in October 1957, Williams' militia exchanged gunfire with the stunned Klansmen, who quickly retreated. The following day, the city council held an emergency session and passed an ordinance banning KKK motorcades. One year later, Lumbee Indians in North Carolina had a similarly successful armed stand-off with the Klan (known as the Battle of Hayes Pond), which resulted in KKK leader James W. "Catfish" Cole being convicted of incitement to riot.

After the acquittal of several white men charged with sexually assaulting black women in Monroe, Williams announced to United Press International reporters that he would "meet violence with violence" as a policy. Williams' declaration was quoted on the front page of The New York Times, and The Carolina Times considered it "the biggest civil rights story of 1959." NAACP National chairman Roy Wilkins immediately suspended Williams from his position, but the Monroe organizer won support from numerous NAACP chapters across the country. Ultimately, Wilkins resorted to bribing influential organizer Daisy Bates to campaign against Williams at the NAACP national convention, and the suspension was upheld. The convention nonetheless passed a resolution which stated: "We do not deny, but reaffirm the right of individual and collective self-defense against unlawful assaults." Martin Luther King Jr. argued for Williams' removal, but Ella Baker and W. E. B. Du Bois both publicly praised the Monroe leader's position.

Robert F. Williams – along with his wife, Mabel Williams – continued to play a leadership role in the Monroe movement, and to some degree, in the national movement. The Williamses published The Crusader, a nationally circulated newsletter, beginning in 1960, and the influential book Negroes With Guns in 1962. Williams did not call for full militarization in this period, but for "flexibility in the freedom struggle." Williams was well versed in legal tactics and publicity, which he had used successfully in the internationally known "Kissing Case" of 1958, as well as in nonviolent methods, which he used at lunch counter sit-ins in Monroe – all with armed self-defense as a complementary tactic.

Williams led the Monroe movement in another armed stand-off with white supremacists during an August 1961 Freedom Ride; he had been invited to participate in the campaign by Ella Baker and James Forman of the Student Nonviolent Coordinating Committee (SNCC).
The incident (along with his campaigns for peace with Cuba) resulted in his being targeted by the FBI and prosecuted for kidnapping; he was cleared of all charges in 1976. Meanwhile, armed self-defense continued discreetly in the Southern movement, with figures such as SNCC's Amzie Moore, Hartman Turnbow, and Fannie Lou Hamer all willing to use arms to defend their lives from nightriders. Taking refuge from the FBI in Cuba, the Williamses broadcast the radio show Radio Free Dixie throughout the eastern United States via Radio Progreso beginning in 1962. In this period, Williams advocated guerrilla warfare against racist institutions, and saw the large ghetto riots of the era as a manifestation of his strategy.

University of North Carolina historian Walter Rucker has written that "the emergence of Robert F. Williams contributed to the marked decline in anti-black racial violence in the US…After centuries of anti-black violence, African-Americans across the country began to defend their communities aggressively – employing overt force when necessary. This in turn evoked in whites real fear of black vengeance…" This opened up space for African Americans to use nonviolent demonstration with less fear of deadly reprisal. Of the many civil rights activists who shared this view, the most prominent was Rosa Parks. Parks gave the eulogy at Williams' funeral in 1996, praising him for "his courage and for his commitment to freedom," and concluding that "The sacrifices he made, and what he did, should go down in history and never be forgotten."

Sit-ins, 1958–1960

In July 1958, the NAACP Youth Council sponsored sit-ins at the lunch counter of a Dockum Drug Store in downtown Wichita, Kansas. After three weeks, the movement successfully got the store to change its policy of segregated seating, and soon afterward all Dockum stores in Kansas were desegregated. This movement was quickly followed in the same year by a student sit-in at a Katz Drug Store in Oklahoma City, led by Clara Luper, which also was successful.

Students from area colleges, mostly black, led a sit-in at a Woolworth's store in Greensboro, North Carolina. On February 1, 1960, four students, Ezell A. Blair Jr., David Richmond, Joseph McNeil, and Franklin McCain from North Carolina Agricultural & Technical College, an all-black college, sat down at the segregated lunch counter to protest Woolworth's policy of excluding African Americans from being served there. The four students purchased small items in other parts of the store and kept their receipts, then sat down at the lunch counter and asked to be served. After being denied service, they produced their receipts and asked why their money was good everywhere else at the store, but not at the lunch counter.

The protesters had been encouraged to dress professionally, to sit quietly, and to occupy every other stool so that potential white sympathizers could join in. The Greensboro sit-in was quickly followed by other sit-ins in Richmond, Virginia; Nashville, Tennessee; and Atlanta, Georgia. The most immediately effective of these was in Nashville, where hundreds of well-organized and highly disciplined college students conducted sit-ins in coordination with a boycott campaign. As students across the South began to "sit in" at the lunch counters of local stores, police and other officials sometimes used brute force to physically escort the demonstrators from the lunch facilities.
The "sit-in" technique was not new—as far back as 1939, African-American attorney Samuel Wilbert Tucker organized a sit-in at the then-segregated Alexandria, Virginia library. In 1960 the technique succeeded in bringing national attention to the movement. On March 9, 1960 an Atlanta University Center group of students released An Appeal for Human Rights as a full page advertisement in newspapers, including the Atlanta Constitution, Atlanta Journal, and Atlanta Daily World. Known as the Committee on the Appeal for Human Rights (COAHR), the group initiated the Atlanta Student Movement and began to lead sit-ins starting on March 15, 1960. By the end of 1960, the proces of sit-ins had spread to every southern and border state, and even to facilities in Nevada, Illinois, and Ohio that discriminated against blacks. Demonstrators focused not only on lunch counters but also on parks, beaches, libraries, theaters, museums, and other public facilities. In April 1960 activists who had led these sit-ins were invited by SCLC activist Ella Baker to hold a conference at Shaw University, a historically black university in Raleigh, North Carolina. This conference led to the formation of the Student Nonviolent Coordinating Committee (SNCC). SNCC took these tactics of nonviolent confrontation further, and organized the freedom rides. As the constitution protected interstate commerce, they decided to challenge segregation on interstate buses and in public bus facilities by putting interracial teams on them, to travel from the North through the segregated South. Freedom Rides, 1961 Freedom Rides were journeys by Civil Rights activists on interstate buses into the segregated southern United States to test the United States Supreme Court decision Boynton v. Virginia, (1960) 364 U.S., which ruled that segregation was unconstitutional for passengers engaged in interstate travel. Organized by CORE, the first Freedom Ride of the 1960s left Washington D.C. on May 4, 1961, and was scheduled to arrive in New Orleans on May 17. During the first and subsequent Freedom Rides, activists traveled through the Deep South to integrate seating patterns on buses and desegregate bus terminals, including restrooms and water fountains. That proved to be a dangerous mission. In Anniston, Alabama, one bus was firebombed, forcing its passengers to flee for their lives. In Birmingham, Alabama, an FBI informant reported that Public Safety Commissioner Eugene "Bull" Connor gave Ku Klux Klan members fifteen minutes to attack an incoming group of freedom riders before having police "protect" them. The riders were severely beaten "until it looked like a bulldog had got a hold of them." James Peck, a white activist, was beaten so badly that he required fifty stitches to his head. In a similar occurrence in Montgomery, Alabama, the Freedom Riders followed in the footsteps of Rosa Parks and rode an integrated Greyhound bus from Birmingham. Although they were protesting interstate bus segregation in peace, they were met with violence in Montgomery as a large, white mob attacked them for their activism. They caused an enormous, 2-hour long riot which resulted in 22 injuries, five of whom were hospitalized. Mob violence in Anniston and Birmingham temporarily halted the rides. SNCC activists from Nashville brought in new riders to continue the journey from Birmingham to New Orleans. 
In Montgomery, Alabama, at the Greyhound Bus Station, a mob charged another busload of riders, knocking John Lewis unconscious with a crate and smashing Life photographer Don Urbrock in the face with his own camera. A dozen men surrounded James Zwerg, a white student from Fisk University, and beat him in the face with a suitcase, knocking out his teeth.

On May 24, 1961, the freedom riders continued their rides into Jackson, Mississippi, where they were arrested for "breaching the peace" by using "white only" facilities. New freedom rides were organized by many different organizations and continued to flow into the South. As riders arrived in Jackson, they were arrested. By the end of summer, more than 300 had been jailed in Mississippi.

- "...When the weary Riders arrive in Jackson and attempt to use 'white only' restrooms and lunch counters they are immediately arrested for Breach of Peace and Refusal to Obey an Officer. Says Mississippi Governor Ross Barnett in defense of segregation: 'The Negro is different because God made him different to punish him.' From lockup, the Riders announce 'Jail No Bail' — they will not pay fines for unconstitutional arrests and illegal convictions — and by staying in jail they keep the issue alive. Each prisoner will remain in jail for 39 days, the maximum time they can serve without loosing [sic] their right to appeal the unconstitutionality of their arrests, trials, and convictions. After 39 days, they file an appeal and post bond..."

The jailed freedom riders were treated harshly, crammed into tiny, filthy cells and sporadically beaten. In Jackson, some male prisoners were forced to do hard labor in 100-degree heat. Others were transferred to the Mississippi State Penitentiary at Parchman, where they were subjected to harsh conditions. Sometimes the men were suspended by "wrist breakers" from the walls. Typically, the windows of their cells were shut tight on hot days, making it hard for them to breathe.

Public sympathy and support for the freedom riders led John F. Kennedy's administration to order the Interstate Commerce Commission (ICC) to issue a new desegregation order. When the new ICC rule took effect on November 1, 1961, passengers were permitted to sit wherever they chose on the bus; "white" and "colored" signs came down in the terminals; separate drinking fountains, toilets, and waiting rooms were consolidated; and lunch counters began serving people regardless of skin color.

The student movement involved such celebrated figures as John Lewis, a single-minded activist; James Lawson, the revered "guru" of nonviolent theory and tactics; Diane Nash, an articulate and intrepid public champion of justice; Bob Moses, pioneer of voter registration in Mississippi; and James Bevel, a fiery preacher and charismatic organizer and facilitator. Other prominent student activists included Charles McDew, Bernard Lafayette, Charles Jones, Lonnie King, Julian Bond, Hosea Williams, and Stokely Carmichael.

Voter registration organizing

After the Freedom Rides, local black leaders in Mississippi such as Amzie Moore, Aaron Henry, and Medgar Evers asked SNCC to help register black voters and to build community organizations that could win a share of political power in the state. Mississippi's new constitution, ratified in 1890 with provisions such as poll taxes, residency requirements, and literacy tests, had made registration more complicated and stripped blacks from the voter rolls and from voting.
In addition, violence at the time of elections had earlier suppressed black voting. By the mid-20th century, preventing blacks from voting had become an essential part of the culture of white supremacy. In the fall of 1961, SNCC organizer Robert Moses began the first voter registration project in McComb and the surrounding counties in the southwest corner of the state. Their efforts were met with violent repression from state and local lawmen, the White Citizens' Council, and the Ku Klux Klan. Activists were beaten, there were hundreds of arrests of local citizens, and the voting activist Herbert Lee was murdered.

White opposition to black voter registration was so intense in Mississippi that Freedom Movement activists concluded that all of the state's civil rights organizations had to unite in a coordinated effort to have any chance of success. In February 1962, representatives of SNCC, CORE, and the NAACP formed the Council of Federated Organizations (COFO). At a subsequent meeting in August, SCLC became part of COFO.

In the spring of 1962, with funds from the Voter Education Project, SNCC/COFO began voter registration organizing in the Mississippi Delta area around Greenwood and in the areas surrounding Hattiesburg, Laurel, and Holly Springs. As in McComb, their efforts were met with fierce opposition—arrests, beatings, shootings, arson, and murder. Registrars used the literacy test to keep blacks off the voting rolls by creating standards that even highly educated people could not meet. In addition, employers fired blacks who tried to register, and landlords evicted them from their rental homes. Despite these actions, over the following years the black voter registration campaign spread across the state.

Similar voter registration campaigns—with similar responses—were begun by SNCC, CORE, and SCLC in Louisiana, Alabama, southwest Georgia, and South Carolina. By 1963, voter registration campaigns in the South were as integral to the Freedom Movement as desegregation efforts. After passage of the Civil Rights Act of 1964, protecting and facilitating voter registration despite state barriers became the main effort of the movement. It resulted in passage of the Voting Rights Act of 1965, which had provisions to enforce the constitutional right to vote for all citizens.

Integration of Mississippi universities, 1956–65

Beginning in 1956, Clyde Kennard, a black Korean War veteran, sought to enroll at Mississippi Southern College (now the University of Southern Mississippi) in Hattiesburg under the GI Bill. Dr. William David McCain, the college president, used the Mississippi State Sovereignty Commission to prevent his enrollment, appealing to local black leaders and the segregationist state political establishment. The state-funded organization tried to counter the civil rights movement by positively portraying segregationist policies. More significantly, it collected data on activists, harassed them legally, and used economic boycotts against them, threatening their jobs (or causing them to lose their jobs), to try to suppress their work.

Kennard was twice arrested on trumped-up charges, and eventually convicted and sentenced to seven years in the state prison. After three years at hard labor, Kennard was paroled by Mississippi Governor Ross Barnett. Journalists had investigated his case and publicized the state's mistreatment of his colon cancer.
McCain's role in Kennard's arrests and convictions is unknown. While trying to prevent Kennard's enrollment, McCain made a speech in Chicago, with his travel sponsored by the Mississippi State Sovereignty Commission. He described blacks seeking to desegregate Southern schools as "imports" from the North. (Kennard was a native and resident of Hattiesburg.) McCain said:

We insist that educationally and socially, we maintain a segregated society. ... In all fairness, I admit that we are not encouraging Negro voting ... The Negroes prefer that control of the government remain in the white man's hands.

Note: Mississippi had passed a new constitution in 1890 that effectively disfranchised most blacks by changing electoral and voter registration requirements; although it deprived them of constitutional rights authorized under post-Civil War amendments, it survived US Supreme Court challenges at the time. It was not until after passage of the 1965 Voting Rights Act that most blacks in Mississippi and other southern states gained federal protection to enforce the constitutional right of citizens to vote.

In September 1962, James Meredith won a lawsuit to secure admission to the previously segregated University of Mississippi. He attempted to enter campus on September 20, on September 25, and again on September 26. He was blocked by Mississippi Governor Ross Barnett, who said, "[N]o school will be integrated in Mississippi while I am your Governor." The Fifth U.S. Circuit Court of Appeals held Barnett and Lieutenant Governor Paul B. Johnson, Jr. in contempt, ordering them arrested and fined more than $10,000 for each day they refused to allow Meredith to enroll.

Attorney General Robert Kennedy sent in a force of U.S. Marshals. On September 30, 1962, Meredith entered the campus under their escort. Students and other whites began rioting that evening, throwing rocks and firing on the U.S. Marshals guarding Meredith at Lyceum Hall. Two people, including a French journalist, were killed; 28 marshals suffered gunshot wounds; and 160 others were injured. President John F. Kennedy sent regular US Army forces to the campus to quell the riot. Meredith began classes the day after the troops arrived.

Kennard and other activists continued to work on public university desegregation. In 1965, Raylawni Branch and Gwendolyn Elaine Armstrong became the first African-American students to attend the University of Southern Mississippi. This time, McCain helped ensure they had a peaceful entry. In 2006, Judge Robert Helfrich ruled that Kennard was factually innocent of all charges for which he had been convicted in the 1950s.

Albany Movement, 1961–62

The SCLC, which had been criticized by some student activists for its failure to participate more fully in the freedom rides, committed much of its prestige and resources to a desegregation campaign in Albany, Georgia, in November 1961. King, who had been criticized personally by some SNCC activists for his distance from the dangers that local organizers faced—and given the derisive nickname "De Lawd" as a result—intervened personally to assist the campaign led by both SNCC organizers and local leaders.

The campaign was a failure because of the canny tactics of Laurie Pritchett, the local police chief, and divisions within the black community; the goals may also not have been specific enough. Pritchett contained the marchers without the kind of violent attacks on demonstrators that would have inflamed national opinion.
He also arranged for arrested demonstrators to be taken to jails in surrounding communities, leaving plenty of room in his own jail. Pritchett also saw King's presence as a danger and forced his release to avoid King's rallying the black community. King left in 1962 without having achieved any dramatic victories. The local movement, however, continued the struggle, and it obtained significant gains in the next few years.

Birmingham Campaign, 1963

The Albany movement proved to be an important education for the SCLC, however, when it undertook the Birmingham campaign in 1963. Executive Director Wyatt Tee Walker carefully planned the early strategy and tactics for the campaign. It focused on one goal, the desegregation of Birmingham's downtown merchants, rather than total desegregation as in Albany. The movement's efforts were helped by the brutal response of local authorities, in particular Eugene "Bull" Connor, the Commissioner of Public Safety. He had long held much political power but had lost a recent election for mayor to a less rabidly segregationist candidate. Refusing to accept the new mayor's authority, Connor intended to stay in office.

The campaign used a variety of nonviolent methods of confrontation, including sit-ins, kneel-ins at local churches, and a march to the county building to mark the beginning of a drive to register voters. The city, however, obtained an injunction barring all such protests. Convinced that the order was unconstitutional, the campaign defied it and prepared for mass arrests of its supporters. King elected to be among those arrested on April 12, 1963.

While in jail, King wrote his famous "Letter from Birmingham Jail" on the margins of a newspaper, since he had not been allowed any writing paper while held in solitary confinement. Supporters appealed to the Kennedy administration, which intervened to obtain King's release. King was allowed to call his wife, who was recuperating at home after the birth of their fourth child, and was released early on April 19.

The campaign, however, faltered as it ran out of demonstrators willing to risk arrest. James Bevel, SCLC's Director of Direct Action and Director of Nonviolent Education, then came up with a bold and controversial alternative: to train high school students to take part in the demonstrations. As a result, in what would be called the Children's Crusade, more than one thousand students skipped school on May 2 to meet at the 16th Street Baptist Church to join the demonstrations. More than six hundred marched out of the church fifty at a time in an attempt to walk to City Hall to speak to Birmingham's mayor about segregation. They were arrested and put into jail. In this first encounter the police acted with restraint. On the next day, however, another one thousand students gathered at the church. When Bevel started them marching fifty at a time, Bull Connor finally unleashed police dogs on them and then turned the city's fire hoses on the children. National television networks broadcast the scenes of the dogs attacking demonstrators and the water from the fire hoses knocking down the schoolchildren.

Widespread public outrage led the Kennedy administration to intervene more forcefully in negotiations between the white business community and the SCLC.
On May 10, the parties announced an agreement to desegregate the lunch counters and other public accommodations downtown, to create a committee to eliminate discriminatory hiring practices, to arrange for the release of jailed protesters, and to establish regular means of communication between black and white leaders. Not everyone in the black community approved of the agreement; the Rev. Fred Shuttlesworth was particularly critical, since his experience in dealing with Birmingham's power structure had left him skeptical about its good faith.

Parts of the white community reacted violently. They bombed the Gaston Motel, which housed the SCLC's unofficial headquarters, and the home of King's brother, the Reverend A. D. King. In response, thousands of blacks rioted, burning numerous buildings and stabbing a police officer. Kennedy prepared to federalize the Alabama National Guard if the need arose. Four months later, on September 15, a conspiracy of Ku Klux Klan members bombed the Sixteenth Street Baptist Church in Birmingham, killing four young girls.

"Rising tide of discontent" and Kennedy's response, 1963

Birmingham was only one of over a hundred cities rocked by chaotic protest that spring and summer, some of them in the North. During the March on Washington, Martin Luther King would refer to such protests as "the whirlwinds of revolt." In Chicago, blacks rioted through the South Side in late May after a white police officer shot a fourteen-year-old black boy who was fleeing the scene of a robbery. Violent clashes between black activists and white workers took place in both Philadelphia and Harlem in successful efforts to integrate state construction projects. On June 6, over a thousand whites attacked a sit-in in Lexington, North Carolina; blacks fought back, and one white man was killed. Edwin C. Berry of the National Urban League warned of a complete breakdown in race relations: "My message from the beer gardens and the barbershops all indicate the fact that the Negro is ready for war."

In Cambridge, Maryland, a working-class city on the Eastern Shore, Gloria Richardson of SNCC led a movement that pressed for desegregation but also demanded low-rent public housing, job training, public and private jobs, and an end to police brutality. On June 14, struggles between blacks and whites escalated to the point where local authorities declared martial law, and Attorney General Robert F. Kennedy directly intervened to negotiate a desegregation agreement. Richardson felt that the increasing participation of poor and working-class blacks was expanding both the power and parameters of the movement, asserting that "The people as a whole really do have more intelligence than a few of their leaders."

In their deliberations during this wave of protests, the Kennedy administration privately felt that militant demonstrations were "bad for the country" and that "Negroes are going to push this thing too far." On May 24, Robert Kennedy had a meeting with prominent black intellectuals to discuss the racial situation. They criticized Kennedy harshly for vacillating on civil rights and said that the African-American community's thoughts were increasingly turning to violence. The meeting ended with ill will on all sides. Nonetheless, the Kennedys ultimately decided that new legislation for equal public accommodations was essential to drive activists "into the courts and out of the streets."
On June 11, 1963, George Wallace, Governor of Alabama, tried to block the integration of the University of Alabama. President John F. Kennedy sent a military force to make Governor Wallace step aside, allowing the enrollment of Vivian Malone Jones and James Hood. That evening, President Kennedy addressed the nation on TV and radio with his historic civil rights speech, in which he lamented "a rising tide of discontent that threatens the public safety." He called on Congress to pass new civil rights legislation and urged the country to embrace civil rights as "a moral issue...in our daily lives." In the early hours of June 12, Medgar Evers, field secretary of the Mississippi NAACP, was assassinated by a member of the Klan. The next week, as promised, on June 19, 1963, President Kennedy submitted his civil rights bill to Congress.

March on Washington, 1963

A. Philip Randolph had planned a march on Washington, D.C. in 1941 to support demands for elimination of employment discrimination in defense industries; he called off the march when the Roosevelt administration met the demand by issuing Executive Order 8802, barring racial discrimination and creating an agency to oversee compliance with the order.

Randolph and Bayard Rustin were the chief planners of the second march, which they proposed in 1962. In 1963, the Kennedy administration initially opposed the march out of concern it would negatively impact the drive for passage of civil rights legislation. However, Randolph and King were firm that the march would proceed. With the march going forward, the Kennedys decided it was important to work to ensure its success. Concerned about the turnout, President Kennedy enlisted the aid of additional church leaders and the UAW union to help mobilize demonstrators for the cause.

The march was held on August 28, 1963. Unlike the planned 1941 march, for which Randolph included only black-led organizations in the planning, the 1963 march was a collaborative effort of all of the major civil rights organizations, the more progressive wing of the labor movement, and other liberal organizations. The march had six official goals:

- meaningful civil rights laws
- a massive federal works program
- full and fair employment
- decent housing
- the right to vote
- adequate integrated education.

Of these, the march's major focus was on passage of the civil rights law that the Kennedy administration had proposed after the upheavals in Birmingham.

National media attention also greatly contributed to the march's national exposure and probable impact. In his section "The March on Washington and Television News," William Thomas notes: "Over five hundred cameramen, technicians, and correspondents from the major networks were set to cover the event. More cameras would be set up than had filmed the last presidential inauguration. One camera was positioned high in the Washington Monument, to give dramatic vistas of the marchers." By carrying the organizers' speeches and offering their own commentary, television stations framed the way their local audiences saw and understood the event.

The march was a success, although not without controversy. An estimated 200,000 to 300,000 demonstrators gathered in front of the Lincoln Memorial, where King delivered his famous "I Have a Dream" speech.
While many speakers applauded the Kennedy administration for the efforts it had made toward obtaining new, more effective civil rights legislation protecting the right to vote and outlawing segregation, John Lewis of SNCC took the administration to task for not doing more to protect southern blacks and civil rights workers under attack in the Deep South. After the march, King and other civil rights leaders met with President Kennedy at the White House. While the Kennedy administration appeared sincerely committed to passing the bill, it was not clear that it had the votes in Congress to do so. However, when President Kennedy was assassinated on November 22, 1963, the new president, Lyndon Johnson, decided to use his influence in Congress to bring about much of Kennedy's legislative agenda.

Malcolm X joins the movement, 1964–1965

In March 1964, Malcolm X (Malik El-Shabazz), national representative of the Nation of Islam, formally broke with that organization and made a public offer to collaborate with any civil rights organization that accepted the right to self-defense and the philosophy of Black nationalism (which Malcolm said no longer required Black separatism). Gloria Richardson, head of the Cambridge, Maryland, chapter of SNCC, leader of the Cambridge rebellion, and an honored guest at the March on Washington, immediately embraced Malcolm's offer. Mrs. Richardson, "the nation's most prominent woman [civil rights] leader," told The Baltimore Afro-American that "Malcolm is being very practical…The federal government has moved into conflict situations only when matters approach the level of insurrection. Self-defense may force Washington to intervene sooner." Earlier, in May 1963, James Baldwin had stated publicly that "the Black Muslim movement is the only one in the country we can call grassroots, I hate to say it…Malcolm articulates for Negroes, their suffering…he corroborates their reality..." On the local level, Malcolm and the NOI had been allied with the Harlem chapter of the Congress of Racial Equality (CORE) since at least 1962.

On March 26, 1964, as the Civil Rights Act was facing stiff opposition in Congress, Malcolm had a public meeting with Martin Luther King Jr. at the Capitol building. Malcolm had attempted to begin a dialogue with Dr. King as early as 1957, but King had rebuffed him. Malcolm had responded by calling King an "Uncle Tom" who turned his back on black militancy in order to appease the white power structure. However, the two men were on good terms at their face-to-face meeting. There is evidence that King was preparing to support Malcolm's plan to formally bring the US government before the United Nations on charges of human rights violations against African-Americans. Malcolm now encouraged Black nationalists to get involved in voter registration drives and other forms of community organizing to redefine and expand the movement.

Civil rights activists became increasingly combative in the 1963 to 1964 period, owing to events such as the thwarting of the Albany campaign, police repression and Ku Klux Klan terrorism in Birmingham, and the assassination of Medgar Evers. Mississippi NAACP Field Director Charles Evers, Medgar Evers' brother, told a public NAACP conference on February 15, 1964, that "non-violence won't work in Mississippi…we made up our minds…that if a white man shoots at a Negro in Mississippi, we will shoot back."
The repression of sit-ins in Jacksonville, Florida, provoked a riot in which black youth threw Molotov cocktails at police on March 24, 1964. Malcolm X gave extensive speeches in this period warning that such militant activity would escalate further if African-Americans' rights were not fully recognized. In his landmark April 1964 speech "The Ballot or the Bullet", Malcolm presented an ultimatum to white America: "There's new strategy coming in. It'll be Molotov cocktails this month, hand grenades next month, and something else next month. It'll be ballots, or it'll be bullets."

As noted in Eyes on the Prize, "Malcolm X had a far reaching effect on the civil rights movement. In the South, there had been a long tradition of self reliance. Malcolm X's ideas now touched that tradition". Self-reliance was becoming paramount in light of the 1964 Democratic National Convention's decision to refuse seating to the Mississippi Freedom Democratic Party (MFDP) and instead to seat the state delegation, which had been elected through Jim Crow laws in violation of the party's rules. SNCC moved in an increasingly militant direction and worked with Malcolm X on two Harlem MFDP fundraisers in December 1964. When Fannie Lou Hamer spoke to Harlemites about the Jim Crow violence that she'd suffered in Mississippi, she linked it directly to the Northern police brutality against blacks that Malcolm protested against. When Malcolm asserted that African-Americans should emulate the Mau Mau army of Kenya in efforts to gain their independence, many in SNCC applauded.

During the Selma campaign for voting rights in 1965, Malcolm made it known that he'd heard reports of increased threats of lynching around Selma, and responded in late January with an open telegram to George Lincoln Rockwell, the head of the American Nazi Party, stating: "if your present racist agitation against our people there in Alabama causes physical harm to Reverend King or any other black Americans…you and your KKK friends will be met with maximum physical retaliation from those of us who are not handcuffed by the disarming philosophy of nonviolence." The following month, the Selma chapter of SNCC invited Malcolm to speak to a mass meeting there. On the day of Malcolm's appearance, President Johnson made his first public statement in support of the Selma campaign. Paul Ryan Haygood, a co-director of the NAACP Legal Defense Fund, credits Malcolm with a role in stimulating the responsiveness of the federal government. Haygood noted that "shortly after Malcolm's visit to Selma, a federal judge, responding to a suit brought by the Department of Justice, required Dallas County registrars to process at least 100 Black applications each day their offices were open."

St. Augustine, Florida, 1963–64

St. Augustine, on the northeast coast of Florida, was famous as the "Nation's Oldest City," founded by the Spanish in 1565. It became the stage for a great drama leading up to the passage of the landmark Civil Rights Act of 1964. A local movement, led by Dr. Robert B. Hayling, a black dentist and Air Force veteran, and affiliated with the NAACP, had been picketing segregated local institutions since 1963. As a result, Dr. Hayling and three companions, James Jackson, Clyde Jenkins, and James Hauser, were brutally beaten at a Ku Klux Klan rally in the fall of that year. Nightriders shot into black homes, and teenagers Audrey Nell Edwards, JoeAnn Anderson, Samuel White, and Willie Carl Singleton (who came to be known as "The St.
Augustine Four") spent six months in jail and reform school after sitting in at the local Woolworth's lunch counter. It took a special action of the governor and cabinet of Florida to release them, after national protests by the Pittsburgh Courier, Jackie Robinson, and others.

In response to the repression, the St. Augustine movement practiced armed self-defense in addition to nonviolent direct action. In June 1963, Dr. Hayling publicly stated that "I and the others have armed. We will shoot first and answer questions later. We are not going to die like Medgar Evers." The comment made national headlines. When Klan nightriders terrorized black neighborhoods in St. Augustine, Hayling's NAACP members often drove them off with gunfire, and in October, a Klansman was killed.

In 1964, Dr. Hayling and other activists urged the Southern Christian Leadership Conference to come to St. Augustine. The first action came during spring break, when Hayling appealed to northern college students to come to the Ancient City, not to go to the beach, but to take part in demonstrations. Four prominent Massachusetts women, Mrs. Mary Parkman Peabody, Mrs. Esther Burgess, Mrs. Hester Campbell (all of whose husbands were Episcopal bishops), and Mrs. Florence Rowe (whose husband was vice president of John Hancock Insurance Company), came to lend their support. The arrest of Mrs. Peabody, the 72-year-old mother of the governor of Massachusetts, for attempting to eat at the segregated Ponce de Leon Motor Lodge in an integrated group made front-page news across the country and brought the civil rights movement in St. Augustine to the attention of the world.

Widely publicized activities continued in the ensuing months, as Congress saw the longest filibuster against a civil rights bill in its history. Dr. Martin Luther King, Jr. was arrested at the Monson Motel in St. Augustine on June 11, 1964, the only place in Florida he was arrested. He sent a "Letter from the St. Augustine Jail" to a northern supporter, Rabbi Israel Dresner of New Jersey, urging him to recruit others to participate in the movement. This resulted, a week later, in the largest mass arrest of rabbis in American history, when a group of rabbis was arrested while conducting a pray-in at the Monson. A famous photograph taken in St. Augustine shows the manager of the Monson Motel pouring acid into the swimming pool while blacks and whites were swimming in it. The horrifying photograph ran on the front page of a Washington newspaper the day the Senate voted on passage of the Civil Rights Act of 1964.

Mississippi Freedom Summer, 1964

In the summer of 1964, COFO brought nearly 1,000 activists to Mississippi—most of them white college students—to join with local black activists to register voters, teach in "Freedom Schools," and organize the Mississippi Freedom Democratic Party (MFDP). Many of Mississippi's white residents deeply resented the outsiders and attempts to change their society. State and local governments, police, the White Citizens' Council, and the Ku Klux Klan used arrests, beatings, arson, murder, spying, firing, evictions, and other forms of intimidation and harassment to oppose the project and prevent blacks from registering to vote or achieving social equality.

On June 21, 1964, three civil rights workers disappeared.
The three were James Chaney, a young black Mississippian and plasterer's apprentice, and two Jewish activists: Andrew Goodman, a Queens College anthropology student, and Michael Schwerner, a CORE organizer from Manhattan's Lower East Side. They were found weeks later, murdered by conspirators who turned out to be local members of the Klan, some of them members of the Neshoba County sheriff's department. This outraged the public and led the U.S. Justice Department, along with the FBI (which had previously avoided dealing with the issue of segregation and the persecution of blacks), to take action. The outrage over these murders helped lead to the passage of the Civil Rights Act. (See Mississippi civil rights workers murders for details.)

From June to August, Freedom Summer activists worked in 38 local projects scattered across the state, with the largest number concentrated in the Mississippi Delta region. At least 30 Freedom Schools, with close to 3,500 students, were established, and 28 community centers were set up.

Over the course of the Summer Project, some 17,000 Mississippi blacks attempted to become registered voters in defiance of the red tape and forces of white supremacy arrayed against them; only 1,600 (less than 10%) succeeded. But more than 80,000 joined the Mississippi Freedom Democratic Party (MFDP), founded as an alternative political organization, showing their desire to vote and participate in politics.

Though Freedom Summer failed to register many voters, it had a significant effect on the course of the Civil Rights Movement. It helped break down the decades of isolation and repression that were the foundation of the Jim Crow system. Before Freedom Summer, the national news media had paid little attention to the persecution of black voters in the Deep South and the dangers endured by black civil rights workers. The progression of events throughout the South increased media attention to Mississippi. The deaths of affluent northern white students and threats to other northerners attracted the full attention of the media spotlight to the state. Many black activists became embittered, believing the media valued the lives of whites and blacks differently. Perhaps the most significant effect of Freedom Summer was on the volunteers, almost all of whom—black and white—still consider it to have been one of the defining periods of their lives.

Civil Rights Act of 1964

Although President Kennedy had proposed civil rights legislation and it had support from Northern Congressmen and Senators of both parties, Southern Senators blocked the bill by threatening filibusters. After considerable parliamentary maneuvering and 54 days of filibuster on the floor of the United States Senate, President Johnson got a bill through Congress. On July 2, 1964, he signed the Civil Rights Act of 1964, which banned discrimination based on "race, color, religion, sex or national origin" in employment practices and public accommodations. The bill authorized the Attorney General to file lawsuits to enforce the new law. The law also nullified state and local laws that required such discrimination.

Mississippi Freedom Democratic Party, 1964

Blacks in Mississippi had been disfranchised by statutory and constitutional changes since the late 19th century. In 1963, COFO held a Freedom Vote in Mississippi to demonstrate the desire of black Mississippians to vote.
More than 80,000 people registered and voted in the mock election, which pitted an integrated slate of candidates from the "Freedom Party" against the official state Democratic Party candidates. In 1964, organizers launched the Mississippi Freedom Democratic Party (MFDP) to challenge the all-white official party. When Mississippi voting registrars refused to recognize their candidates, they held their own primary. They selected Fannie Lou Hamer, Annie Devine, and Victoria Gray to run for Congress, and a slate of delegates to represent Mississippi at the 1964 Democratic National Convention.

The presence of the Mississippi Freedom Democratic Party in Atlantic City, New Jersey, was inconvenient, however, for the convention organizers. They had planned a triumphant celebration of the Johnson administration's achievements in civil rights, rather than a fight over racism within the Democratic Party. All-white delegations from other Southern states threatened to walk out if the official slate from Mississippi was not seated. Johnson was worried about the inroads that Republican Barry Goldwater's campaign was making in what previously had been the white Democratic stronghold of the "Solid South", as well as the support that George Wallace had received in the North during the Democratic primaries.

Johnson could not, however, prevent the MFDP from taking its case to the Credentials Committee. There Fannie Lou Hamer testified eloquently about the beatings that she and others endured and the threats they faced for trying to register to vote. Turning to the television cameras, Hamer asked, "Is this America?" Johnson offered the MFDP a "compromise" under which it would receive two non-voting, at-large seats, while the white delegation sent by the official Democratic Party would retain its seats. The MFDP angrily rejected the "compromise."

The MFDP kept up its agitation at the convention after it was denied official recognition. When all but three of the "regular" Mississippi delegates left because they refused to pledge allegiance to the party, the MFDP delegates borrowed passes from sympathetic delegates and took the seats vacated by the official Mississippi delegates. National party organizers removed them. When they returned the next day, they found that convention organizers had removed the empty seats that had been there the day before. They stayed and sang "freedom songs".

The 1964 Democratic Party convention disillusioned many within the MFDP and the Civil Rights Movement, but it did not destroy the MFDP. The MFDP became more radical after Atlantic City. It invited Malcolm X to speak at one of its conventions and opposed the war in Vietnam.

Boycott of New Orleans by American Football League players, January 1965

After the 1964 professional American Football League season, the AFL All-Star Game had been scheduled for early 1965 in New Orleans' Tulane Stadium. After numerous black players were refused service by a number of New Orleans hotels and businesses, and white cabdrivers refused to carry black passengers, black and white players alike lobbied for a boycott of New Orleans. Under the leadership of Buffalo Bills players, including Cookie Gilchrist, the players put up a unified front, and the game was moved to Jeppesen Stadium in Houston. The discriminatory practices that prompted the boycott were illegal under the Civil Rights Act of 1964, which had been signed in July 1964; the new law likely encouraged the AFL players in their cause.
It was the first time a professional sports league's players had boycotted an entire city.

Selma Voting Rights Movement and the Voting Rights Act, 1965

SNCC had undertaken an ambitious voter registration program in Selma, Alabama, in 1963, but by 1965 had made little headway in the face of opposition from Selma's sheriff, Jim Clark. After local residents asked the SCLC for assistance, King came to Selma to lead several marches, at which he was arrested along with 250 other demonstrators. The marchers continued to meet violent resistance from police. Jimmie Lee Jackson, a resident of nearby Marion, was killed by police at a later march on February 17, 1965. Jackson's death prompted James Bevel, director of the Selma Movement, to initiate a plan to march from Selma to Montgomery, the state capital.

On March 7, 1965, acting on Bevel's plan, Hosea Williams of the SCLC and John Lewis of SNCC led a march of 600 people intending to walk the 54 miles (87 km) from Selma to the state capital in Montgomery. Only six blocks into the march, at the Edmund Pettus Bridge, state troopers and local law enforcement, some mounted on horseback, attacked the peaceful demonstrators with billy clubs, tear gas, rubber tubes wrapped in barbed wire, and bullwhips. They drove the marchers back into Selma. John Lewis was knocked unconscious and dragged to safety. At least 16 other marchers were hospitalized. Among those gassed and beaten was Amelia Boynton Robinson, who was at the center of civil rights activity at the time.

The national broadcast of the news footage of lawmen attacking unresisting marchers seeking to exercise their constitutional right to vote provoked a national response, as the scenes from Birmingham had two years earlier. The marchers were able to obtain a court order permitting them to make the march without incident two weeks later. After a second march on March 9 to the site of Bloody Sunday, local whites attacked Rev. James Reeb, another voting rights supporter. He died of his injuries in a Birmingham hospital on March 11. On March 25, four Klansmen shot and killed Detroit homemaker Viola Liuzzo as she drove marchers back to Selma at night after the successfully completed march to Montgomery.

Eight days after the first march, President Johnson delivered a televised address to support the voting rights bill he had sent to Congress. In it he stated:

But even if we pass this bill, the battle will not be over. What happened in Selma is part of a far larger movement which reaches into every section and state of America. It is the effort of American Negroes to secure for themselves the full blessings of American life. Their cause must be our cause too. Because it is not just Negroes, but really it is all of us, who must overcome the crippling legacy of bigotry and injustice. And we shall overcome.

Johnson signed the Voting Rights Act of 1965 on August 6. The 1965 act suspended poll taxes, literacy tests, and other subjective voter registration tests. It authorized federal supervision of voter registration in states and individual voting districts where such tests were being used. African Americans who had been barred from registering to vote finally had an alternative to taking suits to local or state courts, which had seldom decided such cases in their favor. If discrimination in voter registration occurred, the 1965 act authorized the Attorney General of the United States to send federal examiners to replace local registrars.
Johnson reportedly told associates of his concern that signing the bill had lost the white South as voters for the Democratic Party for the foreseeable future.

The act had an immediate and positive effect for African Americans. Within months of its passage, 250,000 new black voters had been registered, one third of them by federal examiners. Within four years, voter registration in the South had more than doubled. In 1965, Mississippi had the highest black voter turnout at 74% and led the nation in the number of black public officials elected. In 1969, Tennessee had a 92.1% turnout among black voters; Arkansas, 77.9%; and Texas, 73.1%.

Several whites who had opposed the Voting Rights Act paid a quick price. In 1966, Sheriff Jim Clark of Alabama, infamous for using cattle prods against civil rights marchers, was up for reelection. Although he took the notorious "Never" pin off his uniform, he was defeated, as blacks voted to get him out of office. Clark later served a prison term for drug dealing.

Blacks' regaining the power to vote changed the political landscape of the South. When Congress passed the Voting Rights Act, only about 100 African Americans held elective office, all in northern states. By 1989, there were more than 7,200 African Americans in office, including more than 4,800 in the South. Nearly every Black Belt county (where populations were majority black) in Alabama had a black sheriff. Southern blacks held top positions in city, county, and state governments. Atlanta elected a black mayor, Andrew Young, as did Jackson, Mississippi, with Harvey Johnson, Jr., and New Orleans, with Ernest Morial. Black politicians on the national level included Barbara Jordan, elected as a Representative from Texas, and Andrew Young, appointed United States Ambassador to the United Nations by President Jimmy Carter. Julian Bond was elected to the Georgia State Legislature in 1965, although political reaction to his public opposition to U.S. involvement in the Vietnam War prevented him from taking his seat until 1967. John Lewis has represented Georgia's 5th congressional district in the United States House of Representatives since 1987.

Fair housing movements, 1966–1968

The first major blow against housing segregation in the era, the Rumford Fair Housing Act, was passed in California in 1963. It was overturned the following year, when white California voters and real estate lobbyists passed Proposition 14, a move that helped precipitate the Watts riots. In 1966, the California Supreme Court invalidated Proposition 14 and reinstated the Rumford Fair Housing Act.

Struggles for fair housing laws became a major project of the movement over the next two years, with Martin Luther King Jr. leading the Chicago Freedom Movement around the issue in 1966. In the following year, Father James Groppi and the NAACP Youth Council also attracted national attention with a fair housing campaign in Milwaukee. Both movements faced violent mob resistance from white homeowners and legal opposition from conservative politicians. The fair housing bill was the most contentious civil rights legislation of the era. Senator Walter Mondale, who advocated for the bill, noted that over successive years it was the most filibustered legislation in US history. It was opposed by most Northern and Southern senators, as well as the National Association of Real Estate Boards. A proposed "Civil Rights Act of 1966" had collapsed completely because of its fair housing provision.
Mondale commented:

- A lot of civil rights [legislation] was about making the South behave and taking the teeth from George Wallace, [but] this came right to the neighborhoods across the country. This was civil rights getting personal.

Memphis, King assassination and the Poor People's March, 1968

Rev. James Lawson invited King to Memphis, Tennessee, in March 1968 to support a sanitation workers' strike. These workers had launched a campaign for union representation after two workers were accidentally killed on the job, and King considered their struggle to be a vital part of the Poor People's Campaign he was planning. A day after delivering his stirring "I've Been to the Mountaintop" sermon, which has become famous for his vision of American society, King was assassinated on April 4, 1968. Riots broke out in black neighborhoods in more than 110 cities across the United States in the days that followed, notably in Chicago, Baltimore, and Washington, D.C. The damage done in many cities destroyed black businesses and homes, and slowed economic development for a generation.

The day before King's funeral, April 8, Coretta Scott King and three of the King children led 20,000 marchers through the streets of Memphis, holding signs that read, "Honor King: End Racism" and "Union Justice Now". Armed National Guardsmen lined the streets, sitting on M-48 tanks, to protect the marchers, and helicopters circled overhead. On April 9, Mrs. King led another 150,000 people in a funeral procession through the streets of Atlanta. Her dignity revived courage and hope in many of the Movement's members, cementing her place as the new leader in the struggle for racial equality. Coretta Scott King said,

[Martin Luther King, Jr.] gave his life for the poor of the world, the garbage workers of Memphis and the peasants of Vietnam. The day that Negro people and others in bondage are truly free, on the day want is abolished, on the day wars are no more, on that day I know my husband will rest in a long-deserved peace.

Rev. Ralph Abernathy succeeded King as the head of the SCLC and attempted to carry forward King's plan for a Poor People's March, which was to unite blacks and whites in a campaign for fundamental changes in American society and economic structure. The march went forward under Abernathy's plainspoken leadership but did not achieve its goals.

Civil Rights Act of 1968

As 1968 began, the fair housing bill was being filibustered once again, but two developments revived it. The Kerner Commission report on the 1967 ghetto riots was delivered to Congress on March 1, and it strongly recommended "a comprehensive and enforceable federal open housing law" as a remedy to the civil disturbances. The Senate was moved to end its filibuster that week. The second development was the assassination of Martin Luther King, Jr. on April 4 and the riots that followed. Some Senators and Representatives publicly stated they would not be intimidated or rushed into legislating because of the disturbances. Nevertheless, the news coverage of the riots and the underlying disparities in income, jobs, housing, and education between white and black Americans helped educate citizens and Congress about the stark reality of an enormous social problem. Members of Congress knew they had to act to redress these imbalances in American life to fulfill the dream that King had so eloquently preached. The House passed the legislation on April 10, and President Johnson signed it the next day.
The Civil Rights Act of 1968 prohibited discrimination in the sale, rental, and financing of housing based on race, religion, or national origin. It also made it a federal crime to "by force or by threat of force, injure, intimidate, or interfere with anyone … by reason of their race, color, religion, or national origin."

Despite the common notion that the ideas of Martin Luther King, Jr., Malcolm X, and Black Power simply conflicted with one another and were the only ideologies of the Civil Rights Movement, many blacks held other views. Some, fearing that events were moving too quickly, felt that leaders should take their activism at a slower pace. Others had reservations about how focused blacks were on the movement and felt that such attention was better spent on reforming issues within the black community. Those who openly rejected integration usually had a rationale for doing so, such as fearing a change in the status quo they had been used to for so long, or fearing for their safety in environments where whites were much more present. However, there were also those who defended segregation for the sake of keeping ties with the white power structure, on which many relied for social and economic mobility above other blacks.

Based on her interpretation of a 1966 study by Donald Matthews and James Prothro detailing the relative percentages of blacks who favored integration, opposed it, or felt something else, Lauren Winner asserts that:

Black defenders of segregation look, at first blush, very much like black nationalists, especially in their preference for all-black institutions; but black defenders of segregation differ from nationalists in two key ways. First, while both groups criticize NAACP-style integration, nationalists articulate a third alternative to integration and Jim Crow, while segregationists preferred to stick with the status quo. Second, absent from black defenders of segregation's political vocabulary was the demand for self-determination. They called for all-black institutions, but not autonomous all-black institutions; indeed, some defenders of segregation asserted that black people needed white paternalism and oversight in order to thrive.

Oftentimes, African-American community leaders were staunch defenders of segregation. Church ministers, businessmen, and educators were among those who wished to keep segregation and segregationist ideals in order to retain the privileges, such as monetary gains, that they derived from white patronage. In addition, they relied on segregation to keep their jobs and the economies of their communities thriving. It was feared that if integration became widespread in the South, black-owned businesses and other establishments would lose a large chunk of their customer base to white-owned businesses, and many blacks would lose opportunities for jobs that were presently exclusive to their interests.

Everyday, average black people criticized integration as well. They took issue with different parts of the Civil Rights Movement and with the potential for blacks to exercise consumerism and economic liberty without hindrance from whites. For Martin Luther King, Jr., Malcolm X, and other leading activists and groups during the movement, these opposing viewpoints acted as an obstacle to their ideas.
These different views made such leaders' work much harder to accomplish, but they were nonetheless important in the overall scope of the movement. For the most part, the black individuals who had reservations about various aspects of the movement and the ideologies of the activists were not able to make a game-changing dent in their efforts, but the existence of these alternate ideas gave some blacks an outlet to express their concerns about the changing social structure.

Avoiding the "Communist" label

On December 17, 1951, the Communist Party–affiliated Civil Rights Congress delivered to the United Nations the petition We Charge Genocide: "The Crime of Government Against the Negro People", often shortened to We Charge Genocide, arguing that the U.S. federal government, by its failure to act against lynching in the United States, was guilty of genocide under Article II of the UN Genocide Convention. The petition was presented to the United Nations at two separate venues: Paul Robeson, concert singer and activist, presented it to a UN official in New York City, while William L. Patterson, executive director of the CRC, delivered copies of the drafted petition to a UN delegation in Paris.

Patterson, the editor of the petition, was a leader in the Communist Party USA and head of the International Labor Defense, a group that offered legal representation to communists, trade unionists, and African-Americans in cases involving issues of political or racial persecution. The ILD was known for leading the defense of the Scottsboro boys in Alabama in 1931. The Communist Party had considerable influence among African Americans in the 1930s; this had largely declined by the late 1950s, although the party could still command international attention.

As earlier civil rights figures such as Robeson, Du Bois, and Patterson became more politically radical (and therefore targets of Cold War anti-Communism by the U.S. government), they lost favor with both mainstream Black America and the NAACP. In order to secure a place in the mainstream and gain the broadest base, the new generation of civil rights activists believed they had to openly distance themselves from anything and anyone associated with the Communist Party. According to Ella Baker, the Southern Christian Leadership Conference adopted "Christian" into its name to deter charges of Communism. The FBI under J. Edgar Hoover had been concerned about communism since the early 20th century, and it continued to label as "Communist" or "subversive" some of the civil rights activists, whom it kept under close surveillance. In the early 1960s, the practice of distancing the Civil Rights Movement from "Reds" was challenged by the Student Nonviolent Coordinating Committee, which adopted a policy of accepting assistance and participation from anyone, regardless of political affiliation, who supported the SNCC program and was willing to "put their body on the line." At times this political openness put SNCC at odds with the NAACP.

Kennedy administration, 1961–63

During the years preceding his election to the presidency, John F. Kennedy had compiled only a minimal voting record on issues of racial discrimination. Kennedy openly confessed to his closest advisors that during the first months of his presidency, his knowledge of the civil rights movement was "lacking". For the first two years of the Kennedy administration, civil rights activists had mixed opinions of both the president and the attorney general, Robert F. Kennedy. Many viewed the administration with suspicion.
A well of historical cynicism toward white liberal politics had left African Americans with a sense of uneasy disdain for any white politician who claimed to share their concerns for freedom. Still, many had a strong sense that the Kennedys represented a new age of political dialogue.

Although observers frequently use the phrases "the Kennedy administration" or "President Kennedy" when discussing the executive and legislative support of the Civil Rights movement between 1960 and 1963, many of the initiatives resulted from Robert Kennedy's passion. Through his rapid education in the realities of racism, Robert Kennedy underwent a thorough conversion of purpose as Attorney General. The president came to share his brother's sense of urgency on these matters; the Attorney General succeeded in urging the president to address the issue in a speech to the nation.

Robert Kennedy first became seriously concerned with civil rights in mid-May 1961, during the Freedom Rides, when photographs of the burning bus and the savage beatings in Anniston and Birmingham were broadcast around the world. They came at an especially embarrassing time, as President Kennedy was about to have a summit with the Soviet premier in Vienna. The White House was concerned with its image among the populations of newly independent nations in Africa and Asia, and Robert Kennedy responded with an address for Voice of America stating that great progress had been made on the issue of race relations. Meanwhile, behind the scenes, the administration worked to resolve the crisis with a minimum of violence and to prevent the Freedom Riders from generating a fresh crop of headlines that might divert attention from the president's international agenda. The Freedom Riders documentary notes that, "The back burner issue of civil rights had collided with the urgent demands of Cold War realpolitik."

On May 21, when a white mob attacked and burned the First Baptist Church in Montgomery, Alabama, where King was holding out with protesters, Robert Kennedy telephoned King to ask him to stay in the building until the U.S. Marshals and National Guard could secure the area. King proceeded to berate Kennedy for "allowing the situation to continue". King later publicly thanked Robert Kennedy for commanding the force that broke up the attack, which might otherwise have ended King's life.

With a very small majority in Congress, the president's ability to press ahead with legislation relied considerably on a balancing game with the Senators and Congressmen of the South. Without the support of Vice President Lyndon Johnson, a former Senator who had years of experience in Congress and longstanding relations there, many of the Attorney General's programs would not have progressed.

By late 1962, frustration at the slow pace of political change was balanced by the movement's strong support for legislative initiatives: housing rights, administrative representation across all US government departments, safe conditions at the ballot box, and pressure on the courts to prosecute racist criminals. King remarked at the end of the year:

This administration has reached out more creatively than its predecessors to blaze new trails, [notably in voting rights and government appointments]. Its vigorous young men [had launched] imaginative and bold forays [and displayed] a certain élan in the attention they give to civil-rights issues.
From squaring off against Governor George Wallace, to "tearing into" Vice President Johnson (for failing to desegregate areas of the administration), to threatening corrupt white Southern judges with disbarment, to desegregating interstate transport, Robert Kennedy came to be consumed by the Civil Rights movement. He continued to work on these social justice issues in his bid for the presidency in 1968.

On the night of Governor Wallace's capitulation to African-American enrollment at the University of Alabama, President Kennedy gave an address to the nation that marked the changing tide, an address that was to become a landmark for the ensuing change in political policy on civil rights. In it President Kennedy spoke of the need to act decisively and to act now:

"We preach freedom around the world, and we mean it, and we cherish our freedom here at home, but are we to say to the world, and much more importantly, to each other that this is the land of the free except for the Negroes; that we have no second-class citizens except Negroes; that we have no class or caste system, no ghettoes, no master race except with respect to Negroes? Now the time has come for this Nation to fulfill its promise. The events in Birmingham and elsewhere have so increased the cries for equality that no city or State or legislative body can prudently choose to ignore them." —President Kennedy

Assassination cut short the lives and careers of both of the Kennedy brothers and Dr. Martin Luther King, Jr. The essential groundwork of the Civil Rights Act of 1964 had been laid before John F. Kennedy was assassinated, and the dire need for political and administrative reform was driven home on Capitol Hill by the combined efforts of the Kennedy brothers, Dr. King and other leaders, and President Lyndon Johnson.

In 1966, Robert Kennedy undertook a tour of South Africa in which he championed the cause of the anti-apartheid movement. His tour gained international praise at a time when few politicians dared to entangle themselves in the politics of South Africa. Kennedy spoke out against the oppression of the black population and was welcomed by that population as though he were a visiting head of state. In an interview with LOOK Magazine he said:

At the University of Natal in Durban, I was told the church to which most of the white population belongs teaches apartheid as a moral necessity. A questioner declared that few churches allow black Africans to pray with the white because the Bible says that is the way it should be, because God created Negroes to serve. "But suppose God is black", I replied. "What if we go to Heaven and we, all our lives, have treated the Negro as an inferior, and God is there, and we look up and He is not white? What then is our response?" There was no answer. Only silence. —Robert Kennedy, LOOK Magazine

American Jewish community and the Civil Rights movement

Many in the Jewish community supported the Civil Rights Movement; statistically, Jews were one of the most actively involved non-black groups in the Movement. Many Jewish students worked in concert with African Americans for CORE, SCLC, and SNCC as full-time organizers and summer volunteers during the Civil Rights era. Jews made up roughly half of the white northern volunteers involved in the 1964 Mississippi Freedom Summer project and approximately half of the civil rights attorneys active in the South during the 1960s. Jewish leaders were arrested while heeding a call from Rev. Dr. Martin Luther King, Jr. in St.
Augustine, Florida, in June 1964, where the largest mass arrest of rabbis in American history took place at the Monson Motor Lodge—a nationally important civil rights landmark that was demolished in 2003 so that a Hilton Hotel could be built on the site. Abraham Joshua Heschel, a writer, rabbi, and professor of theology at the Jewish Theological Seminary of America in New York, was outspoken on the subject of civil rights. He marched arm-in-arm with Dr. King in the 1965 Selma to Montgomery march. In the Mississippi civil rights workers' murders of 1964, the two white activists killed, Andrew Goodman and Michael Schwerner, were both Jewish.

Brandeis University, the only nonsectarian Jewish-sponsored college or university in the world, created the Transitional Year Program (TYP) in 1968, in part in response to Rev. Dr. Martin Luther King's assassination. The faculty created it to renew the University's commitment to social justice. Recognizing Brandeis as a university with a commitment to academic excellence, these faculty members created a chance for disadvantaged students to participate in an empowering educational experience. The program began by admitting 20 black males. As it developed, it came to serve two groups of students. The first group consists of students whose secondary schooling experiences and/or home communities may have lacked the resources to foster adequate preparation for success at elite colleges like Brandeis; for example, their high schools do not offer AP or honors courses or high-quality laboratory experiences. Students selected had to have excelled in the curricula offered by their schools. The second group of students includes those whose life circumstances have created formidable challenges that required focus, energy, and skills that otherwise would have been devoted to academic pursuits. Some have served as heads of their households, others have worked full-time while attending high school full-time, and others have shown leadership in other ways.

While Jews were very active in the civil rights movement in the South, in the North many had a more strained relationship with African Americans. In communities experiencing white flight, racial rioting, and urban decay, Jewish Americans were often the last remaining whites in the communities most affected. With black militancy and the Black Power movement on the rise, black antisemitism increased, straining relations between blacks and Jews in Northern communities. In New York City, most notably, there was a major socioeconomic class difference in the perception of African Americans by Jews. Jews from better-educated, upper-middle-class backgrounds were often very supportive of African American civil rights activities, while Jews in poorer urban communities that were becoming increasingly minority were often less supportive, in large part because of more negative and violent interactions between the two groups.

Although large Jewish organizations such as the American Jewish Committee, the American Jewish Congress, and the ADL were actively involved in the Movement, many Jewish individuals in the Southern states who supported civil rights for African-Americans tended to keep a low profile on "the race issue" in order to avoid attracting the attention of the anti-black and antisemitic Ku Klux Klan. However, Klan groups exploited the issue of African-American integration and Jewish involvement in the struggle to launch acts of violent antisemitism.
As an example of this hatred, in one year alone, from November 1957 to October 1958, temples and other Jewish communal gathering places were bombed and desecrated in Atlanta, Nashville, Jacksonville, and Miami, and dynamite was found under synagogues in Birmingham, Charlotte, and Gastonia, North Carolina. Some rabbis received death threats, but there were no injuries in these outbursts of violence.

Fraying of alliances

King reached the height of his popular acclaim in 1964, when he was awarded the Nobel Peace Prize. His career after that point was filled with frustrating challenges. The liberal coalition that had gained passage of the Civil Rights Act of 1964 and the Voting Rights Act of 1965 began to fray. King was becoming more estranged from the Johnson administration, and in 1965 he broke with it by calling for peace negotiations and a halt to the bombing of Vietnam. He moved further left in the following years, speaking of the need for economic justice and thoroughgoing changes in American society; he believed change was needed beyond the civil rights gained by the movement.

King's attempts to broaden the scope of the Civil Rights Movement were halting and largely unsuccessful, however. He made several efforts in 1965 to take the Movement north to address issues of employment and housing discrimination. SCLC's campaign in Chicago publicly failed, as Chicago Mayor Richard J. Daley marginalized it by promising to "study" the city's problems. In 1966, white demonstrators holding "white power" signs in notoriously racist Cicero, a suburb of Chicago, threw stones at marchers demonstrating against housing segregation.

Race riots, 1963–70

By the end of World War II, more than half of the country's black population lived in Northern and Western industrial cities rather than Southern rural areas. Migrating to those cities for better job opportunities and education, and to escape legal segregation, African Americans often found segregation that existed in fact rather than in law. Although the Ku Klux Klan was no longer prevalent after the 1920s, other problems prevailed in northern cities by the 1960s. Beginning in the 1950s, deindustrialization and the restructuring of major industries (railroads, meatpacking, steel, and automobiles) markedly reduced the number of working-class jobs that had earlier provided middle-class incomes. As the last population to enter the industrial job market, blacks were disadvantaged by its collapse. At the same time, investment in highways and private development of suburbs in the postwar years had drawn many ethnic whites out of the cities to newer housing in expanding suburbs. Urban blacks who did not follow the middle class out of the cities became concentrated in the older housing of inner-city neighborhoods, among the poorest in most major cities. Because jobs in new service areas and parts of the economy were being created in the suburbs, unemployment was much higher in many black neighborhoods than in white ones, and crime was frequent. African Americans rarely owned the stores or businesses where they lived. Many were limited to menial or blue-collar jobs, although union organizing in the 1930s and 1940s had opened up good working environments for some. African Americans often made only enough money to live in dilapidated, privately owned tenements or poorly maintained public housing. They also attended schools that were often the worst academically in the city and that had fewer white students than in the decades before WWII.
The racial makeup of most major city police departments, largely ethnic white (especially Irish), was a major factor adding to racial tensions. Even a black neighborhood such as Harlem had a ratio of one black officer for every six white officers, and the majority-black city of Newark, New Jersey had only 145 blacks among its 1,322 police officers. These Northern police forces were largely composed of white ethnics, descendants of 19th-century immigrants (mainly Irish, Italian, and Eastern European officers) who had established their own power bases in the police departments and in territories in the cities. Some would routinely harass blacks with or without provocation.

Harlem riot of 1964

One of the first major race riots took place in Harlem, New York, in the summer of 1964. A white Irish-American police officer, Thomas Gilligan, shot 15-year-old James Powell, who was black, for allegedly charging him armed with a knife; it was later found that Powell was unarmed. A group of black citizens demanded Gilligan's suspension, and hundreds of young demonstrators marched peacefully to the 67th Street police station on July 17, 1964, the day after Powell's death. The police department did not suspend Gilligan. Although the precinct had promoted the NYPD's first black station commander, neighborhood residents were frustrated with racial inequalities. Rioting broke out, and Bedford-Stuyvesant, a major black neighborhood in Brooklyn, erupted next. That summer, rioting also broke out in Philadelphia for similar reasons.

In the aftermath of the riots of July 1964, the federal government funded a pilot program called Project Uplift, and thousands of young people in Harlem were given jobs during the summer of 1965. The project was inspired by a report generated by HARYOU called Youth in the Ghetto. HARYOU was given a major role in organizing the project, together with the National Urban League and nearly 100 smaller community organizations. Permanent jobs at living wages were still out of reach of many young black men.

Watts riot (1965)

In 1965, President Lyndon B. Johnson signed the Voting Rights Act, but the new law had no immediate effect on living conditions for blacks. A few days after the act became law, a riot broke out in the South Central Los Angeles neighborhood of Watts. Like Harlem, Watts was an impoverished neighborhood with very high unemployment, and its residents were supervised by a largely white police department that had a history of abuse against blacks. While arresting a young man for drunk driving, police officers argued with the suspect's mother before onlookers. The conflict triggered massive destruction of property through six days of rioting. Thirty-four people were killed and property valued at about $30 million was destroyed, making the Watts riot among the most expensive in American history.

With black militancy on the rise, ghetto residents directed acts of anger at the police. Black residents, growing tired of police brutality, continued to riot. Some young people joined groups such as the Black Panthers, whose popularity was based in part on their reputation for confronting police officers. Riots among blacks occurred in 1966 and 1967 in cities such as Atlanta, San Francisco, Oakland, Baltimore, Seattle, Tacoma, Cleveland, Cincinnati, Columbus, Newark, Chicago, New York City (specifically in Brooklyn, Harlem and the Bronx), and, worst of all, in Detroit.
Detroit riot of 1967

In Detroit, a small black middle class had begun to develop among those African-Americans who worked at unionized jobs in the automotive industry; these workers still contended with unsafe working conditions and racist practices, concerns which the United Auto Workers channeled into bureaucratic and ineffective grievance procedures. White mobs enforced the segregation of housing up through the 1960s: upon learning that a new homebuyer was black, whites would congregate outside the home, picketing, often breaking windows, committing arson, and attacking their new neighbors. Blacks who were not upwardly mobile were living in substandard conditions, subject to the same problems as African-Americans in Watts and Harlem. When white police officers shut down an illegal bar and arrested a large group of patrons during the hot summer, furious residents rioted. Blacks looted and destroyed property for five days, while National Guardsmen and federal troops patrolled the streets in tanks. Residents reported that police officers shot at black people before even determining whether the suspects were armed or dangerous. After five days, 41 people had been killed, hundreds injured, and thousands left homeless, and $40 to $45 million worth of damage had been caused.

State and local governments responded to the riot with a dramatic increase in minority hiring. In May 1968, Mayor Cavanagh appointed a Special Task Force on Police Recruitment and Hiring, and by July 1972, blacks made up 14 percent of the Detroit police force, more than double their percentage in 1967. The Michigan government used its reviews of contracts issued by the state to secure a 21 percent increase in nonwhite employment. In the aftermath of the turmoil, the Greater Detroit Board of Commerce launched a campaign to find jobs for ten thousand "previously unemployable" persons, a preponderant number of whom were black.

Prior to the disorder, Detroit had enacted no ordinances to end housing segregation, and few had been enacted in the state of Michigan at all. Governor George Romney immediately responded to the riot of 1967 with a special session of the Michigan legislature, where he forwarded sweeping housing proposals that included not only fair housing but "important relocation, tenants' rights and code enforcement legislation." Romney had supported such proposals in 1965 but abandoned them in the face of organized opposition. White conservative resistance was powerful in 1967 as well, but this time Romney did not relent, and he once again proposed the housing laws at the regular 1968 session of the legislature. The governor publicly warned that if the housing measures were not passed, "it will accelerate the recruitment of revolutionary insurrectionists." The laws passed both houses of the legislature. The Michigan Historical Review wrote: "The Michigan Fair Housing Act, which took effect on November 15, 1968, was stronger than the federal fair housing law…and than just about all the existing state fair housing acts. It is probably more than a coincidence that the state that had experienced the most severe racial disorder of the 1960s also adopted one of the strongest state fair housing acts."

Detroit's decline had begun in the 1950s, during which the city lost almost a tenth of its population. It has been argued, including by Mayor Coleman Young, that the riot was the primary accelerator of "white flight", an ethnic succession by which white residents moved out of inner-city neighborhoods into the suburbs.
In contrast, urban affairs experts largely blame a Supreme Court decision against NAACP lawsuits on school desegregation, the 1974 Milliken v. Bradley case, which maintained the suburban schools as a lily-white refuge. In his dissenting opinion, Supreme Court Justice William O. Douglas wrote that the Milliken decision perpetuated "restrictive covenants" that "maintained...black ghettos." (Detroit lost 12.8% of its white population in the 1950s, 15.2% in the 1960s, and 21.2% in the 1970s.)

Nationwide riots of 1967

In addition to Detroit, over 100 US cities experienced riots in 1967, including Newark, Cincinnati, Cleveland, and Washington, D.C. President Johnson created the National Advisory Commission on Civil Disorders in 1967. The commission's final report called for major reforms in employment and public assistance for black communities, and it warned that the United States was moving toward separate white and black societies.

King riots (1968)

In April 1968, after the assassination of Dr. Martin Luther King, Jr. in Memphis, Tennessee, rioting broke out in cities across the country out of frustration and despair. These included Cleveland, Baltimore, Washington, D.C., Chicago, New York City, and Louisville, Kentucky. As in previous riots, most of the damage was done in black neighborhoods. In some cities, it has taken more than a quarter of a century for these areas to recover from the damage of the riots; in others, little recovery has been achieved.

Programs in affirmative action resulted in the hiring of more black police officers in every major city, and today blacks make up a proportional majority of the police departments in cities such as Baltimore, Washington, New Orleans, Atlanta, Newark, and Detroit. Civil rights laws have reduced employment discrimination. The conditions that led to frequent rioting in the late 1960s have receded, but not all the problems have been solved. With industrial and economic restructuring, hundreds of thousands of industrial jobs have disappeared from the old industrial cities since the late 1950s; some moved South, as did much of the population following new jobs, and others moved out of the U.S. altogether. Civil unrest broke out in Miami in 1980, in Los Angeles in 1992, and in Cincinnati in 2001.

Black power, 1966

During the Freedom Summer campaign of 1964, numerous tensions within the civil rights movement came to the forefront. Many blacks in SNCC developed concerns that white activists from the North were taking over the movement, and the massive presence of white students was not reducing the amount of violence that SNCC suffered but seemed to be increasing it. Additionally, there was profound disillusionment at Lyndon Johnson's denial of seating for the Mississippi Freedom Democratic Party at the 1964 Democratic National Convention. Meanwhile, during CORE's work in Louisiana that summer, that group found the federal government would not respond to requests to enforce the provisions of the Civil Rights Act of 1964 or to protect the lives of activists who challenged segregation. For the Louisiana campaign to survive, it had to rely on a local African-American militia called the Deacons for Defense and Justice, who used arms to repel white supremacist violence and police repression. CORE's collaboration with the Deacons was effective in breaking Jim Crow in numerous Louisiana areas.
In 1965, SNCC helped organize an independent political party, the Lowndes County Freedom Organization (LCFO), in the heart of Alabama Klan territory, and permitted its black leaders to openly promote the use of armed self-defense. Meanwhile, the Deacons for Defense and Justice expanded into Mississippi and assisted Charles Evers' NAACP chapter with a successful campaign in Natchez. The same year, the Watts Rebellion took place in Los Angeles and seemed to show that most black youth were now committed to the use of violence to protest inequality and oppression. During the March Against Fear in 1966, SNCC and CORE fully embraced the slogan of "black power" to describe these trends towards militancy and self-reliance. In Mississippi, Stokely Carmichael declared, "I'm not going to beg the white man for anything that I deserve, I'm going to take it. We need power."

Many who engaged in the Black Power movement also began to gain a stronger sense of black pride and identity. In gaining a sense of cultural identity, many blacks demanded that whites no longer refer to them as "Negroes" but as "Afro-Americans." Up until the mid-1960s, blacks had dressed similarly to whites and straightened their hair; as part of gaining a unique identity, blacks started to wear loosely fitting dashikis and to grow their hair out into natural afros. The afro, sometimes nicknamed the "'fro," remained a popular black hairstyle until the late 1970s.

Black Power was made most public, however, by the Black Panther Party, which was founded by Huey Newton and Bobby Seale in Oakland, California, in 1966. This group followed the ideology of Malcolm X, a former member of the Nation of Islam, taking a "by any means necessary" approach to stopping inequality. Among other things, the Panthers sought to rid African-American neighborhoods of police brutality and drew up a ten-point program. Their dress code consisted of black leather jackets, berets, slacks, and light blue shirts, and they wore their hair in afros. They are best remembered for setting up free breakfast programs, referring to police officers as "pigs", displaying shotguns and a raised fist, and often using the slogan "Power to the people".

Black Power was taken to another level inside prison walls. In 1966, George Jackson formed the Black Guerrilla Family in California's San Quentin State Prison. The goal of this group was to overthrow the white-run government in America and the prison system. In 1970, the group displayed its dedication after a white prison guard was found not guilty of shooting and killing three black prisoners from the prison tower; members retaliated by killing a white prison guard.

Numerous popular cultural expressions associated with black power appeared at this time. Released in August 1968, the number one Rhythm & Blues single on the Billboard year-end list was James Brown's "Say It Loud – I'm Black and I'm Proud". In October 1968, Tommie Smith and John Carlos, while being awarded the gold and bronze medals, respectively, at the 1968 Summer Olympics, donned human rights badges and each raised a black-gloved Black Power salute during their podium ceremony.

King was not comfortable with the "Black Power" slogan, which sounded too much like black nationalism to him. When King was murdered in 1968, Stokely Carmichael stated that whites had murdered the one person who would prevent rampant rioting and that blacks would burn every major city to the ground.

Gates v.
Collier

Conditions at the Mississippi State Penitentiary at Parchman, then known as Parchman Farm, became part of the public discussion of civil rights after activists were imprisoned there. In the spring of 1961, Freedom Riders came to the South to test the desegregation of public facilities. By the end of June, Freedom Riders had been convicted in Jackson, Mississippi, and many were jailed in the Mississippi State Penitentiary at Parchman. Mississippi employed the trusty system, a hierarchical order of inmates that used some inmates to control and enforce punishment of other inmates.

In 1970 the civil rights lawyer Roy Haber began taking statements from inmates, collecting 50 pages of details of murders, rapes, beatings and other abuses suffered by the inmates from 1969 to 1971 at the Mississippi State Penitentiary. In a landmark case known as Gates v. Collier (1972), four inmates represented by Haber sued the superintendent of Parchman Farm for violating their rights under the United States Constitution. Federal Judge William C. Keady found in favor of the inmates, writing that Parchman Farm violated their civil rights by inflicting cruel and unusual punishment, and he ordered an immediate end to all unconstitutional conditions and practices. Racial segregation of inmates was abolished, as was the trusty system, which had allowed certain inmates to hold power and control over others (the prison had armed lifers with rifles and given them authority to oversee and guard other inmates, which led to many abuses and murders). The prison was renovated in 1972 after the scathing ruling by Judge Keady, who wrote that the prison was an affront to "modern standards of decency"; among other reforms, the accommodations were made fit for human habitation.

In integrated correctional facilities in northern and western states, blacks represented a disproportionate number of the prisoners, in excess of their proportion of the general population, and they were often treated as second-class citizens by white correctional officers. Blacks also represented a disproportionately high number of death row inmates. Eldridge Cleaver's book Soul on Ice was written from his experiences in the California correctional system; it contributed to black militancy.

There was an international context for the actions of the U.S. federal government during these years. It had stature to maintain in Europe and a need to appeal to the people of the Third World. In Cold War Civil Rights: Race and the Image of American Democracy, the historian Mary L. Dudziak wrote that Communists critical of the United States accused the nation of hypocrisy in portraying itself as the "leader of the free world" when so many of its citizens were subjected to severe racial discrimination and violence; she argued that this was a major factor in the government moving to support civil rights legislation.

- Freedom on My Mind, 110 minutes, 1994, Producer/Directors: Connie Field and Marilyn Mulford; 1994 Academy Award nominee, Best Documentary Feature
- Eyes on the Prize (1987 and 1990), PBS television series; released again in 2006 and 2009.
- Dare Not Walk Alone, about the civil rights movement in St. Augustine, Florida. Nominated in 2009 for an NAACP Image Award.
- Crossing in St. Augustine (2010), produced by Andrew Young, who participated in the civil rights movement in St. Augustine in 1964. Information available from AndrewYoung.Org.
- Freedom Riders (2010), 120 min.
PBS, American Experience. - National/regional civil rights organizations - Congress of Racial Equality (CORE) - Deacons for Defense and Justice - Leadership Conference on Civil Rights (LCCR) - Medical Committee for Human Rights (MCHR) - National Association for the Advancement of Colored People (NAACP) - National Council of Negro Women (NCNW) - Organization of Afro-American Unity - Southern Christian Leadership Conference (SCLC) - Student Nonviolent Coordinating Committee (SNCC) - Southern Conference Educational Fund (SCEF) - Southern Student Organizing Committee (SSOC) - National economic empowerment organizations - Local civil rights organizations - Albany Movement (Albany, GA) - Council of Federated Organizations (Mississippi) - Montgomery Improvement Association (Montgomery, AL) - Regional Council of Negro Leadership (Mississippi) - Women's Political Council (Montgomery, AL) - Ralph Abernathy - Victoria Gray Adams - Maya Angelou - Ella Baker - James Baldwin - Marion Barry - Daisy Bates - Fay Bellamy Powell - James Bevel - Claude Black - Unita Blackwell - Julian Bond - Amelia Boynton - Anne Braden - Carl Braden - Mary Fair Burks - Stokely Carmichael - Septima Clark - Albert Cleage - Charles E. Cobb, Jr. - Annie Lee Cooper - Dorothy Cotton - Claudette Colvin - Jonathan Daniels - Annie Devine - Doris Derby - Marian Wright Edelman - Medgar Evers - Myrlie Evers-Williams - James L. Farmer, Jr. - Karl Fleming - Sarah Mae Flemming - James Forman - Frankie Muse Freeman - Fred Gray - Dick Gregory - Prathia Hall - Fannie Lou Hamer - Lorraine Hansberry - Lola Hendricks - Aaron Henry - Myles Horton - T. R. M. Howard - Winson Hudson - Jesse Jackson - Jimmie Lee Jackson - Esau Jenkins - Gloria Johnson-Powell - Clyde Kennard - Coretta Scott King - Martin Luther King, Jr. - Bernard Lafayette - W. W. Law - James Lawson - John Lewis - Viola Liuzzo - Joseph Lowery - Autherine Lucy - Clara Luper - Thurgood Marshall - James Meredith - Loren Miller - Jack Minnis - Anne Moody - Harry T. Moore - E. Frederic Morrow - Robert Parris Moses - Bill Moyer - Diane Nash - Denise Nicholas - E. D. Nixon - David Nolan - James Orange - Nan Grogan Orrock - Rosa Parks - Rutledge Pearson - George Raymond Jr. - James Reeb - Frederick D. Reese - Gloria Richardson - Amelia Boynton Robinson - Jo Ann Robinson - Ruby Doris Smith-Robinson - Bayard Rustin - Cleveland Sellers - Charles Sherrod - Fred Shuttlesworth - Modjeska Monteith Simkins - Rev. Charles Kenzie Steele - Dempsey Travis - C. T. Vivian - Wyatt Tee Walker - Hosea Williams - Robert F. Williams - Malcolm X - Andrew Young Related activists and artists - Joan Baez - Harry Belafonte - Ralph Bunche - Guy Carawan - Robert Carter - William Sloane Coffin - Ossie Davis - Ruby Dee - James Dombrowski - W. E. B. Du Bois - Virginia Durr - Bob Dylan - John Hope Franklin - Jack Greenberg - Anna Arnold Hedgeman - Dorothy Height - Charlton Heston - Mahalia Jackson - Clarence Jordan - Stetson Kennedy - Arthur Kinoy - William Kunstler - Staughton Lynd - Constance Baker Motley - Nichelle Nichols - Phil Ochs - Sidney Poitier - A. 
Philip Randolph - Paul Robeson - Jackie Robinson - Pete Seeger - Nina Simone - Norman Thomas - Roy Wilkins - Whitney Young - Howard Zinn - African-American Civil Rights Movement (1865–95) - African-American Civil Rights Movement (1896–1954) - Timeline of the African American Civil Rights Movement - List of civil rights leaders - Executive Order 9981, ending segregated units in the United States military - Photographers of the American Civil Rights Movement - "We Shall Overcome", unofficial movement anthem - List of Kentucky women in the civil rights era - African-American Civil Rights Movement in popular culture - Seattle Civil Rights and Labor History Project - Read's Drug Store (Baltimore), site of a 1955 desegregation sit-in Post-Civil Rights Movement: - Barbara J. Shircliffe, "Review of Race, Real Estate, and Uneven Development by Kevin Fox Gotham", Humanities and Social Science Online, November 2004 - Christopher Bonastia, Knocking on the Door: The Federal Government's Attempt to Desegregate the Suburbs (Princeton University Press, 2010), pp. 113–114 - Civil Rights Act of 1964 - Timothy B. Tyson, "Robert F. Williams, 'Black Power,' and the Roots of the African American Freedom Struggle," Journal of American History 85, No. 2 (Sep. 1998): 540–570 - Doug McAdam, "Occupy the Future: What Should a Sustained Movement Look Like?" Boston Review, June 26, 2012 - Smith, Jean Edward (2001). Grant. Simon and Schuster. pp. 244–247. ISBN 9780743217019. - Wormser, Richard. "The Enforcement Acts (1870–71)". PBS: Jim Crow Stories. Retrieved May 12, 2012. - Black-American Representatives and Senators by Congress, 1870–Present—U.S. House of Representatives - Otis H. Stephens, Jr.; John M. Scheb, II (2007). American Constitutional Law: Civil Rights and Liberties. Cengage Learning. p. 528. - Paul Finkelman, ed. (2009). Encyclopedia of African American History. Oxford University Press. pp. 199–200 of vol. 4. - C. Vann Woodward, The Strange Career of Jim Crow, 3rd rev. ed. (Oxford University Press, 1974), pp. 67–109. - Birmingham Segregation Laws – Civil Rights Movement Veterans - Fultz, M. (2006). Black Public Libraries in the South in the Era of De Jure Segregation. Libraries & The Cultural Record, 41(3), 338–346. - Nikole Hannah-Jones, "Living Apart: How the Government Betrayed a Landmark Civil Rights Law", ProPublica, Oct. 28, 2012 - "How We Got Here: The Historical Roots of Housing Segregation", National Commission on Fair Housing and Equal Opportunity - David T. Beito and Linda Royster Beito, Black Maverick: T.R.M. Howard's Fight for Civil Rights and Economic Power, Urbana: University of Illinois Press, 2009, pp. 81, 99–100. - http://mlk-kpp01.stanford.edu/index.php/encyclopedia/encyclopedia/enc_robinson_jo_ann_1912_1992/, retrieved February 1, 2015 - Robinson, Jo Ann & Garrow, David J. (foreword by Coretta Scott King), The Montgomery Bus Boycott and the Women Who Started It (1986), ISBN 0-394-75623-1, Knoxville: University of Tennessee Press - "The Tallahassee Bus Boycott—Fifty Years Later," The Tallahassee Democrat, May 21, 2006 - Klarman, Michael J., Brown v. Board of Education and the Civil Rights Movement [electronic resource]: abridged edition of From Jim Crow to Civil Rights: The Supreme Court and the Struggle for Racial Equality, Oxford; New York: Oxford University Press, 2007, p. 55 - Risa L. Goluboff, The Lost Promise of Civil Rights, Harvard University Press, Cambridge, MA, 2007, pp. 249–251 - Anthony Lester, "Brown v.
Board of Education Overseas", Proceedings of the American Philosophical Society, Vol. 148, No. 4, December 2004 - Mary L. Dudziak, "Brown as a Cold War Case", Journal of American History, June 2004 - Brown v. Board of Education Decision – Civil Rights Movement Veterans - Desegregation and Integration of Greensboro's Public Schools, 1954–1974 - Melissa F. Weiner, Power, Protest, and the Public Schools: Jewish and African American Struggles in New York City (Rutgers University Press, 2010), pp. 51–66 - Adina Back, "Exposing the Whole Segregation Myth: The Harlem Nine and New York City Schools" in Freedom North: Black Freedom Struggles Outside the South, 1940–1980, Jeanne Theoharis, Komozi Woodard, eds. (Palgrave Macmillan, 2003), pp. 65–91 - W. Chafe, The Unfinished Journey - The Little Rock Nine – Civil Rights Movement Veterans - Minnijean Brown Trickey, America.gov - Frances Fox Piven and Richard Cloward, Poor People's Movements: How They Succeed, How They Fail (Random House, 1977), 182 - Timothy B. Tyson, Radio Free Dixie: Robert F. Williams and the Roots of "Black Power" (University of North Carolina Press, 1999), 79–80 - Tyson, Radio Free Dixie, 88–89 - Nicholas Graham, "January 1958: The Lumbees face the Klan", This Month in North Carolina History - Tyson, Radio Free Dixie, 149 - Tyson, Radio Free Dixie, 159–164 - "Williams, Robert Franklin", King Encyclopedia, eds. Tenisha Armstrong, et al., Martin Luther King Jr. Research and Education Institute website - Barbara Ransby, Ella Baker and the Black Freedom Movement: A Radical Democratic Vision (University of North Carolina Press, 2003), 213–216 - Timothy B. Tyson, "Robert F. Williams, 'Black Power,' and the Roots of the African American Freedom Struggle," Journal of American History 85, No. 2 (Sep. 1998): 540–570 - "The Black Power Movement, Part 2: The Papers of Robert F. Williams", A Guide to the Microfilm Editions of the Black Studies Research Sources (University Publications of America) - Tyson, Journal of American History (Sep. 1998) - Taylor Branch, Parting the Waters: America in the King Years 1954–1963 (Simon and Schuster, 1988), 781 - Simon Wendt, The Spirit and the Shotgun: Armed Resistance and the Struggle for Civil Rights (University of Florida Press, 2007), 121–122; Mike Marqusee, "By Any Means Necessary", The Nation, September 24, 2004 – http://www.thenation.com/article/any-means-necessary# - Walter Rucker, "Crusader in Exile: Robert F. Williams and the International Struggle for Black Freedom in America", The Black Scholar 36, No. 2–3 (Summer–Fall 2006): 19–33. URL - Timothy B. Tyson, "Robert Franklin Williams: A Warrior For Freedom, 1925–1996", Investigating U.S. History (City University of New York) - "Kansas Sit-In Gets Its Due at Last"; NPR; October 21, 2006 - First Southern Sit-in, Greensboro NC – Civil Rights Movement Veterans - Chafe, William Henry (1980). Civilities and Civil Rights: Greensboro, North Carolina, and the Black Struggle for Freedom. New York: Oxford University Press. p. 81. ISBN 0-19-502625-X. - Greensboro Sit-Ins at Woolworth's, February–July 1960 - Southern Spaces - Atlanta Sit-ins – Civil Rights Veterans - "Atlanta Sit-Ins", The New Georgia Encyclopedia - Houston, Benjamin (2012). The Nashville Way: Racial Etiquette and the Struggle for Social Justice in a Southern City. Athens, Georgia: University of Georgia Press. ISBN 0-8203-4326-9. - Nashville Student Movement – Civil Rights Movement Veterans - "America's First Sit-Down Strike: The 1939 Alexandria Library Sit-In". City of Alexandria.
Retrieved 2010-02-11. - Davis, Townsend (1998). Weary Feet, Rested Souls: A Guided History of the Civil Rights Movement. New York: W. W. Norton & Company. p. 311. ISBN 0-393-04592-7. - An Appeal for Human Rights – Committee on the Appeal for Human Rights (COAHR) - Atlanta Sit-Ins - The Committee on the Appeal for Human Rights (COAHR) and the Atlanta Student Movement – The Committee on the Appeal for Human Rights and the Atlanta Student Movement - Students Begin to Lead – The New Georgia Encyclopedia—Atlanta Sit-Ins - Carson, Clayborne (1981). In Struggle: SNCC and the Black Awakening of the 1960s. Cambridge: Harvard University Press. p. 311. ISBN 0-674-44727-1. - Student Nonviolent Coordinating Committee Founded – Civil Rights Movement Veterans - Freedom Rides – Civil Rights Movement Veterans - Arsenault, Raymond (2006). Freedom Riders: 1961 and the Struggle for Racial Justice. Oxford Press. - Black Protest (1961) - Hartford, Bruce. "Arrests in Jackson MS". The Civil Rights Movement Veterans website. Westwind Writers Inc. Retrieved October 21, 2011. - Voter Registration & Direct-action in McComb MS – Civil Rights Movement Veterans - Council of Federated Organizations Formed in Mississippi – Civil Rights Movement Veterans - Mississippi Voter Registration—Greenwood – Civil Rights Movement Veterans - "Carrying the burden: the story of Clyde Kennard", District 125, Mississippi. Retrieved November 5, 2007 - William H. Tucker, The Funding of Scientific Racism: Wickliffe Draper and the Pioneer Fund, University of Illinois Press (May 30, 2007), pp. 165–66. - Neo-Confederacy: A Critical Introduction, edited by Euan Hague, Heidi Beirich, Edward H. Sebesta, University of Texas Press (2008), pp. 284–85 - "A House Divided". Southern Poverty Law Center. Archived from the original on October 28, 2010. Retrieved 2010-10-30. - Jennie Brown, Medgar Evers, Holloway House Publishing, 1994, pp. 128–132 - United States v. Barnett, 376 U.S. 681 (1964) - "James Meredith Integrates Ole Miss", Civil Rights Movement Veterans - University of Southern Mississippi Library [dead link] - Albany GA, Movement – Civil Rights Movement Veterans - The Birmingham Campaign – Civil Rights Movement Veterans - Letter from a Birmingham Jail ~ King Research & Education Institute at Stanford Univ. - Bass, S. Jonathan (2001). Blessed Are The Peacemakers: Martin Luther King, Jr., Eight White Religious Leaders, and the "Letter from Birmingham Jail". Baton Rouge: LSU Press. ISBN 0-8071-2655-1 - "Freedom-Now", Time, May 17, 1963; Glenn T. Eskew, But for Birmingham: The Local and National Struggles in the Civil Rights Movement (University of North Carolina Press, 1997), 301. - Nicholas Andrew Bryant, The Bystander: John F. Kennedy And the Struggle for Black Equality (Basic Books, 2006), p. 2 - Thomas J. Sugrue, "Affirmative Action from Below: Civil Rights, Building Trades, and the Politics of Racial Equality in the Urban North, 1945–1969", The Journal of American History, Vol. 91, Issue 1 - Pennsylvania Historical and Museum Commission website, "The Civil Rights Movement" - The Daily Capital News (Missouri), June 14, 1963, p. 4 - The Dispatch (North Carolina), December 28, 1963 - Maryland State Archives, "The Cambridge Riots of 1963 and 1967" - Thomas F. Jackson, "Jobs and Freedom: The Black Revolt of 1963 and the Contested Meanings of the March on Washington", Virginia Foundation for the Humanities, April 2, 2008, pp.
10–14 - Tony Ortega, "Miss Lorraine Hansberry & Bobby Kennedy", Village Voice, May 4, 2009 - James Hilty, Robert Kennedy: Brother Protector (Temple University Press, 2000), p. 355 - Schlesinger, Robert Kennedy and His Times (1978), pp. 332–333. - "Book Reviews: The Bystander by Nicholas A. Bryant", The Journal of American History (2007) 93 (4) - Standing In the Schoolhouse Door – Civil Rights Movement Veterans - "Radio and Television Report to the American People on Civil Rights," June 11, 1963, transcript from the JFK library. - Medgar Evers, a worthwhile article, on The Mississippi Writers Page, a website of the University of Mississippi English Department. - Medgar Evers Assassination – Civil Rights Movement Veterans - Civil Rights bill submitted, and date of JFK murder, plus graphic events of the March on Washington. This is an Abbeville Press website, a large informative article apparently from the book The Civil Rights Movement (ISBN 0-7892-0123-2). - Rosenberg, Jonathan; Karabell, Zachary (2003). Kennedy, Johnson, and the Quest for Justice: The Civil Rights Tapes. WW Norton & Co. p. 130. ISBN 0-393-05122-6. - Schlesinger, Jr., Arthur M. (2002). Robert Kennedy and His Times. Houghton Mifflin Books. pp. 350, 351. ISBN 0-618-21928-5. - "Television News and the Civil Rights Struggle: The Views in Virginia and Mississippi". Southern Spaces. November 3, 2004. Retrieved 2012-11-08. - "Cambridge, Maryland, activists campaign for desegregation, USA, 1962–1963". Global Nonviolent Action Database. Swarthmore College. Retrieved January 13, 2015. - "Mrs. Richardson OKs Malcolm", The Baltimore Afro-American, March 10, 1964 - "The Negro and the American Promise," produced by Boston public television station WGBH in 1963 - Harlem CORE, "Film clip of Harlem CORE chairman Gladys Harrington speaking on Malcolm X". - "Malcolm X", The King Encyclopedia, eds. Tenisha Armstrong, et al., Martin Luther King Jr. Research and Education Institute website - Manning Marable, Malcolm X: A Life of Reinvention (Penguin Books, 2011) - American Public Radio, "Malcolm X: The Ballot or the Bullet – Background" - Akinyele Umoja, We Will Shoot Back: Armed Resistance in the Mississippi Freedom Movement (NYU Press, 2013), p. 126 - Frances Fox Piven and Richard Cloward, Regulating the Poor (Random House, 1971), p. 238; Abel A. Bartley, Keeping the Faith: Race, Politics and Social Development in Jacksonville, 1940–1970 (Greenwood Publishing Group, 2000), 111 - Malcolm X, "The Ballot or the Bullet, Cleveland version", April 3, 1964 - Blackside Productions, Eyes on the Prize: America's Civil Rights Movement 1954–1985, "The Time Has Come", Public Broadcasting System - Lewis, John (1998). Walking With the Wind. Simon & Schuster. - Fannie Lou Hamer, Speech Delivered with Malcolm X at the Williams Institutional CME Church, Harlem, New York, December 20, 1964. - George Breitman, ed., Malcolm X Speaks: Selected Speeches and Statements (Grove Press, 1965), pp. 106–109 - Christopher Strain, Pure Fire: Self-Defense as Activism in the Civil Rights Era (University of Georgia Press, 2005), pp. 92–93 - Juan Williams, et al., Eyes on the Prize: America's Civil Rights Years 1954–1965 (Penguin Group, 1988), p. 262 - Paul Ryan Haygood, "Malcolm's Contribution to Black Voting Rights", The Black Commentator - Civil Rights Movement Veterans, "St. Augustine FL, Movement — 1963"; "Hayling, Robert B. (1929–)", Martin Luther King Jr. Research & Education Institute, Stanford University; "Black History: Dr. Robert B. Hayling", Augustine.com; David J.
Garrow, Bearing the Cross: Martin Luther King Jr. and the Southern Christian Leadership Conference (Harper Collins, 1987), pp. 316–318 - Civil Rights Movement Veterans, "St. Augustine FL, Movement — 1963"; David J. Garrow, Bearing the Cross: Martin Luther King Jr. and the Southern Christian Leadership Conference (Harper Collins, 1987), p. 317 - The Mississippi Movement & the MFDP – Civil Rights Movement Veterans - Mississippi: Subversion of the Right to Vote – Civil Rights Movement Veterans - McAdam, Doug (1988). Freedom Summer. Oxford University Press. ISBN 0-19-504367-7. - Carson, Clayborne (1981). In Struggle: SNCC and the Black Awakening of the 1960s. Harvard University Press. - Veterans Roll Call – Civil Rights Movement Veterans - Reeves 1993, pp. 521–524. - Freedom Ballot in MS – Civil Rights Movement Veterans - MLK's Nobel Peace Prize acceptance speech on December 10, 1964. - Robert O. Self, American Babylon: Race and the Struggle for Postwar Oakland (Princeton University Press, 2005), pp. 271–273 - Valerie Reitman and Mitchell Landsberg, "Watts Riots, 40 Years Later", The Los Angeles Times, August 11, 2005 - "No on Proposition 14: California Fair Housing Initiative Collection", Online Archive of California - "Milwaukee, Wisconsin: The Selma of the North", University of Wisconsin-Oshkosh - Burt Folkart, "James Groppi, Ex-Priest, Civil Rights Activist, Dies", The Los Angeles Times, November 5, 1985 - Darren Miles, "Everett Dirksen's Role in Civil Rights Legislation", Western Illinois Historical Review, Vol. I, Spring 2009 - Nikole Hannah-Jones, "Living Apart: How the Government Betrayed a Landmark Civil Rights Law", ProPublica, Oct. 28, 2012 - "Coretta Scott King". Spartacus Educational Publishers. Retrieved 2010-10-30. - Honorable Charles Mathias, Jr., "Fair Housing Legislation: Not an Easy Row To Hoe", US Department of Housing and Urban Development, Office of Policy Development and Research - Peter B. Levy, "The Dream Deferred: The Assassination of Martin Luther King, Jr., and the Holy Week Uprisings of 1968" in Baltimore '68: Riots and Rebirth in an American City (Temple University Press, 2011), p. 6 - Public Law 90-284, Government Printing Office - Winner, Lauren F. "Doubtless Sincere: New Characters in the Civil Rights Cast." In The Role of Ideas in the Civil Rights South, edited by Ted Ownby. Jackson: University Press of Mississippi, 2002, pp. 158–159. - Winner, Doubtless Sincere, 164–165. - Winner, Doubtless Sincere, 166–167. - We Charge Genocide – Civil Rights Movement Veterans - Carson, Clayborne (1981). In Struggle: SNCC and the Black Awakening of the 1960s. Harvard University Press. - Ella Baker Oral History [02:05:57 – 02:13:07] - Schlesinger, Arthur Jr., Robert Kennedy And His Times (2002) - "Freedom Riders: The Cold War", Freedom Riders, American Experience, PBS website - Martin Luther King, Jr., The Nation, March 3, 1962 - Michael E. Eidenmuller (June 11, 1963). "John F. Kennedy – Civil Rights Address". American Rhetoric. Retrieved 2010-10-30. - Ripple of Hope in the Land of Apartheid: Robert Kennedy in South Africa, June 1966 - From Swastika to Jim Crow—PBS Documentary - Cannato, Vincent, The Ungovernable City: John Lindsay and His Struggle to Save New York, Basic Books, 2001. ISBN 0-465-00843-7 - Sachar, Howard (2 November 1993). A History of Jews in America. myjewishlearning.com (Vintage Books). Retrieved 1 March 2015. - "No Place Like Home", Time Magazine. - Dr.
Max Herman, "Ethnic Succession and Urban Unrest in Newark and Detroit During the Summer of 1967", Rutgers University, July 2002 - Max A. Herman, ed., "The Detroit and Newark Riots of 1967", Rutgers-Newark University, Department of Sociology and Anthropology - "How a Campaign for Racial Trust Turned Sour". Aliciapatterson.org. July 17, 1964. Retrieved 2010-10-30. - Youth in the Ghetto: A Study of the Consequences of Powerlessness, Harlem Youth Opportunities Unlimited, Inc., 1964 - Poverty and Politics in Harlem, Alphonso Pinkney and Roger Woock, College & University Press Services, Inc., 1970 - Karen Miller (University of Michigan), "Review of Detroit: I Do Mind Dying", H-Net Online - "Michigan: Riots and Police Brutality", American Experience – Eyes on the Prize website - Sidney Fine, Expanding the Frontier of Civil Rights: Michigan, 1948–1968 (Wayne State University Press, 2000), p. 325 - Sidney Fine, Expanding the Frontier of Civil Rights: Michigan, 1948–1968 (Wayne State University Press, 2000), p. 327 - Sidney Fine, Expanding the Frontier of Civil Rights: Michigan, 1948–1968 (Wayne State University Press, 2000), p. 326 - Sidney Fine, "Michigan and Housing Discrimination 1949–1969", Michigan Historical Review, Fall 1997 - Edward L. Glaeser, "In Detroit, bad policies bear bitter fruit", The Boston Globe, July 23, 2013 - Coleman Young, Hard Stuff: The Autobiography of Mayor Coleman Young (1994), p. 179. - Meinke, Samantha (September 2011). "Milliken v Bradley: The Northern Battle for Desegregation". Michigan Bar Journal 90 (9): 20–22. Retrieved July 27, 2012. - James, David R. (December 1989). "City Limits on Racial Equality: The Effects of City-Suburb Boundaries on Public-School Desegregation, 1968–1976". American Sociological Review 54 (6). Retrieved 29 July 2012. - Mike Alberti, "Squandered opportunities leave Detroit isolated", RemappingDebate.org - Milliken v. Bradley/Dissent Douglas – Wikisource, the free online library. En.wikisource.org. Retrieved on 2013-07-16. See also: "Milliken v. Bradley" by Thurgood Marshall, Dissenting Opinion - Gibson, Campbell; Kay Jung (February 2005). "Table 23. Michigan – Race and Hispanic Origin for Selected Large Cities and Other Places: Earliest Census to 1990". United States Census Bureau. - "A Walk Through Newark: History of Newark – The Riots", WNET-Thirteen - Tom Adam Davies, "SNCC, the Federal Government and the Road to Black Power", paper given at the Historians of the Twentieth Century United States Conference in July 2010 - Allen J. Matusow, "From Civil Rights to Black Power: The Case of SNCC" in Twentieth Century America: Recent Interpretations (Harcourt Press, 1972), pp. 367–378 - Mike Marqusee, "By Any Means Necessary", The Nation, June 17, 2004 - Douglas Martin, "Robert Hicks, Leader in Armed Rights Group, Dies at 81", The New York Times, April 24, 2010 - Lance Hill, The Deacons for Defense: Armed Resistance and the Civil Rights Movement (University of North Carolina Press, 2006), pp. 200–204 - "The Time Has Come, 1964–1966", Eyes on the Prize, Blackside Productions, PBS American Experience - "Year End Charts – Year-end Singles – Hot R&B/Hip-Hop Songs". Billboard.com. Archived from the original on December 11, 2007. Retrieved 2009-09-08. - "Riding On". Time (Time Inc.). July 7, 2007. Retrieved 2007-10-23. - "ACLU Parchman Prison". Retrieved 2007-11-29. - "Parchman Farm and the Ordeal of Jim Crow Justice". Archived from the original on August 26, 2006. Retrieved 2006-08-28. - Goldman, Robert M. (April 1997).
""Worse Than Slavery": Parchman Farm and the Ordeal of Jim Crow Justice – book review". Hnet-online. Archived from the original on August 29, 2006. Retrieved 2006-08-29. - Cleaver, Eldridge (1967). Soul on Ice. New York, NY: McGraw-Hill. - Dudziak, M.L.: Cold War Civil Rights: Race and the Image of American Democracy - Abel, Elizabeth. Signs of the Times: The Visual Politics of Jim Crow. Berkeley: University of California Press, 2010. - Arsenault, Raymond. Freedom Riders: 1961 and the Struggle for Racial Justice. New York: Oxford University Press, 2006. ISBN 0-19-513674-8 - Barnes, Catherine A. Journey from Jim Crow: The Desegregation of Southern Transit, Columbia University Press, 1983. - Berger, Martin A. Seeing through Race: A Reinterpretation of Civil Rights Photography. Berkeley: University of California Press, 2011. - Beito, David T. and Beito, Linda Royster, Black Maverick: T.R.M. Howard's Fight for Civil Rights and Economic Power, University of Illinois Press, 2009. ISBN 978-0-252-03420-6 - Branch, Taylor. At Canaan's Edge: America In the King Years, 1965–1968. New York: Simon & Schuster, 2006. ISBN 0-684-85712-X - Branch, Taylor. Parting the waters : America in the King years, 1954–1963. New York: Simon & Schuster, 1988; Pillar of fire : America in the King years, 1963–1965. (1998); Branch, Taylor. At Canaan's edge: America in the King years, 1965-68(2007). - Breitman, George. The Assassination of Malcolm X. New York: Pathfinder Press. 1976. - Carson, Clayborne. In Struggle: SNCC and the Black Awakening of the 1960s. Cambridge, MA: Harvard University Press. 1980. ISBN 0-374-52356-8. - Carson, Clayborne; Garrow, David J.; Kovach, Bill; Polsgrove, Carol, eds. Reporting Civil Rights: American Journalism 1941–1963 and Reporting Civil Rights: American Journalism 1963–1973. New York: Library of America, 2003. ISBN 1-931082-28-6 and ISBN 1-931082-29-4. - Chandra, Siddharth and Angela Williams-Foster. "The 'Revolution of Rising Expectations,' Relative Deprivation, and the Urban Social Disorders of the 1960s: Evidence from State-Level Data." Social Science History, 29(2):299–332, 2005. - Dann, Jim. Challenging the Mississippi Firebombers, Memories of Mississippi 1964–65. Baraka Books 2013. ISBN 978-1-926824-87-1 - Fairclough, Adam. To Redeem the Soul of America: The Southern Christian Leadership Conference & Martin Luther King. The University of Georgia Press, 1987. - Foner, Eric and Joshua Brown, Forever Free: The Story of Emancipation and Reconstruction. Alfred A. Knopf: New York, 2005. p. 225–238. ISBN 978-0-375-70274-7 - Garrow, David J. Bearing the Cross: Martin Luther King and the Southern Christian Leadership Conference. 800 pages. New York: William Morrow, 1986. ISBN 0-688-04794-7. - Garrow, David J. The FBI and Martin Luther King. New York: W.W. Norton. 1981. Viking Press Reprint edition. 1983. ISBN 0-14-006486-9. Yale University Press; Revised and Expanded edition. 2006. ISBN 0-300-08731-4. - Greene, Christina. Our Separate Ways: Women and the Black Freedom Movement in Durham. North Carolina. Chapel Hill: University of North Carolina Press, 2005. - Horne, Gerald. The Fire This Time: The Watts Uprising and the 1960s. Charlottesville: University Press of Virginia. 1995. Da Capo Press; 1st Da Capo Press ed edition. October 1, 1997. ISBN 0-306-80792-0 - Kirk, John A. Martin Luther King, Jr.. London: Longman, 2005. ISBN 0-582-41431-8 - Kirk, John A. Redefining the Color Line: Black Activism in Little Rock, Arkansas, 1940–1970. Gainesville: University of Florida Press, 2002. 
ISBN 0-8130-2496-X - Kousser, J. Morgan. "The Supreme Court and the Undoing of the Second Reconstruction," National Forum (Spring 2000). - Kryn, Randy. "James L. Bevel, The Strategist of the 1960s Civil Rights Movement", 1984 paper with 1988 addendum, printed in We Shall Overcome, Volume II, edited by David Garrow, New York: Carlson Publishing Co., 1989. - Malcolm X (with the assistance of Alex Haley). The Autobiography of Malcolm X. New York: Random House, 1965. Paperback ISBN 0-345-35068-5. Hardcover ISBN 0-345-37975-6. - Marable, Manning. Race, Reform and Rebellion: The Second Reconstruction in Black America, 1945–1982. 249 pages. University Press of Mississippi, 1984. ISBN 0-87805-225-9. - McAdam, Doug. Political Process and the Development of Black Insurgency, 1930–1970. Chicago: University of Chicago Press, 1982. - McAdam, Doug. "The US Civil Rights Movement: Power from Below and Above, 1945–70", in Adam Roberts and Timothy Garton Ash (eds.), Civil Resistance and Power Politics: The Experience of Non-violent Action from Gandhi to the Present. Oxford & New York: Oxford University Press, 2009. ISBN 978-0-19-955201-6. - Minchin, Timothy J. Hiring the Black Worker: The Racial Integration of the Southern Textile Industry, 1960–1980. University of North Carolina Press, 1999. ISBN 0-8078-2470-4. - Morris, Aldon D. The Origins of the Civil Rights Movement: Black Communities Organizing for Change. New York: The Free Press, 1984. ISBN 0-02-922130-7 - Sokol, Jason. There Goes My Everything: White Southerners in the Age of Civil Rights, 1945–1975. New York: Knopf, 2006. - Payne, Charles M. I've Got the Light of Freedom: The Organizing Tradition and the Mississippi Freedom Struggle. Berkeley: University of California Press, 1995. - Patterson, James T. Brown v. Board of Education, a Civil Rights Milestone and Its Troubled Legacy. Oxford University Press, 2002. ISBN 0-19-515632-3 - Raiford, Leigh. Imprisoned in a Luminous Glare: Photography and the African American Freedom Struggle. Chapel Hill: University of North Carolina Press, 2011. - Ransby, Barbara. Ella Baker and the Black Freedom Movement, a Radical Democratic Vision. The University of North Carolina Press, 2003. - Reeves, Richard (1993). President Kennedy: Profile of Power. New York: Simon & Schuster. ISBN 978-0-671-64879-4. - Shawki, Ahmed. Black Liberation and Socialism (Haymarket Books, 2006). ISBN 1-931859-26-4. - Sitkoff, Howard. The Struggle for Black Equality (2nd ed., 2008) - Tsesis, Alexander. We Shall Overcome: A History of Civil Rights and the Law (Yale University Press, 2008). ISBN 978-0-300-11837-7 - Williams, Juan. Eyes on the Prize: America's Civil Rights Years, 1954–1965. New York: Penguin Books, 1987. ISBN 0-14-009653-1 Historiography and memory - Fairclough, Adam. "Historians and the Civil Rights Movement." Journal of American Studies (1990) 24#3, pp. 387–398. - Frost, Jennifer. "Using 'Master Narratives' to Teach History: The Case of the Civil Rights Movement." History Teacher (2012) 45#3, pp. 437–446. Online - Hall, Jacquelyn Dowd. "The Long Civil Rights Movement and the Political Uses of the Past." Journal of American History (2005) 91#4, pp. 1233–1263. - Lawson, Steven F. "Freedom Then, Freedom Now: The Historiography of the Civil Rights Movement," American Historical Review (1991) 96#2, pp. 456–471. In JSTOR - Sandage, Scott A. "A Marble House Divided: The Lincoln Memorial, the Civil Rights Movement, and the Politics of Memory, 1939–1963." Journal of American History (1993): 135–167. Online - Holsaert, Faith, et al.
Hands on the Freedom Plow: Personal Accounts by Women in SNCC. University of Illinois Press, 2010. ISBN 978-0-252-03557-9. - Civil Rights Greensboro provides access to archival resources documenting the modern civil rights era in Greensboro, North Carolina, from the 1940s to the early 1980s - St. Augustine Civil Rights Movement and Freedom Trail, marking its sites. - Civil Rights Resource Guide, from the Library of Congress - The Civil Rights Era, Library of Congress - Civil Rights Digital Library, Digital Library of Georgia - Civil Rights Movement Veterans ~ Movement history, personal stories, documents, and photos. - Civil Rights Movement 1955–1965 - Civil Rights as a People's Movement, American University course syllabus - Let Justice Roll Down: The Civil Rights Movement Through Film, Yale-New Haven Teachers Institute - University of Southern Mississippi's Civil Rights Documentation Project, includes an extensive Timeline - President Kennedy's Address to the nation on Civil Rights - What Was Jim Crow? (The racial caste system that precipitated the Civil Rights Movement) - History and images of the sit-in movement - WDAS Radio's Enduring Impact on the Civil Rights Movement - The Committee on the Appeal for Human Rights and the Atlanta Student Movement - The Georgia Movement - Black Leaders of the Civil Rights Movement - The Albany Movement (entry in the New Georgia Encyclopedia) - Materials relating to the desegregation of Ole Miss in 1962 - Images of the Civil Rights Movement in Florida from the State Archives of Florida
In economics, economic equilibrium is a situation in which economic forces such as supply and demand are balanced and, in the absence of external influences, the (equilibrium) values of economic variables will not change. For example, in the standard textbook model of perfect competition, equilibrium occurs at the point at which quantity demanded and quantity supplied are equal. Market equilibrium in this case is a condition where a market price is established through competition such that the amount of goods or services sought by buyers is equal to the amount of goods or services produced by sellers. This price is often called the competitive price or market clearing price and will tend not to change unless demand or supply changes, and the quantity is called the "competitive quantity" or market clearing quantity. But the concept of equilibrium in economics also applies to imperfectly competitive markets, where it takes the form of a Nash equilibrium. As a solution concept in game theory, economic equilibrium is:

- Subset of: equilibrium, free market
- Superset of: competitive equilibrium, Nash equilibrium, intertemporal equilibrium, recursive competitive equilibrium
- Used for: mostly perfect competition, but also some imperfect competition

Properties of equilibrium

Equilibrium property P1: The behavior of agents is consistent.

Equilibrium property P2: No agent has an incentive to change its behavior.

Equilibrium property P3: Equilibrium is the outcome of some dynamic process (stability).

Example: competitive equilibrium

In a competitive equilibrium, supply equals demand. Property P1 is satisfied, because at the equilibrium price the amount supplied is equal to the amount demanded. Property P2 is also satisfied: demand is chosen to maximize utility given the market price, so no one on the demand side has an incentive to demand more or less at the prevailing price, and likewise supply is determined by firms maximizing their profits at the market price, so no firm will want to supply any more or less at the equilibrium price. Hence, agents on neither the demand side nor the supply side have any incentive to alter their actions.

To see whether property P3 is satisfied, consider what happens when the price is above the equilibrium. In this case there is an excess supply, with the quantity supplied exceeding that demanded, which tends to put downward pressure on the price and make it return to equilibrium. Likewise, where the price is below the equilibrium point, there is a shortage in supply, leading to an increase in prices back to equilibrium.

Not all equilibria are "stable" in the sense of equilibrium property P3. It is possible to have competitive equilibria that are unstable. However, if an equilibrium is unstable, it raises the question of how it is ever reached. Even if it satisfies properties P1 and P2, the absence of P3 means that the market can only be in the unstable equilibrium if it starts off there.
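The price-adjustment story behind property P3 can be made concrete with a small simulation. The sketch below is illustrative only: the linear demand and supply curves, all parameter values, and the function names are assumptions, and the adjustment rule is a simple tatonnement in which the price moves in the direction of excess demand.

    # Stability (property P3) in a competitive market, assuming hypothetical
    # linear curves: demand Qd = a - b*P and supply Qs = c + d*P.

    def excess_demand(p, a=100.0, b=2.0, c=10.0, d=3.0):
        """Excess demand Qd - Qs at price p for the assumed linear curves."""
        return (a - b * p) - (c + d * p)

    def tatonnement(p0, step=0.05, tol=1e-9, max_iter=10_000):
        """Raise the price while demand exceeds supply, cut it while supply
        exceeds demand, and stop once excess demand is (numerically) zero."""
        p = p0
        for _ in range(max_iter):
            z = excess_demand(p)
            if abs(z) < tol:
                break
            p += step * z  # the price moves with the sign of excess demand
        return p

    # Starting above or below the market-clearing price converges to the same
    # value, P* = (a - c) / (b + d) = 18 for the parameters assumed here.
    print(tatonnement(p0=30.0))  # ~18.0
    print(tatonnement(p0=5.0))   # ~18.0

An unstable equilibrium would correspond instead to an adjustment process that moves the price away from the market-clearing value unless it starts exactly there.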
Then, there will be no change in price or the amount of output bought and sold until there is an exogenous shift in supply or demand (such as changes in technology or tastes). That is, there are no endogenous forces leading to a change in the price or the quantity.

Example: Nash equilibrium

The Nash equilibrium is widely used in economics as the main alternative to competitive equilibrium. It is used whenever there is a strategic element to the behavior of agents and the "price taking" assumption of competitive equilibrium is inappropriate. The first use of the Nash equilibrium was in the Cournot duopoly as developed by Antoine Augustin Cournot in his 1838 book. Both firms produce a homogeneous product: given the total amount supplied by the two firms, the (single) industry price is determined using the demand curve. This determines the revenues of each firm (the industry price times the quantity supplied by the firm). The profit of each firm is then this revenue minus the cost of producing the output. Clearly, there is a strategic interdependence between the two firms: if one firm varies its output, this will in turn affect the market price and so the revenue and profits of the other firm. We can define the payoff function which gives the profit of each firm as a function of the two outputs chosen by the firms. Cournot assumed that each firm chooses its own output to maximize its profits given the output of the other firm. The Nash equilibrium occurs when both firms are producing the outputs which maximize their own profit given the output of the other firm.

In terms of the equilibrium properties, we can see that P2 is satisfied: in a Nash equilibrium, neither firm has an incentive to deviate from the Nash equilibrium given the output of the other firm. P1 is satisfied since the payoff function ensures that the market price is consistent with the outputs supplied and that each firm's profits equal revenue minus cost at this output.

Is the equilibrium stable as required by P3? Cournot himself argued that it was stable using the stability concept implied by best response dynamics. The reaction function for each firm gives the output which maximizes profits (the best response) as a function of the other firm's output. In the standard Cournot model this is downward sloping: if the other firm produces a higher output, the best response involves producing less. Best response dynamics involves firms starting from some arbitrary position and then adjusting output to their best response to the previous output of the other firm. So long as the reaction functions have a slope of less than one in absolute value, this will converge to the Nash equilibrium. However, this stability story is open to much criticism. As Dixon argues: "The crucial weakness is that, at each step, the firms behave myopically: they choose their output to maximize their current profits given the output of the other firm, but ignore the fact that the process specifies that the other firm will adjust its output...". There are other concepts of stability that have been put forward for the Nash equilibrium, evolutionary stability for example.

Most economists, for example Paul Samuelson (Foundations of Economic Analysis, Ch. 3, p. 52), caution against attaching a normative meaning (value judgement) to the equilibrium price. For example, food markets may be in equilibrium at the same time that people are starving (because they cannot afford to pay the high equilibrium price).
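The best-response dynamics that Cournot appealed to can be sketched in a few lines. The linear inverse demand curve and the constant marginal cost below are illustrative assumptions, not figures from the text; with them each reaction function has slope -1/2, inside the convergence condition just stated.

```python
# Best-response dynamics for a symmetric Cournot duopoly (illustrative
# sketch).  Assumes linear inverse demand P = a - b*(q1 + q2) and constant
# marginal cost c, so firm i's best response to the rival's output q_j is
# q_i = (a - c - b*q_j) / (2*b).  The Nash equilibrium is q* = (a - c)/(3*b).

a, b, c = 100.0, 1.0, 10.0          # hypothetical demand/cost parameters

def best_response(q_other):
    """Output that maximizes a firm's profit given the rival's output."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

q1, q2 = 5.0, 40.0                  # arbitrary starting outputs
for step in range(25):
    q1, q2 = best_response(q2), best_response(q1)

q_star = (a - c) / (3 * b)
print(f"dynamics -> q1={q1:.3f}, q2={q2:.3f}; Nash equilibrium q*={q_star:.3f}")
```

Both outputs converge to 30, the Nash equilibrium, because each adjustment halves the deviation from it; a reaction function steeper than one in absolute value would make the same loop diverge, which is exactly the instability P3 rules out.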
Indeed, this occurred during the Great Famine in Ireland in 1845–52: food was exported even though people were starving, because of the greater profits in selling to the English. The equilibrium price of the Irish–British market for potatoes was above the price that Irish farmers could afford, and thus (among other reasons) they starved.

In most interpretations, classical economists such as Adam Smith maintained that the free market would tend towards economic equilibrium through the price mechanism. That is, any excess supply (market surplus or glut) would lead to price cuts, which decrease the quantity supplied (by reducing the incentive to produce and sell the product) and increase the quantity demanded (by offering consumers bargains), automatically abolishing the glut. Similarly, in an unfettered market, any excess demand (or shortage) would lead to price increases, reducing the quantity demanded (as customers are priced out of the market) and increasing the quantity supplied (as the incentive to produce and sell a product rises). As before, the disequilibrium (here, the shortage) disappears. This automatic abolition of non-market-clearing situations distinguishes markets from central planning schemes, which often have a difficult time getting prices right and suffer from persistent shortages of goods and services.

This view came under attack from at least two viewpoints. Modern mainstream economics points to cases where equilibrium does not correspond to market clearing (but instead to unemployment), as with the efficiency wage hypothesis in labor economics. In some ways parallel is the phenomenon of credit rationing, in which banks hold interest rates low to create an excess demand for loans, so they can pick and choose whom to lend to. Further, economic equilibrium can correspond with monopoly, where the monopolistic firm maintains an artificial shortage to prop up prices and to maximize profits. Finally, Keynesian macroeconomics points to underemployment equilibrium, where a surplus of labor (i.e., cyclical unemployment) co-exists for a long time with a shortage of aggregate demand.

Solving for the competitive equilibrium price

To find the equilibrium price, one must either plot the supply and demand curves, or solve for the price at which the expressions for supply and demand are equal. For example, if demand is Qd = A − B·P and supply is Qs = C + D·P, setting Qd = Qs gives the equilibrium price P* = (A − C) / (B + D). In a diagram depicting a simple set of supply and demand curves, the quantity demanded and supplied at price P are equal. At any price above P supply exceeds demand, while at a price below P the quantity demanded exceeds that supplied. In other words, prices where demand and supply are out of balance are termed points of disequilibrium, creating shortages and oversupply. Changes in the conditions of demand or supply will shift the demand or supply curves. This will cause changes in the equilibrium price and quantity in the market.

Consider the following demand and supply schedule:
- The equilibrium price in the market is $5.00, where demand and supply are equal at 12,000 units.
- If the current market price was $3.00 – there would be excess demand for 8,000 units, creating a shortage.
- If the current market price was $8.00 – there would be excess supply of 12,000 units.

When there is a shortage in the market we see that, to correct this disequilibrium, the price of the good will be increased back to a price of $5.00, lessening the quantity demanded and increasing the quantity supplied so that the market is in balance.
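A minimal sketch of solving for the equilibrium price follows. The linear curves are hypothetical; their slopes are chosen only so that the computed figures reproduce the schedule's quoted numbers (a $5.00 equilibrium at 12,000 units, an 8,000-unit shortage at $3.00, and a 12,000-unit surplus at $8.00). Any split of the two slopes with B + D = 4000 reproduces the same figures.

```python
# Sketch (hypothetical linear curves calibrated to the quoted figures):
#   Demand: Qd(P) = A - B*P     Supply: Qs(P) = C + D*P
A, B = 24_000, 2_400            # assumed demand intercept/slope
C, D = 4_000, 1_600             # assumed supply intercept/slope

demand = lambda p: A - B * p
supply = lambda p: C + D * p

p_star = (A - C) / (B + D)      # equilibrium: demand(p) == supply(p)
print(p_star, demand(p_star))   # -> 5.0  12000.0
print(demand(3) - supply(3))    # shortage at $3.00  -> 8000
print(supply(8) - demand(8))    # surplus at $8.00   -> 12000
```

With these curves, raising the demand intercept A raises P* by 1/(B + D) per unit, which is the positive comparative-static effect of a demand shift discussed next.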
When there is an oversupply of a good, such as when the price is above the $5.00 equilibrium, we see that producers will decrease the price to increase the quantity demanded for the good, thus eliminating the excess and taking the market back to equilibrium.

Influences changing price

A change in equilibrium price may occur through a change in either the supply or demand schedules. For instance, starting from the above supply-demand configuration, an increased level of disposable income may produce a new demand schedule, such as the following: an increase in disposable income would increase the quantity demanded of the good by 2,000 units at each price. This increase in demand would have the effect of shifting the demand curve rightward. The result is a change in the price at which quantity supplied equals quantity demanded. In this case we see that the two now equal each other at an increased price of $6.00. Note that a decrease in disposable income would have the exact opposite effect on the market equilibrium.

We will also see similar behaviour in price when there is a change in the supply schedule, occurring through technological changes or through changes in business costs. An increase in technological usage or know-how, or a decrease in costs, would have the effect of increasing the quantity supplied at each price, thus reducing the equilibrium price. On the other hand, a decrease in technology or an increase in business costs will decrease the quantity supplied at each price, thus increasing the equilibrium price.

The process of comparing two static equilibria to each other, as in the above example, is known as comparative statics. For example, since a rise in consumers' income leads to a higher price (and a decline in consumers' income leads to a fall in the price; in each case the two things change in the same direction), we say that the comparative static effect of consumer income on the price is positive. This is another way of saying that the total derivative of price with respect to consumer income is greater than zero.

Whereas in a static equilibrium all quantities have unchanging values, in a dynamic equilibrium various quantities may all be growing at the same rate, leaving their ratios unchanging. For example, in the neoclassical growth model, the working population is growing at a rate which is exogenous (determined outside the model, by non-economic forces). In dynamic equilibrium, output and the physical capital stock also grow at that same rate, with output per worker and the capital stock per worker unchanging. Similarly, in models of inflation a dynamic equilibrium would involve the price level, the nominal money supply, nominal wage rates, and all other nominal values growing at a single common rate, while all real values are unchanging, as is the inflation rate.

The process of comparing two dynamic equilibria to each other is known as comparative dynamics. For example, in the neoclassical growth model, starting from one dynamic equilibrium based in part on one particular saving rate, a permanent increase in the saving rate leads to a new dynamic equilibrium in which there are permanently higher capital per worker and productivity per worker, but an unchanged growth rate of output; so it is said that in this model the comparative dynamic effect of the saving rate on capital per worker is positive but the comparative dynamic effect of the saving rate on the output growth rate is zero.

Disequilibrium characterizes a market that is not in equilibrium.
Disequilibrium can occur extremely briefly or over an extended period of time. Typically in financial markets it either never occurs or only momentarily occurs, because trading takes place continuously and the prices of financial assets can adjust instantaneously with each trade to equilibrate supply and demand. At the other extreme, many economists view labor markets as being in a state of disequilibrium—specifically one of excess supply—over extended periods of time. Goods markets are somewhere in between: prices of some goods, while sluggish in adjusting due to menu costs, long-term contracts, and other impediments, do not stay at disequilibrium levels indefinitely.

- Varian, Hal R. (1992). Microeconomic Analysis (Third ed.). New York: Norton. ISBN 0-393-95735-7.
- Dixon, H. (1990). "Equilibrium and Explanation". In Creedy (ed.). The Foundations of Economic Thought. Blackwells. pp. 356–394. ISBN 0-631-15642-9. (Reprinted in Surfing Economics.)
- Cournot, Augustin (1838). Recherches sur les principes mathématiques de la théorie des richesses. Paris.
- Dixon (1990), p. 369.
- Samuelson, Paul A. (1947; expanded ed. 1983). Foundations of Economic Analysis. Harvard University Press. ISBN 0-674-31301-1.
- See citations at Great Famine (Ireland): Food exports to England, including Cecil Woodham-Smith, The Great Hunger: Ireland 1845–1849, and Christine Kinealy, This Great Calamity and A Death-Dealing Famine.
- Smith, Adam (1776). Wealth of Nations. Penn State Electronic Classics edition, republished 2005, Chapter 7, pp. 51–58.
- Turnovsky, Stephen J. (2000). Methods of Macroeconomic Dynamics. MIT Press. ISBN 0-262-20123-2.
- O'Sullivan, Arthur; Sheffrin, Steven M. (2003). Economics: Principles in Action. Upper Saddle River, New Jersey: Pearson Prentice Hall. p. 550. ISBN 0-13-063085-3.
The Foucault pendulum (foo-KOH; French pronunciation: [fuˈko]) or Foucault's pendulum is a simple device named after French physicist Léon Foucault and conceived as an experiment to demonstrate the Earth's rotation. The pendulum was introduced in 1851 and was the first experiment to give simple, direct evidence of the Earth's rotation. Today, Foucault pendulums are popular displays in science museums and universities.

The first public exhibition of a Foucault pendulum took place in February 1851 in the Meridian of the Paris Observatory. A few weeks later, Foucault made his most famous pendulum when he suspended a 28-kg brass-coated lead bob with a 67-m-long wire from the dome of the Panthéon, Paris. The plane of the pendulum's swing rotated clockwise approximately 11.3° per hour, making a full circle in approximately 31.8 hours. The original bob used in 1851 at the Panthéon was moved in 1855 to the Conservatoire des Arts et Métiers in Paris. A second temporary installation was made for the 50th anniversary in 1902. During museum reconstruction in the 1990s, the original pendulum was temporarily displayed at the Panthéon (1995), but was later returned to the Musée des Arts et Métiers before it reopened in 2000. On April 6, 2010, the cable suspending the bob in the Musée des Arts et Métiers snapped, causing irreparable damage to the pendulum and to the marble flooring of the museum. An exact copy of the original pendulum has been operating under the dome of the Panthéon, Paris since 1995.

At either the North Pole or South Pole, the plane of oscillation of a pendulum remains fixed relative to the distant masses of the universe while Earth rotates underneath it, taking one sidereal day to complete a rotation. So, relative to Earth, the plane of oscillation of a pendulum at the North Pole undergoes a full clockwise rotation during one day; a pendulum at the South Pole rotates counterclockwise. When a Foucault pendulum is suspended at the equator, the plane of oscillation remains fixed relative to Earth. At other latitudes, the plane of oscillation precesses relative to Earth, but more slowly than at the pole; the angular speed ω (measured in clockwise degrees per sidereal day) is proportional to the sine of the latitude φ: ω = 360° sin φ per sidereal day, where latitudes north and south of the equator are defined as positive and negative, respectively. For example, a Foucault pendulum at 30° south latitude, viewed from above by an earthbound observer, rotates counterclockwise 360° in two days.

To demonstrate rotation directly rather than indirectly via the swinging pendulum, Foucault used a gyroscope in an 1852 experiment. The inner gimbal of the Foucault gyroscope was balanced on knife-edge bearings on the outer gimbal, and the outer gimbal was suspended by a fine, torsion-free thread in such a manner that the lower pivot point carried almost no weight. The gyro was spun to 9,000–12,000 revolutions per minute with an arrangement of gears before being placed into position, which left sufficient time to balance the gyroscope and carry out 10 minutes of experimentation. The instrument could be observed either with a microscope viewing a tenth-of-a-degree scale or by a long pointer. At least three more copies of a Foucault gyro were made in convenient travelling and demonstration boxes, and copies survive in the UK, France, and the US.

A Foucault pendulum requires care to set up, because imprecise construction can cause additional veering which masks the terrestrial effect.
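The sine-of-latitude law above is easy to tabulate. The short sketch below (an illustration, not from the source) prints the precession rate and the time for a full apparent turn at a few latitudes, including the roughly 31.8-hour figure quoted for Paris.

```python
import math

# Sketch: precession of a Foucault pendulum's swing plane from the
# sine-of-latitude law.  A sidereal day is taken as 23.934 hours.
def precession_deg_per_sidereal_day(latitude_deg):
    """Clockwise degrees per sidereal day; negative means counterclockwise."""
    return 360.0 * math.sin(math.radians(latitude_deg))

for lat in (90.0, 48.85, 30.0, 0.0, -30.0):
    rate = precession_deg_per_sidereal_day(lat)
    period_h = float('inf') if rate == 0 else 23.934 * 360.0 / abs(rate)
    print(f"lat {lat:+7.2f} deg: {rate:+8.1f} deg/day, full turn in {period_h:6.1f} h")
```

At 48.85° north the rate is about 271° per sidereal day, a full turn in roughly 31.8 hours; at the equator the rate is zero and the plane never precesses.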
The initial launch of the pendulum is critical; the traditional way to do this is to use a flame to burn through a thread which temporarily holds the bob in its starting position, thus avoiding unwanted sideways motion (see a detail of the launch at the 50th anniversary in 1902). Air resistance damps the oscillation, so some Foucault pendulums in museums incorporate an electromagnetic or other drive to keep the bob swinging; others are restarted regularly, sometimes with a launching ceremony as an added attraction.

A 'pendulum day' is the time needed for the plane of a freely suspended Foucault pendulum to complete an apparent rotation about the local vertical. This is one sidereal day divided by the sine of the latitude. In a near-inertial frame moving in tandem with Earth, but not sharing the rotation of the Earth about its own axis, the suspension point of the pendulum traces out a circular path during one sidereal day. At the latitude of Paris, 48 degrees 51 minutes north, a full precession cycle takes just under 32 hours, so after one sidereal day, when the Earth is back in the same orientation as one sidereal day before, the oscillation plane has turned by just over 270 degrees. If the plane of swing was north-south at the outset, it is east-west one sidereal day later.

This also implies that there has been an exchange of momentum; the Earth and the pendulum bob have exchanged momentum. The Earth is so much more massive than the pendulum bob that the Earth's change of momentum is unnoticeable. Nonetheless, since the pendulum bob's plane of swing has shifted, the conservation laws imply that an exchange must have occurred.

Rather than tracking the change of momentum, the precession of the oscillation plane can efficiently be described as a case of parallel transport. For that, it can be demonstrated, by composing the infinitesimal rotations, that the precession rate is proportional to the projection of the angular velocity of Earth onto the normal direction to Earth, which implies that the trace of the plane of oscillation will undergo parallel transport. After 24 hours, the difference between the initial and final orientations of the trace in the Earth frame is α = −2π sin φ, which corresponds to the value given by the Gauss-Bonnet theorem. α is also called the holonomy or geometric phase of the pendulum. When analyzing earthbound motions, the Earth frame is not an inertial frame, but rotates about the local vertical at an effective rate of 2π sin φ radians per day. A simple method employing parallel transport within cones tangent to the Earth's surface can be used to describe the rotation angle of the swing plane of Foucault's pendulum.

From the perspective of an Earth-bound coordinate system with its x-axis pointing east and its y-axis pointing north, the precession of the pendulum is described by the Coriolis force. Consider a planar pendulum with natural frequency ω in the small-angle approximation. There are two forces acting on the pendulum bob: the restoring force provided by gravity and the wire, and the Coriolis force. The Coriolis force at latitude φ is horizontal in the small-angle approximation and is given by

Fc,x = 2mΩ sin(φ) (dy/dt)
Fc,y = −2mΩ sin(φ) (dx/dt)

where m is the mass of the bob, Ω is the rotational frequency of Earth, Fc,x is the component of the Coriolis force in the x-direction and Fc,y is the component of the Coriolis force in the y-direction. The restoring force, in the small-angle approximation, is given by

Fg,x = −mω²x
Fg,y = −mω²y

Using Newton's laws of motion this leads to the system of equations

d²x/dt² = −ω²x + 2Ω sin(φ) (dy/dt)
d²y/dt² = −ω²y − 2Ω sin(φ) (dx/dt)

Switching to complex coordinates z = x + iy, the equations read

d²z/dt² + 2iΩ sin(φ) (dz/dt) + ω²z = 0

To first order in Ω/ω
this equation has the solution

z(t) = e^(−iΩ sin(φ) t) (c₁ e^(iωt) + c₂ e^(−iωt))

If time is measured in days, then Ω = 2π and the pendulum rotates by an angle of −2π sin(φ) during one day.

Many physical systems precess in a similar manner to a Foucault pendulum. As early as 1836, the Scottish mathematician Edward Sang contrived and explained the precession of a spinning top. In 1851, Charles Wheatstone described an apparatus that consists of a vibrating spring mounted on top of a disk so that it makes a fixed angle with the disk. The spring is struck so that it oscillates in a plane. When the disk is turned, the plane of oscillation changes just like that of a Foucault pendulum at the corresponding latitude. Similarly, consider a nonspinning, perfectly balanced bicycle wheel mounted on a disk so that its axis of rotation makes an angle with the disk. When the disk undergoes a full clockwise revolution, the bicycle wheel will not return to its original position, but will have undergone a net rotation determined, by the same sine law, by the angle between its axis and the disk.

Foucault-like precession is observed in a virtual system wherein a massless particle is constrained to remain on a rotating plane that is inclined with respect to the axis of rotation. The spin of a relativistic particle moving in a circular orbit precesses in a manner similar to the swing plane of a Foucault pendulum. The relativistic velocity space in Minkowski spacetime can be treated as a sphere S3 in 4-dimensional Euclidean space with imaginary radius and imaginary timelike coordinate. Parallel transport of polarization vectors along such a sphere gives rise to Thomas precession, which is analogous to the rotation of the swing plane of a Foucault pendulum due to parallel transport along a sphere S2 in 3-dimensional Euclidean space.

There are numerous Foucault pendulums at universities, science museums, and the like throughout the world. The United Nations headquarters in New York City has one; the largest is at the Oregon Convention Center. The experiment has also been carried out at the South Pole, where it was assumed that the rotation of the Earth would have maximum effect, at the Amundsen-Scott South Pole Station, in a six-story staircase of a new station under construction. The pendulum had a length of 33 m and the bob weighed 25 kg. The location was ideal: no moving air could disturb the pendulum, and the low viscosity of the cold air reduced air resistance. The researchers confirmed about 24 hours as the rotation period of the plane of oscillation.
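The first-order result can be checked against the exact normal modes of the complex equation above. The sketch below is an illustration with an arbitrarily chosen pendulum frequency; it reads the precession rate off the characteristic roots and compares it with −Ω sin(φ).

```python
import numpy as np

# Sketch: verify that the linearized Foucault equation
#   z'' + 2i*Omega*sin(phi)*z' + omega**2 * z = 0
# precesses the swing plane at -Omega*sin(phi), by inspecting the
# characteristic roots of lambda**2 + 2i*Omega*s*lambda + omega**2 = 0.
Omega = 2 * np.pi                  # Earth's rotation, radians per sidereal day
omega = 2 * np.pi * 5000           # pendulum frequency, radians/day (hypothetical)
phi = np.radians(48.85)            # latitude of Paris
s = np.sin(phi)

roots = np.roots([1, 2j * Omega * s, omega**2])
precession = np.mean(roots.imag)   # common drift shared by the two normal modes
print(precession, -Omega * s)      # both about -4.73 rad/day
```

The two roots are −iΩs ± i·sqrt(ω² + Ω²s²), so their common imaginary offset is exactly −Ωs; the first-order formula is exact for the drift and only approximates the fast oscillation frequency.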
22. On a base of given length describe a triangle equal to a given triangle and having an angle equal to an angle of the given triangle.

23. Construct a triangle equal in area to a given triangle, and having a given altitude.

24. On a base of given length construct a triangle equal to a given triangle, and having its vertex on a given straight line.

25. On a base of given length describe (i) an isosceles triangle; (ii) a right-angled triangle, equal to a given triangle.

26. Construct a triangle equal to the sum or difference of two given triangles. [See Ex. 16, p. 118.]

27. ABC is a given triangle, and X a given point: describe a triangle equal to ABC, having its vertex at X, and its base in the same straight line as BC.

28. ABCD is a quadrilateral. On the base AB construct a triangle equal in area to ABCD, and having the angle at A common with the quadrilateral. [Join BD. Through C draw CX parallel to BD, meeting AD produced in X; join BX.]

29. Construct a rectilineal figure equal to a given rectilineal figure, and having fewer sides by one than the given figure. Hence shew how to construct a triangle equal to a given rectilineal figure.

30. ABCD is a quadrilateral: it is required to construct a triangle equal in area to ABCD, having its vertex at a given point X in DC, and its base in the same straight line as AB.

31. Construct a rhombus equal to a given parallelogram.

32. Construct a parallelogram which shall have the same area and perimeter as a given triangle.

33. Bisect a triangle by a straight line drawn through one of its angular points.

34. Trisect a triangle by straight lines drawn through one of its angular points. [See Ex. 19, p. 110, and I. 38.]

35. Divide a triangle into any number of equal parts by straight lines drawn through one of its angular points. [See Ex. 19, p. 107, and I. 38.]

36. Bisect a triangle by a straight line drawn through a given point in one of its sides. [Let ABC be the given Δ, and P the given point in the side AB. Bisect AB at Z; and join CZ, CP. Through Z draw ZQ parallel to CP. Join PQ. Then shall PQ bisect the Δ. See Ex. 21, p. 119.]

37. Trisect a triangle by straight lines drawn from a given point in one of its sides.

38. Cut off from a given triangle a fourth, fifth, sixth, or any part required by a straight line drawn from a given point in one of its sides. [See Ex. 19, p. 107, and Ex. 21, p. 119.]

39. Bisect a quadrilateral by a straight line drawn through an angular point. [Two constructions may be given for this problem: the first will be suggested by Exercises 28 and 33, p. 120. The second method proceeds thus. Let ABCD be the given quadrilateral, and A the given angular point. Join AC, BD, and bisect BD in X. Through X draw PXQ parallel to AC, meeting BC in P; join AP. Then shall AP bisect the quadrilateral. Join AX, CX, and use I. 37, 38.]

40. Cut off from a given quadrilateral a third, a fourth, a fifth, or any part required, by a straight line drawn through a given angular point. [See Exercises 28 and 35, p. 120.]

The following theorems depend on I. 47.

41. In the figure of I. 47, shew that (i) the sum of the squares on AB and AE is equal to the sum of the squares on AC and AD; (ii) the square on EK is equal to the square on AB with four times the square on AC; (iii) the sum of the squares on EK and FD is equal to five times the square on BC.

42. If a straight line is divided into any two parts, the square on the straight line is greater than the sum of the squares on the two parts.

43.
If the square on one side of a triangle is less than the squares on the remaining sides, the angle contained by these sides is acute; if greater, obtuse.

44. ABC is a triangle, right-angled at A; the sides AB, AC are intersected by a straight line PQ, and BQ, PC are joined: shew that the sum of the squares on BQ, PC is equal to the sum of the squares on BC, PQ.

45. In a right-angled triangle four times the sum of the squares on the medians which bisect the sides containing the right angle is equal to five times the square on the hypotenuse.

46. Describe a square whose area shall be three times that of a given square.

47. Divide a straight line into two parts such that the sum of their squares shall be equal to a given square.

IX. ON LOCI.

In many geometrical problems we are required to find the position of a point which satisfies given conditions; and all such problems hitherto considered have been found to admit of a limited number of solutions. This, however, will not be the case if only one condition is given. For example:

(i) Required a point which shall be at a given distance from a given point. This problem is evidently indeterminate, that is to say, it admits of an indefinite number of solutions; for the condition stated is satisfied by any point on the circumference of the circle described from the given point as centre, with a radius equal to the given distance. Moreover this condition is satisfied by no other point within or without the circle.

(ii) Required a point which shall be at a given distance from a given straight line. Here again there are an infinite number of such points, and they lie on two parallel straight lines drawn on either side of the given straight line at the given distance from it: further, no point that is not on one or other of these parallels satisfies the given condition.

Hence we see that one condition is not sufficient to determine the position of a point absolutely, but it may have the effect of restricting it to some definite line or lines, straight or curved. This leads us to the following definition.

DEFINITION. The Locus of a point satisfying an assigned condition consists of the line, lines, or part of a line, to which the point is thereby restricted; provided that the condition is satisfied by every point on such line or lines, and by no other.

A locus is sometimes defined as the path traced out by a point which moves in accordance with an assigned law. Thus the locus of a point, which is always at a given distance from a given point, is a circle of which the given point is the centre: and the locus of a point, which is always at a given distance from a given straight line, is a pair of parallel straight lines.

We now see that in order to infer that a certain line, or system of lines, is the locus of a point under a given condition, it is necessary to prove (i) that any point which fulfils the given condition is on the supposed locus; (ii) that every point on the supposed locus satisfies the given condition.

1. Find the locus of a point which is always equidistant from two given points.

Let A, B be the two given points.

(α) Let P be any point equidistant from A and B, so that AP = BP. Bisect AB at X, and join PX. Then in the Δs PXA, PXB, because AX = BX, and PX is common to both, also AP = BP, ∴ the ∠PXA = the ∠PXB; and these are adjacent angles, ∴ PX is perp. to AB. Def. 10. ∴ any point which is equidistant from A and B is on the straight line which bisects AB at right angles.

(β) Also every point in this line is equidistant from A and B. For let Q be any point in this line. Join AQ, BQ.
Then in the Δs AXQ, BXQ, because AX = BX, and XQ is common to both; also the ∠AXQ = the ∠BXQ, being rt. ∠s; ∴ AQ = BQ. That is, Q is equidistant from A and B.

Hence we conclude that the locus of the point equidistant from two given points A, B is the straight line which bisects AB at right angles.

2. To find the locus of the middle point of a straight line drawn from a given point to meet a given straight line of unlimited length.

Let A be the given point, and BC the given straight line of unlimited length.

(α) Let AX be any straight line drawn through A to meet BC, and let P be its middle point. Draw AF perp. to BC, and bisect AF at E. Join EP, and produce it indefinitely. Since AFX is a Δ, and E, P are the middle points of the two sides AF, AX, ∴ EP is parallel to the remaining side FX. Ex. 2, p. 104. ∴ P is on the straight line which passes through the fixed point E, and is parallel to BC.

(β) Again, every point in EP, or EP produced, fulfils the required condition. For in this straight line take any point Q; join AQ, and produce it to meet BC in Y. Then FAY is a Δ, and through E, the middle point of the side AF, EQ is drawn parallel to the base FY; ∴ Q is the middle point of AY. Ex. 1, p. 104.

Hence the required locus is the straight line drawn parallel to BC, and passing through E, the middle point of the perp. from A to BC.
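The two-part proof pattern just used (every qualifying point is on the locus, and every point of the locus qualifies) can be spot-checked with coordinates. The sketch below is a modern aside, not part of the original text: it samples points on the perpendicular bisector of AB and confirms each is equidistant from A and B.

```python
import random

# Coordinate check of Locus 1: points on the perpendicular bisector
# of AB are equidistant from A and B.
A, B = (1.0, 2.0), (5.0, 4.0)                      # arbitrary sample points
mid = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)       # X, the midpoint of AB

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

# Walk along the bisector mid + t*n, where n is perpendicular to AB,
# and confirm AQ = BQ at every sampled point.
n = (-(B[1] - A[1]), B[0] - A[0])
for _ in range(5):
    t = random.uniform(-10, 10)
    Q = (mid[0] + t * n[0], mid[1] + t * n[1])
    assert abs(dist2(Q, A) - dist2(Q, B)) < 1e-6
print("all sampled points on the bisector are equidistant from A and B")
```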
A group of astronomers, including Penn State scientists, has announced the likely discovery of a highly obscured black hole existing only 850 million years after the Big Bang, using NASA’s Chandra X-ray Observatory. This is the first evidence for a cloaked black hole at such an early time. Supermassive black holes typically grow by pulling in material from a disk of surrounding matter. For the most rapid growth, this process generates prodigious amounts of radiation in a very small region around the black hole, and produces an extremely bright, compact source called a quasar. Theoretical calculations indicate that most of the early growth of black holes occurs while the black hole and disk are surrounded by a dense cloud of gas that feeds material into the disk. As the black hole grows, the gas in the cloud is depleted until the black hole and its bright disk are uncovered.

“It’s extraordinarily challenging to find quasars in this cloaked phase because so much of their radiation is absorbed and cannot be detected by current instruments,” said Fabio Vito, CAS-CONICYT Fellow at the Pontificia Universidad Católica de Chile, who led the study, which he started as a postdoctoral researcher at Penn State. “Thanks to Chandra and the ability of X-rays to pierce through the obscuring cloud, we think we’ve finally succeeded.”

The discovery resulted from observations of a quasar called PSO 167-13, which was first discovered by Pan-STARRS, an optical-light telescope in Hawaii. Optical observations from these and other surveys have resulted in the detection of about 200 quasars already shining brightly when the universe was less than a billion years old, or about 8 percent of its present age. These surveys were only considered effective at finding unobscured black holes, because the radiation they detect is suppressed by even thin clouds of surrounding gas and dust. Therefore PSO 167-13 was expected to be unobscured.

Vito’s team was able to test this idea by making Chandra observations of PSO 167-13 and nine other quasars discovered with optical surveys. After 16 hours of observation only three X-ray photons were detected from PSO 167-13, all with relatively high energies. Low-energy X-rays are more readily absorbed than higher-energy ones, so the likely explanation for the Chandra observation is that the quasar is highly obscured by gas, allowing only high-energy X-rays to be detected.

“This was a complete surprise,” said co-author Niel Brandt, Verne M. Willaman Professor of Astronomy and Astrophysics and professor of physics at Penn State. “It was like we were expecting a moth but saw a cocoon instead. None of the other nine quasars we observed were cloaked, which is what we anticipated.”

[Image caption: Data from NASA’s Chandra X-ray Observatory have revealed what may be the most distant shrouded black hole, which may have existed only 850 million years after the Big Bang, or approximately half a billion years earlier than the previous record-holder. The small, central region marked with a red cross in the main image, from the optical Pan-STARRS survey, contains the quasar PSO 167-13, which was first discovered with Pan-STARRS. The left inset contains X-rays detected with Chandra from this region, with PSO 167-13 in the middle. The right inset shows the same field of view as seen by the Atacama Large Millimeter Array (ALMA) of radio dishes in Chile. The bright source is the quasar and a faint, nearby companion galaxy appears in the lower left. Credit: X-ray: NASA/CXO/Pontificia Universidad Catolica de Chile/F. Vito; Radio: ALMA (ESO/NAOJ/NRAO); Optical: Pan-STARRS]
An interesting twist for PSO 167-13 is that the galaxy hosting the quasar has a close companion galaxy visible in data previously obtained with the Atacama Large Millimeter Array (ALMA) of radio dishes in Chile and NASA’s Hubble Space Telescope. Because of their close separation and the faintness of the X-ray source, the team was unable to determine whether the newly discovered X-ray emission is associated with the quasar PSO 167-13 or with the companion galaxy.

If the X-rays come from the known quasar, then astronomers need to develop an explanation for why the quasar appeared highly obscured in X-rays but not in optical light. One possibility is that there has been a large and rapid increase in obscuration of the quasar during the three years between when the optical and the X-ray observations were made. On the other hand, if instead the X-rays arise from the companion galaxy, then it represents the detection of a new quasar in close proximity to PSO 167-13. This quasar pair would be the most distant yet detected, breaking the record of 1.2 billion years after the Big Bang. In either of these two cases, the quasar detected by Chandra would be the most distant cloaked one yet seen. The previous record holder is observed 1.3 billion years after the Big Bang.

The authors plan to make a more refined characterization of the source with follow-up observations. “With a longer Chandra observation, we'll be able to get a better estimate of how obscured this black hole is,” said co-author Franz Bauer, also from the Pontificia Universidad Católica de Chile and a former Penn State postdoctoral researcher, “and make a confident identification of the X-ray source with either the known quasar or the companion galaxy.”

The authors also plan to search for more examples of highly obscured black holes. “We suspect that the majority of supermassive black holes in the early universe are cloaked: it’s then crucial to detect and study them to understand how they could grow to masses of a billion suns so quickly,” said co-author Roberto Gilli of INAF in Bologna, Italy.

A paper describing these results appears online August 8 in the journal Astronomy and Astrophysics. NASA's Marshall Space Flight Center manages the Chandra program. The Smithsonian Astrophysical Observatory’s Chandra X-ray Center controls science and flight operations from Cambridge, MA. The data utilized in this research were gathered using the Advanced CCD Imaging Spectrometer on Chandra, an instrument conceived and designed by a team led by Penn State Evan Pugh Professor Emeritus of Astronomy and Astrophysics Gordon Garmire. In addition to Vito, Brandt, and Bauer, the research team also includes former Penn State postdoctoral researchers Ohad Shemmer, Cristian Vignali, and Bin Luo, who also earned his doctoral degree at Penn State.
Visualization is the thousand words generated by one picture, image, or animation in a web browser. Even though they can be challenging to understand and program, algorithms were chosen to facilitate the meaning and beauty of visualization. The visualization of algorithms should give the learner or the curious an appreciation of the beauty of the algorithm's code as rendered in a browser. The visualization of algorithms in a web browser introduces programming challenges. The challenges require the interfacing or calling of various computer languages to build the model of the visualized algorithm data to be displayed and run in the web browser. The complexity of the high-level programming code is reduced to HTML mark-up to display in simple browsers.

An algorithm is a well-defined computational procedure that takes a value or set of values as input and produces an expected output related to the input. The algorithm transforms the input to the expected output of the given input. Algorithms are tools for solving well-defined computational problems. The statement of the problem defines the algorithm's transformational relationship between input and output. For example, an algorithm to sort takes as

Input: a sequence of n numbers (a1, a2, …, an)

and produces a sorted sequence as

Output: (a'1, a'2, …, a'n) such that a'1 <= a'2 <= … <= a'n.

An algorithm is said to be correct if, for all inputs, the algorithm halts with the correct output. However, there are incorrect algorithms that can be useful. The algorithms presented here are correct algorithms.

Any Pair of Segments Intersect

An algorithm for determining whether any two line segments in a set of segments intersect. The algorithm uses a technique called sweeping. In sweeping, an imaginary vertical line passes through the given set of geometric objects from left to right. The geometric objects in this case are line segments. The sweeping line moves in the x-direction as a dimension of time. The sweeping line orders the line segments by placing them in a dynamic data structure to compare the segments' relationships. As the sweeping line moves from left to right, it considers the endpoints of the line segments and determines whether any segments intersect.

Align Three Sequences

An algorithm to compare similarities. For biological applications the algorithm would compare the DNA of two (or more) different organisms. One reason to compare strands of DNA is to determine how "similar" the strands are, as some measure of how closely related the organisms are. Similarity can be defined in different ways. For example, two strands may be counted as similar if one is a substring of the other, or if the number of changes needed to turn one strand into the other is small. Align Three Sequences is a three-"string/strand" version of the longest common subsequence problem.

Maximum Flow

An algorithm to compute the greatest rate at which "materials" flow through a "system" from a source to a sink without violating any capacity constraints. Materials and systems utilizing a Maximum Flow algorithm include liquids flowing through pipes, parts on an assembly line, current through electrical networks, and information through a network. The Ford-Fulkerson method is used to solve the Maximum Flow problem. The three important ideas of Ford-Fulkerson that transcend the method and are relevant to many flow algorithms and problems are residual networks, augmenting paths, and cuts.
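As a concrete instance of the sorting specification stated above (a sketch, not code from the site itself), insertion sort realizes the input/output relation: any sequence of n numbers in, a nondecreasing reordering out.

```python
# One correct algorithm satisfying the sorting input/output relation:
# input a sequence of n numbers, output a reordering in nondecreasing order.
def insertion_sort(a):
    """Return a new list with the elements of a in nondecreasing order."""
    out = list(a)
    for i in range(1, len(out)):
        key = out[i]
        j = i - 1
        while j >= 0 and out[j] > key:   # shift larger elements right
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = key                 # drop key into its sorted slot
    return out

print(insertion_sort([31, 41, 59, 26, 41, 58]))  # -> [26, 31, 41, 41, 58, 59]
```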
Protein Data Banks

An interface that allows users to select and view the 3D structure of a biological macromolecule in a selected protein data bank. The Protein Data Bank (PDB) was established as the first open-access digital data resource in all of biology and medicine. Through an internet information portal and downloadable data archive, the PDB provides access to 3D structure data for large biological molecules (proteins, DNA, and RNA). These are the molecules of life, found in all organisms on the planet. Knowing the 3D structure of a biological macromolecule is essential for understanding its role in human and animal health and disease, its function in plants and food and energy production, and its importance to other topics related to global prosperity and sustainability.

The Disk Visualizer is a simple interface for making disk images. The user is able to locate the disc on the canvas using X, Y coordinates. The user also inputs the SIZE, or radius, of the disk. An RGB color is selected. Various filters are applied to the disc: cubic, gaussian, hat, quad, and tent. Finally a weighting factor is added. Multiple discs can be added to the palette, as illustrated in the accompanying image. The disc(s) may be viewed in various file formats: Static GIF, Animated GIF, AVI, FLASH, MOV, and MPG. First, check out the Demos. Then move over to CREATE. View the disc with the default settings. Make multiple discs by checking KEEP. Select various file formats to view the disc image. Have fun!

An interface to view the archive of experimentally determined three-dimensional structures of biological macromolecules from the Protein Data Banks, which serves the community of researchers, educators, and students. ATOMTV allows the user to view and rotate various proteins from selectable protein data banks. There is a download option to download a particular PDB file and view the molecule in the viewer of ATOMTV.

AITCloud is a suite of client-server software for creating and using file hosting services. It is enterprise-ready with comprehensive support options. Free and open-source means that anyone is allowed to install and operate it on their own private server devices. Files are stored in conventional directory structures, accessible via WebDAV if necessary. User files are encrypted during transit and optionally at rest. AITCloud can synchronise with local clients running Windows, macOS, or various Linux distributions. AITCloud permits user and group administration. Content can be shared by defining granular read/write permissions between users and groups. Alternatively, users can create public URLs when sharing files.

Surveillance is a software application for monitoring via closed-circuit television. Control is via a web-based interface. The application can use standard cameras or IP-based camera devices. Surveillance supports multiple cameras, reviewable simultaneously. Recording starts when the application detects changes between camera frames; one can select zones within the field of view that the software will ignore (a sketch of this frame-differencing idea appears below). Surveillance supports cameras compatible with the ONVIF standard.

A surveillance camera from the camera server is pointing at a live fish tank. Different camera types are tested. The images are live feeds from my camera server: Fish Cam is a small peep-hole camera rigged up to my office aquarium. Note: There are times when maintenance is in order. During those times, a blue or black blank screen will be displayed. After about an hour the fish can be surveilled.
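The frame-differencing behavior described for Surveillance can be sketched as follows. This is an illustration of the general technique, not the product's actual code; the array shapes, threshold, and minimum pixel count are assumptions.

```python
import numpy as np

# Sketch of frame-differencing motion detection: record when consecutive
# frames differ, ignoring user-selected zones.
def motion_detected(prev, curr, ignore_mask, threshold=25, min_pixels=50):
    """prev/curr: grayscale frames as uint8 arrays; ignore_mask: True where
    the zone should be ignored by the detector."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    diff[ignore_mask] = 0                      # selected zones are ignored
    return int((diff > threshold).sum()) >= min_pixels

prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 40:60] = 200                       # simulated change in the scene
mask = np.zeros_like(prev, dtype=bool)         # no ignored zones in this demo
print(motion_detected(prev, curr, mask))       # -> True
```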
A surveillance camera from the camera server is pointing at a hummingbird feeder. Different camera types are tested. The images are live feeds from my camera server: Bird Cam is a small dome camera with night-time vision. It is pointed at the hummingbird feeder outside Allen Integrated Technologies (AIT). [Note: There are times when maintenance is in order. During those times, a blue or black blank screen will be displayed. After about an hour you can surveil the fish or the hummingbirds. Also, the hummingbirds will return during the spring. Until then the hummingbird cam displays the blue blank screen.]

Autonomous robots are mobile robots that move unsupervised through real-world environments to fulfill their tasks. Fundamental to the study and application of autonomous robots is kinematics: the understanding of the mechanical behavior of the robot, both in order to design appropriate mobile robots for tasks and to create control software for an instance of mobile robot hardware.

A Braitenberg vehicle is a robot that moves around autonomously based on its sensor inputs. This Braitenberg vehicle is called the Deliberate Explorer. The Deliberate Explorer turns slowly away from beacons. Imagine the Deliberate Explorer is scanning planets for signs of life! After exploring, it speeds away to the next planetary system … or beacon. Anyway, there is a trace of the Deliberate Explorer:
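A controller consistent with the described behavior, turning slowly away from beacons, can be sketched with the classic Braitenberg wiring. Everything below (sensor names, gains, the differential-drive reading) is an illustrative assumption rather than the Deliberate Explorer's actual code.

```python
# Braitenberg-style rule consistent with "turns slowly away from beacons".
def motor_speeds(left_sensor, right_sensor, base=1.0, gain=0.3):
    """Ipsilateral excitatory wiring: the wheel on the brighter side
    speeds up, so a differential-drive vehicle veers away from the
    stimulus; a small gain makes the turn slow."""
    left_motor = base + gain * left_sensor
    right_motor = base + gain * right_sensor
    return left_motor, right_motor

# Beacon brighter on the right -> right wheel faster -> slow turn to the
# left, i.e., away from the beacon.
print(motor_speeds(left_sensor=0.2, right_sensor=0.8))  # -> (1.06, 1.24)
```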
Cancer of the Larynx & Voice Box The larynx is an organ at the front of your neck. It is also called the voice box. It is about 2 inches long and 2 inches wide. It is above the windpipe (trachea). Below and behind the larynx is the esophagus. The larynx has two bands of muscle that form the vocal cords. The cartilage at the front of the larynx is sometimes called the Adam’s apple. The larynx has three main parts: - The top part of the larynx is the supraglottis. - The glottis is in the middle. Your vocal cords are in the glottis. - The subglottis is at the bottom. The subglottis connects to the windpipe. The larynx plays a role in breathing, swallowing, and talking. The larynx acts like a valve over the windpipe. The valve opens and closes to allow breathing, swallowing, and speaking: - Breathing: When you breathe, the vocal cords relax and open. When you hold your breath, the vocal cords shut tightly. - Swallowing: The larynx protects the windpipe. When you swallow, a flap called the epiglottis covers the opening of your larynx to keep food out of your lungs. The food passes through the esophagus on its way from your mouth to your stomach. - Talking: The larynx produces the sound of your voice. When you talk, your vocal cords tighten and move closer together. Air from your lungs is forced between them and makes them vibrate. This makes the sound of your voice. Your tongue, lips, and teeth form this sound into words. Who’s at Risk? No one knows the exact causes of cancer of the larynx. Doctors cannot explain why one person gets this disease and another does not. We do know that cancer is not contagious. You cannot “catch” cancer from another person. People with certain risk factors are more likely to get cancer of the larynx. A risk factor is anything that increases your chance of developing this disease. Studies have found the following risk factors: - Age. Cancer of the larynx occurs most often in people over the age of 55. - Gender. Men are four times more likely than women to get cancer of the larynx. - Race. African Americans are more likely than whites to be diagnosed with cancer of the larynx. - Smoking. Smokers are far more likely than nonsmokers to get cancer of the larynx. The risk is even higher for smokers who drink alcohol heavily. People who stop smoking can greatly decrease their risk of cancer of the larynx, as well as cancer of the lung, mouth, pancreas, bladder, and esophagus. Also, quitting smoking reduces the chance that someone with cancer of the larynx will get a second cancer in the head and neck region. (Cancer of the larynx is part of a group of cancers called head and neck cancers.) - Alcohol. People who drink alcohol are more likely to develop laryngeal cancer than people who don’t drink. The risk increases with the amount of alcohol that is consumed. The risk also increases if the person drinks alcohol and also smokes tobacco. - A personal history of head and neck cancer. Almost one in four people who have had head and neck cancer will develop a second primary head and neck cancer. - Occupation. Workers exposed to sulfuric acid mist or nickel have an increased risk of laryngeal cancer. Also, working with asbestos can increase the risk of this disease. Asbestos workers should follow work and safety rules to avoid inhaling asbestos fibers. Other studies suggest that having certain viruses or a diet low in vitamin A may increase the chance of getting cancer of the larynx. 
Another risk factor is having gastroesophageal reflux disease (GERD), which causes stomach acid to flow up into the esophagus. Most people who have these risk factors do not get cancer of the larynx. If you are concerned about your chance of getting cancer of the larynx, you should discuss this concern with your health care provider. Your health care provider may suggest ways to reduce your risk and can plan an appropriate schedule for checkups.

The symptoms of cancer of the larynx depend mainly on the size of the tumor and where it is in the larynx. Symptoms may include the following:
- Hoarseness or other voice changes
- A lump in the neck
- A sore throat or feeling that something is stuck in your throat
- A cough that does not go away
- Problems breathing
- Bad breath
- An earache
- Weight loss

These symptoms may be caused by cancer or by other, less serious problems. Only a doctor can tell for sure.

If you have symptoms of cancer of the larynx, the doctor may do some or all of the following exams:
- Physical exam. The doctor will feel your neck and check your thyroid, larynx, and lymph nodes for abnormal lumps or swelling. To see your throat, the doctor may press down on your tongue.
- Indirect laryngoscopy. The doctor looks down your throat using a small, long-handled mirror to check for abnormal areas and to see if your vocal cords move as they should. This test does not hurt. The doctor may spray a local anesthetic in your throat to keep you from gagging. This exam is done in the doctor's office.
- Direct laryngoscopy. The doctor inserts a thin, lighted tube called a laryngoscope through your nose or mouth. As the tube goes down your throat, the doctor can look at areas that cannot be seen with a mirror. A local anesthetic eases discomfort and prevents gagging. You may also receive a mild sedative to help you relax. Sometimes the doctor uses general anesthesia to put a person to sleep. This exam may be done in a doctor's office, an outpatient clinic, or a hospital.
- CT scan. An x-ray machine linked to a computer takes a series of detailed pictures of the neck area. You may receive an injection of a special dye so your larynx shows up clearly in the pictures. From the CT scan, the doctor may see tumors in your larynx or elsewhere in your neck.
- Biopsy. If an exam shows an abnormal area, the doctor may remove a small sample of tissue. Removing tissue to look for cancer cells is called a biopsy. For a biopsy, you receive local or general anesthesia, and the doctor removes tissue samples through a laryngoscope. A pathologist then looks at the tissue under a microscope to check for cancer cells. A biopsy is the only sure way to know if a tumor is cancerous.

If you need a biopsy, you may want to ask the doctor the following questions:
- What kind of biopsy will I have? Why?
- How long will it take? Will I be awake? Will it hurt?
- How soon will I know the results?
- Are there any risks? What are the chances of infection or bleeding after the biopsy?
- If I do have cancer, who will talk with me about treatment? When?

To plan the best treatment, your doctor needs to know the stage, or extent, of your disease. Staging is a careful attempt to learn whether the cancer has spread and, if so, to what parts of the body. The doctor may use x-rays, CT scans, or magnetic resonance imaging to find out whether the cancer has spread to lymph nodes, other areas in your neck, or distant sites.
People with cancer of the larynx often want to take an active part in making decisions about their medical care. It is natural to want to learn all you can about your disease and treatment choices. However, shock and stress after a diagnosis of cancer can make it hard to remember what you want to ask the doctor. Here are some ideas that might help:
- Make a list of questions.
- Take notes at the appointment.
- Ask the doctor if you may use a tape recorder during the appointment.
- Ask a family member or friend to come to the appointment with you.

Your doctor may refer you to a specialist who treats cancer of the larynx, such as a surgeon, otolaryngologist (an ear, nose, and throat doctor), radiation oncologist, or medical oncologist. You can also ask your doctor for a referral. Treatment usually begins within a few weeks of the diagnosis. Usually, there is time to talk to your doctor about treatment choices, get a second opinion, and learn more about the disease before making a treatment decision.

Methods of Treatment

Cancer of the larynx may be treated with radiation therapy, surgery, or chemotherapy. Some patients have a combination of therapies.

Radiation therapy (also called radiotherapy) uses high-energy x-rays to kill cancer cells. The rays are aimed at the tumor and the tissue around it. Radiation therapy is local therapy; it affects cells only in the treated area. Treatments are usually given 5 days a week for 5 to 8 weeks.

Laryngeal cancer may be treated with radiation therapy alone or in combination with surgery or chemotherapy:
- Radiation therapy alone: Radiation therapy is used alone for small tumors or for patients who cannot have surgery.
- Radiation therapy combined with surgery: Radiation therapy may be used to shrink a large tumor before surgery or to destroy cancer cells that may remain in the area after surgery. If a tumor grows back after surgery, it is often treated with radiation.
- Radiation therapy combined with chemotherapy: Radiation therapy may be used before, during, or after chemotherapy.

After radiation therapy, some people need feeding tubes placed into the abdomen. The feeding tube is usually temporary.

These are questions you may want to ask your doctor before having radiation therapy:
- Why do I need this treatment?
- What are the risks and side effects of this treatment?
- Are there any long-term effects?
- Should I see my dentist before I start treatment?
- When will the treatments begin? When will they end?
- How will I feel during therapy?
- What can I do to take care of myself during therapy?
- Can I continue my normal activities?
- How will my neck look afterward?
- What is the chance that the tumor will come back?
- How often will I need checkups?

Surgery is an operation in which a doctor removes the cancer using a scalpel or laser while the patient is asleep. When patients need surgery, the type of operation depends mainly on the size and exact location of the tumor. There are several types of laryngectomy (surgery to remove part or all of the larynx):
- Total laryngectomy: The surgeon removes the entire larynx.
- Partial laryngectomy (hemilaryngectomy): The surgeon removes part of the larynx.
- Supraglottic laryngectomy: The surgeon takes out the supraglottis, the top part of the larynx.
- Cordectomy: The surgeon removes one or both vocal cords.

Sometimes the surgeon also removes the lymph nodes in the neck. This is called lymph node dissection. The surgeon also may remove the thyroid.
During surgery for cancer of the larynx, the surgeon may need to make a stoma. (This surgery is called a tracheostomy.) The stoma is a new airway through an opening in the front of the neck. Air enters and leaves the windpipe (trachea) and lungs through this opening. A tracheostomy tube, also called a trach ("trake") tube, keeps the new airway open. For many patients, the stoma is temporary. It is needed only until the patient recovers from surgery. More information about stomas can be found in the "Living with a Stoma" section. After surgery, some people may need a temporary feeding tube.

Chemotherapy is the use of drugs to kill cancer cells. Your doctor may suggest one drug or a combination of drugs. The drugs for cancer of the larynx are usually given by injection into the bloodstream. The drugs enter the bloodstream and travel throughout the body. Chemotherapy is used to treat laryngeal cancer in several ways:
- Before surgery or radiation therapy: In some cases, drugs are given to try to shrink a large tumor before surgery or radiation therapy.
- After surgery or radiation therapy: Chemotherapy may be used after surgery or radiation therapy to kill any cancer cells that may be left. It also may be used for cancers that have spread.
- Instead of surgery: Chemotherapy may be used with radiation therapy instead of surgery. The larynx is not removed and the voice is spared.

Chemotherapy may be given in an outpatient part of the hospital, at the doctor's office, or at home. Rarely, a hospital stay may be needed.

These are questions you may want to ask your doctor before having chemotherapy:
- Why do I need this treatment?
- What will it do?
- Will I have side effects? What can I do about them?
- How long will I be on this treatment?
- How often will I need checkups?
Speed of gravity

In classical theories of gravitation, the speed of gravity is the speed at which changes in a gravitational field propagate: the speed at which a change in the distribution of energy and momentum of matter results in a subsequent alteration, at a distance, of the gravitational field it produces. In a more physically correct sense, the "speed of gravity" refers to the speed of a gravitational wave.

Introduction

The speed of gravitational waves in the general theory of relativity is equal to the speed of light in vacuum, c. Within the theory of special relativity, the constant c is not exclusively about light; rather, it is the highest possible speed for any interaction in nature. Formally, c is a conversion factor for changing the unit of time to the unit of space. This makes it the only speed which depends neither on the motion of an observer nor on the motion of a source of light or gravity. Thus, the speed of "light" is also the speed of gravitational waves and of any other massless particle. Such particles include the gluon (the carrier of the strong force), the photons that make up light, and the hypothetical gravitons that would be the field particles of gravity (though a theory of the graviton requires a theory of quantum gravity).

Static fields

The speed of physical changes in a gravitational or electromagnetic field should not be confused with "changes" in the behavior of static fields that are due to pure observer effects. Because of relativistic considerations, these changes in the direction of a static field look the same whether a distant charge moves with respect to an observer or the observer instead moves with respect to the charge. Thus, constant motion of an observer with regard to a static charge and its extended static field (either a gravitational or an electric field) does not change the field. For static fields, such as the electrostatic field connected with electric charge or the gravitational field connected with a massive object, the field extends to infinity and does not propagate. Motion of an observer does not cause the direction of such a field to change, and by symmetry, changing the observer's frame so that the charge appears to be moving at a constant rate also does not cause the direction of its field to change: the field continues to "point" in the direction of the charge at all distances from it. The consequence is that static fields (either electric or gravitational) always point directly to the actual position of the bodies they are connected to, without any delay due to a "signal" traveling (or propagating) from the charge over a distance to an observer. This remains true if the charged bodies and their observers are made to "move" (or not) simply by changing reference frames. This fact sometimes causes confusion about the "speed" of such static fields, which can appear to change infinitely quickly when the changes in the field are mere artifacts of the motion of the observer, or of observation. In such cases, nothing actually changes infinitely quickly, save the point of view of an observer of the field.
For example, when an observer begins to move with respect to a static field that already extends over light years, it appears as though "immediately" the entire field, along with its source, has begun moving at the speed of the observer. This, of course, includes the extended parts of the field. However, this "change" in the apparent behavior of the field source, along with its distant field, does not represent any sort of propagation that is faster than light.

Newtonian gravitation

Isaac Newton's formulation of a gravitational force law requires that each particle with mass respond instantaneously to every other particle with mass, irrespective of the distance between them. In modern terms, Newtonian gravitation is described by the Poisson equation, according to which, when the mass distribution of a system changes, its gravitational field adjusts instantaneously. The theory therefore assumes the speed of gravity to be infinite. This assumption was adequate to account for all phenomena at the observational accuracy of the time. It was not until the 19th century that an anomaly in astronomical observations that could not be reconciled with the Newtonian model of instantaneous action was noted: the French astronomer Urbain Le Verrier determined in 1859 that the elliptical orbit of Mercury precesses at a significantly different rate from that predicted by Newtonian theory.

Laplace

The first attempt to combine a finite gravitational speed with Newton's theory was made by Laplace in 1805. Based on Newton's force law, he considered a model in which the gravitational field is defined as a radiation field or fluid, and changes in the motion of the attracting body are transmitted by some sort of waves. The movements of the celestial bodies should then be modified at order v/c, where v is the relative speed between the bodies and c is the speed of gravity. The effect of a finite speed of gravity goes to zero as c goes to infinity, but not as 1/c² as it does in modern theories. This led Laplace to conclude that the speed of gravitational interactions is at least 7×10⁶ times the speed of light. This velocity was used by many in the 19th century to criticize any model based on a finite speed of gravity, such as electrical or mechanical explanations of gravitation.

From a modern point of view, Laplace's analysis is incorrect. Not knowing about the Lorentz invariance of static fields, Laplace assumed that when an object like the Earth moves around the Sun, the attraction of the Earth would not be toward the instantaneous position of the Sun, but toward where the Sun had been if its position were retarded by the relative velocity (this retardation actually does happen with the optical position of the Sun, and is called annual solar aberration). Putting the Sun immobile at the origin, if the Earth moves in an orbit of radius R with velocity v, and the gravitational influence moves with velocity c, then the Sun's true position is ahead of its optical position by an amount vR/c: the travel time of gravity from the Sun to the Earth multiplied by the relative velocity of the Sun and the Earth. The pull of gravity (if it behaved like a wave, such as light) would then always be displaced in the direction of the Earth's velocity, so that the Earth would always be pulled toward the optical position of the Sun rather than its actual position. This would produce a pull ahead of the Earth, which would cause the Earth's orbit to spiral outward.
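To make Laplace's magnitudes concrete, here is a minimal back-of-the-envelope sketch in Python. It is not from the original text; the orbital values are standard approximations for a circular Earth orbit.

```python
# Rough scale of Laplace's aberration argument for the Earth-Sun system.
v = 2.978e4    # Earth's orbital speed, m/s
R = 1.496e11   # orbital radius (1 AU), m
c = 2.998e8    # speed of light, m/s

aberration = v / c    # aberration angle, in radians
lag = v * R / c       # how far the retarded solar position trails, in meters

print(f"aberration angle v/c: {aberration:.2e} rad "
      f"({aberration * 206265:.1f} arcsec)")
print(f"positional lag vR/c : {lag / 1e3:.0f} km")

# Laplace's stability estimate forced the speed of gravity c_g to satisfy
# c_g >= ~7e6 * c if no compensating velocity-dependent terms exist.
print(f"Laplace's lower bound on c_g: {7e6 * c:.1e} m/s")
```

Run as written, this gives an aberration angle of about 20 arcseconds and a positional lag of roughly 15,000 km, the small tangential bias that, if uncompensated, would make the orbit spiral outward.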
Such an outspiral would be suppressed by an amount v/c compared to the force which keeps the Earth in orbit; and since the Earth's orbit is observed to be stable, Laplace's c must be very large. As is now known, the effective speed may be considered infinite in the limit of straight-line motion: as a static influence, the field is effectively instantaneous at a distance when seen by observers moving at constant transverse velocity. For orbits in which the direction of the velocity changes slowly, it is almost infinite.

The attraction toward an object moving with a steady velocity is toward its instantaneous position, with no delay, for both gravity and electric charge. In a field equation consistent with special relativity (i.e., a Lorentz invariant equation), the attraction exerted by a charge moving with constant relative velocity is always toward the instantaneous position of the charge (in this case, the "gravitational charge" of the Sun), not its time-retarded position. When an object moves in orbit at a steady speed but with changing direction of velocity, the effect on the orbit is of order v²/c², and the effect preserves energy and angular momentum, so that orbits do not decay.

Electrodynamical analogies

At the end of the 19th century, many tried to combine Newton's force law with the established laws of electrodynamics, like those of Wilhelm Eduard Weber, Carl Friedrich Gauss, Bernhard Riemann and James Clerk Maxwell. Those theories are not invalidated by Laplace's critique, because although they are based on finite propagation speeds, they contain additional terms which maintain the stability of the planetary system. Those models were used to explain the perihelion advance of Mercury, but they could not provide exact values. One exception was Maurice Lévy in 1890, who succeeded in doing so by combining the laws of Weber and Riemann, whereby the speed of gravity in his theory is equal to the speed of light. However, because the underlying electrodynamic laws of Weber and the others were themselves superseded, those hypotheses were rejected as well.

A more important variation of those attempts was the theory of Paul Gerber, who in 1898 derived for the perihelion advance the identical formula that was also derived later by Einstein. Based on that formula, Gerber calculated a propagation speed for gravity of 305,000 km/s, i.e. practically the speed of light. But Gerber's derivation of the formula was faulty, i.e., his conclusions did not follow from his premises, and therefore many (including Einstein) did not consider it to be a meaningful theoretical effort. Additionally, the value it predicted for the deflection of light in the gravitational field of the Sun was too high by a factor of 3/2.

In 1900 Hendrik Lorentz tried to explain gravity on the basis of his ether theory and the Maxwell equations. After proposing (and rejecting) a Le Sage type model, he assumed, like Ottaviano Fabrizio Mossotti and Johann Karl Friedrich Zöllner, that the attraction of oppositely charged particles is stronger than the repulsion of equally charged particles. The resulting net force is exactly what is known as universal gravitation, in which the speed of gravity is that of light. This leads to a conflict with the law of gravitation of Isaac Newton, in which it was shown by Pierre-Simon Laplace that a finite speed of gravity leads to some sort of aberration and therefore makes the orbits unstable. However, Lorentz showed that the theory is not affected by Laplace's critique, because due to the structure of the Maxwell equations only effects of order v²/c² arise. But Lorentz calculated that the value for the perihelion advance of Mercury was much too low.
He wrote:

The special form of these terms may perhaps be modified. Yet, what has been said is sufficient to show that gravitation may be attributed to actions which are propagated with no greater velocity than that of light.

In 1908 Henri Poincaré examined the gravitational theory of Lorentz and classified it as compatible with the relativity principle, but (like Lorentz) he criticized its inaccurate prediction of the perihelion advance of Mercury.

Lorentz covariant models

Henri Poincaré argued in 1904 that a propagation speed of gravity greater than c would contradict the concept of local time (based on synchronization by light signals) and the principle of relativity. He wrote:

What would happen if we could communicate by signals other than those of light, the velocity of propagation of which differed from that of light? If, after having regulated our watches by the optimal method, we wished to verify the result by means of these new signals, we should observe discrepancies due to the common translatory motion of the two stations. And are such signals inconceivable, if we take the view of Laplace, that universal gravitation is transmitted with a velocity a million times as great as that of light?

However, in 1905 Poincaré calculated that changes in the gravitational field can propagate with the speed of light if it is presupposed that such a theory is based on the Lorentz transformation. He wrote:

Laplace showed in effect that the propagation is either instantaneous or much faster than that of light. However, Laplace examined the hypothesis of finite propagation velocity ceteris non mutatis; here, on the contrary, this hypothesis is conjoined with many others, and it may be that between them a more or less perfect compensation takes place. The application of the Lorentz transformation has already provided us with numerous examples of this.

Similar models were also proposed by Hermann Minkowski (1907) and Arnold Sommerfeld (1910). However, those attempts were quickly superseded by Einstein's theory of general relativity. (Whitehead's theory of gravitation (1922), a later Lorentz covariant model, likewise explains gravitational red shift, light bending, perihelion shift and Shapiro delay.)

General relativity

General relativity predicts that gravitational radiation should exist and propagate as a wave at lightspeed: a slowly evolving and weak gravitational field will produce, according to general relativity, effects like those of Newtonian gravitation. Suddenly displacing one of two gravitoelectrically interacting particles would, after a delay corresponding to lightspeed, cause the other to feel the displaced particle's absence. Accelerations due to the change in quadrupole moment of star systems, like the Hulse–Taylor binary, have removed much energy (almost 2% of the energy of our own Sun's output) as gravitational waves, which would theoretically travel at the speed of light.

Two gravitoelectrically interacting particle ensembles, e.g., two planets or stars moving at constant velocity with respect to each other, each feel a force toward the instantaneous position of the other body without a speed-of-light delay, because Lorentz invariance demands that what a moving body in a static field sees and what a moving body that emits that field sees be symmetrical. Since a moving body sees no aberration in a static field emanating from a "motionless body", Lorentz invariance requires that in the previously moving body's reference frame the (now moving) emitting body's field lines must not, at a distance, be retarded or aberred.
Moving charged bodies (including bodies that emit static gravitational fields) exhibit static field lines that do not bend with distance and show no speed-of-light delay effects, as seen from bodies moving with respect to them. In other words, since the gravitoelectric field is, by definition, static and continuous, it does not propagate. If such a source of a static field is accelerated (for example stopped) with regard to its formerly constant-velocity frame, its distant field continues to be updated as though the charged body had continued with constant velocity. This effect causes the distant fields of unaccelerated moving charges to appear to be "updated" instantly for their constant-velocity motion, as seen from distant positions, in the frame where the source object is moving at constant velocity. However, as discussed above, this is an effect which can be removed at any time by transitioning to a new reference frame in which the distant charged body is at rest.

The static and continuous gravitoelectric component of a gravitational field is not a gravitomagnetic component (gravitational radiation); see Petrov classification. The gravitoelectric field is a static field, and therefore cannot superluminally transmit quantized (discrete) information, i.e., it could not constitute a well-ordered series of impulses carrying a well-defined meaning (the same holds for both gravity and electromagnetism).

Aberration of field direction in general relativity, for a weakly accelerated observer

The finite speed of gravitational interaction in general relativity does not lead to the sorts of problems with the aberration of gravity that Newton was originally concerned with, because there is no such aberration in static field effects. Because the acceleration of the Earth with regard to the Sun is small (meaning, to a good approximation, the two bodies can be regarded as traveling in straight lines past each other with unchanging velocity), the orbital results calculated by general relativity are the same as those of Newtonian gravity with instantaneous action at a distance, because they are modelled by the behavior of a static field with constant-velocity relative motion, and no aberration for the forces involved. Although the calculations are considerably more complicated, one can show that a static field in general relativity does not suffer from aberration problems as seen by an unaccelerated observer (or a weakly accelerated observer, such as the Earth). Analogously, the "static term" in the electromagnetic Liénard–Wiechert potential theory of the fields from a moving charge does not suffer from either aberration or positional retardation. Only the term corresponding to acceleration and electromagnetic emission in the Liénard–Wiechert potential shows a direction toward the time-retarded position of the emitter.

It is in fact not very easy to construct a self-consistent gravity theory in which the gravitational interaction propagates at a speed other than the speed of light, which complicates discussion of this possibility. In general relativity the metric tensor represents the gravitational potential, and the Christoffel symbols of the spacetime manifold represent the gravitational force field. The tidal gravitational field is associated with the curvature of spacetime.
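For reference, the decomposition behind the Liénard–Wiechert remark above can be written out explicitly. This is the standard textbook expression rather than something taken from this article; here $\hat{\mathbf{n}}$ is the unit vector from the retarded source position to the field point, $\boldsymbol{\beta} = \mathbf{v}/c$, $R$ is the retarded distance, and the bracket is evaluated at the retarded time:

$$
\mathbf{E}(\mathbf{r},t) = \frac{q}{4\pi\varepsilon_0}
\left[
\frac{(\hat{\mathbf{n}}-\boldsymbol{\beta})(1-\beta^{2})}{(1-\hat{\mathbf{n}}\cdot\boldsymbol{\beta})^{3} R^{2}}
+ \frac{\hat{\mathbf{n}}\times\big((\hat{\mathbf{n}}-\boldsymbol{\beta})\times\dot{\boldsymbol{\beta}}\big)}{c\,(1-\hat{\mathbf{n}}\cdot\boldsymbol{\beta})^{3} R}
\right]_{\mathrm{ret}}
$$

The first ("velocity") term falls off as $1/R^{2}$ and, for a charge in uniform motion, points toward the linearly extrapolated instantaneous position; only the second ("acceleration") term, falling off as $1/R$, carries radiation and is directed relative to the time-retarded position.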
Possible experimental measurements

The speed of gravity (more correctly, the speed of gravitational waves) can be calculated from observations of the orbital decay rates of the binary pulsars PSR 1913+16 (the Hulse–Taylor binary system noted above) and PSR B1534+12. The orbits of these binary pulsars are decaying due to loss of energy in the form of gravitational radiation. The rate of this energy loss ("gravitational damping") can be measured, and since it depends on the speed of gravity, comparing the measured values to theory shows that the speed of gravity is equal to the speed of light to within 1%. However, within the PPN formalism, measuring the speed of gravity by comparing theoretical results with experimental results depends on the theory: use of a theory other than general relativity could in principle show a different speed, although the existence of gravitational damping at all implies that the speed cannot be infinite.

In September 2002, Sergei Kopeikin and Edward Fomalont announced that they had made an indirect measurement of the speed of gravity, using their data from a VLBI measurement of the retarded position of Jupiter on its orbit during Jupiter's transit across the line of sight of the bright radio quasar QSO J0842+1835. Kopeikin and Fomalont concluded that the speed of gravity is between 0.8 and 1.2 times the speed of light, which would be fully consistent with the theoretical prediction of general relativity that the speed of gravity is exactly the same as the speed of light.

Several physicists, including Clifford M. Will and Steve Carlip, have criticized these claims on the grounds that they have allegedly misinterpreted the results of their measurements. Notably, prior to the actual transit, Hideki Asada argued in a paper in the Astrophysical Journal Letters that the proposed experiment was essentially a roundabout confirmation of the speed of light rather than of the speed of gravity. However, Kopeikin and Fomalont continue to argue their case vigorously, and to defend their means of presenting the result at the AAS press conference, which was offered after the results of the Jovian experiment had been peer-reviewed by the experts of the AAS scientific organizing committee. In a later publication, which uses a bi-metric formalism that splits the space-time null cone in two (one for gravity and another for light), Kopeikin and Fomalont claimed that Asada's argument was theoretically unsound. The two null cones overlap in general relativity, which makes tracking the speed-of-gravity effects difficult and requires a special mathematical technique of gravitational retarded potentials, which was worked out by Kopeikin and co-authors but, in their view, was never properly employed by Asada or the other critics.

Stuart Samuel also argued that the experiment did not actually measure the speed of gravity because the effects were too small to have been measured. A response by Kopeikin and Fomalont challenges this opinion. It is important to understand that none of the participants in this controversy claim that general relativity is "wrong". Rather, the debate concerns whether or not Kopeikin and Fomalont have really provided yet another verification of one of its fundamental predictions.

A comprehensive review of the definition of the speed of gravity and its measurement with high-precision astrometric and other techniques appears in the textbook Relativistic Celestial Mechanics in the Solar System.

References

- Hartle, J.B. (2003).
Gravity: An Introduction to Einstein's General Relativity. Addison-Wesley. p. 332. ISBN 981-02-2749-3.
- Taylor, Edwin F. & Wheeler, John Archibald (1991). Spacetime Physics (2nd ed.). p. 12.
- Le Verrier, U. (1859). "Lettre de M. Le Verrier à M. Faye sur la théorie de Mercure et sur le mouvement du périhélie de cette planète" (in French). C. R. Acad. Sci. 49: 379–383.
- Laplace, P.S. (1805). A Treatise in Celestial Mechanics, Volume IV, Book X, Chapter VII. Translated by N. Bowditch (Chelsea, New York, 1966).
- Zenneck, J. (1903). "Gravitation". Encyklopädie der mathematischen Wissenschaften mit Einschluss ihrer Anwendungen (in German) 5: 25–67. doi:10.1007/978-3-663-16016-8_2.
- Roseveare, N.T. (1982). Mercury's Perihelion, from Le Verrier to Einstein. Oxford: University Press. ISBN 0-19-858174-2.
- Gerber, P. (1898). "Die räumliche und zeitliche Ausbreitung der Gravitation". Zeitschrift für mathematische Physik (in German) 43: 93–104.
- Zenneck (1903), pp. 49–51.
- "Gerber's Gravity". Mathpages. Retrieved 2 Dec 2010.
- Lorentz, H.A. (1900). "Considerations on Gravitation". Proc. Acad. Amsterdam 2: 559–574.
- Poincaré, H. (1908). "La dynamique de l'électron". Revue générale des sciences pures et appliquées 19: 386–402. Reprinted in Poincaré, Oeuvres, tome IX, pp. 551–586, and in Science and Method (1908).
- Poincaré, H. (1904). "L'état actuel et l'avenir de la physique mathématique". Bulletin des Sciences Mathématiques 28 (2): 302–324. English translation in Poincaré, H. (1905). "The Principles of Mathematical Physics". In Rogers, Howard J. (ed.), Congress of Arts and Science, Universal Exposition, St. Louis, 1904, vol. 1. Boston and New York: Houghton, Mifflin and Company. pp. 604–622. Reprinted in The Value of Science, Ch. 7–9.
- Poincaré, H. (1906). "Sur la dynamique de l'électron" (in French). Rendiconti del Circolo Matematico di Palermo 21 (1): 129–176. doi:10.1007/BF03013466.
- Walter, Scott (2007). "Breaking in the 4-vectors: the four-dimensional movement in gravitation, 1905–1910". In Renn, J. (ed.), The Genesis of General Relativity (Berlin: Springer) 3: 193–252.
- Will, Clifford & Gibbons, Gary (2006). "On the Multiple Deaths of Whitehead's Theory of Gravity". To be submitted to Studies in History and Philosophy of Modern Physics.
- Carlip, S. (2000). "Aberration and the Speed of Gravity". Phys. Lett. A 267 (2–3): 81–87. arXiv:gr-qc/9909087. Bibcode:2000PhLA..267...81C. doi:10.1016/S0375-9601(00)00101-8.
- Will, C. (2001). "The Confrontation between General Relativity and Experiment". Living Rev. Relativity 4: 4. arXiv:gr-qc/0103036. Bibcode:2001LRR.....4....4W.
- Fomalont, E.B. & Kopeikin, S.M. (2003). "The Measurement of the Light Deflection from Jupiter: Experimental Results". The Astrophysical Journal 598 (1): 704–711. arXiv:astro-ph/0302294. Bibcode:2003ApJ...598..704F. doi:10.1086/378785.
- Asada, H. (2002). "The Light-cone Effect on the Shapiro Time Delay". The Astrophysical Journal Letters 574 (1): L69. arXiv:astro-ph/0206266. Bibcode:2002ApJ...574L..69A. doi:10.1086/342369.
- Kopeikin, S.M. & Fomalont, E.B. (2006). "Aberration and the Fundamental Speed of Gravity in the Jovian Deflection Experiment". Foundations of Physics 36 (8): 1244–1285. arXiv:astro-ph/0311063. Bibcode:2006FoPh...36.1244K. doi:10.1007/s10701-006-9059-7.
- Kopeikin, S.M. & Schaefer, G. (1999).
"Lorentz covariant theory of light propagation in gravitational fields of arbitrary-moving bodies". Physical Review D 60 (12): id. 124002 [44 pages]. arXiv:gr-qc/9902030. Bibcode:1999PhRvD..60l4002K. doi:10.1103/PhysRevD.60.124002. - Kopeikin S.M. & Mashhoon B. (2002). "Gravitomagnetic effects in the propagation of electromagnetic waves in variable gravitational fields of arbitrary-moving and spinning bodies". Physical Review D 65 (6): id. 064025 [20 pages]. arXiv:gr-qc/0110101. Bibcode:2002PhRvD..65f4025K. doi:10.1103/PhysRevD.65.064025. - Kopeikin, Sergei & Fomalont, Edward (2006). "On the speed of gravity and relativistic v/c corrections to the Shapiro time delay". Physics Letters A 355 (3): 163–166. arXiv:gr-qc/0310065. Bibcode:2006PhLA..355..163K. doi:10.1016/j.physleta.2006.02.028. - S. Kopeikin, M. Efroimsky and G. Kaplan Relativistic Celestial Mechanics in the Solar System, Wiley-VCH, 2011. XXXII, 860 Pages, 65 Fig., 6 Tab. - Kopeikin, Sergei M. (2001). "Testing Relativistic Effect of Propagation of Gravity by Very-Long Baseline Interferometry". Astrophys. J. 556 (1): L1–L6. arXiv:gr-qc/0105060. Bibcode:2001ApJ...556L...1K. doi:10.1086/322872. - Asada, Hidecki (2002). "The Light-cone Effect on the Shapiro Time Delay". Astrophys. J. 574 (1): L69. arXiv:astro-ph/0206266. Bibcode:2002ApJ...574L..69A. doi:10.1086/342369. - Will, Clifford M. (2003). "Propagation Speed of Gravity and the Relativistic Time Delay". Astrophys. J. 590 (2): 683–690. arXiv:astro-ph/0301145. Bibcode:2003ApJ...590..683W. doi:10.1086/375164. - Fomalont, E. B. & Kopeikin, Sergei M. (2003). "The Measurement of the Light Deflection from Jupiter: Experimental Results". Astrophys. J. 598 (1): 704–711. arXiv:astro-ph/0302294. Bibcode:2003ApJ...598..704F. doi:10.1086/378785. - Kopeikin, Sergei M. (Feb 21, 2003). "The Measurement of the Light Deflection from Jupiter: Theoretical Interpretation". arXiv:astro-ph/0302462. - Kopeikin, Sergei M. (2003). "The Post-Newtonian Treatment of the VLBI Experiment on September 8, 2002". Phys. Lett. A 312 (3–4): 147–157. arXiv:gr-qc/0212121. Bibcode:2003PhLA..312..147K. doi:10.1016/S0375-9601(03)00613-3. - Faber, Joshua A. (Mar 14, 2003). "The speed of gravity has not been measured from time delays". arXiv:astro-ph/0303346. - Kopeikin, Sergei M. (2004). "The Speed of Gravity in General Relativity and Theoretical Interpretation of the Jovian Deflection Experiment". Classical and Quantum Gravity 21 (13): 3251–3286. arXiv:gr-qc/0310059. Bibcode:2004CQGra..21.3251K. doi:10.1088/0264-9381/21/13/010. - Samuel, Stuart (2003). "On the Speed of Gravity and the v/c Corrections to the Shapiro Time Delay". Phys. Rev. Lett. 90 (23): 231101. arXiv:astro-ph/0304006. Bibcode:2003PhRvL..90w1101S. doi:10.1103/PhysRevLett.90.231101. PMID 12857246. - Kopeikin, Sergei & Fomalont, Edward (2006). "On the speed of gravity and relativistic v/c corrections to the Shapiro time delay". Physics Letters A 355 (3): 163–166. arXiv:gr-qc/0310065. Bibcode:2006PhLA..355..163K. doi:10.1016/j.physleta.2006.02.028. - Hideki, Asada (Aug 20, 2003). "Comments on "Measuring the Gravity Speed by VLBI"". arXiv:astro-ph/0308343. - Kopeikin, Sergei & Fomalont, Edward (2006). "Aberration and the Fundamental Speed of Gravity in the Jovian Deflection Experiment". Foundations of Physics 36 (8): 1244–1285. arXiv:astro-ph/0311063. Bibcode:2006FoPh...36.1244K. doi:10.1007/s10701-006-9059-7. - Carlip, Steven (2004). "Model-Dependence of Shapiro Time Delay and the "Speed of Gravity/Speed of Light" Controversy". Class. Quant. Grav. 
21 (15): 3803–3812. arXiv:gr-qc/0403060. Bibcode:2004CQGra..21.3803C. doi:10.1088/0264-9381/21/15/011.
- Kopeikin, S.M. (2005). "Comment on 'Model-dependence of Shapiro time delay and the "speed of gravity/speed of light" controversy'". Class. Quant. Grav. 22 (23): 5181–5186. arXiv:gr-qc/0510048. Bibcode:2005CQGra..22.5181K. doi:10.1088/0264-9381/22/23/N01.
- Pascual-Sánchez, J.-F. (2004). "Speed of gravity and gravitomagnetism". Int. J. Mod. Phys. D 13 (10): 2345–2350. arXiv:gr-qc/0405123. Bibcode:2004IJMPD..13.2345P. doi:10.1142/S0218271804006425.
- Kopeikin, S.M. (2006). "Gravitomagnetism and the speed of gravity". Int. J. Mod. Phys. D 15 (3): 305–320. arXiv:gr-qc/0507001. Bibcode:2006IJMPD..15..305K. doi:10.1142/S0218271806007663.
- Samuel, S. (2004). "On the Speed of Gravity and the Jupiter/Quasar Measurement". Int. J. Mod. Phys. D 13 (9): 1753–1770. arXiv:astro-ph/0412401. Bibcode:2004IJMPD..13.1753S. doi:10.1142/S0218271804005900.
- Kopeikin, S.M. (2006). "Comments on the paper by S. Samuel "On the speed of gravity and the Jupiter/Quasar measurement"". Int. J. Mod. Phys. D 15 (2): 273–288. arXiv:gr-qc/0501001. Bibcode:2006IJMPD..15..273K. doi:10.1142/S021827180600853X.
- Kopeikin, S.M. & Fomalont, E.B. (2007). "Gravimagnetism, Causality, and Aberration of Gravity in the Gravitational Light-Ray Deflection Experiments". General Relativity and Gravitation 39 (10): 1583–1624. arXiv:gr-qc/0510077. Bibcode:2007GReGr..39.1583K. doi:10.1007/s10714-007-0483-6.
- Kopeikin, S.M. & Fomalont, E.B. (2008). "Radio interferometric tests of general relativity". In A Giant Step: from Milli- to Micro-arcsecond Astrometry, Proceedings of the International Astronomical Union, IAU Symposium 248: 383–386. Bibcode:2008IAUS..248..383F. doi:10.1017/S1743921308019613.

External links

- Does Gravity Travel at the Speed of Light? in The Physics FAQ.
- Measuring the Speed of Gravity at MathPages.
- Hazel Muir, "First speed of gravity measurement revealed", a New Scientist article on Kopeikin's original announcement.
- Clifford M. Will, "Has the Speed of Gravity Been Measured?".
Auditory localization or sound localization is a listener's ability to identify the location or origin of a detected sound. There are two general classes of cues for sound localization: binaural cues and monaural cues.

Binaural cues

Binaural localization relies on the comparison of auditory input from two separate detectors. Accordingly, most auditory systems feature two ears, one on each side of the head. The primary biological binaural cue is the split-second delay between the time when sound from a single source reaches the near ear and when it reaches the far ear. This is technically referred to as the "interaural time difference" (ITD); for a human head its maximum value is about 0.63 ms. Another binaural cue, less significant in ground-dwelling animals, is the reduction in loudness when the sound reaches the far ear. This frequency-dependent cue is referred to as the "interaural level difference" (ILD), also called the "interaural amplitude difference" (IAD) or "interaural intensity difference" (IID); the eardrums themselves are sensitive only to differences in sound pressure level.

Note that these cues will only aid in localizing the sound source's azimuth (the angle between the source and the sagittal plane), not its elevation (the angle between the source and the horizontal plane through both ears), unless the two detectors are positioned at different heights in addition to being separated in the horizontal plane. In animals, however, rough elevation information is gained simply by tilting the head, provided that the sound lasts long enough to complete the movement. This explains the innate behavior of cocking the head to one side when trying to localize a sound precisely. To get instantaneous localization in more than two dimensions from time-difference or amplitude-difference cues requires more than two detectors.

However, many animals show quite complex variations in the degree of attenuation a sound receives in travelling from the source to the eardrum: the frequency-dependent attenuation varies with both azimuthal angle and elevation. These variations can be summarised in the head-related transfer function, or HRTF. As a result, where the sound is wideband (that is, has its energy spread over the audible spectrum), it is possible for an animal to estimate both azimuth and elevation simultaneously without tilting its head. Additional information can be found by moving the head, so that the HRTF for both ears changes in a way known (implicitly!) by the animal.

In vertebrates, inter-aural time differences are known to be calculated in the superior olivary nucleus of the brainstem. According to Jeffress, this calculation relies on delay lines: neurons in the superior olive which accept innervation from each ear, with different connecting axon lengths. Some cells are more directly connected to one ear than the other, and are thus specific for a particular inter-aural time difference. This theory is equivalent to the mathematical procedure of cross-correlation. However, because Jeffress' theory is unable to account for the precedence effect, in which only the first of multiple identical sounds is used to determine the sounds' location (thus avoiding confusion caused by echoes), it cannot be entirely correct, as pointed out by Gaskell.
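To illustrate the claim that the Jeffress delay-line scheme is computationally equivalent to cross-correlation, here is a minimal sketch in Python. The sampling rate, the synthetic wideband signal, and the 0.4 ms delay are illustrative assumptions, not values from the text; a real system would work with measured ear signals.

```python
import numpy as np

# Estimate an interaural time difference (ITD) by cross-correlating the
# two ear signals -- the operation the Jeffress model is equivalent to.
fs = 48_000                        # sample rate in Hz (assumed)
true_itd = 0.4e-3                  # 0.4 ms, inside the ~0.63 ms human maximum
d = int(round(true_itd * fs))      # delay expressed in samples

rng = np.random.default_rng(0)
source = rng.standard_normal(fs // 10)      # 100 ms of wideband noise

left = source                                               # near ear
right = np.concatenate([np.zeros(d), source])[:len(source)] # far ear, delayed

# Cross-correlate and pick the lag with the highest correlation.
lags = np.arange(-len(source) + 1, len(source))
xcorr = np.correlate(right, left, mode="full")
estimated_itd = lags[np.argmax(xcorr)] / fs

print(f"estimated ITD: {estimated_itd * 1e3:.2f} ms "
      f"(true: {true_itd * 1e3:.2f} ms)")
```

In the Jeffress picture, each candidate lag corresponds to one delay-line neuron, and picking the peak of the correlation corresponds to the coincidence detector that fires most strongly.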
The tiny parasitic fly Ormia ochracea has become a model organism in sound localization experiments because of its unique ear. The animal is too small for the time difference of sound arriving at the two ears to be computed in the usual way, yet it can determine the direction of sound sources with exquisite precision. The tympanic membranes of opposite ears are directly connected mechanically, allowing resolution of nanosecond time differences and requiring a new neural coding strategy. Ho showed that the coupled-eardrum system in frogs can produce increased interaural vibration disparities when only small arrival-time and intensity differences are available at the animal's head. Efforts to build directional microphones based on the coupled-eardrum structure are underway.

Monaural (filtering) cues

Monaural localization mostly depends on the filtering effects of external structures. In advanced auditory systems, these external filters include the head, shoulders, torso, and outer ear or "pinna", and their combined effect can be summarized as the head-related transfer function. Sounds are frequency-filtered in a way that depends on the angle from which they strike the various external filters. The most significant filtering cue for biological sound localization is the pinna notch: a notch-filtering effect resulting from destructive interference of waves reflected from the outer ear. The frequency that is selectively notch-filtered depends on the angle from which the sound strikes the outer ear. Instantaneous localization of sound source elevation in advanced systems primarily depends on the pinna notch and other head-related filtering. These monaural effects also provide azimuth information, but it is inferior to that gained from binaural cues. To enhance filtering information, many animals have large, specially shaped outer ears. Many also have the ability to turn the outer ear at will, which allows for better sound localization and also better sound detection. Bats and barn owls are paragons of monaural localization in the animal kingdom, and have thus become model organisms. Processing of head-related transfer functions for biological sound localization occurs in the auditory cortex.

Distance cues

Neither inter-aural time differences nor monaural filtering information provides good distance localization. Distance can theoretically be approximated through inter-aural amplitude differences or by comparing the relative head-related filtering in each ear: a combination of binaural and filtering information. The most direct cue to distance is sound amplitude, which decays with increasing distance. However, this is not a reliable cue, because in general it is not known how strong the sound source is. In the case of familiar sounds, such as speech, there is implicit knowledge of how strong the sound source should be, which enables a rough distance judgment to be made. In general, humans are best at judging sound source azimuth, then elevation, and worst at judging distance. Source distance is qualitatively obvious to a human observer when a sound is extremely close (the mosquito-in-the-ear effect), or when sound is echoed by large structures in the environment (such as walls and ceilings). Such echoes provide reasonable cues to the distance of a sound source, in particular because the strength of echoes does not depend on the distance of the source, while the strength of the sound that arrives directly from the sound source becomes weaker with distance.
As a result, the ratio of direct-to-echo strength alters the quality of the sound in a way to which humans are sensitive. In this way consistent, although not very accurate, distance judgments are possible. This method generally fails outdoors, due to a lack of echoes, although some outdoor environments, such as mountains, also generate strong, discrete echoes. Outdoors, distance evaluation is instead largely based on the received timbre of the sound: high-frequency components are attenuated more strongly by the air as sound travels, so distant sounds appear duller than normal (lacking in treble).

See also

- Animal echolocation
- Auditory acuity
- Auditory perception
- Coincidence detection in neurobiology
- Head-related transfer function
- Head shadow
- Human echolocation
- Rayleigh's duplex theory

References

- Jeffress, L.A. (1948). "A place theory of sound localization". Journal of Comparative and Physiological Psychology 41: 35–39.
- Gaskell, H. (1983). "The precedence effect". Hearing Research 11: 277–303.
- Miles, R.N., Robert, D. & Hoy, R.R. (1995). "Mechanically coupled ears for directional hearing in the parasitoid fly Ormia ochracea". J. Acoust. Soc. Am. 98 (6): 3059–70. PMID 8550933. doi:10.1121/1.413830.
- Robert, D., Miles, R.N. & Hoy, R.R. (1996). "Directional hearing by mechanical coupling in the parasitoid fly Ormia ochracea". J. Comp. Physiol. A 179 (1): 29–44. PMID 8965258. doi:10.1007/BF00193432.
- Mason, A.C., Oshinsky, M.L. & Hoy, R.R. (2001). "Hyperacute directional hearing in a microscale auditory system". Nature 410 (6829): 686–90. PMID 11287954. doi:10.1038/35070564.
- Ho, C.C. & Narins, P.M. (2006). "Directionality of the pressure-difference receiver ears in the northern leopard frog, Rana pipiens pipiens". J. Comp. Physiol. A 192 (4): 417–29.

External links

- Collection of references about sound localization
- Scientific articles about the sound localization abilities of different species of mammals
Pre-Columbian art is the visual art of the indigenous peoples of the Caribbean and of North, Central, and South America up to the late 15th and early 16th centuries, the period marked by Christopher Columbus's arrival in the Americas. Pre-Columbian art thrived throughout the Americas from at least 13,000 BCE to 1500 CE. Many Pre-Columbian cultures did not have writing systems, so visual art expressed the cosmologies, world views, religion, and philosophy of these cultures, as well as serving as mnemonic devices.

During the period before and after European exploration and settlement of the Americas (including North America, Central America, South America, and the islands of the Caribbean: the Bahamas, the West Indies, the Antilles, the Lesser Antilles, and other island groups), indigenous cultures produced a wide variety of visual arts, including painting on textiles, hides, rock and cave surfaces, bodies (especially faces), ceramics, and architectural features (including interior murals and wood panels), among other available surfaces. Unfortunately, many perishable surfaces, such as woven textiles, typically have not been preserved, but Pre-Columbian painting on ceramics, walls, and rocks has survived more frequently.

Mesoamerica and Central America

The Mesoamerican cultures are generally divided into three periods (see Mesoamerican chronology):
- Pre-classic (up to 200 CE)
- Classic (ca. 200–900 CE)
- Post-classic (ca. 900 to 1580 CE)

The Pre-classic period was dominated by the highly developed Olmec civilization, which flourished around 1200–400 BCE. The Olmecs produced jade figurines and created heavy-featured colossal heads, up to 2 meters (6.6 ft) high, that still stand mysteriously in the landscape. The Mesoamerican tradition of building large ceremonial centres appears to have begun under the Olmecs.

During the Classic period the dominant civilization was the Maya. Like the Mississippian peoples of North America, such as the Choctaw and Natchez, the Maya organized themselves into large agricultural communities. They practised their own form of hieroglyphic writing and advanced astronomy. Mayan art consequently focuses on rain, agriculture, and fertility, expressing these images mainly in relief and surface decoration, as well as in some sculpture. Glyphs and stylized figures were used to decorate architecture such as the pyramid temple of Chichén Itzá. Murals dating from about 750 CE were discovered when the city of Bonampak was excavated in 1946.

The Post-classic period (10th–12th centuries) was dominated by the Toltecs, who made colossal, block-like sculptures such as those employed as free-standing columns at Tula, Mexico. The Mixtecs developed a style of painting known as Mixtec-Puebla, as seen in their murals and codices (manuscripts), in which all available space is covered by flat figures in geometric designs. The Aztec culture in Mexico produced some dramatically expressive examples of Aztec art, such as the decorated skulls of captives and stone sculpture, of which Tlazolteotl (Woods Bliss Collection, Washington), a goddess in childbirth, is a good example.

South America

In the Andean region of South America (modern-day Peru), the Chavín civilization flourished from around 1000 BCE to 300 BCE. The Chavín produced small-scale pottery, often human in shape but with animal features such as bird feet, reptilian eyes, or feline fangs. Representations of the jaguar are a common theme in Chavín art.
The Chavín culture is also noted for the spectacular murals and carvings found at its main religious site of Chavín de Huántar; these works include the Raimondi Stela, the Lanzón, and the Tello Obelisk.

Contemporary with the Chavín was the Paracas culture of the southern coast of Peru, most noted today for its elaborate textiles. These remarkable productions, some of which could measure ninety feet long, were primarily used as burial wraps for Paracas mummy bundles. Paracas art was greatly influenced by the Chavín cult, and the two styles share many common motifs.

On the south coast, the Paracas were immediately succeeded by a flowering of artistic production around the Nazca river valley. The Nazca period is divided into eight ceramic phases, each one depicting increasingly abstract animal and human motifs. These phases range from Phase 1, beginning around 200 CE, to Phase 8, which declined in the middle of the eighth century. The Nazca people are most famous for the Nazca lines, though they are also regarded as having made some of the most beautiful polychrome ceramics in the Andes.

On the north coast, the Moche succeeded the Chavín. The Moche flourished about 100–800 CE and were among the best artisans of the Pre-Columbian world, producing delightful portrait vases (Moche ware) which, while realistic, are steeped in religious references, the significance of which is now lost. For the Moche, ceramics functioned as a primary way of disseminating information and cultural ideas. The Moche made ceramic vessels that depicted and re-created a plethora of subjects: fruits, plants, animals, human portraits, gods, and demons, as well as graphic depictions of sexual acts. The Moche are also noted for their metallurgy (such as that found in the tomb of the Lord of Sipán), as well as for their architectural prowess, exemplified by the Huaca de la Luna and the Huaca del Sol in the Moche River valley.

Following the decline of the Moche, two large co-existing empires emerged in the Andes region. In the north arose the Wari (or Huari) Empire, based at its capital city of the same name. The Wari are noted for their accomplishments in stone architecture and sculpture, but their greatest proficiency was in ceramics. The Wari produced magnificent large ceramics, many of which depicted images of the Staff God, an important deity in the Andes which during the Wari period had become specifically associated with the Lake Titicaca region on the modern Peru-Bolivia border. Similarly, the Wari's contemporaries of the Tiwanaku empire, also centered on a capital city of the same name, held the Staff God in similar esteem. Tiwanaku's empire began to expand out of the Titicaca basin around 400 CE, but its "Classic Period" of artistic production and political power occurred between 375 and 700 CE. Tiwanaku is best known today for its magnificent imperial city on the southern side of Lake Titicaca, in modern-day Bolivia. Especially famous is the Gate of the Sun, which depicts a large image of the Staff God flanked by other religious symbols that may have functioned as a calendar.

Following the decline of the Wari Empire in the late first millennium, the Chimú people, whose kingdom was known as Chimor, began to build their empire on the north and central coasts of Peru. The Chimú were preceded by a simple ceramic style known as Sicán (700–900 CE), which became increasingly decorative until it became recognizable as Chimú in the early second millennium.
The Chimú produced excellent portrait and decorative works in metal, notably gold and especially silver. The Chimú are also noted for their featherwork, having produced many standards and headdresses made of a variety of tropical feathers, which were fashioned into bird and fish designs, both of which were held in high esteem by the Chimú. The Chimú are best known for their magnificent palatial complex of Chan Chan, just south of modern-day Trujillo, Peru, and now a UNESCO World Heritage Site. The Chimú went into decline very quickly in the mid-15th century, due to outside pressures and conquest by the expanding Inca Empire.

At the time of the Spanish conquest, the Inca Empire (Tawantinsuyu in Quechua, the "Land of the Four Quarters") was the largest and wealthiest empire in the Americas, and this wealth was reflected in its art. Most Inca sculpture was melted down by the invading Spanish, so most of what remains today is in the form of architecture, textiles, and ceramics. The Inca valued gold above all other metals and equated it with the sun god Inti. Some Inca buildings in the capital of Cusco were literally covered in gold, and most contained many gold and silver sculptures. Most Inca art, however, was abstract in nature. Inca ceramics were primarily large vessels covered in geometric designs. Inca tunics and textiles contained similar motifs, often checkerboard patterns reserved for the Inca elite and the Inca army.

Today, given how little Inca gold and silver sculpture survives and the abstract character of the remaining art, the Inca are best known for their architecture, specifically the complex of Machu Picchu just northwest of Cusco. Inca architecture makes use of large stone blocks, each one cut specifically to fit around the other blocks in a wall. These stones were cut with such precision that the Incas did not need to use mortar to hold their buildings together. Even without mortar, Inca buildings still stand today; they form the foundations of many modern-day buildings in Cusco and the surrounding area. The Incas produced thousands of large stone structures, among them forts, temples, and palaces, even though the Inca Empire lasted for only ninety-five years.

- Latin American art
- Maya art
- Native American art
- Olmec art
- Painting in the Americas before Colonization
- List of Stone Age art
The advent of computers and the internet has revolutionized our lives in countless ways, enabling us to connect with people around the world, access vast amounts of information, and perform tasks that were once unimaginable. This article explores the transformative power of computers and the internet in unlocking the digital world. By examining a hypothetical scenario where an individual gains access to these technologies for the first time, we can gain insights into their potential impact on various aspects of life.

Imagine a young student living in a remote village without reliable access to educational resources or communication channels beyond his immediate surroundings. By sheer luck, this student is provided with a computer connected to the internet. Suddenly, he finds himself exposed to a wealth of knowledge at his fingertips: virtual libraries offering educational materials, online courses delivered by prestigious universities worldwide, and social media platforms connecting him with like-minded individuals across continents. In this scenario, it becomes evident that computers and the internet have emerged as powerful tools capable of bridging geographical barriers and opening up new possibilities for individuals regardless of their location or socioeconomic background.

The Importance of Networking Protocols

Imagine a scenario where you are browsing the internet, trying to access a website that contains important information. You click on the link and wait for the page to load, but nothing happens. Frustrated, you refresh the page multiple times, hoping it will work. The reason behind this issue may well lie in networking protocols.

Networking protocols play a critical role in facilitating communication between devices connected to a network. They establish rules and guidelines that ensure data is transmitted accurately and efficiently across different devices and networks. One such protocol is the Internet Protocol (IP), which assigns each device a unique address (its IP address) and defines how packets of data are routed between those addresses.

To understand why networking protocols are vital, consider these key points:
- Reliability: By following standard protocols, devices can exchange information reliably without any loss or corruption during transmission.
- Interoperability: Standardized protocols allow devices from different manufacturers and operating systems to communicate seamlessly with one another.
- Scalability: With an increasing number of devices connected to networks worldwide, using standardized protocols ensures efficient scaling without compromising performance.
- Security: Protocols like Transport Layer Security (TLS) provide encryption and authentication mechanisms, ensuring secure communication over untrusted networks.

|Protocol||Full name||Example uses|
|TCP/IP||Transmission Control Protocol/Internet Protocol||HTTP, FTP|
|UDP||User Datagram Protocol||DNS|
|ICMP||Internet Control Message Protocol||Ping|

In summary, networking protocols are essential for maintaining reliable, interoperable, scalable, and secure communication between devices within digital networks. Understanding their significance allows us to appreciate how they contribute to unlocking the full potential of our interconnected world. Transitioning into the next section about "Ensuring Cybersecurity in the Digital Age," we realize that effective implementation of networking protocols is just the first step in safeguarding our digital realm.
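Before turning to that, the error-detection idea mentioned above can be made concrete. The following is a minimal sketch of an RFC 1071 style Internet checksum (the ones'-complement sum of 16-bit words used by IP, TCP, and UDP headers); it is a simplified illustration with an assumed example payload, not production code.

```python
def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit words (simplified RFC 1071 sketch)."""
    if len(data) % 2:                              # pad odd-length input
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # big-endian 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF


packet = b"example payload!"                       # even length keeps the
checksum = internet_checksum(packet)               # word alignment simple
print(f"checksum: 0x{checksum:04X}")

# Receiver-side check: summing the data together with its checksum must
# come out to zero after the final complement; otherwise the data was
# corrupted in transit and a retransmission can be requested.
assert internet_checksum(packet + checksum.to_bytes(2, "big")) == 0
```

The same fold-and-complement pattern appears, with variations, throughout the protocol stack; stronger schemes such as CRCs follow the same principle of sending a small fingerprint alongside the data.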
Ensuring Cybersecurity in the Digital Age Imagine a scenario where you are sitting in your living room and browsing the internet on your laptop. You click on a link to access a website, and within seconds, the webpage loads seamlessly. Have you ever wondered how this happens? Behind the scenes, networking protocols play a crucial role in ensuring that data is transmitted efficiently across networks, enabling us to unlock the vast digital world. Networking protocols serve as sets of rules and procedures that govern how devices communicate and exchange information over computer networks. One such example is the Internet Protocol (IP), which determines how data packets are addressed and routed across different network nodes. By adhering to these protocols, computers can establish connections, transmit data reliably, and enable seamless communication between users worldwide. To comprehend the significance of networking protocols further, let’s explore some key aspects: Efficient Data Transmission: Networking protocols optimize data transmission by breaking it into smaller packets that can be transmitted independently across diverse network paths. This approach ensures efficient utilization of network resources while minimizing delays during transmission. Error Detection and Correction: Protocols incorporate error detection mechanisms to identify corrupted or lost data packets during transmission. These errors can occur due to various factors like noise interference or network congestion. Through techniques such as checksums or retransmission requests, networking protocols ensure accurate delivery of data. Interoperability: With numerous devices connected to networks globally, interoperability becomes vital for seamless communication. Networking protocols define standardized formats for data representation and exchange, allowing devices from different manufacturers or operating systems to interact effectively. Scalability: As technology advances rapidly, networks need to accommodate an increasing number of devices and users without compromising performance. Networking protocols provide scalability features that support expanding networks effortlessly while maintaining stability and reliability. These essential aspects highlight just a few benefits offered by networking protocols in unlocking the digital world we rely on. By facilitating efficient data transmission, error detection and correction, interoperability, and scalability, these protocols form the backbone of modern network infrastructures. In the subsequent section on “The Evolution of Web Development,” we will delve into how web development has transformed over time to meet the growing demands and possibilities that networking protocols have unlocked. The Evolution of Web Development As technology advances and cybersecurity measures become more sophisticated, it is crucial to understand how web development has evolved to meet the demands of an increasingly interconnected world. By exploring the rapidly changing landscape of web development, we can gain insights into its impact on society and anticipate future trends. One such example that highlights this evolution is the transformation of static websites into dynamic platforms for interactive user experiences. The shift towards dynamic web development has revolutionized our online interactions by enabling real-time updates and personalized content delivery. 
Websites are no longer mere digital brochures but have transformed into immersive environments where users actively engage with information. Take, for instance, a news website that tailors its articles based on readers' preferences, or a social media platform that seamlessly integrates multimedia content shared by friends around the globe. These examples demonstrate how dynamic web development enhances user engagement and fosters meaningful connections within digital communities.

To further comprehend the significance of these advancements, let us consider some key factors shaping modern-day web development:
- User Experience (UX): With an emphasis on intuitive interfaces and responsive design, developers strive to create seamless experiences across different devices.
- Accessibility: Ensuring equal access to online resources regardless of physical or cognitive impairments promotes inclusivity and diversity.
- Performance Optimization: Optimizing load times and minimizing latency contributes to improved user satisfaction and overall website performance.
- Security Measures: Implementing robust security protocols safeguards sensitive data against potential cyber threats.

|Factors Shaping Modern-Day Web Development|
|User Experience (UX): enhancing usability through intuitive interfaces|
|Creating engaging visual designs|
|Incorporating interactive elements to enhance user engagement|

By keeping these factors in mind, developers strive to create websites that not only deliver valuable content but also provide a seamless and secure online experience. This ongoing evolution of web development contributes significantly to the digital world's growth and offers immense potential for future innovation.

Understanding how web development has evolved is crucial as we delve into exploring the potential of cloud computing, which leverages this dynamic environment to unlock new possibilities. By harnessing the power of distributed servers and virtual resources, cloud computing revolutionizes data storage, processing capabilities, and collaborative tools, enabling us to transcend traditional limitations.

Exploring the Potential of Cloud Computing

As web development continues to advance, it is crucial to explore the potential of cloud computing in order to fully unlock the digital world. By harnessing cloud-based services and technologies, businesses can revolutionize their operations and achieve unprecedented scalability, flexibility, and cost-efficiency. To illustrate this concept, let us consider a hypothetical case study involving a growing e-commerce company.

In our hypothetical scenario, an e-commerce company experiences exponential growth as its customer base expands rapidly. As demand for its products surges, traditional on-premise infrastructure struggles to handle increased website traffic and maintain a seamless user experience. Recognizing these challenges, the company decides to adopt cloud computing solutions to overcome these limitations and propel the business forward.

To demonstrate how cloud computing can benefit organizations like our hypothetical e-commerce company, we present four key advantages:
- Scalability: Through cloud computing platforms such as Amazon Web Services (AWS) or Microsoft Azure, businesses can easily scale their resources up or down based on fluctuating demands.
  This ensures that websites remain accessible even during high-traffic periods while optimizing costs, since companies pay only for what they use.
- Flexibility: Cloud computing allows companies to access data and applications remotely from any location or device with internet connectivity. This enhances collaboration among employees working remotely or across different office branches while providing uninterrupted access to critical business resources.
- Cost-Efficiency: Adopting cloud-based systems eliminates the need for significant upfront investment in hardware infrastructure and software licensing fees. Companies can leverage pay-as-you-go pricing models offered by cloud service providers, reducing IT expenses.
- Data Security: Leading cloud providers employ advanced security measures to safeguard sensitive data. With robust encryption protocols, regular backups, and disaster recovery plans in place, businesses can mitigate the risks of data loss or unauthorized access.

| Traditional Infrastructure | Cloud Computing |
| --- | --- |
| High upfront costs | Lower capital expenditure |
| Limited scalability | Elastic scaling based on demand |
| Physical location dependency | Remote accessibility |
| Manual backup procedures | Automated data backups and disaster recovery |

In conclusion, embracing cloud computing holds immense potential for businesses seeking to unlock the full power of the digital world. The hypothetical case study highlights how organizations can achieve scalability, flexibility, cost-efficiency, and enhanced data security by leveraging cloud-based solutions. By harnessing these advantages, companies can streamline operations and focus on core business objectives while adapting swiftly to ever-changing market dynamics. With cloud computing enabling seamless management of vast amounts of data, organizations can now turn their attention toward extracting valuable insights through advanced data analysis techniques.

Data Analysis: Extracting Insights from Big Data

Having explored the potential of cloud computing, we now turn our attention to another powerful tool in the digital world: data analysis. By extracting insights from big data, organizations can make informed decisions and gain a competitive edge in today's rapidly evolving landscape. Data analysis involves examining large datasets to uncover patterns, correlations, and trends that may otherwise remain hidden. To illustrate its significance, consider a hypothetical retail company. By analyzing customer purchase history and demographic information, this company discovered that customers aged 18-25 were more likely to buy certain products during specific times of the year. Armed with this knowledge, it could strategically target its marketing toward this age group at those particular times, increasing sales and customer satisfaction. To fully leverage the power of data analysis, organizations employ several key elements:

- Data Collection: Gathering relevant data from diverse sources such as online platforms, social media networks, or internal databases.
- Data Cleansing: Ensuring accuracy and consistency by removing duplicates, correcting errors, and handling missing values.
- Statistical Analysis: Applying mathematical models and algorithms to identify patterns within the dataset.
- Visualization: Presenting findings through charts, graphs, or interactive dashboards for better understanding and decision-making.

Embracing data analysis has proven transformative across industries. In healthcare, for example, large-scale analysis of patient records has supported significant breakthroughs in disease prevention and treatment strategies. Similarly, financial institutions have used advanced analytics to detect fraudulent transactions swiftly while improving risk management practices. As we move to the next section, "Web Finance: Revolutionizing the Financial Industry," it becomes evident that harnessing the power of technology continues to reshape traditional sectors like never before.

Web Finance: Revolutionizing the Financial Industry

Having explored the realm of data analysis and its potential for extracting insights from big data, we turn to another groundbreaking aspect of computers and the internet: web finance, which has revolutionized the financial industry, reshaping traditional practices and opening new avenues for individuals and businesses worldwide. Imagine a world where financial transactions are no longer limited by physical boundaries or constrained by cumbersome paperwork. This is precisely what web finance, also known as online finance or digital finance, offers. Through the integration of technology with financial services, web finance provides efficient solutions that empower users to manage their finances anytime and anywhere. One compelling example is the rise of peer-to-peer lending platforms like LendingClub, which connect borrowers directly with lenders, without intermediaries (a small sketch of the loan arithmetic behind such platforms appears at the end of this section).

Advantages of Web Finance

The advent of web finance has brought numerous benefits to both consumers and businesses:

- Accessibility: Anyone with an internet connection can conveniently use a range of financial services from home or the office.
- Cost-effectiveness: Online banking eliminates the overhead costs of physical branches, enabling institutions to offer competitive interest rates on loans and higher returns on savings accounts.
- Enhanced security measures: Advanced encryption technologies safeguard sensitive information during online transactions, giving users peace of mind.
- Streamlined processes: Tasks such as applying for loans or managing investments have become simpler through user-friendly interfaces and automated systems.

| Aspect | Traditional Banking | Web Finance |
| --- | --- | --- |
| Accessible Locations | Limited branches | Available anywhere |
| Customer Experience | In-person interactions | Online convenience |

Embracing the Future: The rise of web finance has disrupted traditional financial structures, transforming how individuals and businesses interact with their finances. As technology continues to advance, we can expect further innovation in this field. Understanding the fundamentals of networking provides a solid foundation for navigating this ever-evolving landscape, enabling us to make informed decisions in the digital world. With web finance shaping new possibilities in the financial industry, it is crucial to grasp the underlying mechanisms that enable these advancements.
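As promised above, here is a minimal sketch of the fixed-rate amortization arithmetic behind the loan quotes such platforms display; the principal, rate, and term below are hypothetical examples, not figures from LendingClub or any other service.

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate amortization: M = P*r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12  # monthly interest rate
    n = years * 12        # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical peer-to-peer loan: $10,000 at 8% APR over 3 years
print(round(monthly_payment(10_000, 0.08, 3), 2))  # -> 313.36
```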
Understanding the Fundamentals of Networking

Having explored how web finance has revolutionized the financial industry, we now turn our attention to the fundamentals of networking. To illustrate their significance, consider a hypothetical scenario in which a multinational corporation depends on efficient network connectivity between its offices in different countries. Networking serves as the backbone of modern communication systems and plays a crucial role in connecting devices, facilitating data transfer, and enabling seamless access to online resources. In our hypothetical scenario, efficient networking ensures that employees across the various branches can collaborate effectively and share information instantaneously. Key network types to understand include:

- Local Area Networks (LANs): These networks connect devices within a limited geographical area such as an office or building.
- Wide Area Networks (WANs): These span large distances and interconnect multiple LANs using routers or other networking equipment.
- Virtual Private Networks (VPNs): By securely extending private networks over public infrastructure, VPNs enable remote access and safeguard data transmission.

Effective communication relies on standardized protocols that govern data exchange between connected devices. A central example is TCP/IP (Transmission Control Protocol/Internet Protocol), which defines how data packets are addressed, routed, transmitted, and received over networks. With increasing cyber threats, ensuring network security is paramount; it involves implementing measures such as firewalls, encryption techniques, intrusion detection systems (IDS), and virtual private networks (VPNs) to protect against unauthorized access and data breaches. The emergence of cloud computing has also transformed how businesses manage their networks by offering scalable infrastructure and services, allowing organizations to store and access data remotely, enhance collaboration, and allocate resources flexibly. The broader benefits of networking include:

- Enhanced connectivity fosters global collaboration, expanding opportunities for professional growth.
- Improved networking capabilities enable real-time communication across continents, strengthening relationships between individuals and businesses.
- Efficient networks support the seamless transfer of information, promoting innovation in fields such as research and development.
- Networking advancements bridge geographical gaps, making education accessible in remote areas and empowering learners worldwide.

| Benefits | Challenges |
| --- | --- |
| Increased productivity | Network vulnerabilities |
| Global communication | Bandwidth constraints |
| Collaborative opportunities | Maintenance complexity |
| Access to vast resources | Ensuring network reliability |

In light of these key aspects, understanding the fundamentals of networking is crucial in today's digital era. By embracing innovative technologies and implementing robust security measures, we can harness the power of interconnectedness while mitigating potential risks. As our reliance on digital networks continues to grow, protecting your online privacy and security has become more critical than ever before.

Protecting Your Online Privacy and Security

Imagine a small business owner named Sarah who recently started her own boutique clothing store.
She wants to expand her customer base and increase brand awareness, but she is unsure how to reach a wider audience. This is where social media comes into play, offering an array of opportunities for businesses like hers. Harnessing the potential of social media can bring numerous benefits:

- Increased visibility: Social media platforms provide an extensive network that allows businesses to showcase their products or services to a larger audience. By leveraging popular platforms such as Facebook, Instagram, or Twitter, businesses gain exposure and attract potential customers.
- Direct interaction with customers: Social media enables businesses to engage directly with their target audience. Through comments, messages, and polls, companies can gather valuable feedback and build meaningful relationships by promptly addressing inquiries or concerns.
- Cost-effective marketing: Unlike traditional advertising methods that require substantial financial investment, social media offers cost-effective marketing options. Businesses can create engaging content organically or run budget-friendly advertisements targeting specific demographics.
- Data-driven insights: The analytics tools provided by social media platforms give businesses access to valuable data about their audience's preferences and behaviors. These insights enable them to tailor their strategies and optimize the effectiveness of their campaigns.

The following comparison illustrates the impact of social media on business growth:

| | Traditional Marketing | Social Media Marketing |
| --- | --- | --- |
| Interaction | One-way communication | Two-way communication |
| Measurable Results | Challenging | Easily trackable |

As the table shows, incorporating social media into marketing efforts provides significant advantages over traditional approaches. In light of these benefits, it becomes evident why integrating social media into business strategy is crucial. In the following section, we explore the role of web development in modern businesses and how a well-designed website can enhance a company's online presence, complement social media efforts, and drive growth.

The Role of Web Development in Modern Business

In today's digital age, web development plays a crucial role in shaping and driving modern businesses. As companies strive to establish an online presence and connect with their target audiences, effective web development strategies are essential for success. To illustrate this point, consider the case study of Company X, a startup specializing in e-commerce. Company X recognized the importance of robust web development early on. By investing in professional website design and development services, it created a user-friendly platform that seamlessly integrated its products and services. This not only enhanced the customer experience but also boosted its brand reputation and significantly increased sales. To fully understand the significance of web development in modern business, we must explore its key components:

Responsive Design – With the increasing use of mobile devices, having a responsive website is vital for attracting and retaining customers.
A responsive design ensures that your site adapts effortlessly to different screen sizes, providing users with optimal viewing experiences across all devices.

User Experience (UX) Optimization – Ensuring an intuitive and seamless user experience is paramount for any business operating online. Effective UX optimization involves streamlining navigation, implementing clear call-to-action buttons, and minimizing load times to enhance user satisfaction and drive conversions.

Search Engine Optimization (SEO) – To stand out among competitors in search engine results pages (SERPs), proper SEO techniques should be applied during web development. Optimizing keywords, meta tags, URLs, and other elements of the website's structure improves organic visibility and attracts more potential customers.

Content Management Systems (CMS) – CMS platforms such as WordPress or Drupal allow businesses to update content without extensive technical knowledge or reliance on developers. This empowers organizations to keep their websites fresh and relevant by regularly publishing new information and promotions.

Table: Benefits of Professional Web Development

| Benefit | Description |
| --- | --- |
| Enhanced Credibility | Professionally developed websites instill trust and credibility in users, increasing the likelihood of conversions. |
| Competitive Edge | A well-designed website gives businesses a competitive advantage by standing out among competitors. |
| Increased Visibility | Effective web development techniques improve search engine rankings, making it easier for potential customers to find your site. |
| Scalability | Properly built websites can easily accommodate growth and expansion, ensuring long-term success. |

In summary, web development holds enormous potential for modern businesses like Company X. By implementing responsive design, optimizing user experience, following SEO practices, and utilizing CMS platforms, companies can establish a strong online presence that drives customer engagement and overall success. We now turn to how cloud computing complements web development strategies, enabling businesses to maximize productivity and efficiency while minimizing costs.

Harnessing the Power of Cloud Computing

Having explored the significance of web development for modern businesses, we now delve into another vital technology revolutionizing the digital landscape: cloud computing. To illustrate its potential impact, consider a hypothetical scenario in which an e-commerce company experiences exponential growth in customer demand. In this era of rapid technological advancement, harnessing the power of cloud computing has become indispensable for companies seeking scalability and flexibility. By leveraging remote servers to store and process data, businesses can achieve enhanced efficiency and streamlined operations. This approach offers numerous benefits, including:

- Increased storage capacity: With cloud computing, organizations gain access to virtually unlimited storage space without investing heavily in physical infrastructure.
- Cost-effectiveness: Adopting cloud-based solutions enables companies to reduce their capital expenditure by eliminating the need for on-site servers and IT maintenance costs.
- Improved collaboration: Cloud platforms facilitate seamless collaboration among team members in different locations through real-time document sharing and simultaneous editing capabilities.
- Enhanced security measures: Leading cloud providers prioritize data security by implementing robust encryption protocols and regular backups.

To further highlight the advantages of embracing cloud technology, consider Table 1, which compares traditional on-premises systems with cloud-based alternatives:

| Aspect | On-Premises Systems | Cloud-Based Solutions |
| --- | --- | --- |

This comparison demonstrates how adopting cloud-based solutions can provide significant advantages over traditional on-premises systems. By leveraging the cloud, businesses can unlock new opportunities for growth and innovation. In the pursuit of digital transformation, organizations must also recognize that harnessing the power of cloud computing is just one step toward business success in today's data-driven world; the subsequent section explores data analysis techniques that companies can employ to gain valuable insights and make informed decisions.

Data Analysis Techniques for Business Success

Unlocking the Power of Cloud Computing

Having explored the potential of cloud computing, we now turn our attention to its practical applications in various industries. For instance, consider a hypothetical scenario in which a small e-commerce business uses cloud computing to enhance its operations and scalability. The benefits of employing cloud computing are numerous and extend beyond traditional business models. Key advantages include:

Enhanced Flexibility: By leveraging cloud-based infrastructure, businesses can seamlessly scale their operations up or down based on demand. This enables them to adapt quickly to changing market conditions without significant upfront investment.

Improved Collaboration: Cloud platforms facilitate real-time collaboration among team members regardless of geographical location. Through shared access to documents and files, employees can work together efficiently, fostering innovation and productivity.

Cost Efficiency: One of the most compelling aspects of cloud computing is its cost-saving potential. Businesses no longer need to invest heavily in physical servers or expensive software licenses; instead, they pay only for the resources they use, significantly reducing overall IT expenses.

Data Security: Cloud service providers prioritize data security by implementing robust measures such as encryption and regular backups. These safeguards not only protect sensitive information but also provide peace of mind to businesses that may lack dedicated IT resources.

Table – Advantages of Cloud Computing

| Advantage | Description |
| --- | --- |
| Enhanced Flexibility | Scale operations up or down based on demand |
| Improved Collaboration | Enable real-time collaboration regardless of location |
| Cost Efficiency | Reduce IT expenses by paying only for resources used |
| Data Security | Implement strong measures to protect sensitive information |

In summary, embracing cloud computing offers businesses newfound agility, efficiency, and security. The ability to easily adjust resources according to demand ensures they remain competitive in dynamic markets while minimizing costs.
Furthermore, collaborative capabilities empower teams with seamless communication across distances, driving innovation within organizations. As we delve deeper into the integration of technology, our focus now shifts to another essential aspect of modern business operations: managing finances efficiently in the digital era.

Web Finance: Managing Finances in the Digital Era

Having explored the significance of data analysis techniques for business success, we turn our attention to another crucial capability that businesses must master in the digital era: web finance. In today's interconnected world, managing finances online has become a necessity rather than a choice. This section examines how businesses can navigate the realm of web finance to ensure financial stability and growth. The impact of technological advancement on financial management cannot be underestimated. Consider the hypothetical case of Company X, a small e-commerce startup that struggled with traditional financial methods before transitioning to web finance solutions. By embracing digital tools and platforms, such as online payment gateways and cloud-based accounting software, Company X not only streamlined its operations but also gained real-time insight into its financial performance. As a result, it successfully expanded its customer base and increased revenue by 30% within six months. To harness the potential benefits of web finance, businesses should take note of several key considerations:

Enhanced Security Measures:
- Implement robust encryption protocols
- Conduct regular security audits
- Adopt two-factor authentication
- Educate employees about cybersecurity best practices

Seamless Systems Integration:
- Ensure compatibility between various financial software systems
- Facilitate smooth data transfer across different platforms

Automated Financial Processes:
- Utilize automated invoicing and billing systems
- Employ machine learning algorithms for fraud detection (a minimal sketch appears at the end of this section)
- Leverage artificial intelligence-driven predictive analytics for budgeting and forecasting

Continuous Adaptation:
- Stay updated with emerging trends in web finance technologies
- Embrace innovative solutions to stay ahead of competitors

Table: Advantages of Web Finance Solutions

| Advantage | Description |
| --- | --- |
| Cost Efficiency | Reduced expenses for physical infrastructure, paperwork, and manual processing |
| Real-Time Monitoring | Access to up-to-date financial data for informed decision-making |
| Global Reach | Ability to transact internationally and expand operations across borders |
| Scalability | Easily accommodate growth or downsizing without significant disruption |

In conclusion, web finance has revolutionized the way businesses manage their finances. By leveraging digital tools and embracing emerging technologies, companies can enhance security measures, automate processes, seamlessly integrate systems, and continuously adapt to evolving trends. The advantages of web finance solutions include cost efficiency, real-time monitoring capabilities, global reach, and scalability. As we move further into the digital era, mastering this aspect becomes crucial for organizations aiming to thrive in the competitive landscape of today's interconnected world.
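To make the fraud-detection bullet above concrete, here is a minimal sketch of one classical screening approach: flagging transactions whose modified z-score is extreme, using the median and median absolute deviation, which resist distortion by the fraud itself. The amounts and the 3.5 threshold are illustrative; production systems use far richer features than amount alone.

```python
import statistics

def flag_outliers(amounts: list[float], threshold: float = 3.5) -> list[int]:
    """Return indices of transactions with an extreme modified z-score."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)  # robust spread
    if mad == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

history = [42.0, 55.5, 38.2, 61.0, 47.3, 52.9, 2500.0]  # hypothetical card activity
print(flag_outliers(history))  # -> [6]: the 2500.0 transaction is flagged
```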
What’s a double pulsar?
by Kate Kershner

Pulsars are the dead cores of massive stars rotating on their axes, often hundreds of times per second. The pulsar’s magnetic poles emit radio and optical radiation beams that flash across our line of sight, making the star appear to blink on and off. You would not be wrong if you thought that a “pulsar” sounds like a great addition to your weekend rave. (You live in 1995.) A pulsar does kind of resemble a big, galactic strobe light and, with its steady rhythm, it could even allow you to keep time as you trip the light fantastic. But you probably wouldn’t want one at your weekend party, let alone two.

Before we trip even harder imagining double pulsars, let’s talk about how a pulsar works in general. When a massive star collapses, it goes out in a giant explosion called a supernova. If the star is big enough, it will collapse into itself to form a black hole, and that is the end of the story as we know it. But if it’s just a little smaller (and we’re still talking massive stars here, several times bigger than our sun), a pretty cool phenomenon occurs. Instead of collapsing into a super-dense point source (the black-hole scenario), the protons and electrons at the star’s core crush into each other until they actually combine to form neutrons. What you get is a neutron star that might be just a few miles across but has as much mass as our sun. That means the tough little star is so dense that a teaspoon full of its neutrons would weigh 100 million tons (90,719,000 metric tons) here on Earth.

But let’s not forget the “pulsing” part of pulsars. A pulsar can emit beams of visible light, radio waves, even gamma rays and X-rays. If the beams are oriented just right, they sweep toward Earth like a lighthouse signal, in an extremely regular pulse, perhaps more accurate than even an atomic clock. Pulsars also spin very quickly, as often as hundreds of times per second.

But let’s get to the good stuff: what’s a double pulsar? As a close and astute reader, you’ve probably already figured out that a double pulsar is two pulsars. And while it’s not unusual to find a binary pulsar, where a pulsar orbits another object such as a star or white dwarf, it’s a lot more unusual to find two pulsars orbiting each other. In fact, we know of only one such system, discovered in 2003.

One of the coolest things about double pulsars is that they can help us understand, or even confirm, some huge theoretical physics principles. Because they are such reliable astrophysical clocks, scientists immediately set to work testing parts of Einstein’s theory of general relativity. One section of that theory suggests that huge events, like the merger of two enormous black holes, could create ripples in space-time (called gravitational waves) that spread throughout the universe. Thanks to pulsars, scientists have discovered that the stars wobble like tops in the curved space-time of their orbit, as predicted by Einstein. They have also observed that the orbits are becoming smaller as gravitational waves carry energy away, another Einstein prediction proved correct.

Sources: University of Manchester, Weisberg.
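As a rough, order-of-magnitude check on the density claims above, here is a back-of-the-envelope calculation assuming one solar mass packed into a sphere of 10 km radius; popular accounts assume different radii and "teaspoon" volumes, which is why quoted figures range from hundreds of millions to billions of tons.

```python
import math

M_SUN = 1.989e30    # kg, mass of the sun
R = 10e3            # m, assumed neutron star radius (~10 km)
TEASPOON = 4.93e-6  # m^3, about 5 mL

volume = (4 / 3) * math.pi * R ** 3
density = M_SUN / volume            # ~4.7e17 kg/m^3
teaspoon_kg = density * TEASPOON    # ~2.3e12 kg

print(f"density: {density:.1e} kg/m^3")
print(f"one teaspoon: {teaspoon_kg / 1000:.1e} metric tons")  # billions of tons
# Under these assumptions the teaspoon comes out in the billions of tons;
# smaller assumed radii or volumes give correspondingly different figures.
```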
The diameter of a circle is the length of a straight line from one edge of the circle to the opposite edge, passing through the circle's center point. The diameter is always the longest line that can be drawn from side to side. When two circles are drawn with the smaller circle inside the larger one, the inside diameter is the diameter of the smaller circle. The inside diameter of a metal pipe or other tubing is the distance from one inside edge to the opposite inside edge, crossing the center point. This mathematical concept has many practical applications for the home handyman.

To practice measuring the inside diameter of a two-dimensional circle:
1. Draw a circle on a sheet of paper using a pencil and compass, then outline the circle with a thick black marker.
2. Draw a straight line through the center point of the circle with the pencil, starting at the inside edge of the black line and ending at the inside edge of the black line on the opposite side of the circle. Note that this diameter is the longest possible line that can be drawn through the circle.
3. Align the "0" point of the ruler with the spot on the circle's edge that meets the straight line.
4. Read the point on the ruler that touches the opposite end of the line; this is the measurement of the inside diameter.

To measure a three-dimensional tube:
1. Align the "0" point of the ruler with one inside edge of the tube. Hold this edge firmly with one hand while pivoting the ruler slightly up or down at the opposite edge of the tube, visually estimating where the center point of the inner circle is and letting the top edge of the ruler touch that point.
2. Note the distance on the ruler from the "0" point to where the top edge of the ruler touches the inner edge on the opposite side of the tube.
3. Pivot the ruler up a very small amount, about 1 mm, and note the new distance from the "0" point to where the ruler touches the opposite inner edge.
4. Pivot the ruler down by the same small amount and note this new measurement as well.
5. Repeat the process of pivoting the ruler slightly up and down and recording the lengths, as in Steps 3 and 4, until you are certain you have found the position that gives the longest possible measurement from one side of the circle to the other. This longest measurement is the inside diameter of the tube.
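The pivoting procedure works because, among all chords of a circle, the one through the center (the diameter) is the longest. The short sketch below illustrates this with the chord-length formula c = 2·sqrt(r² − d²), where d is the chord's perpendicular distance from the center; the radius and offsets are made-up example values.

```python
import math

def chord_length(radius: float, offset: float) -> float:
    """Length of a chord whose perpendicular distance from the center is `offset`."""
    return 2 * math.sqrt(radius ** 2 - offset ** 2)

r = 25.0  # mm, hypothetical inside radius of a tube
# Sweep the "ruler" across several offsets, as in the pivoting steps above:
readings = [chord_length(r, d) for d in (10, 5, 2, 0, 2, 5)]
print(max(readings))  # -> 50.0: the longest reading equals the inside diameter
```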
Are you like many parents who find themselves fighting with their child about completing homework? Finishing chores? Finding the water bottle that has been left behind for the tenth time? If so, you are not alone. Executive function has become a buzzword in schools and psychology offices, increasingly identified as a significant contributor to children's ability to succeed in school and navigate the world independently and efficiently. Despite this, executive function processes are not always taught systematically in school and are not a focus of the curriculum. Furthermore, classroom instruction generally focuses on the content, or the what, rather than the process, or the how, of learning, and does not systematically address metacognitive strategies that teach students to think about how they think and learn. While some kids have a natural capacity for using executive function skills, others have more difficulty and require explicit instruction.

What are the Executive Functions?

Executive functions represent an umbrella construct for a collection of interrelated skills responsible for purposeful, goal-directed, problem-solving behavior. A useful metaphor is to think of the brain as the "orchestra" and the executive functions as the "conductor." Executive functioning skills enable children and adults to successfully perform such activities as planning, organizing, paying attention to and remembering details, goal-directed persistence, and time management, to name a few. Notably, these emerging skills can be observed in infants and toddlers and continue to develop through young adulthood. While executive functioning deficits have been identified in youth with a range of neuropsychological and psychiatric challenges (e.g., ADHD, learning disabilities, anxiety, depression), other children may not have a disability but still suffer from executive functioning deficits.

Researchers in the field of psychology have identified a list of executive function skills, each of which helps a child or adolescent successfully complete certain daily activities. Areas of weakness in these skills, or "executive dysfunction," have been shown to affect a child's ability to function successfully at home or in school. For example, a child or adolescent who has a hard time starting homework assignments or consistently puts off projects until the last minute may be viewed as lazy or unmotivated. However, this child may be struggling with a deficit in the executive functioning skill known as task initiation: the ability to recognize when it is time to get started on something and begin the task without undue procrastination. The following is a list of executive function skills, their definitions, and examples of what each skill looks like in children and adolescents.

- Inhibition – The ability to stop one's own behavior, actions, or thoughts at the appropriate time. Example: A child can wait a short time without being disruptive.
- Flexibility/Shift – The ability to move freely from one situation to another and respond appropriately. Example: A child is able to adjust when a familiar routine is disrupted or a task becomes too complicated.
- Emotional Control – The ability to manage emotions to achieve goals, complete tasks, or direct behavior. Example: A teenager can manage the anxiety before a game or test and still perform.
- Task Initiation – The ability to begin tasks in a timely manner. Example: A teenager does not wait until the last minute to begin a project.
- Working Memory – The ability to hold information in memory while completing a task. Example: A child is able to hold in mind and follow two- or three-step directions.
- Organization – The ability to impose order on work, play, and storage spaces. Example: A teenager can organize and locate sports equipment.
- Planning/Prioritization – The ability to create a road map to reach a goal or complete a task. Example: A child can think of options to settle a peer conflict, or a teen can formulate a plan to get a job.
- Self-Monitoring – The ability to monitor one's own performance and observe how one solves problems. Example: A young child can change a behavior in response to feedback from an adult.

Interventions to Improve Executive Function Skills

We know that executive functioning skills can improve with appropriate coaching and supports. At the Portsmouth Neuropsychology Center, we offer executive functioning coaching services. Gianna Alden, MA, executive function coach, consults with children, adolescents, and young adults. Services include an initial intake meeting, gathering of important information about your child's skill sets, including areas of strength and relative challenge, and an individualized intervention plan. Please contact our office to schedule a consultation.
The cornerstone of President Barack Obama’s Climate Action Plan, released on June 25, is the production of cleaner electricity by cutting carbon pollution from power plants. These facilities are the largest source of climate pollution in the United States. On September 20, the Environmental Protection Agency, or EPA, took a big step by proposing carbon-pollution standards for future coal and natural gas power plants.

Coal-fired electricity produces 30 percent of all domestic carbon pollution. Although there are strict limits on other power plant pollutants, including mercury and the ingredients for smog and acid rain, there are no limits on carbon pollution. Under the proposed EPA standards, however, new coal plants would have to produce 40 percent less carbon pollution than the best-performing plants in use today. The new limits would ensure that future coal plants contribute about the same amount of carbon pollution as natural gas plants. This would provide a path for future, cleaner coal-powered electricity in a carbon-constrained world.

The proposed carbon-pollution standards for future power plants were developed under Section 111 of the Clean Air Act. This section requires that any “new source performance standard” for future industrial facilities be based on “the best system of emission reduction” that the EPA determines has been “adequately demonstrated.” The Washington, D.C., federal appeals court has concluded that this provision of the law was intended to “create incentives for new technology” and “stimulate and augment the innovative character of industry in reaching for more effective, less costly systems to control air pollution.” The same court concluded that the act “looks toward what fairly may be projected for the future, rather than the state of the art at present.”

After careful analysis, the EPA found that carbon capture and storage, or CCS, technology has been adequately demonstrated and is available to meet the agency’s proposed emissions limits for future coal plants. CCS lowers emissions by capturing carbon pollution formed during power generation, compressing it into a liquid, and then injecting it into underground repositories that will store it permanently without leakage. The EPA’s proposal would limit emissions to 1,100 pounds of carbon dioxide, or CO2, per megawatt hour, or MWh. This is a common-sense standard that will save money by requiring only partial CO2 capture, even though nearly complete capture is feasible using existing technology. The proposed rule would also allow plant operators to delay operation of CCS equipment until later in the useful life of a future plant, providing time to fine-tune the technology.

Technologies to capture CO2 during industrial processes are well established outside the power sector and have been successfully demonstrated in pilot-scale testing at power plants. Moreover, the EPA found evidence indicating that “geologic sequestration is a viable long term CO2 storage option.” Geological formations found across the United States have the capacity to safely hold vast amounts of CO2. Several projects to deploy CCS at commercial-scale power plants are actively progressing. The Massachusetts Institute of Technology reports that there are 24 large-scale CCS power projects worldwide, including seven in the United States and Canada that are under construction or in advanced planning stages.
For instance, Southern Company is building a 582-megawatt Integrated Gasification Combined Cycle, or IGCC, plant in Kemper County, Mississippi, that will capture 65 percent of its carbon pollution (3.5 million tons annually). The nearly finished plant should begin operation next year. Poised to begin construction later this year is Summit Power’s Texas Clean Energy Project, or TCEP, a 400-megawatt IGCC unit that will capture 90 percent of its carbon pollution (2.5 million tons annually). Saskatchewan, Canada, is home to SaskPower’s Boundary Dam project, where an aging 110-megawatt coal-fired generation unit is being rebuilt with post-combustion carbon capture technology that will remove 1 million tons of carbon pollution annually. It should be on-line in early 2014.

Not all CCS projects, however, have moved forward. American Electric Power, or AEP, successfully tested CCS at its Mountaineer power plant in West Virginia but halted further deployment in 2010. The New York Times reported that the project “would not move again unless there were clear federal rules setting a timeline for when and how much coal plants have to reduce emissions.” The AEP project demonstrates that the most significant impediment to the development and deployment of CCS is the lack of a price or limit on carbon pollution. The Government Accountability Office, or GAO, reported to Congress in 2010 that “without a tax or sufficiently restrictive limit on CO2 emissions, plant operators lack an economic incentive to use CCS technologies.” Likewise, several University of Utah faculty recently surveyed 229 CCS experts on the availability of CCS and determined that a “lack of a price signal or financial incentive” is a major barrier to commercialization. Nonetheless, the study found that “CCS experts share broad confidence in the technology’s readiness, despite continued calls for commercial-scale demonstration projects before CCS is widely deployed.”

Clearly, the EPA’s proposed carbon-pollution rule for future power plants is precisely the signal necessary to develop a market for CCS technology, which could maintain coal as part of the future electricity-generation mix. Nonetheless, the coal industry and its congressional allies have attacked the EPA’s proposed rule and dismissed the prospects for technological innovation by utility companies and their equipment suppliers. For instance, House Energy and Commerce Committee Chairman Fred Upton (R-MI) said that “the proposed standards would require the use of expensive new technologies that are not commercially viable,” even though a number of large-scale power plants using CCS will soon be in operation. Contrary to GAO and other impartial analyses, Rep. Upton also claimed that the proposed rule would “discourage investment.”

Some opponents question the proposed carbon-pollution standards because the commercial-scale CCS projects underway received federal funds. As The Washington Post reported last week: Joseph Stanko, head of government relations for the law firm Hunton & Williams, said the EPA’s reliance on “federally funded demonstration projects” as the base for its new standard “is illegal, it doesn’t ‘adequately demonstrate’ technology for normal use.” But nearly every energy technology, including nuclear power, oil and gas, and renewables, has received government financial support. A study commissioned by the Nuclear Energy Institute found that the government spent $837 billion (in 2010 dollars) on federal energy incentives and support from 1950 to 2010.
Twelve percent of that—$104 billion—was for coal. And federal resources account for only a portion of the funding for the Kemper and TCEP projects, with far larger amounts deriving from the issuance of bonds, cost recovery from ratepayers, or equity investments. Investors’ willingness to back these projects is a vote of confidence in the commercial potential of CCS technology. Many of the same legislators and companies who now claim that CCS technology needs additional time to mature before it can justify a pollution standard also opposed comprehensive energy and climate change legislation in the 111th Congress that would have stimulated investment in CCS. The American Clean Energy and Security Act of 2009, H.R. 2454, sponsored by Reps. Henry Waxman (D-CA) and Edward Markey (D-MA), would have provided $60 billion in incentives to speed the development of CCS. The Senate companion bill, drafted by Sens. John Kerry (D-MA) and Joe Lieberman (I-CT), which stalled in the Senate due to opposition from coal and utility companies, would have also invested billions of dollars in CCS development and deployment. Similar to these legislative proposals, the EPA’s proposed rule will speed the development and deployment of CCS. With a clear and certain technology-based pollution-reduction target, equipment vendors would have an incentive to develop new carbon capture systems, and improve existing ones to lower costs and enhance performance. Utilities could seek federal grants or loan guarantees from existing programs to defray part of the CCS costs. Investors would be more inclined to finance the initial generation of CCS plants to gain a “first mover” advantage, knowing that a market would exist for more plants as the industry scales up. Utilities are nervous that public service commissions that oversee their electricity rates will not allow them to recover the costs from the increased expense of building power plants with CCS technology. An EPA carbon-pollution standard would enable utilities to make a much stronger case for cost recovery because CCS would be a requirement for any future power plant burning coal. Captured carbon could be sold to meet the growing demand for CO2 in enhanced oil recovery, or EOR—a process where CO2 is injected into abandoned wells to recover additional petroleum. EOR now yields approximately 300,000 barrels of oil per day, but its potential is much greater. What’s more, the expansion of the existing pipeline network that supplies CO2 for EOR would provide more power plants with access to commercial markets for captured CO2, reducing the costs of CCS. Although coal power plants are the largest source of domestic carbon pollution, coal’s share of electricity generation has recently declined. In 2008, 48 percent of U.S. electricity came from coal, but this number dropped to 37 percent in 2012. Over the past decade, most of the new electricity generation has come from renewables or natural gas, according to the Energy Information Administration. Furthermore, more than 160 planned coal plants were scrapped over the past decade, and none of the 136 new electricity generators that will come on-line this year is a traditional coal-burning unit, according to The New York Times. 
In short, coal lost market share due to:
- Competition from cleaner, cheaper, abundant natural gas
- The decline in electricity demand, which was nearly 25 percent lower in 2012 compared to 2001
- The retirement of aging, dirty plants
- The near doubling of no-carbon wind, solar, and other renewable electricity sources between 2008 and 2012

Every day brings new scientific evidence that adds urgency to reducing the carbon and other pollution responsible for climate change, and the EPA’s proposed carbon-pollution standard could not be more timely. With virtually no traditional coal plants under construction due to factors unrelated to EPA health rules, the claim that the agency’s proposed standards will ban future coal-burning facilities does not hold water. Rather than attacking the EPA’s proposal, the coal and utility industries should use the lull in new plant construction to make investments that secure a future role for coal-fired power plants while we finally slash pollution from power plants. An ambitious but attainable standard that enables coal plants to achieve emission levels comparable to those of cleaner natural gas plants, via deployment of a new and viable technology, would give coal a new lease on life. Blocking this rule would continue the decline of conventional coal plants. That is why coal and utility companies, and their political allies, should endorse the EPA’s common-sense proposal to clean up future power plants.

Robert Sussman was recently the senior policy counsel to EPA Administrator Lisa Jackson. Daniel J. Weiss is a Senior Fellow and the Director of Climate Strategy at the Center for American Progress.
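As a back-of-the-envelope footnote to the 1,100 lb CO2/MWh limit discussed above, the arithmetic below shows why that level implies roughly the "40 percent less" figure; the emission factor and heat rate are approximate public figures assumed here for illustration, not values from the article.

```python
# Assumed, approximate figures for an efficient conventional coal unit:
emission_factor = 205.7  # lb CO2 per MMBtu, typical for bituminous coal
heat_rate = 8.8          # MMBtu consumed per MWh generated

uncontrolled = emission_factor * heat_rate  # ~1,810 lb CO2/MWh without capture
standard = 1100                             # proposed limit, lb CO2/MWh

print(f"uncontrolled: {uncontrolled:.0f} lb/MWh")
print(f"required cut: {1 - standard / uncontrolled:.0%}")  # -> about 39%
```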
When we gaze at the lush green canopies of forests, it is easy to assume that what lies beneath remains hidden from view. However, thanks to technological advances in remote sensing, Synthetic Aperture Radar (SAR) has emerged as a powerful tool capable of peering through this dense foliage. In this article, we delve into the fascinating world of SAR and explore its foliage penetration capabilities, which are revolutionizing our understanding of the hidden landscapes beyond the green canopy.

Understanding Synthetic Aperture Radar (SAR)

Synthetic Aperture Radar (SAR) is a remote sensing technology that uses radar waves to capture high-resolution images of the Earth's surface. Unlike optical sensors that rely on visible light, SAR operates in the microwave portion of the electromagnetic spectrum. This enables SAR to overcome the limitations of optical sensors, particularly in regions with persistent cloud cover or during nighttime observations.

SAR is a well-established technique for two-dimensional high-resolution ground mapping. The basic principle of any imaging radar is to emit an electromagnetic signal (which travels at the speed of light) toward a surface and record the amount of signal that bounces back, or "backscatters," along with its time delay. The resolution of a radar antenna depends on the antenna aperture: the larger the antenna, the better the resolution. The geometrical resolution and sensitivity of radars are significantly improved using advanced signal processing techniques. Examples are synthetic aperture processing and ground moving target indication (GMTI), which enable both stationary and moving targets to be detected, located, and classified at large stand-off distances. A SAR takes advantage of the motion of the antenna to achieve an apparent antenna length, or aperture, greater than its actual length. As the antenna moves along a flight path, successive echoes are received from the same target and may be processed to give spatial resolution equivalent to an antenna as long as the distance the antenna moved while receiving the target echoes; hence the terminology "synthetic aperture radar."

The Challenge of Foliage Penetration

Atmospheric conditions such as clouds and rain do not significantly degrade the SAR signal. However, the presence of foliage (trees, brush, grasses) can greatly attenuate it. One of the significant challenges in remote sensing is the obstructive nature of vegetation: thick layers of foliage can impede the detection and characterization of objects on the ground, limiting the effectiveness of traditional remote sensing methods. SAR can be used to image a wide variety of targets across land, sea, and air, and it has the unique capability to penetrate the green canopy, allowing us to gain insight into what lies beneath. This capability to penetrate vegetation makes SAR a valuable tool for applications such as forestry, agriculture, and military surveillance.

The Science Behind Foliage Penetration

SAR's ability to penetrate foliage is primarily due to the longer wavelengths of microwave signals.
These signals can partially penetrate the leaves, branches, and trunks of vegetation, interacting with the underlying terrain or objects on the ground. By analyzing the complex interactions between the radar signals and vegetation, SAR can generate detailed images that reveal hidden landscapes and features. A foliage penetration (FOPEN) SAR is an ultra-wideband system that uses lower frequencies to "see" through the foliage. Static objects in forest terrain can be detected with low-frequency SAR, i.e., with a wavelength in the range 0.3-15 m. Low frequencies have the property of penetrating the vegetation layer with little attenuation, causing only weak backscattering from the coarse structures of the trees. Thus static objects, such as stationary vehicles, can be detected even in thick forest by combining low frequencies with the SAR technique, which gives a resolution on the order of the wavelength. This has been demonstrated in numerous experiments in recent years.

Low-frequency ultra-wideband (UWB) SARs in particular combine low frequencies with a large bandwidth, which together provide penetration capability and high resolution. UWB SARs are typically used for imaging applications such as foliage penetration, through-the-wall imaging, and ground penetration. The FOPEN SAR is a UWB system with a very high frequency (VHF) band of approximately 20-70 MHz and an ultra high frequency (UHF) band of approximately 200-500 MHz. Its basic operating principle involves transmitting pulsed radio-frequency waves and receiving the echoes scattered from targets and the ground surface. The echoes are subjected to analog preprocessing, digitized, and further digitally processed to produce the final imagery. The UHF band is a fully polarimetric (HH, VV, HV) side-looking radar; the VHF band operates with HH polarization.

Airborne Foliage Penetration (FOPEN) SAR

The primary advantage of an airborne system is the ability to acquire a large amount of data covering a wide area in a relatively short period of time. Current methods for estimating the extent of a UXO-contaminated site are multiphase, ground-based efforts, and a site evaluation generally requires several weeks or months; with an airborne SAR, the time frame could be reduced to days. Although the FOPEN SAR system can rapidly gather data over a large area, it was designed to detect large tactical vehicles, and its resolution limits the size of UXO that can be detected. Ordnance is typically found in clusters within the primary radius of a firing range, and a cluster of smaller UXO, not detectable individually, may still be imaged. On the fringes of a range, however, where the distribution of UXO is sparse, only the larger ordnance may be detected.

Applications of SAR Foliage Penetration:
- Environmental Monitoring: SAR's foliage penetration capabilities have revolutionized the study of forests, enabling scientists to estimate forest structure, monitor deforestation, and assess biomass content without the need for ground-based measurements.
- Agriculture and Crop Monitoring: SAR can penetrate crops and provide valuable information on plant health, moisture content, and biomass estimation. This helps farmers optimize irrigation, detect diseases, and manage crop yields more effectively.
- Disaster Management: During emergencies such as floods or landslides, SAR can penetrate the foliage and provide vital information on affected areas, aiding emergency responders in search and rescue operations.
- Defense and Security: SAR's foliage penetration capabilities have significant applications in defense and security, including reconnaissance, surveillance, and target detection in dense forested areas.

Ongoing counter-terrorism and counter-insurgency operations present tough challenges that our forces must face each day. They need surveillance and reconnaissance capabilities that provide a long-term stare at specific geographic locations so they can detect environmental changes, patterns, and asymmetric tactics. Situation awareness and accurate target identification are critical requirements for security forces engaged in counter-terrorism operations. Ground, airborne, and space-borne radars and electro-optical sensors have proved to be of great utility in detecting, tracking, and imaging targets ranging from vehicles to high-speed fighter aircraft, locating mortars and artillery, providing over-the-horizon capability, and supporting all-weather long-range surveillance. However, these sensors are limited in their ability to detect targets concealed in foliage. Conventional forces and terrorists or insurgents have exploited this weakness by employing camouflage, concealment, and deception tactics, hiding under camouflage nets and in forests for long periods. Foliage penetration SAR has also proved an efficient and cost-effective means of estimating the extent of contamination at unexploded ordnance (UXO) sites. The clearing of areas contaminated with UXO is the Army's highest-priority environmental restoration problem, and the Department of Defense (DoD) currently spends millions of dollars annually on UXO cleanup efforts.

Foliage Penetration Systems

The early history of FOPEN radar was driven by developments in radar technology that enabled the detection of fixed and moving objects under dense foliage. The most important part of that technology was the widespread awareness of the benefits of long-dwell coherent radar and the advent of digital signal processing. Almost as important were the quantification of radar propagation through foliage and the impact of radio-frequency interference on image quality. These systems were developed for both military and commercial applications, during a period of rapidly growing need for, and ability to support, operation in a dense signal environment. Finally, there is a clear benefit to the use of polarization for target characterization and false-alarm mitigation. New research addresses multi-mode ultra-wideband radar, with designs that combine SAR and moving target indication (MTI) in a single FOPEN system; at common FOPEN frequencies, systems have generally been either SAR or MTI because of the difficulty of obtaining the bandwidth or aperture characteristics needed for efficient operation in both modes.

Lockheed Martin's High-Resolution Penetrating Radar Detects, Geo-Locates, and Communicates Threats

Lockheed Martin's Tactical Reconnaissance and Counter-Concealment (TRACER) is a cutting-edge surveillance and reconnaissance system designed to address the challenges faced in counter-terrorism and counter-insurgency operations. TRACER offers a long-endurance surveillance capability, allowing sustained monitoring of specific geographic locations to detect environmental changes, patterns, and asymmetric tactics.
At the core of TRACER is a lightweight, low-frequency synthetic aperture radar (SAR) that operates in the UHF/VHF frequency ranges. This dual-band (UHF/VHF) SAR can peer through foliage, rain, darkness, dust storms, or atmospheric haze to provide real-time, high-quality tactical ground imagery whenever it is needed, day or night. By using low-frequency radio waves, TRACER can penetrate dense forest canopies and even detect objects below the ground. This penetrating SAR capability provides a significant advantage in detecting threats and illicit activities; the ability to operate in challenging conditions and see through obstacles makes TRACER a valuable tool for surveillance and reconnaissance missions.
- The FOPEN system is still in use by the U.S. military. It has been upgraded several times over the years and is now capable of detecting and tracking a wider range of targets.
- The FOPEN system is used in a variety of missions, including intelligence gathering, surveillance, and target acquisition.
- The FOPEN system is a valuable tool for the U.S. military and is likely to remain in use for many years to come.
- The TRACER system is newer than FOPEN. It is smaller, lighter, and more portable.
- The TRACER system is also more accurate than FOPEN: it can detect and track targets that are smaller and more difficult to see.
- The TRACER system is still in development, but it is already being used by the U.S. military in some missions.
- The TRACER system is a promising new technology and is likely to become more widely used by the U.S. military in the future.
TRACER's design is predicated on Lockheed Martin's proven foliage penetration (FOPEN) technology, developed specifically to detect vehicles, buildings, and large metallic objects in broad areas of dense foliage, forested areas, and wooded terrain. TRACER builds on those FOPEN advances by shrinking and modernizing the radar and by configuring it for unmanned endurance aircraft. The radar's advanced detection capability suppresses background clutter and returns from stationary objects while revealing the positions of mobile and portable targets. These technology advances, coupled with lessons learned from ongoing FOPEN operations, have contributed to new concepts of operations for the system. The system can be operated from low to very high altitudes, on manned and unmanned platforms.
TRACER also incorporates data-link technology, allowing the processed results from the airborne platform to be immediately down-linked to ground stations. This real-time data transmission enables commanders and operators on the ground to access high-quality tactical ground imagery promptly. Additionally, TRACER includes a portable ground station that facilitates mission planning, data collection, and imagery exploitation. The combination of TRACER's penetrating detection capability, persistent surveillance platform, and real-time data link provides commanders at all levels with actionable intelligence in a tactical and timely manner. This comprehensive system equips forces with the tools to detect threats, monitor activities, and make informed decisions in challenging operational environments.
Overall, Lockheed Martin's TRACER represents a significant advancement in surveillance and reconnaissance capabilities, providing a long-endurance solution that can penetrate obstacles, detect threats, and deliver real-time intelligence for effective decision-making in the field.
Swedish FOI's CARABAS & LORA systems
The Swedish Defence Research Agency, FOI (formerly known as FOA), has performed research since the mid-1980s in the area of airborne ultra-wideband VHF-band synthetic aperture radar (SAR). The work has resulted in two airborne CARABAS systems operating in the 20-90 MHz band. The prime application is the detection of man-made objects concealed by foliage or camouflage. Results have also shown that CARABAS is capable of accurately mapping forest stand volume (m3/ha), or biomass (ton/ha), up to about 1000 m3/ha, which is of high interest for environmental and commercial applications.
CARABAS, an acronym for "Coherent All Radio Band Sensing", is an airborne, horizontal-polarization SAR operating across the frequency band 20-90 MHz, conceived, designed, and built by FOA in Sweden. The original motivation for designing such a low-frequency system was that a large relative, or fractional, bandwidth could be achieved at low frequencies; a large fractional bandwidth was considered to be of potential benefit for radar detection in severe clutter environments. A feasibility study of a short-wave ultra-wideband radar started at FOA in 1985. Construction of the CARABAS system commenced in 1987, aircraft integration took place during 1991, and the first radar tests were conducted in early 1992. From the fall of 1992 onwards, field campaigns and evaluation studies have been conducted as a joint effort between FOA and MIT Lincoln Laboratory in the US.
- The CARABAS system is still in use by the Swedish military. It has been upgraded several times over the years and is now capable of detecting and tracking a wider range of targets.
- The CARABAS system is used in a variety of missions, including intelligence gathering, surveillance, and target acquisition.
- The CARABAS system is a valuable tool for the Swedish military and is likely to remain in use for many years to come.
LORA (low-frequency radar) is FOI's new airborne radar, which will succeed CARABAS. It will operate from 20 MHz to 800 MHz and will be used to demonstrate new defence and civilian applications. The main application is expected to be the detection of man-made objects in a wide range of operating conditions, i.e. both stationary and moving objects located in the open or under concealment. LORA has been designed as a multi-function VHF/UHF-band radar system which can simultaneously operate in both SAR and ground moving target indication (GMTI) modes. It operates in two basic configurations: 1) ultra-wideband SAR/GMTI at 200-800 MHz, and 2) ultra-wideband SAR at 20-90 MHz. The latter will be a replacement for the CARABAS system and was scheduled for completion during 2003.
- The LORA system was still in development at the time of writing.
- The LORA system is newer than CARABAS. It is smaller, lighter, and more portable.
- The LORA system is also more accurate than CARABAS: it can detect and track targets that are smaller and more difficult to see.
- The LORA system is expected to be used by the Swedish military in a variety of missions, including intelligence gathering, surveillance, and target acquisition.
- The LORA system is a promising new technology and is likely to be adopted by other militaries in the future.
Foliage penetration technology
A SAR system consists of a transmitter, a receiver, an antenna (including a pointing or steering mechanism), an image processor, and a display unit. An imaging radar system must distinguish between single and multiple scatterers located in close proximity. Resolution is the minimum distance needed between adjacent scatterers to separate them in the image. Fine resolution provides the capability to image a complex object or scene as a number of separate scattering centers, and the bulk of the development effort in radar imaging is aimed at improving resolution. Generally, range resolution is inversely proportional to the bandwidth of the transmitted signal: a wide bandwidth means finer range resolution. In conventional radar, resolution in the azimuth direction improves as the antenna beamwidth becomes smaller, and the beamwidth shrinks as the antenna aperture size or the radar frequency increases. Hence practical constraints such as antenna size and transmit frequency limit the azimuth resolution of a conventional real-aperture radar. The development of advanced processing algorithms solved this problem, leading to a new generation of imaging radars: synthetic aperture radar.
A synthetic aperture radar (SAR) is a coherent, mostly airborne or spaceborne, side-looking radar system which utilizes the flight path of the platform to simulate an extremely large antenna or aperture electronically, and thereby generates high-resolution remote sensing imagery. Over time, individual transmit/receive cycles (PRTs) are completed, with the data from each cycle stored electronically. The signal processing uses the magnitude and phase of the received signals over successive pulses as elements of a synthetic aperture. After a given number of cycles, the stored data are recombined (taking into account the Doppler effects inherent in the different transmitter-to-target geometry of each succeeding cycle) to create a high-resolution image of the terrain being overflown. SAR works much like a phased array, but instead of the large number of parallel antenna elements of a phased array, it uses one antenna in time-multiplex; the different geometric positions of the antenna elements result from the motion of the platform.
Detection performance of a radar system is directly related to the target-to-background backscattering statistics evaluated for the specific operating conditions. In general, radar backscattering is a complicated function of target geometry and its electromagnetic properties. Backscattering from the target background also contributes and competes with the target backscattering in the radar resolution cell; the coherent combination of target and background backscattering results in a statistical variability that reduces detection performance. For SAR systems operating in the UHF and VHF bands, backscatter phenomenology is quite different from that at microwave frequencies. Target sizes are often in the resonance region, i.e. of wavelength size, and the angular variation of the backscattering is much smaller than at microwave frequencies. Another important effect is the interaction between the target and the ground surface, i.e.
the coherent combination of the direct and ground-reflected backscattered waves. This effect reduces target backscattering at lower frequencies, since the direct and reflected waves tend to cancel each other; it becomes more pronounced at grazing angles and for target heights above the ground that are small compared to the wavelength.
A number of experiments have been performed to investigate the optimum choice of frequency band. The main conclusion is that foliage becomes increasingly transparent below 1 GHz, and below 100 MHz the two-way attenuation is most often less than 3 dB. In terms of foliage backscatter, it is only below 100 MHz, when the tree stems enter the Rayleigh scattering regime, that backscattering decreases; even below 100 MHz it still significantly affects detection performance. Typically, stems have diameters up to about half a meter, which implies that their backscattering drops significantly when the radar wavelength is larger than about five meters. This mechanism suggests that the optimum radar wavelength for detecting a vehicle-sized target under foliage is in the VHF band, with a weak dependence on stem diameter.
FOPEN SAR is synthetic aperture radar applied to foliage penetration; it is used to image targets concealed by foliated areas. To achieve fine range resolution, a UWB waveform is used. To further achieve high resolution in azimuth as well as range, long synthetic apertures, i.e. large integration angles, are required. However, these large integration angles lead to severe range migration, or motion through resolution cells (MTRC). Scatterers at different locations in an imaged scene experience different amounts of MTRC, and this variation makes the selection of a proper image formation algorithm critical. Moreover, the large integration angle, together with the UWB waveform, brings new complexities and challenges to traditional SAR imaging techniques.
Receivers with large analog and digital dynamic range have a rather narrow bandwidth. LORA's solution to this problem is to use a stepped-frequency waveform, so that the instantaneous bandwidth is much smaller than the full bandwidth; the full bandwidth is reconstructed by stitching together frequency bands in the signal processing. Each pulse in the waveform has a large time-bandwidth product (typically a chirp) to meet average power requirements.
One of the most problematic issues when designing an ultra-wideband radar below 1 GHz is the challenging radio frequency environment. The radar must be able to share its frequency bands with a large number of other services, i.e. without causing harmful interference. Furthermore, the system must not saturate due to external interference, which requires a receiver subsystem with very large dynamic range and out-of-band suppression. The dynamic range requirement is in direct conflict with the large bandwidth needed to achieve high-resolution SAR.
The system is operated in one of two modes: spot or strip. In spot mode, the radar is focused on a single point and data are gathered at different angles as the aircraft flies over the area; an image 3 km by 3 km is typically obtained. Strip mode differs from spot mode in that the radar viewing angle is held fixed and a swath of ground is imaged along the flight path, producing a 2 km by 7 km image. Image resolution varies but is typically less than 1 m in the UHF band for both modes.
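As a back-of-the-envelope illustration of the bandwidth-resolution relationship described above, the short Python sketch below applies the standard pulse-compression formula (range resolution = c/2B) and the usual fractional-bandwidth definition to the VHF and UHF band edges quoted in this article; the band limits come from the text, everything else is generic radar arithmetic rather than any specific system's design values.

```python
# Illustrative sketch: relating transmitted bandwidth to slant-range
# resolution. delta_R = c / (2 * B) is the standard matched-filter
# range-resolution formula for a pulse-compressed waveform.
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Slant-range resolution of a pulse-compressed waveform, in meters."""
    return C / (2.0 * bandwidth_hz)

def fractional_bandwidth(f_low_hz: float, f_high_hz: float) -> float:
    """Fractional bandwidth; values above ~0.25 are commonly called UWB."""
    return 2.0 * (f_high_hz - f_low_hz) / (f_high_hz + f_low_hz)

for name, f_lo, f_hi in [("VHF 20-90 MHz", 20e6, 90e6),
                         ("UHF 200-500 MHz", 200e6, 500e6)]:
    b = f_hi - f_lo
    print(f"{name}: dR = {range_resolution(b):.2f} m, "
          f"fractional BW = {fractional_bandwidth(f_lo, f_hi):.2f}")
# VHF 20-90 MHz:   dR ~ 2.14 m, fractional BW ~ 1.27
# UHF 200-500 MHz: dR ~ 0.50 m, fractional BW ~ 0.86
```

The UHF result of about 0.5 m is consistent with the sub-meter UHF resolution quoted above, and both bands easily exceed the usual fractional-bandwidth threshold for calling a system ultra-wideband.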
The data acquired during the UXO site evaluation described above were collected in strip mode. Spotlight SAR has an advantage over strip-map SAR in terms of resolution. In particular, VHF-band SAR provides a robust means of detecting truck-sized targets concealed in foliage. Targets concealed in foliage are most often visible in CARABAS-II SAR images, but the backscattering from tree stems causes false alarms. The false alarm rate may, however, be significantly reduced by applying change detection, which relies on collecting images on several occasions and detecting targets using classification methods. Change detection applied to VHF-band SAR images significantly improves target detection performance; a minimal sketch of the idea is given at the end of this section.
Challenges and Future Directions:
While SAR has proved its efficacy in foliage penetration, certain challenges remain. The complex scattering and interference patterns introduced by vegetation require sophisticated data processing techniques. Furthermore, advancements in SAR technology, including higher-resolution sensors and improved processing algorithms, hold the potential for even greater foliage penetration capabilities in the future.
Beyond the green canopy lies a hidden world waiting to be explored, and synthetic aperture radar has emerged as a powerful tool to unveil this concealed landscape. Its foliage penetration capabilities have opened new avenues in environmental monitoring, agriculture, disaster management, and defense. As we continue to push the boundaries of remote sensing technology, SAR's ability to see beyond the green canopy promises to transform our understanding of the Earth's hidden wonders.
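To make the change-detection step concrete, here is a minimal, hypothetical Python sketch (using NumPy). It assumes two co-registered amplitude images of the same scene and simply thresholds the per-pixel backscatter ratio; operational CARABAS-style change detection uses far more elaborate statistical hypothesis testing, so this only illustrates the principle that stationary clutter such as tree stems cancels between passes while a newly arrived object stands out. The function and variable names are invented for this example.

```python
# Hypothetical sketch of incoherent SAR change detection between two
# co-registered amplitude images (NumPy arrays of equal shape).
import numpy as np

def detect_changes(img_ref, img_new, threshold_db=6.0):
    """Flag pixels whose backscatter amplitude grew by more than threshold_db."""
    eps = 1e-12                               # avoid log of zero
    ratio_db = 20.0 * np.log10((img_new + eps) / (img_ref + eps))
    return ratio_db > threshold_db            # boolean change mask

rng = np.random.default_rng(0)
ref = rng.rayleigh(1.0, size=(256, 256))      # speckle-like forest clutter
new = ref.copy()
new[100:108, 120:128] *= 10.0                 # a "vehicle" appears (+20 dB)
mask = detect_changes(ref, new)
print(mask.sum(), "changed pixels")           # the 8x8 target block is flagged
```

Because the forest clutter is identical in both passes, its ratio is 0 dB everywhere and only the inserted target exceeds the threshold; with real repeat-pass imagery, residual misregistration and temporal decorrelation would require the statistical treatment mentioned above.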
What are derivatives in calculus
The following is provided under a Creative Commons License. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
Professor: In today's lecture I want to develop several more formulas that will allow us to reach our goal of differentiating everything. So these are derivative formulas, and they come in two flavors. The first kind is specific, so some specific function we're giving the derivative of. And that would be, for example, x^n or 1/x. Those are the ones that we did a couple of lectures ago. And then there are general formulas, and the general ones don't actually give you a formula for a specific function but tell you something like, if you take two functions and add them together, their derivative is the sum of the derivatives. Or if you multiply by a constant, for example, so cu, the derivative of that is cu', where c is constant. All right, so these kinds of formulas are very useful, both the specific and the general kind. For example, we need both kinds for polynomials. And more generally, pretty much any set of formulas that we give you will give you a few functions to start out with, and then you'll be able to generate lots more by these general formulas. So today we want to concentrate on the trig functions, and so we'll start out with some specific formulas. And they're going to be the formulas for the derivative of the sine function and the cosine function. So that's what we'll spend the first part of the lecture on, and at the same time I hope to get you very used to dealing with trig functions, although that's something that you should think of as a gradual process. Alright, so in order to calculate these, I'm gonna start over here and just start the calculation. So here we go. Let's check what happens with the sine function. So, I take sin(x + delta x), I subtract sin x, and I divide by delta x. Right, so this is the difference quotient, and eventually I'm gonna have to take the limit as delta x goes to 0. And there's really only one thing we can do with this to simplify or change it, and that is to use the sum formula for the sine function. So, that's this. That's sin x cos delta x plus- Oh, that's not what it is? OK, so what is it? Sin x sin delta x. OK, good. Plus cosine. No? Oh, OK. So which is it? OK. Alright, let's take a vote. Is it sine, sine, or is it sine, cosine? Professor: OK, so is this going to be. cosine. All right, you better remember these formulas, alright? OK, turns out that it's sine, cosine. All right. Cosine, sine. So here we go, now we've gotta do cos x here, times sin(delta x). Alright, so now there's lots of places to get confused here, and you're gonna need to make sure you get it right. Alright, so we're gonna put those in parentheses here. Sin(a + b) is (sin a)(cos b) + (cos a)(sin b). All right, now that's what I did over here, except the letter a was x, and the letter b was delta x. Now that's just the first part. That's just this part of the expression. I still have to remember the - sin x. That comes at the end. Minus sin x. And then, I have to remember the denominator, which is delta x. OK? Alright, so now. The next thing we're gonna do is we're gonna try to group the terms. And the difficulty with all such arguments is the following one: any tricky limit is basically 0/0 when you set delta x equal to 0. If I set delta x equal to 0, this is sin x - sin x.
So it's a 0/0 term. Here we have various things which are 0 and various things which are non-zero. We must group the terms so that a 0 stays over a 0. Otherwise we're gonna have no hope: if we get some 1/0 term, we'll get something meaningless in the limit. So I claim that the right thing to do here is to notice, and I'll just point out this one thing: when delta x goes to 0, this cosine of delta x goes to 1. So it doesn't cancel unless we throw in this extra sine term here. So I'm going to use this common factor and combine those terms. So this is really the only thing you're gonna have to check in this particular calculation. So we have the common factor of sin x, and that multiplies something that will cancel, which is (cos delta x - 1) / delta x. That's the first term, and now what's left, well, there's a cos x that factors out, and then the other factor is (sin delta x) / (delta x). OK, now does anyone remember from last time what this thing goes to? How many people say 1? How many people say 0? All right, it's 0. That's my favorite number, alright? 0. It's the easiest number to deal with. So this goes to 0, and that's what happens as delta x tends to 0. How about this one? This one goes to 1, my second favorite number, almost as easy to deal with as 0. And these things are picked for a reason. They're the simplest numbers to deal with. So altogether, this thing, as delta x goes to 0, goes to what? I want a single person to answer, a brave volunteer. Alright, back there. Professor: Cosine, because this factor is 0. It cancels, and this factor has a 1, so it's cosine. So it's cos x. So our conclusion over here - and I'll put it in orange - is that the derivative of the sine is the cosine. OK, now I still wanna label these very important limit facts here. This one we'll call A, and this one we're going to call B, because we haven't checked them yet. I promised you I would do that, and I'll have to do that this time. So we're relying on those things being true. Now I'm gonna do the same thing with the cosine function, except in order to do it I'm gonna have to remember the sum rule for cosine. So we're gonna do almost the same calculation here. We're gonna see that that will work out, but now you have to remember that cos(a + b) = cos cos, no, it's not cosine^2, because there are two different quantities here. It's (cos a)(cos b) - (sin a)(sin b). All right, so you'll have to be willing to call those forth at will right now. So let's do the cosine now. So that's (cos(x + delta x) - cos x) / delta x. OK, there's the difference quotient for the cosine function. And now I'm gonna do the same thing I did before, except I'm going to apply the second rule, that is, the sum rule for cosine. And that's gonna give me (cos x)(cos delta x) - (sin x)(sin delta x). And I have to remember again to subtract the cos x and divide by delta x. And now I'm going to regroup just the way I did before, and I get the common factor of cos x multiplying ((cos delta x - 1) / delta x). And here I get the sin x, but actually it's - sin x. And then I have (sin delta x) / delta x. All right? The only difference is this minus sign which I stuck inside there. Well, that's not the only difference, but it's a crucial difference. OK, again by A we get that this term goes to 0 as delta x tends to 0. And this is 1. Those are the properties I called A and B. And so the result here, as delta x tends to 0, is that we get negative sine x. That's the factor. So this guy is negative sine x. I'll put a little box around that too.
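Written out compactly, the two board computations just described are:

```latex
\frac{d}{dx}\sin x
 = \lim_{\Delta x \to 0}\frac{\sin(x+\Delta x)-\sin x}{\Delta x}
 = \lim_{\Delta x \to 0}\left[\sin x\cdot\frac{\cos\Delta x-1}{\Delta x}
   + \cos x\cdot\frac{\sin\Delta x}{\Delta x}\right] = \cos x,
\qquad
\frac{d}{dx}\cos x
 = \lim_{\Delta x \to 0}\left[\cos x\cdot\frac{\cos\Delta x-1}{\Delta x}
   - \sin x\cdot\frac{\sin\Delta x}{\Delta x}\right] = -\sin x,
```

using property A, (cos delta x - 1)/delta x tends to 0, and property B, (sin delta x)/delta x tends to 1.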
Alright, now these formulas take a little bit of getting used to, but before I do that I'm gonna explain to you the proofs of A and B. So we'll get ourselves started by mentioning that. Maybe before I do that, though, I want to show you how A and B fit into the proofs of these theorems. So, let me just make some remarks here. So this is just a remark, but it's meant to help you to frame how these proofs worked. So, first of all, I want to point out that if you take the rate of change of sin x, no, let's start with cosine because it's a little bit less obvious. If I take the rate of change of cos x, so in other words this derivative at x = 0, then by definition this is a certain limit as delta x goes to 0. So which one is it? Well, I have to evaluate cosine at 0 + delta x, but that's just cos delta x. And I have to subtract cosine at 0. That's the base point, but that's just 1. And then I have to divide by delta x. And lo and behold, you can see that this is exactly the limit that we had over there. This is the one that we know is 0, by what we call property A. And similarly, if I take the derivative of sin x at x = 0, then that's going to be the limit as delta x goes to 0 of sin delta x / delta x. And that's because I should be subtracting sine of 0, but sine of 0 is 0. So this is going to be 1, by our property B. And so the remark that I want to make, in addition to this, is something about the structure of these two proofs. Which is: the derivatives of sine and cosine at x = 0 give all values of d/dx sin x, d/dx cos x. So that's really what this argument is showing us, is that we just need one rate of change at one place and then we work out all the rest of them. So that's really the substance of this proof. That of course really then shows that it boils down to showing what this rate of change is in these two cases. So now there's enough suspense that we want to make sure that we know that those answers are correct. OK, so let's demonstrate both of them. I'll start with B. I need to figure out property B. Now, we only have one alternative as to the type of proof that we can give of this kind of result, and that's because we only have one way of describing the sine and cosine functions, that is, geometrically. So we have to give a geometric proof. And to write down a geometric proof we are going to have to draw a picture. And the first step in the proof, really, is to replace this variable delta x, which is going to 0, with another name which is suggestive of what we're gonna do, which is the letter theta for an angle. OK, so let's draw a picture of what it is that we're going to do. Here is the circle. And here is the origin. And here's some little angle, well, I'll draw it a little larger so it's visible. Here's theta, alright? And this is the unit circle. I won't write that down on here, but that's the unit circle. And now sin theta is this vertical distance here. Maybe I'll draw it in a different color so that we can see it all. OK, so here's this distance. This distance is sin theta. Now almost the only other thing we have to write down in this picture to have it work out is that we have to recognize that when theta is the angle, that's also the arc length of this piece of the circle when measured in radians. So this length here is also arc length theta. That little piece in there. So maybe I'll use a different color for that to indicate it. So that's orange, and that's this little chunk there. So those are the two pieces.
Now in order to persuade you that the limit is what it's supposed to be, I'm going to extend the picture just a little bit. I'm going to double it, just for my own linguistic sake and so that I can tell you a story. Alright, so that you'll remember this. So I'm going to take a theta angle below, and I'll have another copy of sin theta down here. And now the total picture is really like a bow and its bowstring there. So what we have here is a length of 2 sin theta. So maybe I'll write it this way, 2 sin theta. I just doubled it. And here I have underneath, whoops, I got it backwards. Sorry about that. Trying to be fancy with the colored chalk, and I have it reversed here. So this is not 2 sin theta. 2 sin theta is the vertical. That's the green. So let's try that again. This is 2 sin theta, alright? And then in the denominator I have the arc length, which is theta in the first half, and so doubled it is 2 theta. So if you like, this is the bow, and up here we have the bowstring. And of course we can cancel the 2's. That's equal to sin theta / theta. And so now why does this tend to 1 as theta goes to 0? Well, it's because as the angle theta gets very small, this curved piece looks more and more like a straight one. Alright? And if you get very, very close here, the green segment and the orange segment would just merge. They would be practically on top of each other. And they get closer and closer and closer to the same length. So that's why this is true. I guess I'll articulate that by saying that short curves are nearly straight. Alright, so that's the principle that we're using. Or short pieces of curves, if you like, are nearly straight. So if you like, this is the principle. So short pieces of curves. Alright? So now I also need to give you a proof of A. And that has to do with this cosine function here. This is the property A. So I'm going to do this by flipping it around, because it turns out that this numerator is a negative number. If I want to interpret it as a length, I'm gonna want a positive quantity. So I'm gonna write down (1 - cos theta) here, and then I'm gonna divide by theta there. Again I'm gonna make some kind of interpretation. Now this time I'm going to draw the same sort of bow and arrow arrangement, but maybe I'll exaggerate it a little bit. So here's the vertex of the sector, but we'll maybe make it a little longer. Alright, so here it is, and here was that middle line, which was the unit. Whoops. OK, I think I'm going to have to tilt it up. OK, let's try from here. Alright, well, you know, on your pencil and paper it will look better than it does on my blackboard. OK, so here we are. Here's this shape. Now this angle is supposed to be theta, and this angle is another theta. So here we have a length which is again theta, and another length which is theta over here. That's the same as in the other picture, except we've exaggerated a bit here. And now we have this vertical line, which again I'm gonna draw in green, the bowstring. But notice that as the vertex gets farther and farther away, the curved line gets closer and closer to being a vertical line. That's sort of the flip side, by expansion, of the zoom-in principle. The principle that curves are nearly straight when you zoom in. If you zoom out, that would mean sending this vertex way, way out somewhere. The curved line, the piece of the circle, gets more and more straight. And now let me show you where this numerator (1 - cos theta) is on this picture. So where is it? Well, this whole distance is 1.
But the distance from the vertex to the green is cosine of theta. Right, because this is theta, so dropping down the perpendicular, this distance back to the origin is cos theta. So this little tiny, bitty segment here is basically the gap between the curve and the vertical segment. So the gap = 1 - cos theta. So now you can see that as this point gets farther away, if this got sent off to the Stata Center, you would hardly be able to tell the difference. The bowstring would coincide with the bow, and this little gap between the bowstring and the bow would be tending to 0. And that's the statement that this tends to 0 as theta tends to 0. The scaled version of that. Yeah, question down here. Student: Doesn't the denominator also tend to 0 though? Professor: Ah, the question is "doesn't the denominator also tend to 0?" And the answer is yes. In my strange analogy with zooming in, what I did was I zoomed out the picture. So in other words, if you imagine you're taking this and you're putting it under a microscope over here, and you're looking at something where theta is getting smaller and smaller and smaller and smaller. But because I want to see my picture, I expanded it. So the ratio is the thing that's preserved. So if I make it so that this gap is tiny. Let me say this one more time. I'm afraid I've made life complicated for myself. If I simply let this theta tend to 0, that would be the same effect as making this closer and closer in, and then the vertical would approach. But I want to keep on blowing up the picture so that I can see the difference between the vertical and the curve. So that's very much like if you are on a video screen and you zoom in, zoom in, zoom in, and zoom in. So the question is what would that look like? That has the same effect as sending this point out farther and farther in that direction, to the left. And so I'm just trying to visualize it for you by leaving the theta at this scale, but actually the scale of the picture is then changing when I do that. So theta is going to 0, but I'm rescaling so that it's of a size that we can look at, and then imagine what's happening to it. OK, does that answer your question? Student: My question then is that seems to prove that that limit is equal to 0/0. Professor: It proves more than that it is equal to 0/0. It's the ratio of this little short thing to this longer thing. And this is getting much, much shorter than this total length. You're absolutely right that we're comparing two quantities which are going to 0, but one of them is much smaller than the other. In the other case we compared two quantities which were both going to 0, and they both end up being about equal in length. Here, the previous one was this green one. Here it's this little tiny bit here, and it's way shorter than the 2 theta distance. Yeah, another question. Student: Is (cos theta - 1) / theta the same as (1 - cos theta) / theta? Professor: Cos theta - 1 over. Professor: So here, what I wrote is (cos delta x - 1) / delta x, OK, and I claimed that it goes to 0. Here, I wrote minus that, that is, I replaced delta x by theta. But then I wrote this thing. So (cos theta - 1) is the negative of this. And if I show that this goes to 0, it's the same as showing the other one goes to 0. Another question? Professor: So the question is, what about this business about arc length. So the word arc length, that orange shape is an arc. And we're just talking about the length of that arc, and so we're calling it arc length.
That's what the word arc length means; it just means the length of the arc. Professor: Why is this length theta? Ah, OK, so this is a very important point, and in fact it's the very next point that I wanted to make. Namely, notice that in this calculation it was very important that we used length. And that means that the way that we're measuring theta is in what are known as radians. Right, so that applies to both B and A; in A it's a scale change and doesn't really matter, but in B it's very important. The only way that this orange length is comparable to this green length, the vertical is comparable to the arc, is if we measure them in terms of the same notion of length. If we measure them in degrees, for example, it would be completely wrong. We divide up the angle into 360 degrees, and that's the wrong unit of measure. The correct measure is the length along the unit circle, which is what radians are. And so this is only true if we use radians. So again, a little warning here, that this is in radians. Now here x is in radians. The formulas are just wrong if you use other units. Ah, yeah? Professor: OK, so the second question is why is this crazy length here 1. And the reason is that the relationship between this picture up here and this picture down here is that I'm drawing a different shape. Namely, what I'm really imagining here is a much, much smaller theta. OK? And then I'm blowing that up in scale. So the scale of this picture down here is very different from the scale of the picture up there. And if the angle is very, very, very small, then the picture has to be very, very long in order for me to finish the circle. So, in other words, this length is 1 because that's what I'm insisting on. So, I'm claiming that that's how I define this circle, to be of unit radius. Another question? Student: [INAUDIBLE] the ratio between 1 - cos theta and theta will get closer and closer to 1. I don't understand [INAUDIBLE]. Professor: OK, so the question is that it's hard to visualize this fact here. So let me take you through a couple of steps, because I think probably other people are also having trouble with this visualization. The first part of the visualization I'm gonna try to demonstrate on this picture up here. The first part of the visualization is that I should think of a beak of a bird closing down, getting narrower and narrower. So in other words, the angle theta has to be getting smaller and smaller and smaller. OK, that's the first step. So that's the process that we're talking about. Now, in order to draw that, once theta gets incredibly narrow, in order to depict that I have to blow the whole picture back up in order to be able to see it. Otherwise it just disappears on me. In fact, in the limit theta = 0, it's meaningless. It's just a flat line. That's the whole problem with these tricky limits. They're meaningless right at the 0/0 level. It's only just a little away that they're actually useful, that you get useful geometric information out of them. So we're just a little away. So that's what this picture down below in part A is meant to be. It's supposed to be that theta is open a tiny crack, just a little bit. And the smallest I can draw it on the board for you to visualize it is using the whole length of the blackboard here for that. So I've opened it a little tiny bit, and by the time we get to the other end of the blackboard, of course, it's fairly wide. But this angle theta is a very small angle. Alright? So I'm trying to imagine what happens as this collapses.
Now, when I imagine that, I have to imagine a geometric interpretation of both the numerator and the denominator of this quantity here. And just see what happens. Now, I claimed the numerator is this little tiny bit over here, and the denominator is actually half of this whole length here. But the factor of 2 doesn't matter when you're seeing whether something tends to 0 or not. Alright? And I claimed that if you stare at this, it's clear that this is much shorter than that vertical curve there. And I'm claiming, so this is what you have to imagine, is that as this gets smaller and smaller and smaller, that has the same effect as this thing going way, way, way farther away, and this vertical curve getting closer and closer and closer to the green. And so the gap between them gets tiny and goes to 0. Alright? So not only does it go to 0, that's not enough for us, but it also goes to 0 faster than this theta goes to 0. And I hope the evidence is pretty strong here, because it's so tiny already at this stage. Alright. We are going to move forward, and you'll have to ponder these things some other time. So I'm gonna give you an even harder thing to visualize now, so be prepared. OK, so now, the next thing that I'd like to do is to give you a second proof. Because it really is important, I think, to understand this particular fact more thoroughly and also to get a lot of practice with sines and cosines. So I'm gonna give you a geometric proof of the formula for sine here, for the derivative of sine. So here we go. This is a geometric proof of this fact. This is for all theta. So far we only did it for theta = 0, and now we're going to do it for all theta. So this is a different proof, but it uses exactly the same principles. Right? So, I want to do this by drawing another picture, and the picture is going to describe Y, which is sin theta, which is, if you like, the vertical position of some circular motion. So I'm imagining that something is going around in a circle. Some particle is going around in a circle. And so here's the circle, here's the origin. This is the unit distance. And right now it happens to be at this location P. Maybe we'll put P a little over here. And here's the angle theta. And now we're going to move it. We're going to vary theta, and we're interested in the rate of change of Y. So Y is the height of P, but we're gonna move it to another location. We'll move it along the circle to Q. Right? So here it is. Here's the thing. So how far did we move it? Well, we moved it by an angle delta theta. So we started at theta, theta is going to be fixed in this argument, and we're going to move a little bit, delta theta. And now we're just gonna try to figure out how far the thing moved. Well, in order to do that we've got to keep track of the height, the vertical displacement here. So we're going to draw this right angle here; this is the position R. And then this distance here is the change in Y. Alright? So the picture is we have something moving around a unit circle. A point moving around a unit circle. It starts at P, it moves to Q. It moves from angle theta to angle theta + delta theta. And the issue is how much does Y move? And the formula for Y is sin theta. So that's telling us the rate of change of sin theta. Alright, well, so let's just try to think a little bit about what this is. So, first of all, I've already said this and I'm going to repeat it here. Delta Y is PR. It's going from P and going straight up to R. That's how far Y moves. That's the change in Y.
That's what I said up in the right-hand corner there. Oops. I said PR but I wrote PQ. Alright, that's not a good idea. Alright. So delta Y is PR. And now I want to draw the diagram again one more time. So here's Q, here's R, and here's P, and here's my triangle. And now what I'd like to do is draw this curve here, which is a piece of the arc of the circle. But really what I want to keep in mind is something that I did also in all these other arguments. Which is, maybe I should have called this orange, that I'm gonna think of the straight line between. So it's the straight line approximation to the curve that we're always interested in. So the straight line is much simpler, because then we just have a triangle here. And in fact it's a right triangle. Right, so we have the geometry of a right triangle, which is going to now let us do all of our calculations. OK, so now the key step is this same principle that we already used, which is that short pieces of curves are nearly straight. So that means that this piece of the circular arc here from P to Q is practically the same as the straight segment from P to Q. So, that's this principle. Well, let's put it over here. That the arc PQ is practically the same as the straight segment from P to Q. So how are we going to use that? We want to use that quantitatively in the following way. What we want to notice is that the distance from P to Q is approximately delta theta. Right? Because the arc length along that curve, the length of the curve, is delta theta. So the length of the green, which is PQ, is almost delta theta. So this is essentially delta theta, this distance here. Now the second step, which is a little trickier, is that we have to work out what this angle is. So our goal, and I'm gonna put it one step below because I'm gonna put the geometric reasoning in between, is I need to figure out what the angle QPR is. If I can figure out what this angle is, then I'll be able to figure out what this vertical distance is, because I'll know the hypotenuse and I'll know the angle, so I'll be able to figure out what the side of the triangle is. So now let me show you why that's possible to do. So in order to do that, first of all, I'm gonna trade the boards and show you where the line PQ is. So the line PQ is here. That's the whole thing. And the key point about this line that I need you to realize is that it's practically perpendicular, it's almost perpendicular, to this ray here. Alright? It's not quite, because the distance between P and Q is non-zero. So it isn't quite, but in the limit it's going to be perpendicular. Exactly perpendicular. The tangent line to the circle. So the key thing that I'm going to use is that PQ is almost perpendicular to OP. Alright? The ray from the origin is basically perpendicular to that green line. And then the second thing I'm going to use is something that's obvious, which is that PR is vertical. OK? So those are the two pieces of geometry that I need to see. And now notice what's happening upstairs in the picture here in the upper right. What I have is the angle theta is the angle between the horizontal and OP. That's angle theta. If I rotate it by ninety degrees, the horizontal becomes vertical. It becomes PR, and the other thing, rotated by 90 degrees, becomes the green line. So the angle that I'm talking about I get by taking this guy and rotating it by 90 degrees. It's the same angle. So that means that this angle here is essentially theta. That's what this angle is. Let me repeat that one more time.
We started out with an angle that looks like this, which is the horizontal, that's the origin straight out horizontally. That's the thing labeled 1. That distance there. That's my right arm, which is down here. My left arm is pointing up and it's going from the origin to the point P. So here's the horizontal, and the angle between them is theta. And now, what I claim is that if I rotate by 90 degrees up, like this, without changing anything - so that was what I did - the horizontal will become a vertical. That's PR. That's going up, PR. And if I rotate OP 90 degrees, that's exactly PQ. So let me draw it on there one time. Let's do it with some arrows here. So I started out with this and then, we'll label this as orange, OK, so red to orange, and then I rotate by 90 degrees, and the red becomes this, starting from P, and the orange rotates around 90 degrees and becomes this thing here. Alright? So this angle here is the same as the other one which I've just drawn. Different vertices for the angles. Well, I didn't say that all arguments were supposed to be easy. Alright, so I claim that the conclusion is that this angle is approximately theta. And now we can finish our calculation, because we have something with the hypotenuse being delta theta and the angle being theta, and so this segment here, PR, is approximately the hypotenuse length times the cosine of the angle. And that is exactly what we wanted. If we divide by delta theta, we get (delta Y) / (delta theta) is approximately cos theta. And that's the same thing as saying that in the limit, as delta theta goes to 0, (delta Y) / (delta theta) equals cos theta. So we get an approximation on a scale that we can visualize, and in the limit the formula is exact. OK, so that is a geometric argument for the same result. Namely, that the derivative of sine is cosine. Yeah? Professor: You will have to do some kind of geometric proofs sometimes. When you'll really need this is probably in 18.02. So you'll need to make reasoning like this. This is, for example, the way that you actually develop the theory of arc length. Dealing with delta x's and delta y's is a common tool. Alright, I have one more thing that I want to talk about today, which is some general rules. We took a little bit more time than I expected with this. So what I'm gonna do is just tell you the rules, and we'll discuss them in a few days. So let me tell you the general rules. So these were the specific ones, and here are some general ones. So the first one is called the product rule. And what it says is that if you take the product of two functions and differentiate it, you get the derivative of one times the other plus the other times the derivative of the one. Now the way that you should remember this, and the way that I'll carry out the proof, is that you should think of it as: you change one at a time. And this is a very useful way of thinking about differentiation when you have things which depend on more than one function. So this is a general procedure. The second formula that I wanted to mention is called the quotient rule, and that says the following. That (u/v)' has a formula as well. And the formula is (u'v - uv') / v^2. So this is our second formula. Let me just mention, both of them are extremely valuable and you'll use them all the time. This one, of course, only works when v is not 0.
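For reference, the two rules just stated, written symbolically:

```latex
(uv)' = u'v + uv', \qquad
\left(\frac{u}{v}\right)' = \frac{u'v - uv'}{v^2} \quad (v \neq 0).
```

For example, the product rule gives (x^2 sin x)' = 2x sin x + x^2 cos x.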
Alright, so because we're out of time we're not gonna prove these today, but we'll prove them next time, and you're definitely going to be responsible for these kinds of proofs.
Source: ocw.mit.edu
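As a quick numerical sanity check of the two limit facts A and B used throughout the lecture (angles in radians), the small Python snippet below evaluates both quotients for shrinking t; the printed values approach 0 and 1, respectively:

```python
# Numerical check of the lecture's limit facts (angles in radians):
#   A: (cos(t) - 1)/t -> 0    B: sin(t)/t -> 1    as t -> 0
import math

for t in [0.1, 0.01, 0.001]:
    a = (math.cos(t) - 1.0) / t
    b = math.sin(t) / t
    print(f"t = {t}: (cos t - 1)/t = {a:+.7f}, sin(t)/t = {b:.7f}")
```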
Square and circles
The square in the picture has a side length of a = 20 cm. Circular arcs have centers at the vertices of the square. Calculate the areas of the colored regions. Express the areas using side a.
Next similar math problems:
- Eq triangle minus arcs In an equilateral triangle with a 2 cm side, the arcs of three circles are drawn from the centers at the vertices with radii of 1 cm. Calculate the content of the shaded part - the region that makes up the difference between the triangle area and the circular cuts.
- Equilateral triangle v3 Calculate the content of the colored gray part. The equilateral triangle has a side length of 8 cm. The arc centers are the vertices of the triangle.
- Ratio of squares A circle is given in which a square is inscribed. A smaller square is inscribed in the circular segment formed by a side of the square and the arc of the circle. What is the ratio of the areas of the large and small squares?
- Quarter of a circle Calculate the circumference of a quarter circle if its content is S = 314 cm2.
- Arc and segment Calculate the length of the circular arc l, the area of the circular sector S1, and the area of the circular segment S2. The radius of the circle is 11 and the corresponding angle is ?.
- Circular ring In a square with area 16 cm2, a circle k1 is inscribed and a circle k2 is circumscribed. Calculate the area of the circular ring that the circles k1 and k2 form.
- Four circles 1) Calculate the circle radius if its area is 400 cm square. 2) Calculate the radius of the circle whose circumference is 400 cm. 3) Calculate the circle circumference if its area is 400 cm square. 4) Calculate the circle's area if its perimeter is 400 cm.
- Calculate the span of the arc, which is part of a circle with diameter d = 20 m and whose height is 6 m.
- Two circles Two circles with the same radius r = 1 are given. The center of the second circle lies on the circumference of the first. What is the area of a square inscribed in the intersection of the given circles?
- The areas of the two circles are in the ratio 2:20. The larger circle has diameter 20. Calculate the radius of the smaller circle.
- Recursion squares In the square ABCD a square is inscribed so that its vertices lie at the centers of the sides of the square ABCD. The procedure of inscribing squares is repeated in this way. The side length of square ABCD is a = 22 cm. Calculate: a) the sum of the perimeters of all squares
- Math heart A stylized heart shape is created from a square with side 5 cm and two semicircles over its sides. Calculate the content area and its circumference.
- Tripled square If you triple the length of the sides of the square ABCD, you increase its content by 200 cm2. How long is the side of the square ABCD?
- Circular segment Calculate the area S of the circular segment and the length of the circular arc l. The height of the circular segment is 2 cm and the angle α = 60°. Help formula: S = 1/2 r2 (β - sin β)
- Circle and square An ABCD square with a side length of 100 mm is given. Calculate the radius of the circle that passes through the vertices B, C and the center of the side AD.
- Given: arc length = 17 cm, area of sector = 55 cm2. Find: arc angle = ?, radius of the sector = ?
- Two annuluses The area of the annular circle formed by two circles with a common center is 100 cm2.
The radius of the outer circle is equal to twice the radius of the inner circle. Determine the radius of the outer circle in centimeters.
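As a worked illustration, the last problem above follows directly from the annulus-area formula (using only the values stated in the problem):

```latex
\pi R^2 - \pi r^2 = 100\ \text{cm}^2,\qquad R = 2r
\;\Longrightarrow\; 3\pi r^2 = 100
\;\Longrightarrow\; r = \sqrt{\frac{100}{3\pi}} \approx 3.26\ \text{cm},\qquad
R = 2r \approx 6.51\ \text{cm}.
```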
Aerosols, i.e., particulate matter (PM) in the atmosphere, play a key role in radiation transfer in the Earth system. Aerosols scatter or absorb sunlight, thereby cooling or heating atmospheric layers. Individual aerosol particles also act as condensation nuclei or ice nuclei in the troposphere. Therefore, knowledge of the location and optical properties of aerosols is crucial for understanding the thermal balance of the atmosphere.1 In addition, PM near the Earth's surface is a known risk factor for human health; the International Agency for Research on Cancer has classified PM as a high-level risk factor for lung cancer and other cancers.2 Thus, PM near the Earth's surface is one of the most important properties of the ambient atmospheric environment that should be monitored.
The most powerful remote sensing technique for determining the vertical distribution of an aerosol layer in near real time is light detection and ranging (lidar). Although a precipitation radar can cover a horizontal area several hundred kilometers in diameter, a lidar can measure the aerosol distribution only directly above the instrument, or over a range of at most several kilometers if the instrument has a horizontal scanning mechanism. Thus, to understand the three-dimensional distribution of aerosols on a regional scale, a lidar network is indispensable. Several lidar networks have already been constructed. For example, the European Aerosol Research Lidar Network (EARLINET), which has stations in Europe, was established in 2000.3 Lidars in EARLINET are organized to observe the atmosphere in the region synchronously, and their data quality is assured for research activities related to radiative forcing. In East Asia, the National Institute for Environmental Studies (NIES), Japan, has developed a network of lidar instruments called the Asian dust and aerosol lidar observation network (AD-Net), which is a collaborative project of many universities, institutes, and national and local governments. Although AD-Net was initially established to monitor the transport of Asian dust, anthropogenic particles are now another important target. The Micropulse Lidar Network (MPLNET),4 led by NASA (National Aeronautics and Space Administration), has the widest geographical distribution of lidars in the world, from the equatorial to the polar regions, and has a close relation to AERONET (NASA Aerosol Robotic Network). The Latin American Lidar Network (LALINET, aka ALINE)5 was established to study the climatology of aerosols and is also devoted to the early warning of volcanic eruptions. Although each network has its own strategy fitting its technical basis and location, these lidar networks all contribute to GALION (GAW Aerosol Lidar Observation Network) for monitoring atmospheric conditions around the globe.
In this paper, we describe the historical and current status of AD-Net along with the hardware specifications of the lidar systems and the data processing method, and we present examples of the application of lidar data. We also introduce the new generation of lidar equipment and techniques used in AD-Net. Finally, we review the future outlook of this ground-based lidar network.
Elastic-Scattering Lidar in AD-Net
History of AD-Net and Station Locations
In 1996, NIES, in Tsukuba, Japan, developed an automated elastic-scattering lidar (EL) system that could operate continuously without human intervention.
At that time, only the total backscatter signal intensity at 532 nm was recorded, and only aerosol and cloud layers could be identified. In 1999, the system was modified so that the polarization state of the backscattered light could also be recorded, allowing aerosol particle shapes to be investigated. With this modification, the NIES automated lidar system acquired the capability to distinguish mineral dust particles from anthropogenic particles, and ice particles in cirrus clouds from water droplets in convective clouds. In 2001, an initial lidar network was constructed for the ACE-Asia campaign,6 an important aim of which was to investigate the distribution of Asian dust. For ACE-Asia, NIES operated lidar observatories at Tsukuba, Nagasaki, and Beijing and collaborated with other lidar observatories in Japan.7,8 Since then, the number of NIES-type lidar observatories, which follow a mostly uniform operational strategy, has increased; at present, AD-Net comprises a total of 20 observatories in East Asia (i.e., in Japan, Korea, China, and Mongolia). Figure 1 shows the locations of AD-Net lidar observatories in operation as of July 2016. Some lidars have since been upgraded to Raman lidar (RL) or multiwavelength Raman lidar (MRL) systems (see Sec. 3). However, the data treatment for the elastic-scattering channels of such lidars is the same as that for the EL. Thus, we describe the EL system first (Sec. 2.2) and then introduce the common data processing procedures for the elastic-scattering channels in Sec. 2.3.
Hardware of an Elastic-Scattering Lidar System
The fundamental and common information about aerosols is retrieved from the elastic-scattering channels in all types of AD-Net lidars. Here, the specifications and configuration of the AD-Net elastic lidar (EL) system are described. Table 1 and Fig. 2(a) summarize the specifications and configuration of the EL. The EL employs a flashlamp-pumped Q-switched Nd:YAG laser as the light source. Pulsed light is emitted at a fundamental wavelength of 1064 nm and a second harmonic of 532 nm, produced by second-harmonic generation. A pulse with a total energy of 50 mJ (20 mJ at 1064 nm plus 30 mJ at 532 nm) and a duration of 8 ns is expanded by a beam expander and emitted at a repetition rate of 10 Hz toward the zenith. Backscattered light from the atmosphere is collected by a Schmidt–Cassegrain telescope with a diameter of 20 cm and a field of view of 1 mrad and is separated by a dichroic mirror into its 1064-nm and 532-nm components. The 532-nm component is further separated by a beam splitter cube into two polarization components: one with its polarizing angle parallel to that of the emitted laser beam and one with it perpendicular. The extinction ratio of the beam splitter cube is 0.001. The two 532-nm polarization components are detected by photomultiplier tubes (PMTs), and the 1064-nm component is detected by an avalanche photodiode. Signal intensities are sampled at 25 MHz (corresponding to a range resolution of 6 m), digitized with 12- or 16-bit A/D converters, and recorded by a personal computer as vertical profiles of the three channels as functions of the altitude above the lidar. To obtain a high signal-to-noise ratio, signals are accumulated for 5 min (i.e., 3000 laser shots). To reduce flashlamp consumption, a 10-min rest period follows each 5-min emission period; thus, the system obtains four profiles each hour.
Specifications of the AD-Net lidars.
| Lidar type | EL and RL | MRL | RHL |
|---|---|---|---|
| Laser | Nd:YAG, linearly polarized (Quantel, Brilliant Ultra) | Nd:YAG, linearly polarized (Quantel, Brilliant Easy) | Nd:YAG, linearly polarized, injection-seeded (Continuum, Surelite I) |
| Wavelength (nm) | 1064, 532 | 1064, 532, 355 | 1064, 532, 355 |
| Pulse energy | 20 mJ at 1064 nm, 30 mJ at 532 nm | 110, 110, 50 mJ at 1064, 532, 355 nm | 100 mJ for each wavelength |
| Repetition rate | 10 Hz | 10 Hz | 10 Hz |
| Telescope | Schmidt–Cassegrain, dia. = 20 cm (Celestron, C8-AXLT) | Schmidt–Cassegrain, dia. = 20 cm (Celestron, C8-AXLT) | Cassegrain, dia. = 21 cm (Takahashi, CN-212) |
| Bandpass filter | 1 nm for each wavelength | 1 nm for each wavelength | 1 nm for each wavelength |
| Detectors | APD (Licel, APD-1.5) for 1064 nm; PMT (Hamamatsu, H7421-40) or PMT (Licel, PM-HV-20) for 607 nm*; PMTs (Hamamatsu, H6780-02) for 532 nm | APD (Licel, APD-1.5) for 1064 nm; PMT (Licel, R9880-20) for 607 nm; PMTs (Licel, R9880-110) for 532, 387, 355 nm | APD (Licel, APD-1.5) for 1064 nm; PMTs (Licel, PM-HV-20) for 532 nm; PMTs (Licel, PM-HV-03) for 387, 355 nm |
| Data acquisition | A/D converter: 25 MHz, 12 or 16 bit (Turtle Industry) for 1064, 532 nm; photon counter: 100 MHz count rate (SigmaSpace) or 250 MHz count rate (Licel) for 607 nm* | A/D converter: 25 MHz, 16 bit (Turtle Industry) for 1064, 532, 355 nm; photon counter: 250 MHz count rate (Licel) for 607, 387 nm | A/D converter: 40 MHz, 12 bit (Licel) for 1064, 532, 355 nm; photon counter: 250 MHz count rate (Licel) for 387 nm |

*607-nm nitrogen Raman channel (RL only).

The lidar system is installed in a room with a glass window in the roof. Because the observations are made through the glass, the system can be operated in all weather conditions, including rain and snow. Every hour, the observation results are transferred to an NIES data server via the Internet, the condition of the lidar system is checked, and an initial analysis is performed. An exception is the Beijing lidar, which by Chinese law is not allowed to send data to NIES in real time.

To obtain the optical properties of aerosols, the following analyses are applied to the AD-Net observation results in the elastic-scattering channels of the EL, RL, and MRL. The background-removed and range-corrected signal intensities of the three channels, denoted X∥(z), X⊥(z), and X1064(z), must be calibrated before physical quantities such as the attenuated backscatter coefficient can be determined. First, the total backscatter intensity at 532 nm, X532(z), is calculated as the sum of the parallel and perpendicular components, using the channel gain calibration described later in this section (Sec. 2.2). Once X532 has been obtained, a frequency histogram of the ratio of X532 to the calculated molecular backscatter in the height range of 1200 to 6000 m is constructed. The histogram peak corresponds empirically to the system calibration constant C, because the molecular backscatter is dominant in this height range under light aerosol loading conditions. Then, the attenuated backscatter coefficient at 532 nm is estimated from X532(z) and C, and the volume linear depolarization ratio δv(z), defined as the ratio of the perpendicular component to the parallel component, is calculated. Examples are shown in Fig. 3, and the calibration procedure is summarized in Fig. 4. The depolarization ratio, a measure of the irregularity of the scatterer shape,9 is the most important property of Asian dust measured by lidar systems.
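To make the preprocessing chain above concrete, the following is a minimal Python sketch of background subtraction, range correction, the histogram-based calibration constant, and the quantities that follow from it. All array names, the gain-ratio argument, and the far-range background estimate are illustrative assumptions, not the AD-Net production code.

```python
import numpy as np

def range_correct(p, z, n_bg_bins=200):
    """Background-subtract and range-correct one raw profile.

    p: accumulated raw signal on the 6-m range grid; z: altitudes (m).
    The background is estimated from the far end of the profile, where
    the atmospheric return is assumed negligible (an assumption).
    """
    return (p - p[-n_bg_bins:].mean()) * z**2

def total_532(x_par, x_perp, gain_ratio):
    """Total 532-nm signal; gain_ratio is the parallel/perpendicular channel
    sensitivity factor from the sheet-polarizer calibration (Sec. 2.2)."""
    return x_par + gain_ratio * x_perp

def calibration_constant(x_532, beta_mol, z, zmin=1200.0, zmax=6000.0, nbins=100):
    """System constant C as the histogram peak of X532/beta_mol between
    1200 and 6000 m, where molecular scattering usually dominates."""
    sel = (z >= zmin) & (z <= zmax)
    hist, edges = np.histogram(x_532[sel] / beta_mol[sel], bins=nbins)
    i = hist.argmax()
    return 0.5 * (edges[i] + edges[i + 1])

# The attenuated backscatter and volume depolarization ratio then follow:
#   beta_att_532 = x_532 / C
#   delta_v      = (gain_ratio * x_perp) / x_par
```

The number of histogram bins and background bins are placeholders; in practice they would follow the empirical choices described above.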
To calculate the depolarization ratio, it is important to first calibrate the signal intensities of the parallel and perpendicular channels. In AD-Net, the difference in sensitivity between the two PMTs used to detect these components is checked routinely by the following method. A sheet polarizer whose polarizing direction is set at 45 deg to the polarizing plane of the emitted light is inserted in front of the beam splitter cube, and the backscatter signal from the sky is recorded as a reference signal. In this reference record, the light intensities of the two channels are equal after the sheet polarizer, so the calibration constant can be obtained by comparing the recorded values of X∥ and X⊥. Next, the sheet polarizer is rotated 90 deg to set the polarizing angle at −45 deg, and another reference signal is recorded. This pair of reference signals reduces any error caused by imperfect positioning of the sheet polarizer.10 The reference signals are usually recorded once per year for each lidar.

The cloud base height must be determined first because the aerosol layer analysis cannot be applied to cloud layers. In AD-Net, the cloud base height is determined from the vertical gradient of the range-corrected signal and from the peak value of the signal between the cloud base and the apparent cloud top, where the signal intensity returns to its value at the cloud base. The threshold values are determined empirically because they depend on the vertical resolution and signal accumulation time of the lidar measurements, and there are no true cloud base height reference data with equivalent time and height resolutions at each observatory. Therefore, the thresholds are determined subjectively by inspecting the cloud distribution on time–height sections [see Figs. 3(d) and 3(e)].

Scattering by rain droplets or snowflakes obscures the signal from aerosols. Thus, data recorded during rainy or snowy conditions must be eliminated before the analysis. Moreover, the affected profiles must be selected using the lidar data alone, without ancillary data, because not all observatories have a surface rain gauge or an equivalent rain or snow detection system. AD-Net uses the color ratio (CR, the ratio of X1064 to X532) to distinguish rainy and clear (no-rain) regions. Large droplets have a large CR value, so once CR exceeds a threshold (1.1) over a certain vertical interval in the lower atmosphere, the profile is classified as a rain or snow profile and is not used for further analysis of aerosols.

At present, in the AD-Net lidar systems, the overlap between the laser beam and the field of view of the telescope is insufficient for near-field observation. Typically, full overlap is achieved at around 500 to 600 m altitude. The compensation function O(z) is therefore inferred from vertical profiles obtained on a day when the planetary boundary layer (PBL) is well developed, when the aerosol distribution is expected to be homogeneous near the surface. O(z) is determined such that the slope of the compensated signal is constant near the surface, and it is redetermined after routine maintenance of the lidar equipment is carried out. With this compensation, the optical properties are provided above 120 m altitude. Recently, a small telescope with a wide field of view has been deployed at several lidar observatories in AD-Net, allowing the signal to be measured near the ground (above 60 m altitude). At these observatories, O(z) can be determined without assuming homogeneous mixing near the surface.
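The polarization calibration and rain screening just described also lend themselves to compact implementations. The sketch below again uses assumed argument names, and fills in placeholders (the depth of the low-level interval, the combination of the two polarizer records by a geometric mean, which is one common choice) where the text leaves the details unspecified.

```python
import numpy as np

def gain_ratio_from_polarizer(par_p45, perp_p45, par_m45, perp_m45):
    """Parallel/perpendicular sensitivity ratio from the two sheet-polarizer
    reference records at +45 and -45 deg.

    After the polarizer, both detectors receive equal intensities, so each
    record gives one estimate of the relative gain; combining the two
    orientations reduces the effect of imperfect polarizer positioning.
    """
    r_p45 = np.nanmean(par_p45 / perp_p45)
    r_m45 = np.nanmean(par_m45 / perp_m45)
    return np.sqrt(r_p45 * r_m45)   # geometric mean of the two estimates

def is_rain_or_snow(x_1064, x_532, z, threshold=1.1, zmax=1000.0, min_bins=20):
    """Flag a profile when the color ratio X1064/X532 exceeds the empirical
    threshold (1.1) over a sustained interval in the lower atmosphere.
    zmax and min_bins stand in for the empirical interval definition."""
    sel = z <= zmax
    cr = x_1064[sel] / x_532[sel]
    return int(np.count_nonzero(cr > threshold)) >= min_bins
```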
The lidar equation for elastic scattering is resolved into two components (particles and molecules) by the method described by Fernald.11 The ratio of the extinction coefficient to the backscatter coefficient for aerosols (the lidar ratio, S1) is assumed to be 50 sr. This value was initially determined from values reported in the literature on Asian dust because the main target of this lidar network was originally Asian dust. The vertical profile of molecular densities is obtained from the CIRA-86 global climatology of atmospheric parameters.12 Usually, the maximum height of Fernald's inversion is set to an altitude of 9 km (6 km before 2012) if the signal-to-noise ratio within that interval is sufficient to solve the equation. Because 9 km is usually in the troposphere, where the aerosol extinction value is unknown, the lidar equation is first solved with the initial extinction value set to zero. If the resulting aerosol extinction value is negative anywhere between 0 and 9 km height, the extinction at 9 km is increased slightly and the lidar equation is solved again. Although this method is not rigorous, the obtained extinction is sometimes validated by independent measurements made near the surface or by columnar optical depth observations. Finally, the particulate depolarization ratio δp is calculated from the volume depolarization ratio δv and the particulate backscatter coefficient. These procedures are explained in detail in Ref. 7 and outlined in Fig. 5.

In East Asia, the aerosols being analyzed are assumed to be an external mixture of two components. One component is mineral dust, mainly Asian dust that has been transported a long distance, which has a particulate depolarization ratio δd of 35%.7 The other component consists of spherical particles and is assumed to be a mixture of sulfate, nitrate, organic carbon, elemental carbon particles, and sea-salt droplets. The particulate depolarization ratio δs of the latter component is zero because the particles are spherical and Mie scattering theory is applicable. The observed particulate extinction coefficient and particulate linear depolarization values are linear combinations of these two components. Thus, it is possible to use the observed δp to separate the total particulate extinction coefficient αp into two components, the dust extinction coefficient αd and the spherical-particle extinction coefficient αs, by calculating the optical dust mixing ratio R with the following equation:13

R = (δp − δs)(1 + δd) / [(δd − δs)(1 + δp)],

with δd = 0.35 and δs = 0, so that αd = R αp and αs = (1 − R) αp. See Figs. 3(d) and 3(e) for examples of time–height sections of αd and αs.

Although the system constant C is initially estimated from a histogram of all data, a more precise estimate is made after the inversion. The time series of the attenuated backscatter at 600 m height is compared with the total backscatter coefficient (the sum of the molecular backscatter from CIRA-86 and the particulate backscatter obtained by the inversion) at 600 m height to evaluate the system calibration constant again. The revised C is then used to re-estimate the attenuated backscatter from X532, and the dependent quantities are recalculated in the same way.

Finally, the AD-Net server generates numerical files in netCDF format that contain time–height sections of the retrieved quantities (the attenuated backscatter coefficients at 532 and 1064 nm, δv, αp, αd, αs, and δp). Quantities below 120 m altitude are eliminated from the published data because their reliability is not high, owing to the uncertainty of the overlap correction function O(z). Currently, the file for the current month is updated hourly, and files are archived one per month. Two types of missing-value flags (one for cloud layers and −999 for regions above clouds, with rainy or snowy conditions, or with no observations) are embedded into the extinction coefficient fields. Simultaneously, figures of time–height sections of these parameters are plotted, and both the numerical files and the figures are uploaded onto the AD-Net web page every hour. The procedure is outlined in Fig. 6.
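As an illustration of the inversion and the two-component separation, here is a compact Python sketch. The backward recurrence follows the standard discrete form of Fernald (1984); the mixing-ratio formula is the reconstruction given above. Overlap correction, the iterative adjustment of the boundary value, and grid handling are omitted, and all names are illustrative.

```python
import numpy as np

S_A = 50.0                    # assumed aerosol lidar ratio (sr), as in the text
S_M = 8.0 * np.pi / 3.0       # molecular extinction-to-backscatter ratio (sr)

def fernald_backward(x, beta_m, dz, beta_a_ref=0.0):
    """Backward Fernald (1984) solution for the backscatter profile,
    starting from a reference height (the top bin) where the aerosol
    backscatter is assumed known (initially zero, as described above).

    x: range-corrected signal; beta_m: molecular backscatter; dz: bin size (m).
    """
    n = x.size
    beta_t = np.empty(n)                       # aerosol + molecular backscatter
    beta_t[-1] = beta_a_ref + beta_m[-1]
    for i in range(n - 2, -1, -1):
        a = (S_A - S_M) * (beta_m[i] + beta_m[i + 1]) * dz
        num = x[i] * np.exp(a)
        den = x[i + 1] / beta_t[i + 1] + S_A * (x[i + 1] + num) * dz
        beta_t[i] = num / den
    beta_a = beta_t - beta_m
    return beta_a, S_A * beta_a                # aerosol backscatter, extinction

def split_dust(alpha_p, delta_p, delta_d=0.35, delta_s=0.0):
    """Two-component separation using the mixing-ratio formula above."""
    r = (delta_p - delta_s) * (1 + delta_d) / ((delta_d - delta_s) * (1 + delta_p))
    r = np.clip(r, 0.0, 1.0)                   # keep the ratio physical
    return r * alpha_p, (1 - r) * alpha_p      # dust and spherical extinction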
Validation of Dust Extinction and Comparison with Other Instruments

It is not a simple task to validate dust extinction coefficients obtained by lidar because this parameter cannot be obtained directly by other instruments. However, the surface mass concentration of particles consists mainly of mineral dust during periods with heavy loading of Asian dust. Thus, dust extinction near the surface can be compared with filter-sampled mass concentrations by making some assumptions about vertical mixing near the surface. This comparison was first done in Beijing, China.13 In Ref. 13, it is reported that the dust extinction coefficient retrieved by lidar is almost proportional to the mass concentration of total suspended particles. A similar comparison has been conducted at many stations in Japan during several Asian dust events.14 In addition, in Ref. 15, the iron concentration in daily PM2.5 samples is compared with the dust extinction coefficient. The results of all these comparisons suggest that dust extinction coefficients determined by lidar are usable as an index of surface dust loading.

Utilization of the Retrieved Optical Parameters

Use with chemical transport models

A typical use of dust and spherical extinction coefficients is to validate chemical transport models (CTMs). A CTM calculates the four-dimensional distribution of various chemical components (constituent gaseous species and particulates) using meteorological data, emission inventories, chemical reactions, and physical processes (transport, removal, etc.). AD-Net lidar data were first compared with a CTM called CFORS, developed by Kyushu University.7 To validate CTM results, the lidar data should be separated into independent chemical components. Thus, the dust extinction coefficient is useful for validating dust processes in the model, and the spherical extinction coefficient is compared with the total extinction due to sulfate, nitrate, organic carbon, elemental carbon, and sea salt.16,17

A more sophisticated application of lidar data is to incorporate them into the CTM by assimilation. Data assimilation is a technique for modifying the aerosol loading in the model in accordance with observed results. In Ref. 18, lidar dust extinction data were assimilated to correct the dust emission factor at the source region of Asian dust in the model. After the dust extinction data had been assimilated, the dust concentration at the surface (PM10) simulated by the model corresponded more closely to that observed at several observatories in Japan.

Epidemiology of Asian dust

To study the health effects of PM, the mass concentration of particles is usually used as an exposure index. In Japan, environmental standards have been determined for both SPM (suspended PM, almost equivalent to PM7) and PM2.5. However, the chemical components of PM are more important for investigating the actual mechanism of the health effects of the particles. The dust extinction coefficient is suitable for studies of the epidemiology of Asian dust because it captures the quantity of dust continuously. The daily mean value near the surface has been utilized in several epidemiological studies in Japan, some of which found a correlation between health impact and the dust extinction coefficient.19–23

Climatology of aerosols

The climatology of aerosols in East Asia can be derived from AD-Net data. Horizontal, seasonal, and interannual variations of the extinction coefficients are calculated for each of the two components.
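Such climatologies amount to simple reductions of the hourly time–height products: mean vertical profiles, and the percentile statistics used below for Figs. 8–10. A minimal sketch follows, assuming (time, height) arrays with NaN for missing or screened values; the layout and names are assumptions.

```python
import numpy as np

def mean_profile(alpha):
    """Multi-year mean vertical profile from a (time, height) extinction
    array, ignoring NaN-coded missing or cloud-screened values."""
    return np.nanmean(alpha, axis=0)

def monthly_percentiles(alpha_at_height, months, levels=(5, 25, 50, 75, 95)):
    """Percentiles of the hourly time series at one selected height,
    computed month by month for comparison among stations or years.
    Assumes every month is represented in the record."""
    return {m: np.nanpercentile(alpha_at_height[months == m], levels)
            for m in range(1, 13)}
```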
In Fig. 7, mean vertical profiles of the dust and spherical-particle extinction coefficients are plotted. An exponential decrease with height is apparent in the dust extinction coefficient at most stations; however, a boundary-layer structure, with a top at around 1.8 to 2.3 km, is found in the spherical-particle extinction coefficient at many stations. To depict horizontal, annual, and interannual variations of the spherical-particle extinction coefficient more clearly, the probability density of the time series at a certain height is examined by calculating the 5th, 25th, 50th, 75th, and 95th percentiles at the selected height and comparing the results among observatories or among months or years. The spatial distribution of the extinction coefficients of spherical particles among observatories is shown in Fig. 8. An apparent gradient from west to east can be seen, except for two stations in Mongolia, but this effect disappears at higher altitudes. Intra-annual changes differ depending on altitude and among stations. Figure 9 shows monthly percentiles based on data from 2010 to 2015. In the PBL (500 m), seasonal changes are apparent in large cities (Beijing, Seoul, and Tokyo), whereas the annual cycle is different in the lower troposphere (2500 m). These results are consistent with the findings of a comprehensive analysis of anthropogenic particles in this region.16 The spherical-particle extinction coefficient has also shown a slight interannual decrease in these cities from 2008 to 2015 (Fig. 10). These results confirm the findings of several studies that have reported a decrease in emissions of anthropogenic gaseous pollutants in this region since 2006.24

Qualitative analysis of internal mixtures using the color ratio

The backscatter signal at 532 nm obtained by AD-Net has been fully utilized for aerosol studies, as shown above. However, the signal at 1064 nm has so far been used in our procedures only to detect rainy and cloudy conditions. Thus, more direct use of the 1064-nm signal for aerosol analysis is a challenge for AD-Net. One example of the utilization of 1064 nm in aerosol studies is related to the internal mixture of dust and anthropogenic particles. If dust, with high depolarization-ratio and color-ratio values, is mixed externally with anthropogenic particles, with low depolarization-ratio and color-ratio values, the observed values should reflect the mixture. However, in the real atmosphere, the observed values sometimes do not correspond to those expected for an external mixture. In some cases, a lower depolarization ratio and a higher color ratio are detected. This implies that the mineral dust particles were chemically modified (e.g., partly deliquesced) and that their shapes became more spherical during transport together with anthropogenic particles. In such cases, it is possible that the dust and anthropogenic particles have become internally mixed.25,26

Obtaining More Optical Parameters by Multichannel Lidar

We have used ELs for a long time to monitor aerosols as well as clouds in East Asia and have provided mineral dust and spherical-aerosol extinction coefficients, as well as total aerosol optical properties and cloud base heights, from the lidar measurements. Lidar measurements with more channels make it possible to provide more detailed information on aerosol optical and microphysical properties and a more advanced classification of aerosol components. Thus, we have introduced independent extinction measurements, by RL and high-spectral-resolution (HSR) lidar techniques, and multiwavelength measurements at several main observation sites of AD-Net.

Use of Raman Lidar Techniques

AD-Net uses the nitrogen RL technique. Details on the RL system, data analysis method, calibration method, measurement uncertainties, and observation results are given in Ref. 27. Here, a summary of the RL observations in AD-Net is given.
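As background for this summary, the core of a nitrogen-Raman extinction retrieval can be sketched in its standard (Ansmann-type) form. This is a generic illustration under common assumptions (a fixed Ångström exponent and a derivative that would be smoothed in practice), not necessarily the AD-Net processing of Ref. 27; all names are assumptions.

```python
import numpy as np

def raman_extinction(p_raman, n_mol, alpha_m0, alpha_mr, z,
                     lam0=532.0, lam_r=607.0, angstrom=1.0):
    """Aerosol extinction at the laser wavelength from the nitrogen Raman
    signal, using the standard Ansmann-type relation.

    p_raman : background-subtracted Raman channel profile
    n_mol   : nitrogen number-density profile (e.g., from a model atmosphere)
    alpha_m0, alpha_mr : molecular extinction at the laser and Raman wavelengths
    angstrom: assumed Angstrom exponent for the aerosol wavelength dependence
    """
    ratio = np.log(n_mol / (p_raman * z**2))
    d_ratio = np.gradient(ratio, z)      # smoothed (e.g., wavelet/moving average)
    return (d_ratio - alpha_m0 - alpha_mr) / (1.0 + (lam0 / lam_r) ** angstrom)
```

The denominator accounts for the aerosol extinction acting at both the outgoing (532 nm) and Raman-shifted (607 nm) wavelengths.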
We improved the ELs at five sites of AD-Net (Tsukuba, Matsue, Fukue, Seoul, and Beijing) by adding a nitrogen Raman scattering measurement channel at 607 nm and have conducted continuous observations since 2009. As a result, this RL system can provide the extinction coefficient α, backscatter coefficient β, and depolarization ratio δ of particles at 532 nm and the attenuated backscatter coefficient at 1064 nm. The configuration and specifications are given in Fig. 2(a) and Table 1. Furthermore, we built a multiwavelength RL system (MRL), providing α, β, and δ at both 355 and 532 nm and the attenuated backscatter coefficient at 1064 nm, at the Fukuoka, Toyama, and Hedo sites of AD-Net. The MRL observations started in 2013 at Fukuoka, in 2014 at Hedo, and in 2015 at Toyama. The configuration and specifications are given in Fig. 2(b) and Table 1. For the RL and MRL measurements, we conduct photon-counting measurements for the Raman backscatter and analog measurements for the elastic backscatter. No Raman channel data are available in the daytime because of the strong sunlight. After signal noise is reduced using wavelet transform analysis and a moving average, the uncertainties of α, β, δ, and the lidar ratio S of aerosols in the PBL derived from the RL and MRL measurements are evaluated to be less than 5%, which indicates that the RL and MRL are sufficiently accurate for characterizing aerosol optical properties in the PBL.

Use of High Spectral Resolution Lidar Techniques

The HSR lidar technique is more sensitive than the RL technique, so an HSR lidar can provide measurements with a sufficient signal-to-noise ratio in the daytime as well as the nighttime to retrieve α, β, and δ. We developed a Raman-HSR lidar (RHL), implementing both the nitrogen RL technique at 387 nm and the HSR lidar technique at 532 nm [Fig. 2(c) and Table 1], and have operated it at the Tsukuba site since 2014. This RHL provides the same products as the MRL (i.e., α, β, and δ at 355 and 532 nm and the attenuated backscatter coefficient at 1064 nm). The system uses an iodine absorption filter28 to implement the HSR technique, which means that the laser wavelength must be tuned to the center of an iodine absorption line so that the filter blocks the elastically scattered light from particles and efficiently transmits the Rayleigh-scattered light. We developed and implemented an automatic feedback control system using two acousto-optic modulators29 to keep the laser wavelength tuned to the absorption line stably over long periods. We conduct photon-counting measurements for the Raman backscatter and analog measurements for the elastic backscatter, including the Rayleigh scatter. No Raman channel data are available in the daytime. After signal noise is reduced in a manner similar to the MRL data analysis, the uncertainties of α, β, δ, and S of aerosols in the PBL are evaluated to be comparable to or less than those of the MRL, and the nighttime uncertainties are less than half of the daytime uncertainties.
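To illustrate why the HSR technique needs no assumed lidar ratio: the iodine-filtered channel provides a purely molecular reference, so the aerosol backscatter follows from a channel ratio. The simplified sketch below ignores the Rayleigh–Brillouin transmission correction of the iodine filter and assumes an aerosol-free calibration layer; it is illustrative, not the RHL production algorithm.

```python
import numpy as np

def hsrl_aerosol_backscatter(x_total, x_mol, beta_m, z, z_cal=(8000.0, 9000.0)):
    """Aerosol backscatter at 532 nm from HSR lidar channels (simplified).

    x_total: range-corrected combined (particle + molecular) channel
    x_mol:   range-corrected iodine-filtered channel, assumed to contain
             molecular scattering only
    beta_m:  molecular backscatter profile (e.g., from model densities)
    z_cal:   assumed aerosol-free layer used to cross-calibrate the channels
    Because both channels share the same atmospheric path, the attenuation
    cancels in their ratio, which directly gives the scattering ratio.
    """
    sel = (z >= z_cal[0]) & (z <= z_cal[1])
    c = np.nanmean(x_total[sel] / x_mol[sel])    # channel cross-calibration
    scattering_ratio = x_total / (c * x_mol)     # (beta_a + beta_m) / beta_m
    return beta_m * (scattering_ratio - 1.0)
```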
Aerosol Component Retrieval

A two-component (i.e., mineral dust and spherical particles) algorithm14 using the EL data was implemented in the standard data processing, and the component products, as well as the other products, have been made public (see Sec. 2.3). Furthermore, as another algorithm using the EL data, we developed a three-component (i.e., mineral dust, sea salt, and air-pollution particles) algorithm30,31 to estimate the vertical distribution of the extinction coefficient for the three aerosol components. This algorithm uses the difference in the color ratio due to particle size and the difference in the depolarization ratio due to particle shape among the aerosol components. For the RL, MRL, and RHL measurements, a four-component (i.e., mineral dust, sea salt, black carbon, and air-pollution particles other than black carbon) algorithm27,32 was developed that further uses the difference in the lidar ratio due to the light-absorption properties of the aerosol components. The aerosol component products of the three- and four-component algorithms, as well as the aerosol and cloud optical property data (i.e., α, β, δ, and S), will be made public in the future. Because the developed three- and four-component retrieval algorithms do not use all the data of the MRL and RHL, an aerosol component retrieval algorithm that uses all the MRL and RHL data more effectively is being developed to provide more detailed optical and microphysical properties of each aerosol component.

Twenty-first-century spaceborne lidar (e.g., Ref. 33) covers a wide area of the globe, including oceans and deserts. In contrast, a ground-based lidar can cover only the atmosphere above its observatory. However, it can acquire continuous time series of data, and it is useful for monitoring environmental conditions directly related to human populations. Thus, ground-based lidar networks are still very important. In particular, in East Asia, rapid economic growth and changes in environmental protection policy have been significant in several countries. As a result, atmospheric conditions in East Asia can vary greatly. Optical remote sensing is a feasible method for detecting such variation and for detecting emergency conditions so that alerts can be issued to local inhabitants. Data from lidar networks are also useful for various types of studies in the field of atmospheric science. AD-Net is now affiliated with GALION [the Global Atmosphere Watch (GAW) aerosol lidar observation network], a World Meteorological Organization program. Through GALION, lidar data from around the world are distributed to scientific users such as CTM developers. NIES provides netCDF files of optical properties through its website. AD-Net will continue to monitor the atmosphere in East Asia and to supply information, including data on optical properties obtained with new technologies, to society.

Furthermore, as part of the AD-Net observations, we conducted shipborne lidar measurements using the EL from 1999 to 2015 (e.g., Refs. 34, 35) and using a 532-nm HSR lidar with a water-vapor Raman channel at 660 nm in 2011 (e.g., Refs. 36, 37), in collaboration with the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). The MRL with a water-vapor Raman scattering detection channel (660 nm) has been used since 2015. Observing the temporal and spatial distributions of aerosols and clouds, and their optical and microphysical properties, over both land and ocean is essential for understanding and evaluating global environmental change.

This study was partly supported by the Environment Research and Technology Development Fund (5-1502) of the Ministry of the Environment, Japan; by a Grant-in-Aid for Scientific Research on Innovative Areas of the Ministry of Education, Culture, Sports, Science and Technology, Japan (20120006); and by a Grant-in-Aid for Scientific Research (25220101) from the Japan Society for the Promotion of Science. The authors are grateful to all scientists who operate the lidar instruments in AD-Net and to Ichiro Matsui of mss. The authors express sincere gratitude to the anonymous reviewers.
References

Intergovernmental Panel on Climate Change, Cambridge University Press (2014).
International Agency for Research on Cancer, Air Pollution and Cancer, World Health Organization (2013).
J. L. Guerrero-Rascado et al., "Latin American Lidar Network (LALINET) for aerosol research: diagnosis on network instrumentation," J. Atmos. Sol. Terr. Phys. 138–139, 112–120 (2016), doi:10.1016/j.jastp.2016.01.001.
J. H. Seinfeld et al., "ACE-ASIA: regional climatic and atmospheric chemical effects of Asian dust and pollution," Bull. Am. Meteorol. Soc. 85, 367–380 (2004), doi:10.1175/BAMS-85-3-367.
A. Shimizu et al., "Continuous observations of Asian dust and other aerosols by polarization lidars in China and Japan during ACE-Asia," J. Geophys. Res. 109, D19S17 (2004), doi:10.1029/2002JD003253.
T. Murayama et al., "An intercomparison of lidar-derived aerosol optical properties with airborne measurements near Tokyo during ACE-Asia," J. Geophys. Res. 108, 8561 (2004), doi:10.1029/2002JD003259.
Y. Iwasaka et al., "Transport of Asian dust (KOSA) particles; importance of weak KOSA events on the geochemical cycle of soil particles," Tellus 35B, 189–196 (1983), doi:10.1111/teb.1983.35B.issue-3.
V. Freudenthaler et al., "Depolarization ratio profiling at several wavelengths in pure Saharan dust during SAMUM 2006," Tellus 61B, 165–179 (2009), doi:10.1111/j.1600-0889.2008.00396.x.
Committee on Space Research; NASA National Space Science Data Center, COSPAR International Reference Atmosphere (CIRA-86): Global Climatology of Atmospheric Parameters, NCAS British Atmospheric Data Centre (2006).
N. Sugimoto et al., "Record heavy Asian dust in Beijing in 2002: observations and model analysis of recent events," Geophys. Res. Lett. 30, 1640 (2003), doi:10.1029/2002GL016349.
A. Shimizu et al., "Relationship between lidar-derived dust extinction coefficients and mass concentrations in Japan," Sci. Online Lett. Atmos. 7A, 1–4 (2011), doi:10.2151/sola.7A-001.
N. Kaneyasu et al., "Comparison of lidar-derived dust extinction coefficients and the mass concentrations of surface aerosol," J. Jpn. Soc. Atmos. Environ. 47, 285–291 (2012), doi:10.11298/taiki.47.285.
Y. Hara et al., "Seasonal characteristics of spherical aerosol distribution in eastern Asia: integrated analysis using ground/space-based lidars and a chemical transport model," Sci. Online Lett. Atmos. 7, 121–124 (2011), doi:10.2151/sola.2011-031.
D. Goto et al., "An evaluation of simulated particulate sulfate over East Asia through global model intercomparison," J. Geophys. Res. 120, 6247–6270 (2015), doi:10.1002/2014JD021693.
K. Yumimoto et al., "Adjoint inversion modeling of Asian dust emission using lidar observations," Atmos. Chem. Phys. 8, 2869–2884 (2008), doi:10.5194/acp-8-2869-2008.
K. Ueda et al., "Long-range transported Asian dust and emergency ambulance dispatches," Inhalation Toxicol. 24, 858–867 (2012), doi:10.3109/08958378.2012.724729.
K. Kanatani et al., "Effect of desert dust exposure on allergic symptoms: a natural experiment in Japan," Ann. Allergy Asthma Immunol. 116(5), 425–430 (2016), doi:10.1016/j.anai.2016.02.002.
T. Higashi et al., "Effects of Asian dust on daily cough occurrence in patients with chronic cough: a panel study," Atmos. Environ. 92, 506–513 (2014), doi:10.1016/j.atmosenv.2014.04.034.
M. Watanabe et al., "Association of sand dust particles with pulmonary function and respiratory symptoms in adult patients with asthma in western Japan using light detection and ranging: a panel study," Int. J. Environ. Res. Public Health 12(10), 13038–13052 (2015), doi:10.3390/ijerph121013038.
S. Kashima et al., "Asian dust and daily all-cause or cause-specific mortality in western Japan," Occup. Environ. Med. 69, 908–915 (2012), doi:10.1136/oemed-2012-100797.
Z. Lu et al., "Sulfur dioxide emissions in China and sulfur trends in East Asia since 2000," Atmos. Chem. Phys. 10, 6311–6331 (2010), doi:10.5194/acp-10-6311-2010.
N. Sugimoto et al., "Observation of dust and anthropogenic aerosol plumes in the Northwest Pacific with a two-wavelength polarization lidar on board the research vessel Mirai," Geophys. Res. Lett. 29, 71–74 (2002), doi:10.1029/2002GL015112.
N. Sugimoto et al., "Detection of internally mixed Asian dust with air pollution aerosols using a polarization optical particle counter and a polarization-sensitive two-wavelength lidar," J. Quant. Spectrosc. Radiat. Transfer 150, 107–113 (2015), doi:10.1016/j.jqsrt.2014.08.003.
T. Nishizawa et al., "Ground-based network observation using Mie–Raman lidars and multiwavelength Raman lidars and algorithm to retrieve distributions of aerosol components," J. Quant. Spectrosc. Radiat. Transfer (2016), doi:10.1016/j.jqsrt.2016.06.031.
Z. Liu, I. Matsui, and N. Sugimoto, "High-spectral-resolution lidar using an iodine absorption filter for atmospheric measurements," Opt. Eng. 38, 1661–1670 (1999), doi:10.1117/1.602218.
T. Nishizawa, N. Sugimoto, and I. Matsui, "Development of a dual-wavelength high-spectral-resolution lidar," Proc. SPIE 7860, 78600D (2010), doi:10.1117/12.870068.
T. Nishizawa et al., "An algorithm that retrieves aerosol properties from dual-wavelength polarization lidar measurements," J. Geophys. Res. 112, D06212 (2007), doi:10.1029/2006JD007435.
T. Nishizawa et al., "Algorithm to retrieve aerosol optical properties from two-wavelength backscatter and one-wavelength polarization lidar considering nonsphericity of dust," J. Quant. Spectrosc. Radiat. Transfer 112, 254–267 (2011), doi:10.1016/j.jqsrt.2010.06.002.
T. Nishizawa et al., "Algorithm to retrieve aerosol optical properties from high-spectral-resolution lidar and polarization Mie-scattering lidar measurements," IEEE Trans. Geosci. Remote Sens. 46, 4094–4103 (2008), doi:10.1109/TGRS.2008.2000797.
D. M. Winker et al., "Overview of the CALIPSO mission and CALIOP data processing algorithms," J. Atmos. Oceanic Technol. 26, 2310–2323 (2009), doi:10.1175/2009JTECHA1281.1.
N. Sugimoto et al., "Latitudinal distribution of aerosols and clouds in the western Pacific observed with a lidar on board the research vessel Mirai," Geophys. Res. Lett. 28, 4187–4190 (2001), doi:10.1029/2001GL013510.
T. Nishizawa et al., "Aerosol retrieval from two-wavelength backscatter and one-wavelength polarization lidar measurement taken during the MR01K02 cruise of the R/V Mirai and evaluation of a global aerosol transport model," J. Geophys. Res. 113, D21201 (2008), doi:10.1029/2007JD009640.
J. Suzuki et al., "The occurrence of cirrus clouds associated with eastward propagating equatorial n = 0 inertio-gravity and Kelvin waves in November 2011 during the CINDY2011/DYNAMO campaign," J. Geophys. Res. 118, 12,941–12,947 (2013), doi:10.1002/2013JD19960.

Atsushi Shimizu is a senior researcher at the National Institute for Environmental Studies (NIES), Japan. He received his BS degree from the Faculty of Science, Kyoto University, in 1994 and his MS degree and PhD in atmospheric physics from the Graduate School of Science, Kyoto University, in 1996 and 1999, respectively.

Tomoaki Nishizawa received his BS, MS, and DSc degrees in geophysics from Tohoku University, Japan, in 1999, 2001, and 2004, respectively. He has worked for NIES, Japan, since 2007 and is currently head of the Advanced Remote Sensing Section of NIES. His research field is related to the atmospheric environment and climate. He is engaged in aerosol and cloud observation using lidars and in the development of lidar systems and data analysis methods.

Yoshitaka Jin is a research associate at NIES, Japan. He received his DSc degree from Nagoya University in 2014. Since 2009, he has conducted research on aerosol optical properties with active remote sensing. His current research covers the development of high-spectral-resolution lidar methods and applications of ceilometers for aerosol measurement. He is a member of the Japan Association of Aerosol Science and Technology and the Meteorological Society of Japan.

Sang-Woo Kim is an associate professor in the School of Earth and Environmental Sciences, Seoul National University, Republic of Korea. He received his PhD in atmospheric sciences from Seoul National University in 2005 and has conducted research on the monitoring of air pollutants and atmospheric aerosols and on evaluating their climate effects on the basis of ground-based and airborne in situ and active optical remote sensing measurements.

Batdorj Dahdondog is a senior officer of the National Agency for Meteorology and Environmental Monitoring of Mongolia (NAMEM). He received his master's degree from the National University of Mongolia in 2005. He joined the Institute of Meteorology and Hydrology (IMH) in 2003 and has conducted various research in the climatology and environmental fields.

Nobuo Sugimoto is a fellow of NIES, Japan. He received his DSc degree in laser spectroscopy from the University of Tokyo in 1985. Since joining NIES in 1979, he has conducted various research on the development and applications of active optical remote sensing methods, including differential absorption lidars, high-spectral-resolution lidars, and ground-based networks of lidars. He is a member of SPIE and a senior member of OSA.
An Era of Expansion

Chapter 13: Westward Expansion

I. Oregon Fever

A. Oregon Country: A Varied Land
1. By the 1820s, white settlers occupied much of the land between the Appalachians and the Mississippi River. Families in search of good farmland continued to move west.
a) Few settled on the Great Plains between the Mississippi and the Rockies. They were drawn to lands in the Far West.
2. Oregon Country was the huge area beyond the Rockies. Today, this land includes Oregon, Washington, Idaho, and parts of Wyoming, Montana, and Canada.

B. Competing Claims
1. In the early 1800s, four countries had competing claims to Oregon.
a) They were the United States, Great Britain, Spain, and Russia.
2. In 1818, the United States and Britain reached an agreement. The two countries would occupy Oregon jointly.
a) Citizens of each nation would have equal rights in Oregon.
b) Spain and Russia had few settlers in the area and agreed to drop their claims.

C. Fur Trappers in the Far West
1. At first, the only people who settled Oregon Country were a few hardy trappers.
a) Mountain Men - They hiked through the forests, trapping animals and living off the land.
b) By the late 1830s, the fur trade was dying out. Animals had grown scarce. Beaver hats were no longer in style.

D. Wagon Trains West
1. Throughout the 1840s, "Oregon Fever" broke out.
a) Beginning in 1843, wagon trains left every spring for Oregon following the Oregon Trail.
2. Leaving from Independence - Families planning to go west met at Independence, Missouri, in the early spring.
a) When enough families had gathered, they formed a wagon train. Each group elected leaders to make decisions along the way.
b) Oregon-bound pioneers hurried to leave Independence in May. Timing was important. Travelers had to reach Oregon by early October, before it began to snow. This meant that pioneers had to cover 2,000 miles on foot in five months!
3. The long trek west held many dangers.
a) During spring rains, travelers risked their lives floating wagons across swollen rivers.
b) In summer, they faced blistering heat on the treeless plains.
c) Early snowstorms often blocked passes through the mountains.
d) The biggest threat was sickness. Cholera and other diseases could wipe out whole wagon trains.
4. As they moved west toward the Rockies, pioneers often saw Indians.
a) The Indians seldom attacked the whites.
b) Many Native Americans traded with the wagon trains.
c) Hungry pioneers were grateful for the food the Indians sold.
5. Despite the many hardships, more than 50,000 people reached Oregon between 1840 and 1860.
a) Their wagon wheels cut so deeply into the plains that the ruts can still be seen today.
b) By the 1840s, Americans greatly outnumbered the British in parts of Oregon.
c) Many Americans began to feel that Oregon should belong to the United States alone.

II. A Country Called Texas

A. Americans in Mexican Texas
1. Since the early 1800s, American farmers had looked eagerly at the vast region called Texas. Then in 1821, Spain gave Moses Austin a land grant in Texas. Austin died before he could set up a colony.
a) His son Stephen took over the project.
b) Meanwhile, Mexico had won its independence from Spain.
c) The new nation agreed to let Stephen Austin lead settlers into Texas.
d) Mexico hoped that the Americans would help develop the area and control Indian attacks.
2. Mexico gave Stephen Austin and each settler a large grant of land.
a) In return, the settlers agreed to become citizens of Mexico, obey its laws, and worship in the Roman Catholic Church.
b) By 1830, about 20,000 Americans had resettled in Texas.

B. Mexico Tightens Its Laws
1. Americans who later flooded into Texas felt no loyalty to Mexico.
a) They spoke only a few words of Spanish, the official language of Mexico.
b) Most of the Americans were Protestants.
2. Conflict soon erupted between the newcomers and the Mexican government.
a) In 1830, Mexico passed a law forbidding any more Americans to move to Texas.
b) Mexico feared that the Americans wanted to make Texas part of the United States.
c) The United States had tried to buy Texas in 1826 and again in 1829.
3. Mexico also decided to make Texans obey Mexican laws that they had ignored for years.
a) One law banned slavery in Texas.
b) Another required Texans to worship in the Catholic Church.
c) Texans resented the laws and the Mexican troops who came north to enforce them.
4. 1833 - General Antonio Lopez de Santa Anna came to power in Mexico.
a) Santa Anna, some said, intended to drive all Americans out of Texas.

C. Texans Take Action
1. Americans in Texas felt that the time had come for action. In this, they had the support of many Tejanos, Mexicans who lived in Texas.
a) The Tejanos did not necessarily want independence from Mexico. But they hated General Santa Anna, who ruled as a military dictator.
2. Fighting begins.
a) October 1835 - Texans in the town of Gonzales defeated the Mexicans, forcing them to withdraw.
b) Inspired by the victory, Stephen Austin and other Texans aimed to "see Texas forever free from Mexican domination."
c) Two months later, Texans stormed and took San Antonio.
d) Santa Anna was furious. He marched north with a large army.
3. Declaring independence - While Santa Anna massed his troops, Texans declared their independence from Mexico on March 2, 1836.
a) They set up a new nation called the Republic of Texas and appointed Sam Houston as commander of their army.
b) Free blacks, slaves, and Tejanos, as well as other people of many nationalities, joined to fight for Texan independence.
4. By the time Santa Anna arrived in San Antonio, fewer than 200 Texans remained as defenders.
a) Despite the odds against them, the Texans refused to give up. Instead, they retired to an old Spanish mission called the Alamo.

D. Remember the Alamo!
1. The Spanish had built the Alamo in the mid-1700s.
a) It was like a small fort, surrounded by walls 12 feet high and 3 feet thick.
2. The Texan defenders who gathered in the Alamo in the winter of 1835–1836 were not well prepared.
a) Supplies of ammunition and medicine were low.
b) Food consisted of some beef and corn, and access to water was limited.
c) Many of the men had only a blanket and a single flannel shirt.
d) Of most concern was the fact that there were only 187 Texans in the Alamo. This was not enough to defend it against 6,000 Mexican troops.
3. William Travis, hardly more than a boy, commanded the Texans.
a) Volunteers inside the mission included the famous frontiersmen Jim Bowie and Davy Crockett.
4. On February 23, 1836, Santa Anna's army arrived.
a) The first shots from the Alamo were rapid and deadly and took the Mexicans by surprise. Commander Travis had four rifles placed by each man's side. In that way, a Texan could fire three or four shots in the time it took a Mexican to fire one.
5. Still, Travis knew that unless he received help, he and his men were doomed. On February 24, he sent a Texan through the Mexican lines with a message. It was addressed "to the People of Texas and all the Americans in the World":

"Fellow Citizens and Compatriots.
I am besieged by a thousand or more of the Mexicans under Santa Anna. I have sustained a continual bombardment for 24 hours and have not lost a man. The enemy have demanded a surrender.... I have answered the demand with a cannon shot and our flag still waves proudly from the walls. I shall never surrender or retreat. I call on you in the name of Liberty, of patriotism, and of everything dear to the American character to come to our aid with all dispatch. The enemy are receiving reinforcements daily.... If this call is neglected, I am determined to sustain myself as long as possible & die like a soldier who never forgets what is due to his own honor or that of his country. Victory or Death! W. Barret Travis"

6. For 12 days the Mexicans bombarded the Alamo. Then, at dawn on March 6, 1836, Mexican cannon fire broke through the Alamo walls.
a) Thousands of Mexican soldiers poured into the mission. When the bodies were counted, 183 Texans and almost 1,500 Mexicans lay dead.
b) The five Texan survivors, including Davy Crockett, were promptly executed at Santa Anna's order.
7. The slaughter at the Alamo angered Texans and set off cries for revenge. The fury of the Texans grew even stronger three weeks later, when Mexicans killed several hundred Texan soldiers at Goliad after they had surrendered.
a) Volunteers flooded into Sam Houston's army.
b) Men from the United States also raced south to help the Texan cause.

E. Texan Independence
1. While the Mexicans were busy at the Alamo, Sam Houston organized his army. Six weeks later, on April 21, 1836, he attacked Santa Anna and his troops at the Battle of San Jacinto.
a) With cries of "Remember the Alamo!" the Texans charged the surprised Mexicans.
b) The fighting lasted only 18 minutes.
c) Although they were outnumbered, the Texans killed 630 Mexicans and captured 700 more.
2. The next day they captured Santa Anna himself.
a) He was forced to sign a treaty granting Texas its independence.

F. The Lone Star Republic
1. In battle, Texans carried a flag with a single white star. After winning independence, they nicknamed their nation the Lone Star Republic.
a) They drew up a constitution.
b) They elected Sam Houston as president.
2. The new country faced huge problems.
a) Mexico refused to accept the treaty signed by Santa Anna.
b) Texas was nearly bankrupt.
3. Most Texans thought that the best way to solve both problems was for Texas to become part of the United States.
a) In the United States, Americans were divided about whether to annex Texas.
b) To annex means to add on.
c) Most white southerners favored the idea.
d) Knowing that many Texans owned slaves, northerners did not want to allow Texas to join the Union.
e) President Andrew Jackson also worried that annexing Texas would lead to war with Mexico. As a result, the United States refused to annex Texas.

III. Manifest Destiny

A. New Mexico Territory
1. The entire Southwest belonged to Mexico in the 1840s. This huge region was called New Mexico Territory.
a) It included most of the present-day states of Arizona and New Mexico, all of Nevada and Utah, and parts of Colorado.
2. The explorer Juan de Oñate claimed the territory of New Mexico for Spain in 1598. In the early 1600s, the Spanish built Santa Fe as the capital of the territory.
a) Santa Fe grew into a busy trading town.
b) Spain refused to let Americans settle in New Mexico.
c) Only after Mexico won its independence in 1821 were Americans welcome in Santa Fe.
3. William Becknell, a merchant and adventurer, was the first American to head for Santa Fe.
a) 1821 - Becknell led a group of traders on the long trip from Franklin, Missouri, across the plains. When they reached Santa Fe, they found Mexicans eager to buy their goods.
b) Other Americans soon followed Becknell's route. It became known as the Santa Fe Trail.

B. Early Years in California
1. Spanish soldiers and priests built the first European settlements in California.
a) 1769 - Captain Gaspar de Portolá led a group of soldiers and missionaries up the Pacific coast.
2. The chief missionary was Father Junípero Serra.
a) He built his first mission at San Diego.
b) He went on to build 20 other missions along the California coast. Each mission claimed the surrounding land and soon took care of all its own needs.
c) Spanish soldiers built forts near the missions.
3. California Indians lived in small, scattered groups. They were generally peaceful people.
a) They did not offer much resistance to the soldiers who forced them to work for the missions.
b) Native Americans herded sheep and cattle and raised crops for the missions. In return, they lived at the missions and learned about the Catholic religion.
c) Mission life was hard for Native Americans. Thousands died from overwork and diseases.

C. Expansion: A Right and a Duty
1. Many Americans saw the culture and the democratic government of the United States as the best in the world. They believed that the United States had the right and the duty to spread its rule all the way to the Pacific Ocean.
a) In the 1840s, a New York newspaper coined a phrase for this belief: Manifest Destiny.
b) Americans who believed in Manifest Destiny thought that the United States was clearly meant to expand to the Pacific.
2. Election of 1844 - Manifest Destiny played an important part in the election of 1844.
a) The Whigs nominated Henry Clay for president.
b) The Democrats chose a little-known man named James K. Polk.
c) Voters soon came to know Polk as the candidate who favored expansion. Polk demanded that Texas and Oregon be added to the United States.
3. Polk made Oregon a special campaign issue. He insisted on the whole region for the United States, all the way to its northern border at latitude 54°40'N. "Fifty-four forty or fight!" cried the Democrats.
a) On Election Day, Americans showed that they favored expansion by electing Polk president.

IV. The Mexican War

A. Annexing Texas
1. In 1844, Sam Houston, the president of Texas, signed a treaty of annexation with the United States.
a) The Senate refused to ratify the treaty.
b) Senators feared that it would cause a war with Mexico.
c) To persuade Americans to annex Texas, Houston pretended that Texas might become an ally of Britain.
d) In 1845, Congress passed a joint resolution admitting Texas to the Union.
2. Annexing Texas led at once to a dispute with Mexico.
a) Texas claimed that its southern border was the Rio Grande.
b) Mexico argued that it was the Nueces River, some 200 miles north of the Rio Grande.

B. Dividing Oregon
1. Despite his expansionist beliefs, President Polk did not really want a war with Britain. In 1846, he agreed to a compromise.
a) Oregon was divided at latitude 49°N.
b) Britain got the lands north of the line, and the United States got the lands south of the line.
c) The United States named its portion the Oregon Territory.
d) The states of Oregon (1859), Washington (1889), and Idaho (1890) were later carved out of the Oregon Territory.

C. War with Mexico
1. Mexico had never accepted the independence of Texas. Now, the annexation of Texas made Mexicans furious.
a) They were also concerned that the example set by Texas would encourage Americans in California and New Mexico to rebel.
2. Americans, in turn, were angry with Mexico. President Polk offered to pay Mexico $30 million for California and New Mexico.
a) Mexico refused the offer.
b) Many Americans felt that Mexico stood in the way of Manifest Destiny.
3. Sparking the war - In January 1846, Polk ordered General Zachary Taylor to cross the Nueces River and set up posts along the Rio Grande. Polk knew that Mexico claimed this land and that the move might spark a war.
a) In April 1846, Mexican troops crossed the Rio Grande and fought briefly with the Americans. Soldiers on both sides were killed.
4. President Polk claimed that Mexico had "shed American blood upon the American soil." At his urging, Congress declared war on Mexico.
a) Americans were divided over the war.
b) Many people in the South and West wanted more land and so were eager to fight.
c) Northerners opposed the war. They saw it as a southern plot to add slave states to the Union.
d) Still, many Americans joined the war effort.
e) When the call for recruits went out, the response was overwhelming, especially in the South and West.
5. General Zachary Taylor crossed the Rio Grande into northern Mexico. There, he won several battles against the Mexican army.
a) In February 1847, he defeated General Santa Anna at the Battle of Buena Vista.
b) Meanwhile, General Winfield Scott landed another American army at the Mexican port of Veracruz.
c) After a long battle, the Americans took the city.
6. Rebellion in California - A third army, led by General Stephen Kearny, captured Santa Fe without firing a shot.
a) After several battles, he took control of southern California early in 1847.
7. Americans in northern California had risen up against Mexican rule even before hearing of the Mexican War.
a) Led by John Fremont, the rebels declared California an independent republic on June 14, 1846.
b) They called their new nation the Bear Flag Republic.
c) Later in the war, Fremont joined forces with the United States Army.

D. A Nation's Dream Comes True
1. By 1847, the United States controlled all of New Mexico and California.
a) Meanwhile, General Scott had reached the outskirts of the Mexican capital, Mexico City. There his troops faced a fierce battle.
b) Young Mexican soldiers made a heroic last stand at Chapultepec, a fort just outside Mexico City.
c) Like the Texans who died at the Alamo, the Mexicans at Chapultepec fought to the last man. Today, Mexicans honor these young men as heroes.
2. With the American army in Mexico City, the Mexican government had no choice but to make peace. In 1848, Mexico signed the Treaty of Guadalupe Hidalgo.
a) Mexico was forced to cede, or give, all of California and New Mexico to the United States.
b) These lands were called the Mexican Cession.
c) In return for these lands, the United States paid Mexico $15 million. Americans also agreed to respect the rights of Spanish-speaking people in the Mexican Cession.
3. A few years after the Mexican War, the United States completed its expansion across the continent.
a) 1853 - It agreed to pay Mexico $10 million for a strip of land in present-day Arizona and New Mexico.
b) The land was called the Gadsden Purchase.

E. A Rich Heritage
1. Texas, New Mexico, and California added vast new lands to the United States.
2. A mix of cultures
a) English-speaking settlers poured into the Southwest, bringing their own culture with them, including their ideas about democratic government.
b) Mexican Americans taught the newcomers how to irrigate the soil.
c) They also showed them how to mine silver and other minerals.
d) Many Spanish and Indian words became part of the English language. Among these words were stampede, buffalo, soda, and tornado.

V. Surge to the Pacific

A. Mormons Seek Refuge in Utah
1. The largest group of settlers to move into the Mexican Cession was the Mormons.
a) Mormons belonged to the Church of Jesus Christ of Latter-day Saints. The church was founded by Joseph Smith in 1830.
2. Smith's teachings angered many non-Mormons.
a) For example, Mormons at first believed that property should be owned in common.
b) Smith also said that a man could have more than one wife.
c) Angry neighbors forced the Mormons to leave New York for Ohio. From Ohio, they were forced to move to Missouri and from there to Illinois.
d) In the 1840s, the Mormons built a community called Nauvoo, in Illinois.
3. In 1844, an angry mob killed Joseph Smith.
a) The Mormons chose Brigham Young as their new leader.
b) Brigham Young realized that the Mormons needed a home where they would be safe. He had read about a valley between the Rocky Mountains and the Great Salt Lake in Utah. Young decided that the isolated valley would make a good home for the Mormons.
4. In Utah, the Mormons had to survive in a harsh desert climate.
a) Young planned an irrigation system to bring water to farms.
b) He also drew up plans for a large city, called Salt Lake City, to be built in the desert.
5. Congress recognized Brigham Young as governor of the Utah Territory in 1850.

B. Gold in California!
1. In 1848, James Marshall was helping John Sutter build a sawmill on the American River, north of Sacramento, California. On the morning of January 24, Marshall set out to inspect a ditch his crew was digging. He later told a friend what he saw that day: "It was a clear, cold morning; I shall never forget that morning. As I was taking my usual walk.... my eye was caught with the glimpse of something shining in the bottom of the ditch. There was about a foot of water running then. I reached my hand down and picked it up; it made my heart thump, for I was certain it was gold."
2. As news spread, thousands of Americans caught gold fever. People in Europe and South America joined the rush as well.
a) Forty-niners - more than 80,000 people made the long journey to California in 1849.
3. The first miners needed little skill. Because the gold was near the surface of the earth, they could dig it out with knives. Later, the miners found a better way.
a) They loaded sand and gravel from the riverbed into a washing pan. Then, they held the pan under water and swirled it gently. The water washed away the lighter gravel, leaving the heavier gold in the pan. This process was known as "panning for gold."
b) Most went broke trying to make their fortunes. Although many miners left the gold fields, they stayed in California.
4. The Gold Rush brought big changes to life in California. Almost overnight, San Francisco grew from a sleepy town to a bustling city.
a) Greed turned some forty-niners into criminals.
b) Murders and robberies plagued many mining camps.
c) To fight crime, miners formed vigilance committees.
d) Vigilantes, self-appointed law enforcers, dealt out punishment even though they had no legal power to do so.
e) Sometimes an accused criminal was lynched, or hanged without a legal trial.
5. Californians realized they needed a government to stop the lawlessness.
a) 1849 - They drafted a state constitution.
b) They then asked to be admitted to the Union.
c) Their request caused an uproar in the United States. Americans wondered whether or not the new state would allow slavery. After a heated debate, California was admitted to the Union in 1850 as a free state.

C. California's Unique Culture
1. Most mining camps included a mix of peoples. One visitor to a mining town met runaway slaves from the South, Native Americans, and New Englanders. There were also people from Hawaii, China, Peru, Chile, France, Germany, Italy, Ireland, and Australia.
2. During the wild days of the Gold Rush, the forty-niners often ignored the rights of other Californians.
a) Many Native Americans were driven off their lands and later died of starvation or disease. Others were murdered.
b) When the Chinese staked claims in the gold fields, white miners often drove them off.
3. Free blacks, like other forty-niners, rushed to the California gold fields hoping to strike it rich.
a) By the 1850s, California had the richest African American population of any state.
b) Yet African Americans were also denied certain rights.
c) For example, California law denied blacks and other minorities the right to testify against whites in court. After a long struggle, blacks gained this right in 1863.
The Earth Radiation Budget Satellite (ERBS) was a NASA scientific research satellite. It was one of three satellites in NASA's Earth Radiation Budget Experiment (ERBE) research program, which investigated the Earth's radiation budget. The satellite also carried an instrument that studied stratospheric aerosols and gases.

Mission type: Earth observation
Mission duration: 21 years and 9 days
Launch mass: 2,449 kg (5,399 lb)
Dry mass: 2,307 kg (5,086 lb)
Dimensions: 4.6 × 3.5 × 1.5 m (15.1 × 11.5 × 4.9 ft)
Start of mission
Launch date: 5 October 1984, 22:18 UTC
Rocket: Space Shuttle Challenger (STS-41-G)
Launch site: Kennedy LC-39A
End of mission
Deactivated: 14 October 2005
Decay date: 8 January 2023
Perigee altitude: 572 km (355 mi)
Apogee altitude: 599 km (372 mi)
Epoch: 5 October 1984

ERBS was launched on October 5, 1984, by the Space Shuttle Challenger during the STS-41-G mission and deactivated on October 14, 2005. It re-entered the Earth's atmosphere on January 8, 2023, over the Bering Sea near the Aleutian Islands.

The ERBS spacecraft was deployed from Challenger on October 5, 1984 (the first day of the flight) using the Canadian-built Remote Manipulator System (RMS), a mechanical arm about 16 m in length. On deployment, one of the satellite's solar panels initially failed to extend properly. Mission specialist Sally Ride had to shake the satellite with the remotely controlled robotic arm and then place the stuck panel into sunlight before it would extend. The ERBS satellite was the first spacecraft to be launched and deployed by a Space Shuttle mission.

ERBS orbited in a non-sun-synchronous orbit at 610 km (which had dropped to 585 km by 1999) at an inclination of 57°, which did not provide full Earth coverage. It had a design life of two years, with a goal of three, but lasted 21 years, suffering several minor hardware failures along the way. The command memory was subject to random bit flips from launch onward. The ERBE scanner failed in 1990. There was a partial memory failure in October 1993. One of the two Digital Telemetry Units failed in April 1998. In September 1999, a failure in the elevation gimbal of the non-scanner instrument suspended solar measurements by the solar monitor; measurements resumed on December 22, 1999, when a new command sequence was defined. Only one of the five gyros was still functioning at the end of the mission, and thruster performance was unstable. During decommissioning, it was discovered that the fuel tank bladder had failed.

Battery failures led to the ultimate decision to decommission the spacecraft. Despite estimates that the satellite could continue to work until 2010, there was concern that if the satellite lost power before the batteries were disconnected from the solar arrays, the batteries could explode, creating a cloud of space debris that would endanger other satellites. In September 1989, the performance of the two batteries began to diverge. There were battery cell shorts on Battery 1 in August 1992 and again that September, and as a result Battery 1 was taken offline in October. Battery 2 then supported all loads and suffered cell shorts in June and July 1993. An attempt to bring Battery 1 back online in August 1993 failed due to poor load sharing. Another cell failure began in June 1998 and culminated in a complete cell failure on January 15, 1999.
That cell failure caused battery voltage to drop so low that the attitude control system became unreliable and the satellite went into a very slow tumble. These cell failures and shorts each resulted in a loss of science data lasting from a few days to a few months. The satellite was recovered and Battery 1 was brought back online; Battery 2 was then disabled. By the end of the mission, Battery 2 had experienced five cell failures and been disconnected from the main bus, and Battery 1 had experienced three cell failures.

In 2002, the satellite's perigee was lowered by more than 50 km to ensure that the vehicle would naturally decay within 25 years of its end of mission. This proved to be wise: when the spacecraft was finally decommissioned in 2005, the propulsion and attitude control systems had become so degraded that the risks of eliminating the remaining fuel through post-science-mission delta-V maneuvers were deemed too significant, and those maneuvers were not performed.

Decommissioning and re-entry

The order to decommission the satellite was issued on July 12, 2005, and efforts began at that time. The instruments were turned off in August and the active steps began in September. During decommissioning, the last of the fuel was depleted, the batteries were discharged, the tape recorder was played back one last time, on-board memory was scrubbed, and the solar arrays were disconnected from the battery. On the final ERBS contact, during its 114,941st orbit, the attitude and momentum control system was disabled and the power system was put into discharge. The final commands opened the thrusters to allow the remaining fuel to seep out, and the transponders were powered off for the last time.

The satellite is believed to have re-entered the Earth's atmosphere on January 8, 2023, at 6:04 PM HAST over the Bering Sea near the Aleutian Islands. Most of the satellite is believed to have burned up in the atmosphere, but some large pieces may have survived and fallen into the sea. Prior to re-entry, NASA had estimated the odds that the falling debris would cause any injury at about 1 in 9,400.

SAGE II measured the decline in ozone over Antarctica from the time the ozone hole was first described in 1985. That data was key to the international community's decision-making process that produced the 1987 Montreal Protocol, which has resulted in a near elimination of CFCs in industrialized countries. SAGE II also created an aerosol data record on polar stratospheric clouds (PSCs) that was crucial to understanding the ozone hole process, and its data was used to understand the impact of volcanic aerosols on climate.

ERBS carried three instruments: the Earth Radiation Budget Experiment (ERBE) scanner, the ERBE non-scanner, and the Stratospheric Aerosol and Gas Experiment (SAGE II). ERBE was a continuation of the radiation budget studies carried out by Nimbus-6 and -7. SAGE II was a follow-on to the SAGE satellite, which operated from 1979 to 1981. ERBS was one of three satellites in the ERBE and carried two instruments as part of that effort. The ERBE scanner was a set of three detectors that measured longwave radiation, shortwave radiation, and total energy radiating from the Earth along a line of the satellite's path. The ERBE non-scanner was a set of five detectors that measured the total energy from the Sun, and the shortwave and total energy from the entire Earth disk and the area beneath the satellite.
The other two ERBE instrument sets flew on the NOAA-9 satellite, launched in January 1985, and the NOAA-10 satellite, launched in October 1986. The ERBE scanner on ERBS stopped functioning on February 2, 1990, and after numerous attempts to recover it, it was powered off for good in March 1991. The non-scanner lost the ability to perform bi-weekly internal and solar calibrations, but no degradation in data quality was detected as a result. The Clouds and the Earth's Radiant Energy System (CERES) missions, which began in 1997 with NASA's Tropical Rainfall Measuring Mission and continued through the Joint Polar Satellite System-1 (JPSS-1), launched in 2017, use a legacy instrument that continues the data record of ERBE. The non-scanner was powered off on August 22, 2005, in preparation for decommissioning. The measurements of the ERBE mission were thus continued by the seven CERES instruments launched between 1998 and 2017, and were planned to be furthered by the Radiation Budget Instrument (RBI), to be launched on Joint Polar Satellite System-2 (JPSS-2) in 2021 and on JPSS-4 in 2031.

The other instrument on ERBS was the Stratospheric Aerosol and Gas Experiment (SAGE II). SAGE II was the second of four SAGE missions; the most recent, SAGE III-ISS, was installed on the International Space Station in 2017. The SAGE II instrument experienced a failure in July 2000 and became unable to lock on to either sunrise or sunset events. This was believed to be due to excessively noisy azimuth potentiometer readings in certain azimuth regions. An operational work-around was developed that allowed SAGE II to collect approximately 50 percent of the nominal science data. SAGE II was powered off in August 2005 in preparation for decommissioning.
Temporal range: Middle Miocene–present
[Photo caption: Wild specimen in south-eastern Australia]
Subspecies include D. novaehollandiae novaehollandiae (Latham, 1790)
[Map caption: The emu inhabits the areas shown in pink.]
Synonym: Casuarius novaehollandiae Latham, 1790

The emu (Dromaius novaehollandiae) is the largest bird native to Australia and the only extant member of the genus Dromaius. It is the second-largest extant bird in the world by height, after its ratite relative, the ostrich. There are three subspecies of emus in Australia. The emu is common over most of mainland Australia, although it avoids heavily populated areas, dense forest, and arid areas.

The soft-feathered, brown, flightless birds reach up to 2 metres (6.6 ft) in height. They have long, thin necks and legs. Emus can travel great distances at a fast, economical trot and, if necessary, can sprint at 50 km/h (31 mph). Their long legs allow them to take strides of up to 275 centimetres (9.02 ft). They are opportunistically nomadic and may travel long distances to find food; they feed on a variety of plants and insects, but have been known to go for weeks without food. Emus ingest stones, glass shards and bits of metal to grind food in the digestive system. They drink infrequently, but take in copious amounts of fluid when the opportunity arises. Emus will sit in water and are also able to swim. They are curious birds and are known to follow and watch other animals and humans. Emus do not sleep continuously at night but in several short stints, sitting down.

Emus use their strongly clawed feet as a defence mechanism. Their legs are among the strongest of any animal, allowing them to rip metal wire fences. They are endowed with good eyesight and hearing, which allows them to detect predators in the vicinity. The plumage varies regionally, matching the surrounding environment and improving the bird's camouflage. The feather structure prevents heat from flowing into the skin, permitting emus to be active during the midday heat. They can tolerate a wide range of temperatures and thermoregulate effectively. Males and females are hard to distinguish visually, but can be differentiated by the types of loud sounds they emit by manipulating an inflatable neck sac.

Emus breed in May and June and are not monogamous; fighting among females for a mate is common. Females can mate several times and lay several clutches of eggs in one season. The birds put on weight before the breeding season, and the male does most of the incubation, losing significant weight during this time as he does not eat. The eggs hatch after around eight weeks, and the young are nurtured by their fathers. They reach full size after around six months, but can remain with their family until the next breeding season half a year later. Emus can live between 10 and 20 years in the wild and are preyed on by dingoes, eagles and hawks. They can jump and kick to avoid dingoes, but against eagles and hawks they can only run and swerve.

The Tasmanian emu and King Island emu subspecies that previously inhabited Tasmania and King Island became extinct after the European settlement of Australia in 1788, and the distribution of the mainland subspecies has been influenced by human activities. Once common on the east coast, emus are now uncommon there; by contrast, the development of agriculture and the provision of water for stock in the interior of the continent have increased the range of the emu in arid regions, and it is rated as of Least Concern for conservation.
Emus were a food and fuel source for indigenous Australians and early European settlers, and are farmed today for their meat, oil, and leather. Emu is a lean meat, and while marketers often claim that emu oil has anti-inflammatory and anti-oxidative effects, this has not been scientifically verified in humans. The emu is an important cultural icon of Australia: it appears on the coat of arms and various coins, features prominently in Indigenous Australian mythology, and hundreds of places are named after the bird.

There are reports that the emu was first sighted by European explorers in 1696, when they made a brief visit to the coast of Western Australia. It is thought to have been spotted on the east coast of Australia before 1788, when the first European settlement occurred. The bird was first described, under the name "New Holland cassowary", in Arthur Phillip's Voyage to Botany Bay, published in 1789. The species was named by ornithologist John Latham from a specimen collected in the Sydney area, which was referred to as New Holland at the time. Latham collaborated on Phillip's book and provided the first descriptions of and names for many Australian bird species; the emu's name is Latin for "fast-footed New Hollander".

The etymology of the common name "emu" is uncertain, but it is thought to have come from an Arabic word for a large bird that was later used by Portuguese explorers to describe the related cassowary in Australia and New Guinea. Another theory is that it comes from the word "ema", used in Portuguese to denote a large bird akin to an ostrich or crane. In Victoria, some terms for the emu were barrimal in the Dja Dja Wurrung language, myoure in Gunai, and courn in Jardwadjali. It was known as murawung or birabayin to the local Eora and Darug inhabitants of the Sydney basin.

Taxonomy and systematics

In his original 1816 description of the emu, Vieillot used two generic names, first Dromiceius and then, a few pages later, Dromaius. Which is correct has been a point of contention ever since; the latter is more correctly formed, but the convention in taxonomy is that the first name given stands, unless it is clearly a typographical error. Most modern publications, including those of the Australian government, use Dromaius, with Dromiceius mentioned as an alternative spelling.

The emu was long classified with its closest relatives, the cassowaries, in the family Casuariidae, part of the ratite order Struthioniformes, but an alternative classification has recently been adopted that splits the Casuariidae into their own order, Casuariformes.

Two different Dromaius species were common in Australia before European settlement, and one additional species is known from fossil remains. The insular dwarf emus, D. baudinianus and D. n. ater, both became extinct shortly after the arrival of Europeans. D. novaehollandiae diemenensis, a subspecies known as the Tasmanian emu, became extinct around 1865. The mainland subspecies of D. novaehollandiae, however, remains common. Its population size varies from decade to decade, largely depending on rainfall; current estimates range from 625,000 to 725,000 birds, with 100,000–200,000 in Western Australia and the remainder mostly in New South Wales and Queensland. Emus were introduced to Maria Island off Tasmania and Kangaroo Island off South Australia during the 20th century. The Maria Island population died out in the mid-1990s, while the Kangaroo Island birds have established a breeding population. The three subspecies are:
- In the southeast, D. novaehollandiae novaehollandiae, with its whitish ruff when breeding;
- In the north, D. novaehollandiae woodwardi, slender and paler; and
- In the southwest, D. novaehollandiae rothschildi, darker, with no ruff during breeding.

Examination of DNA of the King Island emu shows it to be closely related to the mainland emu and hence best treated as a subspecies.

Description

Emus are large birds. The largest can reach up to 1.5–1.9 m (4.9–6.2 ft) in height and 1–1.3 m (3.3–4.3 ft) at the shoulder. Measured from the bill to the tail, emus range in length from 139 to 164 cm (55 to 65 in), with males averaging 148.5 cm (58.5 in) and females averaging 156.8 cm (61.7 in). Emus weigh between 18 and 60 kg (40 and 132 lb), with averages of 31.5 and 36.9 kg (69 and 81 lb) for males and females, respectively. Females are usually slightly larger than males and are substantially wider across the rump.

Emus have small vestigial wings, with a wing chord measuring around 20 cm (7.9 in) and a small claw at the tip of each wing. The bill is quite small, measuring 5.6 to 6.7 cm (2.2 to 2.6 in). The emu flaps its wings when running, and it is believed that the wings stabilise the bird while it is moving. It has a long neck and legs. Its ability to run at high speeds, 48 km/h (30 mph), is due to its highly specialised pelvic limb musculature. The feet have only three toes and a similarly reduced number of bones and associated foot muscles; emus are the only birds with gastrocnemius muscles in the backs of the lower legs. The pelvic limb muscles of emus contribute a similar proportion of total body mass as the flight muscles of flying birds. When walking, the emu takes strides of about 100 cm (3.3 ft), but at full gallop a stride can be as long as 275 cm (9.02 ft). Its legs are devoid of feathers, and underneath its feet are thick, cushioned pads. Like the cassowary, the emu has sharp claws on its toes, its major defensive attribute, used in combat to inflict wounds on opponents by kicking. The toe and claw total 15 centimetres (5.9 in). The bill is soft, adapted for grazing. The emu has good eyesight and hearing, which allows it to detect nearby threats. Its legs are among the strongest of any animal, powerful enough to tear down metal wire fences.

The neck of the emu is pale blue and shows through its sparse feathers. Emus have brown to grey-brown plumage of shaggy appearance; the shafts and the tips of the feathers are black. Solar radiation is absorbed by the tips, and the loose-packed inner plumage insulates the skin. The resultant heat is prevented from flowing to the skin by the insulation provided by the coat, allowing the bird to be active during the heat of the day. A unique feature of the emu feather is the double rachis emerging from a single shaft. Both rachides are the same length, and the texture is variable; near the quill it is rather furry, but the outer ends resemble grass. The sexes are similar in appearance, although the male's penis can become visible when he defecates. The plumage varies in colour due to environmental factors, giving the bird natural camouflage. Feathers of emus in arid areas with red soil have a reddish tint, while those of birds living in damp conditions are darker.

The eyes of an emu are protected by nictitating membranes. These are translucent secondary eyelids that move across the eye from the edge closest to the beak to cover the other side.
The membrane functions as a protective visor, shielding the emu's eyes from the dust that is prevalent in windy, arid regions. The emu also has a tracheal pouch, which becomes more prominent during the mating season. It is often used during courtship, and it has been speculated that it is also used for day-to-day communication. The pouch is more than 30 centimetres (12 in) long and spacious, with a very thin wall; its opening is only 8 centimetres (3.1 in) wide. The quantity of air passing through the pouch, which the emu controls by opening or closing it, affects the pitch of the call. Females typically call more loudly than males.

On very hot days, emus pant to maintain their body temperature; their lungs work as evaporative coolers and, unlike in some other species, the resulting low levels of carbon dioxide in the blood do not appear to cause alkalosis. For normal breathing in cooler weather, they have large, multifolded nasal passages. Cool air warms as it passes through into the lungs, extracting heat from the nasal region. On exhalation, the emu's cold nasal turbinates condense moisture back out of the air and absorb it for reuse. As with other ratites, the emu has great homeothermic ability and can maintain a stable body temperature at ambient temperatures from −5 to 45 °C. The thermoneutral zone of emus lies between about 10–15 °C and 30 °C. As with other ratites, the emu has a relatively low metabolic rate compared with other kinds of birds, although the rate depends on activity and the resulting thermal demands. At −5 °C, the metabolic rate of a sitting emu is around 60% of that of a standing one, because the lack of feathers under the stomach leads to a higher rate of heat loss when the bird stands and exposes its underbelly.

Emu calls consist of loud booming, drumming, and grunting sounds that can be heard up to 2 kilometres (1.2 mi) away. The booming sound is created in an inflatable neck sac that is 30 cm (12 in) long and thin-walled. The different sounds can be used to distinguish the sexes: the loud booming produced by inflating the cervical sac is made by females, while loud grunts are limited to males.

Behaviour and ecology

Emus live in most habitats across Australia, although they are most common in areas of sclerophyll forest and savanna woodland, and least common in populated and very arid areas, except during wet periods. Emus predominantly travel in pairs, and while they can form enormous flocks, this is an atypical social behaviour that arises from the common need to move towards food sources. Emus have been shown to travel long distances to reach abundant feeding areas. In Western Australia, emu movements follow a distinct seasonal pattern: north in summer and south in winter. On the east coast their wanderings do not appear to follow a pattern. Emus are also able to swim when necessary, although they rarely do so unless an area is flooded or they need to cross a river. They are known to be inquisitive animals and will approach humans if they see movement of a limb or a piece of clothing; they may follow and observe humans in the wild. Sometimes they poke other animals and then run away after drawing a reaction, as though playing a game. An emu spends much of its time preening its plumage with its beak. Emus sleep during the night, beginning to settle down at sunset, although they do not sleep continuously throughout the night.
An emu can wake and rise up to eight times per night to feed or defecate. Before going into a deep sleep, it squats on its tarsi and enters a drowsy state, during which it remains alert enough to react to visual or aural stimuli and return to a wakened state. During this time, the neck descends closer to the body and the eyelids begin to lower. If there are no aural or visual disturbances, the bird falls into a deeper sleep after about 20 minutes. In deep sleep the body is lowered until it touches the ground, with the legs folded underneath. The feathers direct any rain down the mound-like body into the ground, and it has been surmised that the sleeping position is a type of camouflage, mimicking a small hill. The neck is brought down very low and the beak turned down so that the whole neck becomes S-shaped and folded onto itself. An emu typically wakes from deep sleep once every 90 to 120 minutes and stands on its tarsi to feed or defecate; this lasts for ten to twenty minutes, and the cycle is repeated four to six times during most nights. Overall, an emu sleeps for around seven hours each day. Young emus usually sleep with the neck flat and stretched forward along the ground.

Emus forage in a diurnal pattern. They eat a variety of native and introduced plant species; the type of plants eaten depends on seasonal availability. They also eat insects, including grasshoppers and crickets, ladybirds, soldier and saltbush caterpillars, Bogong and cotton-boll moth larvae, and ants; these form a large part of the emu's protein intake. In Western Australia, food preferences have been observed in travelling emus: they eat seeds from Acacia aneura until it rains, after which they eat fresh grass shoots and caterpillars; in winter they feed on the leaves and pods of Cassia; and in spring they feed on grasshoppers and the fruit of Santalum acuminatum, a species of quandong. Emus are also known to eat wheat, along with any fruit or other crops they can access, easily climbing over high fences if required. Emus serve as an important agent for the dispersal of large viable seeds, which contributes to floral biodiversity. One undesirable effect of this occurred in Queensland in the 1930s and 1940s, when emus ate cactus in the outback and defecated the seeds in various places as they moved around, spreading the unwanted plant; this led to sustained hunting campaigns to stop the cactus from spreading. Emus also require pebbles and stones to assist in the digestion of plant material. Individual stones may weigh 45 g (1.6 oz), and a bird may have as much as 745 g (1.64 lb) of stones in its gizzard at one time. Emus also eat charcoal, although why remains unclear. Captive emus have been known to eat shards of glass, marbles, car keys, jewellery, and nuts and bolts.

Emus drink at infrequent intervals, but ingest large amounts when they do. They typically inspect a water body in groups for a period before kneeling at the edge to drink. They prefer to kneel on solid ground rather than on rocks or mud, presumably for fear of sinking. They often drink continuously for 10 minutes, unless disturbed by danger, in which case they interrupt themselves to deal with the threat before resuming. Because of the arid environment, emus may go a day or two without finding a source of water.
They typically drink once per day or night, but can do so several times daily if supply is abundant. In the wild, emus often share water sources with kangaroos, other birds, and the feral camels and donkeys descended from animals released by European settlers. Emus are suspicious of these other species and tend to wait in nearby bushes for the other animals to leave, choosing to drink separately from them. If an emu senses abnormal circumstances or a threat, it drinks while standing.

Breeding

Emus form breeding pairs during the summer months of December and January and may remain together for about five months, during which they wander around an area a few miles in diameter. It is believed they find or guard territory during this time. Both males and females put on weight during this period, the female becoming slightly heavier, at between 45 and 58 kg (99 and 128 lb). The weight gained is lost during the incubation period, with males losing around 9 kg (20 lb). Mating occurs in the cooler months of May and June, and the exact timing is determined by the climate, as the birds nest during the coldest part of the year. During the breeding season, males experience hormonal changes, including an increase in luteinizing hormone and testosterone levels, and their testicles double in size.

It is the females that court the males, and during the mating season they become physically more striking: the female's plumage darkens slightly, and the small patches of bare skin just below the eyes and near the beak turn turquoise-blue, although the change is subtle. The female strides around confidently, often circling the male, pulling her neck back while puffing out her feathers and emitting a low, monosyllabic call that has been compared to drumbeats. This calling can occur when the male is out of view and more than 50 metres (160 ft) away; once the male's attention has been gained, the female circles her prospective mate at a radius of 10–40 m, continuing to look towards him by turning her neck while keeping her rump facing him. During this time the female's cervical air sac may remain inflated as she calls. The passive male's plumage retains the same colour, although his bare patches of skin also turn light blue. The female has more black feathers on her head, but the sexes can be difficult for humans to tell apart. If the male shows interest in the parading female, he moves closer; the female continues to tantalise her target by shuffling further away while circling him as before.

Females are more aggressive than males during the courting period, often fighting one another for access to mates; fights among females accounted for more than half of the violent incidents observed in one mating-season study. If a female tries to woo a male that already has a partner, the incumbent female will try to repel the competitor, usually by walking towards her challenger and staring at her sternly. If the male shows interest in the second female by erecting his feathers and swaying from side to side, the incumbent female attacks the challenger, usually causing the newcomer to back down. Some female-female contests can last up to five hours, especially when the target male is single and neither female has the advantage of incumbency. In these cases, the females typically intensify their calls and displays, which grow increasingly extravagant, often accompanied by chasing and kicking.
Males lose their appetite and construct a rough nest in a semi-sheltered hollow on the ground, using bark, grass, sticks, and leaves. The nest is almost always a flat surface rather than a segment of a sphere, although in cold conditions the nest is taller, up to 7 cm tall, and more spherical to provide more insulation. When other material is lacking, the bird may use a spinifex tussock more than a metre across, despite the plant's prickly nature. The nest can be placed on open ground or near shrubs and rocks, although thick grass is usually present if the emu takes the former option. Nests are usually placed where the emu has a clear view of the surrounds and can detect approaching predators.

If a male is interested, he stretches his neck, erects his feathers, then bends over and pecks at the ground. He then sidles up to the female, swaying his body and neck from side to side and rubbing his breast against his partner's rump, usually without calling. The female accepts by sitting down and raising her rump.

The pair mates every day or two, and every second or third day the female lays one of an average of 11 (and as many as 20) very large, thick-shelled, dark-green eggs. The shell is around 1 mm thick, although indigenous Australians say that northern eggs are thinner. The number of eggs varies with rainfall. The eggs average 134 by 89 millimetres (5.3 in × 3.5 in) and weigh between 700 and 900 grams (1.5 and 2.0 lb), roughly equivalent to 10–12 chicken eggs in volume and weight. The first verified occurrence of genetically identical avian twins was demonstrated in the emu. The egg surface is granulated and pale green; during incubation the egg turns dark green, although an egg that never hatches will turn white from the bleaching effect of the sun.

The male becomes broody after his mate starts laying and begins to incubate the eggs before the laying period is complete. From this time on, he does not eat, drink, or defecate, and stands only to turn the eggs, which he does about ten times a day. Sometimes he will walk away from the nest at night, choosing such a time because most predators of emu eggs are not nocturnal. Over the eight weeks of incubation, he loses a third of his weight, surviving only on stored body fat and on any morning dew he can reach from the nest.

As with many other Australian birds, such as the superb fairy-wren, infidelity is the norm for emus, despite the initial pair bond: once the male starts brooding, the female mates with other males and may lay in multiple clutches; thus as many as half the chicks in a brood may be fathered by other males, or by neither parent, as emus also exhibit brood parasitism. Some females stay and defend the nest until the chicks start hatching, but most leave the nesting area completely to nest again; in a good season, a female emu may nest three times. If the parents stay together during the incubation period, they take turns standing guard over the eggs while the other drinks and feeds within earshot. If the guarding bird perceives a threat, it lies down on top of the nest and tries to blend in with the similar-looking surrounds, then suddenly stands up to confront and scare the intruder if it comes close. Incubation takes 56 days, and the male stops incubating the eggs shortly before they hatch. The male also increases the temperature of the nest during the eight-week period.
Although the eggs are laid sequentially, days apart, they tend to hatch within two days of one another, as eggs laid later are subject to higher temperatures and develop more quickly. During this process, the precocial emu chicks need to develop the capacity for thermoregulation: the embryos are ectothermic during incubation but need to have developed endothermy by the time they hatch. Newly hatched chicks are active and can leave the nest within a few days. They stand about 12 centimetres (5 in) tall, weigh 0.5 kg (18 oz), and have distinctive brown and cream stripes for camouflage, which fade after three months or so. The male stays with the growing chicks for up to seven months, defending them and teaching them how to find food. Chicks grow very quickly and are fully grown in five to six months; they may remain with their family group for another six months or so before splitting up to breed in their second season. During their early life, the young emus are defended by their father, who adopts a belligerent and standoffish stance towards other emus, even including the mother. He does so by ruffling his feathers, emitting sharp grunts, and kicking his legs to shoo off other animals. He can also bend his knees to crouch over and shield smaller chicks. At night, he envelops his young with his feathers. As the young emus cannot travel far, the parents must choose an area with plentiful food in which to breed. In the wild, emus live between 10 and 20 years; captive birds can live longer than those in the wild.

Few native natural predators of emus remain today. Early in its species history the emu may have faced numerous terrestrial predators now extinct, including the giant lizard Megalania, the thylacine, and possibly other carnivorous marsupials, which may explain its seemingly well-developed ability to defend itself from terrestrial predators. The main predator of emus today is the dingo, introduced to Australia thousands of years ago from a stock of semi-domesticated dogs. Dingoes try to kill an emu by attacking the head, and the emu typically tries to repel the dingo by jumping into the air and kicking or stamping at it on the way down. The emu jumps because the dingo barely has the capacity to leap high enough to threaten the emu's neck, so a correctly timed jump to coincide with the dingo's lunge can keep the head and neck out of danger. Despite the potential prey-predator relationship, the presence of predatory dingoes does not appear to heavily influence emu numbers, with other natural conditions just as likely to cause mortality.

Wedge-tailed eagles are the only avian predators capable of attacking fully grown emus, though they are perhaps most likely to take small or young birds. The eagles attack emus by swooping downwards rapidly and at high speed, aiming for the head and neck; against this attack the emu's jumping technique, as employed against the dingo, is not useful. The eagles try to target emus in open ground so that their prey cannot hide behind obstacles. Under such circumstances, the emu can only run in a chaotic manner, changing direction frequently to try to evade its attacker. Other raptors, perentie monitor lizards and introduced red foxes occasionally take emu eggs or kill small chicks.

Status and conservation

Emus were used as a source of food by indigenous Australians and early European settlers.
Aboriginal Australians used a variety of techniques to catch the bird, including spearing emus while they drank at waterholes, poisoning waterholes, catching emus in nets, and attracting them by imitating their calls or by dangling a ball of feathers and rags from a tree. They used pituri or other poisonous plants to contaminate water supplies and could then easily catch the disoriented emus that drank the water. They also sometimes disguised themselves in the skins of emus they had previously killed, and lured emus into camouflaged pits using rags or imitation calls. Aboriginal Australians did not kill the animals except to eat them, and frowned on anyone who hunted emus but left the meat unused. They also used almost every part of the carcass: aside from the meat, the fat was harvested for oil used to polish weapons, the bones were used as makeshift knives and tools, and the tendons were used for tying.

Europeans killed emus to provide food and to remove them when they interfered with farming or invaded settlements in search of water during drought. An extreme example of this was the Emu War in Western Australia in 1932, when emus that flocked to Campion during a hot summer scared the town's inhabitants and an unsuccessful attempt to drive them off was mounted, with the army called in to dispatch them in the so-called "war". There were two phases, the second of which started on 12 November, with mixed results. There have been two documented cases of humans being attacked by emus. The early white settlers also used emu fat to fuel lamps. In the 1930s, emu killings in Western Australia peaked at 57,000 per year; culls were also plentiful in Queensland at the same time because of rampant crop damage. Even in the 1960s, bounties were still being paid in Western Australia for killing emus.

In John Gould's Handbook to the Birds of Australia, first published in 1865, he laments the loss of the emu from Tasmania, where it had become rare and has since become extinct; he notes that emus were no longer common in the vicinity of Sydney and proposes that the species be given protected status. Wild emus are formally protected in Australia under the Environment Protection and Biodiversity Conservation Act 1999, and the IUCN rates their status as Least Concern. Their occurrence range spans 4,240,000–6,730,000 km2 (1,640,000–2,600,000 sq mi), and a 1992 population estimate was between 630,000 and 725,000 birds. Although the population of emus on mainland Australia is thought to be higher now than before European settlement, some wild populations are at risk of local extinction because of their small size. Threats to small populations include the clearance and fragmentation of habitat; deliberate slaughter; collisions with vehicles; and predation of the young and eggs by foxes, feral and domestic dogs, and feral pigs. The isolated emu population of the New South Wales North Coast Bioregion and Port Stephens is listed as endangered by the New South Wales Government.

Relationship with humans

The emu was an important source of meat to Aboriginal Australians in the areas to which it was endemic. Emu fat was used as bush medicine and was rubbed on the skin. It also served as a valuable lubricant, and was mixed with ochre to make the traditional paint for ceremonial body adornment, as well as to oil wooden tools and utensils such as the coolamon. One traditional account describes the preparation: "Emus are around all the time, in green times and dry times.
You pluck the feathers out first, then pull out the crop from the stomach, and put in the feathers you've pulled out, and then singe it on the fire. You wrap the milk guts that you've pulled out into something [such as] gum leaves and cook them. When you've got the fat off, you cut the meat up and cook it on fire made from river red gum wood."

Commercial emu farming started in Western Australia in 1987, and the first slaughtering occurred in 1990. In Australia, the commercial industry is based on stock bred in captivity, and all states except Tasmania have licensing requirements to protect wild emus. Outside Australia, emus are farmed on a large scale in North America, with about 1 million birds in the US, as well as in Peru and China, and to a lesser extent in some other countries. Emus breed well in captivity and are kept in large open pens to avoid the leg and digestive problems that arise with inactivity. They are typically fed on grain supplemented by grazing and are slaughtered at 50–70 weeks of age. They eat twice a day and prefer about 2.25 kilograms (5 lb) of leaves at each meal.

Emus are farmed primarily for their meat, leather, and oil. Emu meat is a low-fat meat (less than 1.5% fat), and with cholesterol at 85 mg per 100 g it is comparable to other lean meats. Most of the usable portions (the best cuts come from the thigh and the larger muscles of the drum, or lower leg) are, like other poultry, dark meat; for cooking purposes the USDA considers emu a red meat because its red colour and pH value approximate those of beef, but for inspection purposes it is considered poultry.

Emu fat is rendered to produce oil for cosmetics, dietary supplements, and therapeutic products. The oil is obtained from the subcutaneous and retroperitoneal fat: the adipose tissue is macerated and the liquefied fat is filtered to yield the oil. It has been used by indigenous Australians and early white settlers for purported healing benefits. The oil consists mainly of fatty acids, of which oleic acid (42%) and linoleic and palmitic acids (21% each) are the most prominent components. It also contains various anti-oxidants, notably carotenoids and flavones.

There is some evidence that the oil has anti-inflammatory properties; however, there have not yet been extensive tests, and the US Food and Drug Administration regards pure emu oil as an unapproved drug. Nevertheless, the oil has been linked to the easing of gastrointestinal inflammation, and tests on rats have shown that it has a significant effect in treating arthritis and joint pain, more so than olive or fish oils. It has been scientifically shown to improve the rate of wound healing, although the mechanism responsible for this effect is not understood. A 2008 study claimed that emu oil has a better anti-oxidative and anti-inflammatory potential than other avian and ratite oils, and linked this to its higher proportion of unsaturated relative to saturated fatty acids. While there are no scientific studies showing that emu oil is effective in humans, it is marketed and promoted as a dietary supplement with a wide variety of claimed health benefits. Commercially marketed emu oil supplements are poorly standardised, and such products are sometimes marketed deceptively; the FDA highlighted emu oil in a 2009 article on "How to Spot Health Fraud".
Emu leather has a distinctive patterned surface, caused by raised areas around the feather follicles in the skin; the leather is used in small items such as wallets and shoes, often in combination with other leathers. The feathers and eggs are used in decorative arts and crafts. In particular, emptied emu eggs have been engraved with portraits, similar to cameos, and with scenes of Australian native animals.

The Salem district administration in India advised farmers in 2012 not to invest in the emu business. In the United States, many ranchers had left the emu business by 2013; the number of growers was estimated to have dropped from about 5,500 in 1998 to one or two thousand in 2013, with remaining growers increasingly relying on sales of oil as a profit centre, although leather, eggs, and meat are also sold.

The emu has a prominent place in Australian Aboriginal mythology, including a creation myth of the Yuwaalaraay and other groups in New South Wales, who say that the sun was made by throwing an emu's egg into the sky; the bird features in numerous aetiological stories told across a number of Aboriginal groups. One story from Western Australia holds that a man once annoyed a small bird, which responded by throwing a boomerang, severing the man's arms and transforming him into a flightless emu. The Kurdaitcha man of Central Australia is said to wear sandals made of emu feathers to mask his footprints. Many Aboriginal language groups throughout Australia have a tradition that the dark dust lanes in the Milky Way represent a giant emu in the sky. Several of the Sydney rock engravings depict emus, and the animals are also depicted in indigenous dances.

The emu is popularly but unofficially considered a faunal emblem, the national bird of Australia. It appears as a shield bearer on the coat of arms of Australia alongside the red kangaroo and, as part of the arms, also appears on the Australian 50-cent coin. It has featured on numerous Australian postage stamps, including a pre-federation New South Wales 100th-anniversary issue from 1888, which featured a 2-pence blue emu stamp, a 36-cent stamp released in 1986, and a $1.35 stamp released in 1994. The hats of the Australian Light Horse are decorated with an emu feather plume.

There are around 600 gazetted places in Australia named after the emu, including mountains, lakes, creeks, and towns. During the 19th and 20th centuries, many Australian companies and household products were named after the bird; for example, in Western Australia, Emu beer has been produced since the early 20th century, and the Swan Brewery continues to produce a range of beers branded as "Emu". Emu – Austral Ornithology is the quarterly peer-reviewed publication of the Royal Australasian Ornithologists Union, also known as Birds Australia.
- "Names List for Dromaius novaehollandiae (Latham, 1790)". Department of the Environment, Water, Heritage and the Arts. Retrieved 3 November 2008. - "Emu". Oxford English Dictionary Online. Retrieved 23 January 2013. - "Emu". Merriam-Webster Online. Retrieved 16 February 2011. - Davies, S. J. J. F. (1963). "Emus". Australian Natural History 14: 225–229. - "Emu". NSW department of Environment & Heritage. Retrieved 5 February 2013. - Eastman, p. 5. - Gould, John (1865). Handbook to the Birds of Australia 2. London. - Gotch, A. F. (1995) . "16". Latin Names Explained. A Guide to the Scientific Classifications of Reptiles, Birds & Mammals. London: Facts on File. p. 179. ISBN 0-8160-3377-3. - Boles, Walter (6 April 2010). "Emu". Australian Museum. Retrieved 21 September 2010. - Wesson, Sue C. (2001). Aboriginal flora and fauna names of Victoria: As extracted from early surveyors' reports (PDF). Melbourne: Victorian Aboriginal Corporation for Languages. Retrieved 3 November 2008. - Troy, Jakelin (1993). The Sydney language. Canberra: Jakelin Troy. p. 54. ISBN 0-646-11015-2. - Alexander, W.B. (1927). "Generic name of the Emu" (PDF). Auk 44 (4): 59293. doi:10.2307/4074902. Retrieved 17 January 2011. - Tudge, Colin (2009). The Bird: A Natural History of Who Birds Are, Where They Came From, and How They Live. Random House Digital, Inc. p. 116. ISBN 0-307-34204-2. - Heupink, Tim H.; Huynen, Leon; Lambert, David M. (2011). "Ancient DNA Suggests Dwarf and 'Giant' Emu Are Conspecific". PLoS ONE 6 (4): e18728. doi:10.1371/journal.pone.0018728. PMC 3073985. PMID 21494561. - Ivory, Alicia. "Dromaius novaehollandiae: Information". University of Michigan. Retrieved 3 November 2008. - Reddy, A. Rajashekher (2005). "Commercial Emu and Ostrich rearing". Poulvet. Retrieved 3 November 2008. - "Tinamous and Ratites: Struthioniformes – Physical Characteristics – Kilograms, Pounds, Feathers, and Weigh – JRank Articles". Animals.jrank.org. Retrieved 14 August 2012. - Burnie D and Wilson DE (Eds.), Animal: The Definitive Visual Guide to the World's Wildlife. DK Adult (2005), ISBN 0789477645 - Eastman, p. 6. - Patak, A. E.; Baldwin, J. (1998). "Pelvic limb musculature in the Emu Dromaius novaehollandiae (Aves : Struthioniformes: Dromaiidae): Adaptations to high-speed running". Journal of Morphology 238 (1): 23–37. doi:10.1002/(SICI)1097-4687(199810)238:1<23::AID-JMOR2>3.0.CO;2-O. PMID 9768501. - Eastman, p. 9. - Eastman, p. 7. - Maloney, S. K.; Dawson, T. J. (1995). "The heat load from solar radiation on a large, diurnally active bird, the Emu (Dromaius novaehollandiae)". Journal of Thermal Biology 20 (5): 381–87. doi:10.1016/0306-4565(94)00073-R. - Eastman, pp. 5–6. - Eastman, p. 23. - Coddington and Cockburn, p. 366. - Maloney, S. K.; Dawson, T. J. (1994). "Thermoregulation in a large bird, the Emu (Dromaius novaehollandiae)". Comparative Biochemistry and Physiology. B, Biochemical Systemic and Environmental Physiology 164 (6): 464–472. doi:10.1007/BF00714584. - Maloney, S. K.; Dawson, T. J. (1998). "Ventilatory accommodation of oxygen demand and respiratory water loss in a large bird, the Emu (Dromaius novaehollandiae), and a re-examination of ventilatory allometry for birds". Physiological Zoology 71 (6): 712–719. PMID 9798259. - Maloney, p. 1293. - Maloney, p. 1295. - Davies, S. J. J. F. (1976). "The natural history of the Emu in comparison with that of other ratites". In Firth, H. J.; Calaby, J. H. (eds.). Proceedings of the 16th international ornithological congress. Australian Academy of Science. pp. 109–120. 
ISBN 0-85847-038-1. - Eastman, p. 15. - Eastman, p. 10. - Immelmann, K. (1960). "The Sleep of the Emu". Emu 60 (3): 193–195. doi:10.1071/MU960193. - Barker, R. D.; Vertjens, W. J. M. The Food of Australian Birds 1 Non-Passerines. CSIRO Australia. ISBN 0-643-05007-8. - Eastman, p. 44. - Powell, Robert (1990). Leaf and Branch. Department of Conservation and Land Management. p. 197. ISBN 0-7309-3916-2. "Quandong's fruits are an important food for the emu. ...major dispersers..." - Eastman, p. 31. - McGrath, R. J.; Bass, D. (1999). "Seed dispersal by Emus on the New South Wales north-east coast". Emu 99 (4): 248–252. doi:10.1071/MU99030. - Malecki, I. A. et al. (1998). "Endocrine and testicular changes in a short-day seasonally breeding bird, the Emu (Dromaius novaehollandiae), in southwestern Australia". Animal Reproduction Sciences 53 (1–4): 143–155. doi:10.1016/S0378-4320(98)00110-9. PMID 9835373. - Coddington and Cockburn, p. 367. - Coddington and Cockburn, p. 369. - Eastman, p. 24. - Reader's Digest Complete Book of Australian Birds. Reader's Digest Services. ISBN 0-909486-63-8. - Bassett, S. M. et al. (1999). "Genetically identical avian twins". Journal of Zoology 247 (4): 475–478. doi:10.1111/j.1469-7998.1999.tb01010.x. - Eastman, p. 25. - Taylor, E. L. et al. (2000). Genetic evidence for mixed parentage in nests of the Emu (Dromaius novaehollandiae) 47. Behavioural Ecology and Sociobiology. pp. 359–364. - Eastman, p. 26. - Maloney, p. 1299. - Eastman, p. 27. - "Emu". Parks Victoria. 2006. Retrieved 21 September 2010. - Eastman, p. 29. - Caughley, Grigg, Caughley & Hill (1980). "Does Dingo Predation Control the Densities of Kangaroos and Emus?". Australian Wildlife Resources 7: 1–12. doi:10.1071/WR9800001. - Wedge-Tailed Eagle (Australian Natural History Series) by Peggy Olsen. CSIRO Publishing (2005), ISBN 978-0643091658 - Eastman, p. 63. - "Attacked by an emu". The Argus. 10 August 1904. p. 8. Retrieved 20 September 2010. - "Victoria". The Mercury. 24 March 1873. p. 2. Retrieved 20 September 2010. - "Emu Dromaius novaehollandiae". BirdLife International. Retrieved 6 February 2009. - "Emu population in the NSW North Coast Bioregion and Port Stephens LGA – profile". Department of Environment, Climate Change and Water. Retrieved 21 September 2010. - Eastman, pp. 62–64. - Turner, Margaret–Mary (1994). Arrernte Foods: Foods from Central Australia. Alice Springs, Northern Territory: IAD Press. p. 47. ISBN 0-949659-76-2. - O'Malley, P. 1997. Emu Farming in The New Rural Industries. Rural Industries Research & Development Corporation - "Ratites (Emu, Ostrich, and Rhea)". United States Department of Agriculture. 28 April 2006. Retrieved 21 September 2010. - "Emu, full rump, raw". USDA National Nutrient Database for Standard Reference, Release 22. United States Department of Agriculture. 2009. Retrieved 21 September 2010. - Howarth, Lindsay, Butler and Geier, p. 1276. - Yoganathan, S.; Nicolosi, R.; Wilson, T. et al. (June 2003). "Antagonism of croton oil inflammation by topical emu oil in CD-1 mice". Lipids 38 (6): 603–607. doi:10.1007/s11745-003-1104-y. PMID 12934669. - "How to Spot Health Fraud". U.S. Food and Drug Administration. Retrieved 2011-11-16. - Bennett, Darin C.; Code, William E.; Godin, David V.; Cheng, Kimberly M. (2008). "Comparison of the antioxidant properties of emu oil with other avian oils". Australian Journal of Experimental Agriculture 48 (10): 1345–1350. doi:10.1071/EA08134. - Politis M. J.; Dmytrowich, A. (December 1998). 
"Promotion of second intention wound healing by emu oil lotion: comparative results with furasin, polysporin, and cortisone". Plastic and Reconstructive Surgery 102 (7): 2404–2407. doi:10.1097/00006534-199812000-00020. PMID 9858176. - Whitehouse, M. W.; Turner, A. G.; Davis, C. K.; Roberts, M. S. (1998). "Emu oil(s): A source of non-toxic transdermal anti-inflammatory agents in aboriginal medicine". Inflammopharmacology 6 (1): 1–8. doi:10.1007/s10787-998-0001-9. PMID 17638122. - Kurtzweil, Paula (30 April 2009). "How to Spot Health Fraud". U.S. Food and Drug Administration. Retrieved 29 June 2009. - Carved emu eggs, National Museum of Australia - Saravanan, L (21 April 2012). Don’t invest in Emu farms, say Salem authorities. The Times of India - Jim Robbins (7 February 2013). "Ranchers Find Hope in Flightless Bird’s Fat". The New York Times. Retrieved 8 February 2013. - Dixon, Roland B. (1916). "Australia". Oceanic Mythology. Charleston, South Carolina: Bibliobazaar. ISBN 0-8154-0059-4. Retrieved 30 September 2010. - Eastman, p. 60. - Norris, Ray P.; Hamacher, Duane W. (2010). "Astronomical Symbolism in Australian Aboriginal Rock Art". arXiv:1009.4753 [physics.hist-ph]. - Eastman, p. 62. - "Australia's Coat of Arms". Department of Foreign Affairs and Trade. January 2008. Retrieved 21 September 2010. - "Fifty cents". Royal Australian Mint. 2010. Retrieved 7 November 2011. - "Emu Stamps". Bird Stamps. Retrieved 1 November 2011. - "Tabulam and the Light Horse Tradition". Australian Light Horse Association. 2011. Retrieved 7 November 2011. - "Place Names Search Result". Geoscience Australia. 2004. Retrieved 30 September 2010. - "Emu Austral Ornithology". Royal Australasian Ornithologists´ Union. 2011. Retrieved 7 November 2011. - "Emu set for television comeback". BBC News. 8 June 2006. Retrieved 8 June 2006. - Coddington, Catherine L.; Cockburn, Andrew (1995). "The Mating System of Free-living Emus". Australian Journal of Zoology 43 (4): 365–372. doi:10.1071/ZO9950365. - Eastman, Maxine (1969). The life of the emu. London; Sydney: Angus and Robertson. ISBN 0-207-95120-9. - Howarth, Gordon S.; Lindsay, Ruth J.; Butler, Ross N.; Geier, Mark S. (2008). "Can emu oil ameliorate inflammatory disorders affecting the gastrointestinal system?". Australian Journal of Experimental Agriculture 48 (10): 1276–1279. doi:10.1071/EA08139. - Maloney, Shane K. (2008). "Thermoregulation in ratites: a review". Australian Journal of Experimental Agriculture 48 (10): 1293–1301. doi:10.1071/EA08142. - Stiglec, R.; Ezaz, T.; Graves, J. A. M. (2007). "A new look at the evolution of avian sex chromosomes". Cytogenet. Genome Res. 117 (1–4): 103–109. doi:10.1159/000103170. PMID 17675850. |Wikimedia Commons has media related to Dromaius novaehollandiae.| |Wikispecies has information related to: Emu| - Emu chicks emerging, article with sound clips, photos and videos. - "Kangaroo feathers" and the Australian Light Horse from the Australian War Memorial - Emu videos, photos & sounds on the Internet Bird Collection - A discussion of Emu eggs and how to cook them - National Museum of Australia Collection of hollow carved emu eggs featuring portraits of prominent Indigenous Australians - Beach, Chandler B., ed. (1914). "Emu". The New Student's Reference Work. Chicago: F. E. Compton and Co. |Look up emu in Wiktionary, the free dictionary.|
About This Chapter
GACE Physics: Force & the Laws of Motion - Chapter Summary
Our qualified instructors can make your study strategies more efficient and enhance your existing practical knowledge of force and the laws of motion. This chapter will help you prepare for the GACE Physics assessment by addressing these topics:
- Newton's laws of motion
- Differences between inertia and mass
- States of motion and velocity
- Different types of force
- Air resistance and free fall
- Free-body diagrams
- Types of friction
- Inclined planes
Our highly accessible video lessons are equipped with a unique tagging system that makes it easy for you to jump to the topics you need to study. You can complete the chapter exam once you've finished the last lesson quiz.
1. Newton's First Law of Motion: Examples of the Effect of Force on Motion
Newton's first law, the law of inertia, states that an object will remain at rest or in uniform motion unless acted upon by an external force. Learn whether Newton's first law applies to liquids and to human bodies, as well as whether it is applicable in space.
2. Distinguishing Between Inertia and Mass
Inertia is an object's tendency to resist changes in its motion, whereas mass is the amount of matter an object contains. Explore each of these concepts and learn to distinguish them and their properties in the real world.
3. Mass and Weight: Differences and Calculations
Mass is the amount of stuff, or matter, that an object contains, while weight is the force on that object because of gravity. Learn the differences between mass and weight, how to calculate them, and how to perform conversions using newtons and kilograms.
4. State of Motion and Velocity
Motion describes movement, and velocity describes how fast and in which direction. Explore the state of motion and the properties of speed and velocity, and learn how the concept that 'motion is relative' applies to travel.
5. Force: Definition and Types
A force is a push or pull that objects exert on each other. Discover the types of force, both contact and non-contact, through examples, including the different measures used to calculate them.
6. Forces: Balanced and Unbalanced
Forces, the push-pull interactions of objects, are often equal in size and opposite in direction--but not always! Learn examples of both balanced and unbalanced forces, and the physics results to expect from an encounter.
7. Free-Body Diagrams
Forces are represented in free-body vector diagrams. Understand forces and vectors, define free-body diagrams, and explore examples of free-body diagrams and how they work.
8. Net Force: Definition and Calculations
The net force is the vector sum of all the forces acting on an object. Understand the definition of net force in relation to forces and vectors, explore how to calculate net force, and examine how net force changes an object's state of motion and appears in free-body diagrams.
9. Newton's Second Law of Motion: The Relationship Between Force and Acceleration
Newton's second law of motion relates force and acceleration. Learn about net force, the implications of this particular law of motion, and calculations of acceleration and force for moving objects.
10. Determining the Acceleration of an Object
Acceleration is the rate of change of an object's velocity. Learn how to calculate acceleration from changes in velocity over time, and see how objects accelerate in free fall.
11.
Determining the Individual Forces Acting Upon an Object
The individual forces that can act upon an object are gravity, normal force, friction, air resistance, applied force, tension, spring force, electric force, and magnetic force. Explore each of these types of forces and analyze them through free-body diagrams.
12. Air Resistance and Free Fall
Air resistance is the frictional force of air that opposes the acceleration of objects in free fall. Explore this concept through the ratio of force to mass, and see how air resistance determines an object's terminal velocity.
13. Newton's Third Law of Motion: Examples of the Relationship Between Two Forces
Newton's third law of motion states that forces come in pairs. Learn how this law applies to interactions between objects both on Earth and in space.
14. Newton's Laws and Weight, Mass & Gravity
Gravity, the attractive pull between two objects that have mass, is a force that causes objects to accelerate as they fall. Explore how Newton's laws relate weight, mass, and gravity, including how to calculate them, how gravity affects them, and how they are commonly misunderstood.
15. Identifying Action and Reaction Force Pairs
Newton's third law of motion describes how action and reaction force pairs govern objects' interactions with each other. Explore how forces come in pairs, identify action and reaction, the effects of the forces, and how action equals reaction.
16. The Normal Force: Definition and Examples
In physics, the normal force is the contact force that occurs when one object rests against another, such as when a gallon of milk is set on a countertop. Explore the definition and examples of the normal force, and review how the normal force arises on a flat surface as well as on an inclined plane.
17. Friction: Definition and Types
Friction is the force that resists motion between two objects or surfaces that are in contact with each other. Learn the definition of friction, discover the two types of friction -- static friction and sliding friction -- and explore their differences.
18. Inclined Planes in Physics: Definition, Facts, and Examples
In physics, an inclined plane is a tilted, or sloping, surface that connects a higher elevation to a lower elevation. Explore the definition, facts, and examples of inclined planes in physics, learn about tilted surfaces and determining net force, see how friction affects net force, and discover the relationship between gravity and the normal force.
Earning College Credit
Did you know… We have over 220 college courses that prepare you to earn credit by exam that is accepted by over 1,500 colleges and universities. You can test out of the first two years of college and save thousands off your degree. Anyone can earn credit-by-exam regardless of age or education level.
To learn more, visit our Earning Credit Page.
Other chapters within the GACE Physics (530) Prep course
- About the GACE Test - GACE Physics: History & Nature of Science - GACE Physics: Science & Society - GACE Physics: Role in Daily Life & the Environment - GACE Physics: Scientific Inquiry - GACE Physics: Lab Safety Procedures & Hazards - GACE Physics: Scientific Measurement & Data - GACE Physics: Math for Physics - GACE Physics: Properties of Matter - GACE Physics: Kinematics - GACE Physics: Oscillations - GACE Physics: Laws of Gravitation - GACE Physics: Principles of Fluids - GACE Physics: Linear Momentum - GACE Physics: Work & Energy - GACE Physics: Equilibrium & Elasticity - GACE Physics: Rotational Motion - GACE Physics: Thermodynamics - GACE Physics: Atoms - GACE Physics: Modern & Nuclear Physics - GACE Physics: Relativity - GACE Physics: Electrostatics - GACE Physics: Magnetism - GACE Physics: Electric Circuits - GACE Physics: Waves, Sound & Light - GACE Physics: Wave Optics - GACE Physics: Energy Production - GACE Physics Flashcards
Polar coordinates are very similar to the "usual" rectangular coordinates: both systems are two dimensional, both locate a point in the plane, and both use a pair of numbers: the rectangular system uses (x, y) and the polar coordinate system uses (r, θ).
Plotting Polar Coordinates
To plot polar coordinates, you need two pieces of information, r and θ:
- θ tells you the ray's angle from the polar axis (the positive part of the x-axis).
- r tells you how to move on the ray. If r > 0, move along the ray. If r < 0, move along the opposite ray.
The easiest way to understand how to plot polar coordinates is to plot one first on the familiar x-y axis. Let's say you wanted to plot point P, located at (4, 15°). Here, r = 4, so the point lies 4 units from the origin. The angle of the ray from the x-axis is 15 degrees, so the goal is to draw the 15-degree ray, then locate the point 4 units along it.
What about (-4, 15°)? To get a negative "r" value, just move in the opposite direction along the ray, much as you would with a negative rectangular coordinate. In other words, the negative number in (-4, 15°) means that you travel in the opposite direction from the 15-degree ray. You could also write the exact same point as (4, 195°).
Why Use Polar Coordinates?
Polar coordinates make it easier to describe natural phenomena involving circular motion about a central point, like the motion of planets around the sun or electrons around a nucleus.
Cylindrical coordinates are an extension of two-dimensional polar coordinates to three dimensions. With cylindrical coordinates, the usual x- and y-coordinates of a point in the Cartesian plane are replaced by polar coordinates. A point P = (x, y, z) has cylindrical coordinates P = (r, θ, z), where (r, θ) are polar coordinates of (x, y).
Use of Cylindrical Coordinates
Primarily used in physics, the cylindrical coordinate system (formally called the circular cylindrical or circular polar coordinate system) eases calculations involving cylinders or cylindrical symmetries (Hassani, 2009). For example:
- Pipe flows with no-slip conditions at the wall require the use of cylindrical coordinates (Orlandi, 2012).
- The magnetic field generated by a current flowing in a long straight wire is more conveniently expressed in cylindrical form (Rogawski, 2007).
Converting from Rectangular (Cartesian) to Cylindrical Coordinates
To convert, replace the x and y coordinates with the polar coordinates r and θ, using the following relations:
- r = √(x² + y²)
- tan θ = y/x
The z-coordinate remains unchanged.
Example question: Convert (√3, 1, 4) from rectangular to cylindrical coordinates.
Solution: Convert the coordinates one at a time using the above relations, to get:
- r = √(x² + y²) = √(3 + 1) = 2
- θ = tan⁻¹(y/x) = tan⁻¹(1/√3) = π/6
- z = 4.
Putting those together, we get (2, π/6, 4).
Converting from Cylindrical to Rectangular Coordinates
Use the relations:
- x = r cos θ
- y = r sin θ.
The z-coordinate remains unchanged.
Example question: Convert the cylindrical coordinates (3, π/3, -4) to rectangular coordinates.
- x = r cos θ = 3 cos(π/3) = 3(½) = 3/2
- y = r sin θ = 3 sin(π/3) = 3(√3/2) = (3√3)/2
The z-coordinate remains unchanged, giving: (3/2, (3√3)/2, -4).
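The conversions above are easy to script. Below is a minimal sketch in R that reproduces both worked examples; the helper names rect_to_cyl and cyl_to_rect are invented for this illustration (not from any package), and atan2(y, x) is used in place of tan⁻¹(y/x) so the angle lands in the correct quadrant.

rect_to_cyl <- function(x, y, z) {
  r     <- sqrt(x^2 + y^2)  # r = sqrt(x^2 + y^2)
  theta <- atan2(y, x)      # quadrant-aware version of tan(theta) = y/x
  c(r = r, theta = theta, z = z)  # z is unchanged
}

cyl_to_rect <- function(r, theta, z) {
  c(x = r * cos(theta), y = r * sin(theta), z = z)  # z is unchanged
}

rect_to_cyl(sqrt(3), 1, 4)  # r = 2, theta = pi/6 (about 0.524), z = 4
cyl_to_rect(3, pi / 3, -4)  # x = 3/2, y = (3*sqrt(3))/2 (about 2.598), z = -4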
A polar function relates a radial distance r to an angle θ. These functions are based in the polar coordinate system. Many natural phenomena can be represented by a polar function. For example:
- Anywhere objects move in circles (for example, the movement of electrons),
- A plan position indicator (a type of radar display) used in air traffic control, ship navigation and meteorology,
- Radiance functions for material brightness, which can be represented by a polar function on the unit sphere (Robles-Kelly & Hancock, 2004).
Types of Polar Function
A polar function is defined by the polar equation r = f(θ). Many polar functions have been classified in detail, including:
1. Lemniscate
Polar functions of the form r² = a² sin(2θ) or r² = a² cos(2θ) are lemniscates (from the Latin lēmniscātus, meaning decorated with ribbons).
2. Limaçon
Limaçons (from the Latin limax, meaning snail) are formed by the following equations:
- r = a + b sin θ,
- r = a − b sin θ,
- r = a + b cos θ,
- r = a − b cos θ.
A cardioid (a heart-shaped curve) is the special case of the limaçon graphed when a = b.
3. Rose Polar Function
Polar functions of the form r = a sin nθ or r = a cos nθ graph roses. The two functions look almost identical, except that one is a rotated (shifted) copy of the other.
4. The Archimedean Spiral
The Archimedean spiral is defined by r = aθ.
The polar derivative generalizes the usual derivative to polar coordinates. In other words, the derivative rules you used in elementary calculus only work in the Cartesian plane. In order to find the derivative of a polar function, you have to use a different formula. As polar coordinates are based on angles, it should be no surprise that derivatives involve a little trigonometry. A polar coordinate can be expressed in terms of:
- The distance from the origin (r) and
- An angle (θ).
The first derivative of a polar curve r = f(θ) uses these coordinates in the formula:
dy/dx = [(dr/dθ) sin θ + r cos θ] / [(dr/dθ) cos θ − r sin θ].
Polar Derivative: Example Problem
There are a couple of ways to find a polar derivative. The first is to use the above formula. However, instead of memorizing yet another formula, you could convert your coordinates and use the product rule instead. The following example shows how this method works to get the same result.
Example Question: Find the polar derivative of r = 2 sin(θ) at π/2.
Step 1: Convert the polar coordinates to rectangular form, using:
- x = r cos(θ)
- y = r sin(θ)
We're given r = 2 sin(θ) in the question, so:
- x = (2 sin θ) cos(θ)
- y = (2 sin θ) sin(θ)
Step 2: Find the derivative of y from Step 1. The function y = (2 sin θ) sin(θ) is two functions multiplied together, so for this example, use the product rule: (f · g)′ = f′ · g + f · g′. Inserting the value for "y" from Step 1 into the product rule formula, we get:
(2 sin θ)(cos θ) + (sin θ)(2 cos θ)
Step 3: Find the derivative of x from Step 1. The function x = (2 sin θ) cos(θ) can be differentiated with the product rule as well, so:
(2 sin θ)(−sin θ) + (cos θ)(2 cos θ)
Step 4: Divide Step 2 (dy/dθ) by Step 3 (dx/dθ), to get:
dy/dx = [(2 sin θ)(cos θ) + (sin θ)(2 cos θ)] / [(2 sin θ)(−sin θ) + (cos θ)(2 cos θ)] = (4 sin θ cos θ) / (2 cos²θ − 2 sin²θ)
Step 5: Insert your value for θ, which is given in the question as π/2. The numerator becomes 4 sin(π/2) cos(π/2) and the denominator becomes 2 cos²(π/2) − 2 sin²(π/2) = −2.
Step 6: Simplify, using a calculator to find values. For example, cos(π/2) = 0, so the numerator is 0.
Solution: The derivative is 0.
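As a quick sanity check on the worked example, here is a short R sketch (invented for this illustration) that codes the Step 2 and Step 3 expressions directly and evaluates their quotient at θ = π/2:

dy_dtheta <- function(theta) (2 * sin(theta)) * cos(theta) + sin(theta) * (2 * cos(theta))    # Step 2
dx_dtheta <- function(theta) (2 * sin(theta)) * (-sin(theta)) + cos(theta) * (2 * cos(theta)) # Step 3

slope <- function(theta) dy_dtheta(theta) / dx_dtheta(theta)  # dy/dx = (dy/dtheta)/(dx/dtheta)
slope(pi / 2)  # returns 0, matching the solution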
Polar Derivative and Complex Numbers
The polar derivative can also be defined for complex numbers as:
Dα p(z) = np(z) + (α − z)p′(z),
where α is a complex number and p ∈ Pₙ, the set of polynomials of degree n (Li, 2011). There are other slightly different notations. For example, you could write the polar derivative with respect to zeta (ζ) as:
fζ(z) := nf(z) + (ζ − z)f′(z),
where ζ is a complex number. If the degree of f(z) is n, then fζ(z) is a polynomial of degree n − 1. If ζ = ∞, then f∞ is equal to the ordinary derivative (Barsegian et al., 2006).
Polar Derivative: References
Barsegian, G. et al. (2006). Value Distribution Theory and Related Topics. Advances in Complex Analysis and Its Applications, Book 3. Springer Science & Business Media.
Glahodny, G. Section 13.9: Cylindrical and Spherical Coordinates. Retrieved December 2, 2020 from: https://www2.math.tamu.edu/~glahodny/
Hassani, S. (2009). Mathematical Methods For Students of Physics and Related Fields. Springer.
Li, X. (2011). A Comparison Inequality for Rational Functions. Proceedings of the American Mathematical Society, Volume 139, Number 5. Retrieved September 2, 2020 from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.353.2854&rep=rep1&type=pdf
Occhiogrosso, M. (2007). Polar Coordinates and Trigonometric Form: Trigonometry. Milliken Publishing Company.
Orlandi, P. (2012). Fluid Flow Phenomena: A Numerical Toolkit (Fluid Mechanics and Its Applications, Book 55), Kindle Edition. Springer.
Robles-Kelly, A. & Hancock, E. (2004). Radiance Function Estimation for Object Classification. In Progress in Pattern Recognition, Image Analysis and Applications: 9th Iberoamerican Congress on Pattern Recognition, CIARP. Springer.
Rogawski, J. (2007). Multivariable Calculus: Early Transcendentals. W. H. Freeman.
Shiver, J. Polar Equations and Their Graphs. Retrieved September 4, 2020 from: http://jwilson.coe.uga.edu/EMT668/EMAT6680.2003.fall/Shiver/assignment11/PolarGraphs.htm
Triangulation is a simple, geometrically based process that makes accurate map-making possible. It works by determining the location and distance of a faraway object by measuring angles to it from known points at either end of a fixed, known baseline. It is based on a system devised over two thousand years ago by Euclid, the father of geometry. Trigonometry (the study of triangles) emerged about the third century BC. It stated that a triangle has six parts (three sides and three angles), and it postulated that given almost any three of them (three sides; two sides and one angle; or, most importantly, one side and two angles) the other three unknowns can be found. Thus if we have a baseline and two angles, we can draw the triangle and calculate not only the distance from the apex to the baseline but also the lengths of the other two sides. This is the very simple basis of the trigonometrical/triangulation principle of distance measurement (a short worked sketch follows at the end of this passage). The entire map can then be composed of a series of small triangles.
Triangulation was also originally used to calculate the distance of stars and planets, by taking angular sight readings to a distant stellar object from both sides of the globe, across the diameter of the earth (the known baseline). This method was further improved in accuracy by taking angular sight readings across the diameter of the earth's orbit at six-month intervals, thus extending the width of the baseline for greater precision.
In mapping a landscape, if we have the distance between two trig points (the baseline) and two angular sightings from both ends of this baseline to a distant trig point on another summit, then we can solve the triangle and determine the various distances. Only one baseline ever needs to be manually measured, because all of the subsequent sides (i.e. distances in km) of all the other triangles can be calculated. Hence, incredible precision and a meticulous approach were needed when measuring the baseline many years ago. A single error in the baseline would be multiplied hundreds of times in all the subsequent calculated triangles.
The Triangulation Mapping Survey of Ireland 1824-1842
Europe, after the French Revolution, was in turmoil, with Austria, Russia, Turkey and Napoleon's France all flexing their collective muscles, and the Napoleonic Wars were just over the horizon by 1803. Without good maps, Britain felt vulnerable: it could not position its army strategically or defensively. The Ordnance Survey (OS) was established in Britain in 1791 to prepare a detailed map of the country and thereby to help defend Britain from external attack; concern was rife about Napoleon's ambitions at that time.
The origins of the Ordnance Survey in Britain go back to a triangulation survey carried out for King George III and The Royal Society between 1784 and 1790. The survey determined the relative positions of the Greenwich Observatory and L'Observatoire de Paris, and measured the distance between the two observatories. Major General William Roy carried out the survey under the authority of the Master General of the Board of Ordnance, and Roy's first action was to measure a survey baseline across Hounslow Heath during the summer of 1784. The baseline was approximately five miles long, running through Hounslow Heath. The two terminals of this baseline are now marked by contemporary military cannon set in the ground muzzle upward. The north-western portion of the baseline is now occupied by Heathrow Airport.
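Here is the promised sketch of the baseline-plus-two-angles calculation, written in R. The function and its sample inputs are invented for illustration; only the 7.89-mile baseline length is borrowed from the Lough Foyle survey described later.

# Given a baseline and the two angles measured at its ends (in radians),
# the law of sines gives the lengths of the other two sides.
triangulate <- function(baseline, angle_a, angle_b) {
  angle_c <- pi - angle_a - angle_b  # angles of a triangle sum to pi
  c(side_opposite_a = baseline * sin(angle_a) / sin(angle_c),
    side_opposite_b = baseline * sin(angle_b) / sin(angle_c))
}

# A 7.89-mile baseline with sightings of 62 and 75 degrees at its two ends:
triangulate(7.89, 62 * pi / 180, 75 * pi / 180)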
The Hounslow Heath baseline was used in the future mapping of Britain and became the core linear measurement for the entire triangulation process for mapping Great Britain. By 1794, mapmakers had started the triangulation of the English coast from Sussex to Dorset. The coastal areas were initially of greater priority to the OS because of their strategic and defensive nature. It was also believed in London that Ireland required attention in this regard, for broadly similar military reasons as well as for the purposes of land taxation. Accordingly, after 1800, many of the British OS staff were shipped across the sea to Ireland to initiate an Irish mapping project and to produce a detailed six-inch map of Ireland for military and land taxation purposes.
The Ordnance Survey of Ireland (OSI) was established in 1824 as part of this British army (post 1800 Act of Union) plan to create a detailed map of Ireland. This mapping project for Ireland had the objective of providing an accurate representation of lands, landscape, townlands, villages and holdings. This would in turn facilitate the collection of local taxes and the identification of boundaries, and would also be useful to Britain for other regulatory concerns and military planning. The failed insurrection of 1798 and Emmet's abortive uprising of 1803 were still fresh in British minds, and 1798 in particular had stressed the importance of knowing and policing the Irish landscape, all of which had led to Lord Cornwallis's building of the Military Road.
In the early 1820s, a consistent island-wide valuation of property was initiated by the British parliament as a basis for an effective taxation system, involving landlords and their large estates as well as smaller townlands and parcels of land. The Irish map was intended to be much more detailed than the British one. At that time, the OSI office was located in Mountjoy House in the Phoenix Park, a building originally constructed in 1728. It later housed the cavalry of the Lord Lieutenant, who lived nearby in the Vice Regal Lodge in the park. The OSI continued to operate under military control until 1924, when it was transferred to the fledgling Irish Department of Defence.
The comprehensive mapping and triangulation of Ireland was commenced by the OSI in 1824 and was completed in 1842. Ireland was the first country in the world to be so extensively mapped in such detail, at a scale of six inches to one mile. The work was carried out by 2,000 members of the Royal Engineers. Major Thomas Colby was in charge of the survey, which produced many innovations and introduced many novel scientific techniques. Some independent Irish engineers were also recruited and were involved in the survey. These engineers were under the control of Richard Griffith, later to become famous for his national valuation.
Numerous triangulation stations (trig stations) and triangulation buildings were built and established at various high points in Ireland. These buildings were used for visual observations using light sources and reflectors.
Triangulation building at Magilligan.
Thomas Drummond was the principal surveyor in this project.
He had a problem in that some of the visual sightings from station to station extended over long distances, and thus more powerful light was needed than that provided by the Argand lamps then in use. He developed the intensely bright light known as limelight, which could be seen at great distances, and he also invented the heliostat reflector, which facilitated sighting of the limelight at distant trig stations in hazy and cloudy conditions. These latter devices allowed visualisation and measurement from one trig station to another.
Colby and Drummond also invented "compensation bars" of iron and brass, which allowed more precise measurement of the linear distance used for the baseline. These bars compensated for the expansion and contraction of the metals when temperatures varied, and Colby factored in the coefficient of expansion for greater accuracy in the eventual measurement of the linear distance. Thus greater consistency and precision were guaranteed in his linear measurement.
Measuring the Baseline
The process of triangulation needs a baseline to be accurately measured. This is the base of the triangle which, with two angles, allows the triangle to be drawn and the distances to be determined. In order to complete the network of triangles, the length of one leg of one triangle had to be measured carefully. The longer the leg, the greater the angular accuracy. The OS in Britain had done all this years earlier, in 1784, establishing a baseline along a five-mile stretch of Hounslow Heath. The leg chosen in Ireland as the core national triangulation baseline ran along the shores of Lough Foyle, and this was known as the Lough Foyle baseline. Once this was measured, all the other smaller triangles and measurements fell into place.
The accuracy of measurement of the Lough Foyle baseline was paramount. Errors due to inaccuracies in the baseline measurement would amplify as more triangles were calculated. Measurement of the Lough Foyle baseline began in 1827 and lasted for 60 days. The distance of 7.89 miles was carefully measured by 70 men using tripods and compensation bars. The baseline started at Magilligan and extended to Ballykelly, near Derry. Once the two end angles were known, this single measurement alone was sufficient to calculate the lengths of all the other sides of all of the other triangles. The Lough Foyle base was remeasured using modern techniques in 1960, and the result differed from the 1828 value by only 2.5 cm!
Other persons were involved in the 1824 survey, because the engineers of the British army and British OS staff could not undertake every single aspect of the work, such as local Gaelic place names, Irish language derivations, archaeology, and local history. Musicologist and archaeologist George Petrie assembled a team for this purpose, including the Irish scholars John O'Donovan, Eugene O'Curry, and Thomas Larcom, and some painters and poets including James Clarence Mangan. These people supplied much of the corroborative detail and local colour to the mathematical detail of the large-scale maps, e.g. place names, Gaelic derivations, populations, economy, agriculture, etc. Petrie, Larcom and John O'Donovan in particular extensively researched the origins and history of local Irish place names and drew up translations. Petrie also headed the Survey's Topographical Department.
Benchmarks were also a part of this first survey, and they were used to record the height of land features above sea level.
The height of a point above sea level was marked with the shape of a crow's foot cut into bricks, walls, corners and buildings. The reference point for sea level was set at the low-water mark of the spring tide at Poolbeg Lighthouse on 8 April 1837. This reference point remained until 1970, when it was superseded by a point at Malin Head. These cut arrows, or crow's feet, were the commonest form of benchmark and consisted of a horizontal bar cut into a wall or other structure. This bar was the actual benchmark. A broad arrow was cut immediately below the centre of the horizontal bar, giving the "crow's foot" appearance.
By 1846, the entire island of Ireland had been surveyed and a series of 2,000 maps at a scale of six inches to the mile had been published. The total cost of the survey was £860,000. Ireland was the first country in the world to be mapped at this scale. By 1867, everywhere from Fair Head to Mizen Head had been surveyed in great detail, and Sir Richard Griffith and his team moved in to commence their valuations (Griffith's Valuation).
The Retriangulation of Ireland, 1959
The OSI began their retriangulation mapping project of Ireland in 1959. This time, instead of the buildings known as "triangulation stations" used in the 1824 survey, the high measurement points were marked by the construction of "triangulation pillars", or trig pillars. In the UK in the early 20th century, map-making was still based on the Principal Triangulation, a piecemeal collection of observations taken between 1783 and 1853. The system was starting to collapse and couldn't support the more accurate mapping needed to track the rapid socio-economic development of Britain after the Great War. This led to Hotine's development of the trig pillar and, using it, a much more accurate mapping of the UK commenced in 1935.
In the UK, the process of placing trig pillars on top of prominent hills and mountains began in 1935 to assist in the accurate retriangulation of Great Britain. Over 6,500 trig pillars were constructed across the length and breadth of the UK to map the country more accurately and with greater precision. They continued being used until 1962. The Irish trig pillars were essentially designed and constructed on the principle of the British Hotine trig pillar model, which had been used in the UK from 1935 onwards into the 1940s.
These concrete trig pillars were constructed by the OSI in Ireland in 1959, often using donkey power to deliver the construction materials to the summits. All the pillars were made by hand and, amazingly, all the materials and equipment used to build these pillars were carried to the tops of the hills by the surveyors and their staff or assistants. Recording of data and sight lines was not always easy on account of high winds or poor visibility at summits, and often the surveyors had to camp on the summits.
In Ireland, as in the UK, these new trig pillars gave rise to more accurate triangulation techniques, because they used precision theodolites on top of the pillar, positioned over a centralised sunken brass bolt. This very accurate process of triangulation measurement gave rise to the Ordnance Survey maps as we know them today. The coordinate system used on these trig-pillar-based OS maps is known as the National Grid.
The Development of the Trig Pillar
Martin Hotine & the Retriangulation of Britain in 1935
In the UK in 1935, the British Ordnance Survey, in a project led by Brigadier Martin Hotine, decided to implement a completely new mapping and grid network for the whole country, and at the same time to unify the mapping from local county projections onto a single national grid and reference system. This led to the establishment of the OSGB36 datum and the UK National Grid, both of which are still operational today. A key point of this measurement system was the trig pillar.
The man responsible for the trig pillar that we all recognise today on mountaintops all over Ireland was Brigadier Martin Hotine. Born in 1898 in London, Hotine became head of the Trigonometrical and Levelling Division at the OS in the UK. Hotine was responsible for the design, planning and implementation of the retriangulation process of mapping. In order to provide a solid base for the theodolites used by the survey teams, and to improve the accuracy of the readings obtained, he invented and designed the iconic trig pillar. As a result, they are sometimes referred to as "Hotine Pillars". The pillars became a key feature of the accurate triangulation and mapping of Britain.
In actuality, it became rather difficult to locate and identify key sites for many of the trig pillars. They needed on the one hand to be located at high altitude, but on the other hand this necessitated carrying and transporting the heavy and cumbersome materials to remote sites and then building the trig pillars at the summit.
In most parts of Ireland and the UK, trig points are truncated square concrete pyramids or obelisks tapering towards the top. On the top, a brass plate with three arms and a central depression is fixed: it is used to mount and centre a theodolite used to take angular measurements to neighbouring trig points. A benchmark is usually set on the side, marked with the letters "O S B M" (Ordnance Survey Bench Mark) and the reference number of the trig point. (Within and below the visible trig point, there may be concealed reference marks whose National Grid References are precisely known.)
Use of Trig Pillars
"Trig points" is the common name for triangulation pillars. These are concrete pillars, about 4 feet tall, which were used by the Ordnance Survey in the UK and Ireland to assist in cartography and distance measurement. They are generally constructed at the highest altitude possible in an area, so that there is a direct and unobstructed line of sight from one trig pillar to the next. This process is called "triangulation". Careful measurement of the angles between the lines of sight to other trig points then allowed the construction of a system of triangles which could be referenced back to a single baseline to construct a highly accurate measurement system that covered the entire country.
A theodolite is used as the key instrument in such calculations. A theodolite, in essence, is a protractor (an angle measurer) set into a telescope. It can operate in the horizontal and vertical planes. By sitting the theodolite on top of the flat concrete pillar (the "spider" or "top plate"), accurate angles between other nearby trig points can be measured. In practice, a theodolite would have been secured to the top mounting plate and made level. It would then be directly over the brass bolt underneath the pillar. Angles were then measured from the pillar to other surrounding points.
A theodolite is a precision instrument, and before focusing and measuring angles through the eyepiece a number of preliminary procedures must be undertaken:
- Setting up: fixing the theodolite onto a tripod or base, along with approximate levelling and centering over the station mark. The theodolite had to be lined up directly with the brass bolt below and within the pillar.
- Centering: bringing the vertical axis of the theodolite immediately over the station mark, using a centering plate also known as a tribrach.
- Levelling: levelling the base of the instrument to make the vertical axis truly vertical, usually with an in-built bubble level.
Thus, to facilitate the accuracy of the theodolite, a trig pillar must be accurately and precisely constructed, in terms of the level spider or top plate, the central core and the brass bolt set into the base.
With the advancement of modern scientific procedures and the arrival of better and more accurate technologies and digital techniques, such as satellite technology, GPS and digital combinations, the original traditional trig pillar is now obsolete and redundant in its original guise. Nonetheless, an interesting point is that, despite their rudimentary nature, the original measurements made via theodolites and trig pillars were incredibly accurate: when compared with GPS measurements years later, the distances calculated, if indeed they vary at all, do so by only millimetres or a few centimetres!
The pillars are about four feet high on a wide base and taper towards a flat top with a mounting plate to hold a theodolite. Trigonometrical pillars are grouped together to form a triangulation network. Each pillar is in the clear sight line of several other trig pillars on distant summits so that their relative positions may be determined. Angles can be measured very precisely by taking theodolite sightings to distant peaks. In this way a network of triangles can be built up covering the entire country. Larger triangles can be subdivided into smaller ones, yielding a detailed, interlocking mesh which facilitates fine-scale mapping. If we know the length of one side of a triangle (the baseline) and the two angles at its ends, we can use trigonometry, the mathematics of triangles, to calculate the lengths of the other two sides. Thus distances between hills, or to distant hills, are easily measured.
The Ordnance Survey originally mapped Ireland in the 1820s. However, a more accurate and detailed retriangulation of the country began in 1959, and the familiar concrete pillars were erected on many hilltops at this time. The OSI took charge of the planning, overseeing and construction of these pillars, based on the British Hotine model. Thus the trig pillars we see on Irish hilltops today mostly date from sixty years ago.
Structure and Construction of Trig Pillars
Since angular measurement is such a precise visual science, the pillar on which the theodolite is placed must be of very solid and reliable design. The pillar was of concrete, built over a concrete box that just protrudes above soil level. The top of the pillar had a spider or top plate for the theodolite. A flush bracket was located on the side of the pillar, displaying a benchmark and an OSI serial number. Originally this was an indentation holding a metal plate.
Then it was made "flush" with the pillar (the flush bracket). It always displays the benchmark (BM), giving the height above sea level, and the serial number of the trig pillar. The trig pillar is usually a hand-cast concrete pillar, 4 feet high and 2 feet square at the base, tapering towards a flat top. Each pillar is in view of several others so that their position may be triangulated. Trig pillars usually have a "spider" or "top plate" used to fix a theodolite or other ordnance surveying device to. Many have a flush bracket fixed to the side with an identifying number on it.
The original Hotine design consisted of a lower buried centre mark, a brass bolt set in concrete (the bolt over which the theodolite would be centred), inserted at a sufficient depth below ground level to be independent of the pillar foundations. This is a key central reference point. This depth naturally varies with the soil; on boggy ground, which was often encountered on hilltops, it was sometimes necessary to excavate as much as 15 feet before reaching rock or firm soil on which to emplace the lower mark. In such a case a correspondingly deep pillar foundation is necessary, whereas on out-cropping solid rock a bolt is simply cemented into a hole drilled in the rock. Oftentimes, depending on the soil, the trig pillar could be compared to an iceberg, with more below the surface than above it.
The lower centre mark and its concrete setting were covered with a small wooden box (which eventually disintegrates) to prevent adherence to the pillar base. Concreting of the pillar base commenced immediately over and around the box covering the lower mark and continued up to ground level, where it was left rough to set. Four angle-iron bars were then set in the base to project well up into the corners of the pillar, as a means of preventing fracture between the base and the pillar. The pillar bolt (the upper centre mark) was also set in the base. The pillar bolt was next covered with a small wooden box, which was provided with side holes (to take the inner ends of the four sighting and drainage pipes) and a top hole (to take the lower end of the galvanized pipe running down the centre of the pillar).
Wooden shuttering was then erected on the pillar base. This shuttering had four side holes to take the outer ends of the four sighting pipes, which were then inserted, and a wedge fillet to which the level flush bracket in one side of the pillar could be wired in a vertical position. It also carried wooden corner fillets to provide an automatic chamfering of the edges of the pillar. The centre pipe, which serves as further reinforcement, was set in position and plumbed, the plumbing being continually checked during concreting. Before the concrete set, the brass spider, complete with holding-down bolts, was set over the centre pipe and was carefully plumbed over the pillar bolt from a special temporary fitting to the spider. Concreting was then carried up to the top of the spider.
On the top of every trig point is a brass plate with three arms, which was used to mount the theodolite from which accurate measurements and angles to neighbouring trig points could be made.
Top plate, or "spider", for the theodolite. On one side there is an indentation with a metal plate, and here is found the benchmark.
Trig point built on boggy soil, now eroded, showing the large fraction of the pillar that lies underground.
Benchmarks on the Side of Trig Pillars
A benchmark (BM) forms the reference frame for heights above mean sea level. If the exact height of one BM is known, the exact height of the next can be found by measuring the difference in heights, through a process of spirit levelling. Benchmarks are on the sides of all trig pillars in the form of a flush bracket.
The term benchmark derives from the chiselled horizontal indented line that the surveyors made in stone structures, into which an angle-iron could be placed to form a "bench" for a levelling rod (a graduated measuring rod). A surveyor's "bench" is a type of bracket onto which measuring equipment is mounted. These lines were usually indicated with a chiselled arrow below the horizontal line. The term is generally applied to any item used to mark a point as an elevation reference when compared to sea level. In 1837, the reference sea-level point was set as the low-water mark of the spring tide at Poolbeg Lighthouse, Dublin Bay, on 8 April 1837, as the primary comparative standard. The network of benchmarks from the first levelling left a mark on the landscape in the form of the crow's foot cut into walls, buildings and bridges all over Ireland. This Poolbeg reference point remained in use until it was superseded by Mean Sea Level at Malin Head in 1970.
Flush bracket showing the benchmark on the side of a trig pillar. The BM number, the recess for the "bench" (bracket), and the arrow ("crow's foot") are clearly visible.
Benchmarks were historically used to record the height above sea level of a location as surveyed against the Mean Sea Level datum. Thousands of benchmarks were sited all around the UK and Ireland from the mid 19th to the late 20th centuries. The recorded altitude refers to the small horizontal platform at the point of the broad arrow marked on the plate face. Benchmarks can also be seen not only on trig pillars but also on stone buildings, low down near the corner. A cut benchmark is the commonest form of benchmark. It consists of a horizontal bar cut into vertical brickwork or similar surfaces. A broad arrow is cut immediately below the centre of the horizontal bar.
And Finally
Although trig pillars are now obsolete and generally in some disrepair, their role having been overtaken by satellite technology and GPS, many hill walkers now like to bag trig pillars ("trigpointing") and add to their collection! Trigpointing is now becoming quite a popular hobby, perhaps more so in Great Britain (there are a lot more pillars there). (In 2016 a UK hillwalker, Rob Woodall, bagged and visited all of the 6,000-plus trig pillars in the UK, for which he received an award.)
With the advance of satellite mapping, the Ordnance Survey in Britain has decided to retire 5,000 of its 6,000 trig pillars, because they are no longer needed to pinpoint accurately the positions of landmarks. The Ordnance Survey (UK) has decided to stop inspecting and maintaining these pillars and is looking for people who will volunteer to do it for them. Thousands of people all over Britain are volunteering to adopt abandoned trig pillars, those familiar concrete monuments used by the Ordnance Survey to map the country. About 2,000 of the 5,000 redundant pillars have already been assigned on a first come, first served basis.
The person adopting a pillar must inspect it twice yearly and paint it when necessary. This clearly gives another meaning to the term "pillars of the community"! Maybe this might also happen in Ireland!
In Ireland, although trig stations and trig pillars are now redundant, being unnecessary for modern surveying purposes, they are greatly loved by hikers and mountaineers as navigational aids, as a type of comfort blanket, and as a visual objective. It is always a pleasure for most hill walkers and hikers to reach a summit, no matter how big or small, have a photo taken at the trig pillar, or take time out and sit in the sunshine and unwrap one's sandwiches while resting against these small historical monoliths! And of course, while you are there, read your OSI map and check your grid reference, and don't forget, as you lean against the pillar, that this is where it all started!
By now, you probably know about the concept of elasticity. In layman's terms, it means that some substances get back to their original shape after being stretched. You have played with a slingshot, haven't you? That is an elastic material. Let us get into the concepts of elasticity and plasticity and learn more about these two properties of matter.
Elasticity and Plasticity
Elasticity is the property of a body to recover its original configuration (shape and size) when you remove the deforming forces. Plastic bodies do not show a tendency to recover their original configuration when you remove the deforming forces. Plasticity is the property of a body to lose its elasticity and acquire a permanent deformation on the removal of the deforming force.
Stress
The restoring force (F) per unit area (A) is called stress. The unit of stress in the S.I. system is N/m² and in the C.G.S. system is dyne/cm². The dimensional formula of stress is [M¹L⁻¹T⁻²]. Stress is given by:
Stress = F/A
Types of Stress
Stress could be of the following types:
- Normal stress: the restoring force acts at right angles to the surface.
- Compressional stress: this stress produces a decrease in the length or volume of the body.
- Tensile stress: this stress results in an increase in the length or volume of the body.
- Tangential stress: stress is said to be tangential if it acts in a direction parallel to the surface.
Strain
The strain is the relative change in configuration due to the application of deforming forces. It has no unit or dimensions. The strain could be of the following types:
- Longitudinal strain: the ratio of the change in length (l) to the original length (L). Longitudinal strain = l/L
- Lateral strain: the ratio of the change in diameter to the original diameter when a cylinder is subjected to a force along its axis. Lateral strain = change in diameter / original diameter
- Volumetric strain: the ratio of the change in volume (v) to the original volume (V). Volumetric strain = v/V
Hooke's Law
Hooke's law states that, within elastic limits, stress is proportional to strain; within elastic limits, tension is proportional to extension. So, Stress ∝ Strain, or F/A ∝ l/L. Therefore we have, for:
- Stretching: Stress = Y × strain, or Y = (F × L)/(A × l)
- Shear: Stress = η × strain, or η = (F × L)/(A × l), where l here is the lateral displacement
- Volume elasticity: Stress = B × strain, or B = −P/(v/V)
The Moduli of Elasticity
- The coefficient of elasticity is basically the ratio of stress to strain.
- Young's modulus of elasticity (Y) is the ratio of normal stress to longitudinal strain. For a wire of radius r and original length L, stretched by an extension l under a suspended mass M: Y = normal stress/longitudinal strain = (F/A)/(l/L) = (Mg × L)/(πr² × l)
- Bulk modulus of elasticity (B) is the ratio of normal stress to volumetric strain. B = normal stress/volumetric strain = (F/A)/(v/V) = PV/v
Other Important Terms
The compressibility of a material is the reciprocal of its bulk modulus of elasticity: Compressibility = 1/B.
Work Done in Stretching
- Work done, W = ½ × (stress) × (strain) × (volume) = ½ Y(strain)² × volume = ½ [(stress)²/Y] × volume
- Potential energy stored, U = W = ½ × (stress) × (strain) × (volume)
- Potential energy stored per unit volume, u = ½ × (stress) × (strain)
Work Done During Extension (Energy Density)
W = ½ F × l = ½ × tension × extension
An elastomer produces a large strain with a small stress.
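As a quick numerical illustration of the Young's modulus relation Y = (Mg × L)/(πr² × l) above, here is a short sketch in R; all the sample values (mass, wire length, radius, extension) are invented for illustration.

M <- 5        # suspended mass, kg
g <- 9.8      # acceleration due to gravity, m/s^2
L <- 2        # original length of the wire, m
r <- 0.5e-3   # radius of the wire, m
l <- 1.25e-3  # extension produced, m

A      <- pi * r^2         # cross-sectional area, m^2
stress <- (M * g) / A      # normal stress, N/m^2
strain <- l / L            # longitudinal strain, dimensionless
Y      <- stress / strain  # Young's modulus, about 1.0e11 N/m^2

U <- 0.5 * stress * strain * (A * L)  # energy stored: 1/2 x stress x strain x volume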
Elastic Fatigue
The phenomenon by virtue of which a substance exhibits a delay in recovering its original configuration after it has been subjected to a stress for a long time is called elastic fatigue.
Poisson's Ratio (σ)
Poisson's ratio of the material of a wire is the ratio of the lateral strain to the longitudinal strain. σ = lateral strain/longitudinal strain = β/α = (ΔD/D)/(ΔL/L). Values of σ lie between −1 and 0.5.
Relations Among Elastic Constants
- B = Y/[3(1 − 2σ)]
- η = Y/[2(1 + σ)]
- 9/Y = 3/η + 1/B
- σ = (3B − 2η)/(6B + 2η)
(A short numerical check of these relations follows after the solved example below.)
Solved Examples For You
Q: Which of the following is/are true about the deformation of a material?
- Deformation capacity of the plastic hinge and resilience of the connections are essential for good plastic behaviour.
- Deformation capacity equations consider the yield stress and the gradient of the moment.
- Different materials have different deformation capacities.
- All of the above.
Solution: D) In a well-designed steel frame structure, inelastic deformation under severe seismic loading is confined to beam plastic hinges located near the beam-to-column connections. Thus, the deformation capacity of the plastic hinge and the resilience of the connections are essential for good plastic behaviour, and behaviour at the hinge is strongly influenced by differences in material properties. Generally, material properties are specified in terms of yield stress and/or ultimate strength. However, the characteristics of materials are not defined by these properties alone. Thus, the characteristics of various materials aren't reflected in present building codes, particularly regarding deformation capacity classification.
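The relations among the elastic constants are easy to sanity-check numerically. A minimal sketch in R, with an invented Y and σ:

Y     <- 2.0e11  # Young's modulus, N/m^2 (sample value)
sigma <- 0.3     # Poisson's ratio (sample value)

B   <- Y / (3 * (1 - 2 * sigma))  # bulk modulus
eta <- Y / (2 * (1 + sigma))      # modulus of rigidity

c(9 / Y, 3 / eta + 1 / B)  # the two entries agree, confirming 9/Y = 3/eta + 1/B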
Simple Linear Regression | An Easy Introduction & Examples
Simple linear regression is used to estimate the relationship between two quantitative variables. You can use simple linear regression when you want to know:
- How strong the relationship is between two variables (e.g., the relationship between rainfall and soil erosion).
- The value of the dependent variable at a certain value of the independent variable (e.g., the amount of soil erosion at a certain level of rainfall).
Regression models describe the relationship between variables by fitting a line to the observed data. Linear regression models use a straight line, while logistic and nonlinear regression models use a curved line. Regression allows you to estimate how a dependent variable changes as the independent variable(s) change. If you have more than one independent variable, use multiple linear regression instead.
Assumptions of simple linear regression
Simple linear regression is a parametric test, meaning that it makes certain assumptions about the data. These assumptions are:
- Homogeneity of variance (homoscedasticity): the size of the error in our prediction doesn't change significantly across the values of the independent variable.
- Independence of observations: the observations in the dataset were collected using statistically valid sampling methods, and there are no hidden relationships among observations.
- Normality: the data follows a normal distribution.
Linear regression makes one additional assumption:
- The relationship between the independent and dependent variable is linear: the line of best fit through the data points is a straight line (rather than a curve or some sort of grouping factor).
If your data do not meet the assumptions of homoscedasticity or normality, you may be able to use a nonparametric test instead, such as the Spearman rank test. If your data violate the assumption of independence of observations (e.g., if observations are repeated over time), you may be able to perform a linear mixed-effects model that accounts for the additional structure in the data.
How to perform a simple linear regression
Simple linear regression formula
The formula for a simple linear regression is:
y = B0 + B1x + e
- y is the predicted value of the dependent variable (y) for any given value of the independent variable (x).
- B0 is the intercept, the predicted value of y when x is 0.
- B1 is the regression coefficient: how much we expect y to change as x increases.
- x is the independent variable (the variable we expect is influencing y).
- e is the error of the estimate, or how much variation there is in our estimate of the regression coefficient.
Linear regression finds the line of best fit through your data by searching for the regression coefficient (B1) that minimizes the total error (e) of the model. While you can perform a linear regression by hand, this is a tedious process, so most people use statistical programs to help them quickly analyze the data.
Simple linear regression in R
R is a free, powerful, and widely used statistical program. Download the dataset to try it yourself using our income and happiness example. Load the income.data dataset into your R environment, and then run a command like the one sketched below to generate a linear model describing the relationship between income and happiness.
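The call itself is a single line. A sketch, assuming income.data contains income and happiness columns as in the example (the model object name income.happiness.lm is chosen here for illustration):

income.happiness.lm <- lm(happiness ~ income, data = income.data)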
This code takes the data you have collected (data = income.data) and calculates the effect that the independent variable income has on the dependent variable happiness, using the linear model equation y = B0 + B1x + e. To learn more, follow our full step-by-step guide to linear regression in R.
Interpreting the results
To view the results of the model, you can use the summary() function in R. This function takes the most important parameters from the linear model and puts them into a table. The output table first repeats the formula that was used to generate the results ('Call'), then summarizes the model residuals ('Residuals'), which give an idea of how well the model fits the real data.
Next is the 'Coefficients' table. The first row gives the estimate of the y-intercept, and the second row gives the regression coefficient of the model.
Row 1 of the table is labeled (Intercept). This is the y-intercept of the regression equation, with a value of 0.20. You can plug this into your regression equation if you want to predict happiness values across the range of income that you have observed:
happiness = 0.20 + 0.71 × income
The next row in the 'Coefficients' table is income. This is the row that describes the estimated effect of income on reported happiness:
The Estimate column is the estimated effect, also called the regression coefficient. The number in the table (0.713) tells us that for every one-unit increase in income (where one unit of income = 10,000) there is a corresponding 0.71-unit increase in reported happiness (where happiness is a scale of 1 to 10).
The Std. Error column displays the standard error of the estimate. This number shows how much variation there is in our estimate of the relationship between income and happiness.
The t value column displays the test statistic. Unless you specify otherwise, the test statistic used in linear regression is the t value from a two-sided t test. The larger the test statistic, the less likely it is that our results occurred by chance.
The last three lines of the model summary are statistics about the model as a whole. The most important thing to notice here is the p value of the model. Here it is significant (p < 0.001), which means that this model is a good fit for the observed data.
Presenting the results
When reporting your results, include the estimated effect (i.e. the regression coefficient), the standard error of the estimate, and the p value. You should also interpret your numbers to make it clear to your readers what your regression coefficient means. It can also be helpful to include a graph with your results. For a simple linear regression, you can simply plot the observations on the x and y axes and then include the regression line and regression function.
Can you predict values outside the range of your data?
No! We often say that regression models can be used to predict the value of the dependent variable at certain values of the independent variable. However, this is only true for the range of values where we have actually measured the response. We can use our income and happiness regression analysis as an example. Between 15,000 and 75,000, we found a regression coefficient of 0.73 ± 0.0193. But what if we did a second survey of people making between 75,000 and 150,000?
The regression coefficient for the relationship between income and happiness is now 0.21: a 0.21-unit increase in reported happiness for every 10,000 increase in income. While the relationship is still statistically significant (p < 0.001), the slope is much smaller than before. What if we hadn't measured this group, and instead extrapolated the line from the 15-75k incomes to the 75-150k incomes? You can see that if we simply extrapolated from the 15-75k income data, we would overestimate the happiness of people in the 75-150k income range. If we instead fit a curve to the data, it seems to fit the actual pattern much better. It looks as though happiness actually levels off at higher incomes, so we can't use the same regression line we calculated from our lower-income data to predict happiness at higher levels of income.
Even when you see a strong pattern in your data, you can't know for certain whether that pattern continues beyond the range of values you have actually measured. Therefore, it's important to avoid extrapolating beyond what the data actually tell you.
Frequently asked questions about simple linear regression
- What is a regression model?
A regression model is a statistical model that estimates the relationship between one dependent variable and one or more independent variables using a line (or a plane in the case of two or more independent variables). A regression model can be used when the dependent variable is quantitative, except in the case of logistic regression, where the dependent variable is binary.
- What is simple linear regression?
Simple linear regression is a regression model that estimates the relationship between one independent variable and one dependent variable using a straight line. Both variables should be quantitative. For example, the relationship between temperature and the expansion of mercury in a thermometer can be modeled using a straight line: as temperature increases, the mercury expands. This linear relationship is so certain that we can use mercury thermometers to measure temperature.
- How is the error calculated in a linear regression model?
Linear regression most often uses mean-square error (MSE) to calculate the error of the model. MSE is calculated by:
- measuring the distance of the observed y-values from the predicted y-values at each value of x;
- squaring each of these distances;
- calculating the mean of the squared distances.
Linear regression fits a line to the data by finding the regression coefficient that results in the smallest MSE (a short numerical sketch follows).
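A minimal sketch in R of that MSE computation, with made-up observed and predicted values:

observed  <- c(2.5, 3.1, 4.0, 4.8, 5.9)  # observed y-values
predicted <- c(2.7, 3.0, 3.9, 5.0, 5.6)  # predicted y-values at the same x-values

mse <- mean((observed - predicted)^2)  # square each distance, then take the mean
mse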
Cause: The British government needed to raise money to support the army, so it created the Stamp Act of 1765. This act required colonists to pay for an official stamp, or seal, when they purchased paper items. Effect: The colonists protested against the Stamp Act immediately. The Stamp Act was passed on March 22, 1765, resulting in an uproar in the colonies over an issue that was to become a major cause of the Revolution: taxation without representation. Taking effect in November 1765, the controversial act forced colonists to buy a British stamp for every official document they obtained. Secondly, how did the Stamp Act impact people's lives? It imposed a wide-reaching tax in the American colonies by requiring the colonists to pay a tax on every piece of printed paper they used. Therefore, this tax affected almost every colonist living in British America. Keeping this in mind, what were the effects of the Stamp Act? The Stamp Act was passed by the British Parliament on March 22, 1765. The new tax was imposed on all American colonists and required them to pay a tax on every piece of printed paper they used. Ship's papers, legal documents, licenses, newspapers, other publications, and even playing cards were taxed. How did the Stamp Act end? After months of protest, and an appeal by Benjamin Franklin before the British House of Commons, Parliament voted to repeal the Stamp Act on March 18, 1766. However, the same day, Parliament passed the Declaratory Act, declaring that the British government had free and total legislative power over the colonies. How did colonists respond to the Stamp Act? It required the colonists to pay a tax, represented by a stamp, on a number of papers, documents, and playing cards. Adverse colonial reaction to the Stamp Act ranged from boycotts of British goods to riots and attacks on the tax collectors. How did the Stamp Act cause the American Revolution? The Stamp Act was a direct tax on the colonists and led to an uproar in America over an issue that was to become a major trigger of the Revolution: taxation without representation. The colonists greeted the arrival of the stamps with violence and economic retaliation. Why was the Stamp Act unfair? In 1765, Britain passed the Stamp Act. This act taxed anything printed on paper. Many colonists argued the new taxes were unfair. Colonists had no say in making tax laws because they did not have representatives in Parliament. Why did Britain impose the Stamp Act? Britain imposed taxes on the colonists because the money would be used to help pay the cost of defending the colonies. The Stamp Act placed a tax on printed materials such as legal documents, newspapers, and playing cards in the colonies. What was the main purpose of the Stamp Act? On March 22, 1765, the British Parliament passed the "Stamp Act" to help pay for British troops stationed in the colonies during the Seven Years' War. The act required the colonists to pay a tax, represented by a stamp, on numerous kinds of papers, documents, and playing cards. Where was the Stamp Act protested? The Stamp Act Congress was held in New York in October 1765. Twenty-seven delegates from nine colonies were the members of the Congress, and their responsibility was to draft a set of formal petitions stating why Parliament had no right to tax them.
Among the delegates were many important men in the colonies. Why did the Stamp Act so anger the colonists? All of the colonists were angry because they thought the British Parliament should not have the right to tax them. The colonists believed that the only bodies that should tax them were their own legislatures. They wanted Parliament to take back the law requiring them to pay taxes on stamps. Was the Stamp Act justified? The Stamp Act of 1765 was a tax to help the British pay for the French and Indian War. The British felt they were well justified in charging this tax because the colonies were receiving the benefit of the British troops and ought to help pay for the expense. The colonists did not feel the same. How did the Stamp Act change history? The Stamp Act of 1765 was the first internal tax levied directly on American colonists by the British Parliament. The issues of taxation and representation raised by the Stamp Act strained relations with the colonies to the point that, 10 years later, the colonists rose in armed rebellion against the British. What did the colonists do to rebel against Britain? The King and Parliament believed they had the right to tax the colonies. Many colonists felt that they should not pay these taxes, because they were passed in England by Parliament, not by their own colonial governments. They protested, asserting that these taxes violated their rights as British citizens. How did the British respond to the colonists boycotting the Stamp Act? The colonists were unhappy with the passage of the Townshend Acts. This was a further instance of a tax the colonists felt was unfair. Because of this law, the colonists agreed to boycott British goods and to make their own products. The British merchants were concerned about the colonists making their own products. Was the Stamp Act Congress successful? The Stamp Act was eventually repealed largely in response to economic concerns expressed by British merchants. However, Parliament, in order to reassert its power and its constitutional right to tax its colonies, passed the Declaratory Act. Why did the colonists object to the new taxes in 1764 and again in 1765? What arguments did they use? The political allies of British merchants who traded with the colonies raised constitutional objections to new taxes created by Parliament. Also, colonists claimed that the Sugar Act would wipe out commerce with the French islands. How much was a stamp in the Stamp Act? The 2-shilling 6-pence stamp is the most common of all the Stamp Act revenues. There are approximately forty to fifty stamps recorded. However, all but eleven or twelve are off document. Many of the off-document examples are unused stamps on colored paper stapled to vellum.
A parameter tells us about an entire population – for example, the population mean is a parameter. However, we cannot always poll the entire population, which is where a statistic comes in. So, what is the difference between a parameter and a statistic? A parameter is a number that describes an entire population (it is calculated by taking every member of the population into account). A statistic is a number that is calculated from a sample (a subset of the population). A statistic estimates a parameter when we cannot poll an entire population. Of course, when estimating a population parameter from a sample, a larger sample is better (as long as we are using a representative sample). Taking a sample of only a few data points from the population will not tell us much. In this article, we'll talk about the difference between a statistic and a parameter. We'll also look at some examples to see how the two differ. Let's get started.

What Is The Difference Between A Parameter & A Statistic?
- a statistic is a number that is based on a sample (subset) of the population, while a parameter is a number that takes into account every member of the population.
- a parameter cannot be "wrong", since it takes all data points into account, while a statistic can vary from sample to sample and may differ from the true value of the parameter (a statistic is often used as an estimate for the population parameter).
- when calculating a statistic, the sample size n will affect our error in estimating the parameter, but when calculating a parameter, there is no error.

The table below summarizes the difference between a statistic and a parameter.

| Statistic | Parameter |
| based on a subset of the population | based on the entire population |
| gives you an estimate | gives you the exact value |
| changes with the sample | does not change |
| error in the estimate, based on the sample size | there is no error |

Example: Statistic (Sample Mean) & Parameter (Population Mean)
Let's say that we have a city with a population of 1 million people. If we want to find the average (mean) age parameter of people in the city, we would need to:
- ask all 1 million people their age (and hopefully get honest answers!)
- sum up all 1 million ages
- divide by 1 million (the population size)
This would give us an exact value, but it would be impractical to poll all 1 million people in the city (the cost in time and money would be prohibitive). However, it would be much more reasonable to take a sample of 1,000 citizens, ask their ages, and find the average to get a statistic to estimate the mean age. In this case, we would have to:
- choose a sample of 1,000 people (hopefully a representative sample that is not biased towards young or old)
- ask our entire sample of 1,000 people their age (and hopefully get honest answers!)
- sum up all 1,000 ages
- divide by 1,000 (the sample size)
Once we have the average age of the sample (our statistic), we would use this statistic to estimate the parameter. It could be off by a bit, but the large sample size means that we won't be too far off (as long as the sample was not biased), as the sketch below illustrates.
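Here is a minimal R sketch of the example above, with a simulated city of 1 million ages standing in for the real population; the 0-to-90 age range is an arbitrary assumption:

```r
# Simulated population of 1 million ages; the population mean is a parameter.
set.seed(1)
city_ages <- sample(0:90, size = 1e6, replace = TRUE)
population_mean <- mean(city_ages)     # the parameter (exact)

# A random sample of 1,000 citizens; the sample mean is a statistic.
survey <- sample(city_ages, size = 1000)
sample_mean <- mean(survey)            # the statistic (an estimate)

population_mean
sample_mean                            # close to the parameter, but not identical
```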
What Is Parameter Estimation?
Parameter estimation is when we use a statistic (calculated from a sample of the population) to get an estimate for a parameter (which would normally be calculated using data from the entire population). Remember: it is not always practical to poll an entire population. It can also be difficult to get accurate data for a large population, which presents a challenge when trying to find a parameter.

A solution is to take a sample of the population that is large enough to make inferences about the entire population. Then, we can use the statistic that we calculated from the sample to make an estimate of the parameter. To avoid bias, we should take a representative sample of the population. For example, if the population of interest is "all U.S. citizens", then we should not use a sample of women aged 35 to 45 from the state of California. Instead, we should try to find people of all ages from every state for our sample. The larger the sample size "n", the better, since the sample statistic will approach the value of the population parameter as n gets larger.

Is A Statistic Used To Estimate A Parameter?
A statistic is used to estimate a parameter when we do not have access to data from the entire population. As mentioned earlier, this lack of data on the entire population could be due to several reasons, including:
- Impracticality – it may not be practical to poll the entire population if there are millions of data points to collect. The time and money required to get data on every population member would be prohibitive.
- Refusal to participate – in some cases, people will refuse to answer a poll, even if you are able to reach them. They may value their privacy, or they may not have time to answer the poll.

Is A Statistic A Random Variable?
A statistic is a random variable that is a function of the data from a sample of the population. Note that the sample itself is also random, since it can vary depending on how we take our sample, when we take our sample, where we take our sample, and so on. As we take a larger sample from the population, the variability of the parameter estimate decreases (since we are taking more of the population's data into account by using a larger sample). To use two extreme examples:
- if we take a sample size of just one person from a population of millions to estimate the mean height, there can be a huge variation in the calculated sample mean (perhaps from 4 feet to 8 feet).
- if we take a sample size of all but one person in the population (N – 1), then the calculated sample mean will be very close to the true population mean (since one person out of millions will not have a large impact on the mean).

Does A Parameter Ever Change?
A parameter can only change if the population changes in some way. For example, let's say that our parameter is the average age of all the people in a small town. If the oldest person in the town dies, then the population size decreases by 1, and the average age will decrease slightly. Similarly, if a new baby is born in town, then the population size increases by 1, and the average age will decrease slightly. Of course, if nobody is born or dies in the town over the course of the next year, and nobody moves in or out of the town over the next year, then the average age will increase by 1 (since everyone is now 1 year older).

The key thing to remember is that changing the sample size or sampling method does not change the value of the parameter. Whether I sample 10, 100, or 1,000 people, the average height of the people in a city of 1 million does not change. Here is another way to think about it: no matter how I choose to estimate the number of marbles in a jar (by the volume of the jar divided by the volume of a marble, by the number of layers times the marbles per layer, etc.), the actual number of marbles in the jar does not change. In other words, my guess about the true number does not change the true number.
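Continuing the simulated-city sketch from earlier, this snippet shows the point numerically: the spread of the sample-mean statistic shrinks as the sample size n grows, while the underlying parameter never changes.

```r
# Spread of the sample mean across many repeated samples, for several
# sample sizes (uses city_ages from the earlier sketch).
for (n in c(10, 100, 1000)) {
  means <- replicate(2000, mean(sample(city_ages, size = n)))
  cat("n =", n, " sd of sample means =", round(sd(means), 3), "\n")
}
```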
When we estimate a parameter with a statistic calculated from a sample, a key idea is the sampling distribution. The sampling distribution is the probability distribution of a statistic calculated from a random sample of the population. If you were to take a large number of different samples from a population and calculate a statistic for each sample, the result would give you your sampling distribution. For example, if you calculated the mean height for 5 samples of 100 people from a city of 1 million, you might get sample means of 69.5 inches, 70.3 inches, 70.4 inches, 70.9 inches, and 71 inches. Given enough of these sample statistics, you could graph a sampling distribution, and the graph would look like a bell curve, or normal distribution curve.

Why Is The Sampling Distribution Approximately Normal?
The sampling distribution is approximately normal because the sample size is "large enough". With a larger sample size, the sampling distribution gets closer to a normal distribution. This is a consequence of the Central Limit Theorem, which states that the sum of independent random variables tends towards a normal distribution – even when the individual random variables are not normal. One way to see this is with dice rolling and sums.

Example: Central Limit Theorem & Normal Distribution For Dice Rolling Sums
When we add up the faces on multiple dice and divide by the number of dice, the result approaches a normal distribution – with the same mean as for rolling one die, but with much less variation (a smaller standard deviation). For just one fair six-sided die, we get a uniform distribution: there is a 1/6 chance of rolling each of the numbers 1 through 6. For the average of two fair six-sided dice, the distribution is no longer uniform, and its graph already starts to look more like a normal distribution. For the average of three fair six-sided dice, the graph looks even more like a normal distribution. As we increase the number of dice, the distribution approaches a normal distribution. Its mean is still 3.5 (the same as for a single die), but the standard deviation becomes smaller. This is because when we roll lots of dice, it is more likely that some small die rolls (1 and 2) will be offset by some large die rolls (5 and 6); a short simulation sketch at the end of this article makes this concrete.

Now you know the difference between a statistic and a parameter. You also know when to use each one and how to calculate them. I hope you found this article helpful. If so, please share it with someone who can use the information.
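Here is a small R simulation of the dice example above (fair six-sided dice are assumed); it estimates the mean and standard deviation of the average of k dice for several values of k:

```r
# Average of k fair six-sided dice: the mean stays near 3.5 while the
# standard deviation shrinks, and the histogram approaches a bell curve.
set.seed(7)
for (k in c(1, 2, 3, 10)) {
  averages <- replicate(10000, mean(sample(1:6, size = k, replace = TRUE)))
  cat("k =", k, " mean =", round(mean(averages), 2),
      " sd =", round(sd(averages), 2), "\n")
}
# hist(averages) would show the bell shape for the larger values of k
```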
Binary Search Trees

A binary tree is a tree where each node can have only a left successor and a right successor; equivalently, and recursively, a binary tree is either empty or a root with a left subtree and a right subtree (both binary trees). To implement a binary tree, each node usually has two pointers to its successors, though it may also contain a pointer to its predecessor. To implement a (general) tree, it is possible to use an array or linked list to point to a node's successors, or to convert the tree into a corresponding binary tree using the "first child : left child, next sibling : right child" mapping. In the discussion of heaps, we saw that a complete binary tree can be efficiently stored in an array, though for binary trees in general too much memory would be wasted with that approach.

A systematic visit of the nodes of a tree is called a "walk" or "traversal". For a binary tree, there are three common walk orders, all defined recursively: preorder (visit the root, then walk the left subtree, then the right subtree), inorder (walk the left subtree, visit the root, then walk the right subtree), and postorder (walk the left subtree, then the right subtree, then visit the root). Each order is obtained from the others by changing the position of the recursive calls relative to the visit of the root. For general trees, only preorder and postorder are defined. These orders can also be followed manually by tracing the outline drawn around the tree.

A binary search tree (BST) is a binary tree in which, for every node x, the keys in x's left subtree are no larger than x.key and the keys in x's right subtree are no smaller than x.key. Given this definition, an inorder walk of a binary search tree lists all of its nodes in order. In this sense, a BST is "sorted" horizontally. In comparison, a heap is sorted vertically, so it only maintains order along a path, which is a partial order among all the nodes in the tree.

For such a tree, search is similar to binary search in a sorted array. The algorithm can be either recursive or iterative. The path the algorithm follows runs from the root to where a node is, or should be, in the tree; therefore the running time is proportional to the length of the path, and the worst-case running time is proportional to the height of the tree. We can treat the search for the minimum and maximum keys as special cases of the search operation. In these cases, the comparisons along the path become unnecessary, and the algorithm simply goes to the end of one direction: left for the minimum and right for the maximum.

Given a node x in a binary search tree, its (inorder) successor is the node with the smallest key greater than x.key, so in an inorder tree walk this node immediately follows x. The standard algorithm requires a pointer to the parent in each node: if x has a right subtree, then its successor is the minimum node in that subtree; otherwise its successor is the closest ancestor whose left subtree contains x. The Tree-Predecessor algorithm is symmetric to this one. Repeatedly calling Tree-Successor gives a non-recursive inorder tree walk algorithm. All the search algorithms on a BST run in O(h) time, where h is the height of the tree.

Insertion of a node z into a BST T (assuming z is not already in T) traces a path from the root to the insertion point: a pointer x follows the path while a pointer y tracks the parent of x, and z is finally attached as a child of y. The deletion algorithm is more complicated, because after a non-leaf node is deleted, the "hole" in the structure must be filled by another node. There are three cases: z has no left child (replace z with its right child), z has a left child but no right child (replace z with its left child), or z has two children (replace z with its successor y, which lies in z's right subtree and has no left child). This is realized with the help of a subroutine TRANSPLANT, which replaces the subtree rooted at a node u with the subtree rooted at a node v. Both insertion and deletion run in O(h) time.
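As a concrete illustration, here is a minimal R sketch of a binary search tree using nested lists for nodes. It covers only insertion, search, and the inorder walk; parent pointers, Tree-Successor, and the deletion cases are omitted for brevity, and the function names are illustrative rather than the text's pseudocode.

```r
# A minimal BST sketch: each node is a list(key, left, right).
bst_insert <- function(node, key) {
  if (is.null(node)) return(list(key = key, left = NULL, right = NULL))
  if (key < node$key) {
    node$left <- bst_insert(node$left, key)     # descend left
  } else {
    node$right <- bst_insert(node$right, key)   # descend right
  }
  node
}

bst_search <- function(node, key) {
  if (is.null(node)) return(FALSE)              # fell off the search path
  if (key == node$key) return(TRUE)
  if (key < node$key) return(bst_search(node$left, key))
  bst_search(node$right, key)
}

bst_inorder <- function(node) {                 # left subtree, root, right subtree
  if (is.null(node)) return(numeric(0))
  c(bst_inorder(node$left), node$key, bst_inorder(node$right))
}

root <- NULL
for (k in c(5, 2, 8, 1, 9, 3)) root <- bst_insert(root, k)
bst_inorder(root)       # 1 2 3 5 8 9
bst_search(root, 8)     # TRUE
```

The inorder walk returning the keys in sorted order is exactly the "sorted horizontally" property described above.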
Since all major operations on a BST run in O(h) time, the height of the binary search tree determines the worst-case run time. For a binary tree with n nodes, the shortest tree (a complete binary tree) has height h = Θ(lg n), and the tallest tree (a linear list) has height h = Θ(n). A randomly built BST has an expected height of h = Θ(lg n).
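To see the two extremes numerically, here is a small height function for the list-based sketch above, comparing sorted insertion (which produces a linear list) with random insertion (whose expected height is Θ(lg n)); the choice of 200 keys is arbitrary.

```r
# Height of the list-based BST from the sketch above (empty tree = 0).
bst_height <- function(node) {
  if (is.null(node)) return(0)
  1 + max(bst_height(node$left), bst_height(node$right))
}

build <- function(keys) {
  root <- NULL
  for (k in keys) root <- bst_insert(root, k)
  root
}

n <- 200
bst_height(build(1:n))          # sorted insertion: height n, a linear list
bst_height(build(sample(1:n)))  # random insertion: typically a small multiple of lg(n)
```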
Perpendicular lines are those that form a right angle at the point at which they intersect. Parallel lines, though in the same plane, never intersect. Perpendicular lines are lines that intersect one another at a 90-degree angle. If two lines are perpendicular, then multiplying the slopes of the two lines together equals -1. One common example of perpendicular lines in real life is the point where two city roads intersect. When one road crosses another, the two streets join at right angles to each other and form a cross-type pattern. Perpendicular lines form 90-degree angles, or right angles, to each other on a two-dimensional plane. In Euclidean geometry, two perpendicular lines intersect at a single point called the intersection. If the two lines are y = ax + b and y = cx + d, then their intersection has x-coordinate (d - b)/(a - c) and y-coordinate a(d - b)/(a - c) + b. Perpendicular parking is done at a 90-degree angle to the curb. Perpendicular spaces make maneuvering the vehicle more difficult than angle parking, but the procedure requires fewer steps than parallel parking. A triangle can have two perpendicular sides. If two sides are perpendicular, the angle they form is a right angle. A triangle can have only one right angle. The phrase "perpendicular lines intersect to form right angles" can be turned into an if-then statement by saying: if lines are perpendicular, then they intersect to form right angles. It can also be phrased as: if two lines form right angles, then the lines are perpendicular. A line that is perpendicular to the x-axis has an undefined slope. All of the points on such a line have the same x-coordinate. If the value of x never changes, then the formula for slope, (y2 - y1)/(x2 - x1), has a denominator of zero, which is mathematically undefined. To add perpendicular vectors, the angle of each vector to the horizontal must be determined. Using this angle, the vectors can be split into their horizontal and vertical components using the trigonometric functions sine and cosine. The horizontal components of the vectors are then solved separately from the vertical components. A perpendicular bisector is a line that cuts across the midpoint of a given line segment at a 90-degree angle. It divides the line segment into two equal parts. A common method for drawing perpendicular bisectors uses a compass and a straight edge or ruler.
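As a quick numerical check of the slope product and intersection formulas above, here is a small R sketch with two hypothetical perpendicular lines, y = 2x + 1 and y = -0.5x + 4:

```r
# Two hypothetical perpendicular lines: y = a*x + b and y = c*x + d.
# The second slope is named c2 to avoid masking R's c() function.
a  <- 2;    b <- 1
c2 <- -0.5; d <- 4

a * c2                      # slopes of perpendicular lines multiply to -1

x_int <- (d - b) / (a - c2) # intersection formula from the text
y_int <- a * x_int + b
c(x_int, y_int)             # the single point where the lines cross
```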
Elementary School Curriculum

The goal of the Language Arts Program in Grades 1-5 is to continue to enrich young minds and to promote learning across the curriculum. Strong reading skills are developed through various sequential and structured reading programs and exposure to culturally diverse, award-winning literature. The use of a balanced reading program includes the development of strong decoding skills, vocabulary, comprehension strategies, and the attainment of fluency. Students are taught to be critical, purposeful, and careful readers, thinkers, and writers. Written work also stresses clear, neat handwriting, increasingly accurate spelling, and the use of appropriate rules of grammar and punctuation. Students in Grades 1-5 are beginning a rich tradition of oral communication. Through weekly "community time" and school performances, students actively participate in public speaking and presentations. This intense focus on language arts is supported by a large amount of uninterrupted time where students practice their reading, writing and oral language skills. In addition, each classroom has "reading rotations" which allow students to work on their skills independently, in small groups, or with the classroom teacher at developmentally appropriate stages, giving each student the opportunity to learn at his or her level and to progress under a watchful eye. This style of learning not only encourages independent study but also self-expression. Mathematics at DAIS utilizes a "spiraling" curriculum that takes students on a journey from the concrete to the abstract. Students learn mathematics through interactions in meaningful events. Using manipulatives, traditional computation, charts, graphs, geometry, games and units of measure, students build a solid mathematical foundation and methodically extend their knowledge year after year, constantly solidifying their skills. The program is both rigorous and balanced. Emphasis is placed on conceptual understanding while building mastery of basic skills and exploration of the full mathematics spectrum. Students learn through all major mathematical content domains—number sense, algebra, measurement, geometry, data analysis, and probability. Students use everyday, real-world problems and situations in the development of higher-order and critical-thinking skills. These skills and habits provide students with the keys to unlock sophisticated concepts. From their earliest days at DAIS, students develop familiarity and agility with numbers, learning how to count and beginning to work with money and computation, length, capacity, volume, weight and geometry. They are exposed to the earliest forms of mathematical concepts in their daily work with numbers, shapes and patterns. In Grades 1-5, students begin working with up to six-digit numbers, odd and even numbers, and continue their work with addition, subtraction, multiplication and division. They collect and evaluate data, work with money and currency calculations, discover probability and chance, and master fractions. Problem solving throughout all content areas is at the core of the Elementary School mathematics program. Completion of the Elementary School curriculum allows for successful integration to the study of mathematics in Middle School. The primary goal of the science program is to encourage students' natural curiosity and sense of wonder.
The best way for students to appreciate the scientific enterprise, learn important scientific concepts, and develop the ability to think well is to actively construct ideas through their own inquiries, investigations, and analyses. The curriculum is designed to teach students the scientific method of investigating phenomena. They begin the process by posing questions and then working either in small groups, with a partner or individually to perform experiments that test their hypotheses. Students analyze their results, summarize their findings in group discussions, and keep written records. The scientific expedition in the Elementary School takes students from pebbles, sand and silt to balance and motion, and from magnetism and electricity to solar energy. Health and the workings of the human body are also important components of the science program. With each passing year, the curriculum becomes more advanced. The lessons of the Elementary School science program provide a solid foundation for future study of biology, chemistry, physics and earth science. Students leave the Elementary School curious and prepared for more complex study in Middle and High School. The social studies curriculum at DAIS uses a theme-based approach, exploring a variety of topics. Building and evolving at each grade level, the study progresses so that, through the grades, students expand their awareness of the world in which they live. When students understand the smallest of worlds around them and how best to negotiate them, they can continually expand those worlds and their study of them. The Elementary School social studies program introduces important concepts and generalizations from history, geography, and other social sciences through an integrated study of children and their families, homes, schools, neighborhoods, and communities. In the early years, students develop a foundation for the entire social studies program and a beginning sense of value as participating citizens. Students begin with their familiar environment and advance to families, homes, schools, neighborhoods and communities in other environments. The approach enhances students' abilities to examine the perspectives of children in other places and times. Students learn to work in groups, to share, to respect the rights of others, and to care for themselves and their possessions. They acquire knowledge of history to understand the present and plan for the future. Social studies at this level provides students with the skills needed for problem solving and decision making, as well as for making thoughtful value judgments. DAIS believes that visual arts education develops critical, creative and reflective thinking and problem solving. The visual arts provide a source of pleasure and enjoyment, and allow students to gain a deeper awareness of themselves, their place in history, culture and the world. In visual arts, the purpose of education is to enable the learner to become visually literate and expressive at a level consistent with their intellectual, emotional and physical development. To reach their potential for visual expression, students apply reading, writing and verbal skills, use mathematics as a tool for understanding time, space and quantity, integrate learning from other subject areas, and explore a discrete body of knowledge about the art discipline that entails facts, concepts, and skills. Music education enables all learners to explore, create, perceive, and communicate thoughts, images, and feelings through music.
Shared experiences in music also significantly contribute to the development of a healthier society through activities that respect and reflect the diversity of human experiences. Throughout the music curriculum, objectives progress from one grade level to the next, with growing sophistication. Students are provided the opportunity to develop literacy in music, including familiarity with the conventions of written music; explore, create and interpret self and global awareness through the study of music and the music traditions of various cultures; and develop discipline and confidence through experiences that require focused and sustained practice. The mission of the computer education program is to integrate technology into the education process by creating an environment that allows all students to have optimum personal and educational growth through the infusion of appropriate technology into their daily school experiences. All students have opportunities to develop technology skills that support learning, personal productivity, and decision-making. Students are presented with a range of learning activities that allows for individual and group work. There is also extended project work that allows students to locate, collect, and evaluate information from a variety of sources. In becoming technologically proficient, students develop skills over time, through all content areas, through integrated activities. Host Country Language The host country language studies program emphasizes an understanding and appreciation of the Mandarin language and Chinese culture. This is achieved by taking advantage of the school's local environment and China's geographical and cultural history. Throughout the program, listening, speaking, reading and writing are stressed. Students have a variety of opportunities to actively experience the Mandarin language. The vocabulary and phrasing chosen for instruction are planned to ease students' entry into the local culture and everyday life. The curriculum is structured into nine themes: Introductions, Family, School, Shopping, Food, Community, Travel and Getting Around, Leisure, and Chinese Holidays. The physical education program aims to foster wellness, lifelong fitness, social skills, fair play and fun. In an activity-based, sequential program, students work to maximize their physical fitness while individually developing personal and social competence. Good nutrition, self-esteem, decision-making, problem-solving and proper attitudes and practices all play an integral part in the physical education program. Students work to master these skills individually and in groups where sportsmanship and cooperative play are stressed. Character development is an ongoing process throughout the academic program and the social and community life of the school. One of the primary intentions of the program is to infuse students with a concrete ethical foundation, an interactive understanding of community, and a self-motivated sense of discipline, respect and responsibility. The school's Honor Code is a set of core ethical values by which students are to abide. The code is ever present, and students learn to uphold the values set forth in it within their daily interactions with each other, teachers, parents, and the entire school community.